• @Churbleyimyam@lemm.ee · 371 points · 8 months ago

    I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.

    Some people are gonna lose a lot of other people’s money over it.

    • @themurphy@lemmy.ml · 123 points · 8 months ago

      Definitely. Many companies have implemented AI without thinking with 3 brain cells.

      Great and useful implementations of AI exist, but it’s like 1/100 of products right now.

      • @floofloof@lemmy.ca · 58 points · 8 months ago

        If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.

        At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible since it’s giving people advice on very sensitive matters without any guarantee that advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.

      • @PerogiBoi@lemmy.ca · 44 points · 8 months ago

        My old company, before they laid me off, laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.

        “We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.

        A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha

        • verity_kindle · 10 points · 8 months ago

          I’m sorry. Hope you find a better job on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, i.e., even wait in purposely engineered hold-music hell, to have a real person listen to them.

        • @themurphy@lemmy.ml · 21 points · 8 months ago

          That’s an even worse ‘use case’ than I could imagine.

          HR should be one of the most protected fields against AI, because you actually need a human resource.

          And “prompt engineer” is so stupid. The “job” is only necessary because the AI doesn’t understand what you want well enough. The only productive person you could hire would be a programmer or someone who could actually tinker with the AI.

    • @spiderman@ani.social · 20 points · 8 months ago

      Yeah, AI can make some products better, but most of the products that use it these days don’t actually need it. It’s annoying to use products that actively shove AI in when they don’t even need it.

      • @Lost_My_Mind@lemmy.world · 11 points · 8 months ago

        Ya know what product MIGHT be better with AI?

        Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you’re not going to buy another toaster, because that too will be crap.

        How about a toaster, that accurately, and evenly toasts your bread, and then DOESN’T give you a heart attack at 5am when you’re still half asleep???

        IS THAT TOO MUCH TO ASK???

    • @SlopppyEngineer@lemmy.world · 85 points · 8 months ago

      Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in hoping to be the new Amazon, while in the end most will end up like pets.com. It’s a risk they’re willing to take.

      • @slaacaa@lemmy.world · 59 points · 8 months ago

        “You might lose all your money, but that is a risk I’m willing to take”

        • visionary AI techbro talking to investors
        • @SlopppyEngineer@lemmy.world · 13 points · 8 months ago

          Investors pump money into a bunch of companies so that the chance of at least one of them making it big and paying them back for all the failed investments is almost guaranteed. That’s what taking risks is all about.
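
A back-of-envelope version of that portfolio logic, as a quick sketch (the 5% success rate is an illustrative assumption, not real market data):

```python
# With many independent bets, the chance that at least one pays off
# approaches certainty even when each individual bet usually fails.
def prob_at_least_one_hit(n_bets: int, p_success: float) -> float:
    return 1 - (1 - p_success) ** n_bets

for n in (1, 10, 50):
    print(n, round(prob_at_least_one_hit(n, 0.05), 3))
# 1 bet: 0.05, 10 bets: ~0.40, 50 bets: ~0.92
```

Of course, "at least one hit" only pays off the portfolio if the hit returns more than everything lost on the misses.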

          • verity_kindle · 9 points · 8 months ago

            Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, and ignoring the fundamentals of investing, e.g.: besides the ever-evolving claims of the CEO, is the company well managed? What is their cash flow and where is it going a year from now? Do the upper-level managers have coke habits?

            • @slaacaa@lemmy.world · 15 points · 8 months ago

              You’re right, but these fundamentals don’t really matter anymore; investors are buying hype and hoping to sell a bigger hype for more money later.

              • @Aceticon@lemmy.world · 10 points · 8 months ago

                Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.

                Certainly it neatly explains a lot.

                • @rottingleaf@lemmy.world · 16 points · 8 months ago

                  Also called a Ponzi scheme, where every participant knows it’s a scam but hopes to find a few more fools before it crashes, and to leave with a positive balance.

          • @Churbleyimyam@lemm.ee · 6 points · 8 months ago

            If the whole sector turns out to be garbage it won’t matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.

      • @barsoap@lemm.ee · 5 points · 8 months ago

        OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.

    • peto (he/him) · 36 points · 8 months ago

      A lot of it is follow-the-leader type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn’t something new or exceptional. It is just the tool you use for solving certain problems.

      Investors going to bubble though.

    • @SLVRDRGN@lemmy.world · 14 points · 8 months ago

      I tried to find the advert, but I see this on YouTube a lot: an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner’s new product (cookies or ice cream or something), showing the owner putting no effort into their own product and a customer happily consuming because they were attracted by the thoughtless promo.

      How are producers/consumers okay with everything being so mediocre??

      • @MajorHavoc@programming.dev · 2 points · 5 months ago

        How are producers/consumers okay with everything being so mediocre??

        “You’re always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It’s so human!” - The Good Place

      • @Churbleyimyam@lemm.ee · 6 points · 8 months ago

        How are producers/consumers okay with everything being so mediocre??

        I’m not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. It’s a systemic problem that I can do almost nothing about, apart from making things myself out of raw materials.

    • Riskable · 2 points · 8 months ago

      My doorbell camera manufacturer now advertises their products as using “local AI,” meaning they’re not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.

      • Riskable · 11 points · 8 months ago

        The market shows that hype is a cycle, and the AI hype is nearing its end.

          • Riskable · 3 points · 7 months ago

            When one of two things happens:

            • A new hype starts to replace it (can happen fast though!)
            • The hype starts to specialize into subcategories of the hype (e.g. AI images, AI videos, AI text generation)

            When “AI” hype dies down we are likely to see “AI” removed from various topics because enough people know and understand the hyped parent topic. It’ll just be “image generation”, “video generation”, “generated text”, etc.

    • @Lucidlethargy@sh.itjust.works · 7 points · 8 months ago

      There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.

      So we have… All this. All this nonsense. All because of stupid managers.

    • @rottingleaf@lemmy.world · 7 points · 8 months ago

      Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

      Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords; that’s how the hype comes about.

    • Lightor · 3 points · 7 months ago

      It’s the new blockchain or NFT hype; they think it’s magic.

      • @MajorHavoc@programming.dev · 1 point · 5 months ago

        But what if it actually is magic this time? Just this once!? And we miss the hype train?! (This is a sarcastic impression of real conversations I have had.)

  • @oyo@lemm.ee · 160 points · 8 months ago

    LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

      • @treadful@lemmy.zip · 13 points · 8 months ago

        They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.

        Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.

        • @Quereller@lemmy.one · 6 points · 8 months ago

          That is about the experience I have. I asked it for factual information in the field I work in. It didn’t give correct answers, or it gave protocols which were strange and would not be successful.

      • @Blackmist@feddit.uk · 55 points · 8 months ago

        And the system doesn’t know either.

        For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

        • @xantoxis@lemmy.world · 35 points · 8 months ago

          Accurate.

          No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

          • @Blackmist@feddit.uk · 18 points · 8 months ago

            The worst for me was a fairly simple programming question. The class it used didn’t exist.

            “You are correct, that class was removed in OLD version. Try this updated code instead.”

            Gave another made up class name.

            Repeated with a newer version number.

            It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

            • @nilloc@discuss.tchncs.de · 5 points · 7 months ago

              So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

              From what I’ve seen you’ll need an iron stomach.

      • @GBU_28@lemm.ee · 4 points · edited · 8 months ago

        With a proper framework, decent assertions are possible.

        1. It must cite the source and provide the quote, not just a summary.
        2. An adversarial review must be conducted.

        If that is done, the workload on the human is very low.

        That said, it’s STILL imperfect, but this is leagues better than one-shot question and answer.
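
A minimal sketch of the “cite the source and provide the quote” rule; the answer format, document store, and function names here are invented for illustration, not any real framework’s API:

```python
# Hypothetical document store the pipeline can check quotes against.
SOURCES = {
    "doc-1": "The grant deadline is March 3, and late entries are rejected.",
}

def verify_answer(answer: dict) -> bool:
    """Accept an answer only if its quote appears verbatim in the cited source."""
    source_text = SOURCES.get(answer.get("source_id"), "")
    quote = answer.get("quote", "")
    return bool(quote) and quote in source_text

# A quote lifted verbatim passes; a paraphrase or fabrication fails.
print(verify_answer({"source_id": "doc-1",
                     "quote": "The grant deadline is March 3"}))  # True
print(verify_answer({"source_id": "doc-1",
                     "quote": "The deadline is April 1"}))  # False
```

The verbatim-substring check is the cheapest possible adversarial review; a fuller pipeline would also have a second model argue against the answer.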

        • @Aceticon@lemmy.world · 4 points · edited · 8 months ago

          Except LLMs don’t store sources.

          They don’t even store sentences.

          It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.

          And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.

          Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re in luck and it will output the correct next word, but if you already have all that, you don’t really need an LLM to give you the rest.

          Maybe the “framework” you seek - which is quite akin to an indexer with a natural language interface - can be made with AI, but it’s not something you can do with LLMs because their structure is entirely unsuited for it.
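
The “most likely next token” loop described above can be shown with a toy model; the probability table is invented for illustration, standing in for the billions of learned parameters of a real LLM:

```python
import random

# Toy autoregressive loop: look up a distribution for the current context,
# sample the next token, append it, and repeat.
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"down": 1.0},
}

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:
            break  # context not in the table: nothing more to predict
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat', 'down']
```

Note there is no “source” anywhere in the structure, only context-to-next-word probabilities, which is the point being made.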

          • @GBU_28@lemm.ee · 2 points · edited · 8 months ago

            A proper framework does, with a data store, indexing, and access functions.

            The cutting-edge work is absolutely using LLMs in post-RAG pipelines.

            Consumer-grade chat interfaces definitely do not do this.

            Edit: if you worry about topics like context windows, sentence splitting, or source extraction, you aren’t using a best-in-class framework any more.
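
The data-store-plus-indexing idea reads roughly like this in miniature; the documents and the keyword-overlap scoring are stand-ins (real pipelines use vector indexes and an embedding-based retriever):

```python
import re

# Miniature retrieval step: index documents, find the best match for a
# query, and hand the retrieved text (rather than model memory) onward,
# so the final answer can cite an actual document.
DOCS = {
    "washer-manual": "Press and hold Start for three seconds to begin a cycle.",
    "dryer-manual": "Clean the lint filter after every load.",
}

def score(query: str, text: str) -> int:
    # Naive overlap of lowercase word sets.
    words = lambda s: set(re.findall(r"\w+", s.lower()))
    return len(words(query) & words(text))

def retrieve(query: str) -> tuple[str, str]:
    """Return (doc_id, text) of the best-scoring document."""
    doc_id = max(DOCS, key=lambda d: score(query, DOCS[d]))
    return doc_id, DOCS[doc_id]

doc_id, passage = retrieve("how do I start a wash cycle")
print(doc_id)  # washer-manual
```

The LLM only enters afterward, to phrase an answer grounded in `passage`, which is what makes source citation possible at all.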

  • Lvxferre · 175 points · 8 months ago

    As I mentioned in another post, about the same topic:

    Slapping the words “artificial intelligence” onto your product makes you look like one of those shady used-car salesmen: at best it’s misleading, at worst it’s actually true but poorly done.

    • @barsquid@lemmy.world · 25 points · 8 months ago

      Yes, the cost is sending all of your data off to be harvested, but what price can you put on having a virtual dumbass that is frequently wrong?

    • @DudeDudenson@lemmings.world · 13 points · 8 months ago

      Doubt the general consumer thinks that. I’m sure most of them are turned away by the unreliability and how ham-fisted most implementations are.

      • @blarth@thelemmy.club · 26 points · 8 months ago

        I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

        • @nossaquesapao@lemmy.eco.br · 21 points · edited · 8 months ago

          That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing existing ones that people use daily and enjoy, and forcing some AI alternative on them. Of course people are going to be pissed off!

          • @Krauerking@lemy.lol · 4 points · 8 months ago

            We aren’t allowed new things. That might change their perfectly balanced money making machine.

            And making search worse so it can pretend to be an ex is not what I or anyone is looking for in the search box.

      • Riskable · 5 points · 8 months ago

        To be fair, I love my dog but he has the same output 🤷

    • @Capricorn_Geriatric@lemmy.world · 2 points · 8 months ago

      More like “instead of making something that gets the job done, expect our unfinished product to complain and not do whatever it’s supposed to.” Or just plain false advertising.

      Either way, it’s not a good look, and I’m glad it’s not just us lemmings who care.

  • @xinayder@infosec.pub · 12 points · 8 months ago

    Yet companies are manipulating survey results to justify the FOMO jump onto the AI bandwagon. I don’t know where companies get the idea that people want AI (looking at you, Proton).

  • @qx128@lemmy.world · 69 points · 8 months ago

    I can attest this is true for me. I was shopping for a new clothes washer, and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them. It gave me the feeling LG clothes washer division is full of shit.

    Bought a SpeedQueen instead and been super happy with it. No AI bullshit anywhere in their product info.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 🏆 · 13 points · edited · 8 months ago

    It’s because consumers aren’t the dumbasses these companies think they are and we all know that the AI being shoved into everything fucking sucks worse than the systems we had before “AI.”

  • @OfficerBribe@lemm.ee · 15 points · 8 months ago

    They just don’t get it. Once everyone is using an AI toilet and an AI toothbrush, they’ll sing a different tune.

  • @sevan@lemmy.ca · 14 points · 8 months ago

    I’ve been applying similar thinking to my job search. When I see AI listed in a job description, I immediately put the company into one of 3 categories:

    1. It is an AI company that may go out of business suddenly within the next few years leaving me unemployed and possibly without any severance.
    2. Management has drunk the Kool-Aid and is hoping AI will drive their profit growth, which makes me question management competence. This also has a high likelihood of future job loss, but at least they might pay severance.
    3. The buzzword was tossed in to make the company look good to investors, but it is not highly relevant to their business. These companies get a partial pass for me.

    A company in the first two categories would need to pay a lot to entice me and I would not value their equity offering. The third category is understandable, especially if the success of AI would threaten their business.

  • @teamevil@lemmy.world · 8 points · 8 months ago

    I absolutely hate having to scroll past garbage AI answers that I don’t care to see and wouldn’t trust.

  • Captain Aggravated · 56 points · 8 months ago

    “AI” is certainly a turn-off for me, I would ask a salesman “do you have one that doesn’t have that?” and I will now enumerate why:

    1. LLMs are wrongness machines. They do have an almost miraculous ability to string words together to form coherent sentences but when they have no basis at all in truth it’s nothing but an extremely elaborate and expensive party trick. I don’t want actual services like web searches replaced with elaborate party tricks.

    2. In a lot of cases it’s being used as a buzzword to mean basically anything computer-controlled or networked. Last time I looked, they were using the word “smart” to mean that. A clothes dryer that can sense the humidity of the exhaust air to know when the clothes are dry isn’t any more “AI” than my ’90s microwave that can sense the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I’d like to see fail so spectacularly that people in the advertising business go missing over it.

    3. I already avoided “smart” appliances and will avoid “AI” appliances for the same reasons: The “smart” functionality doesn’t actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they’ll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there’s no reason the “smart” functionality couldn’t run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than “then we don’t get your data.”

    4. AI is apparently consuming more electricity than air conditioning. In fact, I’m not convinced that power consumption isn’t the selling point they’re pushing at board meetings. “It’ll keep our friends in the pollution industry in business.”

  • @Honytawk@lemmy.zip · 30 points · 8 months ago

    AI has some pretty good uses.

    But in the majority of junk on the market it is nothing but marketing bloatware.

  • Todd Bonzalez · 45 points · 8 months ago

    Give me a bunch of open AI models and a big GPU to play with and I’ll have a great time. It’s a wild world out there.

    Shove a bunch of AI nonsense in my face when I didn’t ask for it and I’m throwing your product out a window.