Over half of all tech industry workers view AI as overrated

    • @Paddzr@lemmy.world · 18 points · 1 year ago

      Which is fair. AI at work is great, but it only does fairly simple things. Nothing I can’t do myself, but it saves my sanity and time.

      It’s all I want from it, and it delivers.

        • @NightAuthor@lemmy.world · 0 points · 1 year ago

          You say it’s “far”, but 70 years ago a simple calculator was the size of a house. The power of my desktop from 10 years ago is beaten by my phone, hell, maybe even by my watch.

          You know this, you code: compute is improving rapidly, and even when vertical scaling slows, it still scales horizontally. All the while, software is getting more efficient and developing new capabilities and techniques that only bring on even more innovation.

          It compounds. At this point I think the only limiting factor is how much faith the rich and powerful put in AI’s ability to make them richer. The more they invest, the faster it’ll grow.

  • @Immersive_Matthew@sh.itjust.works · 8 points · 1 year ago

    Overrated? Compared to what, AGI that does not exist yet? Overhyped, though? Absolutely.

    We went from very little AI content making its way to your eyeballs and ears, to it occurring daily, if not during your very session here today. So many thumbnails and writeups have used AI that to say it is overrated is a bit absurd. Unless you were expecting it to be AGI, then yes, the AI of today is overrated, but it does not matter, as you are consuming it still.

    • @aesthelete@lemmy.world · 8 points · 1 year ago

      We went from very little AI content making its way to your eyeballs and ears, to it occurring daily, if not during your very session here today.

      And this AI content that you’re consuming, is that an improvement?

      If not, maybe it’s, uh, what’s the word? Overrated.

        • @aesthelete@lemmy.world · 5 points · 1 year ago

          Lol look at the glorious thumbnails!

          The text is dreadful. It’s somehow worse than the sweatshop content it replaced.

  • @shirro@aussie.zone · 39 points · 1 year ago

    Many areas of machine learning, particularly LLMs, are making impressive progress, but the usual Y Combinator techbro types are overhyping things again. Same as every other bubble, including the original Internet one, the crypto scams, and half the bullshit companies they run that add fuck-all value to the world.

    The cult of bullshit around AI is a means to fleece investors. Seen the same bullshit too many times. Machine learning is going to have a huge impact on the world, same as the Internet did, but it isn’t going to happen overnight. The only certain thing that will happen in the short term is that wealth will be transferred from our pockets to theirs. Fuck them all.

    I skip most AI/ChatGPT spam on social media with the same ruthlessness I skipped NFTs. It isn’t that ML doesn’t have huge potential, but most publicity about it is clearly aimed at pumping up the market rather than being truly informative about the technology.

    • @Barack_Embalmer@lemmy.world · 1 point · 1 year ago

      ML has already had a huge impact on the world (for better or worse), to the extent that Yann LeCun proposes that the tech giants would crumble if it disappeared overnight. For several years it’s been the core of speech-to-text, language translation, optical character recognition, web search, content recommendation, social media hate speech detection, to name a few.

      • @shirro@aussie.zone · 1 point · 1 year ago

        ML-based handwriting recognition has been powering postal routing for a couple of decades. ML completely dominates some areas and will only increase in impact as it becomes more widely applicable. Getting any technology from a lab demo to a safe and reliable real-world product is difficult, and only more so when there are regulatory obstacles and people being dragged around by vehicles.

        For the purposes of raising money from investors, it is convenient to understate problems and generate a cult of magical thinking around technology. The hype cycle and the manipulation of the narrative have been fairly obvious with this one.

        • @agent_flounder@lemmy.world · 0 points · 1 year ago

          AI seems to have gone through periods of relative stagnation punctuated by leaps forward. Neural networks were the next big thing when I was in college in the late 80s. Then fuzzy logic. Computer vision was limited maybe 30 years ago but has had some surges thanks to new algorithms and faster processors. Bayesian algorithms (Hidden Markov Models, etc.) got big fighting spam but helped a lot with speech-to-text (STT). LLMs are the next big leap forward from that area of research. I think we still have a number of major leaps to go before we have an AGI, though. But if LLMs follow the same progression as text-to-speech (TTS) or STT, in 10-20 years they will be impressively good.

      • @randon31415@lemmy.world · -8 points · 1 year ago

        The tech is here; the problem is risk management. We’ve had the ability to build self-driving cars for almost a decade. Like Catholic priests and pedophilia, they are much less likely to crash than the common man, but the assumption is that they never crash, so when they do, everyone makes a big deal about it.

        Think of all the B.S. documentation reports people have to write that no one reads. LLMs could easily handle those, but do you want to risk it if those reports actually become important?
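
        A hedged sketch of what that might look like: the model name, prompt, and `openai` client usage below are assumptions for illustration, not a claim about anyone’s actual setup.

        ```python
        # Sketch: draft one of those write-only status reports with an LLM.
        # A human still has to review it before it "actually becomes important".
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        bullet_notes = "- migrated the billing cron\n- quarantined 2 flaky tests"
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[
                {"role": "system",
                 "content": "Turn these engineer notes into a short weekly status report."},
                {"role": "user", "content": bullet_notes},
            ],
        )
        print(response.choices[0].message.content)  # review before filing
        ```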

  • @tweeks@feddit.nl · 8 points · 1 year ago

    Well, it depends on your bubble, I guess. But personally I’d say it’s underrated and overrated at the same time, though mostly underrated.

    It depends on your expectations and on how you use it in your toolbox, I’d say. It keeps surprising me weekly how fast progress is. But perhaps we get used to it.

  • @kromem@lemmy.world · 12 points · 1 year ago

    In my experience, well over half of tech industry workers don’t even understand it.

    I was just trying to explain to someone on Hacker News that no, the “programmers” of LLMs do not in fact know what the LLM is doing, because it’s not being programmed directly at all (which, even after several rounds of several people explaining, still doesn’t seem to have sunk in).

    Even people that do understand the tech more generally pretty well are still remarkably misinformed about it in various popular BS ways, such as that it’s just statistics and a Markov chain, completely unaware of the multiple studies over the past 12 months showing that even smaller toy models are capable of developing abstract world models as long as they can be structured as linear representations.
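
    For anyone curious, a minimal sketch of what those “linear representation” probe studies do, with synthetic data standing in for a real model’s activations (everything here is an assumption for illustration):

    ```python
    # Linear probe sketch: if a purely linear map can read a world-state
    # feature out of a model's hidden states, that feature is "linearly
    # represented". Synthetic stand-ins replace real activations here.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    hidden_states = rng.normal(size=(1000, 256))  # stand-in activations
    # Stand-in ground truth, e.g. a board square's occupancy in the
    # Othello-style setups those papers use.
    board_feature = (hidden_states @ rng.normal(size=256) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, board_feature, test_size=0.2, random_state=0)

    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # High accuracy from a *linear* classifier is the evidence claimed.
    print("probe accuracy:", probe.score(X_test, y_test))
    ```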

    It’s to the point that, unless it’s in a thread explicitly about actual research papers where explaining nuances seems fitting, I don’t even bother trying to educate the average tech commentators regurgitating misinformation anymore. They typically only want to confirm their biases anyway, and have such a poor understanding of the specifics that it’s like explaining nuanced aspects of the immune system to anti-vaxxers.

  • @fosforus@sopuli.xyz · -3 points · 1 year ago

    Over half of all tech industry workers don’t know how to use AI, I guess? I’m sure there’s some percentage of people whose jobs are advanced enough that AI will not help there, but I bet my kidney that it’s not more than half.

    10-20 years ago, boomers scoffed at people who used Google instead of reading manuals or figuring things out by themselves.

  • @ParsnipWitch@feddit.de · 63 points · 1 year ago

    It is overrated. At least when people look at AI as some sort of brain crutch that excuses them from learning things.

    My boss now believes he can “program too” because he lets ChatGPT write scripts for him that, more often than not, are poor BS.

    He also enters chunks of our code into ChatGPT when we file bugs, or when we aren’t finished with everything in 5 minutes, as some kind of “gotcha” moment, ignoring that the solutions he then provides don’t work.

    Too many people see LLMs as authorities they just aren’t…

    • @Spedwell@lemmy.world · 9 points · 1 year ago

      It bugs me how easily people (a) trust the accuracy of ChatGPT’s output, (b) feel like it’s somehow safe to use that output in commercial applications or to place it under their own license, as if the open questions of copyright aren’t a ten-ton liability hanging over their heads, and (c) feed sensitive data into ChatGPT, as if OpenAI isn’t going to log that interaction and train future models on it.

      I have played around with it a bit, but I simply am not carefree/careless enough, or am too uptight (pick your interpretation), to use it for anything serious.

    • @kromem@lemmy.world · 6 points · 1 year ago

      Too many people see LLMs as authorities they just aren’t…

      This is more a ‘human’ problem than an ‘AI’ problem.

      In general, it’s weird as heck that the industry is going full force into chatbots as a search replacement.

      Like, that was a neat demo for a low-hanging-fruit use case, but it’s pretty damn far from the ideal production application, given that the tech isn’t actually memorizing facts; when it gets things right, it’s a “wow, this is impressive, because it really shouldn’t be doing a good job at this.”

      Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.
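
      A rough sketch of the classifier use case he means (labels, prompt, and model choice are all assumptions for illustration):

      ```python
      # Sketch: using an LLM as a zero-shot classifier by constraining its
      # output to a fixed label set instead of letting it free-associate.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set
      LABELS = ["bug report", "feature request", "spam"]

      def classify(ticket: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # hypothetical model choice
              temperature=0,        # keep the labeling deterministic-ish
              messages=[
                  {"role": "system",
                   "content": "Answer with exactly one of: " + ", ".join(LABELS) + "."},
                  {"role": "user", "content": ticket},
              ],
          )
          answer = resp.choices[0].message.content.strip().lower()
          # Fall back rather than trusting a malformed answer.
          return answer if answer in LABELS else "unclassified"

      print(classify("App crashes when I rotate my phone"))
      ```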

      Overall, the past few years have opened my eyes to just how broken human thinking is, not as much the limitations of neural networks.

  • danque · 3 points · 1 year ago

    It’s not the magic that people think it is. They even warn you that the facts might not be true facts.

      • @VintageTech@sh.itjust.works · 39 points · 1 year ago

        Once we’re able to synergize the increased throughput of our knowledge capacity we’re likely to exceed shareholder expectation and increase returns company wide so employee defecation won’t be throttled by our ability to process sanity.

        • @Hackerman_uwu@lemmy.world · 20 points · 1 year ago

          Sounds like we need to align on triple underscoring the double-bottom line for all stakeholders. Let’s hammer a stake in the ground here and craft a narrative that drives contingency through the process space for F24 while synthesising synergy from a cloudshaping standpoint in a parallel tranche. This journey is really all about the art of the possible after all, so lift and shift a fit-for-purpose best practice and hit the ground running on our BHAG.

    • SineSwiper · 28 points · 1 year ago

      NoSQL, blockchain, crypto, metaverse, just to name a few recent examples.

      AI is overhyped, but so far it is more useful than any of those other examples.

      • @PieMePlenty@lemmy.world · 4 points · 1 year ago

        These are useful technologies when used where they’re called for. They aren’t all-in-one solutions like the smartphone, which killed off cameras, PDAs, media players… I think if people looked at them as tools that fix specific problems, we’d all be happier.