• @HootinNHollerin@lemmy.dbzer0.com
    27 · edited · 2 months ago

    The number of people spamming ‘deepseek’ in YouTube comments and live streams is insane. They definitely have a shitload of shadow funding

    • Alphane MoonOP
      26 · 2 months ago

      While I tend to avoid conspiracy-theory-type thinking, the nature of modern social media makes it very easy to run astroturfing/botting campaigns. It’s reasonable to be suspicious.

    • paraphrand
      13 · 2 months ago

      I find the online cheerleading for AI and AGI strange. It feels like a frothing mob rooting for the unleashing of a monster at times.

      • @spookex@lemmy.world
        3 · edited · 2 months ago

        I mean, a lot of it is just people who started using chatgpt to do simple and boring tasks (writing an email or a CV, or summarizing an article) and started thinking that it’s the best thing since sliced bread.

        I would know that since I’m a university student. I know the limitations of current AI stuff so I can cautiously use it for certain tasks and don’t trust the output to be correct. Meanwhile, my friend thought that he was making chatgpt better at answering his multiple choice economics quiz by telling it which of the answers it gave was wrong…

    • @1984@lemmy.today
      5 · edited · 2 months ago

      There actually seems to be some press about it too; I was surprised to see the BBC, Reuters and the New York Post write about it.

      But yeah, it’s very interesting what they have made here.

  • @ShittyBeatlesFCPres@lemmy.world
    26 · 2 months ago

    I kind of suspect this is as much about A.I. progress hitting a wall as anything else. It doesn’t seem like any of the LLMs are improving much between versions anymore. The U.S. companies were just throwing more compute (and money/electricity) at the problem and seeing small gains, but it’ll be a while before the next breakthrough.

    Kind of like self-driving cars during their hype cycle. They felt tantalizingly close 10 years ago or so but then progress stalled and it’s been a slow grind ever since.

    • @fine_sandy_bottom@discuss.tchncs.de
      13 · 2 months ago

      I think with a lot of technologies the first 95% is easy but the last 5% becomes exponentially harder.

      With LLMs though I think the problem is conflating them with other forms of intelligence.

      They’re amazingly good at forming sentences, but they’re unable to do real actual work.

        • @fine_sandy_bottom@discuss.tchncs.de
          1 · 2 months ago

          Yeah. I really dislike this “rule” because it’s commonly espoused by motivational speakers and efficiency “experts” saying you make 80% of your money from 20% of your time.

          It sounds great if you’ve never heard it before but in practice it just means “be more efficient” and is not really actionable.

  • BlackLaZoR
    30 · 2 months ago

    IMO they’re way too fixated on making a single model into an AGI.

    Some people have tried combining multiple specialized models (voice recognition + image recognition + LLM + controls + voice synthesis) and gotten quite compelling results.

    https://www.youtube.com/watch?v=7Fa3_rH4NcQ
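
    In spirit, that kind of pipeline is just a few off-the-shelf models glued together. A rough sketch of the idea in Python (assuming the openai-whisper, ollama and pyttsx3 packages and a locally pulled Ollama model; the model name is only an example, and the image-recognition and controls stages are omitted):

        # Minimal speech -> LLM -> speech loop built from specialized models.
        # Assumes: pip install openai-whisper ollama pyttsx3, a running Ollama
        # server, and `ollama pull llama3.2` (model name is just an example).
        import whisper   # speech recognition
        import ollama    # local LLM served by Ollama
        import pyttsx3   # offline text-to-speech

        asr = whisper.load_model("base")
        tts = pyttsx3.init()

        def respond(audio_path: str) -> None:
            text = asr.transcribe(audio_path)["text"]          # 1. speech -> text
            reply = ollama.chat(                               # 2. text -> reply
                model="llama3.2",
                messages=[{"role": "user", "content": text}],
            )["message"]["content"]
            tts.say(reply)                                     # 3. reply -> speech
            tts.runAndWait()

        respond("question.wav")

    Everything interesting happens in the LLM step; the surrounding pieces are mostly plumbing.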

      • BlackLaZoR
        6 · 2 months ago

        80% of the time she’s just a bot, but there are these flashes of brilliance that make me think we’re closer to general-purpose intelligence than we think

        And this is just one dude using commercially available tooling. A well-funded company could do infinitely better, if it were willing to give up some of the political correctness when training the model

        EDIT: When he removed the word filter last time, it got hilarious really quickly

        https://www.youtube.com/watch?v=0g-I_1jBJXM

        • @DragonTypeWyvern@midwest.social
          5 · 2 months ago

          What I am 100% certain of, because humanity is terrible, is that if a true AI is created, that fact will be ignored because it’s inconvenient for profit-seeking.

          • drthunder
            4 · 2 months ago

            I wonder what the mildest thing is that a true AI could tell the oligarchs to do that would make them shut it down. Give 10% of their wealth away, and not in a tax-dodge way? Stop funding fascists?

  • shoulderoforion
    23 · 2 months ago

    I mean, I get that the DeepSeek launch exposes what NVIDIA and OpenAI have been pushing as the only roadmap to AI as incorrect, but doesn’t DeepSeek’s ability to harness fewer, lower-quality processors thereby allow companies like NVIDIA and OpenAI to reconfigure and expand their infrastructure’s abilities to push even further, faster? Not sure why the selloff occurred; it’s like someone got a PC to POST quicker with a 286, and everybody said hey, those 386s sure do look nice, but we’re gonna fool around with these instead.

    • Avid Amoeba
      2 · 2 months ago

      but doesn’t DeepSeek’s ability to harness fewer, lower-quality processors thereby allow companies like NVIDIA and OpenAI to reconfigure and expand their infrastructure’s abilities to push even further, faster?

      Not that much, if the problem is NP-hard and they were already running up against the asymptote.
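
      To make the asymptote half of that concrete, here’s a toy Python illustration with made-up constants, loosely shaped like published neural scaling laws (an irreducible floor plus a power-law term); once the floor dominates, each extra 10x of effective compute buys less and less:

          # Toy saturating scaling curve: loss = floor + A * compute^(-alpha).
          # All constants are invented purely for illustration.
          FLOOR, A, ALPHA = 1.7, 400.0, 0.5

          def loss(compute: float) -> float:
              return FLOOR + A * compute ** -ALPHA

          for c in [1e3, 1e4, 1e5, 1e6, 1e7, 1e8]:
              print(f"compute {c:8.0e} -> loss {loss(c):6.3f}")
          # The output approaches 1.7 no matter how much compute is added, so a
          # big efficiency gain mostly changes where on the flat part you sit.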

    • magic_lobster_party
      28 · 2 months ago

      I believe this will ultimately be good news for Nvidia, terrible news for OpenAI.

      Better access to software is good for hardware companies. Nvidia is still the world leader when it comes to delivering computing power for AI. That hasn’t changed (yet). All this means is that more value can be made from Nvidia GPUs.

      For OpenAI, their entire business model is based on the moat they’ve built around ChatGPT. They made a $1B bet on this idea - which they have now lost. All their competitive edge is suddenly gone. They have no moat anymore!

      • shoulderoforion
        5 · 2 months ago

        Well, it is 2025; a billion dollars isn’t what it used to be. A trillion is something

    • Yerbouti
      6 · 2 months ago

      The fact that you can run it locally with good performance on a 4+ year old machine (an M1 Max, for example) is not exactly good news for them. I think DeepSeek just made their $500 billion investment project, which was already absurd, look incredibly stupid. I’m gonna say it again: the GAFAM economy is based on a whole lot of nothing. Now more than ever, we can take the web back and destroy their system. Fuck the tech-bros and their oligarch friends.
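
      For anyone who wants to try, a minimal sketch of poking at one of the distilled R1 models locally through the ollama Python package (the 14b tag is just an example; pick whatever size fits your RAM):

          # Quick local chat with a distilled DeepSeek-R1 model via Ollama.
          # Assumes Ollama is installed and `ollama pull deepseek-r1:14b` was run.
          import ollama

          stream = ollama.chat(
              model="deepseek-r1:14b",
              messages=[{"role": "user", "content": "Why are mixture-of-experts models cheap to run?"}],
              stream=True,
          )
          for chunk in stream:
              print(chunk["message"]["content"], end="", flush=True)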

    • @demesisx@infosec.pub
      12 · edited · 2 months ago

      The reason for the correction is that the “smart money” that breathlessly invested billions on the assumption that CUDA is absolutely required for a good AI model is suddenly looking very incorrect.

      I had been predicting that AMD would make inroads with OpenCL, but this news is even better. Reportedly, DeepSeek doesn’t even necessarily require the use of either OpenCL or CUDA.

  • @sudo42@lemmy.world
    59 · 2 months ago

    There is no downside to lying these days. Yet the public seems surprised that all they see is lying.

    • @radau@lemmy.dbzer0.com
      26 · 2 months ago

      So many people don’t even question it. Talk loudly and confidently enough, and that’s the bar for most, unfortunately.

      TikTok, Instagram and similar are great examples of this: initially you think, wow, cool, I’m seeing all of these new things and getting so much info. Then you see someone come up on a topic you know something about, and the facade breaks when all they do is spew misinformation that attracts a crowd (usually via fear).

      • @4am@lemm.ee
        10 · 2 months ago

        And then the hordes of sycophantic DinkDonkers repeat their detritus over every comment thread they can.

  • @Reygle@lemmy.world
    5 · 2 months ago

    I hope that normal people will now realize how full of sh*t he is. They won’t, but DON’T TAKE THIS FROM ME

  • pachrist
    72 · 2 months ago

    There’s no way I believe that Deepseek was made for the $5m figure I’ve seen floating around.

    But that doesn’t matter. If it cost $15m, $50m, $500m, or even more than that, it’s probably worth it to take a dump in Sam Altman’s morning coffee.

  • @futatorius@lemm.ee
    44 · 2 months ago

    How about that: venture capitalists don’t know what’s going on in the market any more than anyone else does. They’re just arrogant because they have metric shit-tons of money.

  • Alphane MoonOP
    38 · 2 months ago

    Nevertheless, like the funding-hungry CEO he is, Altman quickly turned the thread around to OpenAI promising jam tomorrow, with the execution of the firm’s roadmap, amazing next-gen AI models, and “bringing you all AGI and beyond.”

    AGI and beyond?

    • Dem Bosain
      41 · 2 months ago

      If you throw billions of dollars at a problem, you will always get the most expensive solution.

    • Nightwatch Admin
      14 · edited · 2 months ago

      Artificial General Intelligence: the pipe dream of a technological intelligence that isn’t limited to producing a single thing but is generally capable, like a human.

      Edit: recommended reading is “Life 3.0”. While I think it is overly positive about AI, it gives a good overview of the AI industry and innovation, and the ideas behind it. You will have to swallow a massive chunk of Musk fanboyism, although to be fair it predates Musk’s waving of the fasces.

      • Alphane MoonOP
        32 · 2 months ago

        I get it. I just didn’t know that they are already using “beyond AGI” in their grifting copytext.

      • @floofloof@lemmy.ca
        24 · edited · 2 months ago

        The fact that Microsoft and OpenAI define Artificial General Intelligence in terms of profit suggests they’re not confident about achieving the real thing:

        The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. (Source)

        Given this definition, when they say they’ll achieve AGI and beyond, they simply mean they’ll achieve more than $100 billion in profit. It says nothing about what they expect to achieve technically.