As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • @FarceMultiplier@lemmy.ca
    17 points · 2 years ago

    So, who are the up-and-comers? Not every company in the dot-com era died. Some grew very large and made a lot of people rich.

  • @Blackmist@feddit.uk
    48 points · 2 years ago

    Ah, the sudden realisation of all the VCs that they’ve tipped money into what is essentially a fancy version of predictive text.

    Alexa proudly informed me the other day that Ray Parker Jr is Caucasian. We ain’t in any danger of the singularity yet, boys.

  • @NounsAndWords@lemmy.world
    314 points · 2 years ago

    Just a reminder that the dot com bubble was a problem for investors, not the underlying technology that continued to change the entire world.

      • @FMT99@lemmy.world
        22 points · 2 years ago

        Not that you’re wrong per se, but the dot-com bubble didn’t impact my life at all back in the day. It was on the news and that was it. I think this will be the same. A bunch of investors will lose their investments, maybe some adventurous pension plans will suffer a bit, but on the whole life will go on.

        The impact of AI itself will be much further reaching. We better force the companies that do survive to share the wealth otherwise we’re in for a tough time. But that won’t have anything to do with a bursting investment bubble.

        • JustSomePerson
          42 points · 2 years ago

          Good for you. But perhaps fuck off, because some of us lost jobs, homes, and financial stability.

            • JustSomePerson
              1 point · 2 years ago

              Have you considered accepting that you’re wrong on this? It’s not a personal disaster to realize that just because you didn’t personally see the impact, that doesn’t mean there was none. Instead of sticking to what you thought, you might gain some insight. Perhaps it will be valuable when analyzing the current bubbles.

        • @BradleyUffner@lemmy.world
          92 points · 2 years ago

          Lots of everyday normal people lost their jobs due to the bubble. Saying it only impacted the already rich investors is wrong.

          • @ZagTheRaccoon@reddthat.com
            3 points · edited · 2 years ago

            There really isn’t much that can harm rich people that won’t indirectly do splash damage on other people, just because their actions control so much of the economy that people depend on for survival.

          • @Candelestine@lemmy.world
            -12 points · edited · 2 years ago

            Lots or some? I’d say “some”.

            edit: Really, though. How many people lost their jobs? Obviously, this being a techy space, anecdotes will lean towards people knowing someone personally. Tech people know other tech people.

            But in a country of 300 million, how many people was it? Was unemployment significantly moved by it? No, it was not, because for the most part the websites that failed did not employ very large numbers of people, and there were other jobs available in the field.

          • @WarmSoda@lemm.ee
            -33 points · 2 years ago

            I don’t know about that. Not a single person I know or I’ve met has ever said they were affected by it in any way. In any state.

            • Chetzemoka
              21 points · 2 years ago

              Well there are two of us right here in the comment section. I had a great job at a startup online retailer. They had a good business model, it was a great place to work.

              We had been beating our sales projections and were only a couple months away from being profitable when the Sept 11 attacks happened. Within two weeks, our VC funding stopped and we were all out of jobs because the company owners had to choose between paying rent and paying us. They chose to pay us all severance, bless them for that.

              Thankfully I was young, didn’t own a house, didn’t have kids. But a lot of my colleagues did.

                • Chetzemoka
                  22 points · 2 years ago

                  Fully admits to being a literal child at the time. Still talking like they have something to contribute about the situation they fully admit to knowing nothing about. Gets snarky with the people who were actually impacted by it.

                  Fucking why do people like you feel the compulsive need to open their mouths about every god damned thing? Maybe your opinion, I dunno, isn’t relevant.

                  I would like to introduce you to a different possibility. It’s called keeping your mouth shut and listening. Crazy idea, I know, but it’s often followed by this thing called learning.

                  Give it a try sometime.

            • JustSomePerson
              9 points · 2 years ago

              Are you maybe too young to know people who were actually working at the time? Obviously the life of a high schooler wasn’t very affected.

              • @WarmSoda@lemm.ee
                -17 points · edited · 2 years ago

                Right. Everyone I know was obviously in high school? I was long out of high school lol

            • @grabyourmotherskeys@lemmy.world
              14 points · 2 years ago

              As someone who was getting a comp sci degree at the time, a huge percentage of my cohort could not find jobs in any IT position, let alone programming, so they ended up taking what they could get. A couple years later when companies started hiring again, no one wanted them because they hadn’t worked in the industry and their degree was stale (which is bs, they were just able to hire much more experienced people for the same salary). Most of those people then ended up paying off student loans for degrees they never used.

              Meanwhile, those who could afford it stayed in school, flooding the market with Master’s and PhD candidates, which raised the bar for everyone coming after.

              That still affects hiring practices to this day.

            • @wizardbeard@lemmy.dbzer0.com
              5 points · 2 years ago

              I’d reason that has more to do with your circles than anything else.

              I entered college right around that time. I know multiple families who lost their home from it. My parents nearly did. My grandparents’ attempt to downsize was delayed by almost five years of sitting with their house on the market, and they ended up having to absolutely slash the sales price to sell their home.

              I know people who lost their jobs as primary breadwinner in their household and never were able to get back into the workforce in any significant capacity until just before the pandemic.

              I know many people who graduated college 2008-2012, had wonderful credentials/resumes, who weren’t able to find stable employment or a starter “career” job until 2017 or later.

              Hell, the 2008 crash was the big tipping point that killed the public idea that if you worked hard and did well in school, you could just expect things to work out well for your employment.

              There’s all sorts of shit you could use to pick apart these folks, blame what occurred on choices they made, and you wouldn’t be entirely off base for some of them. However, that doesn’t change that, despite your personal circles, it had a significant impact.

              We’re going anecdote v anecdote here. Your insistence on a lack of effect on people crumbles the moment anyone comes in and says they know people who were affected.

        • kitonthenet
          47 points · 2 years ago

          The dot-com bubble was one of the middle dominoes on the way to the 2008 collapse: the Fed dropped interest rates to near zero and kept them there for years, investor confidence was low, and so here come mortgage-backed securities.

          In addition, the bubble bursting and its aftermath is what allowed the big players in tech (Amazon, Google, Cisco etc) to merge to monopoly, which hasn’t been particularly good

          • @WarmSoda@lemm.ee
            -51 points · 2 years ago

            Most Americans didn’t even feel the effects of the 2008 collapse. Most recessions aren’t noticed by most of the country. These things are blown way out of proportion by news conglomerates that have a vested interest in it.

            • @5in1k@lemm.ee
              29 points · 2 years ago

              Um no. So many of my friends unemployed. So many people were losing houses. It was nuts. Everyone noticed. The government even gave us “please don’t riot” money.

                • @5in1k@lemm.ee
                  1 point · 2 years ago

                  The company I worked for went from 12 to 5 employees. I bought a house for 15k because so many people lost them and the prices collapsed. Did you live through it? Maybe you were just insulated or ignorant of what was happening.

                • @5in1k@lemm.ee
                  1 point · 2 years ago

                  I even remember going to downtown Detroit to repair scratches on desks on entire floors of a skyscraper right on Jefferson that was empty of employees. Hipsters bought an empty skyscraper as well up in New Center. But yeah, no one affected.

              • @WarmSoda@lemm.ee
                1 point · 2 years ago

                Idk, probably not the same thoughts you had deciding to chime in to a 5 day old discussion.

            • @rambaroo@lemmy.world
              10 points · 2 years ago

              You have got to be kidding me. I don’t know a single millennial who wasn’t affected badly by 2008.

            • @Landrin201@lemmy.ml
              2 points · 2 years ago

              Is this Elon Musk’s account? Because for you to be so out of touch as to say the insane shit you just said, you must be extraordinarily wealthy, or a hermit living in the woods with nobody else around.

    • @MyDogLovesMe@lemmy.world
      1 point · 2 years ago

      In Canada, a good example is the cannabis industry. Talk about fucking up opportunities.

      Why would it be any different with tech?

      It’s about the early cash-grab, imo.

    • @GenderNeutralBro@lemmy.sdf.org
      5 points · 2 years ago

      Yeah. Note how we’re having this conversation over the web. The bubble didn’t hurt the tech.

      This is something to worry about if you’re an investor or if you’re making big career decisions.

      If you have a managed investment account, like a 401(k), it might be worth taking a closer look at it. There’s no shortage of shysters in finance.

  • Altima NEO
    0 points · 2 years ago

    What happened to everyone freaking out about AI taking our jobs?

    • FaceDeer
      4 points · 2 years ago

      Based on this article it seems they’ve moved to the “denial” phase.

  • Lettuce eat lettuce
    5 points · 2 years ago

    AI will follow the same path as VR IMO. Everybody will freak out about it for a while, tons of companies will try getting into the market.

    And after a few years, nobody will really care about it anymore. The people that use it will like it. It will become integrated in subtle and mundane ways, like how AR is used in TikTok filters, smartphone camera settings, etc.

    I don’t think it will become anything like general intelligence.

    • R0cket_M00se
      4 points · 2 years ago

      Nah, this ain’t it.

      So here’s the thing about AI, every company desperately wants their employees to be using it because it’ll increase their productivity and eventually allow upper management to fire more people and pass the savings onto the C suites. Just like with computerization.

      The problem is that you can’t just send all of your spreadsheets of personal financial data to OpenAI/Bing, because from a security perspective that’s a huge black hole of data exfiltration that will make your company more vulnerable. How do we solve the problem?

      In the next five to ten years you will see everyone from Microsoft/Google to smaller more niche groups begin to offer on-premise or cloud based AI models that are trained on a standardized set of information by the manufacturer/distributor, and then personally trained on company data by internal engineers or a new type of IT role completely focused on AI (just like how we have automation and cloud engineering positions today.)

      The data worker of the future will have a virtual assistant that emulates everything we thought Google Assistant and Cortana were going to be, and will replace most data entry positions. Programmers will probably be fewer and further between, and the people that keep their jobs in general will be the ones who can multiply and automate their workload with the ASSISTANCE of AI.

      It’s not going to replace us anytime soon, but it’s going to change the working environment just as much as the invention of the PC did.

    • magic_lobster_party
      4 points · 2 years ago

      The problem with VR is the cost of a headset. It’s a high cost of entry. Few want to buy another expensive device unless it’s really worth it.

      Generative AI has a small cost of entry for the consumer. Just log in to a site, maybe pay some subscription fee, and start prompting. I’ve used it to quickly generate Excel formulas for example. Instead of looking for a particular answer in some website with SEO garbage I can get an answer immediately.

    • @elscallr@lemmy.world
      8 points · 2 years ago

      That’s exactly what was said about the internet in 1990. We have no idea what the next step will be.

    • @infyrin@lemmy.world
      4 points · 2 years ago

      I wouldn’t really compare AI and VR.

      We know VR is still somewhat in its infancy, but with pushes like what Valve has been doing for it, it’s progressively getting better. It is still years away from being perfected, but for what we have of VR right now, it’s not bad. We just need to figure out how to take it out of the goggle/headset interface, because I think it can be designed better. If we integrate it with AR technology, then we might have a winning combination there.

      AI is heavily critiqued because it is being used as a tool in a similar vein as automation: it is just a shortcut for businesses that think AI can replace humans. The writers’ strike going on right now reflects this. I wouldn’t exactly call it a ‘mundane’ way, more like a draconian way.

  • Orphie Baby
    29 points · edited · 2 years ago

    Good. It’s not even AI. That word is just used because ignorant people eat it up.

    • R0cket_M00se
      13 points · 2 years ago

      Call it whatever you want, if you worked in a field where it’s useful you’d see the value.

      “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

      Holy shit! So you mean… Like humans? Lol

      • @whats_a_refoogee@sh.itjust.works
        12 points · 2 years ago

        “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

        Holy shit! So you mean… Like humans? Lol

        No, not like humans. The current chatbots are relational language models. Take programming for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax. He could write a piece of code, never having seen code before. The chatbot AIs are not capable of it.

        I am fairly certain that if you take a chatbot that has never seen any code, and feed it a programming book that doesn’t contain any code examples, it would not be able to produce code. A human could, because humans can reason and create something new. A language model needs to have seen it to be able to rearrange it.

        We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.
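The “rearranging the language it had seen” point can be made concrete with a toy sketch (a hypothetical illustration only, orders of magnitude simpler than a real language model): a bigram Markov chain can only ever emit words it saw in training, in orders its training text licenses.

```python
import random

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the table, picking each next word from observed followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # dead end: nothing ever followed this word
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

Every word the toy emits comes straight from the corpus; it recombines, but cannot coin. Real LLMs generalize far better than this, but the point being argued above is that the process remains statistical rather than reasoned.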

      • Orphie Baby
        3 points · edited · 2 years ago

        I wasn’t knocking its usefulness. It’s certainly not AI though, and has a pretty limited usefulness.

        Edit: When the fuck did I say “limited usefulness = not useful for anything”? God the fucking goalpost-moving. I’m fucking out.

        • R0cket_M00se
          -2 points · 2 years ago

          If you think its usefulness is limited, you don’t work in a professional environment that utilizes it. I find new uses every day as a network engineer.

          Hell, I had it write me backup scripts for my switches the other day using a Python plugin called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as creating the appropriate Path. Then it wrote the damn script for me.

          Sure, I had to tweak it to match my specific deployment, and there were a couple of things it was out of date on, but that’s the point, isn’t it? Humans using AI to get more work done, not AI replacing us wholesale. I’ve never gotten more accurate information faster than with AI; search engines are like going to the library and skimming the shelves by comparison.

          Is it perfect? No. Is it still massively useful and in the next decade will overhaul data work and IT the same way that computers did in the 90’s/00’s? Absolutely. If you disagree it’s because you either have been exclusively using it to dick around or you don’t work from behind a computer screen at all.

            • R0cket_M00se
              3 points · 2 years ago

              Plus it’s just been invented, saying it’s limited is like trying to claim what the internet can and can’t do in the year 1993.

          • @whats_a_refoogee@sh.itjust.works
            3 points · 2 years ago

            Hell, I had it write me backup scripts for my switches the other day using a Python plugin called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as creating the appropriate Path. Then it wrote the damn script for me.

            And you would have no idea what bugs or unintended behavior it contains. Especially since you’re not a programmer. The current models are good for getting results that are hard to create but easy to verify. Any non-trivial code is not in that category. And trivial code is well… trivial to write.

          • Orphie Baby
            2 points · 2 years ago

            “Limited” is relative to what context you’re talking about. God I’m sick of this thread.

            • R0cket_M00se
              -4 points · 2 years ago

              Talk to me in 50 years when Boston Dynamics robots are running OpenAI models and can do your gardening/laundry for you.

              • Orphie Baby
                3 points · edited · 2 years ago

                Haha, keep dreaming. If a system made by OpenAI is used for robots, it’s not going to work anything like— on a fundamental level— current “AI”. It’s not a matter of opinion or speculation, but a matter of knowing how the fuck current “AI” even works. It just. Can’t. Understand things. And you simply can’t fit inside it an instruction for every scenario to make up for that. I don’t know how else to put it!

                You want “AI” to exist in the way people think about it? One that can direct robots autonomously? You have to program a way for it to know something and to be able to react appropriately to new scenarios based on context clues. There simply is no substitute for this ability to “learn” in some capacity. It’s not an advanced, optional feature— it’s a necessary one to function.

                “But AI will get better!” is not the fucking answer. What we currently have? Is not made to understand things, to recognize fingers, to say “five fingers only”, to say “that’s true, that’s false”, to have knowledge. You need a completely new, different system.

                People are so fucking dense about all of this, simply because idiots named what we currently have “AI”. Just like people are dense about “black holes” just because of their stupid name.

                • R0cket_M00se
                  0 points · 2 years ago

                  We’re like four responses into this comment chain and you’re still going off about how it’s not “real” AI because it can’t think and isn’t sapient. No shit, literally no one was arguing that point. Current AI is like the virtual intelligences of Mass Effect, or the “dumb” AI from the Halo franchise.

                  Do I need my laundry robot to be able to think for itself and respond to any possible scenario? Fuck no. Just like how I didn’t need ChatGPT to be able to understand what I’m using the Python script for. I ask it to accomplish a task using the data set that it’s trained on, and it can access said pretrained data to build me a script for what I’m describing to it. I can ask DALLE2 to generate me an image and it will access its dataset to emulate whatever object or scene I’ve described based on its training data.

                  You’re so hung up on the fact that it can’t think for itself in a sapience sense that you’re claiming it cannot do things that it’s already capable of. The models can absolutely replicate “thinking” within the information it has available. That’s not a subjective opinion, if it couldn’t do that they wouldn’t be functional for the use cases we already have for them.

                  Additionally, robotics has already reached the point we need for this to occur. BD has bipedal robots that can do parkour and assist with carrying loads for human operators. All of the constituent parts of what I’m describing already exist. There’s no reason we couldn’t build an AI model for any given task, once we define all of the dependencies such a task would require and assimilate the training data. There’s people who have already done similar (albeit more simplistic) things with this.

                  Hell, Roombas have been automating vacuuming for years, and without the benefit of machine learning. How is that any different than what I’m talking about here? You could build a model to take in the pathfinding and camera data of all vacuuming robots and use it to train an AI for vacuuming for fucks sake. It’s just combining ML with other things besides a chatbot.

                  And you call me dense.

          • I’m not the person you asked, but current deep learning models just generate output based on statistical probability from prior inputs. There’s no evidence that this is how humans think.

            AI should be able to demonstrate some understanding of what it is saying; so far, it fails this test, often spectacularly. AI should be able to demonstrate inductive, deductive, and abductive reasoning.

            There were some older AI models, attempting to simulate neural networks, that could extrapolate and come up with novel, often childlike, ideas. That approach is not currently in favor, and was progressing quite slowly, if at all. ML produces spectacular results, but it’s not thought, and it only superficially (if often convincingly) resembles such.
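The “output based on statistical probability from prior inputs” claim can likewise be shown in miniature (a hypothetical toy, not how deep networks are actually implemented): the “model” below is nothing but co-occurrence counts turned into a probability distribution over the next word.

```python
from collections import Counter

def next_word_distribution(text, context):
    """Estimate P(next word | context) purely from bigram counts."""
    words = text.split()
    followers = Counter(b for a, b in zip(words, words[1:]) if a == context)
    total = sum(followers.values())
    if total == 0:  # context never seen: the model has no opinion at all
        return {}
    return {w: count / total for w, count in followers.items()}

corpus = "it is sunny . it is rainy . it is sunny ."
print(next_word_distribution(corpus, "is"))
# "sunny" comes out as the most probable continuation simply because
# it followed "is" most often in the prior input.
```

Sampling from such a distribution produces fluent-looking output without any model of what "sunny" means, which is the distinction being drawn here between prediction and understanding.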

    • FaceDeer
      16 points · 2 years ago

      It is indeed AI. Artificial intelligence is a field of study that encompasses machine learning, along with a wide variety of other things.

      Ignorant people get upset about that word being used because all they know about “AI” is from sci-fi shows and movies.

      • Orphie Baby
        13 points · 2 years ago

        Except for all the intents and purposes people keep talking about, it’s simply not. It’s not about technicalities, it’s about how most people are freaking confused. If most people are freaking confused, then by god do we need to re-categorize and come up with some new words.

        • FaceDeer
          17 points · 2 years ago

          “Artificial intelligence” is well-established technical jargon that’s been in use by researchers for decades. There are scientific journals named “Artificial Intelligence” that are older than I am.

          If the general public is so confused they can come up with their own new name for it. Call them HALs or Skynets or whatever, and then they can rightly say “ChatGPT is not a Skynet” and maybe it’ll calm them down a little. Changing the name of the whole field of study is just not in the cards at this point.

          • @pexavc@lemmy.world
            2 points · 2 years ago

            Never really understood the gatekeeping around the phrase “AI”. At the end of the day, the general study itself is difficult for the general public to understand. So shouldn’t we actually be happy that it is a mainstream term? That it is educating people on concepts they would otherwise ignore?

          • @shy@reddthat.com
            -4 points · 2 years ago

            We should call them LLMAIs (la-mize, like llamas) to really specify what they are.

            And to their point, I think the ‘intelligence’ in the modern wave of AI is severely lacking. There is no reasoning or learning, just a brute-force fuzzy training pass that remains fixed at a specific point in time, and that only approximates what an intelligent actor would respond with by referencing massive amounts of “correct response” data. I’ve heard AGI bandied about as the thing people really meant when they said AI a few years ago, but I’m kind of hoping the AI term stops being watered down with this nonsense. ML is ML; it’s wrong to say that it’s a subset of AI when AI has its own separate connotations.

            • FaceDeer
              1 point · 2 years ago

              LLaMA models are already a common type of large language model.

              but I’m kind of hoping the AI term stops being watered down with this nonsense.

              I’m hoping people will stop mistaking AI for AGI and quit complaining about how it’s not doing what they imagined that they were promised it would do. I also want a pony.

              • @shy@reddthat.com
                -1 point · 2 years ago

                You appear to have strong opinions on this, so it’s probably not worth arguing further, but I disagree with you completely. If people are mistaking it, that is because the term is being used improperly, as the very language of the two words does not apply. AGI didn’t even gain traction as a term until recently, when people who were actually working on strong AI had to figure out a way to continue communicating about what they were doing, because AI had lost all of its original meaning.

                Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…

                • FaceDeer
                  2 points · 2 years ago

                  Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…

                  Take a look around Hugging Face, LLaMA models are everywhere. They’re a very popular base model because they’re small and have open licenses.

                  You’re complaining about ambiguous terminology, and your proposal is to use LLMAIs (pronounce like llamas) as the general term for the thing that LLaMAs (pronounced llamas) are? That’s not particularly useful.

          • Orphie Baby
            -3 points · edited · 2 years ago

            If you haven’t noticed, the people we’re arguing with— including the pope and James Cameron— are people who think this generative pseudo-AI and a Terminator are the same thing. But they’re not even remotely similar, or remotely-similarly capable. That’s the problem. If you want to call them both “AI”, that’s technically semantics. But as far as pragmatics goes, generative AI is not intelligent in any capacity; and calling it “AI” is one of the most confusion-causing things we’ve done in the last few decades, and it can eff off.

            • FaceDeer
              2 points · 2 years ago

              The researchers who called it AI were not the ones who are the source of the confusion. They’ve been using that term for this kind of thing for more than half a century.

              I think what’s happening here is that people are panicking, realizing that this new innovation is a threat to their jobs and to the things they had previously been told were supposed to be a source of unique human pride. They’ve been told their whole lives that machines can’t replace that special spark of human creativity, or empathy, or whatever else they’ve convinced themselves is what makes them indispensable. So they’re reduced to arguing that it’s just a “stochastic parrot”, it’s not “intelligent”, not really. It’s just mimicking intelligence somehow.

              Frankly, it doesn’t matter what they call it. If they want to call it a “stochastic parrot” that’s just mindlessly predicting words, that’s probably going to make them feel even worse when that mindless stochastic parrot is doing their job or has helped put out the most popular music or create the most popular TV show in a few years. But in the meantime it’s just kind of annoying how people are demanding that we stop using the term “artificial intelligence” for something that has been called that for decades by the people who actually create these things.

              Rather than give in to the ignorant panic-mongers, I think I’d rather push back a bit. Skynet is a kind of artificial intelligence. Not all artificial intelligences are skynets. It should be a simple concept to grasp.

              • Orphie Baby
                link
                fedilink
                English
                3
                edit-2
                2 years ago

                You almost had a good argument until you started trying to tell us that it’s not just a parrot. It absolutely is a parrot. In order to have creativity, it needs to have knowledge. Not sapience, not consciousness, not even “intelligence” as we know it— just knowledge. But it doesn’t know anything. If it did, it wouldn’t put 7 fingers on a damn character. It doesn’t know that it’s looking at and creating fingers, they’re just fucking pixels to it. It saw pixel patterns, it created pixel patterns. It doesn’t know context to know when the patterns don’t add up. You have to understand this.

                So in the end, it turns out that if you draw something unique and purposeful, with unique context and meaning— and that is preeeetty easy— then you’ll still have a drawing job. If you’re drawing the same thing everyone else already did a million times, AI may be able to do that. If it can figure out how to not add 7 fingers and three feet.

                • FaceDeer
                  link
                  fedilink
                  0
                  2 years ago

                  As I said, call it a parrot if you want, denigrate its capabilities, really lean in to how dumb and mindless you think it is. That will just make things worse when it’s doing a better job than the humans who previously ran that call center you’re talking to for assistance with whatever, or when it’s got whatever sort of co-writer byline equivalent the studios end up developing to label AI participation on your favourite new TV show.

                  How good are you at drawing hands? Hands are hard to draw, you know. And the latest AIs are actually getting pretty good at them.

        • @Prager_U@lemmy.world
          link
          fedilink
          English
          0
          2 years ago

          The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype reigning supreme.

          To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. Seems like it would instead be better to try to ensure that half-assed science journalism and science “popularizers” actually do their jobs.

    • @jeanma@lemmy.ninja
      link
      fedilink
      English
      -3
      2 years ago

      True, it’s not AI, but it does a quite impressive job. Injecting fake money should not be allowed, and these companies should have to generate sales — especially when they’re disrupting human fields, even if it is a fad.

      You can compete, sure, but use your own money and profits to cover your costs.

      Yeah, I know, there’s something called “investment”.

    • @Not_Alec_Baldwin@lemmy.world
      link
      fedilink
      English
      8
      2 years ago

      I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.

      There’s the concept of Artificial General Intelligence (AGI), or Artificial Consciousness, which people are somewhat obsessed with: the idea that we’ll create an artificial mind that thinks like a human mind does.

      But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.

      I expect AI to be a very similar monster.

      If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach; he did 3 amazing episodes with Lex Fridman.

      • Orphie Baby
        link
        fedilink
        English
        -5
        edit-2
        2 years ago

        Current “AI” doesn’t solve problems. It doesn’t understand context. It can’t see fingers and say “those are fingers, make sure there’s only five”. It can’t tell the difference between a truth and a lie. It can’t say “well that can’t be right!” It just regurgitates an amalgamation of things humans have shown it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.

        • magic_lobster_party
          link
          fedilink
          4
          2 years ago

          You’re confusing AI with AGI. AGI is the ultimate goal of AI research. AI are all the steps along the way. Step by step, AI researchers figure out how to make computers replicate human capabilities. AGI is when we have an AI that has basically replicated all human capabilities. That’s when it’s no longer bounded by a particular problem.

          You can use the more specific terms “weak AI” or “narrow AI” if you prefer.

          Generative AI is just another step along the way, just like the emergence of deep learning was a step some years ago. It can clearly produce stuff that previously only humans could make, which in this case is convincing texts and pictures from arbitrary prompts. It’s accurate to call it AI (or weak AI).

          • Orphie Baby
            link
            fedilink
            English
            -1
            2 years ago

            Yeah, well, “AGI” is not the end result of this generative crap. You’re gonna have to start over with something different one way or another. This simply is not the way.

          • Orphie Baby
            link
            fedilink
            English
            0
            edit-2
            2 years ago

            No? There’s a whole lot more to being human than being able to separate one object from another and identify it, recognize that object, and say “my database says that there should only be two of these in this context”. Current “AI” can’t even do that much-- especially not with art.

            Do you know what “sapience” means, by the way?

  • @traveler@lemdro.id
    link
    fedilink
    English
    20
    2 years ago

    Let this sink in: some companies got $100k from VCs when the project was pretty much just software that made API calls to ChatGPT.

    Obviously the bubble will burst.
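    To see how thin such a product can be, here’s a hypothetical sketch (the endpoint, model name, and response shape follow OpenAI’s publicly documented chat-completions API; the prompt and function names are invented for illustration): the entire “startup” is one canned prompt plus one HTTP call.

```python
import json
import urllib.request

# Publicly documented OpenAI chat-completions endpoint.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_text: str) -> dict:
    """The whole 'product': a canned prompt wrapped around the user's input.
    (System prompt is a made-up example.)"""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are a helpful marketing copywriter."},
            {"role": "user", "content": user_text},
        ],
    }

def ask(user_text: str, api_key: str) -> str:
    # One HTTP POST to OpenAI, then pull the text out of the response.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_text)).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

    Everything of substance — the model, the training, the inference hardware — lives on OpenAI’s side of that call.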

  • ShooBoo
    link
    fedilink
    7
    2 years ago

    They gimped it for the masses, but AI is going strong. There is no question about the power of GPT-4 and others. LLMs are just a part of the big picture.

    • @Provider@feddit.de
      link
      fedilink
      1
      2 years ago

      While you mean something different, I just want to add that the power of GPT-4 IS currently in question (compared to GPT-3 and older versions).

      • The AI iteration rate is fast; we can’t just assume AI intelligence won’t improve. GPT-4 recently saw a decrease in capability due to added rulesets, what they call the safety tax. But they also said GPT-5 could possibly be 100 times bigger.

  • SolNine
    link
    fedilink
    English
    15
    2 years ago

    How much VC money is really being invested at the moment? I know a variety of people at startups, and money is very tight right now given the current interest rate environment.

  • @JollyTheRancher@lemmy.world
    link
    fedilink
    English
    13
    2 years ago

    I read an article once about how, when humans hear that someone has died, the first thing they try to do is come up with a reason that whatever befell the deceased would not happen to them. Some of the time there was a logical reason, some of the time there wasn’t, but either way the person would latch onto the reason to believe they were safe. I think we’re seeing the same thing here with AI. People are seeing a small percentage of people lose their jobs to a technology that 95% of the world or more didn’t believe was possible a couple years ago, and they’re searching for reasons to believe that they’re going to be fine, and then latching onto them.

    I worked at a newspaper when the internet was growing, and I saw the same thing across the entire organization. So much of the staff believed the internet was a fad. This belief did not work out for them: they were a giant, and they were gone within 10 years. I’m not saying we aren’t in an AI bubble now, but there is now several orders of magnitude more money in the internet than there was during the dot-com bubble; just because it’s a bubble doesn’t mean it won’t eventually consume everything.

    • @KTVX94@lemmy.myserv.one
      link
      fedilink
      English
      6
      2 years ago

      The thing is, after enough digging you understand that LLMs are nowhere near as smart or as advanced as most people make them out to be. Sure, they can be super useful, and sure, they’re good enough to replace a bunch of human jobs, but rather than being the AI “once thought impossible” they’re just digital parrots that make a credible impersonation of it. The real AI, now renamed AGI, is still very far.

      • @garyyo@lemmy.world
        link
        fedilink
        English
        3
        2 years ago

        The real AI, now renamed AGI, is still very far

        The idea and name of AGI is not new, and “AI” has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times, though, since despite learning so much over the years we have no idea how hard AGI is going to be. As of right now, the correct answer to “how far away is AGI?” can only be “I don’t know.”

      • @JollyTheRancher@lemmy.world
        link
        fedilink
        English
        3
        2 years ago

        I am not sure they have to reach AGI to replace almost everyone. The amount of investment in them is now higher than it has ever been. Things are, and honestly have been, moving quickly. No, they are not as advanced as some people make them out to be, but I also don’t think the next steps are as nebulously difficult as some want to believe. But I would love it if you saved this comment and came back in 5 years to laugh at me; I would probably be pretty relieved as well.