Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • @Duamerthrax@lemmy.world

    Next you’ll tell me half the population has below average intelligence.

    Not really endorsing LLMs, but some people…

  • @kromem@lemmy.world

    Wow. Reading these comments, so many people here really don’t understand how LLMs work or what’s actually going on at the frontier of the field.

    I feel like there’s going to be a cultural sonic boom, where when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.

  • 👍Maximum Derek👍

    Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

    So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.

  • TrackinDaKraken

    Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.

    • ZephyrXero

      My new mental model for LLMs is that they’re like genius 4-year-olds. They have huge amounts of information, yet little to no wisdom about what to do with it or how to interpret it.

  • @notsoshaihulud@lemmy.world

    I’m 100% certain that LLMs are smarter than half of Americans. What I’m not so sure about is whether the people with the insight to admit being dumber than an LLM are the ones who really are.

  • @aceshigh@lemmy.world

    Don’t they reflect how you talk to them? I.e., my ChatGPT doesn’t have a sense of humor and isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I do trial and error with.

  • Traister101

    While this is pretty hilarious, LLMs don’t actually “know” anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps “words” to other “words” so a computer can work with language. That is, all an LLM knows is that when it sees “I love”, what probably comes next is “my mom”, “my dad”, etc. Because of this behavior, and because we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs happen to be mostly okay at “answering” a question; really they are just picking the next most likely word over and over from their training, which usually ends up reasonably accurate.
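    A toy sketch of that next-word loop, assuming a made-up bigram frequency table in place of a real trained model (the words, counts, and function names here are invented for illustration; real LLMs use neural networks over subword tokens and typically sample rather than always taking the top word):

    ```python
    # Made-up bigram counts: how often each word followed another in some
    # imaginary training text. In a real LLM these scores come from a neural
    # network, not a lookup table.
    bigram_counts = {
        "I":    {"love": 8, "think": 5, "am": 3},
        "love": {"my": 9, "it": 4},
        "my":   {"mom": 6, "dad": 5, "dog": 2},
    }

    def most_likely_next(word: str) -> str | None:
        """Return the word that most often followed `word`, or None if unknown."""
        options = bigram_counts.get(word)
        if not options:
            return None
        return max(options, key=options.get)  # greedy: highest count wins

    def generate(start: str, max_words: int = 5) -> str:
        """Repeatedly append the most likely next word to build a sentence."""
        out = [start]
        for _ in range(max_words):
            nxt = most_likely_next(out[-1])
            if nxt is None:
                break
            out.append(nxt)
        return " ".join(out)

    print(generate("I"))  # -> "I love my mom"
    ```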

  • @CalipherJones@lemmy.world

    AI is essentially the human superid. No one man could ever be more knowledgeable. Being intelligent is a different matter.

      • @Donkter@lemmy.world

        It’s semantics. The difference between an LLM and “asking” Wikipedia a knowledge question is that the LLM will “answer” you with predictive text. Both things contain more knowledge than you do, as in they have answers to more trivia and test questions than you ever will.

        • @Waraugh@lemmy.dbzer0.com

          I guess I can see that; maybe my understanding of the words or their implication is incorrect. While I would agree they contain more knowledge, that reads differently to me than being more knowledgeable. Maybe it comes across as anthropomorphizing a dataset of information to me. I could easily be wrong.

  • @fubarx@lemmy.world

    “Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin