Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
I believe LLMs are smarter than half of US adults
Next you’ll tell me half the population has below average intelligence.
Not really endorsing LLMs, but some people…
pathologically stupid, and still wrong. yes.
Wtf is an llm
Large language model. It’s what all these AIs really are.
Ah, gotcha. I avoid ai like the plague, so I wasn’t familiar. Thank you.
Wow. Reading these comments so many people here really don’t understand how LLMs work or what’s actually going on at the frontier of the field.
I feel like there’s going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.
AKA “future shock”
Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.
So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.
or they’re dumb enough to be correct.
That’s a bingo
Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.
My new mental model for LLMs is that they’re like genius 4-year-olds. They have huge amounts of information, and yet have little to no wisdom about what to do with it or how to interpret it.
I’m 100% certain that LLMs are smarter than half of Americans. What I’m not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.
A daily bite of horror.
That’s called a self-proving statement.
Don’t they reflect how you talk to them? E.g., my ChatGPT doesn’t have a sense of humor, isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I test by trial and error.
Hallucination comes off as confidence. Very human-like behavior tbh.
I should be more confident when communicating my hallucinations, it humanizes me.
While this is pretty hilarious, LLMs don’t actually “know” anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps “words” to other “words” so a computer can process language. I.e., all an LLM knows is that when it sees “I love,” what probably comes next is “my mom,” “my dad,” etc. Because of this behavior, and because we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are mostly okay at “answering” a question essentially by chance. Really they’re just picking the next most likely word, over and over, based on their training, which usually ends up reasonably accurate.
AI is essentially the human superid. No one man could ever be more knowledgeable. Being intelligent is a different matter.
Is stringing words together really considered knowledge?
As much as a search engine is
Does it have agency?
Then yes.
It’s semantics. The difference between an LLM and “asking” Wikipedia a knowledge question is that the LLM will “answer” you with predictive text. Both things contain more knowledge than you do, as in they have answers to more trivia and test questions than you ever will.
I guess I can see that; maybe my understanding of the words or their implication is off. While I would agree they contain more knowledge, that reads differently to me than being more knowledgeable. To me it comes across as anthropomorphizing a dataset of information. I could easily be wrong.
I have a new word for you: information
If they’re strung together correctly then yeah.
“Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin
For anyone wondering.
I’m starting to think an article referring to LLMs as AI is a red flag, while referring to them as LLMs is a green flag.
Always has been
*moron opens encyclopedia* “Wow, this book is smart.”
If it’s so smart, why is it just laying around on a bookshelf and not working a job to pay rent?