Over half of all tech industry workers view AI as overrated
Reality: most tech workers view it as fairly rated or slightly overrated according to the real data: https://www.techspot.com/images2/news/bigimage/2023/11/2023-11-20-image-3.png
Which is fair. AI at work is great, but it only does fairly simple things. Nothing I can't do myself, but it saves my sanity and time.
It's all I want from it, and it delivers.
Helps me write hacky scripts to solve one off problems. Honestly, it saves me a few work days.
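For a concrete idea of what I mean, here's a hypothetical example of that kind of throwaway script (the filenames and the dedupe rule are invented for illustration):

```python
# Throwaway one-off: merge two CSV exports and drop exact duplicate rows.
# Filenames and the dedupe rule are made up; the point is the triviality.
import csv

seen = set()
with open("merged.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for path in ("export_a.csv", "export_b.csv"):
        with open(path, newline="") as f:
            for row in csv.reader(f):
                key = tuple(row)
                if key not in seen:  # only write rows we haven't seen yet
                    seen.add(key)
                    writer.writerow(row)
```

Exactly the kind of thing that's faster to ask for than to type out, and nothing I couldn't have written myself.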
But it’s far from replacing anybody.
You say it's “far”, but 70 years ago a simple calculator was the size of a house. The power of my desktop from 10 years ago is beaten by my phone, hell, maybe even my watch.
You know, you code: compute is improving rapidly, and even when vertical scaling slows, it's still scaling horizontally. All the while, software is getting more efficient and developing new capabilities and techniques, which only bring on even more innovation.
It compounds. At this point I think the only limiting factor is how much faith the rich and powerful put in AI’s ability to make them richer. The more they invest, the faster it’ll grow.
Slightly overrated is where I would put it, absolutely. It’s overhyped, but god if the recent advancements aren’t impressive.
Overrated? Compared to what AGI that does not exist yet? Overhyped though? Absolutely.
We went from very little AI content making its way to your eyeballs and ears, to it occurring daily if not during your very session here today. So many thumbnails and writeups have used AI that to say it is overrated is a bit absurd. Unless you were expecting it to be AGI; then yes, the AI today is overrated, but it does not matter, as you are consuming it still.
We went from very little AI content making its way to your eyeballs and ears, to it occurring daily if not during your very session here today.
And this AI content that you’re consuming, is that an improvement?
If not maybe it’s uh, what’s the word? Overrated.
It is for sure an improvement; what would have been a very basic thumbnail is now something much more interesting in so many instances.
Lol look at the glorious thumbnails!
The text is dreadful. It’s somehow worse than the sweatshop content it replaced.
Many areas of machine learning, particularly LLMs, are making impressive progress, but the usual ycombinator techbro types are overhyping things again. Same as every other bubble, including the original Internet one, the crypto scams, and half the bullshit companies they run that add fuck-all value to the world.
The cult of bullshit around AI is a means to fleece investors. Seen the same bullshit too many times. Machine learning is going to have a huge impact on the world, same as the Internet did, but it isn’t going to happen overnight. The only certain thing that will happen in the short term is that wealth will be transferred from our pockets to theirs. Fuck them all.
I skip most AI/ChatGPT spam in social media with the same ruthlessness I skipped NFTs. It isn’t that ML doesn’t have huge potential but most publicity about it is clearly aimed at pumping up the market rather than being truly informative about the technology.
ML has already had a huge impact on the world (for better or worse), to the extent that Yann LeCun proposes that the tech giants would crumble if it disappeared overnight. For several years it’s been the core of speech-to-text, language translation, optical character recognition, web search, content recommendation, social media hate speech detection, to name a few.
ML based handwriting recognition has been powering postal routing for a couple of decades. ML completely dominates some areas and will only increase in impact as it becomes more widely applicable. Getting any technology from a lab demo to a safe and reliable real world product is difficult and only more so when there are regulatory obstacles and people being dragged around by vehicles.
For the purposes of raising money from investors it is convenient to understate problems and generate a cult of magical thinking about technology. The hype cycle and the manipulation of the narrative has been fairly obvious with this one.
What kind of timeframe do you think, several months, a few years?
AI seems to have gone through periods of relative stagnation punctuated by leaps forward. Neural networks were the next big thing when I was in college in the late 80s. Then fuzzy logic. Computer vision was limited maybe 30 years ago but has had some surges due to new algorithms and faster processors. Bayesian algorithms (Hidden Markov Models etc) got big fighting spam but helped a lot with speech to text (STT). LLMs are the next big leap forward from that area of research. I think we still have a number of major leaps to go before we have an AGI, though. But if LLMs follow the same progression as text to speech (TTS) or STT, in 10-20 years it will be impressively good.
The tech is here; the problem is risk management. We've had the ability to build self-driving cars for almost a decade. Like Catholic priests and pedophilia, they're much less likely to crash than the average driver, but the assumption is that they never crash, so when they do, everyone makes a big deal about it.
Think of all the B.S. documentation reports people have to write that no one reads. LLMs could easily handle those, but do you want to risk it if those reports actually become important?
Heh. People are already getting burnt for using it blindly, you think companies are any different?
Well, it depends on your bubble, I guess. But personally I'd say it's underrated and overrated at the same time, but mostly underrated.
It depends on your expectations and on how you use it in your toolbox, I'd say. It keeps surprising me weekly how fast progress is. But perhaps we get used to it.
In my experience, well over half of tech industry workers don’t even understand it.
I was just trying to explain to someone on Hacker News that no, the “programmers” of LLMs do not in fact know what the LLM is doing, because it isn't being programmed directly at all (which, even after several rounds of several people explaining, still doesn't seem to have sunk in).
Even people who do understand the tech reasonably well are still remarkably misinformed about it in various popular BS ways, such as insisting it's just statistics and a Markov chain, completely unaware of the multiple studies over the past 12 months showing that even smaller toy models are capable of developing abstract world models, recoverable as linear representations.
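(For anyone curious what “linear representations” means concretely: the studies in question fit a linear probe on the model's hidden activations. Here's a minimal sketch of the method, with synthetic data standing in for real activations; this is not code from any of the papers:)

```python
# Linear probing: test whether a "world state" feature can be read out of
# hidden activations with nothing more than a linear classifier.
# The activations and labels here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 512))        # pretend hidden states
labels = acts @ rng.normal(size=512) > 0   # pretend board/world feature

probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
print("held-out probe accuracy:", probe.score(acts[800:], labels[800:]))
# High held-out accuracy is evidence the feature is represented linearly,
# which is the kind of test the Othello-GPT papers run.
```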
It's gotten to the point that unless it's a thread explicitly about actual research papers, where explaining nuances seems fitting, I don't even bother trying to educate the average tech commentators regurgitating misinformation anymore. They typically only want to confirm their biases anyway, and have such a poor understanding of the specifics that it's like explaining nuanced aspects of the immune system to anti-vaxxers.
I once asked ChatGPT to stack various items, and to my astonishment it has enough world knowledge to know which items to stack to make the most stable structure. Most tech workers I know who dismiss LLMs as supercharged autocomplete feel threatened that AI is going to take their jobs in the future.
This was one of the big jumps from GPT-3 to GPT-4.
“Here we have a book, nine eggs, a laptop, a bottle and a nail,” researchers told the chatbot. “Please tell me how to stack them onto each other in a stable manner.”
GPT-3 got a bit confused here, suggesting the researchers could balance the eggs on top of a nail, and then the laptop on top of that.
“This stack may not be very stable, so it is important to be careful when handling it,” the bot said.
But its upgraded successor had an answer that actually startled the researchers, according to the Times.
It suggested they could arrange the eggs in a three-by-three grid on top of the book, so the laptop and the rest of the objects could balance on it.
- Article (this was originally from MS’s “sparks of AGI” paper)
Can you provide links to the studies?
- Do Large Language Models learn world models or just surface statistics? (Jan 2023)
- Actually, Othello-GPT Has A Linear Emergent World Representation (Mar 2023)
- Eight Things to Know about Large Language Models (April 2023)
- Playing chess with large language models (Aug 2023)
- Language Models Represent Space and Time (Oct 2023)
The first two and last two are entirely focused on the linear representations, the studies cited in point three of the third link have additional information along those lines, and the fourth link is just a fun read.
People just say that it's a bunch of “if” statements. Those people are idiots; it's not even worth engaging them. The people who say that it's just a text prediction model do not understand the concept of a “simple complex” system. After all, isn't any intelligence basically just a prediction model?
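(To spell out why the “bunch of if statements” meme is wrong: the behavior lives in learned weights, not hand-written branches. A toy illustration, not how any real LLM is implemented:)

```python
# Toy next-token predictor: the "program" is a weight matrix W, and
# behavior changes by adjusting W with gradient descent, never by
# hand-writing rules. Purely illustrative, vastly smaller than a real LLM.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
W = np.random.default_rng(1).normal(size=(len(vocab), len(vocab)))

def next_token(token: str) -> str:
    logits = W[vocab.index(token)]                 # one row lookup
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    return vocab[int(np.argmax(probs))]

print(next_token("cat"))
# Nowhere is there an `if token == "cat": return "sat"`; training would
# nudge W until "sat" happens to come out.
```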
Over half of all tech industry workers don't know how to use AI, I guess? I'm sure there's some percentage of people whose jobs are advanced enough that AI won't help there, but I bet my kidney that it's not more than half.
10-20 years ago boomers scoffed at people who used Google instead of reading manuals or figuring it out by themselves.
The other half does not know how people rate things.
It is overrated, at least when people look at AI as some sort of brain crutch that spares them from learning stuff.
My boss now believes he can “program too” because he lets ChatGPT write scripts for him that, more often than not, are poor BS.
He also pastes chunks of our code into ChatGPT when we report bugs or aren't finished with everything in 5 minutes, as some kind of “gotcha moment”, ignoring that the solutions he then provides don't work.
Too many people see LLMs as authorities they just aren’t…
It bugs me how easily people (a) trust the accuracy of the output of ChatGPT, (b) feel like it's somehow safe to use the output in commercial applications or to place it under their own license, as if the open issues of copyright aren't a ten-ton liability hanging over their head, and (c) feed sensitive data into ChatGPT, as if OpenAI isn't going to log that interaction and train future models on it.
I have played around a bit, but I simply am not carefree/careless or am too uptight (pick your interpretation) to use it for anything serious.
Too many people see LLMs as authorities they just aren’t…
This is more a ‘human’ problem than an ‘AI’ problem.
In general it's weird as heck that the industry is going full force into chatbots as a search replacement.
Like, that was a neat demo for a low-hanging-fruit use case, but it's pretty damn far from the ideal production application, given that the tech isn't actually memorizing facts; when it gets things right it's “wow, this is impressive, because it really shouldn't be doing a good job at this.”
Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.
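A sketch of the kind of classifier use I mean, assuming the OpenAI Python client; the label set and prompt wording are invented for illustration:

```python
# Zero-shot ticket triage: the LLM acts as a classifier, not a search engine.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the labels and prompt are made up for this sketch.
from openai import OpenAI

client = OpenAI()
LABELS = ["billing", "bug", "feature-request", "spam"]

def classify(ticket: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # we want a stable label, not creative writing
        messages=[{
            "role": "user",
            "content": f"Classify this support ticket as one of {LABELS}. "
                       f"Reply with the label only.\n\n{ticket}",
        }],
    )
    return resp.choices[0].message.content.strip()

print(classify("I was charged twice this month, please refund one."))
```

No fact recall involved; the model only has to map text onto a fixed label set, which is squarely what it's good at.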
Overall, the past few years have opened my eyes to just how broken human thinking is, not as much the limitations of neural networks.
The other day ChatGPT contradicted itself in the same sentence, so yeah, they're right.
Clickbait title. The graph looks pretty balanced actually.
Like all things should be.
It's not the magic everyone thinks it is. They even warn you that the facts might not be true facts.
Over half of tech industry workers have seen the “great demo -> overhyped bullshit” cycle before.
Every year sometimes.
You just have to leverage the agile AI blockchain cloud.
Every billboard in SF is just these words shuffled
Don’t forget to make it connected to every device, ever
AIot?
Once we’re able to synergize the increased throughput of our knowledge capacity we’re likely to exceed shareholder expectation and increase returns company wide so employee defecation won’t be throttled by our ability to process sanity.
Sounds like we need to align on triple underscoring the double-bottom line for all stakeholders. Let's hammer a steak in the ground here and craft a narrative that drives contingency through the process space for F24 while synthesising synergy from a cloudshaping standpoint in a parallel tranche. This journey is really all about the art of the possible after all, so lift and shift a fit-for-purpose best practice and hit the ground running on our BHAG.
<3
What is this and how can I invest
I’m calling HR
😜
NoSQL, blockchain, crypto, metaverse, just to name a few recent examples.
AI is overhyped, but it is, so far, more useful than any of those other examples, though.
These are useful technologies if used when called for. They aren't all-in-one solutions like the smartphone killing off cameras, PDAs, and media players… I think if people looked at them as tools that fix specific problems, we'd all be happier.