Pump and dump. That’s how the rich get richer.
Why won’t they pour billions into me? I’d actually put it to good use.
I’d be happy with a couple hundos.
I’d be happy with a big tiddy goth girl. Jealous of your username btw.
Worst case scenario, I don’t think money spent on supercomputers is the worst way to spend money. That in itself has pushed chip design and development forward. Not to mention AI is already invaluable in a lot of science research. Invaluable!
I went to CES this year and sat on a few AI panels. This is actually not far off. Some panels said yeah, this is right, but multiple panels I went to said this is a dead end, and that while useful, they are starting down different paths.
It’s not bad, we’re just finding it’s not great.
Meanwhile a huge chunk of the software industry is now heavily using this “dead end” technology 👀
I work at a pretty massive tech company (think the type that frequently acquires smaller companies and absorbs them).
Everyone I know here is using it. A lot.
However my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, how to get good value out of it, and what pitfalls it can have.
So yeah turns out if you teach your employees how to use a tool, they start using it.
I’d say LLMs have made me about 3x as efficient or so at my job.
It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are too expensive. There are two ways for this to go (or an eventual combination of both):
- Investors believe LLMs are going to get better and they keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.
- Investments stop pouring in, the bubble bursts and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.
You skipped possibility 3, which is actively happening:
Advancements in tech enable us to produce results at a much much cheaper cost
Which is happening with diffusion-style LLMs that simultaneously cost less to train and cost less to run, but also produce faster and better-quality outputs.
That’s a big part people forget about AI: it’s a feedback loop of improvement as soon as you can start using AI to develop AI
And we are past that mark now, most developers have easy access to AI as a tool to improve their performance, and AI is made by… software developers
So you get this loop where as we make better and better AIs, we get better and better at making AIs with the AIs…
It’s incredibly likely the new diffusion AI systems were built with AI assisting in the process, enabling them to make a whole new tech innovation much faster and easier.
We are now in the uptick of the singularity, and have been for about a year now.
Same goes for hardware: it’s very likely now that Nvidia has incorporated AI into its production process, using it for micro-optimizations in its architectures and designs.
And then those same optimized gpus turn around and get used to train and run even better AIs…
In 5-10 years we will look back on 2024 as the start of a very wild ride.
Remember we are just now in the “computers that take up entire warehouses” step of the tech.
Remember that in the 80s, a “computer” cost a fortune, took tonnes of resources, multiple people to run it, took up an entire room, was slow as hell, and could only do basic stuff.
But now, 40 years later, they fit in our pockets and are (no hyperbole) billions of times faster.
I think by 2035 we will be looking at AI as something mass produced for consumers to just put in their homes: you go to Best Buy and compare different AI boxes to pick which one you’re gonna get for your home.
We are still at the stage of people in the 80s looking at computers and pondering “why would someone even need to use this, why would someone put one in their house, let alone their pocket”
I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”
All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features into all existing products, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain, but for now the level of hype is tremendous!
Good, fire 2 devs out of 3.
Companies that do this will fail.
Successful companies respond to this by hiring more developers.
Consider the taxi cab driver:
With the invention of the automobile, cab drivers could do their job way faster and way cheaper.
Did companies fire drivers in response? God no. They hired more.
Why?
Because they became more affordable, less wealthy clients could now afford their services which means demand went way way up
If you can do your work for half the cost, demand usually goes up by way more than 2x, because as you go down in wealth levels of target demographics, your pool of clients grows exponentially.
If I go from “it costs me 100k to make you a website” to “it costs me 50k to make you a website” my pool of possible clients more than doubles
Which means… you need to hire more devs asap to start matching this newfound level of demand
If you fire devs when your demand is about to skyrocket, you fucked up bad lol
I remember having this optimism around tech in my late twenties.
deleted by creator
I think the human in the loop currently needs to know what the LLM produced or checked, but they’ll get better.
For sure, much like how a cab driver has to know how to drive a cab.
AI is absolutely a “garbage in, garbage out” tool. Just having it doesn’t automatically make you good at your job.
The difference between someone who can wield it well and someone who has no idea what they are doing is palpable.
Your labor before they had LLMs helped pay for the LLMs. If you’re 3x more efficient and not also getting 3x more time off for the labor you put in previously for your bosses to afford the LLMs you got ripped off my dude.
If you’re working the same amount and not getting more time to cool your heels, maybe, just maybe, your own labor was exploited and used against you. Hyping how much harder you can work just makes you sound like a bitch.
Real “tread on me harder, daddy!” vibes all throughout this thread. Meanwhile your CEO is buying another yacht.
This is how all tech innovation has gone. If you don’t let the bosses exploit your labour someone else will.
If tech had unions this wouldn’t happen as much, but that’s why they don’t really exist.
I am indeed getting more time off for PD
We delivered on a project 2 weeks ahead of schedule so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid on the clock.
Some companies are indeed pretty cool about it.
I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.
At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there’s actual financial incentive for me to perform way better.
You also are ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time, but spend all that freed up time instead focusing on quality, polishing the product up, documentation, double checking my work, testing, etc.
Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine tooth comb twice to make sure we didn’t miss anything.
And instead of mad rushing 8 hours straight, it’s just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I’m literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.
That’s very cool.
It’ll be interesting to see how it goes in a year’s time, maybe they’ll have raised their expectations and tightened the deadlines by then.
The thing is, the tech keeps advancing too so even if they tighten up deadlines, by the time they did that our productivity also took another gearshift up so we still are some degree ahead.
This isn’t new, in software we have always been getting new tools to do our jobs better and faster, or produce fancier results in the same time
This is just another tool in the toolbelt.
That sounds so cool! I’m glad you’re getting the benefits.
I’m only wary that the cash-making machine will start tightening the ropes on the free time and the deadlines.
Are you a software engineer? Without doxxing yourself, do you think you could share some more info or guidance? I’ve personally been trying to integrate AI code gen into my own work, but haven’t had much success.
I’ve been able to ask ChatGPT to generate some simple but tedious code that would normally require me read through a bunch of documentation. Usually, that’s a third party library or a part of the standard library I’m not familiar with. My work is mostly Python and C++, and I’ve found that ChatGPT is terrible at C++ and more often than not generates code that doesn’t even compile. It is very good at generating Python by comparison, but unfortunately for me, that’s only like 10% of my work.
For C++, I’ve found it helpful to ask misc questions about the design of the STL or new language features while I’m studying them myself. It’s not actually generating any code, but it definitely saves me some time. It’s very useful for translating C++’s “standardese” into English, for example. It still struggles to generate valid code using C++20 or newer, though.
I also tried a few local models on my GPU, but haven’t had good results. I assume it’s a problem with the models I used not being optimized for code, or maybe the inference tools I tried weren’t using them right (oobabooga, kobold, and some others I don’t remember). If you have any recommendations for good coding models I can run locally on a 4090, I’d love to hear them!
I tried using a few of those AI code editors (mostly VS Code plugins) years ago, and they really sucked. I’m sure things have improved since then, so maybe that’s the way to go?
I primarily use GPT style tools like ChatGPT and whatnot.
The key is, rather than asking it to generate code, specify that you don’t want code and instead want it to help you work through the solution. Tell it to ask you meaningful questions about your problem and effectively act as a rubber duck.
Then, after you’ve chosen a solution with it, ask it to generate code based on all the above convo.
This will typically produce way higher quality results and helps avoid potential X/Y problems.
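For what it’s worth, the two-phase workflow above can be sketched as message scaffolding for any OpenAI-style chat API. The prompt wording, function names, and message format here are just my own illustration of the idea, not anything official:

```python
# A minimal sketch of the "rubber duck first, code last" workflow:
# phase 1 forbids code and asks the model to interrogate the problem,
# phase 2 requests code grounded in the whole preceding conversation.

NO_CODE_SYSTEM_PROMPT = (
    "Do not write any code yet. Act as a rubber duck: ask me meaningful "
    "questions about my problem, one at a time, until we have agreed on "
    "a solution. Only produce code when I explicitly ask for it."
)

def start_rubber_duck_session(problem: str) -> list[dict]:
    """Build the opening messages for a no-code-yet design discussion."""
    return [
        {"role": "system", "content": NO_CODE_SYSTEM_PROMPT},
        {"role": "user", "content": problem},
    ]

def request_code(history: list[dict]) -> list[dict]:
    """Once a solution is agreed on, ask for code based on the whole convo."""
    return history + [
        {
            "role": "user",
            "content": "Now generate the code based on everything we discussed above.",
        },
    ]
```

The payoff is that by the time code generation happens, the conversation already contains the constraints and trade-offs you surfaced together, which is what helps avoid the X/Y problem.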
I will say that I am genuinely glad to hear your business is giving you breaks instead of breaking your backs.
Good let them waste all their money
This is slightly misleading. Even if you can’t achieve “agi” (a barely defined term anyways) it doesn’t mean AI is a dead end.
LLMs are good for learning, brainstorming, and mundane writing tasks.
Yes, and maybe finding information right in front of them, and nothing more
Analyzing text from a point of view different from your own. I call that a “synthetic second opinion”.
The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
deleted by creator
It’s not a dead end if you replace all big-name search engines with this. Then slowly replace real results with your own. Then it accomplishes something.
I used to support an IVA cluster. Now the only thing I use AI for is voice controls to set timers on my phone.
That’s what I did on my Samsung Galaxy S5 a decade ago.
I use ChatGPT daily in my business, but I use it more as a guide than a real replacement.
There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.
It doesn’t matter if they reach any end result, as long as stocks go up and profits go up.
Consumers aren’t really asking for AI, but it’s being used to push new hardware and make previous hardware feel old. Eventually everyone has AI on their phone, most of it unused.
If enough researchers talk about the problems then that will eventually break through the bubble and investors will pull out.
We’re at the stage of the new technology hype cycle where it crashes, essentially for this reason. I really hope it does soon because then they’ll stop trying to force it down our throats in every service we use.
Misleading title. From the article,
Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.
In no way does this imply that the “industry is pouring billions into a dead end”. AGI isn’t even needed for industry applications, just implementing current-level agentic systems will be more than enough to have massive industrial impact.