- cross-posted to:
- technology@lemmy.world
Stack Overflow and Google were once the “AI” of the previous generation. “These kids can’t code, they just copy what others have done”
Yeah, and copy-pasting SO answers with no thought is just as bad.
And when copy-pasting didn’t work, those who dared to rise above and understand it became better. Same with AI: those of the new generation who see through the slop will learn. It’s the same as it has always been. Software engineering is more accessible than ever; say what you will about the current landscape, but that fact remains undeniable.
I’m glad that AI is making it easier to enter into new areas of knowledge. I just hope it won’t be used as a crutch too far into people’s journeys.
> Software engineering is more accessible than ever
This is key here. With it being more accessible, we see more people who don’t want to learn but still try to code. But we also see more people who want to learn and create solutions.
Well said. Some of the most talented devs I know use Stack Overflow. It depends on how you use it.
All the devs I know use SO…
True.
As someone who can’t code (not a developer) but occasionally needs to dip my toes into it, I’ve learned quite a bit by using ChatGPT and then picking apart whatever it shat out to figure out why it’s not working. It’s still better than me starting from scratch on whatever it is I’m working on, because usually I don’t even know where to begin.
I’m a little defeatist about it. I saw with my own 3 eyes how a junior asked ChatGPT how to insert something into an `std::unordered_map`. I tell them about cppreference. The little shit tells me “Sorry unc, ChatGPT is objectively more efficient”. I almost blew a fucking gasket, mainly cuz I’m not that god damn old. I don’t care how much you try to convince me that LLMs are efficient; there is no shot they are more efficient than opening a static page with all the info you would ever need. And that’s not even considering energy efficiency. Utility aside, the damage we have dealt to developing minds is irreversible. We have convinced them that thought is optional. This is gonna bite us in the ass. Hard.

It’s going to get worse. I suspect this’ll end with LLMs taking the part of production programs: juniors just feeding one scenarios to follow, hooking the thing up to a database and a web page, and letting it run. It’ll gobble power like there’s no tomorrow and be a nightmare to maintain, but it goes live in a quarter of the time, so every manager goes with that.
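For reference, the thing the junior asked about is a one-liner that the cppreference page documents directly. A minimal sketch (names invented for illustration):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> ages;

    ages.insert({"Alice", 30});  // no-op if the key already exists
    ages["Bob"] = 25;            // inserts or overwrites
    ages.emplace("Carol", 41);   // constructs the element in place

    for (const auto& [name, age] : ages)
        std::cout << name << ": " << age << '\n';
}
```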
I work at a software development school, and ChatGPT does a lot of damage here too. We try to teach that using it as a tool to help learning is different from using it as a “full project code generator”, but the speed advantage it provides makes it irresistible from many students’ perspective. I lost many students last year because they couldn’t pass a simple code exam (think FizzBuzz difficulty level) when they had no access to the internet and had to code in Emacs. We also can’t block access to it, because that starts an endless game where they always find a way to access it.
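For context, “FizzBuzz difficulty” means roughly the following; a minimal sketch in C++, though the students could pick any language:

```cpp
#include <iostream>

int main() {
    for (int i = 1; i <= 100; ++i) {
        if (i % 15 == 0)
            std::cout << "FizzBuzz\n";  // divisible by both 3 and 5
        else if (i % 3 == 0)
            std::cout << "Fizz\n";
        else if (i % 5 == 0)
            std::cout << "Buzz\n";
        else
            std::cout << i << '\n';
    }
}
```

That is the level of exercise students were failing without internet access.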
Damn, I forgot about the teaching aspect of programming. Must be hard. I can’t blame students for taking shortcuts when they’re almost assuredly swamped with other classwork and sleep-deprived, but still. This is where my defeatist comment comes in, because I genuinely think LLMs are here to stay. Like autocomplete, but dumber. Just gotta have students recognize when ChatGPT hallucinates solutions, I guess.
Might sound a bit unrelated, but have you been noticing an apparent rise in ageism too? Social media seems to be fueling it for some reason.
Make the junior put it to the test, John Henry style: you code something while they use GPT, and see who comes up with a working version first.
How is it more efficient than reading a static page? The kids can’t read. They weren’t taught phonics; they were taught to guess the word from context clues. It’s called “whole language” or “balanced literacy”.
Holy shit just like an LLM
Literacy rates are in severe decline in the US, and AI is only going to make that worse.
Over half of Americans between 16 and 74 read below a 6th-grade level (that’s below the expected reading level of an 11-year-old!).
We have the same problem with literacy here in Sweden. It’s unnerving to think that these kids will need to become doctors, lawyers and police officers in the future.
Sweden of all places? What happened in the last decade that Sweden is slowly losing its reputation as the country to follow on social issues?
This is only a guess, but it could be related to increased use of technology. Many things we interact with are simplified, and if you come across a word you don’t know, your phone can give you simple synonyms; if you can’t spell, autocorrect will catch it.
It’s the same problem people are talking about with LLMs, just through a different lens.
Of course, there are different opinions, but here’s my take (as a Swede, but not an expert in politics/history):
The issues didn’t start during the last decade. In the 90’s, it was politically decided that schools wouldn’t be nearly as centrally managed by the state as they had been, instead municipalities would handle most school-related politics and administration locally. It was also decided that parents are allowed to choose more freely where to send their kids. This weakened public schools. Moreover, legislation was introduced (in the 00’s I think but I’m not sure) that allows for-profit private schools, which historically AFAIK had been prohibited.
Parents usually don’t have to pay anything extra to send their kids to private schools, and for each private school pupil, tax money flows into the private schools instead of the public ones. The private schools are of course incentivized to attract children from well-off families, since those children tend to perform better (boosting the school’s score and thus reputation), have parents who can e.g. drive them from further away, and generally have fewer issues, so they cost less and complain less. For instance, it’s been reported that some private schools refuse (openly or through loopholes) e.g. special-needs pupils, since the tax money paid to the school for them isn’t worth the cost (and “bad PR”, no doubt) of actually giving them a proper education.
Sweden has also had a high rate of immigration in recent decades. Immigrant parents understandably tend not to be as savvy about the school system and have less time/resources for getting their kids into “nicer” schools further away. Immigrant kids also tend to require more attention, both because they need to learn Swedish and because psychological problems, e.g. PTSD, are more common among many immigrant groups. Also, I haven’t seen any studies on this, but IMO the private schools’ advertisements (on billboards etc.) tend to be very geared towards “white” kids/parents with no immigrant background.
In 2007 a tax benefit for “homework help” among other things was introduced, halving the price parents have to pay for private tutors at home. This again benefits families that are well off and lets private companies in education siphon tax money.
All this feeds the cycle of segregation seen in so many countries. Public schools are burdened with students who require more resources, while private schools do everything they can to snatch up low-maintenance pupils. This makes private schools seem to perform better and gives public schools bad reputations. Racism and class discrimination also play into all this, of course.
It also doesn’t help that teachers’ salaries and social standing have decreased, partly due to the same general patterns.
This degradation of the public school system has continued under both left-wing and right-wing governments, though it has often accelerated under right-wing governance. For instance, the Social Democrats were the party that pushed in the 90’s for shifting responsibilities from the state to the municipalities. There is an ever-growing issue with corruption across the political spectrum (but worst/most blatant on the right), where it has become quite common for politicians to push for decisions that benefit private companies, then retire from politics and join said companies’ boards etc.
Thanks for the reply. I’ve seen those patterns as well; kinda sad.
I don’t think phonics are the most critical part of why the kids can’t read.
It’s well documented that people who primarily read books and documents read thoroughly, line by line and with understanding, while those who primarily read from screens (such as social media) skip and skim to find certain keywords. This makes reading books (or documentation) hard for those used to screens from a young age, and some believe it may be one of the driving forces behind the collapse of reading among young people.
If you’re used to the skip & skim style of reading, you will often miss details, which makes finding a solution in a manual infinitely frustrating.
It’s not that phonics is integral; rather, if reading is a guessing game, that’s just one more barrier to reading. So they read less, and what they do read they skim, potentially ignoring unfamiliar words.
Skip & skim could also stem from the fact that this is how a mind used to ever-present ads reads. It’s like an ad blocker built into your brain.
Really? My kids are hitting the rules hard. In 1st grade, they’re learning pronunciation rules I never learned (that’s phonics, right?). My 2nd grader is reading the 4th Harry Potter book, and my 5th grader finished the whole series in 3rd grade and is reading at a 7th or 8th grade level.
I did teach them to read before kindergarten (just used a book for 2-3 months of 10-minute lessons), but that’s it; everything else is school and personal interest. They can both type reasonably well because they use the Minecraft console and chat. They’re great at puzzles, and my 5th grader beat me at chess (I tried a wonky opening, and he punished me), which they learned at school (extracurricular, but run by a teacher).
We love our charter school, though I don’t think it’s that different from the public school.
I work in a small company that hardly hires at all… Stories like this scare me because I have no way to personally quantify how common that kind of attitude might be.
Look, ultimately the problem is the same as it has always been: juniors doing junior shit. There’s just more of it going on. If you’re hiring one, you put a senior on them ready to extinguish fires. A good review process is a must.
Now that I think about it, there was this one time the same young’un I was talking about tried to commit this insane subroutine that was basically resizing a vector in the most roundabout way imaginable. It probably would have worked, but you can also just use the `resize` method, y’know? In retrospect, that was probably some Copilot bullshit, but because we have a review process in place, it was never an issue.
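To illustrate the gap (a hypothetical reconstruction, not the actual commit):

```cpp
#include <vector>

// Roughly the roundabout shape of it: build a bigger vector by hand
// and copy the old elements over one at a time.
std::vector<int> grow_roundabout(const std::vector<int>& v, std::size_t n) {
    std::vector<int> bigger;
    bigger.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        bigger.push_back(i < v.size() ? v[i] : 0);
    return bigger;
}

// What the standard library already provides:
void grow_simple(std::vector<int>& v, std::size_t n) {
    v.resize(n);  // value-initializes (zeroes) any newly added ints
}
```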
I am not a professional coder, just a hobbyist, but I am increasingly digging into cybersecurity concepts.

And even as an “amateur cybersecurity” person, everything about what you describe, and about LLM coders, terrifies me, because that shit is never going to have any proper security methodology implemented.
On the bright side, you might be able to cash in on some bug bounties.
One can classify attitudes toward progress in at least four popular ways:

- The dumbest, most clueless jerks think it’s replacing something known with something known and better. Progress enthusiasts who don’t know a single thing about the areas they’re enthusiastic about usually sit here.
- The careful and somewhat intellectually limited think it’s replacing something known with something unknown. They can sour the mood, but they’re generally safe for those around them.
- The idealistic idiots think it’s replacing something unknown with something known; these are the “order bringers” and revolutionaries. Everybody knows how revolutionaries do things, and whoever doesn’t can look at Musk and DOGE.
- The only sane kind think it’s replacing something unknown with something unknown. That is, when you replace one thing with another, you break not only what you could see and had listed for replacement. Because nature doesn’t fscking care what you want to see.
I honestly don’t know how anyone’s been able to code anything predominantly using AI that’s production worthy.
Maybe it’s the way I’m using AI, and to be honest I’ve only used ChatGPT so far, but if I ask it to generate a bit of code, then ask it to build on it and do the next thing, by about the third or fourth iteration it’s forgotten half of what we talked about and left out bits of code.
On a number of occasions it’s given me a solution, and when I question its accuracy and point out why part of it probably won’t work, I just get “oh yes, let me adjust that for you”.
Maybe I’m doing AI wrong, I don’t know, but quite frankly I’ll stick with Stack Overflow, thanks.
I’ve frankly only used those to generate pictures and sometimes hello-worlds for a few languages, which didn’t work and didn’t seem to make sense. It was long enough ago.

Also, I have ASD, so it’s hard enough for me to make consistent, clear sense of something small. Machine-generated junk to give me ideas is the last thing I need; my thought process is different.
You have to aggressively purge the current chat and give it more abstract references for context. With enough context it can rewrite some logic loops, maybe start a design pattern. You just have to aggressively check the changes.
Feels like it would be quicker and easier just to write the code myself at that point…
It’s only useful for stuff that’s been done a million times before in my experience. As soon as you do anything outside of that, it just starts hallucinating.
It’s basically like how junior devs used to go to Stack Overflow, grab whatever code looked like it would work, and just plop it into the codebase.
I remember talking to someone about where LLMs are and aren’t useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn’t been trained on.
The other person insisted that that is exactly what LLMs are great at. They wouldn’t explain how exactly the LLM was supposed to know how my company’s internal software, which is a trade secret, is structured.
But hey, I figured I’d give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions… on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.
Oddly enough, that’s not what my company’s ASDIS is. It’s almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.
Yeah, and the way it will confidently give you a wrong answer instead of either asking for more information or saying it just doesn’t know is equally annoying.
Because giving answers is not an LLM’s job. An LLM’s job is to generate text that looks like an answer. And we then try to coax that into generating correct answers as often as possible, with mixed results.
This is exactly right. AI can only interpolate between data points. I used to write code for research papers, and ChatGPT couldn’t understand a thing I asked of it.
Recently my friend was trying to get me to apply for a junior dev position. “I don’t have the right skills,” I said. “The biggest project I ever coded was a calculator for my Java final, in college, a decade and a half ago.”
It did not occur to me that showing up without the skills and using a LLM to half ass it was an option!
No wonder open source software is becoming more efficient than proprietary software.
The problem is not only the coding but the thinking. The AI revolution will give birth to a lot more people without critical thinking and problem solving capabilities.
Apart from that, learning programming went from something one does out of a calling to something one does to get a job. The percentage of programmers who actually like coding is going down, so on average they’re going to be worse.
This is true for all of IT. I love IT; I’ve been into computers for 30+ years. I run a small homelab, and it’ll always be a hobby as well as a career. But yeah, for more and more people it’s just a job.
That’s the point.
Along with censorship.
What are you guys working on where ChatGPT can figure it out? Honestly, I haven’t been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.
ChatGPT is perfect for learning Delphi.
I’ve been using (mostly) Claude to help me write an application in a language I’m not experienced with (Rust). Mostly it helps me see what I did wrong with the syntax or the borrow checker. Coming from Java, Python, and C/C++, it’s very easy to manage memory in exactly the ways Rust forbids.
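A classic instance of that mismatch, as a sketch of my own (not from the comment above): the C++ below compiles fine but hides a use-after-free, and it’s exactly the pattern the borrow checker rejects at compile time.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};

    int& first = v.front();      // keep a reference into the vector
    v.push_back(4);              // may reallocate, leaving `first` dangling
    std::cout << first << '\n';  // undefined behavior in C++

    // The Rust equivalent fails to compile: you can't mutate the vector
    // while an immutable borrow of its contents is still alive.
}
```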
That being said, any new code it generates for me I end up having to fix 9 times out of 10. So in a weird way I’ve been learning more about Rust by having to correct code generated by an LLM.
I still think that, for the next while, LLMs will mostly be useful as a hyper spell-checker for code, not for generating new code. I often find that I would have saved time if I had just tackled the problem myself and not tried to rely on an LLM. Although sometimes an LLM can give me an idea of how to solve a problem.
Lately I have been using it for React code. It seems to be fairly decent at that. As a consequence, when it does not work I get completely lost, but despite this I think I have learned more with it than I would have without.
Agreed. I wanted to test a new config on my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me, instead of me spending 3 hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.
It did not do well at all. The scripts were mostly correct but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I’m on 7). It will also make mistakes, and when I point out the mistake it says “Oh, you are totally right, I made a mistake”, then goes on to explain what mistake it made and outputs new code. However, more often than not the new code contains the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.
In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that, and it almost worked right away. A couple of minutes of tweaking and testing and I got it working.
I’m afraid of a future where forums and such don’t exist and sources like Reddit get fucked and nuked. In an AI-driven world the incentive for creating new original content is way lower. So when AI doesn’t know the answer, you’re just hooped and have to re-invent the wheel yourself. In the long run this will destroy productivity and not deliver the gains people are hoping for at the moment.
It’s like useful information grows as fruit from trees in a digital forest we call the Internet. However, the fruit spoils over time (becomes less relevant) and requires fertile soil (educated people being online) that can be eroded away (not investing in education or infrastructure) or paved over (intellectual property law). LLMs are like processed food created in factories that lack key characteristics of more nutritious fresh ingredients you can find at a farmer’s market. Sure, you can feed more people (provide faster answers to questions) by growing a monocrop (training your LLM on a handful of generous people who publish under Creative Commons licenses like CC BY-SA on Stack Overflow), but you also risk a plague destroying your industry like how the Panama disease fungus destroyed nearly all Gros Michel banana farming (companies firing those generous software developers who “waste time” by volunteering to communities like Stack Overflow and replacing them with LLMs).
There’s some solar punk ethical fusion of LLMs and sustainable cultivation of high quality information, but we’re definitely not there yet.
To extend your metaphor: be the squirrel in the digital forest. Compulsively bury acorns for others to find in time of need. Forget about most of the burial locations so that new trees are always sprouting and spreading. Do not get attached to a single trunk; you are made to dance across the canopy.
> This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.
The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it’s going to degrade the output.
What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge. You don’t need that haystack ground down to dust to be handed a needle-shaped piece of crap with slightly more iron than average.
When I had to get up to speed on a new language, it was very helpful. It’s also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for making Ansible tasks. That said, I’ve been programming for ~30 years and could have done those things myself if I needed to, but it would take some time (a lot of it looking up documentation and writing boilerplate code).
It’s also nice for writing C# unit tests.
However, the times I’ve been stuck on my main languages, it’s been utterly useless.
ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
Yeah, I’m not even that down on using LLMs to search through and organize text they were trained on. But in its current iteration? It’s fancy Stack Overflow, but Stack Overflow runs on like 6 servers. I’ll be setting up some self-hosted LLM stuff to play around with, but I’m not ditching my brain’s ability to write software any time soon.
I love asking AI to generate a framework/structure for a project that I then barely use, and then realizing I shoulda just done it myself.
I’m forced to use Copilot at work, and as far as code completion goes, it gets it right 10-15% of the time; the rest of the time it just suggests random, credible-looking noise or hallucinates variables and shit.
Forced to use copilot? Wtf?
I would quit, immediately.
> I would quit, immediately.
Pay my bills. Thanks.
I’ve been dusting off the CV, for multiple other reasons.

how surprising! /s
but seriously, it’s almost never one (1) thing that goes wrong when some idiotic mandate gets handed down from management.
a manager that mandates use of copilot (or any tool unfit for a given job) is a manager that’s going to mandate a bunch of other nonsensical shit that gets in the way of work. every time.
It’s an at-scale company; the orders came from way above. As did RTO after 2 years fully at home, etc, etc.
Same. It can generate credible-looking code, but I don’t find it very useful. Here’s what I’ve tried:
- describe a function - takes longer to read the explanation than to grok the code
- generate tests - hallucinates arguments, doesn’t do proper boundary checks, etc
- looking up docs - mostly useful to find search terms for the real docs
The second was kind of useful since it provided the structure, but I still replaced 90% of it.
I’m still messing with it, but beyond solving “blank page syndrome,” it’s not that great. And for that, I mostly just copy something from elsewhere in the project anyway, which is often faster than going to the LLM.
I’m really bad at explaining what I want, because by the time I can do that, it’s faster to just build it. That said, I’m a senior dev, so I’ve been around the block a bit.
I used it a few days ago to translate a math formula into code.
Here is the formula: https://wikimedia.org/api/rest_v1/media/math/render/svg/126b6117904ad47459ad0caa791f296e69621782
It’s not the most complicated thing, and I could have done it myself, but it would have taken me some time. I just input the formula directly plus the desired language, and the result was well done and worked flawlessly.

It saved me some typing and some searching online.
Has anyone else clicked the chat.com URL in the article …
That’s the point of being a junior. Then problems show up, and those force them to learn to solve them.
To me, this feels like a problem perpetuated by management. I see it on the system administration side as well: they don’t care if people understand why a tool works; they just want someone who can run it. If there’s no free thought, the people are interchangeable and easily replaced.
I often see it farmed out to vendors when actual thought is required, and it’s maddening.
i always found this to be upsetting as an IT tech at a former company - when a network or server had an issue and i was sent to resolve it, it was a “just reboot it” fix, which never kept the problem from recurring and bringing the server down at 07:00 the next Monday.
the limitations on the questions i could ask hurt that SLA more than any network switch’s memory leak ever did, and i felt as if my expertise meant nothing as a result.
All I hear is “I’m bad at mentoring”
And some sort of “no one wants to work any more”.
I know brilliant young people; maybe they just have to be paid properly?
There is only so much mentoring can do, though. You can have the best math prof; you still need to work through the exercises and solve your differential equations to get good at it.
You get out of education what you put into it. You won’t be an artist from the best art school if you do the bare minimum to pass. You can end up as a legend of the industry coming from a noname school.
This isn’t a new thing. Dilution of “programmer” and “computer” education has been going on for a long time. Everyone with an IT certificate is an engineer these days.
For millennials, a “dev” was pretty much anyone with reasonable intelligence who wanted to write code - it is actually very easy to learn the basics and fake your way into it with no formal education. Now we are even moving on from that to where a “dev” is anyone who can use an AI. “Prompt Engineering.”
“Prompt Engineer” makes a little vomit appear in the back of my mouth.
I could barely code when I landed my job, and now I’m a senior dev. It’s like saying a plumber’s apprentice can’t plumb; you learn on the job.
You’re not learning anything if Copilot is doing it for you. That’s the point.
That’s true; it can only get you so far. I’m sure we all started by Frankenstein-ing Stack Overflow answers together until we had to actually learn the “why”.
100% agree.
I don’t think there’s no place for AI as an aid to help you find the solution, but I don’t think it’s going to help you learn if you just ask it for the answers.
For example, yesterday I was trying to find out why a policy map on a Cisco switch wasn’t re-activating after my RADIUS server came back up. Instead of throwing my map at the AI and asking what’s wrong, I asked it for details about how a policy map is activated, what mechanism the switch uses to determine the status of the RADIUS server, and how a policy map can leverage that to kick into gear again.
Ultimately, AI didn’t have the answer, but it put me on the right track, and I believe I solved the issue. It seems the switch didn’t count me adding the RADIUS server to the running config as a server coming back alive, but if I put in a fake server and then altered its IP to the real server’s, the switch saw that as the server coming back alive and authentication started again.
In fact, some of the info it gave me along the way was wrong. Like when it tried to give me CLI commands that I already knew wouldn’t work, because I was using the newer C3PL AAA commands and it was mixing them up with the legacy commands and combining them together. Even after I told it that it was a made-up command and why it wouldn’t work, it still tried to give me the same command again later.
So, I don’t think it’s a good tool for producing actual work, but it can be a good tool to help us learn things, if it’s used that way: to ask “why” and “how” instead of “what”.