Please remove this if it’s not allowed.
I see a lot of people in here who get mad at AI-generated code and I am wondering why. I wrote a couple of bash scripts with the help of chatGPT and, if anything, I think it’s great.
Now, I obviously didn’t tell it to write the entire thing by itself. That would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.
I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don’t know anything about Bash’s syntax. Now, I could have used any other language I knew, but chose Bash because it made the most sense: Bash is shipped with most Linux distros out of the box and one does not have to install another interpreter/compiler for another language. I don’t like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity in Bash before, just a bunch of commands on multiple separate lines so that I don’t have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash, I just wanted to put my idea into action.
I did start with an internet search, but the guides I found were lacking. I could not find how to easily pass values into a function and return from one, or remove a trailing slash from a directory path, or loop over an array, or catch errors that occurred in a previous command, or separate the letters and numbers in a string, etc.
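For anyone stuck on the same things, these are the standard Bash idioms for each of those tasks (a quick sketch, not taken from the OP’s actual scripts):

```shell
#!/usr/bin/env bash

# Pass values into a function and "return" a result via stdout.
add() {
    local a=$1 b=$2
    echo $(( a + b ))
}
sum=$(add 2 3)            # sum is now 5

# Remove a trailing slash from a directory path.
dir="/home/user/docs/"
dir="${dir%/}"            # /home/user/docs

# Loop over an array.
files=(one two three)
for f in "${files[@]}"; do
    echo "$f"
done

# React to a command failing (non-zero exit status).
if ! mkdir /root/forbidden 2>/dev/null; then
    echo "mkdir failed" >&2
fi

# Separate the letters and digits in a string like "sda1".
s="sda1"
letters="${s//[0-9]/}"    # sda
digits="${s//[^0-9]/}"    # 1
```

These are all plain parameter-expansion and exit-status idioms from the Bash manual, exactly the kind of thing that is hard to search for if you don’t already know the terminology.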
That is where chatGPT helped greatly. I would ask chatGPT to write these pieces of code whenever I encountered them, then test its code with various inputs to see if it worked as expected. If not, I would ask again, telling it which case failed, and it would revise the code before I put it in my scripts.
Thanks to chatGPT, someone with zero knowledge of Bash can quickly and easily write Bash that is fairly advanced. I don’t think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to chatGPT I can just write all this quickly and forget about it. If I ever want to learn Bash properly and am motivated, I will certainly take the time to learn it the right way.
What do you think? What negative experience do you have with AI chatbots that made you hate them?
When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let’s say you’re tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT is probably going to give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This will get the job done, but it’s slow, inefficient, and generally not well-suited for production code. An experienced programmer is going to take into account possible edge cases (what if a color is out of the 0-255 bounds?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array or can we write back to the input array?), etc.
ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of “rubber ducky” debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that’s just not true. Sure, ChatGPT can produce code that “works,” but it will fail at edge cases and will generally be inefficient and slow.
Exactly. LLMs may replace interns and junior devs, but they won’t replace senior devs. And if we replace all of the interns and junior devs, who is going to become the next generation of senior devs?
As a senior dev, a lot of my time is spent reviewing others’ code, doing pair-programming, etc. Maybe in 5-10 years, I could replace a lot of what they do with an LLM, but then where would my replacement come from? That’s not a great long-term direction, and it’s part of how we ended up with COBOL devs making tons of money because financial institutions are too scared to port it to something more marketable.
When I use LLMs, it’s like you said, to get hints as to what options I have. I know it’s sampling from a bunch of existing codebases, so having the LLM go figure out what’s similar can help. But if I ask the LLM to actually generate code, it’s almost always complete garbage unless it’s really basic structure or something (i.e. generate a basic web server using <framework>), but even in those cases, I’d probably just copy/paste from the relevant project’s examples in the docs.
That said, if I had to use an LLM to generate code for me, I’d draw the line at tests. I think unit tests should be hand-written so we at least know the behavior is correct given certain inputs. I see people talking about automating unit tests, and I think that’s extremely dangerous and akin to “snapshot” tests, which I find almost entirely useless, outside of ensuring schemas for externally-facing APIs are consistent.
deleted by creator
Now, I obviously didnt tell it to write the entire code by itself. […]
I am fairly competent in writing programs.
Go ahead using it. You are safe.
Because most people on Lemmy have never actually had to write code professionally.
I use it as a time-saving device. The hardest part is spotting when it’s not actually saving you time, but costing you time in back-and-forth over some little bug. I’m often better off fixing it myself when it gets stuck.
I find it’s just like having another developer to bounce ideas off. I don’t want it to produce 10k lines of code at a time, I want it to be digestible so I can tell if it’s correct.
Basically this: Flying Too High: AI and Air France Flight 447
Description
Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done for.
Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…
In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn’t - and that the consequences of asking the wrong question are disastrous.
I know about this crash and don’t see the connection. What’s the argument?
I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.
The theme that I would highlight here though:
More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.
But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.
How will you be ready for that if you don’t get practice along the way?
I’ve got a lot of listening lined up, but I’ll add it to the list. I hope they offer some statistics. Flying has become a lot safer over the years; I’d be surprised if automation works against that trend rather than for it.
Thanks for the answer.
It gives a false sense of security to beginner programmers and doesn’t offer the more tailored solution that a more practiced programmer might create. This can lead to a reduction in code quality and can introduce bugs and security holes over time. If you don’t know the syntax of a language, how do you know it didn’t offer you something dangerous? I have copilot at work, and the only things I actually accept its suggestions for right now are writing log statements and populating argument lists. While both still require review, they are generally faster than me typing them out. Most of the rest of what it gives me is undesired: it’s either too verbose, too hard to read, or just does something else entirely.
One point that stands out to me is that when you ask it for code it will give you an isolated block of code to do what you want.
In most real world use cases though you are plugging code into larger code bases with design patterns and paradigms throughout that need to be followed.
An experienced dev can take an isolated code block that does X and refactor it into something that fits in with the current code base etc, we already do this daily with Stackoverflow.
An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it’s testable, etc.
So anyway I don’t see a problem with the tool, it’s just like using Stackoverflow, but as we have seen businesses and inexperienced devs seem to think it’s more than this and can do their job for them.
The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into chatgpt, which replied (extremely slowly) with some unrelated garbage.
I asked him to let me take the keyboard, typed “sql server string length” into Google, saw LEN in the excerpt from the first result, and went on to do what I’d wanted to do, while in another tab chatgpt was still spewing nonsense.
LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they’re extremely good at convincing their users that they know what they’re doing and what they’re talking about.
That causes the people using them to blindly copy their useless buggy code (code that, even if it worked and wasn’t incomplete and full of bugs, would be intended to solve a completely different problem, since users are incapable of properly asking for what they want, and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone’s time and teaching them nothing.
Not that blindly copying from stack overflow is any better, of course, but stack overflow or reddit answers come with comments and alternative answers that, if you read them, will go a long way toward telling you whether the code you’re copying will work for your particular situation or not.
LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you’d do given different commented answers.
They’ll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out without any hints why it’s not.
I can feel that frustrated look when someone uses chatGPT for such a tiny reason
This is my big concern…not that people will use LLMs as a useful tool. That’s inevitable. I fear that people will forget how to ask questions and learn for themselves.
Exactly. Maybe you want the number of Unicode code points in the string, or perhaps the byte length of the string. It’s unclear which an LLM would give you, but the docs clearly state what that length function is measuring.
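To make the ambiguity concrete, here’s how those two notions of “length” come apart in a shell (the string is just an illustrative example):

```shell
# Two different notions of a string's "length".
s="héllo"

# ${#s} counts characters, which matches code points in a UTF-8 locale
# (5 here); in the C locale it falls back to counting bytes instead.
echo "character count: ${#s}"

# wc -c counts bytes: é is two bytes in UTF-8, so this prints 6.
printf %s "$s" | wc -c
```

Same string, two different answers depending on what “length” means, which is exactly the kind of distinction documentation spells out and an LLM may gloss over.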
Use LLMs to come up with things to look up in the official docs, don’t use it to replace reading docs. As the famous Russian proverb goes: trust, but verify. It’s fine to trust what an LLM says, provided you also go double check what it says in more official docs.
I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines. I feel that with an LLM I can give it the wider context and it figures out exactly the sort of thing I’m trying to find. It’s even more useful when trying to understand a complex error message you haven’t seen before.
That being said, LLMs are not where my searching ends. I check to see where they got the information from so I can read the actual source and not what they’ve conjured up.
I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines
Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.
That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take. And where an LLM would simply give me a single answer (which I’d then have to test and fix), a search engine gives me multiple commented answers which I can compare and learn from.
I remembered another example: I was checking a pull request and it wouldn’t compile; the programmer had apparently used an obscure internal function to check if a string was empty instead of `string.IsNullOrWhitespace()`.

(In C#, `internal` means “I designed my classes wrong and I don’t have time to redesign them from scratch; this member should be `private` or `protected`, but I need to access it from outside the class hierarchy, so I’ll allow other classes in the same assembly to access it, but not ones outside of the assembly”; similar use case as `friend` in C++; it’s used a lot in standard .NET libraries.)

Now, that particular `internal` function isn’t documented practically anywhere, and being `internal` it can’t be used outside its particular library, so it wouldn’t pop up in any example the coder might have seen… but .NET is open source, and the library’s source code is on GitHub, so chatgpt/copilot has been trained on it, and that’s where the coder must have gotten it from.

The thing, though, is that LLMs, being essentially statistics engines that just pop up the most statistically likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is `internal`. Or `private`, or `protected`, for that matter. That function is used in the code they’ve been trained on to check if a string is empty, so they’re just as likely to output it as `string.IsNullOrWhitespace()` or `string.IsNullOrEmpty()`.

Hell, `if(condition)` and `if(!condition)` are probably also equally likely in most places… and I for one don’t want to have to debug code generated by something that can’t tell those apart.

If you know what you need to find, then yeah, search engines are still good. But as a tool for discovery they’re massively shit now. You often need to be super specific to get what you want, and at that point you almost already know it; you just need a reminder.
Are search engines worse than they used to be?
Definitely.
Am I still successfully using them several times a day to learn how to do what I want to do (and to help colleagues who use LLMs instead of search engines learn how to do what they want to do once they get frustrated enough to start swearing loudly enough for me to hear them)?
Also yes. And it’s not taking significantly longer than it did when they were less enshittified.
Are LLMs a viable alternative to search engines, even as enshittified as they are today?
Fuck, no. They’re slower, they’re harder and more cumbersome to use, their results are useless on a good day and harmful on most, and they give you no context or sources to learn from, so best case scenario you get a suboptimal partial buggy solution to your problem which you can’t learn anything useful from (even worse, if you learn it as the correct solution you’ll never learn why it’s suboptimal or, more probably, downright harmful).
If search engines ever get enshittified to the point of being truly useless, the alternative isn’t LLMs. The alternative is to grab a fucking book (after making sure it wasn’t defecated by an LLM), like we did before search engines were a thing.
Cool, I’ll just try to find the book I need from among the millions and millions of books.
I haven’t got an issue with reading books and whatnot. For coding specifically I always prefer to read documentation. But if I don’t know what’s needed for my current use case and search isn’t helping, I’m not going to know where to begin. LLMs at least give me a jumping-off point. They are not my be-all and end-all.
Discoverability of new tools and libraries via search is awful. Through LLMs, it’s passable to point you in the right direction.
Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code. The other group who opposes it is those who think it will kill the programmer job.
There is an enormous difference between:

`rm -rf / path/file`

vs.

`rm -rf /path/file`
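A defensive sketch that guards against exactly this class of mistake (the helper name is made up for illustration):

```shell
# Hypothetical wrapper: refuse empty or root paths before handing them to rm,
# and use `--` so a path starting with "-" isn't parsed as an option.
safe_rm() {
    case "$1" in
        ""|"/") echo "refusing to remove '$1'" >&2; return 1 ;;
    esac
    rm -rf -- "$1"
}
```

Quoting the argument (`"$1"`) also means a stray space inside a variable stays one argument instead of silently becoming `rm -rf /` plus a second path.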
Many lazy programmers may just copy paste without thinking too much about the quality of generated code
Tbf, they’ve been doing that LONG before AI came along
Sure, but if you’re copying from stack overflow or reddit and ignore the dozens of comments telling you why the code you’re copying is wrong for your use case, that’s on you.
An LLM on the other hand will confidently tell you that its garbage is perfect and will do exactly what you asked for, and leave you to figure out why it doesn’t by yourself, without any context.
An inexperienced programmer who’s willing to learn won’t fall for the first case and will actually learn from the comments and alternative answers, but will be completely lost if the hallucinating LLM is all they’ve got.
I agree AI is a godsend for non coders and amateur programmers who need a quick and dirty script. As a professional, the quality of code is oftentimes 💩 and I can write it myself in less time than it takes to describe it to an AI.
AI is a godsend for non coders and amateur programmers who need a quick and dirty script.
Why?
I mean, it is such a cruel thing to say.
50% of these poor non coders and amateur programmers would end up with a non-functioning script. I find it so unfair!
You have not even tried to decide who deserves and gets the working solution and who gets the garbage script. You are soo evil…
I think you’ve hit the nail on the head. I am not a coder, but using chatGPT I was able to take someone else’s simple program and modify it for my own needs within just a few hours of work. It’s definitely not perfect, and you still need to put in some work to get your program to run exactly the way you want it to, but chatGPT is a good place to start for beginners, as long as they understand that it’s not a magic tool.
i love it when the AI declares and sets important sounding variables it then never uses 🙄
I think the process of explaining what you want to an AI can often be helpful. Especially given the number of times I’ve explained things to junior developers and they’ve said they understood completely, but then when I see what they wrote they clearly didn’t.
Explaining to an AI is a pretty good test of how well the stories and comments are written.
A lot of people spent many, many nights wasting away learning some niche arcane knowledge, and now they’re freaking out that a kid out of college can do what they can with a cool new machine. Maybe not fully what they do, but 70% of the way there, and that makes them so hateful. They’ll pull out all these articles and studies, but they’re just afraid to face the reality that their time and life were wasted and how unfair life can be.
Who hurt you?
I have been there, wasted learning stupid things I will never need to know.
Coders are gonna get especially screwed by AI, compared to other industries that were disrupted by leaps in technology.
Look at auto assembly. Look at how many humans used to be involved in that process. Now a lot of the assembly is performed by robotics.
The real sad part is that there’s tons of investment (in terms of time and in terms of money) to become a skilled programmer. Any idiot can read a guide on Python and throw together some functional scripts, but programming isn’t just writing lines of code. That code comes from tons of experience, experiments, and trial and error.
At least auto workers had unions though. Coders don’t have that luxury. As a profession it really had its big boom at a time when people had long since been trained to be skeptical of them.
I don’t think it’s the same at all. Building code isn’t the same as building physical vehicle parts. All it’ll mean is that any company that uses strictly AI will be beaten by a company using AI plus developers, because the developers will just add AI as another tool in their toolbox for developing code.
As a cybersecurity guy, it’s things like this study, which said:
Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
FWIW, at this point, that study would be horribly outdated. It was done in 2022, which means it probably took place in early 2022 or 2021. The models used for coding have come a long way since then, the study would essentially have to be redone on current models to see if that’s still the case.
The people’s perceptions have probably not changed, but whether the code is actually insecure would need to be reassessed.
I think it’s more appalling because they should have assumed this was early tech and therefore less trustworthy. If anything, I’d expect more people to believe their code is secure today using AI than back in 2021/2022 because the tech is that much more mature.
I’m guessing an LLM will make a lot of noob mistakes, especially in languages like C(++) where a lot of care needs to be taken for memory safety. LLMs don’t understand code, they just look at a lot of samples of existing code, and a lot of code available on the internet is terrible from a security and performance perspective. If you’re writing it yourself, hopefully you’ve been through enough code reviews to catch the more common mistakes.
Sure, but to me that means the latest information is that AI assistants help produce insecure code. If someone wants to perform a study with more recent models to show that’s no longer the case, I’ll revisit my opinion. Until then, I’m assuming that the study holds true. We can’t do security based on “it’s probably fine now.”
Also if you are interested, here are those scripts I wrote with chatGPT
Lol this comment is the opposite of the code it’s trying to describe, https://gitlab.com/cy_narrator/lukshelper/-/blob/main/luksCreate.sh?ref_type=heads#L107
Changed and added a feature to check if volume exists before override
deleted by creator
I’ve found it to be extremely helpful in coding. Instead of trying to read huge documentation pages, I can just have a chatbot read it and tell me the answer. My coworker has been wanting to learn Powershell. Using a chatbot, his understanding of the language has greatly improved. A chatbot can not only give you the answer, but it can break down how it reached that conclusion. It can be a very useful learning tool.
It’s great for regurgitating pre-written text. For generating new or usable code it’s largely useless. It doesn’t have an actual understanding of what it says. It can recombine information and elements it’s seen before, but it can’t generate anything truly unique.
That isn’t what the comment you replied to was talking about so that’s why you’re getting downvoted even though some of what you said is right.
The first sentence addressed what they talked about. It’s great as an assistant to cut through documentation to get at what you need. In fact, here’s a recent video from Perry Fractic doing just that with microtext for the C64.
Anything else like having it generate the code itself, it’s more of a liability than an asset. Since it doesn’t really understand what its doing.
Perhaps I should have separated the two thoughts initially? Either way I’ve said my piece.
Top-level comment is talking about using it for learning. Saying that AI is just regurgitating text doesn’t address that at all. In fact, it sounds like you were putting down the commenter for using it for learning.
The bulk of your comment was about how poorly it writes code which isn’t what that comment was talking about. At all. So yes, I agree, you should have separated your two thoughts and probably focused the second thought on a different thread within this post. Perhaps at the top level to say it to the OP.
I’ve been using it for CLI syntax and code for a while now. It’s not always right but it definitely helps in getting you almost all the way there when it doesn’t. I will continue to use it 😁
It’s really useful to quickly find the parameters to convert something in a specific way using ffmpeg.
Hell yeah it is. So much faster than reading the man pages and stuff
When was it wrong? I’m curious how wrong it was and which AI assistant you asked.
ChatGPT, all versions. I don’t know exactly; I use it a lot and I just know it’s been wrong. PowerShell comes to mind. And Juniper SRX syntax. And Alcatel.
Yes, me too. You can often ask it to explain something to a layman and it provides a pretty easy-to-follow explanation.
Is the explanation accurate?
It could be, in a monkeys with typewriters sort of way… 🤷♂️
but chose Bash because it made the most sense: Bash is shipped with most Linux distros out of the box and one does not have to install another interpreter/compiler for another language.
Last time I checked (because I was writing Bash scripts based on the same assumption), Python was actually present on more Linux systems out of the box than Bash.