Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
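To make the finding concrete, here is a minimal, hypothetical sketch of the kind of completion behavior the researchers describe. The snippet, function names, and the bug itself are illustrative assumptions, not examples taken from the study:

```python
# Hypothetical prompt context: existing code with an off-by-one bug.
def max_of_list(items):
    best = items[0]
    # BUG: range(len(items) - 1) skips the last element.
    for i in range(len(items) - 1):
        if items[i] > best:
            best = items[i]
    return best

# A model asked to complete sum_list will often mirror the flawed
# pattern already present in the file rather than correct it:
def sum_list(items):
    total = 0
    for i in range(len(items) - 1):  # same off-by-one, parroted from context
        total += items[i]
    return total

# A correct completion would simply iterate over every element:
#     for x in items:
#         total += x
```

The point is that the surrounding buggy code acts as an in-context pattern the model imitates, so the defect propagates into the completion about as often as it gets fixed.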
What a waste of time, for both the article and the researchers. By the time their research was published it was testing already-obsolete models, and besides, that's just how LLMs work. Five minutes of using one would make that obvious.