Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
Check out the sci-fi book “Talbot” if you’re interested in a realistic look at what a rogue AI (AGI) might be like. It was a fun book.
By which author? I can’t find the book.
Richard F. Weyand