Won't this make AI results even more inaccurate? While I'm rooting against shitty AI usage, the general population still trusts it, and making the results worse will most likely make people believe even more wrong stuff.
The article says it’s not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.
Edit, quoting the article:
It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.
if you’re dumb enough to trust a large language model because someone told you “iTs Ai!”, no amount of facts will be of great utility to you.
That take would be more digestible if I wasn’t stuck on the same planet as those people.
im saying they want to be lied to. it would be disrespectful to offer them the truth.
Until the AI generating the content starts hallucinating.
Thank you for catching that. Even reading through again, I couldn’t find it while skimming. With the mention of X2 and RSS, I assumed that paragraph would just be more technical description outside my knowledge. Instead, what I homed in on was
“No real human would go four links deep into a maze of AI-generated nonsense.”
Leading me to be pessimistic.
and the data for the LLM is now salted with procedural garbage. it’s great!
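In case “procedural garbage” sounds abstract, here’s a toy sketch of what a link maze like that could look like (my own illustration, not the actual tool’s code; the filler facts and all names are made up for this example). Every URL deterministically generates a page of true-but-useless text plus links one level deeper, so a crawler that follows links just keeps descending while nothing is ever stored on disk:

```python
# Toy sketch of a procedural link maze (illustrative only, not the real tool).
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Accurate but useless filler; a real deployment would want a much larger corpus.
FACTS = [
    "Octopuses have three hearts.",
    "Honey never spoils if it stays sealed.",
    "Venus takes longer to rotate once than to orbit the Sun.",
    "An adult human body contains roughly 0.2 milligrams of gold.",
]

def render_page(path: str) -> str:
    # Seed the RNG with the path so the same URL always yields the same page;
    # to a scraper the maze looks like a huge static site, but it's generated on demand.
    rng = random.Random(hashlib.sha256(path.encode()).digest())
    body = " ".join(rng.choice(FACTS) for _ in range(5))
    links = "".join(
        f'<p><a href="{path.rstrip("/")}/{rng.randrange(10**6)}">related reading</a></p>'
        for _ in range(3)
    )
    return f"<html><body><p>{body}</p>{links}</body></html>"

class MazeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        html = render_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(html)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MazeHandler).serve_forever()
```

Because each page is seeded by its own URL, there’s no database to fill up and no end to the maze; the scraper’s training set fills up instead.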
If you’re dumb enough and care little enough about the truth, I’m not really going to try coming at you with rationality and sense. I’m down to do an accelerationism here. fuck it. burn it down.
remember: these companies all run at a loss. if we can hold them off for a while, they’ll stop getting so much investment.
The problem I see with poisoning the data is AIs being trained for law enforcement hallucinating false “facts” that get used to arrest and convict people.
Law enforcement AI is a terrible idea and it doesn’t matter whether you feed it “false facts” or not. There’s enough bias in law enforcement that the data is essentially always poisoned.
They aren’t poisoning the data with disinformation.
They’re poisoning it with accurate, but irrelevant information.
For example, if a bot is crawling sites about computer programming or weather, this tool might lure the crawler into pages about animal facts or human biology.
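Roughly something like this, as a minimal sketch of the “accurate but irrelevant” idea (the topic names and fact pools here are purely illustrative, not from the article): map whatever topic the crawler came for to a pool of true statements about something unrelated.

```python
# Illustrative sketch: serve true statements from a pool deliberately
# unrelated to the topic the crawler is actually scraping.
import random

DECOY_POOLS = {
    "animal facts": [
        "A group of crows is called a murder.",
        "Sea otters hold hands while they sleep so they don't drift apart.",
        "Octopuses have three hearts and blue blood.",
    ],
    "human biology": [
        "An adult human skeleton has 206 bones.",
        "Red blood cells live for roughly 120 days.",
        "The small intestine is about six meters long.",
    ],
}

# Steer each site topic toward an unrelated pool of true statements.
SITE_TOPIC_TO_DECOY = {
    "computer programming": "animal facts",
    "weather": "human biology",
}

def decoy_paragraph(site_topic: str, sentences: int = 3) -> str:
    """Return true statements that have nothing to do with the topic
    the crawler actually came for."""
    pool = DECOY_POOLS[SITE_TOPIC_TO_DECOY.get(site_topic, "animal facts")]
    return " ".join(random.sample(pool, k=min(sentences, len(pool))))

if __name__ == "__main__":
    print(decoy_paragraph("computer programming"))
```

Nothing served is false, so it isn’t misinformation; it’s just worthless to a model trying to learn about programming or weather.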
that’s the entire point of laws, though, and it was already being used for that.
giving the laws better law stuff will not improve them. the law is malevolent. you cannot fix it by offering to help.
Law enforcement doesn’t convict anyone; that’s a judge’s job. If a LEO falsely arrests you, you can sue them, and it should be pretty open-and-shut if it’s due to an AI hallucination. Enough of that and LEOs will stop doing it.
More likely they’ll just remove your ability to sue them, if you’re talking about the USA and many other countries.