Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. slavery’s positives.
The only reason Google and MS are running their search engines through an LLM is to better pipe adverts and keep you on-service.
You know, unless we teach more critical thinking, AI is going to destroy us as a civilization in a few generations.
I’m more worried that the happy, educated citizen stops being an asset and is disconnected from society’s money flow.
Every country will soon turn into a “banana republic” and big businesses will eventually own everything.
Ouch, getting voted down for being totally correct.
Even MLK Jr, who didn’t get to see the disgusting megacorps of today, spoke often of the complacency of the comfortable.
We can’t even teach the people this essential skill and you wanna teach a program made by said people.
I think you misunderstood me. We need to teach the general populace critical thinking so they can correctly judge what we get from ChatGPT (or Wikipedia… or social media, or a random YouTube video).
I genuinely had students believe that what ChatGPT was feeding them was fact and try to cite it in a paper. I stamped out that notion as quickly as I could.
LOL. ChatGPT has become the newer version of wikipedia, only it won’t provide references.
Only, studies have shown Wikipedia is overall about as truthful and accurate as a regular encyclopedia. ChatGPT will straight up make shit up but sound so authoritative about it that people believe it.
It used to provide references but it made them up so they had to tweak it to stop doing that.
Man so it really learned from us, that’s great. Has me laughing again considering that.
Pretty sure we will destroy ourselves first, with war or some climate disaster.
Well, that would also solve the problem of people being misled, in a pretty novel way.
Why not both. Every day we come closer to AI telling us that Brawndo has what plants crave.
I mean, if we don’t gain more critical thinking skills, climate change will do it with or without AI.
I’d almost rather the AI take us out in that case…
A candidate at tonight’s Republican debate called it the “climate change hoax”
camera cuts to parts of the planet literally on fire
Must have gone to school in Florida.
What a completely cherry picked video.
“Was slavery beneficial?”
“Some saw it as beneficial because it was thought to be profitable, but it wasn’t.”
“See! Google didn’t say that slavery was bad!”
This is the best summary I could come up with:
Not only has it been caught spitting out completely false information, but in another blow to the platform, people have now discovered it’s been generating results that are downright evil.
Case in point, noted SEO expert Lily Ray discovered that the experimental feature will literally defend human slavery, listing economic reasons why the abhorrent practice was good, actually.
That enslaved people learned useful skills during bondage — which sounds suspiciously similar to Florida’s reprehensible new educational standards.
The pros included the dubious point that carrying a gun signals you are a law-abiding citizen, which she characterized as a “matter of opinion,” especially in light of legally obtained weapons being used in many mass shootings.
Imagine having these results fed to a gullible public — including children — en masse, if Google rolls the still-experimental feature out more broadly.
But how will any of these problems be fixed when the number of controversial topics seems to stretch into the horizon of the internet, filled with potentially erroneous information and slanted garbage?
The original article contains 450 words, the summary contains 170 words. Saved 62%. I’m a bot and I’m open source!
I think this is an issue with people being offended by definitions. Slavery did “help” the economy. Was it right? No, but it did. Mexico’s drug problem helps that economy. Adolf Hitler was “effective” as a leader. He created a cultural identity for people that had none and mobilized them to a war. Ethical? Absolutely not. What he did was horrendous and the bit should include a caveat, but we need to be a little more understanding that it’s a computer; it will use the dictionary of the English language.
deleted by creator
Your and @WoodenBleachers’s idea of “effective” is very subjective though.
For example Germany was far worse off during the last few weeks of Hitler’s term than it was before him. He left it in partial ruins - and under the control of multiple other powers.
To me, that’s not effective leadership, it’s a complete car crash.
If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.
it doesn’t even look at the smaller picture. LLMs build sentences by looking at what’s most statistically likely to follow the part of the sentence they have already built (based on the most frequent combinations from their training data). If they start with “Hitler was effective” LLMs don’t make any ethical consideration at all… they just look at how to end that sentence in the most statistically convincing imitation of human language that they can.
Guardrails are built by painstakingly trying to add ad-hoc rules not to generate “combinations that contain these words” or “sequences of words like these”. They are easily bypassed by asking for the same concept in another way that wasn’t explicitly disabled, because there’s no “concept” to LLMs, just combination of words.
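That “most statistically likely continuation” idea can be sketched with a toy bigram model. This is a drastic simplification (real LLMs use neural networks over thousands of tokens of context, not word-pair counts) and the “corpus” here is made up purely for illustration:

```python
import random
from collections import Counter, defaultdict

# Made-up "training data" -- a real LLM trains on trillions of tokens.
corpus = (
    "the model was effective at predicting text . "
    "the model was fast at predicting text . "
    "the test was effective at finding bugs ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_sentence(start, length=6):
    """Extend `start` by repeatedly sampling a likely next word.
    No meaning, no ethics -- just frequency counts."""
    words = start.split()
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Sample proportionally to how often each word followed.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_sentence("the model was"))
```

Whatever words most often followed “was” in the counts win, regardless of whether the resulting sentence is true or ethical; any guardrails have to be bolted on top afterwards.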
Yes, but in my defense, the “smaller picture” I was alluding to was more like the 4096 tokens of context ChatGPT uses. I didn’t mean to suggest it was doing anything we’d recognize as forming an opinion.
Sorry if I gave you the impression that I was trying to disagree with you. I just piggy-backed on your comment and sort of continued it. If you read them one after the other as one comment (at least in my head), they seem to flow well.
deleted by creator
If AI can only think at surface level, we are beyond doomed.
Honestly AI doesn’t think much at all. They’re scary clever in some ways but also literally don’t know what anything is or means.
They don’t think. They think 0% of the time.
It’s algorithms, randomness, probability, and statistics through and through. They don’t think any more than a calculator thinks.
LLMs aren’t AI… they’re essentially a glorified autocorrect system that are stuck at the surface level.
LLMs don’t think.
deleted by creator
Nobody said we were relying on that. We’ll all keep searching. We’ll all keep hoping it will bring abundance, as opposed to every other tech revolution since farming. I can only think at the surface level though. I definitely have not been in the science field for 25 years.
We should always fact check things we believe we know and seek additional information on topics we are researching.
Yay, yet another person saying that primary information sources should be verified using secondary information sources. Yes, you’re right, it’s great actually that in your vision of the future everyone will have to be a part-time research assistant to have any chance of knowing anything about anything, because all of their sources will be rubbish.
And that’s definitely a thing people will do, instead of just leaning into occultism, conspiratorial thinking, and group think in alternating shifts.
All I have to say is thank fuck Wikipedia exists.
He was able to convince the majority that his way of thinking was the right way to go, and deployed a plan to that effect.
So, you’re basically saying an effective leader is someone who can convince people to go along with them for a sustained period. Jim Jones was an effective leader by that metric. Which I would dispute. So was the guy who led the Donner Party to their deaths.
This is why I see a problem with this. You and I are able to discuss this and work out what each other means.
But in a world where people are time-poor and critical thinking takes time, errors based on fundamental misunderstandings of consensual meanings can flourish.
And the speed and sheer amount of global digital communication means that they can be multiplied and compounded in ways that individual fact checkers will not be able to challenge successfully.
deleted by creator
Huh? Yikes this feels like being back on reddit.
No I am not trying to “fight” you or “straw man” you at all!!!
I thought we were having a pleasant and civilized conversation about the merits and pitfalls of AI, using our different ideas about the word “effective” as an example.
Unfortunately I didn’t see that you’re handing me downvotes until just now, so I didn’t pick up on your vibe.
deleted by creator
I’ll fight you
I mean Jim Jones was pretty damn effective at convincing a large group of people to commit mass suicide. If he’d been ineffective, he’d have been one of the thousands of failed cult leaders you and I have never heard of. Similarly, if Hitler had been ineffective, it wouldn’t have taken the combined forces of half the world to fight him.
This is true, I guess the difference in the Jim Jones scenario is whether you define effective leadership as being able to get your plan carried out (even if that plan is killing everyone you lead) or whether you define it as achieving good outcomes for those you lead.
Hitler didn’t do either of those things in the end so I still don’t rate him, but I can see why you would if you just look at the first part of his reign.
AI often produces unintended consequences based on its interpretations - there’s a great TED talk on some of these - and I think with the LLMs we have way more variables in our inputs than we have time to define them. That will probably change as they get refined.
Yes agreed. There is nuance and details and context always left out or ignored
Never would I have thought that I would see México and Hitler used in the same paragraph.
Slavery is not good for the economy… Think about it: you have a good part of your population providing free labour, sure, but they aren’t consumers. Consumption is between 50 and 80% of GDP for developed countries, so if half your population are slaves you lose between 20% and 35% of your GDP (they still have to eat, so you don’t lose 100% of their consumption).
That also means less revenue in taxes, and more unemployment for non-slaves because they have to compete with free labour.
Slaves don’t order on Amazon, go on vacation, go to the movies, go to restaurants, etc. That’s really bad for the economy.
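The arithmetic behind that 20–35% figure can be sanity-checked with a back-of-the-envelope sketch. All numbers here are the comment’s hypothetical assumptions, not real data; the 20% residual-consumption figure is an arbitrary illustrative choice for the food and shelter slaves would still consume:

```python
# Back-of-the-envelope check of the consumption argument above.
# Assumptions (hypothetical): consumption is 50-80% of GDP, half the
# population is enslaved, and the enslaved retain only ~20% of the
# consumption a free worker would have (food, shelter).
def gdp_share_lost(consumption_share, enslaved_fraction=0.5,
                   residual_consumption=0.2):
    # The enslaved half's consumption mostly disappears from GDP,
    # except the residual spent keeping them fed and housed.
    return consumption_share * enslaved_fraction * (1 - residual_consumption)

for share in (0.50, 0.80):
    print(f"consumption {share:.0%} of GDP -> "
          f"~{gdp_share_lost(share):.0%} of GDP lost")
```

Under these assumptions the loss comes out at roughly 20% and 32% of GDP for the two consumption shares, which is in the same ballpark as the 20–35% range quoted above.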
Look at Saudi Arabia, China or the UAE; it’s still a pretty efficient way to boost your economy. People don’t need to be consumers if that isn’t what your country needs.
China has slavery? Also Saudi Arabia and the UAE import slaves, which is better for the economy than those people not being there at all but worse than them being regular workers.
Muslim and Christian minorities are forced to work in camps to “re-educate” them into being good Chinese citizens.
Oh yeah fair enough.
Those are very specific examples, with two of the biggest oil producers and the factory of the world. Their whole economies are based on exports, so internal consumption isn’t important.
Moreover, what proof do you have that their economies wouldn’t be in better shape if they didn’t exploit some populations but made them citizens with purchasing power?
2/3 of the people living in Saudi Arabia and the Emirates are immigrants whose passports have been confiscated; they work in factories, on construction sites, in oil fields, and in all other kinds of manual jobs. Meanwhile the citizens occupy all the well-paid jobs that require education; immigrants can’t apply for those. If they didn’t use forced labour, there simply wouldn’t be enough people in the country to fill all the jobs. Their economy could not be as good as it is right now.
Because their GDP comes from exporting a very rare and valuable natural resource. This is a rare case in the world, and not the one I was talking about.
Plus who’s to say they wouldn’t have a better economy if those exploited people could consume more?
True, consumers are only one pillar of GDP.
That’s really bad for a modern consumer economy, yes. But those weren’t a thing before the industrial revolution. Before that, the large majority of people were subsistence/tenant farmers or serfs who consumed basically nothing other than food and fuel in winter. That’s what a slave-based economy was an alternative to. It’s also why slavery died out in the 19th century: it no longer fit the times.
And isn’t the economy much better now than before the industrial revolution?
Obviously, but my point was that slaves weren’t economically terrible in an agrarian peasant/serf economy, which everywhere was before the industrial revolution.
I wish it did die out in the 19th century. We have more slaves now than ever.
There being more slaves now than ever is heavily disputed. There is also the fact that there was little more than a billion people in the world when the trans-Atlantic slave trade stopped, so there would have to be eight times as many slaves today for slavery to be as prevalent.
Yes, I agree, our per capita slave figure has to be much lower these days, mathematically speaking.
Even one slave is a slave too many, and knowing there are still so many (whatever figure we put it at) is heartbreaking.
Things like the cocoa plantation slaves and the slave fishing ships have people kidnapped and forced to work for nothing. Actual slavery by any definition.
Of course, when I said it died out I didn’t mean slavery was entirely gone and doesn’t exist at all. I mean it died out as a prevalent societal structure.
Hundreds of people in slavery on a cocoa plantation is of course awful, but it shouldn’t obscure the fact that there used to be vast swathes of land where slaves outnumbered free people and their children were born into bondage. That is what has died out.
I understand your wider point and I agree with it.
But I think the point I was making actually supports what you were saying upthread.
The agrarian model of the cocoa industry is economically reliant on slavery. 2.1 million children labour on those plantations in Ghana and Cote d’Ivoire, and a significant number have been trafficked or forced.
I think the problem is more that given the short attention span of the general public (myself included), these “definitions” (I don’t believe that slavery can be “defined” as good, but okay) are what’s going to stick in the shifting sea of discourse, and are going to be picked out of that sea by people with vile intentions who want to justify them.
It’s also an issue that LLMs are a lot more convincing than they should be, and the same people with short attention spans who don’t have time to understand how they work are going to believe that an Artificial Intelligence with access to all the internet’s information has concluded that slavery had benefits.
what’s going to stick in the shifting sea of discourse
This is what I think too. We’ve had enough trouble with “vaccines CaUsE AuTiSm” and that was just one article by one rogue doctor.
AI is capable of a real death-by-a-thousand-cuts effect.
especially with the current lack of regulation on it
Hitler didn’t create a cultural identity for Germans, that already happened in the 1800s.
I mean slavery was bad for the economy in the long run. And Hitler didn’t create a German cultural identity, that’d been a thing for a while at the time.
did they train it with ben shapiro speeches?
That dude already sounds like an AI deep fake voice
Yes, along with tons of other data.
deleted by creator
For the US in the list of countries starting with M, maybe too many 'Murica memes in the training set?
Articles about what some LLM wrote are just so stupid.
just wait until those articles can be written by LLMs
So the AI provided factual information and they did not like that because ‘slavery bad, therefore there was no benefit to it.’ There were benefits to slavery, mainly for the owners. US had a huge cotton export at one point, with the fields being worked by slaves.
But also a very few slaves did benefit, like being able to work a job that taught them very useful skills, which let them earn money and buy their own freedom. Of course not being a slave in the first place would be far better, but when you are one already, learning a skill that earns your freedom and gets you a job afterwards is quite the blessing. Plus a few individuals might’ve been living in such terrible conditions that being forced to work while getting fed might not have been so bad…
A bit of a nitpick but it was technically right on that one thing….
Hitler was an “effective” leader… not a good or a moral one, but if he had not been as successful at carrying out genocide, I doubt he’d be more than a small mention in history.
Now, a better AI should have realized that giving him as an example was offensive in this context.
In an educational setting this might be more appropriate, to teach that success does not equal morally good. Something I wish more people were aware of.
Shooting someone is an effective way to get to the town hall, if the town hall building is also where the police department and jail are.
Effective =/= net positive
Hitler wanted to kill Jews and used his leadership position to make it happen; soldiers and citizens blindly followed his ideology, and millions died before he was finally stopped.
Calling him not effective is an insult to the horrid damage caused by the Holocaust. But I recognize your sincerity and I see we are not enemies. So let us not fight.
I don’t need to reform the image of Nazis and Hitler. Decent people know they are synonymous with evil and hatred, and they should be.
People think of AI as some sort omniscient being. It’s just software spitting back the data that it’s been fed. It has no way to parse true information from false information because it doesn’t actually know anything.
While true, it’s ultimately down to those training and evaluating a model to ensure that these edge cases don’t appear. That’s not as hard when you work with compositional models that are good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means they kinda forget that LLMs were often not the first choice for AI tooling because… well, they hallucinate a lot, and they do stuff you really don’t expect at times.
I’m surprised that Google are having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, and they seem to be having more problems than everyone else.
And then when you do ask humans to help AI in parsing true information people cry about censorship.
Well, it can be less difficult, but still difficult, for humans to parse the truth also.
Being what is essentially the Arbiter of what is considered True or Morally Acceptable is never going to not be highly controversial.
What!?!? I don’t believe that. Who are these people?
removed by mod
What’s more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more accustomed to using such tech.
What’s more worrisome are the sources it used to feed itself.
It’s usually just the entirety of the internet in general.
Well, I mean, have you seen the entirety of the internet? It’s pretty worrisome.
The internet is full of both the best and the worst of humanity. Much like humanity itself.
Even though our current models can be really complex, they are still very very far away from being the elusive General Purpose AI sci-fi authors have been writing about for decades (if not centuries) already. GPT and others like it are merely Large Language Models, so don’t expect them to handle anything other than language.
Humans think about the world through language, so it’s very easy to be deceived by an LLM into thinking that you’re actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us, and we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM is able to exploit that feature of human behavior in order to appear smarter than it really is.
To repeat something another guy on lemmy said.
Making AI say slavery is good is the modern equivalent of writing
BOOBS
on a calculator. Obviously it doesn’t “think” any of these things. It’s just a machine repeating back a plausible mimicry.
What does scare me though is what google execs think.
They will be tweaking it to remove obvious things like praise of Hitler, because PR, but what about all the other stuff? Like, most likely it will be saying things like what a great guy Masaji Kitano was for founding Green Cross and being such an experimental innovator, and no one will bat an eye because they haven’t heard of him.
As we outsource more and more of our research and fact checking to machines, errors in knowledge are going to be reproduced and reinforced. Like how Cinderella now has “glass” slippers.
Sounds like the bot has been training on Florida public education and Prager U content.
You need slash now instead of “and”