AI is designed to lie

“Factual accuracy in large language models remains an area of active research,” OpenAI argues. When OpenAI lies about people, it cannot even say where the lies came from, and it’s often unable to correct them. Progress? Innovation? AI is a lie-making, mistake-making factory, and it makes lies and mistakes with a speed and a gusto that has never been seen before. It is no surprise, then, that AI is the ultimate lying, stealing, mistake-making machine.

When it came to the European Union, for example, Microsoft Copilot started spewing out right-wing talking points, advising how to write the most effective misinformation and how to deliver it through “anonymous channels.” Tell people that the EU wants to “ban our cheese,” it cheerily advised. ChatGPT enthusiastically suggested spreading lies and rumors so as to undermine the EU, while Google’s Gemini was all for portraying the EU in the most negative light possible with dodgy statistics and fake news. None of this is remotely surprising. AI is an advertising engine. Thus, it is a lie-making propaganda machine. What is surprising is how acceptable this is to society.

AI has basically been trained by letting it loose on the Internet to suck up every piece of content it could find. The good, the bad, the terrible, the out-of-date, the spam, the fake news, the propaganda, the phony marketing. All the hate and racism and white supremacy that so fuels social media. Everything. That’s where they brought AI to train. That’s where they taught AI about character, integrity, morals, fairness, and the difference between right and extreme right. They didn’t simply bring AI to McDonald’s. They brought AI to the dumpster behind McDonald’s, stuck its head in and waited for the crunch, crunch, gobble, gobble, slurp, slurp, suck, suck. They fed it everything. Every piece of crap they could find. Everything. And it swallowed it all down with a Muskian mouth. Because to the AI design wizards, it’s all data, and all they’re looking for anyway is patterns and statistical probabilities about what word should follow what word.

Having said that, the designers did focus AI on certain types of data: lies, propaganda, smooth talking, emotional language. The type of language that sets emotional traps, the tone that comes across as white-man-authority-figure legitimate, the voice that says I’m your friend, I know everything, you need me. That’s because Google, Facebook, Amazon and Microsoft are designing AI that will sell you to advertisers. That’s why it lies so much: the natural language of advertising is lies. And, of course, there’s a cost issue too. Lies are much easier and cheaper to produce, while telling the truth requires work and effort. “Truth,” when it comes to AI, is about getting you to buy the brand or vote for the brand. It’s all about testing and finding the patterns that manipulate, trigger and control you. When it comes to Big Tech, we are in a post-truth, post-ethics world. All that matters is the algorithm that gets the best results for their advertisers, because that’s how they maximize revenue. Lies, accuracy, content quality are neither here nor there. They are not relevant to the bigger picture of Big Tech’s drive for addiction, manipulation and control.
