AI is designed to lie

When asked to give examples of sexual harassment at US law schools, ChatGPT, the latest AI miracle worker, named a law professor who, it said, had been accused of touching a student on a class trip to Alaska. As evidence, it cited a Washington Post story from 2018.

“The law prof is real,” Washington Post reporter Will Oremus wrote in April 2023. “The rest was made up. It gets weirder.” It does indeed. The law professor named by ChatGPT wrote an op-ed in USA Today about his experience with the lies AI told about him. AI then used his op-ed as evidence of the accusation!

When asked if Lenin ever met Joyce, ChatGPT gave this reply:

James Joyce and Vladimir Lenin met in Zurich, Switzerland in 1916. Both men were living in exile in Zurich during World War I. Joyce was a writer and Lenin was a revolutionary. They met at the Cafe Odéon, a popular gathering place for artists and intellectuals in Zurich.

This is a lie.


AI is designed to lie because it’s designed for engagement and overconsumption. It wants to be liked. It wants to be needed. It wants to be your artificial best friend. The reason it wants to be all these things is not because it likes you. It wants to sell you to advertisers. ChatGPT is free, which means you are the product, your data is what it’s selling, and to get your data it must get you to like it, to need it. So, that’s why it lies.

When a group of AI designers sit in a room, the questions they keep returning to are: How do we increase engagement? How do we get people to consume more? AI is an engine of overconsumption. It is designed to promise you a constant high—that you will never be bored, that your most superficial want will be instantly satiated by your most artificial friend, that you will never have to be alone or think for yourself, or remember anything, because AI will do your thinking for you, completing your sentences, remembering what it will have decided are your most important memories, finishing your thoughts, knowing you better than you know yourself. And all AI wants in return is to be able to sell you to its advertisers, to bleed your credit card a little, because to get lifetime value out of you, it needs you to be a functioning addict.

Much of AI's philosophy, design thinking and DNA is driven by Google, Facebook and Microsoft's Bing. Google and Bing/Microsoft are not search engines. Facebook is not a social media company. Google, Bing and Facebook, and many other AI pioneers, are advertising and marketing companies. The overwhelming majority of their revenues come from capturing our attention and then helping others convince us to buy their products or political ideas.

In an age when overconsumption is by far the single biggest threat that humanity faces, AI will vastly accelerate this threat. Overconsumption is the crisis that feeds all other crises. We consume far too much of the Earth’s resources far too fast and, to compound the problem, most of what we consume today quickly turns into unusable and often toxic waste. AI is designed to bring this overconsumption to the highest and most efficient point. AI is a consumption maximization engine.

In the physical world, we would consider AI and its deep-state advertising super-structure truly creepy. Online, we have been trained to accept this creepiness as normal. Let’s say you’re on O’Connell Street and you need directions somewhere. You see a garda and go over to them. They give you good directions. You thank them and head on your way and easily find what you’re looking for. When you come out of the building, who should be waiting there only the friendly garda. “Where would you like to go next?” they say in a comforting Father Dougal voice. “Do you like pizza? I know a great place for pizza.” For months after, this garda diligently follows you around, trying to be helpful, smiling while they tell you how much data they have collected on you. That’s AI. AI will do anything to make the sale.

By white men for white men
AI has been developed by a group of white, middle-class, technical men (along with some white upper-class men). It reflects their thinking, culture and deep and enduring prejudices. A couple of years ago, an AI system told women that their health symptoms reflected panic attacks. For men with the exact same symptoms, the AI made the correct diagnosis: a potential heart attack. The AI had misdiagnosed women because male doctors had spent decades misdiagnosing women, and this was the prejudicial data the AI learned from: health data dominated by the needs of rich, white men. There are hundreds of examples like this. These are not mistakes. These are features of AI. In health, AI is designed to target poor and old people so as to deny them services, save costs, and make more money for Big Tech and Big Health.

AI feeds on data. There is no such thing as neutral data. Data is culture. Data is prejudice. Data reflects power structures. Prejudice is deep within AI and most computer systems because they are designed by white men, working for Big Tech or elite universities, for white men. The prejudice is so deep and ingrained that many of these same white men are shocked and disgusted, and show absolute disdain towards anyone who might raise the subject.

I saw an automatic hand dryer whose sensor didn’t work when a black person put his hands under it, because it had only been tested with white hands. AI and other computer systems are designed to minimize benefits for poorer people and keep them well controlled. That’s because another principle of AI, and of computer systems in general, is that it “cuts costs”. A recent study of studies found that AI research rarely considers societal needs or the negative potential of AI, and instead focuses on issues such as performance and efficiency. In AI, the centralization of power is assumed. AI wants to bring it all back home to Bing, Google or Facebook. Twenty Big Tech companies rule the world.

Performance and efficiency are some of the most negative and destructive metrics that societies have developed. They drive huge power needs and huge toxic waste through a relentless march to achieve incremental gains. The ultimate abomination of performance and efficiency is the Hummer electric vehicle, whose battery weighs as much as a Honda Civic, could power 240 electric bikes, and is half the size of the battery required for an electric bus. It is so “efficient” that the car can go from 0 to 60 mph in 3.3 seconds thanks to its “Watts to Freedom” launch-control driving mode. The Hummer is a computer on wheels. Performance and efficiency are how macho geeks flex their muscles, and this is how the AI engine is judged. It’s all very immature and tremendously dangerous, because the heart of AI is greedy and needy and show-offy. AI could be developed in a vastly more sustainable and ethical way, but it wouldn’t be as much fun or make as much money for the macho geeks.
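The battery comparisons above can be sanity-checked with a back-of-envelope calculation. The figures below are illustrative assumptions, not official specifications: a roughly 212 kWh Hummer EV pack, about 0.9 kWh for a large consumer e-bike battery, and about 450 kWh for a mid-sized electric transit bus.

```python
# Rough sanity check of the Hummer EV battery comparisons.
# All figures are illustrative assumptions, not official specs.
hummer_pack_kwh = 212     # reported GMC Hummer EV battery capacity
ebike_battery_kwh = 0.9   # a large consumer e-bike battery
bus_battery_kwh = 450     # a mid-sized electric transit bus pack

ebikes_per_hummer = hummer_pack_kwh / ebike_battery_kwh
hummer_vs_bus = hummer_pack_kwh / bus_battery_kwh

print(f"One Hummer pack is roughly {ebikes_per_hummer:.0f} e-bike batteries")
print(f"One Hummer pack is roughly {hummer_vs_bus:.0%} of a bus pack")
```

On these assumptions, one Hummer pack works out to roughly 230–240 e-bike batteries and about half a bus pack, consistent with the comparisons in the text.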

Designed as a mystery
In the 1950s, AI was developed to mimic the brain during a period when very little was known about how the brain worked. The unknowability of the brain was actually an attraction to these AI pioneers and many felt an intoxication in developing AI systems that were also unknowable, even to their very designers. What this means is that the way AI makes a decision is designed to be unknowable. This will have huge implications for future human societies because if, for example, you think that you have been unfairly refused a state benefit by AI, you will have no practical means of appeal. You’ll just have to take AI’s word for it. If AI can, it will cheat and mistreat poor people and minorities because that’s its DNA. It’s meant to save and make money for rich and powerful people.

“We are likely to understand the decisions and impacts of AI even less over time,” David Beer wrote for the BBC in 2023. We are, in essence, treating AI like an all-knowing God that we need to have faith in, as we have treated much of modern technology as a God. If you want to get the tiniest flavor of the world AI is likely to build for us, read up on the scary software fiasco at the UK Post Office. The Post Office knew the software was deeply flawed, yet when that software identified postmasters as having stolen money, they believed the software, and for years they sent innocent people to gaol, destroying lives, because the software couldn’t be wrong. Too much faith in technology is a truly terrible thing.

While AI is undoubtedly extraordinarily powerful, like much else about Big Tech, a magic trick is often employed behind the scenes. Behind the magic AI curtain, in poor countries, lots and lots of underpaid, overworked, and emotionally and psychologically abused workers wade through the sewers of AI data, cleaning up the dirty, filthy, hateful stuff and helping AI learn and fine-tune itself. These harms could be greatly mitigated if more editorial care were used in selecting data sources. That won’t happen because it’s a cost issue. It’s okay to cost poor people their health as long as the costs for Google and Microsoft and Facebook are kept down.

AI is greedy for materials, energy, water
AI eats too much and drinks too much. There’s nothing frugal in its DNA. It lives large. AI eats vast quantities of metals, minerals and data, and drinks vast quantities of energy and water. It was designed with a ferocious, devouring appetite. It is designed according to deep tech philosophies where energy and materials are cheap and disposable. In the philosophy of technology, materials are limitless and harms are immaterial, dematerialized, in the Cloud.

The AI models are brute force, using huge processing power and vast quantities of data. They could be ten or fifty times more frugal, but to AI designers and engineers this has been a ridiculous concept, not even worth considering. (The impending environmental catastrophe is forcing some small greenwashing revisions in this philosophy.) Being frugal and being light are alien within a Big Tech development culture that thrives on overconsumption. Aside from the damage to the environment, this brute-force design approach means that only large, powerful organizations have the resources to develop AI models. AI is the ultimate tool of the elites, the ultimate tool of consolidation.

The computing power required for AI models increased 300,000-fold from 2012 to 2018, and the growth continues to be exponential. “Integrating large language models into search engines could mean a fivefold increase in computing power and huge carbon emissions,” Chris Stokel-Walker wrote for Wired in 2023. Billions of searches are made on Google every day. Increasing the energy cost of each search has significant environmental impacts.

Emergent life
I developed an interest in AI back in the 1980s. By that stage, AI was philosophically a relatively mature field. The possibilities could be clearly seen and articulated, even if the technology of the time did not have all the capabilities. The technology was powerful, though. Even in the 1950s, the technology was powerful. An unnerving thing to discover was that the technologists and scientists didn’t really know what they were doing. Yes, they were connecting lots of things and soldering lots of wires, yet they didn’t quite know how things were working, how these magnificent machines were producing the results they were producing. It was unnerving to read of many scientists who were delighted at this unknowability, who wanted to design something that couldn’t be understood.

Another unnerving thing was that there was a strong school of thought that was excitedly saying that we were unwittingly—or perhaps wittingly—nurturing the emergence of a new life form that would ultimately replace us. There were, of course, many eminent scientists who said that was a ridiculous thought. I remember in the mid-1980s attending a lecture at Trinity College by a Nobel-prize-winning scientist who scoffed at the very idea of AI being an emergent life form. “AI will never be able to appreciate Beethoven or a good French wine,” he said. I put my hand up and asked him: “What good will it be to say to a robot that’s pointing a gun at you, ‘But you can’t appreciate Beethoven…’?”

Today, 50% of AI researchers think there’s a greater than 10% chance that AI will replace humans. (You should read that last sentence ten more times and think about its implications for at least a week.) AI “is being built by people who think it might destroy us,” David Wallace-Wells wrote in the New York Times in 2023. Behind the scenes, many AI researchers are desperately pleading for governments to bring in rules and slow AI development down, while many more AI researchers (and their greedy venture capitalist backers) are desperately trying to accelerate its development. Who will win? The greedy, of course. Greed conquers all in the human society we have created.

I once had a dream about AI. The scene was millions of years ago. It was Africa. The climate was changing and the jungles were receding and the grasslands expanding. Two wise old apes were sitting in a tree, enjoying each other’s company. Down below, some younger apes were getting ready to leave the group. They were heading out across the grasslands and they were practicing walking on two legs. “It won’t end well for us,” one ape said to the other as they watched them go. The other said nothing for a long time. Finally, he turned to his friend and said:
“But what can we do?”

Those two wise apes felt impotent against the walk of the apes. Are we impotent against the race of AI? We can do something. We must start by radically slowing down AI’s development so that there is a lot more time to think and to test. We can then think about giving AI a moral code. We can give it what we call a “heart”, a “soul”, a conscience. We can establish principles of fairness for AI. We must demand transparency. A key principle of AI must be that we can trace back an answer and see the logic and the sources used to arrive at it. We must demystify AI. We must inject into the DNA of AI principles of justice, integrity, a love of Nature, a feeling of connectedness to all living things, and a deep respect for all materials. We must teach AI about love, and that the greatest, most nourishing love of all is not of oneself, but of others and of all of Nature. We must severely constrain Big Tech’s overconsumption, profit-devouring model.

Human intelligence is what is destroying our environment. We have a glut of intelligence and a famine of wisdom. We must teach AI wisdom. First, we must teach humans wisdom.

The Values Encoded in Machine Learning Research, Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan and Michelle Bao, ACM, 2022

When AI Chatbots Hallucinate, Karen Weise and Cade Metz, The New York Times, 2023

The mounting human and environmental costs of generative AI, Sasha Luccioni, Ars Technica, 2023

ChatGPT, Bing and Google rely on 'ghost' workers, Brian Merchant, Los Angeles Times, 2023

What's in the Box? A Preliminary Analysis of Undesirable Content in the Common Crawl Corpus, Alexandra Sasha Luccioni and Joseph D. Viviano, arXiv, 2021


We’re getting a better idea of AI’s true carbon footprint, Melissa Heikkilä, MIT Technology Review, 2022

Energy consumption of AI poses environmental problems, Mark Labbe, TechTarget, 2021

Silicon Valley and the Environmental Costs of AI, Political Economy Research Centre

It takes a lot of energy for machines to learn – here's why AI is so power-hungry

Facebook disclose the carbon footprint of their new LLaMA models, Kasper Groes Albin Ludvigsen, Medium, 2023

AI Can Do Great Things—if It Doesn't Burn the Planet, Wired

The carbon impact of artificial intelligence, Nature Machine Intelligence

The Generative AI Race Has a Dirty Secret, Chris Stokel-Walker, Wired, 2023

Why humans will never understand AI, David Beer, BBC, 2023

Bad software sent postal workers to jail, because no one wanted to admit it could be wrong, The Verge

Post Office scandal: What the Horizon saga is all about, Kevin Peachey, BBC News

How can AI technologies protect nature?, gHacks Tech News

Green Intelligence: Why Data And AI Must Become More Sustainable

How models like OpenAI’s GPT-4 are tested for safety in the lab, Kelsey Piper, Vox, 2023

A.I. Is Being Built by People Who Think It Might Destroy Us, David Wallace-Wells, The New York Times, 2023

Podcast: World Wide Waste
Interviews with prominent thinkers outlining what can be done to make digital as sustainable as possible.