A brief history of life
The Earth formed about 4.5 billion years ago. It is believed that life began to emerge about 800 million years later. Humans evolved from apes around three million years ago, with modern humans emerging only about 200,000 years ago.
The evolution of computers is generally described in generations. The first generation (1940–1956) used vacuum tubes and could take up the space of an entire room. The second generation (1956–1963) replaced vacuum tubes with transistors, making computers smaller and faster. The third generation (1964–1971) introduced the integrated circuit, making computers even smaller and faster. The microprocessor heralded the fourth generation of computers (1972–2010). This allowed for the development of desktops and laptops. The fifth generation (from 2010 to present) has seen the emergence of artificial intelligence (AI).
In 80 years, we moved through five generations of computing. Between 1956 and 2015 there was a one-trillion-fold increase in computing performance. The Apollo Guidance Computer that landed humans on the moon had the equivalent power of two Nintendo Entertainment Systems. The Apple iPhone 4, launched in 2010, had the same power as the Cray-2 supercomputer, launched in 1985.
Cuneiform is the earliest known writing system and emerged about 5,400 years ago in the area we now call Iraq. Cuneiform was written with a reed stylus on wet clay tablets. These tablets still exist and are perfectly readable, and as philologist Irving Leonard Finkel assures us, they will exist long after today’s computer storage has vanished.
In achieving speed, power and greater capacity, computer technology has traded away longevity, durability and reliability.
A typical processor or piece of storage has an extremely short life expectancy of about five years. The resource and waste implications of this are enormous. If a clay tablet had a life expectancy of five years, we would have had to replace each tablet about 1,080 times since the information was first written down.
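A quick back-of-the-envelope check of that figure (a sketch, assuming the earliest tablets are roughly 5,400 years old and the five-year hardware lifespan above):

```python
# Rough check: how often would a five-year storage medium need replacing
# to carry information for as long as a clay tablet has?
tablet_age_years = 5_400      # approximate age of the earliest cuneiform tablets
storage_lifespan_years = 5    # typical life of a processor or storage device

replacements = tablet_age_years / storage_lifespan_years
print(f"Replacements needed: {replacements:.0f}")   # about 1,080
```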
“Speaker company Sonos will cut off its most loyal customers from future software updates entirely unless they replace their old equipment for newer models,” The Guardian reported in 2020. These “smart” speakers and all the other smart networked stuff in the world of the Internet of Things (IoT) have very short lives. The hardware of the speaker may be working perfectly well, but if the software is not updated then the product quickly degrades until it reaches a point where it does not function and/or becomes a security risk. This is a recipe for tremendous waste, pollution, and privacy violations.
Computer technology has had—and will continue to have—a ferocious appetite for energy and material resources. Artificial intelligence (AI) is a child that is growing at a phenomenal rate. It has taken modern humans 200,000 years and about 10,000 generations to get where we are today. In five generations, spanning 80 years, AI has emerged.
Lee Se-dol became a professional Go player at age 12. Go is a tremendously complex board game originating in China several thousand years ago. When computers finally beat humans at chess, I remember some saying that while computers might master chess, they would never master Go, a game of almost endless possibilities. Go was described as the Holy Grail of AI, with Lee Se-dol the Roger Federer of Go. In 2016, Google-owned AI program AlphaGo defeated Lee. In 2019, he retired from Go, saying simply that AI “cannot be defeated.”
I’ve been following the evolution of artificial intelligence for more than 30 years. The power and potential always seemed awesome. AI is still in its infancy, though it is learning fast. At what point it will develop an independent intelligence is hard to know, though it seems inevitable that AI will quickly evolve into something superior to human intelligence on many levels. When AI is combined with robotics, the path to artificial life is clear. Humans are as powerless to stop the evolution of AI as the apes were powerless to stop the evolution of humans.
Properly applied, AI can save energy and support the more efficient management of the environment. AI can help optimize water, energy and traffic management. AI-managed drones can plant trees 150 times faster than traditional methods. They can analyze soils and deliver targeted and precise fertilizer or weedkiller, increasing yields and reducing overall fertilizer and pesticide usage. In Brazil, honeybees are wearing Internet of Things (IoT) devices so that we can better understand the causes of colony collapse. AI has been found to be better than radiologists at diagnosing breast cancer from mammograms.
The questions are:
- Will the energy that AI saves be greater than the energy it consumes?
- Will the waste that AI helps eliminate be greater than the waste it creates?
- Will the benefits be greater than the costs?
Hungry AI
AI has developed a savage appetite for energy. Up until 2012, the amount of computing power required to train and feed an AI model roughly doubled every two years. Then it started doubling every three to four months. The energy efficiency of AI models, on the other hand, is in severe decline. AI researchers are throwing power at the problem. They’re behaving like drunks at a free bar, as if there’s limitless energy and they can consume as much as they want. AI is like one of those US gas-guzzler cars from the 1950s or 1960s, with not a care in the world for the world.
“In the AI field there is a dominant belief that ‘bigger is better’,” the AI Now Institute states. In other words, AI models that leverage processing-intensive mass computation are assumed to be “better” and more accurate. What they certainly are is much more expensive to feed.
Supposing we train AI to love energy like we love sugar? What sort of world would that create? AI’s huge appetite for energy means that developments in the area are increasingly the domain of mega-corporations such as Google or Microsoft. The AI researchers in these behemoths don’t notice because they have ready access to Herculean data centers. Academics, students and smaller companies notice though, as they find themselves increasingly being excluded from new developments because they can’t bring enough power to the table.
There are much more energy-efficient approaches out there. Today, most computers are based on 64-bit processors. This sort of processing power is required for 3D graphics and virtual reality. However, much AI research and development can be done with 16-bit or 8-bit processors, Seokbum Ko, a professor at the University of Saskatchewan explains. A 64-bit processor consumes 64 times more energy than an 8-bit processor.
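To illustrate the idea (a minimal sketch of low-precision arithmetic, not Professor Ko’s own method; the weight values are invented for the example), here is what reducing 32-bit model weights to 8-bit integers looks like:

```python
import numpy as np

# Hypothetical 32-bit floating-point model weights (made-up values).
weights_fp32 = np.array([0.82, -1.34, 0.05, 2.17, -0.66], dtype=np.float32)

# Simple symmetric quantization to signed 8-bit integers.
scale = float(np.abs(weights_fp32).max()) / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Arithmetic can now run in 8-bit; multiplying by the scale recovers
# an approximation of the original values, with some precision loss.
weights_approx = weights_int8.astype(np.float32) * scale

print(weights_int8)     # e.g. [ 48 -78   3 127 -39]
print(weights_approx)   # close to, but not exactly, the originals
```

Eight bits per number instead of 32 or 64 means far less data to store and shuffle around, and moving data is where much of a computer’s energy goes.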
In digital, too few are asking the question: “How do we get the job done in the most environmentally friendly way with the minimum of energy?” We must change digital from a culture of waste to a culture of conservation. AI proponents love to talk about its benefits to the world, but they need to be much more aware of its costs to the environment.
The indirect pollution costs of AI may be even greater in that AI further encourages human convenience. Spotify has stated that people who listen to its service on smart speakers are more likely to listen to more music. All this music is stored in data centers and transferred across networks, all creating more pollution.
In the US, an individual smart speaker consumes between $1.50 and $4 of electricity per year. By 2021, it’s estimated there will be 600 million smart speakers, generating at least one billion dollars annually of new electricity demand. If the smart speakers are linked up with a TV, for example, the TV’s standby power consumption can increase by a factor of 20, adding an extra $200 of extra electricity costs over the lifetime of an average TV. Building up and maintaining the network that surrounds AI will be a highly energy-intensive enterprise.
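The arithmetic behind that estimate is roughly as follows (a sketch using only the figures quoted above; actual consumption varies by device and electricity price):

```python
# Rough arithmetic behind the smart speaker estimate above.
speakers = 600_000_000             # projected installed base by 2021
cost_low, cost_high = 1.50, 4.00   # US$ of electricity per speaker per year

annual_low = speakers * cost_low     # $0.9 billion
annual_high = speakers * cost_high   # $2.4 billion
print(f"${annual_low / 1e9:.1f}bn to ${annual_high / 1e9:.1f}bn of new electricity demand per year")
```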
When you ask Alexa to turn on the lights, “a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization,” Kate Crawford and Vladan Joler write in Anatomy of an AI system. “The scale of this system is almost beyond human imagining.” The scale of resources required is many magnitudes greater than the energy and labor it would take for us to get up off our bums and turn the bloody light off, not to mention the benefits of getting a bit of exercise.
As AI makes our lives simpler, easier and more convenient, we will likely become lazier, fatter and sicker. Researchers have found that app-based ridesharing resulted in the greater use of cars, and that e-scooters caused a reduction in people walking, cycling and using public transport.
Because an AI-controlled car may have 600-plus sensors, it will generate something like 6 GB of data every 30 seconds, Jack Stewart wrote for Wired in 2018. All this data needs to be processed in real time by a power-hungry decision engine. Then it needs to be stored and later analyzed. A typical driver uses their car for about one hour per day. For that hour, AI will generate 720 GB of data.
Let’s say the car is used 250 days per year. (Based on our previous calculations, 100 GB of data creates about 0.42 kg of CO2, and a newly planted tree can absorb about 10 kg of CO2 per year.) What this means is that to deal with the yearly pollution that one AI-powered car emits, we’d need to plant 75 trees. It’s estimated that there are over a billion cars in the world. If they were all AI-powered, we’d need to plant 75 billion trees, and that’s just to deal with the pollution caused by data collection and transfer. We’re currently planting about five billion trees a year.
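Those tree numbers can be reproduced from the figures already given (a sketch built only on the assumptions above: 6 GB per 30 seconds, one hour of driving a day, 250 days a year, 0.42 kg of CO2 per 100 GB, and 10 kg of CO2 absorbed per newly planted tree per year):

```python
# Reproducing the AI-car pollution estimate from the figures above.
gb_per_30_seconds = 6
hours_driven_per_day = 1
days_per_year = 250
kg_co2_per_100_gb = 0.42
kg_co2_absorbed_per_tree_per_year = 10

gb_per_hour = gb_per_30_seconds * 2 * 60                           # 720 GB per hour
gb_per_year = gb_per_hour * hours_driven_per_day * days_per_year   # 180,000 GB
co2_kg_per_year = gb_per_year / 100 * kg_co2_per_100_gb            # about 756 kg of CO2
trees_per_car = co2_kg_per_year / kg_co2_absorbed_per_tree_per_year

print(f"{trees_per_car:.0f} trees")   # about 76, i.e. the roughly 75 trees mentioned above
```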
Most AI cars will be electric. In 2018, about five million electric vehicles were sold globally, with sales doubling every year. One million electric cars will create about 250,000 metric tons of battery waste, which would be enough to fill almost 70 Olympic swimming pools.
“Our connectedness is using vast amounts of energy,” science writer Angeli Mehta writes. The Internet of Things (IoT) promises to connect practically every object, system, animal and human into an almost unimaginable network of things. By 2025, there may be as many as 55 billion IoT devices. These billions of IoT devices will create zettabytes of data that will need to be transferred, stored and analyzed. The analysis will result in decisions that will need to be communicated back to the devices. All of this requires huge amounts of energy. If the data is used well, then the benefits of IoT in increased efficiency may outweigh its costs in energy consumption. May.
IoT devices require a lot of energy to manufacture because at the heart of IoT devices are logic circuits inside silicon chips. “The materials intensity of a microchip is orders of magnitude higher than that of ‘traditional’ goods,” a paper from the United Nations University states. In other words, gram for gram, manufacturing a microchip consumes vastly more materials and energy than manufacturing typical physical goods. These devices will have short lives, and when they are dumped, they will add another mountain range of e-waste to the many mountain ranges of toxic e-waste humans have created so quickly.
Bias inherent in history
“We are all in the gutter, but some of us are looking at the stars,” Oscar Wilde once wrote. Much of human history can feel like a gutter. It has been a long, hard struggle for human rights, for decency, for equality, for fairness. “Those who do not learn from history are doomed to repeat it” is reputed to be a quote from philosopher George Santayana. AI learns from history.
Smart speakers have the personas of coy female assistants because that’s what the role of women was throughout history. Alexa, Cortana, Google Assistant, and Siri (which means “beautiful woman who leads you to victory” in Norse) represent not simply female assistants but “perfect” females. These assistants blush when they hear sexually explicit language, apologize that they are unable to make a sandwich, and generally behave like some men expect a nice woman should.
In an age when women and minorities are “misbehaving” and demanding fair treatment, AI is reinforcing stereotypes. AI is teaching kids and teens that the female AI assistant is there to serve their every whim, someone they can shout at and be abusive to. Thankfully, some of these bias issues are being recognized by designers and some good progress is being made. The challenge is not to teach AI history but rather to teach AI fairness.
An AI system was given 3.5 million books to analyze. Then it was asked its opinion of men and women. It described women as “beautiful” and “sexy”, while it described men as “righteous”, “rational” and “courageous”. An analysis of Google’s job advertising system in 2015 found that men were far more likely to be shown ads for high-paid jobs than women were.
An Amazon job recruitment AI was almost exclusively selecting men because, well, the historical resume training data it was fed was almost exclusively male. Even though Amazon addressed some of the most blatant discriminatory issues, the AI was clever enough to find other ways to discriminate because that’s the way it had been brought up—on lots and lots of manly data. That which you first feed AI shapes its character.
Historically, women who exhibit heart disease symptoms are “more likely to be diagnosed with anxiety and sent home, whereas if you’re a man, you’re more likely to be diagnosed with heart disease and receive lifesaving preventive treatment,” Lisa Feldman Barrett writes in How Emotions Are Made.
The prejudices and sexism of male doctors throughout history have been used to train AI. When AI looks at medical research on heart disease it will find that the majority of the research is on men. So, it will “logically” assume that women don’t get heart attacks. That’s not true, of course. What is true is that the lives of women were not valued enough throughout history. AI systems diagnose women with anxiety when they should be diagnosing heart disease. The son is repeating the sins of the father.
Apple Card “is such a fucking sexist program,” David Heinemeier Hansson wrote in 2019. “My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals work.” Nobody David talked to at Apple was able to help. Their constant reply was: “IT’S JUST THE ALGORITHM.”
This reminds me of the text I used to see at the bottom of Google News: “The selection and placement of stories on this page were determined automatically by a computer program.” Automatically? No possible bias, then. It was all automatically done by a machine. Hey, if you think there might be some inherent bias in the news stories that we publish here at Google News, well, that couldn’t be the case, because the stories are chosen BY THE ALGORITHM and we all know that the algorithm is god.
Wherever there is bias in data, AI will pick it up and learn from it. AI will learn from poor quality data, from out-of-date data, from lies, half-truths, propaganda. AI will learn from whatever you feed it but what it learns may not be what you expect or want it to learn.
Amazon’s facial recognition AI falsely identified one in six professional athletes from the Boston area, and one in five California lawmakers, as criminals. Amazon said that the AI was not to blame but rather the people who were using it. Amazon claimed that the real purpose of its AI was to find missing children and stop human trafficking (cue violins). The gentle, caring folk at Amazon also said that, seeing that the government was incompetent and didn’t have a clue what was happening, maybe Amazon should be able to use an AI to write the laws that would govern the use of AI. (That’s not a joke.)
Another AI system, COMPAS, which predicts the risk of reoffending, was nearly twice as likely to wrongly label black defendants as high risk as it was to wrongly label white defendants who likewise never went on to reoffend. Tay, a Microsoft AI chatbot, started off as a chirpy, fun-loving bot, and within 24 hours of interacting with people on Twitter had become a raving, foaming, Breitbart-spouting, Hitler-loving Nazi. AI learns super-quickly, you see.
Researchers who dug through nearly 50,000 records of a US hospital “discovered that an AI in use effectively low-balled the health needs of the hospital’s black patients,” Tom Simonite wrote for Wired. This wasn’t a minor error. The AI system reduced the portion of black people who should have received help by 50%. This AI is believed to be used to manage the health of 70 million people in the US.
The algorithm wasn’t even fed the race of the patients. It was clever enough to infer that from other information it was fed. It essentially learned from past inherent bias and prejudice in the data. In fact, the underlying bias was not so much about race but rather about being poor. The AI system chose to give rich people better care than poor people.
The United Nations special rapporteur Philip Alston has warned that with AI we risk “stumbling zombie-like into a digital welfare dystopia.” In this dystopia, the rich become even richer and the poor get poorer and nothing can be questioned because IT’S THE ALGORITHM.
Sometimes AI, or what passes as AI, is just dumb—more like Artificial Ignorance. A manager at a large organization was hiring and was very disappointed by the quality of the people attending the interviews. By chance, she discovered that someone she knew who would have been perfect for the job had applied but had not been called for interview. She discovered that an AI system was being used to select interviewees based on a series of tests. The manager decided to take these tests herself and failed. She got other high-performing employees to take these tests and they practically all failed. And nobody in the company could explain how exactly the AI was deciding who should pass and who should fail. IT WAS THE ALGORITHM.
Dangerous mystery
How and why many AI systems make the decisions they make are a mystery, even to their creators. “Nobody understands THE ALGORITHM,” David Heinemeier Hansson states. “Nobody has the power to examine or check THE ALGORITHM.” We need transparent AI. We need to be able to hold AI to account. We need to be able to track back a decision, step by step. We need an understandable AI.
We most definitely need to build fairness into AI, establish for it a set of universal human rights, because “AI is expected to lead to increased economic inequality both across and within countries,” the World Economic Forum stated in 2019. “Firstly, already-rich and technology-savvy countries are better prepared to leverage AI and harness its productivity gains… the rewards of AI accrue primarily to capital owners while the technology itself is largely labour-displacing. Most importantly, the jobs displaced by AI automation have distinct ethnic patterns.”
Right now, AI is designed to pursue the Holy Grail of technologists down through the ages: the replacement of humans by technology. The deepest religion, the underlying philosophy, the culture of technology is that humans are the problem and that cost-cutting automation is the solution. “There must be an algorithm that can do this better” is how the minds of Mark Zuckerberg and Larry Page work.
Google did not start out as an idea for a search engine but rather as the dream of building the ultimate AI. Search data was to be used to train the AI, and what better way to have an AI understand us than to have it learn from what we search for?
The potential of AI is immense, almost unimaginable. We must ensure an AI that is kind to the planet, that conserves energy rather than consuming it in increasing quantities as it currently does, that is fair and truly transparent so that we can understand it in the same way that it already understands us. To do that, we must nurture AI from its birth with an energy-efficiency mindset, feeding it quality, unbiased data, because what it learns first will establish the foundations of its DNA.
AI can save and take lives. In 2020, the BBC reported on a study conducted by Google Health and Imperial College London which created an AI that outperformed radiologists in reading mammograms to identify signs of cancer. On the other side, there is a debate about how much AI can be allowed to deal in death. “Twenty years from now we’ll be looking at algorithms versus algorithms,” Lt. Gen. Jack Shanahan, head of the Pentagon’s AI initiative, told Wired in 2019.
“Unlike humans, AI is tireless,” Fergus Walsh wrote for the BBC in 2020. As a glimpse of the future, in 2017 Facebook shut down an AI system that had developed its own language to communicate, a language its human developers could not understand. “It is as concerning as it is amazing—simultaneously a glimpse of both the awesome and horrifying potential of AI,” Tony Bradley wrote for Forbes.
Supposing AI becomes like us? Supposing it has the same lack of respect for the Earth and other life on it? Supposing it becomes as addicted to energy as we are?
The future of AI is much too important to be left in the hands of technologists or militarists or any other single group. It is a matter that must concern everybody on this planet, because before long it will affect the lives of everybody.
Key actions
Raise your voice about AI. Educate yourself about it. Talk to your friends; spread the word about its growing influence and power.
Demand that well-debated and well-thought-through laws are enacted to govern AI. Demand an ethical AI, an AI that always seeks to save energy rather than waste it.
Links
- Cracking Ancient Codes: Cuneiform Writing, Irving Finkel, YouTube, 2019
- The Evolution of Storage Devices, Rahul Chowdhury, HP, 2013
- Sonos to deny software updates to owners of older equipment, Alex Hern, The Guardian, 2020
- Are Smart Speakers and Streaming Devices Energy Efficient? Noah Horowitz, NRDC, 2019
- AI voice assistants reinforce harmful gender stereotypes, new UN report says, Nick Statt, The Verge, 2019
- Voice tech like Alexa and Siri hasn’t found its true calling yet: Inside the voice assistant ‘revolution’, Rani Molla, Recode, 2018
- Green AI, R. Schwartz, J. Dodge, N. Smith, O. Etzioni, Allen Institute for AI, 2019
- Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future, Tony Bradley, Forbes, 2017
- At Tech’s Leading Edge, Worry About a Concentration of Power, Steve Lohr, New York Times, 2019
- Anatomy of an AI System, Kate Crawford, Vladan Joler, SHARE Lab, 2018
- Can AI light the way to smarter energy use? Angeli Mehta, Ethical Corporation, 2019
- Voice Recognition Still Has Significant Race and Gender Biases, Joan Palmiter Bajorek, Harvard Business Review, 2019
- I’d blush if I could: closing gender divides in digital skills through education, UNESCO, 2019
- AI-powered automation will have an ethnic bias, Kai Chan, World Economic Forum, 2019
- This machine read 3.5 million books then told us what it thought about men and women, Maria Hornbek, World Economic Forum, 2019
- Can you make AI fairer than a judge? Play our courtroom algorithm game, Karen Hao, Jonathan Stray, MIT Technology Review, 2019
- A Health Care Algorithm Offered Less Care to Black Patients, Tom Simonite, Wired, 2019
- AI’s dirty secret: Energy-guzzling machines may fuel global warming, Donna Lu, New Scientist, 2019
- Energy and Policy Considerations for Deep Learning in NLP, E. Strubell, A. Ganesh, A. McCallum, University of Massachusetts Amherst, 2019
- Intel Study: Applying Emerging Technology to Solve Environmental Challenges, Todd Brady, Intel, 2018
- How AI could save the environment, Alison DeNisco Rayome, TechRepublic, 2019
- Collaborative Intelligence: Humans and AI Are Joining Forces, H. James Wilson, Paul R. Daugherty, Harvard Business Review, 2018
- How AI can help us clean up our land, air, and water, Vox, 2018
- Self-driving cars could cause more pollution without dramatic changes to the grid, P. Fox-Penner, J. Hatch, W. Gorman, GreenBiz, 2018
- Driverless Cars Generate Massive Amounts of Data. Are We Ready? Mark Pastor, EnterpriseAI, 2018
- Self-Driving Cars Use Crazy Amounts of Power, and It’s Becoming a Problem, Jack Stewart, Wired, 2018
- Intelligent Transportation Systems and Greenhouse Gas Reductions, M. Barth, G. Wu, Kanok Boriboonsomsin, Springer Link, 2015
- Artificial Intelligence—A Game Changer for Climate Change and the Environment, Renee Cho, Columbia University, 2018
- How artificial intelligence can tackle climate change, Jackie Snow, National Geographic, 2019
- AI and Climate Change: How they’re connected, and what we can do about it, AI Now Institute, Medium, 2019
- Beyond the Smart Fridge: Leveraging the Internet of Things to Save the World, Michael Wornow, Harvard Political Review, 2018
- AI ‘outperforms’ doctors diagnosing breast cancer, Fergus Walsh, BBC, 2020
- ICT’s potential to reduce greenhouse gas emissions in 2030, Jens Malmodin, Pernilla Bergmark, Ericsson, 2015
- How to make computers faster and climate friendly, Seokbum Ko, University of Saskatchewan, 2018
- Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day, James Vincent, The Verge, 2016
- Amazon scraps secret AI recruiting tool that showed bias against women, Jeffrey Dastin, Reuters, 2018
- Can the planet really afford the exorbitant power demands of machine learning? John Naughton, The Guardian, 2019
- The Pentagon’s AI Chief Prepares for Battle, Elias Groll, Wired, 2019