
Data centers are really data dumps

It’s not simply crap content. Computer code bloat is everywhere. For starters, most software, most features, serve no useful function. A pile of crap software is launched and then either it dies a death, or else for years badly designed fixes are made to try to get it working in the most basic manner, while making it even more complex and bloated. It’s hard to comprehend the appalling quality of most enterprise systems, while most consumer software apps hardly even get downloaded. Those that do get downloaded are hardly ever used. Thirty days after news apps, shopping apps, entertainment apps and education apps have been downloaded, most will have lost over 90% of their users. A typical webpage can easily weigh 4 MB. If it were properly coded, that weight could be brought down to 200 KB, a 95% reduction. 95% crap.
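As a rough sanity check on that claim, the percentage is simple arithmetic on the two figures quoted above (a minimal sketch; the 4 MB and 200 KB numbers are the text’s, the kilobyte conversion is mine):

```python
# Back-of-the-envelope check of the page-weight reduction claimed above.
# Assumes 1 MB = 1,024 KB; the 4 MB and 200 KB figures come from the text.
bloated_kb = 4 * 1024   # a typical 4 MB page, in KB
trimmed_kb = 200        # the same page properly coded, in KB

reduction = 1 - trimmed_kb / bloated_kb
print(f"Reduction: {reduction:.0%}")  # ~95%
```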

Big Tech laughs about all this crap production. This is how Big Tech makes so much money from its data centers. It sells them as plush five-star hotels for superior VIP data, when in reality it’s running a datafill, a data dump. If most of the data stored in a typical data center were processed and accessed every day, everything would explode; the servers would fry. The demand would crash everything. The data center business case depends on most people never accessing the data they’ve stored. You’re paying for a data dump.

To protect their profits, Big Tech has historically made it very hard for you to delete. Artist Honor Ash observed:

“Initially, Gmail didn’t even include a delete button. Not only was it no longer necessary to delete emails regularly to make space in your inbox, but it was actually not even possible. This shift broke everyone’s deletion habit—it ended the ritualistic appraisal of what should be kept, and ushered in a default in which literally everything should.”

When the good habits of deletion ended with the Cloud, they were replaced by the very bad habit of keeping everything. How often I’ve heard the argument that we have to keep everything because you never know what will be important in the future. I heard it from executives in charge of intranets, websites and computer systems where nobody could find anything because of terrible search design and awful information architecture. And what people did find was usually some sort of dodgy draft, a copy of a copy, or something that was way out of date, inaccurate or useless. Keeping all this junk data does not simply reduce findability; it also increases cybersecurity risk. Huge quantities of poorly structured and badly maintained data and software are an invitation to hacking and other security risks.

Even if we could put a data center in every town and village in the world, we couldn’t keep everything anyway. There is simply too much data being produced, vastly too much, so that in any one year we’re lucky if we have the space to store about 10% of the total data produced. We are now into the era of zettabytes. As my previous book, World Wide Waste, explained:

“A zettabyte is 1,000,000,000,000,000 MB or one quadrillion MB. If a zettabyte was printed out in 100,000-word books, with a few images thrown in, then we would have one quadrillion books. It would take 20,000,000,000,000 (20 trillion) trees’ worth of paper to print these books. It is estimated that there are currently three trillion trees on the planet. To print a zettabyte of data would thus require almost seven times the number of trees that currently exist to be cut down and turned into paper.”
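The arithmetic in that passage can be reconstructed from its own figures. A minimal sketch, assuming roughly 1 MB per 100,000-word book and about 50 printed books per tree (both implied by the quote’s own numbers, not independent data):

```python
# Rough reconstruction of the zettabyte-to-trees arithmetic quoted above.
# Assumptions: ~1 MB per 100,000-word book, and ~50 printed books per tree
# (both implied by the quote's own figures, not independent data).
zettabyte_mb = 1_000_000_000_000_000   # 1 ZB expressed in MB (one quadrillion MB)
mb_per_book = 1                        # assumed size of one 100,000-word book
books = zettabyte_mb / mb_per_book     # ~one quadrillion books

books_per_tree = 50                    # implied by the quote's figures
trees_needed = books / books_per_tree  # ~20 trillion trees
trees_on_earth = 3_000_000_000_000     # ~3 trillion trees, as the quote states

print(f"Trees needed: {trees_needed:.1e}")                                      # ~2.0e+13
print(f"Multiple of all existing trees: {trees_needed / trees_on_earth:.1f}")   # ~6.7
```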

Data centers contain 90% crap data

We need to talk about the data. Crap data. We’re destroying our environment to create and store trillions of blurred images, half-baked videos, rip-off AI ‘songs’, rip-off AI animations, videos and images, emails with mega attachments, never-to-be-watched-again presentations, never-to-be-read-again reports, files and drawings from cancelled projects, drafts of drafts of drafts, out of date, inaccurate and plain wrong information, and gigabytes and gigabytes of poorly written, meandering content.

We’re destroying our environment to store copies of copies of copies of stuff we have no intention of ever looking at again. We’re destroying our environment to take 1.9 trillion photos every year. That’s more photos taken in one single year in the 2020s than were taken in the entire 20th century. That’s more than 200 photos taken for every child, woman and man alive. Every year. 12 trillion photos and growing are stored in the Cloud, the vast majority of which will never be viewed again. Mind-boggling, and exactly how Big Tech wants it.
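The per-person figure is simple division. A minimal sketch, assuming a world population of roughly eight billion (my assumption; the 1.9 trillion photos a year is the figure above):

```python
# Photos-per-person arithmetic for the figures above.
photos_per_year = 1.9e12      # photos taken per year (from the text)
world_population = 8e9        # assumed world population

print(f"Photos per person per year: {photos_per_year / world_population:.0f}")  # ~238
```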

I have spent almost 30 years working with hundreds of the largest organizations in the world in some 40 countries, trying to help them better manage their content and data. Here’s what I’ve learned. More than 90% of commercial and government data is crap, total absolute crap. Period. It should never have been created. It certainly should never have been stored. The rise of digital saw the explosion of data crap production. Content management systems were like giving staff diesel-fueled diggers, whereas before they only had data shovels. I remember around 2010 being in conversation with a Microsoft manager, who estimated that there were then about 14 million pages on Microsoft.com, and that four million of them had never been visited. Four million, I thought. That’s basically the population of the Republic of Ireland, in pages that nobody has ever visited. Why were they created? All that time and effort and energy and waste poured into pages that nobody had ever read. We are destroying our environment to create and store crap. And nobody cares.

Everywhere I went it was nothing but the same old story. Data crap everywhere. Distributed publishing that allowed basically anyone to publish anything they wanted on the intranet. And nobody maintains anything. When Kyndryl, the world’s largest provider of IT infrastructure services, was spun off by its parent, IBM, they found they had data scattered over 100 disparate data warehouses. Multiple teams had multiple copies of the same data. After cleanup, they had deleted 90% of the data. There are 10 million stories like this.

Scottish Enterprise had 753 pages on its website, with 47 pages getting 80% of visits. A large organization I worked for had 100 million visits a year to its website, with 5% of pages getting 80% of visits. 100,000 of its pages had not been reviewed in 10 years. “A huge percentage of the data that gets processed is less than 24 hours old,” computer engineer Jordan Tigani explained. “By the time data gets to be a week old, it is probably 20 times less likely to be queried than from the most recent day. After a month, data mostly just sits there.” The Southampton University public website found that 0.2% of pages got 90% of visits. Only 4% of its pages were ever visited. So, 96% of its roughly four million pages were never visited. One organization I knew of had 1,500 terabytes of data, with less than 2% ever having been accessed after it was first stored. There are 20 million more stories like these.

Most organizations have no clue what content they have. It’s worse. Most organizations don’t even know where all their data is stored. It’s even worse. Most organizations don’t even know how many computers they have. At least 50% of the data in a typical organization is sitting on some server somewhere, and nobody in management knows whether it even exists; nor do they care. The average organization has hundreds of unsanctioned third-party app subscriptions being paid for on some manager’s credit card, storing everything from project chats to draft reports to product prototypes.

The Cloud made the crap data problem infinitely worse. The Cloud is what happens when the cost of storing data is less than the cost of figuring out what to do with the crap. One study found that data stored by UK engineering and construction industry firms had risen from an average of three terabytes in 2018 to 26 terabytes in 2023. That’s a compound annual growth rate of over 50%! (The arithmetic is sketched below.) That sort of crap data explosion happened—and is happening—everywhere. And nobody in management cares, because it’s so ‘cheap’ to store data. And this is what AI is being trained on. And we wonder why AI gets stuff wrong so often? Crap data in. Crap data out. And nobody cares. Particularly at senior management level, nobody cares. Senior management is full to overflowing with Big Tech groupies chanting about the latest tech miracle that’s going to magically transform and supercharge their careers. Having to deal with senior managers has always been the most unsavory part of my job, because when it comes to technology, these managers exist on a whole other level of stupid vanity and narcissistic pursuit of their own selfish agendas.
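A quick sketch of that growth-rate arithmetic, assuming the five-year span from 2018 to 2023 stated above:

```python
# Compound annual growth rate (CAGR) for the storage figures above:
# an average of 3 TB in 2018 growing to 26 TB in 2023, i.e. over five years.
start_tb, end_tb, years = 3, 26, 5

cagr = (end_tb / start_tb) ** (1 / years) - 1
print(f"CAGR: {cagr:.0%}")  # ~54%, i.e. over 50% a year
```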

Extreme secrecy of data centers

As soon as Lars Ruiter stepped out of his car, he was confronted by a Microsoft security guard seething with anger, Morgan Meaker wrote for Wired. The security guards for data centers are specially trained to be aggressive and confrontational, so as to reinforce the air of secrecy and alienness that surrounds a data center in a local community. Ruiter, a Dutch local councilor, had parked in the rain outside a half-finished Microsoft data center that was rising out of flat North Holland farmland. The guard was not willing to listen to any local councilor expounding on the democratic right to transparency, and before Ruiter knew it, the security guard had his hands around the councilor’s throat.

Is there a more secretive empire in the world than Big Tech and its data centers? Big Tech realizes the power of data, the power it has over us when it has our data. It knows that if we knew even half as much about it as it knows about us, we would control Big Tech a lot more strictly than we do today.

There’s a common mantra in Big Tech when it has to respond to people who worry about all the data that it is sucking up about them: “If you’ve got nothing to hide, you’ve got nothing to worry about.” It’s such a disarming and innocent-sounding phrase. So comforting. However, does this mean that Big Tech, which hides its own data with fanatical religiosity, has something very big to hide? Yes, of course it does. Big Tech has an awful lot to hide.

Big Tech makes huge efforts to deny academic institutions or other research bodies access to data that would help highlight the harm it does. “Without better transparency and more reporting on the issue, it’s impossible to track the real environmental impacts of AI models,” Kate Crawford, a research professor at USC Annenberg, who specializes in the societal impacts of AI, told the Financial Times. According to Julia Velkova, an associate professor at the University of Helsinki, “These companies are unapproachable and largely disconnected from the places in which they are built. They refuse to talk to researchers or the public but instead communicate through press releases and YouTube videos, through the platforms that they own.”

Data center secrecy is rampant, deliberate and consistent. “When it comes to Google, what’s really striking is the lack of transparency and information when it comes to these projects,” said Sebastián Lehuedé, an expert in AI. How much water do US data centers use? “We don’t really know,” Lawrence Berkeley National Laboratory research scientist Dr. Arman Shehabi explained. “I never thought it could be worse transparency than on the energy side, but we actually know less.” And Philip Boucher-Hayes, a journalist with RTE, the Irish national broadcaster, said: “We have been really bad at reporting data centres accurately, largely because the data centres refuse to be transparent. I spent months trying to get interviews with some of the hyperscale operators here. They refused.”

Data center jobs scam

Rural Washington, USA, was going to be transformed. There would be so many jobs. Making Rural Washington Great Again. All it required was cheap land, cheap water, cheap electricity, and big tax breaks. The data centers were coming. The data centers came. “Where were the jobs?” people asked. The government would not say. Top secret. Trade secret. Can’t tell. The public must never know what happened behind those closed doors, when the government officials and data center executives commingled and whispered sweet nothings in each other’s ears.

The Michigan legislature extended tax breaks for data centers even though the data centers already located in the state had delivered fewer than 3% of the promised jobs. All those promises of great digital jobs were never going to be kept, as John Mozena, president of the Center for Economic Accountability, explained to Scott McClallen for Michigan Capitol Confidential. According to Mozena, data centers are some of the “dumbest things a state can subsidize.”

“Developers say that companies will come to town to be near the data center, but only a tiny fraction of very specific industries, such as high-speed Wall Street trading firms, need to be physically close to the data centers they’re using.”

How did governments so easily fall for the data center jobs scam? After all, not needing to be close to where your data is located has been the foundational promise of the Internet and the Cloud. “Tax breaks for data centers are a fortunately rare phenomenon—tax policy that is precisely wrong,” Andrew Leahey wrote for Bloomberg. “They represent misguided state fiscal strategy, and the tide must be stemmed before more states succumb to the mistaken notion of ‘investing’ in data centers with taxpayer money.”

A typical data center is designed to last 15 to 20 years. Many can have no more than a handful of people working there at a time, “ensuring that workers were alone most of their shifts,” Julia Velkova wrote for The Information Society. Work is tedious, dull and isolating. Job security is weak. There is constant talk of further automation. “People are afraid,” a worker said. Some hope that they might get to finish their careers because “we are much cheaper than the robots.”

Meanwhile, a propaganda policy statement from the Irish government claimed that “employment in data centres are high value jobs.” Sorry, no. There are more security guards in a typical data center than technicians. Technical jobs are basic and involve shift work. And any good jobs? They fly people in for the day from headquarters. In Ireland, by the early 2020s, data centers were employing less than 1% of the workforce while using more than 20% of the electricity, with government ministers absolutely refusing to release figures on exactly how many direct jobs data centers were delivering because the numbers were so embarrassingly low.

A typical data center provides 10% of the jobs of other industries while consuming 10 to 50 times the amount of electricity, and who knows how much water. A Google data center job was found to cost hundreds of times more in electricity than an average Norwegian job. Google came to the country demanding 5% of its electricity, while promising about 100 jobs. The local and national politicians thought it was a fantastic deal. For the remaining 95% of electricity, Norway was getting three million jobs. In neighboring Sweden, when Facebook announced a data center plan, the country’s business propaganda machine went into overdrive, promising 30,000 jobs from the new industry. Facebook initially delivered 56 jobs, rising to about 90, while using as much electricity as a town of 27,000 people.

In the USA, it was estimated that data centers delivered five to ten jobs per acre, whereas other employers delivered about 50 jobs per acre. A city north of Paris calculated that the employment rate for a data center in the area was one full-time employee per 10,000 square meters, when the average for other employers in the area was 50, according to Gauthier Roussilhe, who specializes in the environmental challenges of digitalization. Almost every day you can read a story about a data center planning to build on 130 acres of land while promising 15 to 30 jobs. A mega Facebook data center employed about 400 workers, while the similarly sized Mall of America employed 11,000 people: 28 times more staff.

Not delivering jobs is actually a good thing, according to the Data Center Coalition. Who wants those silly jobs, anyway? “Not having lots of workers driving to and from data centers saves localities from having to pay for roads, emergency services and schools,” the Coalition said, with a wink and a nod.

Data centers are noisy and smelly

You do not want to live close to a data center. Having one near your home is like having a lawnmower running in your living room 24/7, as one local resident described it. Residents talked about low-pitched roars interspersed with high-frequency screeches, as the whir of loud fans echoed through the air. A growing body of research shows that the type of chronic noise emitted by data centers is a hidden health threat that increases the risk of hypertension, stroke and heart attacks. As Zac Amos, writing for HackerNoon, explained:

“Many data centers have on-site generators. Their cooling systems—essential for keeping hardware operational—contain intake and exhaust fans, which are objectionably loud. They produce between 55 and 85 dB typically. The noise is even more noticeable in rural areas where massive, nondescript buildings replace spaces that used to be forests or farmland. Are data centers noisy at night? Most are since they run around the clock. Even if their volume doesn’t increase after hours, their loudness is more noticeable when it gets quiet. People often describe the noise as a buzzing, tinny whining or low-pitched roar.”

According to Christopher Tozzi, writing for Data Center Knowledge:

“Hundreds of servers operating in a small space can create noise levels of up to 96 dB(A). At the same time, the ancillary equipment that data centers depend on, like the HVAC systems that cool servers or generators that serve as backup power sources, add to the noise. These systems can be especially noisy on the outside of a data center, contributing to noise pollution in the neighborhoods where data centers are located … They’re becoming even noisier as businesses find ways to pack ever-greater densities of equipment into data centers, and as they expand the power and cooling systems necessary to support that equipment.”

Antonio Olivo, writing for The Washington Post, told a story about Carlos Yanes and his family:

“Carlos Yanes believes he can tell when the world’s internet activity spikes most nights. It’s when he hears the sounds of revving machinery, followed by a whirring peal of exhaust fans that are part of the computer equipment cooling system inside an Amazon Web Services data center about 600 feet from his house. The sound keeps him awake, usually while nursing a headache brought on by the noise, and has largely driven his family out of the upstairs portion of their Great Oaks home, where the sound is loudest.”

With the evil bitcoin data centers, it’s even worse. Andrew Chow wrote for Science Friday:

“Residents of the small town of Granbury, Texas, say bitcoin is more than just a figurative headache. Soon after a company opened up a bitcoin mine there a couple years ago, locals started experiencing excruciating migraines, hearing loss, nausea, panic attacks, and more. Several people even ended up in the emergency room. The culprit? Noise from the mine’s cooling fans.”

Air pollution from power plants and backup diesel generators that supply electricity to data centers is “expected to result in as many as 1,300 premature deaths a year by 2030 in the United States,” a study by University of California and Caltech scientists found. “Total public health costs from cancers, asthma, other diseases, and missed work and school days are approaching an estimated $20 billion a year.”

Data center energy surge

Big Tech’s vaunted data center energy efficiency gains were not what they seemed. Despite substantial progress between 2007 and 2018, energy efficiency didn’t actually improve much after that. Amazon, Google, Microsoft and Meta more than doubled their energy demand between 2017 and 2021. Driven by AI, emissions figures from data centers were expected to double, treble or quadruple by 2030. Were AI-driven data centers adding a Germany-worth or a Japan-worth of electricity every couple of years? Was one data center using as much electricity as three million people? Did new data centers come with nuclear power plants attached?

“Something unusual is happening in America,” The New York Times reported. “Demand for electricity, which has stayed largely flat for two decades, has begun to surge.” In little old Ireland, by the early 2020s, data centers were consuming over 20% of Irish electricity, more than every city, town and village in the country.

Data centers often have an even bigger impact on a region’s electricity supply than the electricity they actually use, because of a practice called ‘air bookings’, where Big Tech locks up future grid capacity just in case it needs it. This blocks development in the local area. In Skåne, Sweden, for example, “Microsoft booked so much electricity from the grid in the Malmö region that the local Swedish bread company Pågen could no longer build a bread-baking factory in the area and had to expand elsewhere,” said Julia Velkova, assistant professor at the University of Helsinki.

All this surge in data center electricity demand meant that coal plants were being kept in service longer, while oil, gas and nuclear stocks were booming. In the USA, “residents in the low-income, largely minority neighborhood of North Omaha celebrated when they learned a 1950s-era power plant nearby would finally stop burning coal,” Evan Halper wrote for The Washington Post. The good news didn’t last long because of the explosive growth of data centers. The same happened in Georgia, where “one of the country’s largest electric utilities, Southern Company, made a splash when it announced it would retire most of its coal-fired power plants,” Emily Jones wrote for Grist. However, because of an “extraordinary spike in demand for electricity” driven by data center growth, the utility had to cancel those plans.

Even the data centers that claimed to use wind, solar or hydro energy had ‘backup’ diesel generators that were getting used more and more as their frenzied growth maxed out local electricity grids. Some were also questioning how Big Tech was monopolizing wind, solar, etc., that could instead be used to help homes and local businesses move away from fossil fuels. “For a lot of individuals and politicians, the fact that we use energy from newly constructed wind parks for the benefit of hyper-scale data centers feels out of balance,” explained Julia Krauwer, a technology analyst at Dutch bank ABN Amro.

Data center energy scam

For years, energy efficiency was the great big shining bright green fabulously good spinning story of the Big Tech data center love of and care for our environment. This was quite a feat of PR spinning, when you consider that a small area of a data center in the 2020s holds more computing power than existed in the entire world around 1980, and that its electricity demand per square meter can be 50–100 times greater than that of a normal office building. Despite this, the ‘efficiency’ story spun from the early 2000s was that while data was exploding, the energy required to run these data centers was not growing at anywhere near the same pace. Added to that, whatever energy was needed was ‘renewable’, and we all know that renewable energy has zero cost to the environment, don’t we? So all’s good in techland. The Efficiency Bros have come riding to the rescue yet again. Data centers were energy efficient, and getting more efficient every year, to the point where they would soon dematerialize and become invisible, requiring nothing to run on other than fresh air. Woo-hoo!

Behind the scenes, in reality, Big Tech worked very hard to manipulate, lie, greenwash, scam and spin like a trumpista, as it piled on the creative, innovative accounting. “Data center emissions were probably 662% higher than big tech claims,” Isabel O’Brien wrote for The Guardian.

“Meta, for example, reports its official scope 2 emissions for 2022 as 273 metric tons CO2 equivalent—all of that attributable to data centers. Under the location-based accounting system, that number jumps to more than 3.8m metric tons of CO2 equivalent for data centers alone—a more than 19,000 times increase.”

The way data centers measure energy efficiency is through a metric called Power Usage Effectiveness (PUE): the total energy the facility consumes divided by the energy consumed by the IT equipment itself. The closer the PUE is to 1, the more ‘efficient’ the data center is. Let us park for a moment the awkward truth that improvements in technological efficiency have rarely if ever led to overall reductions in energy use—quite the opposite, actually. Park that nasty thought for a moment. Here’s another nasty thought to park. Much of the PUE efficiency gain was achieved by massively increasing the use of water for cooling. Another way was by churning through computer servers and creating mountains of e-waste. Energy use was “being reduced by just throwing more material at the problem,” Johann Boedecker, founder of the circular economy consultancy Pentatonic, told the Financial Times. Data centers became a prime source of e-waste as they dumped perfectly good working servers, chasing the new server with the slightest of slight gains in energy efficiency. Again, we see how vital it is not to allow the focus to fall on a single metric. We must calculate the true and total long-term costs to our environment—to the water, the air, the soil, the biodiversity, the climate.
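As a minimal illustration of how the metric works (the numbers below are invented for illustration, not taken from any real facility): PUE divides everything the building draws from the grid by what the computing equipment alone consumes, so cooling, lighting and power-conversion losses all push the figure above 1.

```python
# Illustrative PUE calculation: PUE = total facility energy / IT equipment energy.
# All figures below are made up for illustration only.
it_energy_mwh = 1_000        # energy consumed by servers, storage and network gear
cooling_mwh = 150            # cooling overhead
other_overhead_mwh = 50      # lighting, power conversion losses, etc.

total_facility_mwh = it_energy_mwh + cooling_mwh + other_overhead_mwh
pue = total_facility_mwh / it_energy_mwh
print(f"PUE: {pue:.2f}")     # 1.20; a PUE of exactly 1.0 would mean zero overhead
```

Note that nothing in the ratio captures water: a facility can lower its PUE by shifting cooling load from electricity to evaporated water, which is precisely the trade-off described above.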

It’s not drought, it’s pillage: data centers take the water

Using a slew of aliases to buy land and make sweet deals, getting secret, historic tax breaks, getting electricity at less than half of what ordinary people pay, being sold public land for less than half the market value, slurping down the cheapest of cheap water like there’s no tomorrow, this is how Big Tech rolls. “Google faced criticism for its plans to build a massive data center in Mesa, Arizona, after it was revealed that the company would pay a lower water rate than most residents,” Eric Olson, Anne Grau and Taylor Tipton reported for the University of Tulsa. “The deal, negotiated with the city, allowed Google to pay $6.08 per 1,000 gallons of water, while residents paid $10.80 per 1,000 gallons.”

In Uruguay, the people thought their fresh water was so plentiful they wrote it into their constitution as a basic citizen right. Along came climate change. In 2022, “Montevideo was the first capital in the world to arrive at ‘day zero’ and run out of potable water,” environmentalist Eduardo Gudynas told Mongabay. The citizens were forced to drink salty water from the Rio de la Plata river. Meanwhile, Google had plans for Uruguay’s fresh water and, as usual, it wasn’t telling anyone. Citizens had to go to court to force Google to disclose that its cooling towers would need 7.6 million liters (2.0 million gallons) of fresh water a day. “No es sequia, es saqueo!”—“It’s not drought, it’s pillage,” the citizens said. While over in Chile, they were asking, “With Google as my neighbor, will there still be water?”

When it comes down to a choice between a poor person drinking and the data drinking, it’s clear that the data comes first. “When Hurricane Maria and Hurricane Irma devastated Puerto Rico, the data centers on the island did not go hungry for power or thirsty for water,” researcher Steven Gonzalez, whose family is from the island, told me. The data drank its fill, even as citizens went without access to fresh water for months. In Nigeria, people had to “buy water to cook,” Felix Adebayo, a resident of Lagos, told Abdallah Taha and Alfred Olufemi. Meanwhile, close by, at least ten data centers drank their fill.

We are only at the beginning of the Big Tech war for water and other resources. It’s going to get much worse. The Big Tech Nazis are waving their arms and telling us who they truly are. We should believe them. The Big Tech Nazis would think nothing of wiping out half the planet once it helped them build their rocket to Mars, or whatever technology they’re building that will help them consolidate power.

There is a global water crisis. Lands are desertifying at frightening rates as soils collapse due to the stresses of overconsumption driven by Big Tech. We have pumped so much groundwater in the last 50 years that we have altered the Earth’s tilt. And now along comes ravenous AI, its thirst doubling and doubling again. Until recently, most data centers have been getting water so cheap they haven’t even bothered managing it. Only constant public pressure has any chance of reining in the excesses of Big Tech.

Big Tech’s water use is 100 times bigger than expected

The total amount of water consumed by Big Tech could be much, much higher than what it nominally discloses. “When it comes to water, Big Tech only shows its direct water consumption, while hiding its real water footprint,” Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, told me. Based on his research, “Apple’s real water footprint is 100 times what it shows for its direct water consumption. By some calculations, I found that Apple’s real water footprint was about 300-600 billion liters in 2023, which is comparable to Coca Cola’s overall annual water footprint.” Shaolei went on to tell me that:

“More importantly, water footprint has “colors”: Coca Cola’s water footprint is largely “green water footprint” (i.e., water contained in soils and only usable for plants). On the other hand, Big Tech’s water footprint is mostly blue water footprint (i.e., water in surface water and groundwater that is directly usable for humans). The real issue with AI’s water footprint is a lot more (10-100 times) serious. Each Big Tech is a hidden “Coca Cola” in terms of the water footprint.”

In a town called The Dalles in Oregon, USA, local people were worried that Google’s water use was soaring. As is so often the case, the city officials, who had given Google millions in tax breaks, had no intention of letting anyone know how much water Google was using. It was left to a regional paper, the Oregonian, to try to find out, and the matter ended up in court. City officials, bankrolled by Google, argued in court that Google’s use of scarce public water was a “trade secret”. After more than a year of proceedings, city officials were forced to tell their own citizens how much public water Google was using.

“But most troubling in the affair,” Binoy Kampmark wrote for Scoop, “leaving aside the lamentable conduct of public officials, was the willingness of a private company to bankroll a state entity in preventing access to public records.” Actually, it was even worse than that, as Erin Kissane, a respected technology writer, informed me. “Rather than the Oregonian suing for access, the city of The Dalles actually sued the Oregonian in a ‘reverse public records lawsuit’ to prevent the paper from disclosing the data, despite their county district attorney having already ruled that the information should be disclosed. Google funded the suit until the press got too bad and then pulled out, so the city settled.”

In another story of “trade secrets”, David Wren, writing for the Post and Courier about Dorchester County, USA, warned that the amount of public water Google was demanding “is a closely guarded ‘trade secret.’” Imperious Google had imposed a gag order on Dorchester County officials, warning them that they must not tell the public anything about the Google project, particularly how much public water Google was slurping. Again, dragged kicking and screaming, KGB Google was finally forced to tell the public of Dorchester how much public water it was using. “After fighting its disclosure for more than a year, Dorchester County has agreed to publicize the amount of water used at a data center Google is building, reversing its previous stance that the information is a closely guarded trade secret that shouldn’t be shared with the public,” David Wren wrote for the Post and Courier. A victory of sorts for the community? Except that the disclosure also revealed that Google had demanded that, in the event of any natural emergency, its data center would have priority on the water. Like everywhere else, Big Tech demands that its data drink its fill before people get to drink.

Big Tech lies about its water use

There is a reason why Big Tech has been so super-secretive about its water use. It was getting water so cheap that the water wasn’t even worth measuring. Water was an invisible externality, and because it was invisible to the public, Big Tech could keep telling the story of how it was becoming more and more efficient, more and more green, more and more “in the Cloud.” But the Cloud is on the ground. Big Tech abuses and misuses water at an alarming rate, treating it as an essentially free resource, and to keep doing that, it was essential that the public not know what was happening.

“Water consumption in data centers is super embarrassing,” a data center designer stated as far back as 2009. “It just doesn’t feel responsible.” Survey after survey showed that about 60% of data centers saw “no business justification for collecting water usage data.” Think about that. They were getting the water so cheap they didn’t even bother metering it. In a Microsoft data center in San Antonio, Texas, it was found that the actual cost of water should have been 11 times higher than what the company was paying. In drought-stricken Holland, a Microsoft data center slurped 84 million liters of drinking water in one year, when the local authority said the facility would only need 12 to 20 million liters.

Data centers know they have been hugely abusive in their water use and are thus desperate to hide usage figures from the general public. “There were actually water documents tracking how much this data-center campus was using,” journalist Karen Ho explained about a Microsoft data center. “But when the city came back with documents about that, everything was blacked out. They said that it was proprietary to Microsoft and therefore they couldn’t provide that information.” Public water use is proprietary to Microsoft. Sounds about right.

“The reason there’s not a lot of transparency, simply put, I think most companies don’t have a good story here,” stated Kyle Myers, a senior manager at a data center company. Data centers have a choice. They can either consume less water and use more electricity, or use less electricity and consume more water. “Water is super cheap,” Myers said. “And so people make the financial decision that it makes sense to consume water.”

In 2023, Bluefield Research estimated that, on a global basis, data centers were using more than 360 billion liters of water a year, including water used in energy generation. It predicted this figure would rise to over 600 billion liters a year by 2030. However, China Water Risk estimated that in 2024 China’s data centers alone used about 1.3 trillion liters of water, which is the equivalent of what 26 million people need. This amount would grow to more than 3 trillion liters a year by 2030, due to the explosive growth of AI. Which figures are correct? We don’t know.
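For scale, here is a rough per-person reading of the China Water Risk estimate, using only the figures quoted above (the per-day number is my own derivation from them):

```python
# Per-person scale of the China Water Risk estimate above:
# ~1.3 trillion liters a year, described as what 26 million people need.
annual_liters = 1.3e12
people_equivalent = 26_000_000

per_person_per_year = annual_liters / people_equivalent
print(f"Liters per person per year: {per_person_per_year:,.0f}")        # 50,000
print(f"Liters per person per day:  {per_person_per_year / 365:,.0f}")  # ~137
```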