Cloudwaste

Heads in the Cloud

Cloud is an abstraction and a distraction. Like so much digital, it gives the impression of lightness, impermanence, of something that is benign, of something that is good for the environment. Cloud and digital act to remove us from our physical environment and its responsibilities. We don’t see the impact of our actions. We don’t feel or sense that what we do in cloudland affects the land and the sea. We lead ourselves to believe that digital actions have no physical consequences. 

At least our smartphones are in our hands so there is something physical there. Our computers we see and feel. If we listen carefully, we can hear the fans whir as they do their job of cooling, and sometimes we can even feel the heat these machines emit. We can turn them off if we want to. We can hold onto a USB stick or portable hard drive. 

We rarely see the data centers, though, that make up the Cloud. When the Range International Information Group data center was completed in Langfang, China, in 2016, it took up 6.3 million square feet (585,000 square meters) of space, which is roughly the equivalent of 110 football fields. It’s estimated that there are over eight million data centers in the world. These computing goliaths are up every minute of every hour of every day, eating electricity, sweating heat, being cooled by enormous fans, and belching pollution. 

Some are proposing a more environmentally friendly approach to data center design, advising that they should be much smaller and located within cities. That would mean they would be closer to the source of data interaction, thus reducing transmission energy. It would also allow possibilities for the waste heat that is emitted by the data center to be used to heat nearby buildings. The principle is: the closer the better. The closer, the more useful waste. The greater the distance, the more useless waste. (The counter-argument is that the massive data centers run by the tech giants are much better designed and managed.) 

Another principle that data center design needs to embrace is maximum utilization. A 2019 analysis by 451 Research found that average server utilization rates in corporate data centers were a mere 18%. In other words, 82% of the time, servers were running idle, burning electricity and pumping out pollution for zero benefit. A 2015 McKinsey analysis found even lower utilization rates: between 5% and 15%. 

Amazon has claimed that server utilization rates for Cloud providers are above 50%. Even at that rate, the servers sit idle almost half the time, drinking electricity. If a machine is eating electricity, it should be working, not slouching around doing hardly anything, belching pollution. 

Saving one gigabyte of data to the Cloud consumes approximately 0.015 kWh, while saving to your local hard disk consumes 0.000005 kWh, Justin Adamson explained in Stanford Magazine in 2017. It is therefore 3,000 times more energy intensive to save to the Cloud than to your hard disk. “When you choose to save your document to your computer, your hard drive spins up, and its mechanical arm swings across a large magnetic platter to magnetize or demagnetize the tiny cells that represent your information,” Adamson wrote. 
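Adamson's figures make the ratio easy to check. A minimal sketch of the arithmetic, using only the two numbers quoted above:

```python
# Energy per gigabyte saved, per Adamson's figures in Stanford Magazine.
CLOUD_KWH_PER_GB = 0.015      # saving 1 GB to the Cloud
LOCAL_KWH_PER_GB = 0.000005   # saving 1 GB to a local hard disk

ratio = CLOUD_KWH_PER_GB / LOCAL_KWH_PER_GB
print(f"Saving to the Cloud is {ratio:,.0f} times more energy intensive")
# → Saving to the Cloud is 3,000 times more energy intensive
```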

Saving a text document to the Cloud requires it to be partitioned into a stream of data packets, which then speed towards a router and then off out into the network, whizzing down lines, passing through more routers and servers, switches, repeaters, approaching the speed of light, until they enter the data center, where they will be stored and backed up, and wait until they are called again, when they will start the journey back across the airwaves, wires, routers and servers. 

The greater the distance, the greater the cost to the environment. The more wireless the journey, the greater the cost to the environment. The most energy-efficient way to transfer data is through wires.

If 90% of data that is created is crap, then 90% of the activities that occur in the Cloud are waste management. There’s an old saying: What do you get when you cross a fox with a chicken? You get a fox, because the fox eats the chicken. The fox is all data and the chicken is good data. All data eats the good data because the good data is much smaller than all data and all data smothers the good data. If you let all data grow, it will smother your will to do anything about it. 

All data eats the planet too because there is so much of it. If immediately after you’ve taken those 50 photos, you delete the 45 crap ones, then you’re doing something good for the environment. You’re also doing something good for yourself and whoever else might get enjoyment out of those five good photos. Over time, even as you save only the good ones, you will end up with 500 good ones and 50 that you really like. Take the 450 and store them locally on a hard drive or USB stick. 

With organizational data, it’s essential to have an archive (fox), where you put the rarely-if-ever-used data that must be kept for legal reasons. This archive will use cold, deep storage systems, where data will not be so immediately accessible. Such storage systems can save up to 90% on energy costs. Put the fox into cold, deep storage. 

Static can be good

Always choose the least energy-intensive option to get the job done. If less does the job, use less. A database-driven website is a bit like having a seven-seater car. If there are only two in your household, do you really need it? Perhaps a simpler, more energy-efficient static website is better? I used to make these sorts of arguments a lot about 15 years ago, and then for whatever reason I stopped making them and started using databases for our websites, because everyone else was doing it and it was more convenient. 

Most websites today are database driven. This means that the content, code and other components are stored and delivered from a database. The pages on the site don’t actually exist permanently. Rather, when you click on a link for a page or type the URL in, the page is dynamically created, going into the database to fetch all the relevant content, code and components. A database website is very effective when things are likely to be constantly changing, for example stock levels for a particular product. 

The alternative to a database website is what is called a “static website.” This is a website whose pages permanently exist. A great many websites do not need to be dynamically created from a database because they don’t change much. They can work perfectly well as static sites. A static page will load faster and will require less processing, thus saving energy. That’s good for the customer and it’s good for the environment. 
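The difference can be sketched in a few lines of Python. This is a toy model, not any real framework: a dictionary stands in for the database, and the page names are made up. The point is when the rendering work happens, not how.

```python
# Toy "database" of page content (hypothetical pages, for illustration only).
ARTICLES = {"about": "We sell widgets.", "contact": "Email us."}

def dynamic_page(slug):
    # Database-driven: the page is rebuilt on every single request,
    # with a database lookup each time.
    body = ARTICLES[slug]
    return f"<html><body><h1>{slug}</h1><p>{body}</p></body></html>"

# Static: render every page once, up front, and keep the finished HTML.
STATIC_SITE = {slug: dynamic_page(slug) for slug in ARTICLES}

def static_page(slug):
    # Serving is now just a lookup of ready-made HTML -- no per-request work.
    return STATIC_SITE[slug]
```

Both functions return identical HTML. The static version does the rendering once at build time; the dynamic version repeats it on every request, and that repeated per-request processing is where the extra energy goes.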

We did an experiment on my website gerrymcgovern.com, where we tested how long it took the site to load on a smartphone direct from a database and as a static website. 

There are two basic ways to measure how fast a page loads. The first measure concerns how quickly some content becomes visible, preferably text, so that there’s something you can read. Based on this metric, the static page loaded in 2.9 seconds and the database-driven version took 4.7 seconds to load. The second metric analyzes how quickly the page becomes usable, so how long until you can start booking your flight or whatever. Based on this metric, the static page loaded in 2.9 seconds and the database-driven version took 5 seconds to load. 

What’s a couple of seconds, you might ask?

  1. For every second faster Walmart.com was able to make its pages load, it had a 2% lift in conversions. 
  2. Firefox reduced its page load time by 2.2 seconds and had 10 million extra downloads as a result. 
  3. Financial Times found that a one-second delay in page loads caused a 4.9% drop in the number of articles read. A three-second delay caused a 7.2% drop.
  4. “A site that loads in 3 seconds experiences 22% fewer page views, a 50% higher bounce rate, and 22% fewer conversions than a site that loads in 1 second,” a study by Radware found. 
  5. Google discovered that even a 400-millisecond delay could result in eight million fewer searches per day. (That’s less than half a second.)
  6. Amazon found that a page load slowdown of only one second could cost it $1.6 billion in sales each year. 

Time matters. Static websites are faster. I have seen analysis that indicates that a static website can be up to 10 times faster than an equivalent database-driven site. In our analysis, when the page was static, 378 KB of data was being transferred. When the page was served from a database, 701 KB was being transferred. That’s almost twice the data transfer for a database-driven page. Imagine all the energy and time that could be saved if the millions of websites that don’t need to be database driven migrated to static. We must think about the energy. We must think about the data. We must think about the pollution.

If static websites are better in many situations, why aren’t they used more? Why have we in fact moved in the opposite direction? “Progress,” “innovation,” the desire to be seen to be using the “latest” technology. A database is more “advanced” than a static website, and everyone wants to be seen as more advanced. We are willing zombies in the march of progress, assuming that innovation and what is new are always better. We are so enamored by more power and more processing and more complexity that we believe that more always delivers better, and that we must always have more. We don’t understand “enough” power, we only understand “more” power. We want the highest spec, always the highest spec. 

We must become much more questioning of energy, power, innovation, new things, and the impact that all our actions—both digital and physical—have on this beautiful and stressed planet we call home. How much is needed to do the job? If we can do the job with 1X power, let’s do it with 1X. Why do we need 10X when 1X will do? We can stop creating 9X pollution and still have everything we want. Use what is needed to get the job done, no more, no less.

Search

Google estimates that carrying out a single search takes about 0.0003 kWh (1,080 joules) of energy. That’s the equivalent of leaving a 60-watt bulb on for 17 seconds. In 1999, it was estimated that Google handled one billion searches for the entire year. In 2019, there were 5.2 billion searches a day, or 1.9 trillion searches a year. That’s the equivalent of leaving a 60-watt bulb on for one million years.

1.9 trillion yearly searches consume 569,400,000 kWh of electricity. According to RenSMART this creates 161,180,000 kg of CO2. To offset the pollution from 1.9 trillion searches would require the planting of 16 million trees. 
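These search numbers can be reproduced with simple arithmetic. A sketch, assuming a grid factor of about 0.283 kg of CO2 per kWh — a figure back-derived from the totals quoted above, not one stated in the text:

```python
SEARCHES_PER_DAY = 5.2e9       # 2019 figure from the text
KWH_PER_SEARCH = 0.0003
KG_CO2_PER_KWH = 0.283         # assumed grid factor, back-derived from the text

yearly_searches = SEARCHES_PER_DAY * 365            # ≈ 1.9 trillion
yearly_kwh = yearly_searches * KWH_PER_SEARCH       # ≈ 569,400,000 kWh
yearly_kg_co2 = yearly_kwh * KG_CO2_PER_KWH         # ≈ 161 million kg

# The 60-watt-bulb comparison: joules per search / 60 W = seconds of bulb time.
bulb_seconds = yearly_searches * 1080 / 60
bulb_years = bulb_seconds / (365 * 24 * 3600)       # ≈ one million years
print(f"{yearly_kwh:,.0f} kWh, {yearly_kg_co2:,.0f} kg CO2, {bulb_years:,.0f} bulb-years")
```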

Like so much else in digital, Google search is a new weight for the Earth to carry, one that is growing in leaps and bounds. In 1999, for example, to absorb the pollution from one billion searches, it would only have required the planting of 8,500 trees. In 20 years, the weight of search has become 1,900 times heavier for the Earth to bear.

If a single search costs 0.0003 kWh of energy, then, translated into calories, one search burns the equivalent of 0.26 calories in energy resources. If we assume that a typical search takes 10 seconds, then, based on the Basal Metabolic Rate, the human body burns 0.16 calories during a search. Thus, every search costs the Earth 0.1 calories more than it costs the searcher. If we were running at 5 mph (8 kmph) for 10 seconds, we would burn about 1.7 calories. Walking at 2.5 mph (4 kmph), we would burn 0.85 calories in 10 seconds. 
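The calorie comparison works out as follows. The Basal Metabolic Rate of roughly 1,400 kilocalories a day is an assumption chosen to match the 0.16-calorie figure above:

```python
KCAL_PER_KWH = 860                  # 1 kWh ≈ 860 kilocalories
search_kcal = 0.0003 * KCAL_PER_KWH             # ≈ 0.26 kcal per search

BMR_KCAL_PER_DAY = 1400             # assumed resting metabolic rate
searcher_kcal = BMR_KCAL_PER_DAY / 86400 * 10   # ≈ 0.16 kcal in 10 seconds

print(round(search_kcal, 2), round(searcher_kcal, 2),
      round(search_kcal - searcher_kcal, 1))
# → 0.26 0.16 0.1
```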

Much of our interaction with digital is sedentary and slow from a human-energy-consumption perspective. According to a 2019 WHO study, 80% of teenagers globally are too inactive, and that inactivity could shorten their lives. Digital makes us burn more of the Earth’s resources than our own. Our own energy resources, of course, originally resided in the Earth as plants or animals. However, now they reside in us, and if we don’t burn them, we waste energy and potentially damage our bodies through lack of exercise. 

Some of this energy the body will try not to waste. It will turn it into fat, which is the body’s battery. For many, this battery will grow in size and corrode because it will not be used. The danger of digital is that the convenience it delivers creates short-term highs with long-term costs for us and the planet.

Google claims that their data centers are highly efficient and use sustainable energy where possible. However, we know that most digital energy is used not in the processing, storage or transmission of data but in the manufacture of the machines used to do the work. It’s estimated that Google uses about two million servers at any one time. These servers will have consumed most of their energy during their manufacture. They will have short, underutilized lives, and when they die, they will be dumped as toxic waste. 

Twitter, Facebook

It’s estimated that each tweet consumes about 90 joules of electricity, roughly a twelfth of the 1,080 joules a typical search consumes. (A tweet with an image would consume considerably more.) Each tweet will thus emit about 0.02 grams of CO2. 

In 2010, there were roughly 50 million tweets per day, delivering about one ton of CO2 into the atmosphere or 365 tons per year. By 2019, there were an average of 500 million tweets per day. Thus, the annual pollution for Twitter would be in the region of 3,650 tons. We’d need to plant about 330,000 trees to offset that. Facebook created 339,000 tons of CO2 in 2018, according to Statista. We’d need to plant 31 million trees to offset that pollution. 
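The Twitter figures follow from the per-tweet estimate. A sketch; the tree absorption rate of about 11 kg of CO2 per tree per year is an assumption back-derived from the offset figures quoted, not a number the text states:

```python
G_CO2_PER_TWEET = 0.02

def tons_co2_per_year(tweets_per_day):
    # grams per day -> metric tons per year
    return tweets_per_day * G_CO2_PER_TWEET * 365 / 1e6

tons_2010 = tons_co2_per_year(50e6)     # ≈ 365 tons
tons_2019 = tons_co2_per_year(500e6)    # ≈ 3,650 tons

KG_CO2_PER_TREE_YEAR = 11  # assumed absorption rate, back-derived from the text
trees = tons_2019 * 1000 / KG_CO2_PER_TREE_YEAR   # ≈ 330,000 trees
print(f"{tons_2010:,.0f} t, {tons_2019:,.0f} t, {trees:,.0f} trees")
# → 365 t, 3,650 t, 331,818 trees
```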

Key actions

Conserve the Earth’s energy. Spend your own. Focus on the Earth Experience.

Only use as much digital as is needed to get the job done. Use Google less. Use your brain and memory more. 

Keep your data local. Only keep the essential, current stuff in the Cloud.
