If we want a taste of the dystopian future that Big Tech is planning for us with AI, we need look no further than the UK Post Office scandal. My first job, back in 1984, was in Dublin for a small company that was growing fast but having problems getting paid. The owner had been posting out handwritten statements. He was advised to computerize his accounts so that he could send out printed statements. Clients would pay up faster because they would know the statement came from a computer. Psychologically, not a lot has changed since then. Many people still think computers have magical powers, that something printed from a computer is more likely to be accurate than something written by hand.
Over the years, I have seen the inside of so many appalling computer systems. Shoddy work. If bad computer code stank, you wouldn’t be able to walk into most offices. Massive, massive cost overruns. Cruel and torturous and tortuous interfaces. Terrible design, full of security holes and other flaws. Systems that had grown without any proper documentation or planning, layers upon layers. Multiple systems doing the same thing. Old systems no longer being used but still being kept on. Systems where, all the original programmers having left, nobody knew for sure how things worked. It was always seen as safer to add more code, because if you removed any you couldn’t be sure what depended on it.
In 1999, the UK Post Office introduced Horizon, a computer system developed by the Japanese company Fujitsu to manage tasks such as accounting and stocktaking. Almost as soon as it was launched, the staff in the post offices began to complain that the system was full of errors. What would they know, though? They were just ordinary people in small towns and villages. The computer system had cost millions and was installed by a multinational company. It could not be wrong. It was already too big to fail.
Rumors and scandal began to seep out. Accusations were made. These sub-postmasters and sub-postmistresses were stealing money, the computer system said. These people, who were at the heart of the local community, were robbers and rogues, the computer system said. The shame was enough to ruin lives. Marriages broke down. People died of heartbreak, ill health and addiction. More than 900 innocent people were prosecuted because the computer system lied and the management covered up the lies. Totally innocent people were sent to prison because the computer system lied. Big fines were levied. People were financially ruined.
They knew. Inside the Post Office they knew that the system was not working properly, pretty much from day one. To defend their decisions and their computer system, they were willing to ruin the lives of hundreds of innocent people. It took a whistleblower to expose them, and even then they fought and fought and fought to defend their system of lies. They said that the reality of what had happened was “impossible.” According to the computer system, reality was impossible, reality could not have happened, so everyone must believe the fiction the computer system and its supporters invented. Fraud investigator Ron Warmington told the BBC that it was all “unbelievably damning.” The scandal has been described as “the biggest single series of wrongful convictions in British legal history.” Lessons learned, then? Are you joking? Now, they’re all in on AI, with its cost-cutting promises and its magic bag of tricks.
If traditional computer systems were hard to understand, AI has a much deeper level of unknowable complexity. The sheer mass of data that AI has been fed, the multi-layered, multi-leveled learning process, the billions of parameters and trillions of connections and patterns, mean AI is beyond human understanding. Results are often “shocking, and awful,” as scientist Gary Marcus has stated. “I honestly don’t see an easy fix.” How can they fix it when AI’s designers don’t know how AI truly works? And that opacity is at least partly deliberate. AI is trained using data that, as computer science writer Larry Hardesty explained, “is fed to the bottom layer – the input layer – and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer.” The more layers, the greater the transformation, the greater the distance from input to output, and the harder it is to develop an audit trail. How exactly did AI make this decision? Show me everything, step by step. Sorry, not possible. You simply have to believe. You must believe that AI made the right decision given the data it had.
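To make Hardesty’s description concrete, here is a minimal sketch in Python of that layered multiply-and-add process. Everything in it is an illustrative assumption: the layer sizes, the random weights and the input are toy stand-ins, not anything from Horizon or any real production AI system.

```python
import numpy as np

# A toy feedforward network illustrating Hardesty's description:
# data enters at the input layer and is multiplied and added together,
# layer after layer, until it leaves the output layer transformed.
# All sizes and weights here are illustrative assumptions.

rng = np.random.default_rng(0)

layer_sizes = [8, 16, 16, 16, 4]  # input layer -> hidden layers -> output layer
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.standard_normal(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass data through each succeeding layer."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b                # multiplied and added together
        if i < len(weights) - 1:
            x = np.maximum(0.0, x)   # nonlinearity between layers
    return x

x = rng.standard_normal(8)  # data fed to the bottom (input) layer
y = forward(x)              # radically transformed at the output layer
print(y)
```

Even this toy has roughly 750 parameters, and asking why a single output value came out as it did already means untangling every multiplication and addition along the way. Scale that up to billions of parameters and the step-by-step audit trail is, for all practical purposes, gone.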