On Tuesday, prominent figures in the tech world called for a six-month pause on advanced artificial intelligence. The open letter, signed by business leaders, researchers and academics, argued that time was needed to come up with effective safety protocols for AI development. It warned of “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Last week Google, OpenAI, and Microsoft all launched or updated their AI chatbots. Trained on vast data sets taken from the internet, these algorithms can generate a range of written responses and imagery on request. They have even been caught recycling each other’s fake news, which is surely some kind of milestone.
The proposed moratorium feels like too little, too late in terms of controlling the growth of this technology. Whichever company has the lead – currently OpenAI, whose biggest investor is Microsoft – will resist any measures allowing the competition to catch up. If necessary, they will tell US lawmakers these algorithms are part of a crucial arms race with China. And even if developments at the bleeding edge are slowed down somehow, it is unlikely to save the estimated 300 million jobs at risk of automation.
The chatbots are only the tip of the iceberg: dozens of new AI tools are being released every week. Investors are piling money into AI-related companies and products, which are being put to use across the economy, from recruitment to inventory management. Expecting western governments to place meaningful limitations on a business with this kind of political and financial clout is like expecting a banana republic to limit the export of bananas.
Still, the open letter highlights what has long been a curious aspect of artificial intelligence: the failure of Big Tech, for all its power, to come up with a positive story about it. Generative algorithms have been casually made available to the public as though they were just another app, leaving the field open for critics to argue that this is actually a scary paradigm shift.
Why have the interested parties not done more to secure a positive reception for AI? One reason is surely that, on a cultural level and even a psychological one, the implications of this technology are too jarring. Put simply, AI doesn’t fit into the big marketing message of Silicon Valley, and of post-industrial society in general. That message has long encouraged us to see ourselves as sources of creative and expressive potential (or more cynically, of intellectual property).
Until now, digital platforms have been presented to the public – sorry, to “creators” – as tools for realising their talents and telling their unique stories. Last year, I kept running into an ad for the podcasting platform Acast that summarised the idea nicely. It told listeners that if they enjoyed talking to their mates, they were ready to become podcasters. This is the shtick that has underpinned digital media for the past fifteen years: you – yes you! – have something special to contribute; you just need the right software tools to realise your potential, and the right platform to find an audience. Call it the creator ideology: the master narrative that surrounds the so-called “creator economy.”
It’s very difficult to square the latest artificial intelligence with this master narrative. The chatbots and image generators – not to mention the AI composers, video editors, and so on – are clearly doing what was meant to make us creators special, and they are doing it with rather prosaic methods: finding patterns in data sets and choosing responses based on statistical probability.
This tension between tech narrative and tech reality has been brewing for a long time, but it was not inevitable. Consider another, not too distant era when information technology faced problems with public anxiety. In the 1960s, rapid advances in computing were spearheaded, like today, by big companies with close ties to government and national security concerns. It was all very unfamiliar and disconcerting, as mainframe computers not only entered offices, but started counting votes at election time and operating nuclear missile systems that threatened Cold War armageddon.
So the leading US computing firm, IBM, hired the designers Charles and Ray Eames to reassure the public. In numerous exhibits, the Eameses portrayed new technology as part of a story of human progress that was both familiar and profound. A good example is their multi-screen film installation, Think, which showed inside IBM’s striking egg-shaped pavilion at the 1964 World’s Fair in New York.
The film portrayed computing as an extension of the kind of problem-solving that has always been necessary to create order in the world. In his narration, Charles Eames talks about railways and urban planning. He explains that, to solve problems rationally and effectively, we have to distil the relevant factors into abstract representations that can be modelled; mathematical algorithms are just the most sophisticated of these.
The film ends by suggesting to viewers that they already do this kind of thinking in their personal lives, using the example of a housewife struggling to sketch an appropriate seating plan for a dinner party. It’s a brilliantly clever vignette, clearly communicating the idea of computing as a servant of normality.
The Eameses’ work for IBM illustrates how the prevailing script of popular culture determines the ways technology can be presented and understood. Maybe there is an alternative timeline where AI is being promoted as just a more efficient way for complex societies to handle their organisational challenges, or an opportunity to reflect on how we ourselves produce and appreciate art. These systems are loosely modelled on human neural networks, after all.
Back in our actual present, there is not much scope for portraying technological modernity as a collective, organised endeavour, let alone one that can be understood through something as conventional as a suburban dinner party. The reason for this can also be traced back to the 1960s. It was towards the end of that decade that an emerging Silicon Valley came under the influence of the west-coast counterculture, which infused it with a vision of technology as a force for personal liberation. By the 1990s, libertarian prophets were presenting the World Wide Web as a utopian project to flatten institutional hierarchies, allowing people to find their authentic selves in a world of decentralised collaboration and exchange.
This techno-optimism is not the only source of the creator ideology. Aspects of it are reflected in everything from children’s books to celebrity culture and mainstream advertising. In essence, it conveys the belief that we should aspire to express our inner self to the world, especially if this means breaking the mould or trying something bold and original. In its advanced form, as seen on social media, it makes individual worth seem conditional on the ability to command an audience through force of personality.
It is difficult, within this framework, to see algorithms that mimic human creativity as anything other than a threat. Yes, AI output remains cold and formulaic, but that is the most demoralising part; it draws attention to the fact that our own efforts are often not much better.
Artificial intelligence will not destroy the creator ideology right away. Faced with a contradiction, people tend to intensify their ideals rather than discard them. So now the emerging consensus is that human creators will simply have to work harder at being unique, since they will soon be competing not just with each other, but with AI too. In other words, the demonstration of authenticity will become even more the standard of human value, as though we were high-denomination bank notes. In the words of influential economist and blogger Tyler Cowen, “you will need to be showing off all the time” to earn your audience, for otherwise, “how is the world to know you are anything other than a bot with a human face?”
But technology will surely erode the foundations of this worldview over time, much as modern science did to Christianity. What comes next is anyone’s guess.