Will AI Replace Me?
Most likely yes, but there may be limits to the technological liquidation of everything
The Turing Test is a famous benchmark for Artificial Intelligence: if we see a human and a machine communicating through written text, can we tell who is the imposter? Now the computer scientist Nassim Dehouche has proposed an updated version, which should terrify those of us who live by the pen: “Can you write a page of text that could not have been generated by an AI, and explain why?”
Dehouche thinks language is no longer the unique possession of homo sapiens. “Unless one is reporting an original mathematical theorem and its proof,” he claims, “it is not possible” to write something that AI, in principle, could not.
Recent years have seen dramatic advances in “natural language processing” tools, such as the GPT-3 system designed by the research lab OpenAI, also famous for its DALL-E 2 image generator. I have now read four articles on the subject where the author says “surprise! This text was actually written by GPT-3.” There is already panic in higher education, as it becomes clear students are using this software to produce passable written work. One professor has given some examples of philosophy questions answered by AI, at a price of six cents per thousand words. Consider this response to the question “Should a socialist be a humanist? Should a humanist be a socialist? Defend your answer with careful analysis and argument.”
The answer to this question depends on how one defines "socialism" and "humanism." If one defines socialism as an economic system in which the means of production are owned and controlled by the community as a whole, then it is difficult to see how a socialist could not also be a humanist. After all, humanism is a philosophy that emphasizes the importance of human beings and their ability to reason and solve problems. In a socialist system, the community as a whole would be making decisions about how to use the resources it owns, and those decisions would be based on reason and the needs of the people.
However, if socialism is defined as a system in which the government controls the means of production, then it is possible for a socialist to be a humanist, but it is not necessarily the case. A humanist believes in the importance of human beings and their ability to reason and solve problems. However, a socialist government might make decisions about how to use the resources it controls based on political considerations, rather than on reason or the needs of the people.
Such formulaic “thinking” cannot yet replace this newsletter, unless I am much worse at writing than I thought, but that outcome is surely just around the corner. Those who have seen previews of GPT-4, the next generation of automated writing, express shock at its abilities. Some are arguing AI cannot replace writers as long as its quality depends on the competence of the person giving it prompts and editing its responses, but this misses the point. “Writers” working in combination with machines will produce content so much faster that, even if the quality dips, they will drive down the cultural and monetary value of most forms of writing even further, forcing the rest of us to either join the self-automation game or become irrelevant antiquarians. Only celebrities will be able to sustain demand for their authentic literary output.
As we’ve seen with the impact of Google Maps on our navigation skills, reliance on technology causes human capacities to wither. AI-assisted “writing” will probably just require introductory paragraphs, prompts and plans, with the software producing a range of options for further development. It won’t be long before people lose the ability to construct longer pieces and connect ideas for themselves. Those of us who do obstinately continue writing, like romantic cavalrymen charging at machine guns, will increasingly find our audience has been automated too; AI is learning to analyse and summarise text to save reading time.
There’s no point sugar-coating the situation. The ancient craft of writing, which has provided humanity with intellectual and imaginative nourishment for almost 4,000 years, since the epics of ancient Mesopotamia, appears to be on its very last legs. Those who argue AI will simply provide a new extension of that tradition don’t understand what the tradition is. A machine can no more appreciate the meaning of its words than an echoing room can; it simply learns probabilistic patterns and sequences from the vast oceans of data on which it is trained.
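The pattern-learning described above can be illustrated, very crudely, with a toy bigram model: it records which words follow which in a training text, then “writes” by repeatedly sampling an observed successor. This is a deliberately simple sketch, not how GPT-3 actually works (which uses neural networks over billions of words); the corpus and seed word here are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """For each word, record every word observed to follow it."""
    words = corpus.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, seed, length=10, rng=None):
    """Emit up to `length` words by sampling recorded successors."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A tiny invented corpus; real models train on oceans of text.
corpus = "the machine writes the words and the machine reads the words"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Every phrase such a model produces is, by construction, a recombination of fragments it has already seen, which is the sense in which the output is pastiche rather than comprehension.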
Anyone with an interest in design, the main theme of this newsletter, should be ready to confront the implications of such cultural destruction. Modern design grew out of the division of labour produced by industrial manufacturing, a system predicated on the continuous replacement of human craft by machines. Since then, one of design’s main roles has been to develop and market the new technologies that do the replacing. But as I wrote back in July, we are starting to see design itself being outsourced to algorithms. The same fate awaits numerous other creative practices that were once considered good at adapting to new technologies, from illustration to musical composition. It is even conceivable that influencers, who are not yet a generation old, will be displaced by computer-generated proxies.
So it is not just special pleading when I say we probably ought to revisit the bargain that is supposed to make the destructive effects of technology worthwhile. That bargain started out with the simple observation that people who lost their livelihood to mechanisation would eventually be reabsorbed into the labour force. Over time it was modified to claim that the dislocation of some parts of society, and continuous adjustment among the rest, was a small sacrifice for a more productive economy. Now the terms have been updated again. The price to be paid for better goods and services, apparently, is permanent fluidity and uncertainty, the baseline assumption being that no human activity is likely to avoid radical transformation or obsolescence for very long.
In purely material terms, this is starting to look like a dodgy bet, as sections of western societies see basic metrics like life expectancy and real income go into reverse. But even if we assume innovation can turn all of this around, it is far from clear that the latest technological bargain is one people actually want to make.
There has been a growing realisation during my lifetime that, beyond a certain point, more bread and circuses cannot compensate for the disappearance of social roles that provide a framework for identity, community and personal fulfilment. We now acknowledge that jobs are not just about money, but unquantifiable things like purpose and meaning. We are gradually accepting that cheap TVs and smartphones are not a replacement for dignified livelihoods. We may even begin to wonder if a future of constant retraining to keep up with technology is really what a rewarding human life looks like.
The American think-tanker Brink Lindsey has recently argued that, since the 1960s, modern societies have become increasingly suspicious of innovation, as wealth makes them more risk-averse and sensitive to environmental destruction. But it’s also possible that there is simply a point at which the technological liquidation of everything no longer seems worth the putative rewards.
Conversely, you might say contemporary society shows a remarkable degree of faith in technology, since there is a strange sense today that we are simply waiting for new breakthroughs to provide the occupations and lifestyles of the future, whose character we cannot even predict. Yet this could also be seen as something more like resignation. I rarely hear the phrase “technological progress” now; those who want to put a positive spin on it tend to use the more objective-sounding “innovation,” while the most common usages seem to be “technological change” or simply “technology.” This language implies an impersonal force that is beyond our control, almost like a natural phenomenon, as well as the absence of coherent aims that would give meaning to the notion of progress. Indeed, being a techno-optimist now seems more a case of accepting the inevitable with an open mind than believing in any direction of travel.
And that may be the biggest problem of all. If we assume the endeavours which structure our lives are all destined for the technological blender, what way do we really have of relating to the future? The famous catastrophism of Gen-Z, often articulated through environmentalism, is surely related to coming of age in a world where the only certain thing about the future is that it will be very different. China has embraced the fluid model of modernity with particular vigour, and its young people, who can hardly be accused of taking prosperity for granted, have taken to declaring themselves “the last generation.” Similarly, it is telling that utopian thought now tends towards transhumanism, or the transcendence of humanity through a merger with biotech. This vision implicitly acknowledges that once you remove the possibility of continuous projects across time, human progress and potential appear exhausted.
Writing is one of those continuous projects that will, it seems, be voided soon. What has always appealed to me about this practice is that, aside from providing a meaningful focus for my life, it allowed me to be a link in a chain that extended into both past and future; a chain whose overall significance was much greater than any single writer could ever be. Technology will no doubt give rise to exciting new cultural forms, but they will be born, like us mortals, with the expectation of their own transience. It sounds like a bad deal to me.
If present trends continue... But the thing about present trends is that they never do continue. We should all be riding around in self-driving cars by now, but we're not. The ability to solve half a problem, or even 90% of a problem, is not evidence of the ability to solve the last 10%. And that seems to be the case with AI generally: it can solve 50% or even 90% of a problem -- and that is often very valuable -- but it always seems to stumble when it comes to the last 10%.
AI art and writing apps are pastiche generators. That's not a slight. It is a technical description of their function. University essays, and most of the fiction publishing industry, are pastiche factories, so there is doubtless significant scope for application. But it is still only pastiche, and even the pastiche is not perfect yet. That last 10% of even the pastiche problem may be elusive.
It is tempting to look at a period of rapid advance in any industry and see that rapidity extending into the future at the same pace. It never happens that way. Everything plateaus out below the level of the most enthusiastic and most horrified projections.
What I suspect will come next, though, is AI used to detect the products of AI. The demand from the university system is obvious. The universities use pastiche exercises to train students in the mechanics required for developing and expressing original thought. AI detectors are the next logical step on from plagiarism detectors (which, I suppose, are probably AI-based these days).
And I suspect that it will be much the same in the fiction market and the art market. People will want the work of people, not machines. AI probably suits the pornography market well, where a human connection is not wanted, but for most art and fiction, I suspect that the human connection is essential, particularly since direct contact between readers and writers is the main form of marketing in fiction these days. This is not to suggest that you will always be able to tell if your favorite romance writer is a person or a bot, but rather that you will want to, and will demand proof of authenticity, as we do in so many other fields in which fakes are hard to tell from the real thing.
And as much as we worry about the extinction of work, we are once again suffering from a grueling labor shortage.
This is interesting. Couple of points from me.
1. Possibly it will all go wrong for AIs. They are trained on a corpus. If the corpus gets taken over by text written by AIs (much of which is wrong - we have seen many examples online) then AIs might end up in a garbage in, garbage out situation. I don't know: perhaps the best AIs will be trained on a special corpus of pre-2000 human-written material. But that would have its own ossification problems.
2. Even if that's not right, as Mark Baker points out, we already have a lot of pastiche generation and it would be no great loss if that gets automated. One day we may come to think with great sadness of all the wasted years that some of the greatest minds spent reading and marking undergraduate essays. So far as I know, no one other than the writer or close family of an undergraduate has ever wanted to read one of those essays - they are a training tool and other tools may arise.
3. We still have artisans and value their work. I suppose that's what will happen with writing. Novel-writing has only ever been a cottage industry, with newspapers being the mass-produced product. If all of news writing becomes automated, would it be any great loss to the world? Any greater loss than teaspoons being made by machines? We would still have artisan and hobbyist word crafters.
Anyway thank you for writing this.