I recently listened to an interesting debate about the consequences of internet technology, between the social psychologist Jonathan Haidt and the economist and blogger Tyler Cowen. It seemed like a rare case of public argument functioning as it’s meant to, producing more light than heat. Haidt makes the case that smartphones have had a deeply destructive impact on children and teenagers, while Cowen counters with his trademark optimism about technology and capitalism as forces enriching human life.
A decisive moment arrives when Haidt introduces the problem of design. In reply to Cowen’s doubts about the need to regulate social media companies like TikTok and Instagram, Haidt draws attention to the way these platforms engage young people:
They built design features like the constant refresh, the eternal feed. They designed it to be as addictive as possible, but they can’t be sued for their design choice. … What I’m saying is, we need to have these companies held liable for their decisions.
In the context of the exchange with Cowen, these comments illustrate how an awareness of design can inform the way we think about politics and history. Cowen’s advocacy for tech capitalism is most convincing when it appeals to a general sense of liberty. Why, he asks Haidt, should the government be empowered to decide young people’s relationship with technology, rather than parents or indeed young people themselves? But design reminds us that impersonal concepts like “technology,” “smartphones” and “social media” are in fact human artefacts, intentionally devised to serve particular goals and interests. Once attention shifts to the means employed by powerful tech companies (targeting young people with addictive software), and the ends to which they are employed (harvesting personal information for profit), it no longer seems helpful to assume that people are perfectly free in their dealings with these entities, or that the state has no role in regulating certain practices.
Similarly, Cowen likes to assess technology from a long-term perspective. In one blog post about artificial intelligence, he uses the example of the printing press to argue that none of us can predict the medium-term consequences of a revolutionary technology like AI, but in the long run, we should expect those consequences to be positive overall. Though the printing press was implicated in all kinds of disastrous upheavals, including a century of bloody religious conflict, the important thing for Cowen is the idea that “if you were redoing world history you would take the printing press in a heartbeat.” You can hear the same combination of skepticism and long-run optimism in the conversation with Haidt. There Cowen suggests we can’t be sure if a complex historical phenomenon like the changing emotional state of young people can be attributed to smartphones, while stating his own confident belief that the eventual outcome of the current technological revolution will be very good.
To consider our own era as part of a greater span of history is always a worthwhile exercise. But it has limitations, which are again exposed by the nature of design and its link to deliberate human agency. Cowen’s historical perspective, not unlike his political one, makes technology something akin to a force of nature: scary perhaps, but inevitable and therefore morally neutral. Design, by contrast, forces us to reckon with the decisions, intentions and motivations of particular groups of people. These might not matter on the scale of centuries, but they can matter a great deal to us.
The broader point here is that an awareness of design tends to rob both history and technology of their innocence. Design determines the actual forms that new knowledge will take, the concrete ways that new techniques will be implemented, and it is often through such decisions that responsibility becomes unavoidable. One famous illustration of this, explored by last year’s Hollywood epic Oppenheimer, was the development of the first atomic bomb in the Manhattan Project. It was possible for the scientists involved to believe that this technology was inevitable; that someone would work out how to do it, and so it might as well be us. And yet, as the guilt experienced by J. Robert Oppenheimer suggests, even this apparent inevitability did not remove the sense of moral culpability for having actually designed the bomb.
It is often in design decisions that we find the most disturbing or incriminating details of historical events, precisely because they reflect conscious thought and intentionality. People never fail to be scandalised that Parisian boulevards were designed to help the authorities crush urban revolts. Accounts of the Holocaust often dwell on small, perverse details that demonstrate how fastidiously mass murder can be planned. Or consider the examples cited by Langdon Winner in his influential 1980 article Do Artefacts Have Politics? Winner points out, for instance, that the American urban planner Robert Moses designed overpasses in Long Island to be unusually low, so as to prevent buses transporting poor people from using the highways. It is one thing to talk about the unconscious coding of biases into the fabric of institutions and culture, but design tends to present us with something less ambiguous, something more immediately open to contestation: decisions taken deliberately, for a specific purpose.
In its more traditional usages, the English word design carries undertones of suspicion and intrigue. A “design” is a plot, a plan, a premeditated course of action with anticipated consequences. These associations are not without reason: design can make it rather difficult to drain human responsibility from history.
A classic from P. Virilio:
"To invent something is to invent an accident. To invent the ship is to invent the shipwreck; the space shuttle, the explosion. And to invent the electronic superhighway or the Internet is to invent a major risk that is not easily spotted because it does not produce fatalities like a shipwreck or a mid-air explosion. The information accident is, sadly, not very visible. It is immaterial like the waves that carry information."
I'm no fan of academic theoreticians of technology though, because they are complacent and think that there is a way to reform. There is none.
Either human agency does something or there will be no humanity left, simple as that.
This is stimulating. Design and Technology are often lumped together, but it is useful to tease them apart.
By the way, I also listened to that podcast and found TC's position on social media utterly baffling. The stuff about AI summarising your Tweets and Tiktoks as a way of reducing social pressures was incredibly wrongheaded.
There's some good reactions on his blog: https://marginalrevolution.com/marginalrevolution/2024/04/my-contentious-conversation-with-jonathan-haidt.html#comments