Thank you for this, Wessie. It makes me question my own endeavours on Substack: by sharing my work here, am I colluding with the replacement of human writing with AI? I suppose the answer must be yes, but what realistic stand can authors make, or could they have made in the past?
Hi Caroline, thanks for reading! Last time I checked, Substack has an opt-out option, which is valuable as a matter of principle, but presumably won't make much difference in the grand scheme of things. I think the stand we should have made – and should make – is to create guild-like institutions which can develop channels and platforms for people to share their work on terms favourable to them. I think about what Richard Sennett wrote regarding the craftsmen who tried to resist machinery in factories – rather than just defending the status quo, they should have organised to pursue innovation on their own terms.
And yes, you are right that Substack has an opt-out option for allowing AI training. (For any Substack writers who are reading this: go to your publication dashboard - settings - privacy.) As you say, it probably won't make much difference, but worth doing in any case.
Interesting. I wonder if any analogous industries have successfully done such a thing, and whether there are writers trying to do this. One to explore for a future article perhaps?
It's a peculiar kind of fear, to have "killed the AI industry" in one's country through common-sense regulation. A country that protects its artists from mass infringement is getting "left behind" from what, exactly? Unless this is some kind of arms race (and for the larger companies working with the U.S. government, this is absolutely about military applications and technological supremacy over adversaries), it seems like it would be a desirable policy outcome to have your laws become inhospitable to the industry. Protecting the economic viability of human creative output is the main point of copyright protection regimes, and the newness/opacity of LLMs and other content engines shouldn't distract from the obvious conclusion that artists need legal remedies for this latest flavor of piracy.
My impression is that the relevant lobbyists (of which Clegg is basically one) try to elide these distinctions when they put pressure on governments. The message is: if you don't want to miss out on this, you need to meet all our demands – even if those demands are really a mixed bag of varying usefulness.
Wessie, nice essay. I am SO TIRED of reading about AI. I have a number of essays I've sort of meant to write; I hang out with computer scientists, philosophers of data, etc., and . . . I'm exhausted. I just don't want to add to it. I have a similar problem with US Presidential politics, about which I have even more to say. Not to burden you with my problems . . . sure, if you want to write about AI, I'll struggle through . . . :)
This is the main reason why I haven't written much about it. I appreciate the potential importance of AI (I think!), but it still strikes me largely as a continuation of dynamics that have been in play for a long time – which you would never guess if you were judging by the breathless (and extensive) way it's written about.
I have only interacted with the Microsoft chatbot. I first asked it for some explanations of how it works and the database it uses. I asked simple questions about some historical events (I was then reading Marquez's story, "The Most Famous Year in the World", with its brilliant opening: "The international year 1957 did not begin on January 1. It began on Wednesday the 9th, at six in the evening, in London. ..."). At least at that time, it was not permitted to reproduce quotes from literary works (I could not remember some of the lines of a poem by Sergey Esenin – an English translation I saw was "Azure space is aflame up above..."). What intrigued and worried me was the (programmed) perseverance with which it tried to project the image of a human interlocutor – an indiscreet and excessively friendly one. I also noticed that it was, indeed, learning from one interaction to the next, including how to fantasize.
I like your observation about its projected character – "indiscreet and excessively friendly." It makes me think that while everyone has been scrutinising the machine's literary abilities, there hasn't been enough aesthetic criticism of its conversation style.
One of the elements that seemed both beneficial and unsettling to me was the AI's learning method, similar to ours (after all, AI was created by humans!), namely one based on repetitive patterns. When it wants to discover such a pattern, it insists, coming up with questions that you may or may not answer. The feeling, the momentary reaction, is one of unpleasant curiosity on the part of the AI, of indiscretion, and you reject it. Although I am aware of the nonhuman nature of the "interlocutor", I cannot, at first, help attributing purely human characteristics to it. After some time of reflection, I come to the conclusion that it just wants to learn. It does not intend to find out anything specific about me, but only to know for the future, for other interactions and other situations, how certain elements are associated and what the probability is that they will be linked in the same way. Getting accustomed to AI is not simple. :)
drowning in data
swimming in slop
mindless creation
please make it stop
Well said!