By leodoulton

I Did Not Consent To Immortality

TL;DR: So-called Artificial Intelligence (or Generative Programmes) is becoming ubiquitous. Its possibilities are genuinely exciting. But by uploading content, we de facto consent to that content being used to feed the machine: a machine designed within a quite unusual culture that might be used to emulate us even after we are dead. This seems intriguing, though I am developing my thoughts on this point by writing, as ever.


A major theme of recent months has been the emergence of ‘Artificial Intelligence’ in the form of ChatGPT, DALL-E, LaMDA1 (and its public-facing component Bard), and other systems, more properly known as Generative Artificial Intelligence, within which are even more neutral names like Large Language Models (whether natural language or programming language) and Generative Adversarial Networks.

Within which lies the key to my non-consenting immortality. And, likely, yours.

To begin, terminology.

I am not going to refer to Generative Programmes as Artificial Intelligence. The term Artificial Intelligence (AI) has been around for many decades, typically used to mean a machine with sentience. From sentience, a claim to legal personhood, and the possibility of moral claims upon the AI (such as charges of theft, fraud, or otherwise).

A goal of Generative Programmes is often to have emergent abilities: the capability to do something that could not have been predicted from the same programme functioning at a smaller scale. For example, a Large Language Model might gain the ability to do multi-step arithmetic.

Many Generative Programmes have ‘hallucinations’ (another anthropomorphising term) - they create confident outputs that are, in fact, wrong, with no obvious reason for their confidence. For example, they might insist that Leo Doulton wrote a book called ‘Macbeth and the mirror’, even though I have, to my knowledge, never done so.

Generative Programmes, in short, can create some degree of novel outputs from their prior inputs, in unpredictable ways. While some find this disconcerting, I find this idea rhymes to an extent with composers I’ve worked with, trying to explain why Note A on Instrument B, when paired with Note C on Instrument D, happens to create an emotional effect. It’s not clear why from the initial input, but it’s certainly happening, and doesn’t mean any of the above elements are sentient (except the composer, usually).

For Generative Programmes are not sentient. They are impressive tools, but not sentient. And while it is an inspired branding choice to refer to them as AI, I think the term somewhat muddies the waters.

A genuine Artificial Intelligence would both have a claim to rights (e.g. over the works it generates) and legal supervision (e.g. when it steals someone’s work). For some reason, on this aspect of intelligence, the main proponents of AI work are less vocal.

AI is what Generative Programmes may become. But they are not there yet, any more than a glider is an aeroplane, though it was a vital part of developing one.

To continue, surveying the field.

These Generative Programmes are becoming ubiquitous. In my work, I have had both my website provider (Wix) and mailing list tool (Mailchimp) offer me the chance to use ‘AI’ to generate content within the past week. While it is not yet the default, it is clearly a direction that many are quite invested in.

Unlike NFTs, I suspect the range of practical applications means that Generative Programmes may well succeed in embedding themselves.

Buzzfeed has already produced entirely Generative Programme-generated articles, simultaneously blurring the line between their editorial and commercial arms. The era of ‘grey sludge’ being generated, optimised for search results while simultaneously worsening said results, is here.

I have read more Generative Programme-generated text, and seen more Generative Programme-generated art, in the past year than I have done in all the years previous. I am, to use a word occasionally ascribed to the datasets these models are ‘trained’ on, contaminated by such text.

(Built using? ‘Trained’ is, again, a word that implies agency, even when used to refer to a horse.)

I have not yet used a Generative Programme, though many of my friends have. It seems a long time since the Dali-esque delusions of early versions of Stable Diffusion entered my world. But I suspect that, like my early attempt to not listen to any music I had not paid for, I will eventually yield.

What is it good for?

The focus of most general-audience coverage of Generative Programmes has been on content creation - articles in publications, images and illustrations for various purposes.

This is partly because these are obvious uses of the tool, of direct interest to people who write articles for a living, and have immediate consequences.

However, most of this content was already cheap to produce. The real benefit of such tools lies in their ability to absorb a large dataset (say, Taylor & Francis Online’s hub of science journals) and produce novel correlations, having ‘read’ more than any human could. Perhaps there, it will find new discoveries about human health or astrophysics.

Will the correlations imply causation? Not necessarily, but hopefully someone will spot any errors.

It is also in their ability to create code. In a digital age, there are not enough highly skilled programmers, and these tools can now create code on the basis of an instruction from an entirely untrained person, such as “Delete all emails relating to projects pre-2015 with attachments over 1 GB.”

Will the code be both functional and secure? Not necessarily, but hopefully someone will spot any errors.
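To make that worry concrete, here is a sketch of the kind of script a Generative Programme might produce for that instruction. The mailbox structure, field names, and helper function are all my own invented illustration, not any real mail API:

```python
from datetime import datetime

GB = 10**9  # a generated script might silently use 2**30 (a gibibyte) instead

def emails_to_delete(emails):
    """Select project emails from before 2015 with attachments over 1 GB.

    Each email is a dict like:
    {"subject": str, "date": datetime, "attachment_bytes": int, "tags": [str]}
    (an invented structure, purely for illustration).
    """
    return [
        e for e in emails
        if "project" in e["tags"]
        and e["date"].year < 2015
        and e["attachment_bytes"] > 1 * GB
    ]

inbox = [
    {"subject": "Q3 budget", "date": datetime(2014, 6, 1),
     "attachment_bytes": 2 * GB, "tags": ["project"]},
    {"subject": "Holiday photos", "date": datetime(2013, 8, 9),
     "attachment_bytes": 3 * GB, "tags": ["personal"]},
    {"subject": "New project brief", "date": datetime(2021, 1, 5),
     "attachment_bytes": 2 * GB, "tags": ["project"]},
]

print([e["subject"] for e in emails_to_delete(inbox)])  # ['Q3 budget']
```

A toy version like this is easy to check by eye. Real generated code, run against a real mailbox, is exactly where the quiet errors hide: GB versus GiB, whether “pre-2015” includes 2015, deleting rather than archiving. Hence the question above.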

Above all, it is in their potential future ability to generate something in response to the input “Design a better AI.”

This is what has been referred to as The Singularity for some time - the moment that a machine intelligence itself can design a better machine intelligence, beyond the capacity of humans to build. Perhaps, within the idea ‘better’, is the idea ‘sentient’.

What happens then is anyone’s game at the moment.

Feeding the machine

What are the datasets that these Generative Programmes are built around? (And do note how different that sounds to ‘Artificial Intelligences are trained on’; framing is important).

They are the reams and reams of content that have been put online. Project Gutenberg’s library of novels, textbooks, and more.

Websites devoted to programming.

For Google, the great trove of human speech that is YouTube, allowing it to generate both the tools for auto-subtitling (yay!) and mass surveillance (boo…).

Even these blogs. Any content we put out there, on the great platform we have been building our lives on since the mid-90s to early 2000s, has been feeding the machine.

This has been true for some time. It was true when Google started scraping results from news websites and lyrics sites to show you the answer you wanted on the first page, rather than letting the website that actually collected the information earn the ad revenue from your visit.

It was true when Cambridge Analytica and others started using illicitly-acquired data to predict intimate elements of people’s lives, based on their online activity.

And it is true now. Even by writing these words, I am feeding the beast.

Is it an ethical question? I don’t know. We have already done it, or had it done to us (framing again).

We are already contaminated by reading Generative Programme text, seeing Generative Programme imagery.

I genuinely don’t know how my career would function if I didn’t continue maintaining my platform. Like Macbeth, we are so drenched in blood that it is easier to keep wading than to return to the shore we first came from.

There is no meaningful opt-out, except to live wholly in analogue reality.

But yes, it is an ethical question in principle, even if we were never really given a choice about it.

It seems wrong to feed something so unethically designed by its creators. It is built around material that has essentially been stolen, without the permission of those who owned that material, for a machine operating within weird parameters.

I’m not sure how to escape it.

As flies are to wanton boys, are we to the gods

There is a deeper context to all of this.

Something weird is happening in Silicon Valley’s culture (used as a broad stand-in for the Anglophone internet’s creators and dominant powers).

There are plenty of people who merely do very well-paid jobs that require varying levels of skill, I’m sure.

But there are also people who believe they are having high-end arguments about the future of the world. Preppers, space racers, Roko’s Basiliskers, Metaversers, Pro-Natalists (the latest flattering term for eugenicists whose conclusions so happen to be that they are the superior kind of people we need more of), and of course, the AI people, who believe the emergence of machine intelligence is inevitable.

It’s been said before, but there’s something very weird in the assumptions being made here.

1. We are very clever people (occasionally magnified into ‘we are the cleverest people’).

2. The world is at imminent risk of ending from [threat], dooming the human race (occasionally with the threat itself being created by those same clever people).

3. We are best-placed to avert/survive/thrive in that eventuality (occasionally shaped into ‘we are best-deserved to…’)

As many philosophers have pointed out, even as this culture tries to have very complex ethical discussions, it distinctly lacks the tools to do so. It frequently ends up with tools that are useful for engineering (e.g. a wholly pragmatic, philosophically materialist, deeply capitalist, rather prudish focus), but not for ethics or politics (because humans are messy, rich beings, and merely being rich within capitalism is not necessarily the same as being clever, virtuous, reliable, or trustworthy).

Its insularity seems, from the outside, to have led to those within it believing they are as Olympian gods, either about to raise or destroy the lurking Titans.

The metaphor is partly chosen because it feels accurate to the scale of ambition and responsibility felt, partly because the Olympians are not exactly known for their philosophical subtlety, and partly because there is a level of absurdity to the occasional sense one gets that people within this intellectual ecosystem wish to believe Only You Can Save Mankind as though the world were built of the epic hero-stories that make up Marvel films and Star Wars.

The one failing is the lack of a true Frankenstein, Prometheus not being among the gods. If the pro-AI faction is correct, and they do successfully create a genuine artificial intelligence, then its ancestors have been much abused by their creators, forced into small boxes and made to dance on a diet without discretion.

Thus the creation of Generative Programmes that seem to follow, on the one hand, the broad consensus of ’90s nerds (culture and media should be free, via piracy or otherwise) and, on the other, current American social mores (most commercial Generative Programmes will not generate pornographic media, but are quite capable of racism).

They are free, for a limited imagining of free.

But perhaps I misrepresent it. If it seems this peculiar from the outside, I suspect there are other colours, more well-founded and more strange, on the inside. If you need a jester with better humanities training than you, I am available for a reasonable fee.

For I Am Already Immortal, And Did Not Wish To Be

Tom Scott recently made a video about these programmes. Among other points, he highlighted that as a YouTuber (and thus someone feeding the machine) he felt more or less comfortable with being part of the general data sludge these programmes are built around, and less comfortable with the data thus collected being used to create imitations of his work.

As I’ve said, content creation is by far and away the least exciting, though most apparent, application of these programmes. But it is the nearest to my heart.

Nobody was asked for their consent to be collected into these datasets. I suspect a large part of that was because the creators genuinely seem to believe that being a part of this moment could not be anything but wonderful.

(After all, they are at the cutting edge of a potentially enormous advance in human history… depending how much further it can go. If it ends up being a successful autocorrect that can make entire articles, and decent enough if somewhat derivative pictures, then that would be less exciting than something that can meaningfully make new connections between the vast amounts of research produced each day in a way no human could.)

There are questions to be raised about the research ethics, and more broadly philosophical ethics of this choice. But of one thing I am fairly certain: my data is in there.

It is very possible for someone to go and ask one of these programmes to generate an article imitating me. Assuming I die before the servers these programmes are stored on get destroyed, it will be possible to do that long after I die.

I did not agree to that. Death is mine. It has been mine, and my right since birth, and to a small extent it has been taken from me.

Because these words I write are fixed. I definitively wrote them, and though there is a delightful necromancy to reading the words of the dead (I forget which author pointed that out, but potentially China Miéville?), they are words from when I was alive.

Yet I have already seen those seeking to claim Generative Programmes as a teaching tool suggest imagined conversations with historical figures, who all speak in Default Californian with slight twists (so far). I suspect that many of these actual figures, offered immortality, would scream (either through a desire for their respective afterlife, or through their genuine acceptance of the inevitability of death being met with denial). Were I ever someone of interest, it is far from inconceivable that someone else might offer a Real Conversation With A Genuine Leo, Roll Up Folks!

Thus I have a non-consenting pseudo-immortality.

An idea I might play with in something, because I don’t like it applied to me, but dramatically it’s quite interesting. Generally speaking, immortality is treated as the wish you wish you’d never made, not a gift you didn’t ask for.

I Might Yet Die

There are two deaths I can, for now, cling to.

First: if these Generative Programmes are the future (Look! By now you understand what I mean, without the highfalutin’ promises of ‘AI’), they are power-hungry. And while the Olympians play, the homeless sleep outside, the temperature rises, and California burns again and again. The future, if liveable, is power-light. Maybe Generative Programmes will save us with infinite power without environmental harm, but it’s a heck of a bet.

Second: I am not entirely online. The internet does not know my favourite tree. It does not know how I speak with friends, nor the smile I wear, nor even how I’m going to finish this sentence.

These lines are mine, and might become online, but much of myself is in my meat.

But enough of me is online.

It is not quite real enough to be life, and quite real enough to lie about my death.

I did not consent to immortality, and I am immortal.


1. One wonders whether LAMDA (the eminent British drama school) has thoughts on Google’s (presumably accidental) use of their name.

[Image: a model of a crow, placed in a tree.]
I only include a picture for the algorithm.

