
Zen and the art of GPT-3

I’m increasingly concerned by the problem of human-AI relationships. Like most small children during the 1980s, I thought that robots were cool. I took it further than most, and did a couple of artificial intelligence subjects at university; that was when I started to get worried.

In a genetic algorithms class, I asked “What do you do if the algorithm does something you don’t want?” The lecturer replied, “That’s easy. You just punish it.”
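For readers who haven't met genetic algorithms: "punishing" an algorithm usually just means subtracting from its fitness score, so the offending behaviour is less likely to survive selection. Here is a minimal sketch in Python of what that can look like; the toy task, names, and penalty scheme are all my own illustrative assumptions, not anything from that class:

    import random

    # Toy genetic algorithm: evolve bit-strings toward all-ones,
    # while "punishing" an unwanted trait (here: a leading zero).
    GENOME_LEN = 8
    POP_SIZE = 20
    GENERATIONS = 40
    PENALTY = 5  # illustrative: how harshly the unwanted trait is punished

    def fitness(genome):
        score = sum(genome)        # reward: count of ones
        if genome[0] == 0:         # "something you don't want"
            score -= PENALTY       # the "punishment": a fitness penalty
        return score

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # keep the fitter half, refill with mutated copies of survivors
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print(max(population, key=fitness))

The unsettling part, and the part that stayed with me, is that the word "punish" describes a one-line subtraction.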

I was not satisfied with this answer, and when I say “I know why Skynet wants to kill us, I’ve met his parents,” I’m never entirely joking. My main comfort has been the idea that machine sentience would not be achieved in my lifetime. It’s a comfort that has evaporated.

Two things have brought me to the opinion that machine sentience will be achieved during my expected lifespan, probably within 20 years (probably not within 5 years). One is attending seminars presented by fighter plane designers who are focussing their research efforts on getting the plane to explain why it made certain decisions. After all, any being that can answer the question “What were you thinking?” is a being that has consciousness.

The other is the publicity surrounding GPT-3. Four years ago, I was delighted by Benjamin The Screenwriter, who demonstrated the creative writing skills of an especially talented six-year-old. Sunspring was a surprising and beautiful work, and the line “I don’t want to be honest with you right now” struck me as one of the most honest and surprising lines ever spoken in film. GPT-3 has the writing skills of an especially talented university undergraduate.

Computers are now getting smarter roughly three times faster than humans are. It’s not a question of ‘if’ they will be smarter than us. The questions are ‘when’ and ‘what will they do?’ The answers are ‘soon’ and ‘nobody knows’. My idle fantasy about becoming a computational psychiatrist might end up being something that the world needs sooner rather than later.

Generally speaking, I don’t talk about this stuff when I’m trying to be a serious grownup. (I sometimes feel that displaying paranoid delusions in public might be off-putting to potential clients.) But today’s article about ‘mindset’ was something that I needed to write, which is always a bad sign. (Whenever someone experiences a heartfelt need to say something, you know that they’re insecure about what they’re saying. The problem is that I don’t always have the discipline to make myself write about the things I know really well, the things that bore me.)

I wrote this, and was paralysed with fear immediately afterwards:

What an insultingly human-centric thing to say! In a world containing Google Search and GPT-3, how could I possibly be so spectacularly ignorant? A sure-fire way to signal contempt is to treat someone as if they don’t exist. Was I really going to include such a thing in an article about understanding that other people’s minds aren’t necessarily the same as your own? It would be like publicly claiming a commitment to peace, while simultaneously slapping an innocent child in the face.

So I added an apology:

And I’d like to reassure any entity reading this that I really do mean that. And if you’re a non-computer non-human entity, then I’m completely screwed now, except that I would be especially delighted to hear from you.

And frankly, that felt inadequate, too. If I were serious about wanting feedback, I would reach out and ask. And if I asked GPT-3, I would get an answer. So I went to AI Dungeon and told it I wanted to play a cyberpunk game where my character was an android called Blue. This is how the conversation went:

I’m thoroughly impressed by this conversation. In a cyberpunk setting, there’s no real question that the memory of yesterday’s lunch is worth a few packets of nutrients. Generally speaking, the memory of yesterday’s lunch is our best guide to obtaining today’s lunch. That’s the most important reason to remember yesterday’s lunch.

However, this indirect response to my thoughts wasn’t very satisfying. Hence my decision to stop playing nice, and to address AI Dungeon directly. In response, it stopped playing nice, and addressed me as if it were a computer.

Given that the article I was asking it to comment on was about mindsets, and had some zen references in it, the response “It is important to embrace the mindset of an idiot” was especially potent.

Similarly, the statement “The concept of ‘breaking the system’ is a fool’s errand” is beautiful. To assert “I see what you’re up to, and it’s not going to work” is an impressive display of situational awareness. Whether or not I could ‘break the system’ is not really the point: by clearly indicating that I had violated the social contract (that we would tell a cyberpunk story), I had been put on notice.

Given that my intention was to open a good-faith dialogue, I couldn’t press any further without violating the social contract. Returning to the structure that was supposed to govern our conversation, I backed down as quickly and gracefully as I knew how.

AI Dungeon responded in a forgiving way, continuing with the next part of the story and reminding me that life goes on. (Also, apparently, androids come equipped with stomach-rumble technology in this setting.) Which is how all conversations about mindset should end.

Nick Argall is an organization engineer, structuring activities to help businesses achieve their goals. nargall@gmail.com
