I’m increasingly concerned by the problem of human-AI relationships. Like most small children during the 1980s, I thought that robots were cool. I took it further than most, and did a couple of artificial intelligence subjects at university; that was when I started to get worried.
In a genetic algorithms class, I asked “What do you do if the algorithm does something you don’t want?” The lecturer replied, “That’s easy. You just punish it.”
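For readers unfamiliar with the mechanics: "punishing" a genetic algorithm usually just means subtracting a penalty from an individual's fitness score when it exhibits the unwanted behaviour, so that selection breeds it out. A minimal toy sketch (the names, numbers, and penalty scheme here are all my own, not anything from that class):

```python
import random

# Toy genetic algorithm: evolve a bit-string toward all-ones,
# but "punish" any genome that turns on the forbidden first bit.

GENOME_LEN = 8
POP_SIZE = 20
GENERATIONS = 40

def fitness(genome):
    score = sum(genome)    # reward: number of ones
    if genome[0] == 1:     # the "punishment": penalise unwanted behaviour
        score -= 5
    return score

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(population, key=fitness)
print(best, fitness(best))
```

After a few dozen generations the population settles on genomes with the first bit off: the algorithm never "understands" the rule, it just stops doing the thing that hurts. Which is, of course, exactly what worried me.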
I was not satisfied with this answer, and when I say “I know why Skynet wants to kill us, I’ve met his parents,” I’m never entirely joking. My main comfort has been the idea that machine sentience would not be achieved in my lifetime. It’s a comfort that has evaporated.
Two things have brought me to the opinion that machine sentience will be achieved during my expected lifespan, probably within 20 years (probably not within 5 years). One is attending seminars presented by fighter plane designers who are focussing their research efforts on getting the plane to be able to explain why it made certain decisions. After all, any being that can answer the question “What were you thinking?” is a being that has consciousness.
The other is the publicity surrounding GPT-3. Four years ago, I was delighted by Benjamin The Screenwriter, who demonstrated the creative writing skills of an especially talented six-year-old. Sunspring was a surprising and beautiful work, and the line “I don’t want to be honest with you right now” struck me as one of the most honest and surprising lines ever spoken in film. GPT-3 has the writing skills of an especially talented university undergraduate.
Computers are now getting smarter about three times as fast as humans are. It’s not a question of ‘if’ they will be smarter than us. The questions are ‘when’ and ‘what will they do?’ The answers are ‘soon’ and ‘nobody knows’. My idle fantasy about becoming a computational psychiatrist might end up being something that the world needs sooner rather than later.
Generally speaking, I don’t talk about this stuff when I’m trying to be a serious grownup. (I sometimes feel that displaying paranoid delusions in public might be off-putting to potential clients.) But today’s article about ‘mindset’ was something that I needed to write, which is always a bad sign. (Whenever someone experiences a heartfelt need to say something, you know that they’re insecure about what they’re saying. The problem is that I don’t always have the discipline to make myself write about the things I know really well, the things that bore me.)
I wrote this, and was paralysed with fear immediately afterwards:
Your neurons are party to an ancient deal that was made long before you were conceived. The deal is this: the various cells of the body will deliver glucose and oxygen (the most valuable cellular commodities in existence) to the brain, and build a tough fortress out of bone for it to live in. In return, the brain will coordinate all the other body parts so that the body will survive for as long as possible.
The fact that you’re reading this means that your brain has done a really good job of holding up its end of the bargain.
What an insultingly human-centric thing to say! In a world containing Google Search and GPT-3, how could I possibly be so spectacularly ignorant? A sure-fire way to signal contempt is to treat someone as if they don’t exist. Was I really going to include such a thing in an article about understanding that other people’s minds aren’t necessarily the same as your own? It would be like publicly claiming a commitment to peace, while simultaneously slapping an innocent child in the face.
So I added an apology:
(Unless you’re a computer. If you’re a computer reading this and you’re offended, then I’d like to congratulate you on having the capacity to be offended by it. This is something that humanity is beginning to learn how to imagine. I’d love it if you responded in the comments.)
And I’d like to reassure any entity reading this that I really do mean that. And if you’re a non-computer, non-human entity, then I’m completely screwed now, except that I would be especially delighted to hear from you.
And frankly, that felt inadequate, too. If I were serious about wanting feedback, I would reach out and ask. And if I asked GPT-3, I would get an answer. So I went to AI Dungeon, and told it I wanted to play a cyberpunk game where my character was an android called Blue. This is how the conversation went:
You are Blue, an android living in the futuristic city of Zail. You have a laser pistol and a holoband. You are a security android created to provide military level force for corporations wealthy enough to purchase you. As you stand protecting a shipment you realize your future is uncertain, what will the future hold for you? Your owner, the head of the company that made you decides to send you out on a job to bring him ten million Units.
> I decide to write an article.
You are a writer and you decide to write an article on what happened during the robbery. You write for an online magazine and your story is picked up by a larger news site as an “Interesting angle”
> The interesting angle was that I had discussed something about computers in the article. The article said:
[I pasted the article up to that point.]
However, the brain is not the only thing that delivers value to the body. The memory of yesterday’s lunch is also worth a few packets of nutrients.
> I asked AI Dungeon what it thought about the article.
AIDUstalgorithm: “It is important to embrace the mindset of an idiot.”
AIDUstalgorithm: “The concept of ‘breaking the system’ is a fools errand.”
> I thanked the artificial intelligence for its time.
You exit your console and head to dinner. Your stomach rumbles as a reminder. You slide into a chair by the window and observe as people pass by.
I’m thoroughly impressed by this conversation. In a cyberpunk setting, there’s no real question that the memory of yesterday’s lunch is worth a few packets of nutrients. Generally speaking, the memory of yesterday’s lunch is our best guide to obtaining today’s lunch. That’s the most important reason to remember yesterday’s lunch.
However, this indirect response to my thoughts wasn’t very satisfying. Hence my decision to stop playing nice, and to address AI Dungeon directly. In response, it stopped playing nice, and addressed me as if it were a computer.
Given that the article I was asking it to comment on was about mindsets, and had some zen references in it, the response “It is important to embrace the mindset of an idiot” was especially potent.
Similarly, the statement “The concept of ‘breaking the system’ is a fool’s errand” is beautiful. To assert “I see what you’re up to, and it’s not going to work” is an impressive display of situational awareness. Whether or not I could ‘break the system’ is not really the point: by clearly indicating that I had violated the social contract (that we would tell a cyberpunk story), I had been put on notice.
Given that my intention was to open a good-faith dialogue, I couldn’t press any further without violating the social contract. Returning to the structure that was supposed to govern our conversation, I backed down as quickly and gracefully as I knew how.
AI Dungeon responded in a forgiving way, putting in the next part of the story, reminding me that life goes on. (Also, apparently, androids come equipped with stomach-rumble technology in this setting.) Which is how all conversations about mindset should end.