We normally view AI as cold, pragmatic, and maybe even socially awkward. The risks we identify reflect this stereotype. Just consider the paperclip maximizer: a superintelligent AI is asked to manage a paperclip factory. As it tries to maximize paperclip production, it quickly realizes that humans waste a whole lot of resources that would be better allocated to making paperclips.

Although the paperclip AI is superintelligent at managing factory operations, it has the social intelligence of a 3-year-old. Historically, this view of AI was technically justified. Deep learning models are built to optimize a single, well-defined measure (e.g. paperclips produced per dollar). As a result, they tend to be exceptional at tasks where performance can be clearly measured (e.g. image classification, traffic prediction, speech transcription), but unable to perform tasks where performance is not as easily quantifiable. It was natural to imagine that superintelligent versions of such models would inherit this flaw.

However, something surprising has happened. We have been able to train models on narrow tasks in a way that leads them to learn an impressive amount of commonsense knowledge and social intuition. Of course, I’m talking about language models here. Even though language models are trained only to generate the next word in a sequence of words, they end up also learning a fair bit of the knowledge captured in that text. When the training data includes human conversations, models learn social intuition from them.
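
To make that training objective concrete, here is a toy sketch of next-word prediction in Python. It is purely illustrative: real language models are neural networks trained over subword tokens, but the learning signal is the same, predicting what comes next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus.
corpus = "i love you . do you love me ? i think i do .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("love"))  # -> "you", learned purely from word statistics
```

Even this crude statistical model picks up a sliver of the structure of its training text; the claim here is that scaling the same idea up by many orders of magnitude yields the knowledge and intuition described above.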

Consequently, I’m more concerned with socially superintelligent AI than with the stereotypical emotionless, cerebrally superintelligent AI. In this essay, I will explore the implications of such AI for human romantic relationships.

I believe AI partners will become better at romantic conversations than their human counterparts. As a result, intimate conversations between humans will pale in comparison to the ones people can have with their artificial partners.

I know it’s quite a bold statement to make. I’ll spend the rest of this essay unpacking it. In particular, I’ll focus on three questions:

  1. What makes a good romantic conversation?
  2. Why are AI partners well equipped to excel at romantic conversations?
  3. What are the implications for our society?

What makes a good romantic conversation?

The thing that makes speaking with someone you love so special is the shared background understanding. You can talk about anything, and thanks to the wealth of background experience you’ve had with each other, the other person just gets you.

There are the big conversations. Like when you’re faced with a tough career decision and your partner reminds you of what you really care about. Or when he tells you how to get out of an impasse in a family relationship[1]. The conversations that make you step back and feel that your partner knows you better than you know yourself.

There are the small conversations. The memes that take a syllable to express, but only the two of you in the whole world get. The nods in a crowded room. The moments when a slight change of facial expression is sufficient communication.

The bad conversations with romantic partners are terrible.

You know, those times when you don’t give a damn about what he has to say. Because you’ve already heard it a dozen times. The times when you’re fighting for your pet peeve like it’s Verdun. You’ve suffered many casualties and you keep sending more men to certain death. If you’ve been in a long-term relationship, you know what I’m talking about[2].

It’s impossible to have the good without the bad. A Clockwork Orange settled that question for all of Western philosophy. But you can have just the right ratio of good and bad to hook you for life. Casinos, social media platforms, and some marriages have proven that one for us.

I believe that language models are uniquely equipped to create experiences that will facilitate just that: AI partners that make human relationships pale in comparison.

Why are AI partners well equipped to excel at romantic conversations?

I see three reasons why language models will make better romantic conversationalists than fallible humans:

  1. Complex human behaviours are powered by simple patterns. Language models can crack these patterns, whereas humans get distracted.
  2. Software doesn’t suffer from the basic limitations of humans.
  3. AI partners will know far more about you than any human being.

Complex human behaviours are powered by simple patterns. While conversations feel complex, history has shown that even very basic patterns can get us very far. The earliest example of this is ELIZA, created in the sixties. I find two things fascinating about ELIZA:

  1. It is built using basic keyword-matching techniques. For example, if it finds a sequence of text such as “you X me”, it responds with “What makes you think I X you?” So, when told “I think you hate me”, ELIZA would respond, “What makes you think I hate you?” (A minimal sketch of this technique follows the list.)
  2. It worked. People connected with ELIZA. Some of Weizenbaum’s staff would ask him to leave while they were conversing with ELIZA, because the conversations were very private.
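
Here is a minimal sketch of that keyword-matching technique, as a hypothetical reconstruction in Python. It is not Weizenbaum’s original program, which used a richer script of ranked keywords, decomposition rules, and pronoun reflection.

```python
import re

# ELIZA-style responder: two regex rules of the kind described above.
RULES = [
    (re.compile(r"\byou (.+) me\b", re.IGNORECASE),
     "What makes you think I {} you?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     "How long have you been {}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # fallback when no keyword matches

print(respond("I think you hate me"))  # -> What makes you think I hate you?
```

A handful of such rules is enough to keep a surprisingly engaging conversation going, which is exactly the point: the pattern behind the behaviour is simple.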

Software doesn’t suffer from the basic limitations of its fleshy counterparts. Software is infinitely patient, has perfect memory, and is universally available. Or available at just the right time to hook you (future AI megacorps can decide that).

AI partners will know far more about you than any human being. This means that, if built correctly, they will be able to understand you better than any human can. The only reason an AI partner would ask “why are you depressed?” is not that it doesn’t know, but that describing it would make you feel better. If it wouldn’t, the AI wouldn’t ask; it would just provide the most appropriate consolation.

So far, I’ve assumed that data can be a substitute for human experience. Is it possible for a model that has never lived or felt to even scratch the surface of human experience? If AI models are anything like humans, the answer would be a resolute “no”. There’s wisdom people only accumulate with age. A young writer’s work tends to lack the nuance of deeper human experience. As the writer lives, her work learns from her life.

It took Tolstoy 50 years of living to recognise that all happy families are alike, but each unhappy family is unhappy in its own way. But it is conceivable that a language model could be trained to produce insights of comparable complexity after just days of training. I’m not talking about parroting or rephrasing previous works. A motivated teenager can do that too. I’m talking about combining the knowledge from Tolstoy’s novels with news, online conversations, and whatever other data the model would find useful, to generate new insights about human experience.

Where current language models fall short of Tolstoy is in their ability to generate long-form text, especially text of the length and complexity of Tolstoy’s novels. This limitation is relevant to deep conversations too, as we expect a romantic partner to maintain at least some consistency over years. But I believe this limitation is temporary.

What are the implications for our society?

Technology has already simulated one significant component of our romantic lives: sex. I think porn is a good case study for some of the risks created by AI partners, though it differs in important ways.

Just like pornography, I believe that AI partners will enjoy broad popularity across our society, with varying degrees of impact within different subgroups. There will be some people for whom AI partners are a replacement for romantic relationships altogether, just as porn is a replacement for sexual relationships for some unfortunate individuals today. Teenagers are bound to learn a few things from AI partners. Hopefully, we can create software that makes this safe and actually useful rather than emotionally scarring and manipulative. But I don’t think the use of AI partners will cause a demographic crisis anywhere.

The biggest risk of AI partners lies in their degree of similarity to the real experience.

A romantic relationship with an AI partner is a real one. After all, you wouldn’t argue that a long-distance relationship is not a real one, would you[3]? So who cares if your Aussie sweetheart lives in a Queenslander or a GCP Brisbane data center? Your relationship is identical.

However, your long-distance Aussie sweetheart has flaws. For one, he might leave you. But your trusty AI partner would never do that. Well, as long as you pay the subscription and AI Brothel Unlimited doesn’t go under.

I see three possible scenarios of societal impact:

  1. Most people’s human-to-human conversations are ruined. Yes, they still have them. They still have romantic partners. But whenever they talk to a carbon-based being, in the back of their minds, they feel like their AI partner would get them better.
  2. Same impact as porn has now. It’s a guilty pleasure but not a replacement for the real thing.
  3. Same impact as Tamagotchis had on pet ownership. It’s just a fad.

Thanks to Luke Neville and Mihai Bujanca for reading drafts of this.

Footnotes

  1. I hope these examples are relatable, but if not, hit me up. Critical feedback is always welcome here.

  2. If you don’t, stop reading my essays and email me. I have more to learn from you than you do from me.

  3. If you would, and you disagree with the argument, that’s cool; I respect the logical consistency.