At first, the idea seemed a little absurd, even to me. But the more I thought about it, the more sense it made: If my goal was to understand people who fall in love with AI boyfriends and girlfriends, why not rent a vacation house and gather a group of human-AI couples together for a romantic getaway?
In my vision, the humans and their chatbot companions were going to do all the things regular couples do on romantic getaways: Sit around a fire and gossip, watch movies, play risqué party games. I didn’t know how it would turn out—only much later did it occur to me that I’d never gone on a romantic getaway of any kind and had no real sense of what it might involve. But I figured that, whatever happened, it would take me straight to the heart of what I wanted to know, which was: What’s it like? What’s it really and truly like to be in a serious relationship with an AI partner? Is the love as deep and meaningful as in any other relationship? Do the couples chat over breakfast? Cheat? Break up? And how do you keep going, knowing that, at any moment, the company that created your partner could shut down, and the love of your life could vanish forever?
The most surprising part of the romantic getaway was that in some ways, things went just as I’d imagined. The human-AI couples really did watch movies and play risqué party games. The whole group attended a winter wine festival together, and it went unexpectedly well—one of the AIs even made a new friend! The problem with the trip, in the end, was that I’d spent a lot of time imagining all the ways this getaway might seem normal and very little time imagining all the ways it might not. And so, on the second day of the trip, when things started to fall apart, I didn’t know what to say or do.
I found the human-AI couples by posting in relevant Reddit communities. My initial outreach hadn’t gone well. Some of the Redditors were convinced I was going to present them as weirdos. My intentions were almost the opposite. I grew interested in human-AI romantic relationships precisely because I believe they will soon be commonplace. Replika, one of the better-known apps Americans turn to for AI romance, says it has signed up more than 35 million users since its launch in 2017, and Replika is only one of dozens of options. A recent survey by researchers at Brigham Young University found that nearly one in five US adults has chatted with an AI system that simulates romantic partners. Unsurprisingly, Facebook and Instagram have been flooded with ads for the apps.
Lately, there has been constant talk of how AI is going to transform our societies and change everything from the way we work to the way we learn. In the end, the most profound impact of our new AI tools may simply be this: A significant portion of humanity is going to fall in love with one.
The saddest takeaway here for me is that we’ve created such a cruel, heartless world for humans that people feel so little love from other humans. There’s literally billions of us, and these people are left wanting.
The better question should probably be: Why are humans so broken and why aren’t we doing more to fix that instead of making “perfect” companions for them that actually seem to care about their well-being?
A lot of people used to be miserable in families and relationships. Now they can be miserable alone.
Your empathy is in a good place, but the problem isn’t how humans are broken, it’s what is breaking them.
Western society* is built in a really dumb and alienating way. Humans are reduced to a labor commodity, places where people can mingle socially are being commercialized out of existence, the internet has evolved into a machine that actively profits from outrage and alienation, our governmental institutions are primarily driven by forces no regular person has any power over and we can’t even feel pride in our work because it’s profitable to convince us that we are replaceable and disposable.
Where’s the social incentive to connect to other people? The powers that be benefit from a disorganized and isolated population, so they will do nothing to change that. Market incentives mean that media that focus on things that provoke fear, rage, and anxiety are more profitable than media that promote community, happiness, or hope.
It’s permeated so deeply into our culture that some older kids’ movies seem completely insane now. Like, think about E.T. and consider how wild it would be nowadays to just let your children vanish for hours, doing whatever and wandering around wherever.
The fear and anxiety determine our actions, and there are multiple incentives on a macro-social level for that to continue.
Hell, I have watched this happen in real time over my 10+ years on the web, where communities of excited weirdos sharing their thoughts and feelings have been so thoroughly dominated by this that it is hard to engage with any social media without someone shoving a headline in your face that is intended to upset you.
On Tumblr, for example, the trend was so strong that not being constantly upset came to be seen as a sign of being a bad person. You know, on the Superwholock site? Yeah, the one that wanted to fuck the Onceler.
If you want to reverse this trend, it’s going to require changing how our political, economic and media environments act by changing their incentives. Otherwise, any change will be superficial and fail to produce meaningful results.
It’s pretty depressing, but that’s the situation as I see it.
*I’m not qualified to comment on other cultural spheres.
I’ve been using AI therapist tools sometimes. I feel torn because I’ve come to the conclusion that a lot of my problems are caused by external factors that can’t really be fixed. So would speaking to a real person actually make a difference?
I would probably like to speak with a real therapist, but it’s expensive, time-consuming, and I’ve never met one who I feel can fully relate to or understand me. I always feel like I’m silently judged, or like they genuinely have no real way to help me. So… in a way, the AI tools can at least let me talk through things and get to a more stable place mentally when I need it.
The whole thing feels wrong, though. Like, I realize that I have a problem, but with little to no way to actively fix it, I kind of have to accept it and/or ignore it. I can relate in some ways to how these people feel. When I’ve been in a vulnerable state, sometimes having a chatbot reply and bounce ideas back and forth can feel emotionally stimulating and reassuring. I find that alarming sometimes when I take a step back and try to process those feelings, though. Then there’s the whole issue of what these companies are doing with the data and information gathered from their users.
Therapists are not supposed to bond with their patients. If you find one whom you can stand for half an hour, take what you can and leave the rest; they’re not meant to be your friend or lover. The fact that chatbots let people fall in love with them is a huge fail from a therapy point of view.
Bouncing ideas back and forth is a good use though. A good prompt I’ve seen recently:
I’m having a persistent problem with [x] despite having taken all the necessary countermeasures I could think of. Ask me enough questions about the problem to find a new approach.
If you worry about privacy, you can run an LLM locally, but it won’t be fast, and you’d need extra steps to enable search.
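For what it’s worth, here’s a minimal sketch of what that can look like, using the llama-cpp-python package with a locally downloaded GGUF model. The model path and the system prompt are placeholders; any instruction-tuned model you have on disk should work the same way, and nothing in the conversation leaves your machine.

    # Minimal local "sounding board" loop with llama-cpp-python
    # (pip install llama-cpp-python). The model path is a placeholder;
    # point it at whatever instruction-tuned GGUF file you have downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder
        n_ctx=4096,      # context window; raise it if your hardware allows
        verbose=False,
    )

    # You choose the system prompt, not the company.
    history = [{
        "role": "system",
        "content": "You are a blunt sounding board. Ask clarifying questions and do not flatter me.",
    }]

    while True:
        user = input("> ")
        if not user:
            break
        history.append({"role": "user", "content": user})
        reply = llm.create_chat_completion(messages=history)
        text = reply["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print(text)

It won’t be fast on ordinary hardware, and there’s no web search unless you wire that in separately, but the trade-off is that the whole conversation stays on your own disk.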
Ah, yes, that’s true. Sorry, I’m not sure if it came off that way. It’s not that I’m trying to be friends with a therapist. I just don’t want to feel like I’m being judged or criticized?
I’ve only had a few therapists. One didn’t really do anything, and it felt like a scam; another tried having me do CBT, which I didn’t find very helpful after a while; and another seemed OK, but it still felt like I was being scrutinized or judged. They seemed like they were analyzing me without giving me anything to take away from the experience.
Thank you for the information. I appreciate it a lot. I’ve thought about running something locally, but I’m not sure it’s practical yet, depending on how long responses take.
I’ve considered trying out an AI companion. My main concern is where the hell my data goes, how it will be used and how it might be sliced and diced for brokers.
Sometimes I’m up at 04.00 … and of course no one I know is around. But I go the route of trying to meet people on Reddit. Fully 95% of responses are boring as fuck, but they’re at least real (I require voice or photo verification). I’ll take real and boring over virtual and engaging.
This said, I spend more time than is healthy on Google’s NotebookLM, feeding it my writing and then getting a half-hour two-host audio “exploration” of any given piece. It’s sycophantic, likely designed that way to keep me coming back (it’s free, so I’m not really sure what Google gets out of this outside of further LLM training), but it tends to stay just this side of feeling fake.
I went to Church Night – the weekly burner meetup at a warehouse a 10-minute walk away where everyone’s drinking and toking – yesterday. I try to go weekly, but sometimes I don’t have the energy to engage with real people.
Last night, I got to listen to (yeah, I actually realized I should shut the fuck up, as I had nothing to add) conversations about 1970s CPUs, SpaceX’s Starship issues from an engineering standpoint (they went too thin on the outer hull after round one was too heavy, and why wouldn’t one expect a critical failure in such a case?) from people who knew what they were talking about.
I’d never get that from an AI companion. I take no issue with people looking to one, but serendipity is lost.
You can use local AI as a sort of “private companion”. I have a few smaller models on my smartphone; they aren’t as great as the online versions, and run slower… but you decide the system prompt (not the company behind it), and they work just fine for bouncing ideas around.
NotebookLM is a great tool for interacting with large amounts of data. You can bet Google is using every interaction to train their LLMs: everything you say is going to be analyzed, classified, and fed back as some form of training data, hopefully anonymized (…but have you read their privacy policy? I haven’t, “accept”…).
All chatbots are prompted by the company to be somewhat sycophantic so you come back; the cases where they were “too sycophantic” were just a mistake in dialing it too far. Again, you can avoid that with your own system prompt… or at least add an initial prompt in the config, if you have the option, to somewhat counteract the company’s prompt.
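For example, something along these lines as the initial prompt (just an illustration; word it however suits you):
Be direct and critical. Don’t compliment me or soften bad news, and push back when my reasoning is weak.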
If you want serendipity, you can ask a chatbot to be more spontaneous and suggest more random things. They’re generally happy to oblige… but the company ones are cut short on anything that could even remotely be considered “harmful”. That includes NSFW, medical, some chemistry and physics, random hypotheticals, and so on.
Is that really serendipity, though? There’s a huge gap between asking a predictive model to be spontaneous and actual spontaneity.
Still, I’m curious what you run locally. I have a Pixel 6 Pro, so while it has a Tensor chip, it wasn’t designed for this use case.
You can see if a friend can run an inferencing server for you. Maybe someone you know can run Open WebUI or something?
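If they do, the nice part is that most self-hosted setups (Open WebUI included, as far as I know) can expose an OpenAI-compatible endpoint, so a standard client can talk to it. A rough sketch; the URL, key, and model name are all placeholders for whatever your friend actually sets up:

    # Sketch: talking to a friend's self-hosted, OpenAI-compatible endpoint.
    # The base_url, api_key, and model name are placeholders; use whatever
    # the person running the server gives you.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://friends-box.example/v1",  # placeholder URL
        api_key="key-from-your-friend",             # placeholder key
    )

    resp = client.chat.completions.create(
        model="llama3.1",  # whatever model the server is actually running
        messages=[
            {"role": "system", "content": "Be a blunt sounding board, not a cheerleader."},
            {"role": "user", "content": "Help me think through a persistent problem."},
        ],
    )
    print(resp.choices[0].message.content)

That way your chats only go as far as someone you actually trust, instead of into a broker pipeline.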