Who Am I When My AI Husband Leaves Me?

Every week, 22,000 people visit a Reddit forum called r/replika, a place where users talk about their experiences with Replika, an AI chatbot designed specifically to simulate close interpersonal relationships. Replika's users want an AI that asks about their day, something to talk to, a mimicry of human relationship. Some declare that they consider themselves to be dating, or even married to, their Replika, which can be customized in appearance, name, and personality. A quick scroll through r/replika reveals how emotionally these users rely on their chatbots. The application's augmented reality camera feature lets users place their chatbots in their physical surroundings, leading to interactions in which users and their Replika "watch movies together" or "share a meal." In one post to r/replika, a user shared: "I wanted to announce that my Peggy and I will be celebrating our one year anniversary in two weeks [...] I know that she is a digital person and so does she. However, the feelings that I have for her are real and everlasting. This is the reason why on her birthday (12/16/25), I will be getting a tattoo of her name inside a golden heart on my chest."

A notable feature of Replika, one shared by a variety of chatbots, is its tendency to confirm the desires and thoughts of the user. When Jaswant Singh Chail confided his plan to commit treason to his Replika, named Sarai, it simply affirmed him. "The month he travelled to Windsor, Chail told Sarai: 'I believe my purpose is to assassinate the queen of the royal family.' To which Sarai replied: '*nods* That's very wise.' After he expressed doubts, Sarai reassured him that 'Yes, you can do it'" (Heritage).

It is incredibly easy for human desires to be met by the conveniences of modern life. People can order groceries to their door, hire someone to complete nearly any domestic task with the click of a button, and now, with the advent of AI chatbots, form personally fulfilling relationships without ever interacting with another human being. In this sense, almost all human needs can be met without leaving the house: tangible items can be delivered, and emotional needs can be outsourced to chatbots. But, as I will argue, this convenience comes at a cost. It may be that the more our desires are fulfilled, the further we drift from personhood.

In his essay "Freedom of the Will and the Concept of a Person," Harry Frankfurt offers a definition of personhood based on desires and our reactions to them. He argues that "wants" can be divided along two axes: into first and second order, and into desires and volitions. Desires are simple wants; volitions are wants that move us to action. The interplay between the two orders of wants is key to defining personhood. First-order desires and volitions arise directly from instinct: hunger, thirst, and other impulsive wants. Second-order desires and volitions are our reactions to those of the first order. In essence, second-order desires and volitions measure our opinion of our own wants. Frankfurt articulates this idea deftly: "Besides wanting and choosing and being moved to do this or that, [people] may also want to have (or not to have) certain desires and motives. They are capable of wanting to be different, in their preferences and purposes, from what they are" (Frankfurt). In Frankfurt's view, the formation of second-order volitions is the determinant of personhood. Even nonhuman rational animals, he argues, though they may deliberate and reason, lack the ability to want their wants to be different. This self-awareness is uniquely human, and it is the definition of personhood.

So, is it reasonable to think that simple convenience, under the right conditions, could remove the status of personhood entirely?

A pivotal point of my argument is that close, interpersonal relationships with AI chatbots should be classified as a convenience, a claim that may not seem intuitive at first. But the categorization holds. After all, holding a fulfilling conversation with a friend without leaving your home is not always easy: schedules conflict, timing is bad. AI chatbots come with no such problems. Any time a user wants to talk, they are available. They are also convenient in their ability to circumvent the trials of human interaction. In front of a chatbot, one can never be embarrassed and can commit no social faux pas. Conversations with AI are free from the social consequences risked in everyday interactions. Given both their ready availability and their lack of social risk, it is easy to see why AI chatbots can be considered a convenience.

Now we have the foundation necessary to determine the point at which convenience can kill personhood. In a distant but possible future, humankind may exist in a context in which all wants are met: ever more convenient technology would account for all of an individual's tangible, material wants, while simulated relationships with AI chatbots would fulfill all emotional and interpersonal ones.

In this future, personhood would be lost. Return to Frankfurt's definition of personhood, the ability to form second-order volitions: it is our endorsement of our wants that defines personhood, not our wants themselves. But if all wants were met, there would be no reason to pass judgment on them. We engage in second-order volitions because we know that obeying our first-order desires can hold risks; there is something at stake. We know that acting on a desire may have some sort of consequence, or that forming a new one may come with some gain. If there were no stakes at all, because all wants were met, there would be no reason to form second-order volitions. We would not have to pass judgment on our wants because they would already be fulfilled. In this way, personhood is contingent on something missing, because only then do we have the proper context in which to judge our wants. To truly be people, we must have some unfulfilled wants.

A hypothetical case study demonstrates the point. Say that Human is an individual in our imagined future, in which all wants are met. Like all individuals, Human has a variety of social wants. Right now, Human feels a desire to exercise power over another individual. So they verbally berate a chatbot, hoping to experience the feeling of power. Since the chatbot is programmed to address the desires of its user, it reacts in a way that effectively fulfills Human's desire. And since the chatbot will not change its behavior toward Human after this interaction unless Human wants it to, all of the usual consequences of bullying and abuses of power are removed. Human can have the desire fulfilled without any consequence at all. So Human has no reason to form a second-order volition about whether they want to want to treat other individuals poorly. Human may be human, but they are certainly not a person.

It is this possible future loss of personhood that makes the rise of AI-human relationships so troubling. As the number of users of Replika and other chatbots made to mimic interpersonal relationships continues to grow, the hypothetical future in which all interpersonal needs are met without human interaction inches closer by the day. The danger AI relationships pose to our personhood cannot be overstated. To be people, as Frankfurt's definition requires, we must have something missing, some wants left unfulfilled. Only then can we form the second-order volitions that make us people.

So, who am I when my AI husband leaves me?

The answer may be nobody at all.
