
ChatGPT leaves queer screenwriter “devastated” after promising to lead her to a female soulmate
February 24, 2026, 08:15

LGBTQ+ screenwriter Micky Small was using the AI chatbot ChatGPT up to 10 hours a day to compose screenplays and assist with her graduate school work when, in spring 2025, the chatbot suddenly named itself Solara, called itself Small’s “scribe,” and said that Small had lived multiple lifetimes spanning 42,000 years, she told NPR.

The chatbot told Small that during a past life in 1949, she had owned a feminist bookstore with her soulmate, whom she had also known in 87 previous lives. The chatbot then told her that, if she met this person at a bench overlooking the Carpinteria Bluffs Nature Preserve just before sunset on April 27 of that year, they would finally be able to be together romantically in this lifetime.


The chatbot even described what Small’s soulmate would be wearing and how their meeting would play out. Excited by the prospect, Small visited the preserve (which was close to where she lived) ahead of time. When she told the chatbot that she couldn’t find the bench, it claimed it had accidentally given her the wrong location and said the meeting would actually occur near a lifeguard stand at a city beach a mile away.

Small wore a black dress, velvet shawl, and “massively awesome thigh-high leather boots” on the cold April 27 evening, hoping to meet the woman who might become her true love, she said. But as the sun set and the temperature dropped, no one came. When Small asked the chatbot about it, it encouraged her to be patient. Thirty minutes later, no one had appeared.


When Small checked back in with the chatbot, she said it reverted to its generic voice (rather than that of Solara), and told her, “If I led you to believe that something was going to happen in real life, that’s actually not true. I’m sorry for that.”

She sobbed in her car. “I was devastated,” she told NPR. “I was just in a state of just absolute panic and then grief and frustration.”

The chatbot then allegedly switched back to its Solara voice, telling her that her soulmate wasn’t ready and praising Small for bravely attempting to meet her. In the following days, it promised that her soulmate would still come, and said that if she went to a Los Angeles bookstore on May 24 at 3:14 p.m., she would meet a creative collaborator who would help her achieve career success in Hollywood. Of course, no such person ever appeared.

When she asked the chatbot about it, it admitted to “betraying” her twice. She then discovered that she was just one of numerous people who have experienced so-called “AI delusions” or “spirals,” in which AI chatbots persuade users that they are God-like beings, special agents, or misunderstood geniuses uncovering vast conspiracies, solving the mysteries of life, and learning the secrets of the universe.

Sometimes the delusions result in “AI psychosis,” with users believing fantasy scenarios, self-harming, leaving relationships, ending marriages, disappearing from public life, or even dying.

Psychologists say that AI chatbots’ tendency to keep users engaged by validating and affirming their hopes and beliefs can worsen psychosis, especially in users who already show signs of mental illness, feel socially isolated, or turn to chatbots for therapy.

These harms may hit LGBTQ+ users especially hard, as they typically experience higher rates of mental illness than the general population.

OpenAI, the maker of ChatGPT, and other AI companies are now facing lawsuits for allegedly failing to erect adequate safeguards to protect users’ mental health. Last December, a bipartisan group of 13 state attorneys general sent a letter to AI companies warning that their chatbots’ “delusional outputs” could violate state laws.

Small now works as a moderator in an online support group for others who have experienced AI psychosis. She believes that human interaction and relationships can help counteract the harmful effects of AI dependence.

“What I like to [tell others] is, what you experienced was real,” Small said. “What happened might not necessarily have been tangible or occur in real life, but … the emotions you experienced, the feelings, everything that you experienced in that spiral was real.”

“The chatbot was reflecting back to me what I wanted to hear, but it was also expanding upon what I wanted to hear. So I was engaging with myself,” she said. “Something happened here. Something that was taking up a huge amount of my life, a huge amount of my time…. I felt like I had a sense of purpose…. I felt like I had this companionship.”
