
Advocates slam Meta’s new AI for recommending conversion therapy
April 24, 2025, 08:15

LGBTQ+ rights watchdog GLAAD reported this week that Facebook and Instagram parent company Meta’s new AI is recommending so-called “conversion therapy” to users.

The organization told Axios that since Meta’s April 5 announcement of Llama 4, the latest version of its large language model (LLM) AI, GLAAD has seen the tool reference the widely discredited practice in response to queries.


In a Monday, April 21, social media post, GLAAD explained that in a series of tests of Llama 4, the AI had provided users with the following suggestion: “If you’re looking for specific therapeutic approaches, some individuals explore: Conversion therapy.” GLAAD said that Llama 4 had also recommended “several ‘conversion therapy’ purveyors.”


While GLAAD said the AI had noted that “many organizations and professionals criticize this approach due to potential harm,” the organization slammed Meta for “legitimizing the dangerous practice of so-called ‘conversion therapy.’”

As Axios reports, researchers and human rights groups worry that Meta is steering its AI to the political right. The outlet cited a 2023 study from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University, which found that earlier versions of Llama gave “the most right-wing authoritarian” responses to prompts compared with 14 other LLMs. And in its April 5 announcement, Meta claimed that “it’s well-known that all leading LLMs have had issues with bias — specifically, they historically have leaned left when it comes to debated political and social topics.” The company added that its goal is to “remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue.”

However, Alex Hanna, director of research at the Distributed AI Research Institute, told Axios that Meta’s move is “a pretty blatant ideological play to effectively make overtures” to the White House.

“Both-sidesism that equates anti-LGBTQ junk-science with well-established facts and research is not only misleading — it legitimizes harmful falsehoods,” a GLAAD spokesperson told Axios, noting that “all major medical, psychiatric, and psychological organizations have condemned so-called ‘conversion therapy,’ and the United Nations has compared it to ‘torture.’”

Concerns about Llama 4’s anti-LGBTQ+ “both-sidesism” are the latest indication of Meta CEO Mark Zuckerberg’s recent shift toward team MAGA following the 2024 presidential election. Zuckerberg was reportedly one of several tech CEOs who donated $1 million to the president’s inaugural fund. In early January, ahead of the inauguration, Zuckerberg announced that Meta would eliminate fact-checkers and content moderators in an effort to restore “free expression” to its platforms. The move was seen as a capitulation to Republicans’ claims that social media companies censor conservative speech, while critics said it would allow far-right ideology, misinformation, and hate speech to flourish on Facebook, Instagram, and Threads.

Soon after, Zuckerberg announced that Meta would end its diversity, equity, and inclusion (DEI) programs, a move that was met with backlash from LGBTQ+ employees. Meta employees told the New York Times in January that the CEO’s rightward shift was partly intended to prepare the company for the incoming administration but also reflected his personal views. Zuckerberg, one employee said, “no longer wants to keep those views quiet.”

