
AI disproportionately discriminates against trans people. It doesn’t have to be this way.

Whether we like it or not, artificial intelligence (AI) is becoming ubiquitous in our daily lives, and researchers at the University of Michigan have found that trans and nonbinary people are less likely than cisgender people to view AI favorably. According to another study from earlier this year, there might be a good reason for that.

“In our study, we found that nonbinary and transgender people are significantly more likely to view AI negatively than cisgender people,” Associate Professor Oliver Haimson, lead author of the UMich paper and author of Trans Technologies, told LGBTQ Nation. “With AI becoming so pervasive in everyday life, it’s important to understand that not everyone wants to use it – which is troubling because they are often required to use AI (such as in healthcare and employment contexts) without meaningful consent.”


The team at UMich surveyed more than 700 participants, combining a nationally representative sample with an intentional oversample of gender minorities. Participants were then asked about their attitudes toward AI, including how they viewed the technology, whether they thought it would improve their lives, and whether they would use it themselves.

In their paper, the researchers note that trans people's distaste for AI is not without merit, nor is it rooted in a broader distrust of institutions. Rather, it draws on a history of documented problems with the technology itself.


“AI harm toward trans and nonbinary people has been demonstrated in technologies including facial recognition algorithms in law enforcement and surveillance, algorithmic content moderation, and stereotypical and offensive imagery of trans and nonbinary people produced by generative AI systems,” the researchers say. “AI harm experienced by gender minorities can be physical, psychological, social, and economic, resulting in algorithmic misgendering and violations of privacy and consent.”

AI is not the impartial, omniscient oracle some would like to believe it is. It is a collection of algorithms created by humans, and because those humans have biases, the models they build inevitably inherit some of them.

In their paper, The Ethics of AI at the Intersection of Transgender Identity and Neurodivergence, Max Parks found that historical biases against both the trans and neurodivergent communities were not only inherited by AI models but amplified by them.

Parks provides a medical example to highlight the problem. “Because medical professionals labeled [trans and nonbinary people] as deviant for decades, the underlying data in medical records and public datasets still reflect those biases. Understanding these connected histories helps explain why current AI systems often perpetuate similar patterns of exclusion and normalization at the intersection of transgender and neurodivergent identities.”

Another example Parks notes is that “facial recognition technologies often rely on datasets built around binary gender categories, leading to misclassification or exclusion of transgender and nonbinary individuals… These misclassifications have direct repercussions for algorithmic design.”
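To make that mechanism concrete, here is a minimal, hypothetical sketch (not drawn from any real system Parks studied) of why a binary label set guarantees misclassification: a classifier can only ever emit the labels it was trained on, so anyone outside those categories is forced into one of them.

```python
import numpy as np

# Hypothetical training set labeled with only two gender categories.
# The numbers are illustrative stand-ins for extracted facial features.
features = np.array([[0.2, 0.9], [0.3, 0.8], [0.8, 0.1], [0.9, 0.2]])
labels = ["woman", "woman", "man", "man"]  # the model's entire label vocabulary

def nearest_centroid(x, features, labels):
    """Assign x to the closest class centroid. The classifier has no way
    to answer 'neither' or 'nonbinary': those labels never existed."""
    classes = sorted(set(labels))
    centroids = {c: features[[l == c for l in labels]].mean(axis=0)
                 for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))

# A point that fits neither group well is still forced into a binary bin.
print(nearest_centroid(np.array([0.5, 0.5]), features, labels))  # -> "woman"
```

No post-hoc tuning of such a model can surface the missing categories, which is why Parks' argument, discussed below, targets the design and training phases rather than downstream adjustments.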

AI has been trained on historical data that often recognized only cis binary genders. The technology has a lot to learn about the world, but learning is made harder by the current administration's policies, such as the "two sexes" executive order, which decreed that only cis binary genders exist. While that claim is obviously untrue, the order means that only cis binary genders can be entered on government forms. The president also recently signed an order blocking states from regulating AI.

That order caused concern for Haimson, who said that, if enforced, it "essentially means that AI will not be regulated in the US, at least for the next few years."

All of this is frustrating for those who see the benefits AI could have if these roadblocks weren’t in place. Parks points out that it “holds promise for improving healthcare access and outcomes for transgender and neurodivergent communities when properly designed and implemented.”

“For instance, AI-driven telehealth platforms have shown potential in increasing access to gender-affirming care, particularly in rural or underserved areas. Community health workers can use AI-enabled tools to locate trans-friendly providers by analyzing patient feedback data, thus reducing the documented barriers that transgender individuals face in finding affirming care. Similarly, AI-based voice training technologies, which help trans individuals align vocal presentation with gender identity, are being explored for broader, community-led implementations. These community-informed approaches demonstrate the potential for AI to become a proactive ally rather than an instrument of marginalization.”

Parks also notes a significant barrier to reaching that point. “To succeed, developers must actively incorporate trans and neurodivergent perspectives in the design and training phases, avoiding the cisnormative or neurotypical biases that can plague these technologies.”

But even attempts to fix these issues might cause problems under the current administration: “Efforts to gather more data on transgender and neurodivergent populations, while well-intentioned, can inadvertently expose these groups to heightened vulnerability,” Parks said.

Despite the current flaws in AI technology, it is still being used for therapy by trans people who can’t find a viable real-world therapist. Haimson said the popularity of using AI for therapy “is a response to the lack of access to mental healthcare services that so many people face. While people may find AI ‘therapists’ helpful, it’s important to keep in mind that AI chatbots can put people at risk because they do not have the education and certification to be able to ethically provide mental healthcare,” he added.

“Further, they are mostly unregulated and cannot be held accountable if something goes wrong. For trans people, who often need quality mental healthcare providers to help them navigate gender exploration and instances of anti-trans discrimination and harassment, AI-based mental healthcare can be especially risky. We need to fix the real problem here, which is a scarcity of mental healthcare providers and lack of access for those who need it most.”
