Offline events, online threats: Analyzing online extremist targeting of LGBTQ+ communities
5 June 2025
ISD analysis of US domestic violent extremism online between August 2024 and January 2025 found a dramatic increase in anti-LGBTQ+ activity around key political events, including the election and inauguration, which saw the level of targeted hate double.
Overall, we detected more than 97,000 comments during this period from more than 1,000 violent extremist online channels, which received over 3 million likes, comments and shares. Our analysis deployed bespoke Large Language Model (LLM)-based classifiers, trained by experts, to detect anti-LGBTQ+ targeted hate: content that sought to dehumanize, demonize, express contempt or disgust for, harass, threaten, or incite violence against LGBTQ+ communities.
Our analysis found dramatic changes in the makeup of violent extremists engaged in anti-LGBTQ+ targeted hate across the six-month monitoring period. While racially or ethnically motivated extremists (including neo-Nazis and white supremacists) were originally the most active, they were supplanted by more ideologically amorphous extremists during the second half of the monitoring period. These included violent Manosphere and extreme misogyny accounts, violent conspiracy theory networks, and communities of violent extremists who blend supremacist ideology with nihilism, misanthropy and an obsession with violence. Our data also showed violent extremists increasingly targeting trans individuals.
Figure 1: Volume-over-time graph of anti-LGBTQ+ hate by violent extremists between August 1, 2024 and January 31, 2025. Excludes data from forums, which were introduced halfway through the monitoring period. The August 25 spike is unannotated as it was the result of inorganic amplification of a Telegram post containing a homophobic slur.
Key findings
- Between August 2024 and January 2025, violent extremist accounts and channels produced more than 97,000 anti-LGBTQ+ posts across nine platforms and four violent extremist-centric imageboards/forums. This content received more than 3 million likes, comments and shares. Over 40 percent of activity came from forums and imageboards, while more than 20 percent was from the messaging app Telegram.
- Ideologically amorphous accounts became most actively engaged in anti-LGBTQ+ rhetoric over the course of the six-month monitoring period. While racially and ethnically motivated violent extremists started as the most active category for anti-LGBTQ+ targeted hate, the proportion of anti-LGBTQ+ content from ideologically amorphous violent extremists rose from 34 percent in August-September to 48 percent by December-January.
- Violent extremists engaged in anti-LGBTQ+ targeted hate were most animated by discussions targeting the community with accusations of degeneracy and grooming. Qualitative analysis found that accusations of pedophilia within the LGBTQ+ community were common among online violent extremists who engaged in homophobic or transphobic targeted hate. Drawing on distinct classifiers for identifying transphobic and homophobic messages, we found trans people were increasingly subjects of targeted hate in the post-election period, as transphobic rhetoric rose from 35 percent of all anti-LGBTQ+ rhetoric in October-November to 46 percent in December-January.
- Violent extremist targeted hate against the LGBTQ+ community online spiked in response to real-world events and political developments, demonstrating the close relationship between offline developments and online violent extremist activity. Violent extremist accounts were particularly animated by events related to the trans community, such as false accusations claiming Olympic boxer Imane Khelif was transgender following one of her victories, Transgender Day of Remembrance, and the day of President Trump’s inauguration, when the executive order on gender ideology was announced.
Figure 2: Total anti-LGB and anti-trans comments, broken down by monitoring period. Results do not include data from violent extremist-centric forums, which were introduced halfway through the analysis.
Methodology
Analysis drew on a dataset of over 1,000 US-linked accounts and channels across a range of platforms and violent extremist ideologies. Accounts were vetted by experts for clear engagement in violent extremist behavior, with rigorous review ensuring that accounts expressed:
- Extremism: Advocating for an extremist ideology or worldview.
- Violence: Promoting terrorism or unlawful violence, association with a group with a history of violence, or supporting designated Foreign Terrorist Organizations.
- US-relevance: Operated by individuals or groups based in the US or which produced content primarily focused on the US.
Data was collected from fringe and mainstream platforms, including Instagram, Facebook, Telegram, X, YouTube, 4chan’s /pol/ board, and several violent extremist-centric forums/imageboards including 8kun, incels.is and stormfront.org (referred to as ‘the forums’),1 via platforms’ public Application Programming Interfaces (APIs) or via Brandwatch.
Analysis incorporated bespoke Large Language Models (LLMs) to identify key trends in violent extremist discourse, such as prominent narratives, targets of violent extremist activity, and the nature and extent of targeted hate against minority communities. To identify anti-LGBTQ+ targeted hate, ISD analysts worked with our tech partners at CASM Technology to develop bespoke LLM-based classifiers that capture both homophobic and transphobic discourse with a high degree of precision (over 80 percent). A full methodology is available to researchers on request.
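The report does not publish the classifier internals, but the approach it describes (expert-designed, LLM-based labeling of posts as homophobic, transphobic, or neither) can be sketched as a prompt-and-parse pipeline. Everything in the sketch below is an illustrative assumption, not ISD/CASM's implementation: the prompt wording, the label set, and the injected `call_llm` stub are all hypothetical.

```python
# Minimal sketch of an LLM-based targeted-hate classifier pipeline.
# The prompt wording, label set, and `call_llm` stub are illustrative
# assumptions only, not ISD/CASM's actual classifier.

LABELS = {"homophobic", "transphobic", "none"}

# Hypothetical prompt template; a production classifier would carry
# detailed, expert-written labeling guidance instead.
PROMPT_TEMPLATE = (
    "You are a content-moderation classifier. Label the post as exactly "
    "one of: homophobic, transphobic, none.\n\nPost: {post}\n\nLabel:"
)


def build_prompt(post: str) -> str:
    """Embed a single post in the classification prompt."""
    return PROMPT_TEMPLATE.format(post=post)


def parse_label(raw: str) -> str:
    """Normalize a raw model completion into one of the known labels."""
    parts = raw.strip().lower().split()
    if not parts:
        return "none"
    token = parts[0].strip(".,")
    return token if token in LABELS else "none"


def classify(post: str, call_llm) -> str:
    """Classify one post with an injected model-call function.

    `call_llm` stands in for whatever LLM backend is used; injecting it
    keeps the pipeline testable without a live model.
    """
    return parse_label(call_llm(build_prompt(post)))
```

The stated precision figure (over 80 percent) would then come from comparing such machine labels against an expert-annotated evaluation set, precision being the share of posts the classifier flags that the annotators agree are targeted hate.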
Data analysis
Platform trends
ISD researchers found that US violent extremism-linked imageboards and online forums hosted the most anti-LGBTQ+ content in our dataset. These sites accounted for 41 percent of anti-LGBTQ+ activity across both fringe and mainstream social media platforms, with anti-LGBTQ+ content within these forums increasing by 17 percent during the monitoring period. Messaging app Telegram was also a prominent source of anti-LGBTQ+ violent extremist activity, comprising 22 percent of hateful messages in our dataset. Notably, Bluesky was a major growth platform during the period, showing violent extremists’ embrace of emerging platforms.
Anti-LGBTQ+ content from violent extremists received more than 3 million likes, comments and shares across platforms between August 2024 and January 2025. Anti-LGBTQ+ content on X received more than 1.7 million engagements, the highest level across the platforms that provide engagement data. Messaging app Telegram also saw considerable engagement, with anti-LGBTQ+ targeted hate content receiving 60.7 million views and 173,948 likes. Specific anti-trans hate on Telegram received more than 48 million views and 149,682 likes.
Threat actors
The ideological composition of anti-LGBTQ+ targeted hate shifted dramatically during the collection period, based on expert classification of violent extremist accounts into different ideological communities. Racially or ethnically motivated violent extremist (REMVE) accounts engaged in the highest proportion of anti-LGBTQ+ rhetoric in August and September, constituting 50 percent of targeted hateful activity. However, ideologically amorphous actors became the most active violent extremists in October and remained so through January 2025: in December-January, they comprised 48 percent of anti-LGBTQ+ hate.
Figure 3: Total anti-LGBTQ+ posts from our dataset.
Figure 4: Total anti-transgender posts from our dataset.
Among REMVE accounts, users motivated by accelerationism (a doctrine which seeks to hasten the collapse of modern societal and political structures) and neo-Nazi beliefs were the most active in promoting anti-LGBTQ+ targeted hate. In August and September, accelerationist and neo-Nazi/white supremacist users accounted for nearly 17 percent of all anti-LGBTQ+ hate across our dataset. In October and November, when ideologically amorphous accounts became the most active across platforms, activity was instead dominated by violent conspiratorial accounts.
Figure 5: Total anti-LGBTQ+ posts from our dataset, broken down by threat category and monitoring period.
The number of anti-LGBTQ+ messages shared by violent extremists spiked in response to real-world events and political developments, demonstrating the close relationship between offline developments and online violent extremist activity. Researchers found spikes in activity from ideologically amorphous accounts following events involving the trans community, such as false accusations claiming Olympic boxer Imane Khelif was transgender following one of her victories, National Transgender Day of Remembrance (November 20), and the executive order on gender ideology, which was announced shortly after President Trump was inaugurated (January 20). REMVE accounts received particularly high levels of engagement on August 2 around discussions relating to Khelif, an Algerian boxer competing in women’s boxing at the Paris Olympics who became the target of online abuse relating to false claims that she is a man.
Mobilizing narratives
Within this violent extremist dataset, researchers analyzed broader anti-LGBTQ+ narratives, including conspiratorial claims associating the community with grooming, pedophilia and mental illness. Beyond the overt targeted hate detected by the bespoke classifiers analyzed above, broader topic modeling of online violent extremist discourse found an additional 14,823 posts featuring more ‘mainstream’ homophobic and anti-trans rhetoric within violent extremist conversations across the six-month monitoring period.
Prominent narratives within the dataset of anti-LGBTQ+ posts included:
- Conspiracy theories of degeneracy, grooming and pedophilia
Violent extremists frequently accused LGBTQ+ public figures of illegal activities, including grooming young children and engaging in pedophilia. Researchers observed that posts framing gender-affirming parenting as child abuse and accusing gay people of Satanic and pedophilic practices received high engagement.
- Attacks against specific transgender individuals
Analysis revealed an increase in anti-trans rhetoric over the period: messages containing anti-trans language rose from 21 percent of all anti-LGBTQ+ targeted hate in October-November to 46 percent in December-January. During the monitoring period, trans people were most often accused of engaging in pedophilia. In the lead-up to and aftermath of the US elections, violent extremists engaging in anti-trans targeted hate often called for transgender people to be removed from military service.
- Explicit support for violence against LGBTQ+ people
Using bespoke classifiers trained to detect violent speech against the LGBTQ+ community, researchers found that 5 percent of anti-LGBTQ+ messages posted by violent extremists contained explicitly violent rhetoric advocating for harm against members of the community. Conspiratorial accounts were the user category in our dataset most likely to call for violence against LGBTQ+ individuals.
Conclusion
This data analysis of the activities of US domestic violent extremists shows the close relationship between online targeted hate against LGBTQ+ communities and real-world political developments. It also shows that anti-LGBTQ+ targeted hate is advanced by violent extremists across a broad range of ideologies and on a variety of mainstream and fringe platforms.