
David Myles: The Paradox of Digital Platforms for LGBTQ+ Communities

June 10, 2025


The series “Tour d’horizon en trois questions” (an overview in three questions) features research in all its forms, along with insights into current events.

David Myles, a professor at the Institut national de la recherche scientifique (INRS), explores the cultural, social, and economic implications of digital platforms. His work focuses on the intersections between digital life, popular culture, and lesbian, gay, bisexual, trans, and queer (LGBTQ+) communities. 

While digital platforms offer genuine meeting places, they also pose increased risks of online violence. Professor Myles provides an overview of his research on these pressing societal issues. 

In addition to his teaching at the INRS Urbanisation Culture Société Research Centre, David Myles is the director of Qultures, a research lab on LGBTQ+ cultures. He is also a member of LabCMO, a laboratory on communication and digital media, and of UQAM’s Chaire de recherche sur la diversité sexuelle et la pluralité des genres (research chair on sexual diversity and gender plurality).

What role do digital platforms play within LGBTQ+ communities? How are they used, and what can they offer these communities?

The Internet has profoundly transformed social, cultural, and political dynamics within LGBTQ+ communities. Starting in the 1990s and 2000s, it gave LGBTQ+ individuals a way to create their own online spaces for socializing, sharing resources, and expressing themselves. It also enabled them to bypass many sources and forms of discrimination, especially in regions or environments hostile to their identities.

In the 2010s, digital platforms helped create—at least initially—more visible and supportive communities. Dating apps like Grindr and Tinder transformed intimate practices and how people engage with public space, while social media platforms like Twitter and Instagram enabled activist campaigns, often through hashtags like #LoveWins or #TransRightsAreHumanRights. Platforms like YouTube and TikTok gave LGBTQ+ creators influential voices, and streaming services like Netflix and Crave expanded the availability of queer and inclusive cultural content.

Today, some of these platforms are criticized for reproducing forms of discrimination against LGBTQ+ communities and for fostering cyberbullying and hate speech. Research shows how online harassment and hate speech can have real and harmful consequences for the safety and well-being of sexual and gender minorities, especially youth. Moreover, the rise of artificial intelligence (AI) raises new concerns about equity and privacy that disproportionately affect LGBTQ+ individuals. While the Internet was once seen as a refuge for these communities, its benefits are now being questioned in light of these evolving dynamics. These issues were explored in a special issue of the Canadian Journal of Communication titled “Beyond Rainbows and Unicorns: Revisiting the Promises and Challenges of Digital Media for LGBTQ+ Communities.”

Recently, especially with Meta’s new guidelines, we’ve seen a rise in hate speech and online discrimination against marginalized or minoritized populations, including LGBTQ+ communities. What role do digital platforms play in the spread of such discourse? 

Digital platforms can play a significant role in the spread of hate speech and online discrimination against LGBTQ+ individuals. Features like connectivity and synchronicity allow content to spread rapidly with little oversight. Additionally, the anonymity offered by some platforms can give users a sense of impunity, encouraging the posting of discriminatory content. While anonymity can be a vital tool for protecting users’ privacy, it can also be misused to target already marginalized groups with few consequences. 

Today, digital platforms are often weaponized to organize coordinated harassment and violence campaigns, particularly targeting trans and non-binary individuals, women, and racialized people. Content moderation strategies developed by tech companies are often inadequate to address this surge in violence. Ironically, these companies may even profit from such controversial campaigns, as they tend to increase user engagement. 

Because user engagement can generate significant profits, it has become a commodity that platforms value more highly than justice or equity. This is exacerbated by moderation and recommendation algorithms that tend to assess content based on its popularity or controversy rather than its nature, intent, or consequences.
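To make this dynamic concrete, the following is a deliberately simplified sketch, written for this article and not drawn from any platform's actual code: content is ranked solely by how often a user has engaged with its topic, with no notion of what the content says or does. The catalog, topic labels, and function are all hypothetical.

```python
from collections import Counter

# Hypothetical toy catalog: content IDs mapped to coarse topic labels.
# "polarizing" stands in for controversial content of any kind.
CATALOG = {
    "c1": "sports",
    "c2": "cooking",
    "c3": "polarizing",
    "c4": "polarizing",
    "c5": "music",
    "c6": "polarizing",
}

def recommend(history, k=3):
    """Rank unseen items purely by past engagement with their topic."""
    topic_counts = Counter(CATALOG[item] for item in history)
    # Engagement-only scoring: nothing here inspects what the content
    # actually says, only how often its topic was clicked before.
    ranked = sorted(
        CATALOG,
        key=lambda item: topic_counts[CATALOG[item]],
        reverse=True,
    )
    return [item for item in ranked if item not in history][:k]

# A single early click on polarizing content skews every later round,
# because the only optimization target is engagement itself.
history = ["c3"]
for step in range(3):
    suggestions = recommend(history)
    print(f"round {step + 1}: {suggestions}")
    history.append(suggestions[0])  # the user clicks the top suggestion
```

Even in this toy setting the loop is visible: the first two rounds surface more polarizing items until the small catalog runs out. Real systems blend far more signals, but when the objective being optimized is engagement, the same reinforcement dynamic applies.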

AI and algorithms can raise serious issues of social justice and equity in digital contexts. What are the risks and implications of these technologies for LGBTQ+ communities? 

We explored this very question in our article titled “Mapping the Social Implications of Platform Algorithms for LGBTQ+ Communities,” published in the Journal of Digital Social Research.

In it, we identify five main areas of concern related to the development of algorithms by digital platforms, particularly in terms of equity and social justice. 

First, classification algorithms—which seek to automatically categorize or even predict users’ sexual orientation or gender identity—can pose significant risks to safety and privacy, especially if deployed in surveillance initiatives by governments or companies hostile to LGBTQ+ communities. 

Second, recommendation algorithms play a key role in shaping LGBTQ+ identities and cultures. These algorithms are designed to maximize user engagement by suggesting content based on past behavior and preferences. This can reinforce sexual and gender stereotypes, polarize opinions on issues related to sexual and gender diversity, or even unintentionally “out” users. For example, someone who initially engages with homophobic, misogynistic, or masculinist content may be led to consume more of it through algorithmic suggestions. 

Third, algorithms designed to detect discriminatory or hateful content often fail to effectively protect LGBTQ+ users. These systems tend to disproportionately censor LGBTQ+ content, frequently misclassifying it as pornographic or offensive. This can have serious collateral effects, particularly financial ones, for queer content creators.

Fourth, search algorithms shape the visibility of content on platforms like Google. These algorithms were primarily developed to serve the needs of the majority—especially white cisgender heterosexual men—and may therefore reproduce biases that limit the visibility or reach of content created by and for LGBTQ+ users, who represent a minority group.

Finally, the article examines how sexual and gender biases embedded in everyday algorithms may be linked to the underrepresentation of LGBTQ+ individuals in the tech industry. This industry is still largely dominated by white cisgender heterosexual men, who may not be best positioned to identify these biases or to develop strategies to mitigate their negative impacts. 

In short, we still know very little about the implications of AI and algorithms for LGBTQ+ communities and other marginalized groups. While ethical debates around artificial intelligence are increasingly present in Canadian media, the perspectives of minoritized groups are unfortunately rarely highlighted. Yet, because of their position at the intersection of various social margins, these groups could offer innovative insights into how AI functions, its role in society, and how it might be harnessed for the greater good. These are precisely the research questions my team and I are exploring.