Overview
The series “Tour d’horizon en trois questions” highlights research in all its forms and takes an informed look at current events.
Romuald Jamet, INRS Professor
Artificial intelligence (AI) has now become a full-fledged player in the music world. Who creates, distributes, and decides what we discover? And who benefits from these new realities? In Quebec, where language and culture lie at the heart of collective identity, these questions resonate even more strongly. How do we respond to these disruptions?
Romuald Jamet, professor at the Institut national de la recherche scientifique (INRS) and Chairholder of the INRS ID2C Research Chair on Data Industries and Cultural & Creative Industries, explores how AI is transforming cultural practices, creative industries, and Quebec’s cultural and linguistic sovereignty. He also offers essential insights into these ongoing changes.
How is artificial intelligence changing the way we create music in Quebec and beyond?
AI systems—because there are many types—are reshaping musical creation in at least two major ways.
First, they are democratizing tools that were once reserved for professionals. Take mastering, for example, the final step in producing a track, where specialized sound engineers refine its identity and acoustic quality before release. Today, software such as LANDR—developed and commercialized in Montréal—automates this entire process. This is useful for amateur musicians, but it reduces professionals’ control over the final sound.
The problem is that this automation tends to homogenize the resulting audio. And that’s no coincidence. Streaming platforms classify music using audio fingerprints (bass density, vocal-to-instrument ratio, etc.). This creates a worrying feedback loop: music‑creation tools shape tracks so they fit the patterns recognized by streaming algorithms, which then use other AI systems to classify and recommend those same tracks to listeners. Music becomes optimized for machines rather than for human ears. A trained listener might notice it quickly during focused listening, but it’s much less noticeable in settings where background music is not meant to draw attention.
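For readers who want to see the mechanics, the feedback loop described above can be sketched as a toy simulation. Everything here is invented for illustration: the feature names, thresholds, and matching rule do not come from any real platform's fingerprinting or recommendation system.

```python
# Toy illustration of the feedback loop: a recommender that surfaces only
# tracks whose coarse "fingerprint" matches what is already popular, and a
# mastering tool that nudges tracks toward exactly that fingerprint.
# All features and values are hypothetical.

def fingerprint(track):
    """Reduce a track to coarse features a platform might index."""
    return (round(track["bass_density"], 1), round(track["vocal_ratio"], 1))

def recommended(track, popular_fingerprints):
    """A track is surfaced only if its fingerprint matches a popular one."""
    return fingerprint(track) in popular_fingerprints

def optimize_for_algorithm(track, target):
    """An automated mastering step pushing a track toward a favoured profile."""
    nudged = dict(track)
    nudged["bass_density"], nudged["vocal_ratio"] = target
    return nudged

# Fingerprints the recommender currently favours (the "pattern" in the loop).
popular = {(0.8, 0.3)}

original = {"bass_density": 0.5, "vocal_ratio": 0.6}
mastered = optimize_for_algorithm(original, target=(0.8, 0.3))

print(recommended(original, popular))  # False: the distinctive mix is ignored
print(recommended(mastered, popular))  # True: the homogenized mix is surfaced
```

The loop closes because every track "mastered" this way reinforces the same popular fingerprint, which is the homogenization the interview describes.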
Second, generative AI is fuelling the massive production of music designed for passive listening—classic “elevator music.” While background music for grocery stores, waiting rooms, or gyms isn’t new, what’s changed is that AI can now create huge quantities of it without paying any musicians. In the past, artists were compensated per piece for these recordings (such as the famous Muzak catalogue). Now, machines replace that labour entirely.


Can AI make Quebec’s French‑language music more discoverable?
In theory, yes. As AI improves at identifying genres, detecting listener moods, and recognizing Quebec-made French-language music in its databases, its recommendations should get better. And indeed, over the past decade, streaming platforms have significantly refined their recommendation systems. Our recent research shows that listeners in Quebec who seek out local music generally receive a decent number of French-language recommendations—though not always high‑quality or relevant ones.
However, does being recommended by AI necessarily mean Quebec artists become more discoverable? This is where a paradox emerges.
Generative AI can now produce “Quebec‑like” music: Udio and Suno, the two leading AI‑music platforms, can create a folk‑inspired song in about three minutes, complete with expressions and vocal phrasing that sound distinctly Québécois. To do so, these systems have effectively “scraped” the entire Quebec (and global) musical repertoire to train their models, extracting stylistic traits and remixing them without paying any musicians or rights holders.
This creates a contradictory situation unique to Quebec. On the one hand, the government aims to promote local artists and improve the discoverability of French‑language content (as set out in Bill 109, adopted December 12, 2025). On the other hand, it is heavily investing in AI to position the province as a global leader in the sector. These two goals clash—and discoverability suffers as a result.
How has AI transformed relationships among artists, the music industry, and platforms? Is an ethical approach to AI possible?
Power dynamics have become far more complex. Artists fight to protect their rights while streaming platforms set the rules. Meanwhile, major music labels negotiate with generative‑AI companies, whose systems rely on massive datasets built without paying for licences or copyright. In its current form, AI undermines fair compensation for everyone in the music ecosystem. Notably, few digital platforms are truly profitable: Spotify posted its first profit only last year, and even that figure leaves its enormous debt out of the equation. Yet Spotify continues to invest money it doesn’t really have into AI to avoid falling behind competitors.
Under these conditions, AI companies cannot claim to respect even basic ethical standards, since their business models depend on extracting value from creators without compensating them.
A more ethical AI is possible, though. Some artists already use specialized AI tools designed for experimentation and creative assistance that do not violate copyright. But for such an approach to last, we need artists to continue producing new material—because AI systems depend on human‑generated works to “feed the machine.” Researchers now warn of “AI model collapse” or “self‑poisoning,” which happens when AI systems start training on AI‑generated content rather than human creations.
These issues go far beyond music and raise broader questions about how cultural sectors should be regulated in the age of AI. The European Union has begun taking this seriously, but many governments—including Quebec—remain hesitant. Their desire to attract AI giants and stay competitive globally often outweighs the need to protect local cultural ecosystems.