Unlikely Perspectives
January 29, 2026
The “Unlikely Perspectives” series shines a spotlight on unusual, thought-provoking research topics.
Cybersecurity is an integral part of our everyday lives: complex passwords, multi-factor authentication, and facial and voice biometrics are among the methods used to secure online access.
Yet cybersecurity challenges keep evolving, especially with the democratization of artificial intelligence (AI), which can be misused for criminal purposes through increasingly sophisticated, hard-to-detect schemes.
Erosion of privacy, fake news, content manipulation, and the creation of misleading images are just some of the challenges posed by this new, still largely unregulated technology.
It is in this context that Anderson Avila, a professor specializing in information science and cybersecurity at the Institut national de la recherche scientifique (INRS), is focusing on AI—seeing it both as an object of study and as a defensive tool.
Professor Avila believes AI can be one of our best allies in understanding and managing its risks. AI can be used to cause harm—but also to prevent it.
“For example, in the case of voice biometrics—a technology that enables identity verification by analyzing a person’s voice—we are working on AI-based techniques to better distinguish real human voices from those generated by AI,” the specialist explains.
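The article does not describe the team's actual models, but the underlying idea of separating real from machine-generated audio with measurable signal properties can be illustrated with a toy sketch. The signals, the spectral-flatness feature, and the "natural vs. synthetic" stand-ins below are illustrative assumptions only; real anti-spoofing systems rely on learned deep representations trained on labelled speech corpora.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean over arithmetic mean of the power spectrum:
    close to 1 for noise-like spectra, close to 0 for purely tonal ones."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # epsilon avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

rng = np.random.default_rng(0)
sr = 16_000                      # one second of audio at 16 kHz
t = np.arange(sr) / sr

# Crude stand-ins: a "natural" voice-like signal mixes harmonics with
# broadband breath noise; an overly clean synthetic tone is purely harmonic.
harmonics = sum(np.sin(2 * np.pi * f * t) for f in (120, 240, 360))
natural_like = harmonics + 0.3 * rng.standard_normal(sr)
synthetic_like = harmonics

# The noisier, more natural signal has the flatter (more noise-like) spectrum.
print(spectral_flatness(natural_like) > spectral_flatness(synthetic_like))
```

The point is only that a quantifiable acoustic statistic can drive a real/synthetic decision; production detectors replace this single hand-crafted feature with embeddings learned by neural networks.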
When falsely attributed to public figures, synthetic voices can be used to damage an individual’s credibility or reputation by spreading misinformation at scale and, in some cases, steering public opinion in political or economic settings.
Additionally, voice biometrics can help mitigate identity theft by adding an extra layer of security for users conducting financial transactions over the phone.
By helping ensure the integrity of the human voice, Professor Avila’s team hopes to strengthen digital identity protection, reduce certain types of fraud, and increase citizens’ trust in AI-based solutions.
Beyond security, Professor Avila’s research also examines disinformation by studying hallucinations and biases in AI models. His goal is to develop reliable methods to identify and limit ideological influences in language models such as ChatGPT, leading to more factual and neutral outputs.
“The more people rely on AI-driven tools for recommendations and decision-making, the more opportunities malicious actors have to manipulate public opinion,” the expert emphasizes.
His team analyzes linguistic patterns by comparing stylistic differences between real and fake news. “Through our research, we want to strengthen critical thinking by equipping society with tools that detect false information and misleading rhetoric.”
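As a toy illustration of comparing stylistic patterns between measured reporting and sensational fake news, the sketch below scores a text on a few surface cues. The lexicon, weights, and feature choices are hypothetical, invented for this example; the team's actual methods are not described in the article, and a real system would learn such patterns from labelled corpora.

```python
import re

# Hypothetical lexicon for illustration only.
SENSATIONAL = {"shocking", "unbelievable", "secret", "exposed", "miracle"}

def style_features(text: str) -> dict:
    """Extract a few surface stylistic cues, each normalized by word count."""
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    return {
        "exclamations": text.count("!") / n,
        "sensational": sum(w.lower() in SENSATIONAL for w in words) / n,
        "all_caps": sum(w.isupper() and len(w) > 1 for w in words) / n,
    }

def sensationalism_score(text: str) -> float:
    """Sum the cues with equal (purely illustrative) weights."""
    return sum(style_features(text).values())

neutral = "The committee published its annual report on regional infrastructure."
flashy = "SHOCKING secret EXPOSED!!! You will not believe this miracle cure!"
print(sensationalism_score(neutral) < sensationalism_score(flashy))  # True
```

Actual detection research replaces these hand-picked cues with statistical models over large collections of verified real and fake articles, but the underlying comparison of linguistic style is the same.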
“Ultimately, my goal is to use AI to defend against technological threats, while also promoting reliable systems that can withstand malicious manipulation.”
Professor Avila is intent on raising awareness of digital dangers by placing his knowledge in the service of cybersecurity as a whole. In recent years, the rise of AI has completely redefined the landscape of cyberattacks, offering a wide range of new possibilities to actors with malicious intent.
“AI systems are often presented as an ideal solution without emphasis on their potential risks: their ability to clone voices, bypass security systems, generate convincing phishing emails, or even manipulate users’ behavior by exploiting personal data,” the expert notes.
In his view, it is essential that the public be better informed about ongoing debates around AI ethics, privacy, and security so people can engage critically with these technologies, which are taking an increasingly prominent place in our lives.
“Concerns about privacy are growing, but we often don’t know the extent to which AI can influence behavior at scale,” he concludes.