Cyber Threats to Canada’s Democratic Process — 2025 Update

The Communications Security Establishment (CSE) is Canada’s centre of excellence for cyber operations — protecting the government’s most critical networks, collecting foreign signals intelligence, and supporting law enforcement with advanced technical expertise. Within CSE, the Canadian Centre for Cyber Security (Cyber Centre) serves as Canada’s unified authority on cybersecurity, offering expert guidance and support to help protect Canadians and Canadian organizations from global cyber threats.
As part of Skyway’s security-hardened network, we provide clients with real-time security incident and event notifications from the Cyber Centre for any compromised IP addresses under their control.
Executive Summary
This new CSE report is an update to the 2023 publication, Cyber Threats to Canada’s Democratic Process, which assessed how foreign actors sought to influence democratic institutions around the world. While many of the earlier findings remain valid, the 2025 update focuses on the explosive growth of artificial intelligence (AI) tools and their use in cyber and disinformation campaigns.
Over the past two years, AI tools have become far more powerful and accessible, and they are now central to global efforts at political disinformation, harassment of public figures, and cyber espionage.
While AI-enabled interference poses a growing risk, CSE assesses that it is very unlikely (a 10–30% chance) that such activities will fundamentally undermine the integrity of Canada’s next federal election. However, as adversaries refine their AI tactics, the threat will continue to increase.
Key Findings
- AI use in elections is surging. Between 2023 and 2024, CSE tracked 102 cases of AI-driven interference across 41 global elections — up from just one case in the previous two years. These efforts typically involved AI-generated disinformation, social media manipulation, and harassment of politicians.
- Russia and China lead state-sponsored activity. The report finds it almost certain that these states — along with non-state actors — are leveraging AI to spread false narratives and sow division within democracies.
- Synthetic disinformation is expanding. Across the 151 elections held worldwide in 2023 and 2024, there were 60 AI-generated disinformation campaigns and 34 cases of AI-enabled social botnets. The scale of such activity is expected to grow as generative AI becomes cheaper and easier to use.
- Amplification remains a key factor. Most foreign AI-generated content doesn’t gain traction on its own — but when amplified by domestic influencers or online commentators, it can reach wide audiences and shape perceptions.
- AI is powering more convincing attacks. Threat actors are already using AI to enhance social engineering, creating personalized phishing and impersonation attempts. These methods could soon target political figures and election systems directly.
- More advanced malware is coming. Adversaries are expected to use AI to improve the stealth and precision of malware targeting politicians, voters, and election infrastructure.
- Massive data collection fuels targeting. The People’s Republic of China and others are gathering billions of data points on politicians and citizens worldwide, using AI analytics to deepen their understanding of democratic societies and refine influence operations.
- Deepfake harassment is rising. Cybercriminals are using AI to create deepfake pornography of politicians and public figures — overwhelmingly women. While often not tied to direct influence campaigns, these attacks discourage participation in democracy and, in at least one known case, were used to sabotage a political campaign.
As the CSE warns, AI has become a powerful amplifier for disinformation and cyber threats. For Canadian organizations, maintaining vigilance — through network monitoring, real-time threat alerts, and strong cybersecurity practices — is now more critical than ever.
Read Skyway’s earlier summary of the 2023 report and the full 2025 CSE update here.