Shadows of Surveillance: How AI Exploitation Undermines National Security and Human Rights

Artificial intelligence is rapidly transforming how people think, communicate, and make critical decisions. From influencing news consumption and political beliefs to shaping public opinion and voter behavior, AI is becoming a powerful force in the information landscape. Yet in countries without robust, enforceable data protection and AI laws, the risks of AI-driven manipulation and disinformation are growing at an unprecedented pace.

The Rise of Open-Source AI and Its Implications:

The proliferation of generative AI tools such as ChatGPT, alongside increasingly capable open-source models, has significantly complicated the information landscape. Cybercriminals are now exploiting these tools to execute sophisticated phishing attacks. According to Interpol, phishing attacks leveraging AI technology increased by 32% from 2023 to 2024, as AI-generated content was weaponized to impersonate trusted sources and deceive victims into revealing sensitive information (Interpol Report). This trend is a wake-up call for policymakers and cybersecurity experts alike.


The Danger of Deregulation:

A dangerous trend is emerging globally, with prominent political figures such as Emmanuel Macron and Donald Tusk advocating a rollback of stringent data protection and AI regulations in the name of economic growth. This push for deregulation not only undermines existing protections but also opens the door for AI systems to be weaponized without accountability, raising critical questions about privacy, security, and ethical AI use in democratic societies already struggling to contain the spread of disinformation.


Risks to Democratic Societies:

AI-generated misinformation has increasingly targeted marginalized communities, particularly during the 2024 U.S. elections. Sophisticated disinformation campaigns employed AI tools to create realistic fake images and narratives aimed at suppressing voter turnout and deepening social divides. For instance, fabricated AI-generated images depicted Donald Trump posing with Black voters in an attempt to sway political opinion, while Spanish-language disinformation targeted Latino voters with false information about voting rights (HRW). Foreign adversaries, including Russia and China, leveraged AI technologies to interfere in the elections, creating content designed to exploit societal divisions and influence voter behavior (WSJ). These tactics illustrate how AI can be strategically deployed to undermine democratic processes.


Gen Z: The New Age of Digital Brainwashing

Children and Teenagers as Targets of Propaganda:

In the European Union, TikTok’s algorithmic content curation fosters largely unregulated echo chambers; in Russia and China, by contrast, the platform operates under state-sanctioned frameworks that align with government propaganda objectives. In Russia, TikTok restricts foreign content while amplifying Kremlin-approved narratives, creating a pro-war information ecosystem. Meanwhile, Douyin, the Chinese counterpart of TikTok, enforces stringent censorship and promotes content favorable to the Chinese Communist Party (AP News).


In Western countries, TikTok remains a powerful, largely unchecked channel for misinformation and propaganda. Donald Trump, who once labeled TikTok a national security threat, eventually embraced it as a key propaganda tool during his political campaigns. This reversal underscores how the platform’s algorithmic reach can be leveraged to shape global narratives.


Targeting the Elderly: AI-Driven Phishing Scams:

In 2025, cyberattacks targeting older adults surged by 32%, driven by increasingly sophisticated AI-powered phishing campaigns. Cybercriminals are leveraging AI to clone voices, create deepfake videos, and impersonate trusted individuals, exploiting the limited digital literacy of many older adults. In the United States alone, seniors reported nearly $2 billion in fraud losses last year (Cybersecurity Ventures).

Connected but Disconnected:

Interpol has issued urgent warnings about the rise in financial fraud facilitated by AI, urging governments to implement enhanced cybersecurity measures to protect vulnerable populations from emerging digital threats (Interpol Report).


The Ideological Divide and Civil Unrest:

The ideological divide between the U.S. and Europe over AI governance is widening. While Europe has implemented stricter rules through frameworks like the GDPR and the EU AI Act, the U.S. continues to prioritize free speech, even when it enables extremist content. At the 2025 Munich Security Conference, U.S. Vice President J.D. Vance framed divisive rhetoric as protected free speech, reflecting America’s reluctance to regulate AI-generated content despite its potential to incite violence and division.

Global AI Competition: China vs. U.S.:

The global race for AI dominance is largely centered on the U.S. and China, sidelining Europe in the process. While the U.S. advances generative AI tools like ChatGPT, China is heavily investing in state-controlled AI systems that reinforce government narratives. This power struggle raises critical questions about data sovereignty and security, particularly given the U.S. Foreign Intelligence Surveillance Act (FISA), which allows the government to compel access to data held by U.S. companies without user consent (ABC News).


National Security Risks and Data Sovereignty:

Much of the data generated by users of AI tools is stored on U.S.-based servers, posing significant risks to European and Middle Eastern countries. Tools like ChatGPT operate as 'black boxes,' offering users little insight into how their data is processed, stored, or potentially accessed by U.S. authorities under FISA. This lack of transparency creates vulnerabilities that malicious actors can exploit to manipulate narratives, create deepfakes, or automate phishing scams.
