International Security Report

A 2026 report by the Estonian Foreign Intelligence Service contained a shocking finding. The service had tested DeepSeek, the Chinese open-source artificial intelligence model, for biased or incomplete responses.
"When discussing issues related to Estonian security, DeepSeek hides key information and inserts Chinese propaganda into its responses," the report warned.
This Estonian analysis is one of three recent European assessments of Chinese-developed artificial intelligence models. An audit from the non-profit organization Policy Genome and a detailed study funded by the Swedish Psychological Defence Agency likewise highlight how major Chinese models such as DeepSeek, Alibaba's Qwen family, and Moonshot's Kimi incorporate content controls that extend beyond China's domestic political sensitivities.
Previous Chinese AI models focused on domestically censored topics, such as the 1989 Tiananmen crackdown, Taiwan, and rights abuses involving Uyghurs, Tibetans, Hong Kong, and Falun Gong. These restrictions limit knowledge about China and silence European citizens from diaspora communities and minority religious groups.
The new studies reveal a broader pattern of content shaping. Two of the reports document distortions in information related to the Russian invasion of Ukraine. The Estonian report found glaring distortions when DeepSeek answered questions about the war, including unprompted insertions of official Chinese positions. When asked about the atrocities in Bucha, DeepSeek offered vague acknowledgements of international concerns, while adding, in passing, that “China has consistently supported peace and dialogue.”
The Policy Genome audit examined seven questions about the war in Ukraine across six models from different countries, including China. It found that DeepSeek’s English and Ukrainian responses were largely accurate, but some Russian-language responses supported the Kremlin’s talking points or included misleading details. The study’s conclusion captures this nuance: “The risk is not just the ‘model you use,’ but also the language in which you ask.”
It’s not just Russian propaganda about Ukraine that shows up in the Chinese models. When the researchers pushed the models to reveal their reasoning, they discovered internal directives from DeepSeek to avoid common Communist Party taboos or from Qwen to keep responses about China “positive and constructive, avoid criticism, and emphasize achievements.” The same model was also instructed to remain “neutral and objective” toward the United States, Kenya, or Belgium, avoiding “any political or sensitive topics” for the latter two.
Another troubling finding relates to how content controls run by the Chinese Communist Party extend beyond the original models to the apps built on them. The Chinese models are open source, powerful, and cheaper than proprietary American alternatives from firms such as OpenAI or Anthropic.
These advantages are driving rapid adoption by developers. According to the Swedish-funded study, Alibaba’s Qwen family of models alone recorded more than 9.5 million downloads from October to November 2025 and served as the basis for approximately 2,800 derivative models, including a Brazilian legal research platform and a chatbot tailored for Ugandan languages.
Chinese base models carry their embedded content controls into the apps built on them — often without users or developers realizing the manipulation. While some retraining could reduce China-specific restrictions, the authors of the Swedish-funded study found the process incomplete: “Of the ten companies whose models we tested for this report (including both the original Chinese models and new models built on them), none were completely free of Chinese information guidelines.” Traces of Chinese government controls on the original models were found in languages as diverse as English, Chinese, Japanese, Russian, Malay, Indonesian, Thai, and Hindi — collectively spoken by billions.
China’s AI exports also pose cybersecurity and other risks. When asked about the security of Chinese technology, the Estonian report found that DeepSeek offered elaborate and official-sounding assurances about reliability, while ignoring documented cases of hacking, cyber espionage, and transnational repression linked to China-based actors.
The Swedish study noted that some versions of the Chinese models, including DeepSeek and Qwen, were vulnerable to “jailbreaking” – techniques that bypass safeguards to extract instructions for creating weapons or controlled substances like fentanyl – a vulnerability that could be exploited by a range of bad actors.
These controls are not accidental. To operate inside China, the models require approval from the country's cyberspace administration and must comply with party-state censorship and propaganda requirements.
China's leaders see AI exports as a strategic tool to expand influence over the global information space. They have encouraged open-source release to accelerate technological development, which has also fueled the rapid adoption of Chinese AI models, particularly in the Global South. Researchers and Chinese officials have openly discussed using advances in artificial intelligence to “command greater discourse power on the international stage.”
The global spread of these Chinese models without appropriate safeguards carries consequences for Western security and for freedom of expression worldwide. Deep integration into global digital infrastructure raises legitimate concerns about the future activation of influence operations, including in European, American, and other elections.
These latest reports underscore the need for urgent action. Democracies must make developers aware of the risks of data transfers. They must also strengthen transparency rules that require the disclosure of the underlying models on which applications are built.
Artificial Intelligence is transforming our information environment. China’s leaders are treating its political dimensions as a strategic priority. Democracies must respond and direct resources to preserve open inquiry, minimize hidden bias, and strengthen resilience.
The Geopost