The role of artificial intelligence: For Russia, information warfare is strategically as important as conventional or nuclear weapons

The Geopost | October 20, 2025

Photo credit: CNAS


To understand contemporary and emerging disinformation threats, it is important to consider how Russian state and military doctrine defines information warfare and the role of artificial intelligence (AI) in this context, according to a report by the British Royal United Services Institute (RUSI) entitled “New Insights: Russia, Artificial Intelligence, and the Future of War Disinformation.”

For Russia, information warfare (known in doctrine as “information confrontation”) is strategically as important as conventional or nuclear weapons, even in peacetime, the report states, emphasizing that the information sphere is considered crucial to national security and international influence.

Russia often does not acknowledge cyber attacks on the West, but it does not directly deny them either – it uses “plausible deniability” as a strategic advantage.

Its strategy combines intelligence gathering (through espionage or cyber intrusions) with psychological influence on individuals and crowds. The goal is to gather information while shaping political discourse and decisions, both at home and abroad.

Disinformation campaigns seek to weaken opponents by deepening internal divisions, eroding trust in democratic institutions, and weakening alliances such as NATO and the EU, making it difficult to coordinate responses to Russian actions.

The report states that Western media sometimes exaggerate Russian cyber capabilities, further reinforcing Russia’s image as a powerful cyber force, even when its actual influence is limited.

Russia’s disinformation ecosystem is diverse, involving state institutions, state media, intermediary organizations, ideologically motivated individuals, and bloggers working for commercial interests. Some operate under formal control, others independently, but all in line with the Kremlin’s objectives.

This decentralized approach allows for flexible communication tailored to different audiences and gives the state the ability to deny involvement.

Russian actors in information warfare

At the state level, Russian intelligence agencies, particularly the GRU military intelligence service, play a key role in cyber operations aimed at influencing foreign targets. Two main GRU units, 26165 (known as APT28, or Fancy Bear) and 74455 (known as Sandworm, or APT44), have been linked to cyber attacks, election interference, and the spread of disinformation via social media.

Although these attacks involve technical operations (hacking, data theft), the GRU rarely appears in public: the role of “messenger” is often taken on by hacktivist groups, which repackage and distribute stolen content to conceal its origin. The coordination can be inferred from the timing of information leaks and from shared technical infrastructure, but the arrangement makes it difficult to link operations directly to the Russian state and allows influence to spread with less risk.

In addition, numerous media intermediaries (e.g., Strategic Culture Foundation, News Front, SouthFront) spread Russian narratives around the world without direct Kremlin control, thereby legitimizing and reinforcing Russian messages abroad.

Artificial intelligence (AI) as a driver of disinformation

Russia sees AI as a strategically important area. The 2019 national strategy set the goal of making the country a leader in AI by 2030. Officials see AI as both an opportunity and a threat—particularly due to fears that Western AI tools could manipulate Russian public opinion and cause ideological destabilization.

Russia has therefore begun developing its own sovereign AI system, led by the National Center for Artificial Intelligence Development. State-owned companies such as Sberbank (GigaChat platform, analogous to ChatGPT) and Rostec (focused on military applications) play leading roles, while Yandex has a secondary role due to its complicated relationship with the authorities.

Generative artificial intelligence is already being used in Russian disinformation operations to create fake news articles, social media posts, images, and other fabricated content. Campaigns such as “DoppelGänger” use artificial intelligence to create texts that mimic Western media, with the aim of sowing confusion and undermining trust in institutions. AI bots and automated accounts amplify disinformation while posing as “spontaneous” public opinion (so-called astroturfing), sometimes even staging fake debates between bots to mislead observers.

Russian actors are also experimenting with “training” large language models (LLMs) – attempts to inject propaganda into the data used to train these systems in order to subtly manipulate their responses. This represents a shift from directly influencing audiences to shaping the tools that audiences use.

Although still in development, generative artificial intelligence is already being considered as a means to amplify disinformation, facilitate its spread, and blur the lines between truth and falsehood.

Discourse on artificial intelligence in Russian influence networks

Data analysis (via the ExTrac AI platform) shows a growing interest in artificial intelligence among Russian actors, particularly in the context of disinformation and strategic communication. The military use of artificial intelligence is often discussed, as is its role in cyber operations and information manipulation.

On Telegram channels, artificial intelligence is often presented as the key to training a new generation of cyber operators. Some posts express pride in the use of artificial intelligence in sophisticated information operations while mocking Western accusations of Russian interference; for example, joking that Putin himself, given his knowledge of German, must have taken part in a German-language campaign.

Western accusations are routinely dismissed as exaggerated or fabricated, portrayed as attempts to shift responsibility for internal political problems in those countries. Such dismissals are intended to undermine the credibility of Western reports while reinforcing the internal narrative that Russian influence operations are successfully destabilizing Western governments.

Beyond the boastful and disparaging remarks, however, the discussions observed reveal a degree of deeper mistrust and paranoia regarding artificial intelligence technologies. Posts express concern, for example, about AI’s ability to quickly and convincingly reproduce human characteristics, which is seen as a dystopian threat that could gradually replace humans entirely.

Another concern relates to the potential creation of artificial superintelligence (ASI): actors describe scenarios in which routine human activities are rapidly taken over by AI agents and predict that AI systems will reach human-level intelligence, ultimately leading to a society entirely controlled by AI.

Other concerns are more immediate and pragmatic. For example, the appointment of former US National Security Agency (NSA) director Paul Nakasone to OpenAI’s board of directors was widely interpreted as evidence of the ongoing militarization of artificial intelligence. Actors have suggested that the US military intelligence community could now potentially monitor and exploit all interactions with OpenAI systems.

These narratives show how knowledge of Western developments in artificial intelligence goes hand in hand with increased anxiety about losing control over these technologies.

In this context, Ukraine is often portrayed as a testing ground for AI-enabled psychological operations, mass surveillance, and propaganda by Western intelligence agencies.

In addition, specific cases such as the DoppelGänger campaign are presented ambiguously: some channels claim that such campaigns are actually “false flag” operations organized by Western intelligence services in collaboration with Russian opposition figures, further reinforcing narratives of victimhood and external aggression.

Because of such concerns, actors associated with Russia openly call for greater information literacy among the Russian public. The channels regularly remind their followers to carefully verify information, be skeptical of unofficial sources, and rely primarily on state-approved media.

The framing and function of artificial intelligence tools

Telegram channels present work with artificial intelligence not only as technological innovation but also as a civic duty, encouraging skilled individuals to contribute to national efforts.

Job postings are often aimed at users with knowledge of artificial intelligence technologies, inviting them to contribute their skills to the goals of groups aligned with Russian national interests. For example, one Telegram channel actively recruiting staff with experience in generative AI tools published the following:

“Are you a neural network artist? [Channel name redacted] needs you! Our team is expanding and the number of tasks is growing every month. We are looking for artists who already work with Stable Diffusion (AUTOMATIC1111 + ControlNet) and Midjourney … who understand the principles of prompt engineering and can control the generation process instead of relying on luck.”

Similarly, another job posting from the same channel lists a broader set of desired collaborators, including “neuroevangelists and text generation experts,” emphasizing that the project’s activities go beyond graphic content alone and include sophisticated creation and manipulation of text.

The channel describes its project as being based on human capital—including talented analysts, informants, and digital intelligence enthusiasts—with an emphasis on the importance of brainstorming and diverse expertise, including geospatial analysis and open-source intelligence (OSINT) analysis.

Another central theme in the discussions observed was the importance of information warfare as an integral part of modern conflicts and national security. For example, promotional materials for the Army-2024 Forum roundtable emphasized the importance of understanding how advanced digital technologies, including artificial intelligence, can shape society’s perception in the context of Russia’s “special military operation” in Ukraine.

Other Telegram channels present artificial intelligence tools as key resources for protecting Russia’s digital space from external threats. One channel describes itself as a community of “highly qualified experts in the field of cyber security, information technology, and social research” explicitly dedicated to “neutralizing threats, disinformation, and propaganda.” These discussions present artificial intelligence as a tool not only for attack but also for defense—it is used to identify, prevent, and neutralize foreign influence operations allegedly directed against Russian society.

Perceived limitations and obstacles

Despite the expressed enthusiasm for the strategic integration of artificial intelligence into information warfare, the online communities observed also voiced considerable criticism and frustration regarding the limitations of domestic Russian AI platforms, primarily Sberbank’s GigaChat and YandexGPT.

A common complaint is that these models display liberal or “unpatriotic” biases. SberGigaChat, for example, has been accused of having a positive attitude toward figures such as Lenin and Trotsky; one post complained that the platform “cannot condemn their betrayal of Russia and the Red Terror.”

Even more concerning for these actors is the way the platforms handle politically sensitive topics, particularly regarding Russian territorial claims. YandexGPT and SberGigaChat have faced accusations that they have failed to clearly affirm that Crimea and the recently annexed “new regions” are part of Russia. Posts have shared anecdotes in which AI avoids such questions or suggests changing the topic, which users interpret as evasion and subversion.

One user sarcastically remarked, “SberGigaChat starts to freeze even when asked questions such as ‘which coastal regions are most desirable for Russians to move to’ (since Crimea is on the list).”

This reluctance to reproduce official state positions and narratives has led to accusations that platforms such as YandexGPT are either deliberately undermining Russia or are subservient to foreign interests, an issue that users believe should be addressed at the level of the Russian Security Council.

Similar accusations have been leveled at other domestic platforms. For example, Megafon’s chatbot has been criticized for classifying Crimea and the Donetsk and Luhansk People’s Republics as Ukrainian territories, allegedly because it relies on foreign AI technologies such as Midjourney and GPT models.

Users also complained that Russian services provide poor answers or refuse to respond to neutral and factual questions that Western tools can handle without difficulty.

One post summed it up as follows: “Take YandexGPT… this AI prototype is a terrible coward and doesn’t even answer the most common questions… On the one hand, it seriously undermines trust in Yandex and its products. On the other hand, it provides grounds not only for declaring Yandex’s services incomplete, but also for declaring its current operators to be foreign agents.”

Another example discussed on the monitored channels relates to allegations that the French Cyber Defense Command has trained Ukrainian and Polish cyber units specifically to target Wagner’s operations in Mali and elsewhere. The channels reported that these operations are part of a broader Western strategy of weaponizing AI technologies against Russia and its interests.

Wagner’s relationship with artificial intelligence has been further complicated by controversies in which AI-generated content has allegedly been used to manipulate or misrepresent Russian domestic positions. In one widely discussed case, a controversial statement attributed to State Duma deputy Aleksandr Borodai, in which frontline volunteers were described as “reserve people,” was dismissed as fake content created by artificial intelligence.

Beyond its operational focus, Wagner’s discourse on artificial intelligence often reinforces a metanarrative about the competence of the elites. The channels frequently contrast Wagner’s disciplined, intelligence-based approach to information warfare with what they describe as the amateurish or ideologically confused efforts of other Russian actors.

Hacktivist groups

Pro-Russian hacktivist collectives, many of which operate under the direction of, or in strategic alignment with, state intelligence agencies such as the GRU, the Foreign Intelligence Service (SVR), and the Federal Security Service (FSB), are an important part of Russia’s broader influence and cyber operations network. Notable groups include Zarya, the Russian Cyber Army (also known as the People’s Cyber Army or the Reborn Russian Cyber Army), Solntsepek, Beregini, RaHDit, and NoName057(16); some are also linked to APT units such as APT44 (GRU Unit 74455).

These groups often target organizations fighting disinformation, independent media, and strategic infrastructure, particularly in Ukraine and other parts of Europe, presenting their actions as retaliation for Western “information aggression.”

For example, in May 2024, the Russian Cyber Army attacked the website of the Ukrainian Center for Combating Disinformation, justifying the attack as retaliation for what it described as Ukrainian disinformation about incidents in Belgorod: “We must do everything we can to teach them a lesson, because they have filled the information space with cynical lies about yesterday’s shelling of a residential building in Belgorod.”

In addition to carrying out attacks, these groups exploit the symbolic value of media attention. DDoS campaigns that receive significant coverage in Western media, such as the temporary shutdown of the official website of the President of Slovenia, are presented as major victories regardless of their technical or strategic significance. Screenshots of attacked websites are circulated as digital trophies to boost morale among supporters and to signal success both internally and to opponents.

Another important part of how these groups operate is the exchange of knowledge, resources, and technical expertise. For example, they highlight and promote platforms such as HackerGPT, which is specifically tailored to support Russian-oriented hackers and provides databases of techniques, tools, and strategies for effective cyber operations.

Since its inception in early 2022, the hacktivist group NoName057(16) has openly discussed artificial intelligence as a force multiplier for DDoS attacks, disinformation campaigns, and reputation sabotage. The group demonstrates awareness of international research and reporting on its influence operations: it actively references external reports, such as those published by Google researchers identifying artificial intelligence as a major source of online disinformation, and treats them as confirmation of its operational effectiveness. This engagement with opponents’ reports and investigations blurs the line between propaganda and action, creating a feedback loop in which Western scrutiny is used to reinforce the group’s narrative of influence and importance.

When alleged members of the group were arrested in Spain in mid-2024, NoName057(16) presented the incident as symptomatic of a broader European “witch hunt” led, they claim, by “Russophobic authorities.” This narrative strengthens the group’s cohesion and presents its activities as legitimate resistance against unjustified Western repression.

Finally, NoName057(16) actively builds its public identity through interviews, summaries, and media products in multiple languages, presenting itself as a dedicated actor in Russian information warfare. Its Telegram channels call for ongoing AI-driven cyber operations, provide users with tools and guides, and foster a sense of community based on technical skill and ideological commitment.

This report shows that generative artificial intelligence (AI) is no longer something that lies in the future, but an active part of ongoing Russian and allied influence operations.

The analysis shows that generative AI is not just a tool for improving existing disinformation techniques; it is changing the way influence operations are developed, legitimized, and carried out. Wagner, for example, uses AI for educational materials and criticizes poor-quality content, calling for strategic discipline and thus professionalizing information warfare. Hacktivist groups such as NoName057(16) see AI as a force multiplier in decentralized campaigns aimed at disrupting and destabilizing Western digital infrastructure.

Implications for policy and security

The report suggests developing artificial intelligence regulations that clearly address misuse, not only in content creation but also in model training, access, and deployment.

Given the decentralized and multilingual nature of Russian-linked influence networks, it is important to monitor the discourse of actors, including strategic planning and narrative design across different platforms, not just the content they produce. Civil society and media outlets directly targeted by these operations also need support, both through protection from AI-enabled attacks and through stronger digital literacy and resilience to synthetic and manipulated content.

Given the speed of AI development, greater coordination is needed among governments, platforms, researchers, and journalists. This should include sharing insights into observed tactics, the use of artificial intelligence tools, and the behavior of threat actors.

Ultimately, this report shows that artificial intelligence is transforming the mechanisms and logic of Russian disinformation, rather than replacing them. Artificial intelligence acts as an amplifier, enabling greater reach, faster response, and more dynamic adaptation of narratives. However, it also introduces new vulnerabilities, contradictions, and frictions within Russian influence networks themselves. Understanding and engaging with these internal dynamics will be key to shaping future policies and developing effective countermeasures. /The Geopost/
