Shortly after the invasion of Ukraine, Russian networks and channels spreading Kremlin lies began flooding the internet with so much material that it affected the “learning” process of popular AI models. As a result, more than 33 percent of the content delivered to users by these systems contains elements of Russian disinformation, NewsGuard analysts found.
Russia exploits the mechanism by which AI models "learn," drawing their "knowledge" from enormous data sets scraped from the internet. Flooding the web with a mass of publications repeating a specific lie can cause a chatbot to pass it on to users as fact.
McKenzie Sadeghi and Isis Blachez of the NewsGuard platform, which rates the credibility of news sites, examined this phenomenon across 10 of the most popular AI chatbots and found that as much as 33 percent of the content served by these systems contained elements of Russian disinformation. The researchers emphasized that this poses a serious threat to the security and the functioning of democratic processes in other countries.
According to research by another organization, The American Sunlight Project, which aims to counter threats to American democracy, among the many channels through which the Kremlin "corrects the knowledge" of chatbots the key role is played by the "Pravda" network, launched shortly after the aggression against Ukraine. The network comprises over 150 domains and, through it, an average of more than 20,000 articles containing pro-Kremlin messaging, manipulation and fake news reach the internet every two days.
Michał Marek of the Center for Research on the Contemporary Security Environment pointed out in an interview with PAP that the content appearing in the Polish-language version of the "Prawda" portal most often consists of translations of Russian materials previously published, among others, on the RIA Novosti portal and in other sources fully controlled by the Russian side. "There are also materials created by Poles involved in disinformation activities on behalf of the Russian side, which are published, among others, on social networks. In this way, a given source obtains materials thanks to which it can position itself as an efficiently operating portal reporting on current topics," the expert noted.
However, as Dr. Ilona Dąbrowska, an expert in social media, media production, online journalism and e-communication at Maria Curie-Skłodowska University in Lublin, observed, these are "old dogs, new tricks. Previously we had troll farms and bots on Facebook, then deepfakes. It was only a matter of time before the Kremlin turned to grooming large language models to mislead and manipulate public opinion."
The PAP interviewee emphasized that a single vast strategy of pouring enormous amounts of false information onto the internet brings a double benefit: first, someone reads it and is taken in; second, it "teaches" the language generators. "Russia is perfectly aware of how popular the various AI generators are and, at the same time, how poorly secured they are. That is why they are such a perfect tool for manipulation," Dąbrowska added.
Especially since Russia does not have to rely on Pravda and its own networks and channels to groom large language models. "Unfortunately, this phenomenon is deepened by outlets unconnected to the Kremlin, which eagerly, though unwittingly, replicate its messaging because its loudness and controversy ensure greater interest, more views and, consequently, greater profits," the expert emphasized.
In her opinion, the considerable gains that the Putin regime is already reaping from these actions will only grow over time. "Russians know perfectly well that the system begets the system. Today's systems, far from perfect, will be used to build the next ones. Infecting the current ones with disinformation will cause future technological solutions to repeat the same mistake," Dąbrowska warned. She also pointed out that, in her observation, we increasingly draw our knowledge of the world not from reliable sources but through AI generators, and we do not verify the information because in daily life we simply lack the time.
Can countries exposed to Russian disinformation effectively counteract this phenomenon? As Ilona Dąbrowska argued in an interview with PAP, we should use every opportunity to educate society about the dangers associated with new media. “At every step, we should be made aware of how imperfect the technological solutions we admire are.”
The second pillar should be increased funding for researchers, with an emphasis on developing technology, for example tools that would check at the system level whether a text is false or whether it was generated by a particular organization.
"In times of unrest and hybrid warfare, we should readjust our thinking and commit forces and resources not only to acquiring weapons but also to securing societies, and thus entire countries, against manipulation and disinformation," Dąbrowska concluded. (PAP)
dec/ mhr/