
Russia using AI to spread propaganda in Poland, says OpenAI

Russia among countries using AI to conduct covert operations in Poland

12:42, 31.05.2024
  aa/kk;   PAP
Groups from Russia, China, Iran, and Israel used AI models to conduct covert influence operations on social media in Europe, including Poland.

Photo: Jaap Arriens/NurPhoto via Getty Images.

The results of the investigation, published on Thursday by the American AI research company OpenAI, uncovered five user networks: two from Russia, one from China, and one each from Iran and Israel.

Among them, one previously unknown operation, dubbed “Bad Grammar” by OpenAI, was discovered to be using the company’s ChatGPT to enhance a bot on the messaging app Telegram and create short comments in Polish and English.

In addition, in a now-famous operation dubbed “Doppelganger” linked to Russian intelligence, the Russians used AI models to create posts in French, German, Italian, and Polish that were posted on the X platform and the 9GAG meme portal, as well as to create entire articles posted on sites imitating real media.

OpenAI has since announced the deletion of accounts linked to these groups. Groups from China, Iran, and Israel similarly used OpenAI platforms to enhance bots and to create articles and comments, including in the long-known Chinese operation “Spamouflage.”

Ben Nimmo, the report’s author and head of OpenAI’s Intelligence and Investigations team, wrote: “Content published as part of these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the war in Gaza, elections in India, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.”

Content failed to reach a wider audience

Speaking at a press briefing, Nimmo said that while AI models have made the work of the groups behind influence operations easier and more effective, none of them have reached a wider audience with their content. However, he cautioned that these attempts should not be underestimated.

History shows that influence operations that have been unsuccessful for years can suddenly achieve a breakthrough if no one pays attention to them, he said.

On the same day, Politico reported that pro-Russian disinformation campaigns were still circulating on Meta's platforms just a week before the European Parliament elections.

According to researchers from non-profit groups AI Forensics and CheckFirst, some 275 sponsored posts containing anti-Ukrainian and anti-EU content reached more than three million Facebook users in France, Germany, Italy, and Poland in May, the portal wrote, recalling that the European Commission launched an investigation into the matter a month ago.

The rash of illegal ads violating platform rules is a wake-up call both for Meta and for regulators to enforce existing rules more thoroughly, said Amaury Lesplingart, co-founder of the NGO CheckFirst.

Polish users of the platform were targeted by ads with the message: “We are all used to constant reports of theft in Ukraine, but sometimes we are surprised by the cynicism of Ukrainian thieves.”
Source: PAP