Deepfake crimes on the rise

Experts warn of growing threat of ‘deepfake’ crimes

11:45, 18.05.2024
  fb/kk;   wroclaw.tvp.pl, nask.pl

Deepfake crimes using AI to manipulate the voices and faces of well-known individuals are increasingly rampant on the Internet, experts from the Research and Academic Computer Network (NASK) have said in a recent report.

Photo: Jaap Arriens/NurPhoto via Getty Images

According to NASK, more scams featuring manipulated images of public figures like football superstar Robert Lewandowski, President Andrzej Duda, and Health Minister Izabela Leszczyna are appearing online. These videos use “lip sync” techniques to match AI-generated voices with facial movements and gestures.

AI technology allows criminals to easily manipulate audiovisual materials. With text-to-speech technology, only a few seconds of recorded voice are needed to create a new audio track that can be synchronized with video footage. For more complex speech-to-speech technology, about one minute of original material is required to mimic the voice’s intonation and emotions.

Ewelina Bartuzi-Trokielewicz, head of NASK’s deepfake analysis team, said that May brought another surge of deepfake videos. Celebrities, actors, and other trusted public figures are particularly vulnerable to identity and voice theft.

Social media users should be cautious of video content that appears unverified or suspicious, especially materials that could influence public perception of significant figures and institutions. Deepfake frauds are becoming harder to detect as AI evolves, allowing for more precise voice imitations. However, it’s still possible to identify fakes by examining technical aspects and analyzing the content.

Technical signs of deception include distortion around the mouth, unnatural head movements and expressions, word declension errors, and unusual intonation.

Fraudsters increasingly add noise and various spots to recordings to obscure the image’s clarity, hide AI-generated artifacts, and confuse automatic deepfake detection algorithms. These videos often use social engineering tactics to lure viewers, such as promises of quick profits, exclusive offers with limited access, urgency, and emotional appeals. If such content is spotted, NASK urges prompt reporting to prevent others from falling victim to fraud.
Source: wroclaw.tvp.pl, nask.pl