Is the next stage the mass use of public figures' images in porn?

In recent years, the problem of AI-assisted fraud has become more prominent and acute. Famous personalities are falling victim to scammers who use AI to spoof their voices and images and deceive internet users at scale. These fake materials are spread through social media, messengers and other online platforms, sometimes reaching huge audiences. Unfortunately, many people fall for such schemes, becoming targets of manipulation and psychological pressure.

Deepfake fraud involving videos, photos and audio recordings has become widespread across online platforms, facilitated by advances in generative AI tools such as Midjourney, Google’s Gemini and OpenAI’s ChatGPT.

With some customization, anyone can create seemingly real images or make prominent politicians, business figures, artists and TV hosts appear to say anything. While creating a deepfake is not in itself a criminal offense, many governments are nonetheless moving toward stricter regulation of artificial intelligence to prevent harm to the parties involved.

Beyond deepfakes’ most prominent use, creating non-consensual pornographic content featuring mostly female celebrities, the technology can also be used for identity fraud, such as producing fake IDs or impersonating others over the phone. As our chart, based on the latest annual report from identity verification provider Sumsub, shows, the number of identity fraud cases involving deepfakes increased dramatically in many countries around the world between 2022 and 2023.

The Future of AI-Assisted Fraud and Its Impact

As the capabilities of artificial intelligence expand even further, as evidenced by products like the Sora video generator, deepfake fraud attempts could spread to other areas. “In recent years, we’ve seen deepfakes become more convincing, and this will only continue to grow into new types of fraud, such as voice deepfakes,” says Paul Goldman-Kalaidin, head of artificial intelligence and machine learning at Sumsub, in the aforementioned report. “Both consumers and businesses need to remain extra vigilant against synthetic fraud and look for multi-layered solutions to combat fraud, not just deepfake detection.”

These assessments are shared by many cybersecurity experts. For example, a survey of 199 cybersecurity leaders attending the World Economic Forum’s 2023 Annual Meeting on Cybersecurity found that 46 percent of respondents were most concerned about “the advance of adversarial capabilities – phishing, malware development, deepfakes” among the risks that artificial intelligence poses to cybersecurity in the future.

Protect Yourself Against AI-Assisted Fraud

Contact us if you have fallen victim to AI-assisted fraud and ensure your protection in the digital world. Discover solutions to combat fraud fueled by artificial intelligence, and protect yourself and your information from cyber fraudsters with our innovative technology and expert guidance.
