New Delhi– Innovations such as deepfake detection tools from companies including OpenAI and Sensity AI are enhancing the detection of manipulated content and enabling more secure digital environments, a new report said on Friday.
According to the data and analytics company GlobalData, transformative technologies such as artificial intelligence (AI)-powered deepfake detection, real-time monitoring, and advanced data analytics are reshaping the fast-moving field of cybersecurity and strengthening digital security and content authenticity.
“AI-generated deepfakes have become increasingly sophisticated, posing significant risks to individuals, businesses, and society,” said Vaibhav Gundre, Project Manager, Disruptive Tech at GlobalData.
“However, cutting-edge detection methods powered by machine learning (ML) are helping to identify and flag manipulated content with growing accuracy,” he added.
He noted that these tools, from those that examine biological signals to those that rely on powerful algorithms, are fortifying defences against the misuse of deepfakes for “misinformation, fraud, or exploitation”.
Sam Altman-run OpenAI has recently introduced a deepfake detector designed specifically to identify content produced by its image generator, DALL-E.
Sensity AI uses a proprietary API (Application Programming Interface) to detect deepfake media such as images, videos, and synthetic identities.
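Sensity’s API is proprietary and not documented here; purely as an illustration of how such a detection service is typically consumed, the Python sketch below posts an image to a hypothetical endpoint, with the URL, authentication scheme, request fields, and response format all assumed rather than taken from Sensity’s actual interface.

```python
# Hypothetical example of submitting an image to a generic deepfake-detection
# REST API. The endpoint URL, auth header, request fields, and response schema
# are illustrative assumptions, not Sensity AI's actual (proprietary) interface.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # assumed endpoint
API_KEY = "YOUR_API_KEY"                                  # assumed auth scheme


def check_image(path: str) -> dict:
    """Upload an image and return the service's verdict as a dictionary."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape, e.g. {"is_deepfake": true, "confidence": 0.97}
    return response.json()


if __name__ == "__main__":
    print(check_image("suspect_photo.jpg"))
```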
DeepMedia.AI’s deepfake detection tool, DeepID, analyses pixel-level modifications, image artifacts, and other signs of manipulation to assess image integrity.
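DeepMedia has not published DeepID’s internals, so the following is only a generic illustration of pixel-level artifact analysis: an error level analysis (ELA) sketch in Python that recompresses a JPEG and amplifies the difference map, since locally edited regions often leave a different compression residue than the untouched background.

```python
# Illustrative error level analysis (ELA), a standard image-forensics technique.
# This is NOT DeepID's actual method, which is proprietary.
from io import BytesIO
from PIL import Image, ImageChops, ImageEnhance


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Recompress the image and return an amplified difference map.

    Pasted-in or locally edited regions often show a different compression
    'error level' than the untouched background.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality and reload the recompressed copy.
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise difference between the original and recompressed versions.
    diff = ImageChops.difference(original, recompressed)

    # Scale the (usually faint) differences so they become visible.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)


if __name__ == "__main__":
    error_level_analysis("suspect_photo.jpg").save("ela_map.png")
```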
According to Gundre, these advances in deepfake detection are reshaping cybersecurity and helping to ensure the authenticity of digital content.
“However, as this technology evolves, we must critically examine the ethical considerations around privacy, consent, and the unintended consequences of its widespread adoption,” he suggested. (IANS)