New Delhi – The Indian government on Tuesday announced five projects selected under the second round of its “Safe and Trusted AI” program, part of the IndiaAI initiative to promote secure, ethical, and transparent artificial intelligence systems nationwide.
According to the Ministry of Electronics and Information Technology (MeitY), the projects will drive advancements in real-time deepfake detection, forensic analysis, bias reduction in AI models, and the creation of reliable evaluation tools for generative AI — ensuring that AI deployed in India is trustworthy, secure, and inclusive.
IndiaAI, a division under MeitY, said the projects were chosen for government support from more than 400 proposals submitted by universities, startups, research organizations, and civil society groups. A multi-stakeholder technical committee reviewed all submissions before making the final selections.
“These projects put the vision of ‘Safe and Trusted AI’ into action by combining resilience testing, bias audits, and strong governance frameworks to support the responsible development and deployment of AI,” the ministry said in a statement.
Among the selected efforts, the Indian Institute of Technology (IIT) Jodhpur, in collaboration with IIT Madras, will lead the development of a multi-agent retrieval-augmented generation framework for deepfake detection and governance. IIT Mandi and the Directorate of Forensic Services in Himachal Pradesh will jointly develop “AI Vishleshak,” a tool designed to improve detection of audio-visual deepfakes and signature forgeries.
Other projects will focus on tools to assess gender bias in agricultural large language models and on developing penetration-testing systems for large language and generative AI models.
IndiaAI serves as the implementing agency for the IndiaAI Mission, which aims to democratize access to AI, strengthen India’s global leadership in the field, and ensure ethical, responsible, and self-reliant technology growth. (Source: IANS)