Govt drafts new IT rules to label AI-generated content

by The_unmuteenglish

New Delhi, Oct 22: The Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the Information Technology (IT) Rules, 2021, to mandate the labelling of AI-generated and synthetic content with clear visual or audio markers. The ministry said the move aims to help users distinguish authentic information from deepfakes and artificially created material, while ensuring accountability among major social media intermediaries.

In its draft notification, the ministry noted that the rise of generative AI tools has led to an “alarming proliferation” of deepfakes and synthetic media capable of spreading misinformation, manipulating elections, and impersonating individuals. “Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods,” the accompanying explanatory note said.

Under the proposed framework, significant social media intermediaries (SSMIs)—those with over 50 lakh users—will be required to obtain user declarations on whether uploaded content is synthetically generated. They must also deploy “reasonable and proportionate” technical measures to verify such claims and ensure all synthetic content carries visible or audible labels covering at least 10 per cent of the screen or the initial 10 per cent of audio duration.

The amendments define synthetically generated information as any content artificially or algorithmically created, modified, or altered using computer resources in a manner that makes it appear authentic or true. Such content must also carry embedded metadata or unique identifiers that allow users to immediately recognise it as synthetic.

MeitY said the changes are intended to “enhance user awareness, traceability, and accountability while maintaining an enabling environment for AI innovation.” It has invited public feedback on the draft proposals until November 6.

Globally, policymakers are raising concerns over the growing misuse of AI tools to produce deepfakes and non-consensual imagery, mislead the public with fabricated news, and conduct financial fraud. The proposed rule changes, the ministry said, seek to offer statutory protection to intermediaries that act against such content on the basis of user grievances or reasonable efforts.