The government on Wednesday proposed amendments to the IT rules, mandating the clear labelling of AI-generated content and increasing the accountability of large platforms like Facebook and YouTube for verifying and flagging synthetic information, in order to curb user harm from deepfakes and misinformation.
The IT Ministry noted that deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create "convincing falsehoods", where such content can be "weaponised" to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The proposed amendments to the IT rules provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information.
Apart from clearly defining synthetically generated information, the draft amendment, on which comments from stakeholders have been sought by November 6, 2025, mandates labelling, visibility, and metadata embedding for synthetically generated or modified information, to distinguish such content from authentic media.
The stricter rules would increase the accountability of significant social media intermediaries (those with 50 lakh or more registered users) in verifying and flagging synthetic information through reasonable and appropriate technical measures.
The draft rules mandate that platforms label AI-generated content with prominent markers and identifiers, covering a minimum of 10 per cent of the visual display or the initial 10 per cent of the duration of an audio clip.
The draft requires significant social media platforms to obtain a user declaration on whether uploaded information is synthetically generated, deploy reasonable and proportionate technical measures to verify such declarations, and ensure that AI-generated information is clearly labelled or accompanied by a notice indicating the same.
The draft rules further prohibit intermediaries from modifying, suppressing, or removing such labels or identifiers.
"In Parliament as well as in many forums, there have been demands that something be done about deepfakes, which are harming society. People are using some prominent person's image, which then affects their personal lives and privacy. The steps we have taken aim to ensure that users get to know whether something is synthetic or real. It is important that users know what they are seeing," IT Minister Ashwini Vaishnaw said, adding that mandatory labelling and visibility will enable clear distinctions between synthetic and authentic content.
Once the rules are finalised, any compliance failure could mean loss of the safe harbour protection enjoyed by large platforms.
With the growing availability of generative AI tools and the resulting proliferation of synthetically generated information (deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly, the IT Ministry said.
Accordingly, the IT Ministry has prepared draft amendments to the IT Rules, 2021, with the aim of strengthening due diligence obligations for intermediaries, particularly significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.
A note by the IT Ministry said that globally, and in India, policymakers are increasingly concerned about fabricated or synthetic images, videos, and audio clips (deepfakes) that are indistinguishable from real content, and are being blatantly used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, and commit fraud or impersonation for financial gain.
The latest move assumes significance as India is among the top markets for global social media platforms such as Facebook, WhatsApp and others.
A senior Meta official said last year that India had become the largest market for Meta AI usage. In August this year, OpenAI CEO Sam Altman said that India, currently the company's second-largest market, could soon become its largest globally.
Asked whether the amended rules would also apply to content generated on OpenAI's Sora or Gemini, sources said that in many cases videos are generated but not circulated; the obligation is triggered only when a video is posted for dissemination. The onus in such a case would be on the intermediaries displaying the media to the public and on the users hosting the media on the platforms.
On the treatment of AI content on messaging platforms like WhatsApp, sources said that once such content is brought to their notice, the platforms must take steps to prevent its virality.
India has witnessed an alarming rise in AI-generated deepfakes, prompting court interventions. The most recent viral cases include misleading advertisements depicting Sadhguru's fake arrest, which the Delhi High Court ordered US digital giant Google to remove.
Earlier this month, Aishwarya Rai Bachchan and Abhishek Bachchan sued YouTube and Google in a lawsuit seeking Rs 4 crore in damages over alleged AI deepfake videos.