
India targets deepfakes and AI-generated content: key changes under MeitY’s 2026 amendments to the IT Rules

On 10 February 2026, India’s Ministry of Electronics and Information Technology (MeitY) introduced the 2026 Amendments (the Amendments) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (the IT Rules).

The IT Rules, first introduced in 2021, established a framework to regulate social media platforms and online content providers (including digital news publishers) and imposed due diligence measures on “intermediaries”, i.e. any person who, on behalf of another person, receives, stores, sends or provides any service related to electronic records. This includes telecom and network operators, internet and web-hosting providers, search engines, online payment sites and marketplaces.

The Amendments introduce the concept of synthetically generated information (SGI), which includes deepfakes and other forms of AI-generated media. They also impose expanded due diligence obligations, requirements to deploy filtering technology, labelling requirements and substantially shorter takedown times.

The Amendments come into force on 20 February 2026, leaving intermediaries with very little time to adjust their policies and procedures.

Intermediaries and Significant Social Media Intermediaries (SSMIs) should:

  • assess whether their existing compliance procedures align with the new due diligence standards;
  • evaluate the technical feasibility of deploying automated tools for SGI;
  • prepare for significantly compressed response and takedown timelines; and
  • establish clear procedures for user declarations, content labelling and metadata practices.

Beyond the immediate compliance steps, there are still important open questions about how the regime will work in practice. These include, for example, the scope of the obligation to take “reasonable and appropriate technical measures” to prevent the creation or spreading of unlawful content, the operational burden placed on platforms to detect and moderate AI-generated content, and the practicalities of enforcing two-hour takedown requirements for sensitive imagery.

This blogpost summarises the key elements of the new framework.

Synthetically generated information, or SGI

The core feature of the Amendments is the introduction of the new concept, SGI.

SGI is defined as:

audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic, or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.

This definition includes a broad range of AI content, including deepfakes, AI-generated or AI-altered images and videos, voice cloning and other forms of realistic, algorithmically generated audio-visual content.

Notably, text generated by AI does not fall within SGI, although intermediaries’ existing obligations under the IT Rules and IT Act continue to apply to unlawful text-based content.

Activities excluded from the definition of SGI

The Amendments expressly carve out three categories that do not constitute SGI:

  • Routine or good faith editing, such as formatting, technical corrections, colour adjustment, transcription or compression, provided these do not materially alter the substance, context or meaning of the underlying content.
  • Routine or good faith document creation, including presentations, PDFs, educational or training materials and research outputs, provided no false documents or false electronic records are created.
  • Accessibility improvements, including improving clarity, quality, translation or searchability through computer-based tools, provided no material part of the underlying content is altered.

New due diligence obligations for intermediaries

Under the 2021 IT Rules, intermediaries were already subject to several due diligence obligations, including requirements to publish their terms of use, issue annual user notices and comply with prescribed takedown timelines. Failure to observe these requirements results in an intermediary losing its safe harbour protection and being exposed to monetary fines under the IT Act and potential criminal liability (for example, under the Indian Penal Code).

The Amendments build on this by introducing new and expanded due diligence obligations for intermediaries. Key changes include:

  • Broader user information obligations. Intermediaries must now inform users every three months (previously, once a year), of the consequences of non-compliance with platform rules, including (i) suspension or termination of the user’s account, (ii) removal of or disabling of access to offending content, (iii) potential penalties under the IT Act and other laws, and (iv) the intermediary’s mandatory reporting obligations for serious offences.

Furthermore, intermediaries that provide computer resources enabling or facilitating the creation or dissemination of SGI must inform their users of an additional consequence: directing, instructing or otherwise causing the computer resource to create, modify or disseminate SGI in violation of applicable law may require the intermediary to disclose the user’s identity to the victims of the user’s acts.

  • Substantially reduced takedown times. Intermediaries must now act:
    • within three hours of receiving a court order or government notice to remove unlawful content (previously 36 hours). See below for details on the categories of content that constitute ‘unlawful content’;
    • within seven days to resolve general user grievances (reduced from 15 days);
    • within 36 hours for complaints relating to content concerning an individual (reduced from 72 hours); and
    • within two hours for high-risk content categories, including content depicting nudity or sexual acts, non-consensual intimate imagery (NCII), impersonation, or artificially morphed images (reduced from 24 hours).

In addition to acting when notified, intermediaries must also take expeditious and appropriate action when they become aware of any violation of law of their own accord. The Amendments confirm that in such cases this will not result in the intermediary losing its safe harbour status under the IT Act.
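The compressed timelines above lend themselves to a simple deadline calculator. The sketch below is illustrative only: the dictionary keys are our own shorthand for the notice categories, not statutory terms, and a real compliance system would need to map each incoming notice to the correct statutory category.

```python
from datetime import datetime, timedelta

# Response windows under the amended IT Rules; the keys are informal
# shorthand for the notice categories, not statutory language.
TAKEDOWN_WINDOWS = {
    "court_or_government_order": timedelta(hours=3),       # was 36 hours
    "general_user_grievance": timedelta(days=7),           # was 15 days
    "content_concerning_individual": timedelta(hours=36),  # was 72 hours
    "high_risk_imagery": timedelta(hours=2),               # was 24 hours
}

def takedown_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time by which the intermediary must act."""
    return received_at + TAKEDOWN_WINDOWS[category]
```

For example, a notice about high-risk imagery received at 09:00 on 20 February 2026 would have to be actioned by 11:00 the same day.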

  • Technical measures against unlawful content. Intermediaries offering computer resources that could enable the creation or dissemination of SGI must implement “reasonable and appropriate technical measures including automated tools or other suitable mechanisms” to prevent users from creating or spreading unlawful content, which includes (i) child sexual exploitative and abuse material, non-consensual intimate imagery and obscene or sexually explicit content; (ii) content creating false documents or false electronic records; (iii) content relating to explosives, arms or ammunition; and (iv) content falsely depicting individuals or real-world events in a manner likely to deceive.
  • Labelling requirements. The draft amendments released in October 2025 originally proposed mandatory watermarking for SGI content, covering at least 10% of the surface area for visual content or displaying a disclaimer during the first 10% of the duration for audio content. However, this 10% requirement has since been dropped in the final rules. Instead of a fixed size, the rules now require that non-prohibited SGI be “clearly and prominently labelled”. Visual SGI must include visual labels, and audio SGI must include audio disclosures. Where feasible, intermediaries must also embed permanent metadata or unique identifiers to trace the computer resource used to generate or alter the content. Intermediaries must not allow the removal or modification of labels or metadata.
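To make the "permanent metadata" requirement concrete: for PNG images, one low-level way to attach provenance information is an ancillary text chunk. The sketch below (standard library only) inserts a `tEXt` chunk after the image header; the `SGI` keyword and label text are invented for illustration, and the Amendments do not prescribe any particular format.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build a PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def add_sgi_label(png: bytes, keyword: bytes, text: bytes) -> bytes:
    """Insert a tEXt chunk right after IHDR, carrying a provenance note."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    ihdr_len = struct.unpack(">I", png[8:12])[0]
    # signature (8) + length/type/CRC fields (12) + IHDR data
    ihdr_end = 8 + 12 + ihdr_len
    label = png_chunk(b"tEXt", keyword + b"\x00" + text)
    return png[:ihdr_end] + label + png[ihdr_end:]
```

Note the obvious limitation: a plain text chunk is trivially stripped by re-encoding, which sits uneasily with the rule that labels and metadata must not be removable. Robust provenance schemes (such as C2PA-style cryptographically signed manifests) go considerably further than this sketch.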

Non-compliance with these due diligence obligations may lead to intermediaries losing their safe harbour protection from liability for third party content.

Enhanced obligations for Significant Social Media Intermediaries

The Amendments introduce additional, heightened requirements for SSMIs, i.e. social media platforms with more than five million registered users in India (the threshold set under the 2021 Rules). This captures most of the global platforms operating in India, including Meta (Facebook, Instagram, WhatsApp), Alphabet (YouTube), X (formerly Twitter) and LinkedIn.

Key new obligations for SSMIs include:

  • requiring users to declare whether uploaded content is SGI;
  • deploying technical measures (including automated tools) to verify the accuracy of such declarations, since SSMIs cannot rely solely on user representations; and
  • displaying clear and prominent labels or notices wherever content is SGI.
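The combined effect of these obligations can be expressed as a simple decision rule: content carries an SGI label if the user declares it as SGI, or if the platform's own verification flags it. The sketch below is a toy illustration; the detector score and threshold are invented placeholders, not anything prescribed by the Amendments.

```python
def requires_sgi_label(user_declared_sgi: bool,
                       detector_score: float,
                       threshold: float = 0.5) -> bool:
    """Decide whether uploaded content must carry an SGI label.

    A user's declaration alone never clears content: because SSMIs must
    verify declarations with technical measures, a high detector score
    triggers labelling even when the user declared the content as not
    synthetic.
    """
    return user_declared_sgi or detector_score >= threshold
```

In practice the detector score would come from the SSMI's automated tools; the point of the rule is that the two signals are combined with OR, never with "trust the user".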

An SSMI that knowingly permits, promotes or fails to act on unlawful SGI will be deemed to have failed to exercise due diligence and risks losing its safe harbour status.

Notably, the Amendments also replace earlier “endeavour”-based language with a mandatory obligation to implement “reasonable and appropriate technical measures” to ensure compliance.