India has introduced sweeping amendments to its Information Technology Rules 2021, requiring social media platforms to remove unlawful content within three hours of receiving official instructions from courts, law enforcement agencies, or government authorities.
The updated regulation significantly reduces the earlier 36-hour compliance window, placing greater operational pressure on technology companies to respond quickly to takedown requests. Officials say the rule is designed to strengthen efforts against misinformation, harmful online activity, and unlawful digital content.
The amendments also introduce new requirements related to artificial intelligence-generated media. Platforms must ensure that AI-created images, audio, and videos are prominently labeled, while users posting such material must disclose that it was generated using artificial intelligence tools. Companies are additionally expected to use automated systems to monitor and limit the spread of misleading or harmful AI-generated content.
The new framework, scheduled to take effect on February 20, is expected to reshape moderation practices across one of the world’s largest internet markets, where platforms must balance rapid compliance with the operational challenges of reviewing large volumes of content. Some technology law experts have noted that the shortened response time could pose significant logistical challenges for global platforms operating in the country.
Government officials maintain that the policy is necessary to curb the rapid spread of deepfakes, false viral videos, and other forms of harmful digital misinformation, particularly those targeting young users or public institutions.