The Government of India has introduced stricter regulations to curb the spread of deepfake videos and AI-generated fake content by amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new rules will come into force on February 20.
Under the amended regulations, any photo, video, or audio created using artificial intelligence and shared on social media platforms must carry a clear disclosure that it is AI-generated. The move aims to ensure transparency and to protect users from being misled by realistic but fake digital content.
Social media platforms and online intermediaries have been directed to remove misleading or fake AI-generated content within three hours of receiving a complaint or official order. This significantly shortens the earlier response time and places greater responsibility on platforms to act swiftly.
The government has also mandated stricter action in cases involving AI-generated content related to children, private or intimate images, and violent material. Such content will attract immediate takedown and could lead to severe legal consequences.
Platforms that fail to comply with the new rules may face legal action, including penalties under existing cyber laws. The government has emphasized that these measures are necessary to counter misinformation, protect individual dignity, and maintain trust in digital platforms.
The new regulations are part of India’s broader efforts to address the growing misuse of artificial intelligence technologies and to create a safer and more accountable digital ecosystem.