India is withdrawing its recent AI advisory after receiving criticism from many entrepreneurs and investors, both domestic and international.
The Ministry of Electronics and IT on Friday shared an updated AI advisory with industry stakeholders that no longer asks them to obtain government approval before launching or deploying an AI model to users in the South Asian market.
The revised guidelines instead advise companies to label under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.
The revision comes after India's IT ministry came under harsh criticism from a number of prominent figures earlier this month. Martin Casado, a partner at venture firm Andreessen Horowitz, called India's move a “travesty.”
The March 1 advisory also marked a reversal of India's previous hands-off approach to AI regulation. Less than a year ago, the ministry had declined to regulate the growth of AI, saying the field was critical to India's strategic interests.
The new advisory, like the original from earlier this month, has not been published online, but TechCrunch reviewed a copy of it.
The ministry said earlier this month that while the advisory was not legally binding, it signaled the “future of regulation” and that the government would seek compliance.
The advisory emphasizes that AI models should not be used to share content that is illegal under Indian law and should not permit bias, discrimination, or threats to the integrity of the electoral process. Intermediaries are also encouraged to use a “consent pop-up” or similar mechanism to explicitly inform users of the unreliability of AI-generated output.
The ministry maintains its focus on making deepfakes and misinformation easier to identify, recommending that intermediaries label content or embed it with unique metadata and identifiers. Companies are no longer required to devise techniques to identify the “originator” of a particular message.