MeitY issues advisory, asks companies to seek permission before launching AI products


The Ministry of Electronics and IT (MeitY) on Friday issued an advisory to social media intermediaries and artificial intelligence (AI) platforms, asking them to seek government permission before launching AI products in the country.


Further, the ministry has asked platforms to ensure that biases arising from their AI models or platforms do not hamper the electoral process in India.


“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape accountability by saying that their platform is under-testing,” said Minister of State for Electronics and IT Rajeev Chandrasekhar.


MeitY has asked all concerned platforms to submit an action-taken-cum-status report within 15 days of the advisory.


Google’s AI tool, Gemini, was recently criticised by the government for allegedly generating biased responses about Prime Minister Narendra Modi.


The government has also asked intermediaries to ensure that any potentially misleading content is labelled with unique metadata or identifiers, allowing its origin and the intermediary involved to be identified, in order to facilitate the tracking of misinformation or deepfakes and their originators.


“The platforms should figure out a way of embedding metadata or some sort of identifier for everything that is synthetically created by their platform,” said Chandrasekhar.
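The advisory does not prescribe a specific mechanism, but a minimal sketch of one possible approach is shown below: the platform builds a signed provenance record for each piece of generated content, which downstream services could use to trace the output back to its originating intermediary. All names here (platform_id, model, the signing key) are illustrative assumptions, not part of the advisory.

```python
# Illustrative sketch only: one hypothetical way an AI platform could attach a
# provenance identifier to synthetically generated content. Field names and the
# signing scheme are assumptions, not prescribed by the MeitY advisory.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"platform-secret-key"  # hypothetical key held by the intermediary


def provenance_record(content: bytes, platform_id: str, model: str) -> dict:
    """Build a metadata record identifying the origin of generated content."""
    record = {
        "platform_id": platform_id,                         # which intermediary generated it
        "model": model,                                     # which model produced the output
        "created_at": int(time.time()),                     # generation timestamp
        "content_sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the output
    }
    # Sign the record so the identifier cannot be silently altered or stripped
    # without detection by anyone holding the verification key.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


if __name__ == "__main__":
    fake_output = b"synthetically generated image or text bytes"
    print(json.dumps(provenance_record(fake_output, "example-platform", "example-model"), indent=2))
```

In practice such a record could be embedded directly in the file (for example, in image metadata) or stored alongside the content, so that misinformation or deepfakes can be traced to the platform that produced them.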


Failure to adhere to the advisory could lead to legal repercussions for intermediaries or platforms, including prosecution under the IT Act and other relevant criminal statutes, the minister said.


He also said, “You don’t do that with cars or microprocessors. Why is it that for such a transformative tech like AI there are no guardrails between what is in the lab and what goes out to the public?”



According to the minister, all conventional internet intermediaries, platforms that are not strictly intermediaries but have some AI embedded in them, and digital platforms that enable the creation of deepfakes and image manipulation will be required to comply with the advisory.


Further, companies hosting unreliable or under-testing AI platforms that want to create a sandbox on the internet for testing must get government permission and should label the platform as ‘under-testing’.


Additionally, these platforms should clearly disclose to the public the “possible and inherent fallibility or unreliability” of the output they generate, which could be done through a ‘consent popup’ mechanism.


“In a lot of ways this advisory is signalling towards our future regulatory framework that is aimed at creating a safe and trusted internet,” he added.

Originally appeared on: TheSpuzz
