Nasscom releases guidelines for generative AI: What it means for developers, researchers

Nasscom, the national body for the software and services industry, has released guidelines for the 'responsible' use of generative artificial intelligence (AI) by developers and researchers. These draft guidelines are the result of consultations with the technology industry and a multi-disciplinary group of AI experts, researchers, and practitioners, with representation from academia and civil society. 

The guidelines, Nasscom says, will be instrumental in defining frameworks and will act as common standards and protocols for researching, developing, and using generative AI (GenAI) responsibly in the country. 

Objective of the guidelines

The guidelines focus on research, development, and use in relation to GenAI. They define GenAI as a type of artificial intelligence technology that can create artefacts such as images, text, audio, video, and various forms of multi-modal content. 

The objective of these guidelines is to promote and facilitate responsible development and use of GenAI solutions by different stakeholders. The guidelines also intend to achieve a robust, common understanding of normative obligations amongst stakeholders to help them improve their net social impact with GenAI and to foster trust in the adoption of GenAI technologies across industries. 

What are the guidelines?

The guidelines highlight certain obligations for researchers, developers, and users, with emphasis on demonstrating reasonable caution and foresight by conducting comprehensive risk assessments. They are also advised to maintain internal oversight throughout the entire lifecycle of a GenAI solution. 

To promote further transparency and accountability, public disclosures of the data and algorithm sources used for modelling, along with other technical details, will be mandatory. In addition, developers should reveal non-proprietary details about the solution's development process, capabilities, and limitations. 

To preserve users' privacy, the guidelines focus on establishing privacy-preserving norms and standards, safety testing of GenAI models in regulated environments, and strict adherence to data protection and intellectual property rules during the model training process. 

Obligations defined for those conducting fundamental and applied research on GenAI models

As per the guidelines, those conducting fundamental and applied research on GenAI models will have to:

  • Demonstrate reasonable caution and foresight by systematically and rigorously anticipating and evaluating both positive and negative contingencies that might arise from the conduct of research using techniques like horizon scanning, scenario planning, etc. 
  • Demonstrate transparency and accountability by releasing public disclosures about the values, goals, and motivations for driving or funding a research project and by describing the methodologies, model training datasets, and tools adopted for the conduct of research in all such disclosures. 
  • Demonstrate reliability and safety by adhering to established privacy-preserving norms and standards in research data collection, processing, and usage, and conducting safety testing of GenAI models in regulated environments. 
  • Demonstrate inclusion by accounting for the risk of harmful bias in research and deploying protocols and measures to mitigate it, and by publishing research findings in open-source formats, wherever possible, to democratise the framing of new problem statements to advance the state of the art in GenAI, foster collective inquiry into the potential risks and benefits of adopting GenAI technologies, and engender prevalent societal values in GenAI. 

Updated: 07 Jun 2023, 06:28 PM IST

Originally appeared on: TheSpuzz