AI Weekly: UN recommendations point to need for AI ethics guidelines

The U.N.’s Educational, Scientific, and Cultural Organization (UNESCO) this week approved a series of recommendations for AI ethics, which aim to recognize that AI can “be of great service” but also raises “fundamental … concerns.” UNESCO’s 193 member countries, including Russia and China, agreed to conduct AI impact assessments and put in place “strong enforcement mechanisms and remedial actions” to protect human rights.

“The world needs rules for artificial intelligence to benefit humanity. The recommendation[s] on the ethics of AI is a major answer,” UNESCO chief Audrey Azoulay said in a press release. “It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its … member states in its implementation and ask them to report regularly on their progress and practices.”

UNESCO’s policy document highlights the advantages of AI while seeking to reduce the risks it entails. Toward this end, it addresses issues around transparency, accountability, and privacy, in addition to data governance, education, culture, labor, health care, and the economy.

“Decisions impacting millions of people should be fair, transparent, and contestable,” UNESCO assistant director-general for social and human sciences Gabriela Ramos said in a statement. “These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepening them.”

The recommendations follow on the heels of the European Union’s proposed regulations to govern the use of AI across the bloc’s 27 member states. Those proposals impose bans on the use of biometric identification systems in public, like facial recognition — with some exceptions. And they prohibit AI in social credit scoring, the infliction of harm (such as in weapons), and subliminal behavior manipulation.

The UNESCO recommendations also explicitly ban the use of AI for social scoring and mass surveillance, and they call for stronger data protections to provide stakeholders with transparency, agency, and control over their personal data. Beyond this, they stress that AI adopters should favor data, energy, and resource-efficient methods to help fight against climate change and tackle environmental issues.

Growing calls for regulation

While the policy is nonbinding, China’s support is significant because of the country’s historical — and current — stance on the use of AI surveillance technologies. According to the New York Times, the Chinese government — which has installed hundreds of millions of cameras across the country’s mainland — has piloted the use of predictive technology to sweep a person’s transaction data, location history, and social connections to determine whether they’re violent. Chinese companies such as Dahua and Huawei have developed facial recognition technologies, including several designed to target Uighurs, an ethnic minority widely persecuted in China’s Xinjiang region.

Underlining the point, contracts from the city of Zhoukou show that officials spend as much on surveillance as they do on education — and more than twice as much as on environmental protection programs.

Given China’s expressed intent to surveil 100% of public spaces within its borders, it seems unlikely to reverse course — UNESCO policy or not. But according to Ramos, the hope is that the recommendations, particularly the emphasis on addressing climate change, have an impact on the types of AI technologies that corporations, as well as governments, pursue.

“[UNESCO’s recommendations are] the code to change the [AI sector’s] business model, more than anything,” Ramos told Politico in an interview.

The U.S. isn’t a part of UNESCO and isn’t a signatory of the new recommendations. But bans on technologies like facial recognition have picked up steam across the U.S. at the local level. Facial recognition bans had been introduced in at least 16 states including Washington, Massachusetts, and New Jersey as of July. California lawmakers recently passed a law that will require warehouses to disclose the algorithms and metrics they use to track workers. A New York City bill bans employers from using AI hiring tools unless a bias audit can show that they won’t discriminate. And in Illinois, the state’s biometric information privacy act bans companies from obtaining and storing a person’s biometrics without their consent.

Regardless of their impact, the UNESCO recommendations signal growing recognition on the part of policymakers of the need for AI ethics guidelines. The U.S. Department of Defense earlier this month published a whitepaper — circulated among the National Oceanic and Atmospheric Administration, the Department of Transportation, ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service — outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. NATO recently released an AI strategy listing the organization’s principles for “responsible use [of] AI.” And the U.S. National Institute of Standards and Technology is working with academia and the private sector to develop AI standards.

Regulation with an emphasis on accountability and transparency could go a long way toward restoring trust in AI systems. According to a survey conducted by KPMG across five countries — the U.S., the U.K., Germany, Canada, and Australia — more than a third of the general public say they’re unwilling to trust AI systems in general. That’s not surprising, given that biases in unfettered AI systems have yielded wrongful arrests, racist recidivism scores, sexist recruitment, erroneous high school grades, offensive and exclusionary language generators, and underperforming speech recognition systems, to name a few injustices.

“It is time for the governments to reassert their role to have good quality regulations, and incentivize the good use of AI and diminish the bad use,” Ramos added.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


Originally appeared on: TheSpuzz
