Why 2022 is only the beginning for AI regulation

As the world becomes increasingly dependent on technology to communicate, attend school, work, buy groceries and more, artificial intelligence (AI) and machine learning (ML) play a bigger role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI. It has also revealed a dangerous side, and regulators have responded accordingly.

In 2021, governing bodies across the world worked to regulate how AI and ML systems are used. From the UK to the EU to China, regulations on how industries should monitor their algorithms, best practices for auditing and frameworks for more transparent AI systems are on the rise. The U.S. has made much less progress on regulating artificial intelligence than other geographies. Yet, over the past year, the federal government has begun to take steps toward regulating artificial intelligence across industries.

The threat to civil rights, civil liberties and privacy is among the biggest considerations in regulating AI in the U.S. The debates over how AI should be handled this year have centered on three arenas: Europe and the UK, the individual U.S. states, and the U.S. federal government.

Europe and the UK: Paving the way for AI regulation

Europe is moving quickly toward comprehensive legislation regulating how AI can be used across industries. In April, the European Commission announced a framework to help enterprises monitor their AI systems. The UK, meanwhile, has taken several steps toward more regulation of AI auditing practices, AI assurance and algorithmic transparency.

Recently, Germany put forth the world's first guidance from a public authority specifying criteria for the lifecycle management of AI. The AI Cloud Service Compliance Criteria Catalogue (AIC4) responds to calls for AI regulation by clearly outlining the essential requirements for robust and secure AI practices.

Movement at the state and local levels in the U.S.

Meanwhile, the United States has taken a less-centralized approach to AI regulation. State legislatures have taken steps to regulate this fast-moving technology, but the federal government has made little progress compared to Europe. The federal action the United States took this year, while promising, is largely nonbinding.

In the U.S., state and local governments have begun moving toward more accountable and enforceable AI regulation. In Colorado, the state legislature passed SB21-169, Restrict Insurers' Use of External Consumer Data, to hold insurance companies accountable for the discriminatory practices of their AI systems.

Meanwhile, local governments in New York City and Detroit have adopted regulations to mitigate the biases and discriminatory practices of algorithms. In New York, the City Council passed the nation's first law aimed at reining in harmful, AI-based hiring practices. Earlier in 2021, Detroit's city council passed an ordinance requiring more accountability and transparency in the city's surveillance systems.

U.S. federal agencies take aim at decentralized AI governance

A year ago, the National Security Commission on Artificial Intelligence submitted its final report to Congress, recommending that the government take domestic legislative action to protect privacy, civil rights and civil liberties in AI development by government entities. Highlighting the lack of public trust in AI for national security, the intelligence community and law enforcement, the report advocates for the private sector to lead the way in promoting more trustworthy AI. In June, the Government Accountability Office (GAO) published a report of key practices to ensure accountability and responsibility in AI use by federal agencies.

In April, the Federal Trade Commission (FTC) issued guidance on how to responsibly build AI and ML systems. Through lifecycle monitoring to identify bias and discriminatory outcomes, streamlined auditing and clear expectations for what AI systems can accomplish, the FTC hopes to promote more trust in these complex systems. Fundamentally, the FTC believes existing law is sufficient and will enforce it against AI systems when necessary.
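To make "lifecycle monitoring" concrete, here is a minimal sketch of one common bias check: a demographic parity ratio computed over a model's logged decisions. The group labels, sample data and 0.8 threshold are illustrative assumptions for this sketch, not part of the FTC guidance.

```python
from collections import defaultdict

def demographic_parity_ratio(decisions):
    """Compute per-group approval rates and their min/max ratio.

    `decisions` is a list of (group, approved) pairs, e.g. drawn from a
    model's production logs. A ratio well below 1.0 flags a disparity
    worth auditing; the 0.8 threshold used below mirrors the common
    "four-fifths" rule of thumb from hiring analyses (an assumption
    here, not an FTC requirement).
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical log of (group, model_decision) pairs -- illustrative data.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

ratio, rates = demographic_parity_ratio(log)
print(rates)  # per-group approval rates
if ratio < 0.8:
    print(f"Disparity flagged for audit: parity ratio {ratio:.2f}")
```

A check like this would run continuously over a deployed model's decisions, so drift toward discriminatory outcomes surfaces during operation rather than only at a one-time pre-launch audit.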

The FTC also has taken a firm stance advocating for more transparent and fair hiring processes, including placing controls and clear expectations on AI systems. Through greater accountability and transparency, the FTC believes there will be greater trust and confidence in AI, making the U.S. more competitive.

Under the guidance of the National Defense Authorization Act of 2021, the Department of Commerce directed the National Institute of Standards and Technology (NIST) to develop a voluntary risk management framework for AI systems.

Additionally, the Department of Commerce established the National Artificial Intelligence Advisory Committee (NAIAC) in September, as called for by the National AI Initiative Act of 2020. The committee will advise the president and other federal agencies on a range of AI issues: U.S. competitiveness, the science around AI, AI's implications for the workplace and how AI can advance social justice.

Earlier this year, the Food and Drug Administration (FDA) released the Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD) Action Plan. The plan outlines how the FDA intends to oversee the development and use of AI- and ML-based SaMD for treating, diagnosing, curing, mitigating or preventing disease and other medical conditions, and it updates the original proposal penned in 2019. In November, the plan was refreshed again in partnership with Canada and the UK.

In October, the Equal Employment Opportunity Commission (EEOC) launched an initiative to mitigate AI bias and promote algorithmic fairness. The initiative has yet to publish any formal procedures, but plans are in the works to author a framework in the near future with input from industry leaders, enterprises and consumers. This guidance will be used to promote more transparency and fairness in the use of artificial intelligence in hiring.

Last summer, NIST requested information from enterprises and technology experts to help inform its proposed artificial intelligence risk management framework. In an effort to promote more transparency and trust in how enterprises use artificial intelligence, this RFI drew responses from a wide variety of stakeholders working to promote innovation and security.

White House addresses concerns over privacy, civil liberties

When it comes to AI, the Biden administration has, thus far, primarily focused on protecting consumer privacy. In July, the White House, as part of the National Artificial Intelligence Initiative, began to gather information from enterprises, academia and experts on how to create a comprehensive AI Risk Management Framework. The framework is intended to address concerns about trust and transparency in AI systems and to work toward more responsible and equitable artificial intelligence.

In September, the U.S.-EU Trade and Technology Council (TTC) released its first joint statement. In it, the council vows to develop “AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values.” To achieve this, both the EU and U.S. promised to uphold the OECD Recommendation on Artificial Intelligence for more trustworthy AI and evaluation tools. Additionally, the TTC intends to conduct a joint economic study examining the impact of AI on the labor market’s future. 

In October 2021, the White House Office of Science and Technology Policy expanded the conversation around regulating AI, protecting consumer privacy and guaranteeing safety. Working with experts across industries, academia and government agencies, the Office accepted public interest submissions to inform the creation of a Bill of Rights for an AI-powered world.

In Congress: Facebook whistleblower is a catalyst for change

In October, Facebook's data practices were brought into the limelight when former employee Frances Haugen testified before Congress. The hearing revealed the ways in which Facebook knowingly continued business practices that caused harm to vulnerable groups. Following the initial hearing, members of Congress began introducing legislation to regulate major technology companies and prevent them from harming vulnerable communities.

In an effort to implement safeguards governing how AI is used in the United States, Rep. Frank Pallone (D-NJ) introduced the Justice Against Malicious Algorithms Act of 2021, shortly after Facebook stated its intent to place new safeguards on algorithms to protect children from harm. The bill would remove the existing protections websites have under Section 230 of the Communications Decency Act of 1996 that insulate them from liability for what is posted on their platforms.

In another attempt to regulate how algorithms are used on online platforms, a bipartisan group of members of Congress sponsored the Filter Bubble Transparency Act in November. The bill would force major corporations to give users options for how their data is used by opaque algorithms.

As concerns over privacy and trust in artificial intelligence continue to rise, legislation of this type is likely to continue throughout the next several years. Legislators want to grant Americans more autonomy in how their data is used by the platforms that have become so ingrained into daily life.

What to expect this year

In 2021, strides were made toward regulating AI across the globe. Expect that to continue in 2022, as a growing number of corporations, analysts and governments recognize the importance of safeguards, risk management strategies and governance frameworks for AI systems.

Predictions for 2022 often point to more widespread adoption of explainable AI, stronger AI-centered risk management strategies and increased monitoring of AI systems. Discussions of AI ethics will be central to the development of further regulation as calls for ethical hiring practices, bans on facial recognition and much more continue to grow across the globe.

Chris J. Preimesberger, editor of this article, is a former editor of eWEEK and a regular VentureBeat contributor who has been reporting on and analyzing IT trends and products for more than two decades. 

Anthony Habayeb, an AI/ML governance thought leader and industry commentator, is the founding CEO of Monitaur, an AI governance and ML assurance software provider. Habayeb is dedicated to guiding enterprises to build and deploy responsible AI and machine learning models. He has a newscast series on YouTube.

