Blog

An Overview of Global AI Regulation and What’s Next

By: Jordan Shapiro / Jillian Cota / 03.08.2023

Artificial intelligence (AI) has become the subject of large-scale regulation by governments around the world. While AI offers many benefits, such as increased productivity and cost savings, it also presents risks and challenges. For example, AI systems can be biased or discriminatory, leading to unfair outcomes. They can also raise concerns about privacy and data security, as these systems often rely on large amounts of personal data.

As a result, governments around the world are starting to introduce regulations to ensure that AI is developed and used in a safe, responsible, and ethical manner. These regulations cover a range of issues, from data privacy and security to algorithmic transparency and accountability.

This piece unpacks emerging AI regulation in the EU, Canada, the U.S., and China, and examines how each jurisdiction approaches the technology as it seeks to balance economic, social, and public priorities with innovation.

European Union: Artificial Intelligence Act (AIA)
The European Commission proposed the Artificial Intelligence Act (AIA) on April 21, 2021. The current text takes a risk-based approach to guide the use of AI in both the private and public sectors, defining three risk categories: applications posing unacceptable risk, high-risk applications, and applications not explicitly banned. The regulation prohibits the use of AI in critical services that could threaten livelihoods or encourage destructive behavior, but allows the technology in other sensitive sectors, such as health, subject to stringent safety and efficacy checks by regulators. The legislation is still under review in the European Parliament.

The AI Act is horizontal legislation: it regulates automated technology broadly rather than targeting specific areas of concern. It defines AI systems expansively to include a wide range of automated decision-making tools, such as algorithms, machine learning models, and logic-based tools, even though some of these technologies are not traditionally considered AI.

Canada: The Artificial Intelligence and Data Act (AIDA)
In June 2022, the Canadian Parliament introduced a draft regulatory framework for artificial intelligence using a modified risk-based approach. The bill has three pillars, but this piece examines only the section dealing with AI, the Artificial Intelligence and Data Act (AIDA). The goal of Canada’s AI rules is to standardize private companies’ design and development of AI across the provinces and territories.

The modified risk-based approach differs from the EU’s in that it does not ban the use of automated decision-making tools, even in critical areas. Instead, under AIDA, developers using AI in high-risk systems must create mitigation plans, or impact assessments, designed to reduce risk, increase transparency, and ensure that the tools do not violate anti-discrimination laws across social, business, and political systems.

United States: AI Bill of Rights and State Initiatives
The United States has yet to pass federal legislation governing AI applications. Instead, the Biden Administration and the National Institute of Standards and Technology (NIST) have published broad guidance for the safe use of AI, while state and city governments pursue their own regulations and task forces. In a break from the EU model, U.S. regulation thus far targets specific use cases rather than seeking to regulate AI technology as a whole.

At the federal level, the Biden Administration recently released the Blueprint for an AI Bill of Rights, which addresses concerns about AI misuse and provides recommendations for safely using AI tools in both the public and private sectors. The blueprint calls for key safeguards such as greater data privacy, protections against algorithmic discrimination, and guidance on how to prioritize safe and effective AI tools. While it is not legally binding, it serves as a guide for lawmakers at all levels of government who are considering AI regulation.

In addition, NIST, an agency within the Department of Commerce that develops technology standards, has published guidance for identifying and managing AI bias. NIST also tracks how federal agencies integrate AI tools.

In 2022, 15 states and localities proposed or passed legislation concerning AI. Some bills focus on regulating AI tools in the private sector, while others set standards for public-sector AI use. New York City enacted one of the first AI laws in the U.S., effective January 2023, which aims to prevent AI bias in the employment process. Colorado and Vermont created task forces to study AI applications, such as facial recognition, at the state level.

China: Algorithm Transparency and Promoting AI Industry Development
China has set a goal for its private AI industry to generate $154 billion annually by 2030. The country has yet to pass rules governing AI technology at large. Recently, however, it introduced a law regulating how private companies use online algorithms for consumer marketing. The law requires companies to inform users when AI is used for marketing purposes and bans the use of customers’ financial data to advertise the same product at different prices. Not surprisingly, the law does not apply to the Chinese government’s use of AI.

Alongside China’s national regulation, in September 2022 Shanghai became the first provincial-level government to pass a law focused on private-sector AI development. The law, titled the Shanghai Regulations on Promoting the Development of the AI Industry, provides a framework for companies in the region to develop their AI products in line with non-Chinese AI regulations.

Next Steps for Global Regulation
Artificial intelligence is a promising tool that is stimulating global growth and driving the future of innovation. Despite the positive impacts of AI, there is no question that some regulation is needed to combat the misuse of AI and to protect consumers.

The approaches summarized in this piece illustrate how policymakers around the world are addressing specific harms from AI, as well as AI as a whole. The EU’s approach regulates the use of any automated decision-making tool and outlines the sectors where such tools can and cannot be used. The U.S. offers voluntary recommendations and standards at the federal level, with states and cities pursuing their own targeted studies and rules aimed at specific harms. Canada’s modified risk-based approach regulates all AI tools but stops short of banning the technology in certain spheres, instead allowing companies to define their own risk-mitigation strategies. And the Chinese approach seeks to increase transparency for consumers and to establish China as a global power in AI standards.

To prepare, companies will need to further develop global positions on AI ethics and compliance for their products in order to keep pace with evolving regulations. Legislators, meanwhile, should focus on legitimate harms to consumers and stay apprised of how stricter regulatory regimes affect AI innovation.