How Might the European Union's AI Act Impact Companies in the US?

by Clay Turner, Co-Founder

First proposed in April 2021, the EU's Artificial Intelligence Act represents a significant legislative move toward regulating AI technologies. The European Parliament's negotiators reached a provisional agreement last week; the text will now undergo finalization, endorsement by member states, and formal adoption by the co-legislators. The AI Act is notable for being the first of its kind globally, and it could set a standard for AI regulation much as the GDPR did for data protection.

Through the Lens of GDPR

GDPR not only impacted US companies operating in EU markets but also influenced legislation in other markets, both foreign and domestic. To understand how much that landmark EU legislation affected US companies, consider some of the indirect changes we've seen as a result:

  • US Consumer Expectations: GDPR has influenced consumer expectations in the US, with more individuals now expecting greater control over their data and transparency from companies about how their data is used.
  • Inspiration for US State Laws: GDPR has inspired individual states to enact their own data protection laws. Examples include the California Consumer Privacy Act (CCPA) of 2018, the California Privacy Rights Act (CPRA), which took effect in 2023, and the Virginia Consumer Data Protection Act (VCDPA), which also took effect in 2023.
  • Changes to US Contractual Agreements: GDPR has impacted the way US companies draft contracts and agreements, especially in situations involving international data transfers and third-party data processing.
  • Carbon Copy Domestic Adoption: Many US companies have adopted a more global approach to data protection, treating the EU legislation as the gold standard in the field. In other words, many have opted to apply GDPR-like standards across all operations, not just those involving EU data subjects.

And then, of course, there have been the direct consequences...

  • Enhanced Privacy Policies and Data Protection Measures: US companies that operate in the European Union or deal with EU residents' data have had to comply with GDPR standards. This has led to more robust privacy policies, enhanced data protection measures, and greater transparency in how personal data is collected, used, and stored. Many have appointed Data Protection Officers and implemented stricter compliance measures to avoid heavy fines associated with GDPR violations.
  • Shift in Marketing Strategies: Because GDPR restricts how personal data can be used for marketing purposes, US companies have had to modify their marketing strategies, particularly in the context of email marketing, targeted advertising, and consent management.
  • Cybersecurity Investments: The regulation has led to increased investments in cybersecurity and data protection technologies among US companies to ensure compliance and protect against data breaches.
  • Impact on Small Businesses: Smaller US businesses that operate internationally have faced challenges in complying with GDPR, often requiring significant adjustments in their data handling practices.

The European Union's AI Act In Brief

General Objectives:

The Act aims to ensure that AI systems used in the EU are safe, respect fundamental rights, and align with EU values. It also seeks to promote investment and innovation in AI across Europe.

Risk-Based Approach:

AI systems are regulated based on their potential to cause harm, with stricter rules for higher-risk systems.

Definitions and Scope:

  • Clarifies the definition of AI systems, aligning it with the OECD approach.
  • Excludes AI systems used exclusively for military, defense, research, innovation, or non-professional purposes.

Classification of "High-risk" AI Systems:

AI systems classified as high-risk must meet strict requirements before they can be placed on the EU market.

Prohibited AI Practices:

Bans practices like cognitive behavioral manipulation, untargeted scraping of facial images, emotion recognition in workplaces, social scoring, biometric categorization for sensitive data, and certain predictive policing methods.

Law Enforcement Exceptions:

  • Allows emergency deployment of high-risk AI tools by law enforcement in urgent cases.
  • Permits real-time remote biometric identification in public spaces under strict conditions and for specific purposes like preventing terrorist attacks or searching for serious crime suspects.

General Purpose AI Systems and Foundation Models:

  • Introduces specific transparency obligations for general-purpose AI systems.
  • Imposes stricter regulations for high-impact foundation models capable of performing a wide range of tasks.

New Governance Architecture:

  • Establishes an AI Office within the Commission to oversee advanced AI models.
  • Forms an AI Board for coordination and advisory roles, involving member states’ representatives.
  • Creates an advisory forum for stakeholders like industry representatives, SMEs, academia, and civil society.

Penalties:

  • Sets fines as a percentage of global annual turnover or a predetermined amount, whichever is higher, with the highest caps reserved for the most serious violations (under the provisional agreement, up to €35 million or 7% of global annual turnover for banned AI practices).
  • Introduces proportionate fines for SMEs and start-ups.

Transparency and Protection of Fundamental Rights:

  • Mandates a fundamental rights impact assessment before deploying high-risk AI systems.
  • Requires increased transparency in the use of high-risk AI systems, including registration in the EU database.
  • Obligates users of emotion recognition systems to inform people when they are exposed to such systems.

Measures in Support of Innovation:

  • Facilitates AI regulatory sandboxes for testing innovative AI systems in real-world conditions.
  • Provides testing conditions and safeguards for AI systems.
  • Includes support measures for smaller companies and specific derogations.

Enforcement Timeline:

The Act will apply two years after its entry into force, with some specific provisions having different timelines.

Possible Implications for US Companies

Market Access for High-Risk AI Systems:

US companies developing or distributing high-risk AI systems in the EU will need to comply with the stringent requirements set forth by the Act. This includes ensuring these systems meet the safety, transparency, and fundamental rights standards as defined by the EU.

Prohibited AI Practices:

The ban on certain AI practices, like cognitive behavioral manipulation, untargeted scraping of facial images, and certain types of predictive policing, will affect US companies engaged in these activities. They will need to modify or discontinue these practices for their products and services in the EU market.

Law Enforcement Exceptions:

US companies providing AI solutions to law enforcement agencies in the EU will have to adhere to the specific conditions and exceptions laid out in the Act, particularly concerning real-time remote biometric identification systems.

General Purpose AI and Foundation Models:

Companies involved in developing general-purpose AI systems or foundation models will be subject to transparency obligations and potentially stricter regulations for high-impact foundation models. This could necessitate significant adjustments in development and disclosure practices.

Governance and Compliance:

The establishment of an AI Office within the European Commission and the AI Board means US companies will have to navigate a new regulatory landscape, potentially dealing with additional bureaucratic processes and compliance checks.

Penalties for Non-Compliance:

The Act’s provisions for fines based on global annual turnover can lead to significant financial consequences for US companies violating the regulations. Understanding and adhering to the Act’s requirements will be crucial to avoid these penalties.

Transparency and Fundamental Rights Protection:

Requirements for a fundamental rights impact assessment, as well as increased transparency in the use of high-risk AI systems, will compel US companies to adopt more rigorous assessment and disclosure practices.

Innovation and Testing Environments:

While the Act supports innovation through regulatory sandboxes, US companies will need to understand and utilize these provisions effectively to test and develop AI systems in the EU.

Note: The above are interpretations based on current publications by the European Council. Information is subject to change.
