
From EU’s AI Act to US AI Policies: A Global Shift Towards Regulating Artificial Intelligence

And there has been a lot of legislative news.

European Union officials have reached a landmark deal on the AI Act, a comprehensive law to regulate artificial intelligence. The law aims to classify AI systems by risk, enforce transparency, and financially penalize tech companies for noncompliance. It requires tech companies to disclose data and conduct rigorous testing, and it bans certain high-risk uses of AI. The legislation includes restrictions for foundation models but provides exemptions for open-source models. Violators of the AI Act could face fines of up to 7% of global revenue. The law positions Europe as a leader in tech regulation and may serve as a model for other jurisdictions.

The act includes obligations for high-impact AI systems, transparency requirements, and the right for citizens to file complaints. The law aims to limit the use of AI and protect against its risks, such as job automation, misinformation, and national security threats. The law is expected to come into force no earlier than 2025.

The EU’s deal on the AI Act has been criticized for being too broad and potentially stifling innovation. Critics warn of negative consequences for the European economy and the technology sector, with concerns that it may drive away European tech startups and businesses. European SMEs are raising concerns about potential changes to the EU’s AI Act, specifically regarding the regulation of foundation AI models. France, Germany, and Italy propose that Big Tech companies self-regulate these models, but SMEs argue that this would shift responsibility to smaller businesses. The European Digital SME Alliance suggests that providers of large foundation models undergo third-party conformity assessments to ensure compliance. Amnesty International also opposes the proposal, warning that it could jeopardize the adoption of the AI Act.

And the EU isn’t the only group working on legislation. According to the Washington Post, Senate Majority Leader Charles E. Schumer and the bipartisan working group on artificial intelligence expect key committees to increase efforts to craft AI legislation in 2024. The group has discussed principles but has not endorsed any specific proposals yet. Lawmakers have introduced various measures to regulate AI tools and promote development. Some lawmakers, like Sen. Josh Hawley, are getting impatient and plan to push for action. The discussions aim to take action before the 2024 elections to prevent the misuse of AI in online political discourse.

For example, the House of Representatives Energy & Commerce Committee will discuss the use of artificial intelligence (AI) in healthcare at an upcoming congressional hearing. The hearing will focus on the federal government’s role in addressing AI in the marketplace and include witnesses from various government departments.

And last week, the House unanimously passed the POST IT Act, requiring the government to create an online portal for small businesses to access regulations and compliance guidance. Additionally, the House passed the Small Business Contracting Transparency Act to increase government contracts awarded to small businesses in disadvantaged areas or owned by women or disabled veterans. The government currently aims to provide 5% of contracts to women-owned small businesses, 3% to disabled veteran-owned small businesses, and 3% to small businesses in poorer areas. Agencies routinely miss these targets, which this bill seeks to address.

Finally, unidentified governments are reportedly spying on smartphone users through push notifications, according to a letter from US Senator Ron Wyden. The letter states that foreign officials demand data from Google and Apple, who have unique insight into the traffic flowing from apps to users. Wyden has called on the Department of Justice to address this issue and allow public discussions of push notification spying. Both foreign and U.S. government agencies have asked Apple and Google for push notification data, and Apple is updating its transparency reporting to include these requests. This technique, which relies on the alerts users receive when friends contact them via email or text, was used to gather information about the U.S. Capitol rioters on January 6, 2021.

Why do we care?

In the EU, the AI Act’s potential impact on small and medium-sized enterprises (SMEs) and the exemption for open-source models are critical. While the law aims to protect against AI risks like job automation and misinformation, there’s a valid concern that it might stifle innovation and deter tech startups.

In the US, it’s a sign that structured AI regulation is coming. Plus, there are bonus wins: greater awareness of contracting opportunities with the federal government, and a growing willingness to take on Big Tech.