EU Bans AI Systems with ‘Unacceptable Risk’

The EU’s new AI Act bans AI systems that pose an “unacceptable risk,” including those used for social scoring and biometric manipulation. Companies must comply by August 2025 to avoid hefty fines.

In a bold move to regulate artificial intelligence, the European Union has officially banned AI systems that are deemed to pose an “unacceptable risk” to society. This new legislation marks the first major compliance deadline for the EU AI Act, which was approved in 2024 after years of intense discussions. This comprehensive framework is designed to ensure that AI technologies align with ethical standards, protecting individuals from harm. But what does this mean for companies and developers using AI in the EU? Let’s break it down.

The EU AI Act: What’s at Stake?

On February 2, 2025, the European Union reached the first compliance milestone under the AI Act, which seeks to regulate how AI interacts with people across various domains, from consumer apps to real-world environments. The Act categorizes AI systems into four risk levels (a simple illustrative mapping follows the list):

  1. Minimal Risk (e.g., email spam filters) – No regulation.

  2. Limited Risk (e.g., customer service chatbots) – Light regulatory oversight.

  3. High Risk (e.g., healthcare recommendations) – Heavy regulation.

  4. Unacceptable Risk (e.g., social scoring) – Banned entirely.
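
Viewed from a developer's seat, the tiering amounts to a lookup from risk level to regulatory treatment. Here is a minimal, purely illustrative Python sketch; the tier names and obligations come from the list above, while the function name and example table are our own shorthand, not anything defined in the Act itself:

    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "no regulation"
        LIMITED = "light regulatory oversight"
        HIGH = "heavy regulation"
        UNACCEPTABLE = "banned entirely"

    # Example systems drawn from the article; classifying a real system
    # requires legal analysis of its actual use case, not a lookup table.
    EXAMPLE_SYSTEMS = {
        "email spam filter": RiskTier.MINIMAL,
        "customer service chatbot": RiskTier.LIMITED,
        "healthcare recommendation engine": RiskTier.HIGH,
        "social scoring system": RiskTier.UNACCEPTABLE,
    }

    def regulatory_treatment(system: str) -> str:
        tier = EXAMPLE_SYSTEMS[system]
        return f"{system}: {tier.name} risk, {tier.value}"

    print(regulatory_treatment("social scoring system"))
    # social scoring system: UNACCEPTABLE risk, banned entirely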

The ban specifically targets AI applications that could have harmful or manipulative effects on individuals, such as those used for social scoring, manipulating decisions, or exploiting vulnerabilities like age or socioeconomic status.

Which AI Systems Are Banned?

Some of the most controversial AI practices now banned under the EU AI Act include:

  • AI used for social scoring (e.g., building behavior-based risk profiles).
  • Subliminal AI manipulation that influences decisions without the user’s awareness.
  • AI exploiting vulnerabilities such as age, disability, or socioeconomic status.
  • Predictive AI trying to determine if someone will commit a crime based on their appearance.
  • Biometric AI inferring personal characteristics like sexual orientation or emotional state in workplaces or schools.
  • Real-time biometric data collection in public places for law enforcement.

Companies found using these systems will face heavy penalties: fines of up to €35 million or 7% of annual global revenue for the preceding financial year, whichever is higher.
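
To make the "whichever is higher" rule concrete, here is a small worked example (illustrative only; actual fines are set case by case by regulators, and the revenue figures below are hypothetical):

    def max_fine_eur(annual_global_revenue_eur: int) -> int:
        # Ceiling for prohibited-practice violations under the AI Act:
        # EUR 35 million or 7% of annual global revenue, whichever is higher.
        return max(35_000_000, annual_global_revenue_eur * 7 // 100)

    # EUR 300M revenue: 7% is EUR 21M, so the EUR 35M floor applies.
    print(max_fine_eur(300_000_000))       # 35000000
    # EUR 10B revenue: 7% is EUR 700M, well above the floor.
    print(max_fine_eur(10_000_000_000))    # 700000000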

Who’s Affected and What Are the Deadlines?

While the February 2 deadline is a formality, companies have until August 2025 to ensure full compliance with the Act. From that point forward, fines and enforcement actions will begin. Several large tech companies, including Amazon, Google, and OpenAI, signed the EU AI Pact, pledging to align their practices with the AI Act in advance.

However, Meta and Apple, as well as some startups like Mistral, chose not to sign the pact. Despite this, experts believe that most companies will avoid using risky AI systems and thus comply with the ban naturally.

Exceptions to the Ban: What’s Allowed?

Despite the strict regulations, there are a few exceptions to the AI bans outlined in the Act. For example:

  • Law enforcement can use biometric systems for targeted searches (e.g., finding missing persons) under certain conditions, with prior authorization.

  • Emotion-recognition AI can be used in workplaces or schools for medical or safety purposes, like therapeutic systems.

The European Commission is expected to release additional guidelines in early 2025 to clarify these exceptions further. However, ambiguities remain, especially about how the AI Act will interact with other EU regulations like GDPR and NIS2.

Fines and Future Challenges

Though the enforcement of penalties won't begin until August 2025, companies are advised to start aligning their AI systems with the AI Act's requirements now. This regulatory framework has already set a global precedent for how AI development will proceed, with the EU leading the way on AI ethics and privacy protection.

The big challenge moving forward is for organizations to stay informed and ensure that their AI tools comply with all applicable regulations. That includes understanding how the AI Act will mesh with other laws, such as the GDPR on data protection, and tracking compliance standards as they are clarified.

The Future of AI in the EU: What’s Next?

The EU’s bold stance on AI regulation could pave the way for similar moves in other parts of the world, as countries and regions look to protect their citizens from the potential harms of AI misuse. As the AI landscape evolves, ethical considerations and responsible AI development will take center stage, driving further innovation while ensuring that the technology works for the benefit of society.

The introduction of the EU AI Act is a wake-up call for developers and companies using AI systems. While many applications remain permitted under the new framework, those that pose unacceptable risks are now banned. With heavy fines and strict compliance timelines, it's crucial for businesses to understand the impact of these regulations. Stay ahead of the curve and ensure your AI systems are safe, ethical, and compliant with evolving global AI standards.
