EU AI Act Penalties Take Effect as Member States Launch First Compliance Audits
Three weeks after the February 2, 2026 deadline for prohibited AI practices, EU member states begin their first formal compliance audits — targeting emotion recognition in workplaces, social scoring systems, and real-time biometric surveillance.
Three weeks after the February 2, 2026 deadline for prohibited AI practices under the EU AI Act, member states have begun their first formal compliance audits — marking the transition from regulatory preparation to active enforcement of the world's most comprehensive AI legislation.
First Audit Targets
The initial audits focus on the AI Act's outright prohibitions: emotion recognition systems in workplaces and educational institutions, social scoring by public authorities, real-time remote biometric identification in public spaces (with limited law enforcement exceptions), and AI systems that exploit vulnerabilities of specific groups. National AI authorities in France, Germany, and the Netherlands have issued their first formal information requests to companies suspected of operating prohibited systems.
Industry Response
Major technology companies have broadly complied with the prohibited practices provisions, having had two years to prepare since the Act's adoption in March 2024. The greater challenge falls to mid-market and enterprise software companies that may have embedded AI capabilities — sentiment analysis, behavioral scoring, or biometric processing — into existing products without recognizing that these features now fall under the Act's prohibitions. Industry groups report a surge in demand for AI compliance auditing services, with consulting firms specializing in EU AI Act readiness seeing three to four times their normal client volumes.
Penalty Framework
Violations of the prohibited practices provisions carry the Act's steepest penalties: up to 35 million euros or 7% of global annual turnover, whichever is higher. For context, a 7% revenue penalty applied to a major technology company could exceed $10 billion — a figure designed to make non-compliance economically irrational even for the largest firms. The first penalties are not expected until later in 2026, as regulators work through the audit-to-enforcement pipeline, but the audits themselves signal that the era of voluntary AI governance in Europe has ended.