The European Union's Artificial Intelligence Act has officially entered into force, creating a sweeping regulatory framework that will reshape how AI systems are designed, deployed, and monitored worldwide.
A Risk-Based Approach
The Act classifies AI systems into four risk tiers (unacceptable, high, limited, and minimal), with obligations scaling accordingly. Unacceptable-risk systems are prohibited outright, while high-risk systems, such as those used in critical infrastructure, biometric surveillance, or medical diagnosis, face the strictest remaining scrutiny and require mandatory conformity assessments before deployment.
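The tiering above can be sketched as a simple lookup. This is an illustration only: the use-case names and their tier assignments below are hypothetical examples, not legal classifications under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment required
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping of use cases to tiers, for illustration only;
# real classification requires legal analysis of the Act's annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def requires_conformity_assessment(use_case: str) -> bool:
    """High-risk systems need a conformity assessment before deployment."""
    return USE_CASE_TIERS.get(use_case) is RiskTier.HIGH
```

A registry like this makes the obligation scaling explicit in code: adding a new product feature forces a deliberate decision about which tier it falls into.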
Key Obligations for High-Risk AI
- Transparency: Clear disclosure when users are interacting with an AI system.
- Human Oversight: Mandatory mechanisms for human review and intervention in automated decisions.
- Data Governance: Strict controls on training data quality, bias detection, and data provenance audits.
- Incident Reporting: Serious incidents must be reported to the relevant national authorities within 15 days of the provider becoming aware of them.
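The 15-day incident-reporting window lends itself to a simple deadline check. A minimal sketch, assuming the clock starts on the date the provider becomes aware of the incident:

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # serious incidents: report within 15 days

def reporting_deadline(awareness_date: date) -> date:
    """Latest date by which a serious incident must be reported."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

def is_overdue(awareness_date: date, today: date) -> bool:
    """True if the reporting window has already closed."""
    return today > reporting_deadline(awareness_date)
```

For example, an incident the provider learned of on 1 March 2025 must be reported by 16 March 2025, so a compliance dashboard can flag overdue reports automatically.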
What This Means for Indian Tech Companies
Any organization that sells AI-powered software to EU customers, regardless of where it is headquartered, must comply. For Indian SaaS companies with EU clients, this means building compliance pipelines into their development lifecycle now, not later.
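One way such a compliance pipeline can work is a release gate that refuses to ship an AI feature until each obligation has recorded evidence. The checklist fields below are hypothetical names invented for this sketch, not terms from the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class ComplianceEvidence:
    """Hypothetical per-release checklist mirroring the Act's obligations."""
    ai_disclosure_shown: bool     # transparency: users told they face an AI
    human_review_hook: bool       # human oversight: intervention mechanism
    data_provenance_audit: bool   # data governance: training-data audit done
    incident_runbook: bool        # incident reporting: escalation path exists

def missing_items(evidence: ComplianceEvidence) -> list[str]:
    """Names of obligations without evidence; an empty list means ship."""
    return [f.name for f in fields(evidence) if not getattr(evidence, f.name)]
```

A CI job can fail the build whenever `missing_items` is non-empty, which turns compliance from a quarterly review into a per-release check.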
Bajillion Labs' Perspective
We view this regulation as a net positive for the industry. Responsible AI has always been central to our engineering philosophy. We are actively helping our clients assess their AI-driven products against the Act's requirements and building the observability tools needed to demonstrate ongoing compliance.