In the wake of the rapid proliferation of advanced generative AI, the world’s leading economies have moved to establish clear guardrails for the technology’s most powerful developers. The G7 Hiroshima AI Process represents a landmark consensus, creating a set of international guiding principles and a voluntary Code of Conduct aimed directly at organizations building sophisticated AI systems.
Launched during the G7 Summit in Japan in May 2023, this initiative is a direct response to the opportunities and challenges posed by technologies like large language models (LLMs). While frameworks like the OECD and UNESCO provide broad ethical guidance for all AI actors, the Hiroshima Process zooms in on the unique responsibilities of those at the cutting edge of frontier AI development. You can read the official G7 Leaders’ Statement on the Hiroshima AI Process website.
For business leaders and AI practitioners, especially those working with foundation models, understanding this process is not optional—it’s essential for navigating the future of AI governance, managing risk, and maintaining a social license to operate.
The Risk-Based Framework: Understanding the Four Tiers
The EU AI Act takes a risk-based approach, categorizing AI systems into four distinct levels based on the risk they pose to the health, safety, and fundamental rights of individuals. This tiered structure keeps the regulation targeted and proportionate.
- Unacceptable risk (banned): AI practices that pose a clear threat to the safety, livelihoods, and rights of people are strictly prohibited.
- High risk (strictly regulated): Systems that can have a significant impact on people’s lives are subject to rigorous compliance obligations.
- Limited risk (transparency obligations): Users must be informed that they are interacting with an AI system; this covers chatbots and deepfakes (which must be labeled).
- Minimal risk (unrestricted): The vast majority of AI systems, such as spam filters or AI in video games, face no new legal obligations.
Unacceptable Risk: Prohibited AI Practices
The Act explicitly bans certain AI applications deemed to pose an unacceptable threat to fundamental rights. As of February 2, 2025, the following practices are prohibited in the EU:
- Social scoring systems that evaluate individuals based on their social behavior or personal characteristics.
- AI-based manipulation that uses subliminal techniques to distort behavior and cause harm.
- Untargeted scraping of facial images from the internet or CCTV to create facial recognition databases.
- Emotion recognition in workplaces and educational institutions.
- Biometric categorization based on sensitive attributes like race, political opinions, or sexual orientation.
- Real-time remote biometric identification in public spaces for law enforcement, with very limited exceptions.
- Predictive policing systems that assess an individual’s risk of committing a crime based solely on profiling or personality traits.
- AI systems that exploit the vulnerabilities of specific groups, such as children or people with disabilities.
High-Risk AI Systems: The Core of the Regulation
This is the most regulated category, subject to extensive compliance obligations before and after market entry. High-risk systems fall into two main groups:
- Safety Components: AI systems that are safety components of products already covered by existing EU regulations, such as those in medical devices, vehicles, or toys.
- Specific Use Cases (Annex III): AI systems used in eight critical areas:
- Biometrics and identification
- Management of critical infrastructure (e.g., water, gas, electricity)
- Education and vocational training (e.g., scoring exams, admissions)
- Employment and worker management (e.g., recruitment, performance evaluation)
- Access to essential services (e.g., credit scoring, insurance)
- Law enforcement
- Migration and border control
- Administration of justice and democratic processes
Organizations developing or deploying high-risk AI systems must adhere to strict requirements, including risk management, data governance, technical documentation, human oversight, and high standards of accuracy and cybersecurity.
Limited Risk: Transparency is Key
AI systems classified as limited risk must meet specific transparency obligations to ensure users know they are interacting with an AI. This category primarily includes:
- Chatbots and other conversational AI.
- AI systems that generate synthetic content (deepfakes), which must be labeled as artificially generated.
Minimal Risk: The Majority of AI
Most AI applications, such as AI-enabled video games or spam filters, fall into the minimal risk category. These systems face no additional regulatory requirements under the AI Act, allowing innovation to proceed without new legal hurdles.
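To make the tiers concrete, here is a minimal Python sketch of how an internal compliance tool might encode the taxonomy; the use-case names and mappings are illustrative assumptions, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

# Illustrative mapping only: a real determination requires legal analysis
# against Article 5 (prohibitions) and Annex III (high-risk use cases).
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,      # employment (Annex III)
    "credit_scoring": RiskTier.HIGH,    # essential services (Annex III)
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH pending proper legal review."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("cv_screening"))  # RiskTier.HIGH
```

Defaulting unknown systems to the high-risk tier until reviewed is a conservative design choice, not a legal requirement.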
General-Purpose AI (GPAI) Models: A New Regulatory Layer
The AI Act introduces specific rules for General-Purpose AI (GPAI) models, such as the large language models that power many generative AI tools. All GPAI models face baseline obligations, while the most capable ones, identified by the computational power used to train them (measured in floating-point operations, or FLOPs), face additional duties.
Standard GPAI Models
All GPAI models must meet baseline transparency requirements:
- Provide technical documentation
- Publish a summary of the data used for training
- Implement a copyright compliance policy
GPAI with Systemic Risk
The most powerful models (>10²⁵ FLOPs) face stricter obligations:
- Conduct thorough model evaluations
- Assess and mitigate systemic risks
- Report serious incidents to authorities
- Ensure high-level cybersecurity
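A common rule of thumb estimates the training compute of a dense transformer at roughly 6 FLOPs per parameter per training token, which lets you sanity-check a model against the 10²⁵ threshold; the model size below is hypothetical.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # training-compute threshold in FLOPs

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
estimate = training_flops(70e9, 2e12)
print(f"{estimate:.2e} FLOPs")             # 8.40e+23
print(estimate > SYSTEMIC_RISK_THRESHOLD)  # False: below the threshold
```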
Implementation Timeline and Enforcement
The AI Act will be implemented in phases, giving businesses time to adapt. Enforcement will be handled by a combination of a central European AI Office, which will oversee GPAI models, and national competent authorities in each member state.
- February 2, 2025: Prohibitions apply. The ban on unacceptable-risk AI practices takes effect.
- August 2, 2025: GPAI rules apply. Obligations for General-Purpose AI models become applicable.
- August 2, 2026: Full high-risk compliance. Most requirements for high-risk AI systems are enforced, with an extended transition to August 2, 2027 for high-risk AI embedded in products covered by existing EU product legislation.
Penalties for non-compliance are severe, with fines for prohibited AI practices reaching up to €35 million or 7% of a company’s worldwide annual turnover, whichever is higher.
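Because the ceiling is the higher of the two amounts, the percentage dominates for large firms, as this minimal sketch shows (the turnover figure is a made-up example):

```python
def max_fine_prohibited_practice(worldwide_turnover_eur: float) -> float:
    """Fine ceiling for prohibited practices: the higher of EUR 35 million
    or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# Hypothetical company with EUR 2 billion in annual turnover:
print(f"EUR {max_fine_prohibited_practice(2e9):,.0f}")  # EUR 140,000,000
```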
Strategic Recommendations for Business Leaders
Proactive compliance is not just about avoiding fines; it’s about building a competitive advantage. Here is a roadmap for navigating the AI Act.
Immediate Actions (0-6 Months)
- Conduct an AI System Audit: Inventory all AI systems used across your organization to understand your exposure (a minimal record sketch follows this list).
- Perform Risk Classification: Determine which of your AI systems fall into the high-risk, limited-risk, or minimal-risk categories.
- Establish an AI Governance Structure: Appoint a leader or team to oversee AI Act compliance efforts.
- Develop AI Literacy Programs: Train relevant employees on the new requirements.
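As promised above, here is a minimal sketch of what one inventory record might look like; the field names and schema are assumptions, not a mandated format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row of an internal AI inventory; all fields are illustrative."""
    name: str
    business_owner: str
    purpose: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"
    is_gpai: bool = False
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AISystemRecord("resume-screener", "HR", "candidate ranking", "high"),
    AISystemRecord("support-bot", "CX", "customer chat", "limited"),
]

# Surface the systems that carry the heaviest obligations first.
for record in sorted(inventory, key=lambda r: r.risk_tier != "high"):
    print(record.name, record.risk_tier)
```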
Medium-Term Implementation (6-18 Months)
- Implement Risk Management Systems: Establish a continuous framework for identifying, assessing, and mitigating AI risks.
- Enhance Data Governance: Ensure your datasets are high-quality, representative, and managed in line with both the AI Act and the GDPR.
- Create Documentation Frameworks: Standardize your processes for maintaining the required technical documentation.
- Design Human Oversight Mechanisms: Implement meaningful human control systems for your high-risk AI (one common pattern is sketched below).
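One widely used pattern for meaningful human control is a review gate that auto-approves only high-confidence outputs and routes everything else to a person. This is a minimal sketch of the idea; the threshold value is an assumption, not a regulatory requirement.

```python
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # assumed internal policy value, not mandated

def decide_with_oversight(score: float, confidence: float,
                          human_review: Callable[[float], bool]) -> bool:
    """Auto-decide only when the model is confident; otherwise defer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return score >= 0.5       # automated decision
    return human_review(score)    # a person makes the final call

# Stand-in reviewers for demonstration; real review happens in a queue or UI.
print(decide_with_oversight(0.7, 0.95, human_review=lambda s: True))   # True (automated)
print(decide_with_oversight(0.7, 0.60, human_review=lambda s: False))  # False (human decision)
```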
Long-Term Strategic Integration (18+ Months)
- Build Competitive Differentiation: Use your compliance with this “gold standard” regulation as a market advantage to build trust.
- Engage with Regulatory Sandboxes: The Act requires each member state to establish AI regulatory sandboxes. Use these to test innovative systems with regulatory guidance.
- Develop a Global AI Standard: Use your EU AI Act compliance framework as the foundation for meeting AI regulations as they emerge in other jurisdictions.
Turning Compliance into a Competitive Edge
The EU AI Act represents a fundamental shift in how artificial intelligence is governed. Like the OECD AI Principles and the UNESCO Recommendation on the Ethics of AI, it signals a global move towards a more responsible and human-centric approach to technology.
For business leaders, this legislation is a strategic inflection point. Organizations that view the AI Act as an opportunity—to build trust, improve system quality, and differentiate their brand—will be best positioned to thrive. By investing in a robust AI governance framework today, you can transform a complex regulatory challenge into a source of sustainable competitive advantage in the global marketplace.