How to Get Ready for the New Legal Framework of the AI Act
The AI Act is expected to be gradually implemented starting from the second half of 2024. What does this mean for organizations using AI in their business? How can you adapt to the new AI regulation and protect your business from the related risks?
In this article, we will review the main provisions of the AI Act, analyze potential risks and penalties, and give recommendations to prepare for the new AI legal framework.
What Is the AI Act?
The AI Act provides a comprehensive legal framework to regulate AI systems deployed and used in the EU. It aims to address the risks AI poses to human oversight, safety, transparency, and data privacy, as well as threats to fundamental human rights in the EU. The main objectives of the new regulation include the following:
- Ensuring that AI systems placed on the EU market are safe and respect human rights.
- Facilitating the development of a safe, trustworthy, and lawful market of AI applications.
- Providing legal certainty to support innovation and investment in AI systems.
- Enhancing governance and enforcing ethics and safety requirements related to AI use.
The proper enforcement of the new AI regulation calls for a range of new governing bodies:
- An AI Office within the Commission to enforce the AI Act rules across the EU.
- An AI Board with representatives from all member states to assist the Commission and member states in the efficient application of the AI Act.
- A scientific panel of independent experts to support the implementation of the AI Act.
- An advisory forum for stakeholders to support the AI Board and the Commission with technical expertise.
The EU AI Act is a significant step toward balancing technological innovation and ethical considerations in the development of AI within the EU. It will be adopted gradually, with full application expected by the end of the first quarter of 2026.
Which Parties Are and Are Not Covered by the New AI Regulation?
Given the tight implementation schedule of the new AI regulation, you should clearly understand whether the AI systems in your organization measure up to the requirements of the AI Act. To start, determine whether the AI systems you use fall within its scope.
Parties Covered by the EU AI Act
- Providers placing AI systems on the EU market or putting them into service in the EU
- Providers of AI systems outside the EU whose system output is used or intended for use in the EU
- Any EU-based AI system provider
- Importers and distributors placing AI systems on the EU market
- Manufacturers of products incorporating AI systems who place them on the EU market or put them into service under their own name or trademark
- EU-based users of AI products or services
Parties Not Covered by the EU AI Act
- AI systems developed or used exclusively for military purposes
- AI systems used outside the EU for law enforcement or judicial cooperation with the EU under international agreements
- AI systems used solely for scientific research and discovery
- AI systems in the research, development, and testing phases before being placed on the market or put into service, including free and open-source AI components
- People using AI for personal purposes
Simply put, all AI systems placed or used in the territory of the EU fall under the AI Act requirements. The exclusions are AI systems for military purposes, scientific research, pre-market development, cross-border law enforcement or judicial cooperation, and personal use.
Risk-Based Approach in Assessing AI Applications
The AI Act classifies AI systems based on the potential risks they carry, and EU obligations vary by category. According to the current version of the Act, you may face the following responsibilities:
Unacceptable risk
AI systems in this category pose an unacceptable risk, so their use is prohibited outright.
High-risk AI systems
- Adequate risk management to identify, evaluate, and mitigate risks during the lifecycle of the AI system.
- Appropriate data governance and management practices (training, validation, and testing) to ensure dataset quality.
- Technical documentation must demonstrate compliance with obligations and allow for compliance assessments.
- Logging of events to ensure traceability of the system's functioning (see the logging sketch below).
- Record keeping that supports tracing and monitoring of high-risk situations, conformity with standards, and verification that the AI system's output has not led to any discriminatory effects.
- At a minimum, logs must capture usage periods, the data processed, and the identification of the personnel involved.
- Registration in the EU database for high-risk AI systems.
Limited-risk AI systems
- People must be informed that they are interacting with an AI system.
- People exposed to a (non-prohibited) emotion recognition or biometric system must be informed about the system's presence.
- Deepfake content must be disclosed as being artificially generated or manipulated.
General purpose AI (GPAI)
- All GPAI model providers must prepare technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the data used for training.
- Providers of free and open-license GPAI models need only comply with copyright law and publish a summary of the training data.
- For GPAI models that present a systemic risk, providers must additionally conduct model evaluations and adversarial testing, report serious incidents, and ensure cybersecurity protections.
Minimal-risk AI systems
Not subject to stringent obligations beyond adhering to general product safety standards.
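To make the traceability requirement for high-risk systems more tangible, below is a minimal sketch of structured event logging in Python. The scenario (a credit-scoring system) and all field names are illustrative assumptions: the AI Act prescribes what must be traceable, not a concrete log format.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the AI Act defines *what* must be traceable,
# not a specific log schema. Field names below are assumptions.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_event(system_id: str, operator_id: str,
                 input_ref: str, output_summary: str) -> None:
    """Record one traceable event for a (hypothetical) high-risk AI system."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "system_id": system_id,        # which registered AI system acted
        "operator_id": operator_id,    # personnel identification
        "input_ref": input_ref,        # reference to the data processed
        "output_summary": output_summary,
    }
    logger.info(json.dumps(event))

# Example: a hypothetical credit-scoring decision flagged for human review
log_ai_event("credit-scoring-v2", "analyst-017", "application-48213",
             "score=640, flagged for human review")
```

Retention and tamper resistance matter as much as the record format itself, since logs only support traceability if they survive intact for later review.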
Fines and Penalties by the New Regulation of AI
If you use AI in your company's products, services, or internal operations while neglecting the new regulations, you may face the following penalties:
- Up to €35 million or 7% of global annual turnover, whichever is higher, for violations involving prohibited AI systems.
- Up to €7.5 million or 1.5% of global annual turnover for breaching transparency and data requirements.
- Up to €15 million or 3% of global annual turnover for violating the AI Act's other obligations.
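As a rough illustration of how this tiered structure works in practice, the sketch below computes a company's maximum exposure for one tier. It assumes, in line with the Act's wording for large companies, that the higher of the fixed amount and the turnover percentage applies; the turnover figure is hypothetical.

```python
def max_fine_exposure(annual_turnover_eur: float,
                      fixed_cap_eur: float,
                      turnover_pct: float) -> float:
    """Upper bound for one fine tier: the fixed amount or the
    turnover share, whichever is higher (large companies)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice tier for a company with €2B global annual turnover:
# 7% of €2B = €140M, which exceeds the €35M fixed cap.
exposure = max_fine_exposure(2_000_000_000, 35_000_000, 0.07)
print(f"Maximum exposure: €{exposure:,.0f}")  # Maximum exposure: €140,000,000
```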
The fines for SMEs and start-ups will be applied proportionally based on the company's income. To avoid such severe penalties, businesses should start preparing for the AI Act now: doing so will significantly simplify compliance and reduce the risk of fines and penalties.
How to Get AI-Act-Ready?
You can proactively begin preparing your organization for AI Act compliance. The key practical steps include:
- Stay current on the relevant legal landscape. Follow AI regulation news so you can react promptly to any changes.
- Evaluate the need to adjust existing and upcoming AI applications:
- Map, classify, and categorize AI systems used and/or under development based on the AI Act's risk levels (a simple inventory sketch follows this list).
- Define what AI systems need to be removed or redesigned to comply with the regulations.
- Specify the adjustments to be made. Infopulse performs an AI Act compliance assessment to help organizations manage this step. Our AI engineers and compliance experts also offer individual workshops and suggest enhancements to make AI solutions fully compliant.
- Implement required changes to AI apps and develop new ones in accordance with the AI Act requirements. Infopulse also helps companies implement all necessary adjustments; notably, AI/ML services by Infopulse are already compliant with the most up-to-date AI regulations.
- Prepare documentation. Establish documentation and reporting mechanisms that ensure compliance by defining the appropriate policies and procedures. Create transparency and contestability mechanisms for the AI systems you use or deliver to customers.
- Training. Conduct continuous training sessions on approved policies and procedures to help your team follow the new regulations.
- Governance. Establish a governance framework and define the roles responsible for AI regulation compliance, including data protection officers, compliance managers, risk analysts, and others, based on your AI use specifics.
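As a starting point for the mapping and classification step above, here is a simple sketch of an AI system inventory in Python. The risk tiers mirror the AI Act's categories, but the record structure, system names, and recommended actions are illustrative assumptions, not prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical inventory structure for mapping AI systems to risk tiers.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited: must be removed
    HIGH = "high"                  # risk management, logging, registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # general product safety only

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    action: str  # e.g., remove, redesign, document, or no change

inventory = [
    AISystemRecord("cv-screening", "ranks job applicants",
                   RiskTier.HIGH, "redesign and register"),
    AISystemRecord("support-chatbot", "answers customer queries",
                   RiskTier.LIMITED, "add AI disclosure"),
    AISystemRecord("spam-filter", "filters inbound email",
                   RiskTier.MINIMAL, "no change"),
]

for record in inventory:
    print(f"{record.name}: {record.tier.value} -> {record.action}")
```

Even a spreadsheet serves the same purpose; the point is to keep one authoritative list of every AI system, its risk tier, and the action it requires.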
Since the AI Act complements rather than replaces other relevant laws, organizations should integrate AI Act compliance with their existing procedures. This streamlines compliance workloads and avoids duplication across related processes.
Conclusion
If you operate in the EU market, you should understand the AI Act's coverage, risk classifications, and associated obligations. Proactive organizational and technical adjustments will help you effectively mitigate potential fines.
Infopulse can help you prepare for the AI Act's entry into force with:
- AI Act compliance assessment
- Individual workshops to define required enhancements
- Assistance in implementing necessary changes in existing AI systems
- Developing new AI-powered solutions in full accordance with the current regulations
- Consultancy services covering all aspects of developing and maintaining compliant AI systems
Leverage the power of AI technologies while avoiding fines, reputational risks, and other damages by staying fully compliant with the evolving legal framework.