EU policymakers prepare to close early parts of AI regulation

The first, less controversial parts of the landmark EU AI law have already been settled at the technical level, while the provisions promoting innovation and the obligation to carry out a fundamental rights impact assessment could follow suit.

The AI Act is a legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. The bill is currently in the final phase of the legislative process, the so-called trilogues between the Council, the Parliament and the EU Commission.

The Spanish presidency of the EU Council of Ministers shared a document on Monday (10 July), seen by EURACTIV, in preparation for the trilogue, asking member states to confirm the parts of the text already agreed and to grant flexibility on the more political issues.

The parts ready for political confirmation are the obligations of providers and users of high-risk systems, conformity assessment bodies and technical standards, while flexibility is requested on the provisions on innovation and the fundamental rights impact assessment.

The dossier will arrive on the table of the Permanent Representatives Committee on Friday, the same day that EU negotiators will meet for the last technical meeting before the political trilogue on July 18 to discuss the innovation chapter and the final provisions.

Obligations for providers and users of high-risk systems

The Artificial Intelligence Act provides for a stricter regime for providers of AI systems at high risk of causing harm.

Information requirements have been made more prescriptive, including the provider's trademark and contact address. Providers will also have to draw up an EU declaration of conformity and ensure that their systems comply with EU accessibility requirements.

The required quality management system has been made compatible with those required under other regulations, as in the case of financial services. The obligation to keep the technical documentation expires 10 years after the system has been placed on the market.

Additionally, providers of high-risk systems must retain automatically generated logs for at least six months. The provisions on the representatives of high-risk providers and the obligations for importers and distributors of high-risk systems have also been finalized.

Notification authorities and notified bodies

The AI regulation requires systems using biometric identification technologies to undergo third-party conformity assessment by certified auditors, so-called notified bodies. These bodies must be authorized by a national authority, the notifying authority.

The agreed text states that each member state must have at least one notifying authority, which will set up the necessary designation procedure in cooperation with its EU counterparts. Together with the Commission, these counterparts will also be able to challenge a body's accreditation.

These authorities should be adequately resourced with technologically and legally competent staff. Adequate IT security has been introduced among the requirements for notified bodies, together with a clause establishing a six-month cooling-off period for potential conflicts of interest.

Additional steps have been introduced for these conformity assessment bodies to avoid unnecessary burdens on providers. The procedure for a competent authority to suspend the authorization of a body that no longer meets the requirements has also been further refined.

Standards and conformity assessment

Under the AI law, the European Commission can adopt harmonized technical standards. These standards greatly facilitate AI developers' compliance efforts because compliance with the AI regulation is presumed when the standard is followed.

The Commission’s discretion in drafting standardization requests has been significantly limited, as the text has been made more prescriptive and the EU executive will have to consult the AI Office and the Advisory Forum before issuing them.

Similarly, the Commission also has less discretion to issue common specifications.

Innovation measures

Measures in favor of innovation are another chapter that the Spanish presidency wants to close at the next trilogue. However, as there was no time to draft a concrete text, Madrid asked member states for flexibility following an initial discussion.

The European Parliament requires every EU country to have at least one regulatory sandbox – a controlled environment where companies can experiment under the supervision of a public authority.

Spain suggested accepting this approach under the condition that sandboxes can also be implemented jointly with other member states or the obligation fulfilled by joining an EU-wide sandbox.

Furthermore, MEPs want AI developers using a sandbox to benefit from a presumption of conformity to incentivize participation. The Presidency has proposed to accept this proposal on the condition that the exit reports are included in the declaration of compliance.

Finally, delegations were asked for flexibility on the potential inclusion of notified bodies in the sandboxes and on whether they are willing to introduce more stringent safeguards to allow participants to conduct tests in real-world conditions.

Impact assessment on fundamental rights

MEPs have introduced a requirement for users of high-risk systems to conduct a fundamental rights impact assessment before putting them into use. The presidency believes that including such an obligation might be necessary to reach a package deal.

At the same time, the Spaniards propose to limit the scope of this obligation only to public bodies and say that it should not overlap or conflict with existing obligations such as data protection impact assessment.

The presidency also wants to make the stakeholder consultation voluntary rather than mandatory.

[Edited by Zoran Radosavljevic]
