
Europe’s AI Act: a new role for the Dutch Data Protection Authority?


"The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether." This quote from Stephen Hawking in 2017 is more relevant today than ever.

The transformative power of Artificial Intelligence (AI) is reshaping how we live, work, and communicate at an unprecedented pace. While the possibilities seem boundless, so do the risks. In response, significant strides have been made at the European level to regulate AI responsibly. On December 8, 2023, the European Parliament and the Council of the EU reached a provisional agreement on the 'AI Act,' making Europe the first continent to establish specific regulations for the use of AI. A milestone!

Each member state is obliged to appoint a national supervisory authority. How will the Netherlands fulfill this role? We'll outline this below. But first, let's briefly delve into what the AI Act will regulate.

 

AI Act in short: a risk-based approach

The AI Act aims to establish rules for the use of artificial intelligence within the European Union. These rules are designed to increase citizens' and consumers' trust in AI applications while simultaneously protecting the rights and safety of individuals. The rules apply to both providers and users of AI systems.

A risk-based approach has been chosen. This entails classifying AI systems based on various risk levels, considering the extent to which AI systems pose a risk to the health, safety, or fundamental rights of natural persons. The following risk levels are distinguished:

  • Unacceptable risk: Systems with an unacceptable risk are prohibited. These include systems that violate fundamental rights, such as emotion recognition in the workplace or education, or systems that manipulate human behavior.
  • High risk: These systems are allowed provided they meet strict requirements. These systems must comply with legal provisions regarding data and data management, documentation and record-keeping, transparency and information provision to users, human oversight, robustness, accuracy, and security.
  • Low/minimal risk: These systems are allowed but must provide transparency to users, for example in the case of chatbots or deepfakes.

Violations of the AI Act are heavily penalized. Fines for non-compliance range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the infringement and the size of the company.

To enforce the AI Act at the member state level, the establishment of national supervisory authorities is foreseen. These supervisors will play a crucial role in ensuring compliance with AI rules within the respective member states and will also collaborate at the European level to ensure consistent enforcement.

 

The Dutch Data Protection Authority as national supervisory authority under the AI Act? 

The exact identity of the national supervisory authority in the Netherlands is not yet known. However, there's a high probability that the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) will assume this role.

The Dutch Data Protection Authority itself is clear: it welcomes this task. This is evident from its position paper 'AP Inzet Artificial Intelligence Act' dated March 15, 2022. It takes a firm stance: "The choice for the national supervisory authority on the AI Act at the AP contributes to a more harmonized regulatory approach, supports a consistent interpretation of the provisions, and helps avoid potential inconsistencies in regulation enforcement."

This would be a logical step, as the Dutch Data Protection Authority has also acted as the 'algorithm watchdog' since 2023. As part of this task, the AP publishes a Report on AI & Algorithm Risks in the Netherlands every six months, providing an overview of recent developments, current risks, and associated challenges. It seems efficient for the Dutch Data Protection Authority to coordinate supervision of the AI Act in tandem with these roles.

Furthermore, the AI Act designates the European Data Protection Supervisor as the supervisory authority for the EU institutions, agencies, and bodies that fall under the AI Act. It would therefore also be logical to designate the national data protection authority as the national supervisor for AI, in line with this approach.

 

Conclusion

The recent development of the 'AI Act' in Europe marks a significant step towards responsible regulation, setting strict requirements for high-risk AI systems. The Netherlands faces the task of appointing a national supervisory authority, with the Dutch Data Protection Authority emerging as a logical choice. With the Dutch Data Protection Authority already serving as 'privacy watchdog' and 'algorithm watchdog,' coordinating supervision of the AI Act would seamlessly complement these roles.

 


 

Next steps

The AI Act represents a welcome step in regulating AI in the EU, and things seem to be moving forward: on Friday, February 2, 2024, the member states agreed on the final draft of the AI Act. The act still needs the formal approval of the European Parliament before it enters into force. After its entry into force, the AI Act will apply after two years. This does not apply to AI systems falling within the unacceptable-risk category; these systems must be withdrawn from the market within six months.

Returning to Stephen Hawking's quote: The genie is indeed out of the bottle, but will soon be floating around in a legal framework.

 

Article provided by INPLP member: Bob Cordemeyer (Cordemeyer & Slager, Netherlands)
Co-Author: Emmely Schaaphok

 

 

