AP and RDI: Supervision of AI systems requires cooperation and must be arranged quickly
Cooperation between supervisory authorities is of paramount importance in the supervision of artificial intelligence (AI), the Dutch Data Protection Authority (Dutch DPA) and the Dutch Authority for Digital Infrastructure (RDI) write in their advice to the Dutch government. Decisions on which bodies will carry out the different supervisory tasks need to be made soon, as the first parts of the new European AI Act will come into force at the start of 2025.
The Dutch DPA and the RDI emphasise that sufficient budget and staff must be available in time for all supervisory authorities involved, so that they can promptly begin their tasks, such as guidance and enforcement.
The advice was prepared by the Dutch DPA and the RDI in cooperation with 20 other Dutch supervisory authorities that may play a role in monitoring AI. For over a year, these supervisory authorities have been jointly preparing for the supervision of AI. With this joint vision for a national supervisory structure for AI, Dutch supervisors are leading the way in Europe.
AI Act
Last month, European ministers voted in favour of the AI Act, the world's first comprehensive law on artificial intelligence. The AI Act stipulates that high-risk AI systems may only be placed on the market and used if they meet strict product requirements. These systems will carry a CE marking, as has been mandatory for years for products such as lifts, mobile phones and toys.
Aligning with existing product supervision
The Dutch DPA and the RDI recommend that AI supervision in various sectors be aligned as much as possible with existing supervision. The supervision of high-risk AI products that already require a CE marking can remain the same. For example, the Netherlands Food and Consumer Product Safety Authority (NVWA) will continue to inspect toys, even if they contain AI, and the Health and Youth Care Inspectorate (IGJ) will supervise AI in medical devices.
Angeline van Dijk, Inspector General of the RDI: ‘Cooperation is key when it comes to the concentration of knowledge and coordination in practice. Effective supervision that takes innovation into account can only come about if the relevant supervisory authorities cooperate with developers and users of high-risk AI. Companies and organisations can explore together with the RDI whether they need to comply with AI regulations and how they can do so. The RDI’s efforts to set up regulatory sandboxes, a kind of breeding ground for responsible AI applications, are an excellent example of this. This advice is an important milestone in that regard.’
New supervision of AI
The supervision of high-risk AI applications for which no CE marking is currently required should largely be vested in the Dutch DPA, alongside sectoral supervision, the supervisory authorities write. It does not matter in which sector these systems are used, from education to migration and from employment to law enforcement. The Dutch DPA should act as the so-called ‘market surveillance authority’ for these systems.
Aleid Wolfsen, chairman of the Dutch DPA, says: ‘The market surveillance authority will ensure that AI placed on the market actually meets requirements in areas such as AI training, transparency and human oversight. This requires specialist knowledge and expertise, which is particularly efficient when bundled together. It is also important that the Dutch DPA can keep an overview in this way, given that companies developing such AI often do so for more than one sector. Cooperation with sectoral supervisory authorities is crucial, because they have a good overview of AI use in, for example, education or by employers. We will take swift action to set up this cooperation.’
The supervisory authorities propose two exceptions: in the financial sector, the Dutch Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB) will carry out market surveillance, while the Human Environment and Transport Inspectorate (ILT) and the RDI will oversee critical infrastructure. Additionally, the market surveillance of AI systems used for judicial purposes must be set up in such a way that the independence of judicial authorities is safeguarded.
It is important that supervisory authorities are quickly appointed not only in the Netherlands but also in other Member States. Cross-border and large AI systems require cooperation between supervisory authorities from different Member States and with the new European AI Office, which will supervise large AI models such as those underpinning ChatGPT.
Urgent AI regulatory actions
Several issues need to be addressed in the short term. These include identifying fundamental rights supervisory authorities, a role that the supervisory authorities envision for the Netherlands Institute for Human Rights and the Dutch DPA. Attention is also needed for the notified bodies that will assess AI systems’ compliance with European standards. The supervisory authorities urge the government to appoint the relevant supervisory authorities quickly, so that guidance, enforcement and practical preparation for these new tasks can begin in time. For example, the ban on some forms of AI is likely to apply as early as January 2025. The supervisory authorities propose that the Dutch DPA be responsible for supervising these prohibitions.