Department for the Coordination of Algorithmic Oversight (DCA)
Since 2023, the Dutch Data Protection Authority (AP) has been undertaking new activities that give impetus to algorithmic and AI oversight in the Netherlands. Within the Dutch DPA (AP), a separate organisational unit has been created to coordinate algorithmic oversight: the Department for the Coordination of Algorithmic Oversight (DCA).
The coordination of algorithmic oversight focuses on better protecting fundamental rights and values during the development and deployment of algorithms, for example against the risks of discrimination, arbitrariness, and deception, and on the information provided about algorithms, so that users of a product or service know what they are dealing with.
The DCA’s coordination activities are separate from the Dutch Data Protection Authority’s GDPR compliance monitoring. Alongside the work of this department, the Dutch Data Protection Authority (AP) continues to be the supervisory authority for personal data processing, including processing that involves algorithms and AI.
Coordination activities for algorithmic oversight
The three primary tasks are to:
- identify and analyse high-level algorithmic and AI risks and to share the resulting knowledge;
- strengthen collaboration around algorithmic and AI oversight;
- promote guidance and provide an overall view of the relevant frameworks.
Additionally, the DCA coordinates the preparation for new supervisory tasks under the upcoming EU AI Act.
Identifying and analysing high-level algorithmic and AI risks
We collect and analyse signals, knowledge, and insights about the risks of developing and deploying algorithms and AI that might affect individuals, groups of individuals, or society as a whole, and that could ultimately prove disruptive. We do so by focusing on high-level risks and effects that cut across sectors and domains.
In doing so, we look both at the development of algorithmic systems and at what organisations subsequently do with those systems. One way in which we do that is by analysing incidents. In addition, we closely track technological innovations to gain an early-stage idea of the risks they might involve.
Collecting knowledge and insights leads to the most comprehensive picture possible of the risks that the use of algorithms poses to our society. We share that picture with fellow market surveillance authorities, policymakers, civil rights organisations, companies, and individual citizens, helping them make better choices to mitigate algorithmic risks.
AI & Algorithmic Risks Report Netherlands (ARR)
Twice a year, the Dutch Data Protection Authority (AP) publishes the AI & Algorithmic Risks Report Netherlands (ARR), sharing knowledge of developments in algorithmic risks with organisations, political actors, policymakers, and the general public.
Strengthening existing collaborations
By working more closely together as market surveillance authorities, we will be able to uncover risks attached to the use of algorithms at an earlier stage, and subsequently address them more effectively or even prevent them altogether.
The DCA hosts the AI & Algorithm Chamber (AAK), for example, an administrative consultation body that is part of the Digital Regulation Cooperation Platform (‘Samenwerkingsplatform Digitale Toezichthouders’, or SDT). In addition, the AI working group of market surveillance authorities brings together a broad group of supervisors to share knowledge on algorithmic risks.
The DCA also facilitates collaboration between market surveillance authorities on new AI-related issues, such as the challenges and risks associated with generative AI. Collaboration with other market surveillance authorities helps us detect the risks in society, as well as the blind spots.
Promoting guidance
Through its guidance and discussion documents, the Dutch Data Protection Authority (AP) contributes towards policy and clarifies the rules for algorithms. This makes it clearer to companies, public authorities, and other organisations what is expected of them and clearer to citizens what they can expect.
Focus points in the coordination of algorithmic oversight are to prevent discrimination (through bias testing, for example), promote transparency and ‘explainability’, set up adequate governance for risk control, and conduct periodical AI and algorithm audits.
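The page does not prescribe how such a bias test should be carried out. As a purely illustrative sketch (not an AP method), one simple check compares a model’s selection rates across demographic groups and flags large gaps; the data, group labels, and tolerance in the example below are hypothetical.

```python
# Illustrative demographic parity check; data and threshold are made up.
from collections import defaultdict


def selection_rates(decisions, groups):
    """Return the share of positive decisions per demographic group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}


def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Hypothetical loan-approval decisions (1 = approved) for two groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print("Selection rates:", selection_rates(decisions, groups))
    gap = demographic_parity_gap(decisions, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # tolerance chosen for illustration only
        print("Gap exceeds tolerance: investigate for possible bias.")
```

Real-world bias testing involves far more than a single metric, but a check like this shows the kind of quantitative evidence that audits and transparency measures can draw on.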
Together with other market surveillance authorities, the Department for Coordination of Algorithmic Oversight helps explain the standards and frameworks governing the deployment of algorithms. Where necessary, the DCA seeks to align with international standards.
Algorithmic oversight under development
Over the coming years, the DCA will team up with other market surveillance authorities and the relevant government ministries to identify what else is needed to strengthen algorithmic and AI oversight in the Netherlands. The EU AI Act is a cornerstone in this respect. Together with the State Inspectorate for Digital Infrastructure (‘Rijksinspectie Digitale Infrastructuur’, or RDI), the DCA is coordinating the preparations for AI Act compliance monitoring, among other things by advising the relevant ministries on the organisation of Dutch market surveillance and by setting up collaboration between supervisors. Numerous boards, market surveillance authorities, and inspectorates are involved in this advisory process.
Since 2023, the Dutch DPA (AP) has received 1 million euros in funding for the DCA’s activities, an amount that will gradually increase to 3.6 million euros by 2026.
Contact with the DCA
The DCA welcomes feedback from organisations and other parties. To contact the DCA, please send an email to dca@autoriteitpersoonsgegevens.nl.
Work agenda for the coordination of algorithmic oversight 2024 (‘Werkagenda coördinerend algoritmetoezicht 2024’)
In its role as coordinating supervisor of AI and algorithms, the AP focuses on four areas of attention in 2024: transparent algorithms, auditing, governance, and the prevention of unwanted discrimination in the deployment of algorithms. We are also preparing for the AI Act and examining the risks of generative AI.