AI and algorithm risks on the rise amidst increased use: master plan necessary to prepare the Netherlands for a future with AI

The risks of artificial intelligence (AI) and algorithms are growing, especially with the emergence of generative AI. Issues like disinformation, privacy violations and discrimination are becoming more common. Technological innovation and the rapid adoption of AI and algorithms are currently outpacing our society’s ability to identify and manage these risks through regulation and supervision.

Since early 2023, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has been the national coordinating authority for risk signalling, advice, and collaboration in the supervision of AI and algorithms.

In its second AI and Algorithmic Risks Report Netherlands, the AP highlights the urgent need for better risk management and incident monitoring. The advance of generative AI puts additional pressure on the development of effective safeguards. 

The report recommends a comprehensive strategy (a national master plan) that includes human control and oversight, secure applications and systems, and strict rules to ensure that organisations are in control. 

It is also important to invest in educating people of all ages about algorithms and AI. Everyone should understand how algorithms and AI affect their lives and how to maintain control over them. This applies in particular to areas like education and the workplace.

Incidents undermine trust

Aleid Wolfsen, Chair of the AP: "The more AI and algorithms are being used in society, the more incidents seem to occur. This demonstrates the need for better short-term risk management. With 75% of organisations in the Netherlands seeking to use AI in workforce management in the near future, workers need to be protected from negative effects."

"We also know that many risks and incidents remain under the radar. The use of AI and algorithms can contribute to sustainable prosperity and well-being. And it’s possible to do this in such a way that fundamental rights are well protected. But incidents undermine our trust in algorithms and AI. Adequate regulation and robust supervision are therefore necessary conditions."

Focus on human oversight

The AP proposes a national master plan aimed at achieving, by 2030, effective management and control of the risks associated with the use of algorithms and AI. Under this plan, companies, government and the public would work together with academia and NGOs towards a society in which we use algorithms and AI responsibly and safely.

Such a strategy should include clear goals and agreements for each year. Implementation of regulations, such as the Artificial Intelligence Act (AI Act), is part of this master plan. The plan will enable society to use AI to enhance prosperity, well-being and stability, while protecting fundamental rights and public values. 

Human control is an important starting point. However, this will only work if people are aware of how algorithms and AI function in their daily lives and of the risks involved. Such awareness fosters the development and use of safe and reliable applications, and helps rebuild trust in algorithms and AI, which is currently low.

Need for more knowledge and understanding

Almost everyone will have to deal with algorithms and AI in the near future. Knowledge of AI is needed to stay in control, but not everyone needs the same type of knowledge.

For example, teachers or doctors should know how to use and judge certain algorithms. Workers need to know what the use of algorithms and AI in their work environment means for them and how to protect themselves from possible negative effects. Leaders of organisations must have adequate knowledge to oversee and assess the risks, impacts and options for control and risk management before deciding to deploy an AI system.

Structural investments must be made to increase everyone’s knowledge so that society can deal with the use of algorithms and AI.

Generative AI: high risks call for appropriate supervision

Generative AI is rapidly being integrated into Dutch society, bringing new usage risks and systemic risks. Disinformation, manipulation and discrimination are risks that require attention. The AI Act will ensure that, from 2025, there is appropriate and proactive oversight of foundation models and the organisations that develop them.

Supervision and the AI Act

In the Netherlands, supervisory authorities are jointly preparing for supervision of the AI Act, on which a political agreement was reached in December 2023. However, controlling algorithms and AI requires more than just setting up supervision.

It is important that companies and organisations themselves also proactively work on risk management, internal supervision and control mechanisms to achieve reliable and safe use of algorithms and AI.

""

Publications

AI & Algorithmic Risks Report Netherlands - Winter 2023-2024
