First Algorithmic Risks Report Netherlands calls for additional action to control algorithmic and AI risks
To gain better control over algorithms and artificial intelligence (AI), both government and business sectors need to take substantial strides. Two challenges are converging in 2023. First, the rapid integration of AI innovations into society, such as smart chatbots, introduces new opportunities but also new risks.
Second, all significant public and private institutions in the Netherlands are tasked with understanding their use of high-risk algorithms—those with substantial impacts on people's lives. In anticipation of forthcoming European legislation, and using it as a point of reference, concrete action by government and industry to increase control is already desirable and feasible.
AP Chair Aleid Wolfsen states, "Algorithms and AI can be incredibly useful, but they also bring risks, such as the risk of discrimination, unfair outcomes, deception, and lack of transparency and explainability. To enable the benefits, these risks must be effectively managed."
"Awareness is growing about the need to manage algorithmic risks, yet many governments and companies are still seeking the right approach. Hence, clear regulations and standards are now crucial. We urge the interim government to continue efforts in this direction. Organizations can take a step forward by deploying more personnel and investing in education to oversee algorithms."
Focus needed on high-risk systems
To achieve control over algorithms and AI, comprehensive oversight of how these are used by public entities and socially relevant private organisations is crucial. Therefore, the AP welcomes the creation of an algorithm register for government organisations. However, the registration requirement should be set up in a proportionate way, so that the focus is on the effective identification and management of high-risk algorithms.
The AP advises alignment with the anticipated classification of high-risk systems under the forthcoming European legislation currently under negotiation. The AP calls on the government to grant public organisations a one-year transition period towards compulsory registration of the use of high-risk algorithms.
Balancing benefits and risks
Furthermore, the AP cautions against deploying the latest AI innovations in an uncontrolled way. New AI applications are in the spotlight, and the temptation is strong to experiment and explore possible innovations and efficiency gains. Possible use cases include employee evaluations, fraud detection, customer assessments for purchases or loans, and patient evaluations.
However, safe deployment without risking violations of fundamental rights and public values is only possible if risks are adequately managed. Until that is the case, organisations are wise to exercise caution. Hence, prior to implementation, it is important to comprehensively assess not only the benefits but also the risks.
The Algorithmic Risks Report Netherlands is the first comprehensive, periodic overview of developments, risks, and challenges related to the use of algorithms and AI. Its purpose is to create an overarching risk perspective that can feed into policy initiatives; data and reporting requirements; the strategies, risk assessments, and work agendas of individual (sectoral) supervisors; and the risk management function of individual organisations.
The AP will now publish an Algorithmic Risks Report Netherlands every six months, providing insight into recent developments, current risks, and corresponding challenges.
Since early 2023, the AP has been the coordinating authority for risk signaling, advice, and collaboration in algorithm supervision. The Netherlands aims to take an international lead in this regard. In the years to come, global efforts will continue to standardize and coordinate the regulation and oversight of algorithms and AI, at forums such as the G7, OECD, UNESCO, the Council of Europe, and the European Union. Periodic reports on algorithmic risks are a pivotal instrument in this supervisory approach.