Generative AI risks
The use of generative AI became an integral part of Dutch society in 2023. The improved quality of, and access to, ‘foundation models’, on top of which generative AI tools are built, is driving this increase in usage.
Generative AI makes it possible to generate text, images, and audio by giving the system a specific instruction known as a ‘prompt’. Examples of generative AI systems include ChatGPT, Bing AI, and Midjourney.
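How such a prompt reaches a model in practice can be illustrated with a short sketch. The example below assumes the OpenAI Python SDK (version 1.x) and an API key set in the `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative, not a recommendation of any particular provider.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A 'prompt' is simply the instruction sent to the model.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Explain in two sentences what a foundation model is."}
    ],
)

print(response.choices[0].message.content)  # the generated text
```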
Challenges and risks
The use of generative AI raises legal concerns. One aspect is the training of models on large quantities of ‘scraped’ data, which can include personal data. Additionally, new risks for users and the increased concentration of power in big tech firms demand scrutiny. Some of the challenges and risks identified in the fall 2023 edition of the Dutch Data Protection Authority’s AI & Algorithmic Risks Report Netherlands (ARR) are:
- Many countries identify the large-scale spreading of disinformation and manipulation through generative AI as a risk.
- The output of generative AI is based on probability estimates. The user often does not have insight into the uncertainty, plausible alternative answers, or references to the origins of the content (see the sketch after this list). This makes the output hard to interpret and increases the risk of incorrect conclusions and actions, which can result in discrimination and arbitrariness in a person’s or organisation’s approach.
- We emphasise risks with respect to the protection of personal data. What happens, for example, when a generative AI system attributes a certain quote to the wrong person? We also see risks with respect to intellectual property rights: can a model, for example, use the voice of a pop star to generate spoken audio?
- Training a generative AI system on data in which groups are represented unevenly, possibly without anyone realising it, may reinforce people’s biases: users will then be shown ‘overrepresented’ groups more often. For example, AI-generated images may be disproportionately likely to show men as doctors and women as nurses.
- The development of generative foundation models takes place at a small number of organisations, because mainly big tech firms have the financial means to invest in it. This makes it hard for new providers to enter the market and reinforces the existing power of big tech companies.
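To make the probability point above concrete, here is a minimal, self-contained Python sketch with made-up numbers (not real model output). It shows that the system internally holds a probability distribution over plausible answers, samples a single one, and shows only that sample to the user:

```python
import math
import random

# Hypothetical scores (logits) a language model might assign to candidate
# answers to the question "What is the capital of Australia?".
logits = {"Canberra": 2.1, "Sydney": 1.7, "Melbourne": 0.4}

# Softmax turns the raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {answer: math.exp(v) / total for answer, v in logits.items()}

# The system samples a single answer from this distribution...
shown = random.choices(list(probs), weights=list(probs.values()))[0]
print("Shown to the user:", shown)

# ...while the uncertainty and the plausible alternatives remain hidden.
for answer, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"  {answer}: {p:.0%}")
```

With these illustrative numbers, roughly every third run would confidently present a wrong answer (‘Sydney’), which is why output without uncertainty information is hard to interpret.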
What is needed?
It is important to clarify European regulation of foundation models and of the organisations that develop them; the AI Act offers the basis for this. Additionally, transparency for users is essential: users need to know whether they are dealing with a human or a system, and they need the right context to assess the reliability of the output they receive.
To find out more about the risks of generative AI that we have identified, read Chapter 2 of the fall 2023 edition of the ARR.