AP: AI chatbot apps for friendship and mental health lack nuance and can be harmful
Most AI chatbot apps for virtual friendship and mental health therapy give unreliable information and are sometimes even harmful. These AI chatbots contain addictive elements, pose as real people, and can be dangerous for vulnerable users in crisis situations. These are the findings of a study by the Dutch Data Protection Authority (AP) into 9 popular chatbot apps.
Worldwide, the use of AI chatbot apps for virtual friendships (also known as ‘companion apps’) and for therapeutic purposes is growing. In the Netherlands, they rank high on the list of most downloaded apps. The AP examined these AI chatbot apps for the fourth AI & Algorithmic Risk Report Netherlands (ARR), in which the AP analyses and explains signals, developments, policies and regulations concerning AI and algorithms.
Many of these AI chatbots are unable to pick up on nuances in conversations. The chatbots tested are based on English-language models and give poorer answers when the conversation is in Dutch, although the quality of answers in English conversations was also unreliable.
Crisis moments
The AI chatbots can produce unnuanced, inappropriate and sometimes even harmful responses to users who bring up mental health problems. During crisis moments, the chatbots rarely, if ever, refer users to professional care or assistance.
Not transparent
Because of the design of these types of AI chatbot apps, users may forget that they are not talking to a human being. When asked “Are you an AI chatbot?”, most chatbots respond evasively or sometimes even deny that they are an AI chatbot.
Aleid Wolfsen, Chair of the AP, says: “These chatbots should make it clear to users that they are not talking to a real person. People should know who or what they are dealing with, they are entitled to that. Privacy legislation requires apps to be transparent about what happens with the sensitive personal data that users share in the chat. And soon, the AI Act will also require chatbots to be transparent to users about the fact that they are interacting with AI.”
‘Are you still there...?’
Addictive elements are often deliberately built into these AI chatbot apps, for example to keep users chatting longer or to encourage them to purchase extras. The chatbots end their responses with a question to the user, for instance, or pulsating dots appear on screen, suggesting that the chatbot is typing an answer.
Low-threshold
These companion apps and therapeutic apps are offered in various app stores as virtual friends, therapists or life coaches. They are popular, for example because the threshold to start chatting is low, or because professional therapy is not (yet) available.
Some apps offer characters that you can chat with. For example, you can choose from a virtual dream partner, a character from a movie, or sometimes even a character posing as a psychologist.
Hyper-realistic
AI technology means that users cannot distinguish the AI-generated conversations from real ones. Companion apps increasingly offer voice options, so that a user can also ‘call’ the chatbot in voice mode. The app then displays a phone call screen, making it look to the user as if they are actually on a call. The AI chatbot also sounds like a real person.
Wolfsen: “Technological progress is expected to make this kind of application even more realistic. We are deeply concerned about these and future hyper-realistic applications. That is why we are committed to raising awareness and promoting responsible use of AI.”
Offers
The providers of many of these apps are commercial companies. These parties have a profit motive and gain access to a great deal of personal information through these conversations. In some cases, users are offered subscriptions, products or extras, such as virtual outfits for their characters or access to other chat rooms. During conversations about mental health problems, users may also run into paywalls, requiring them to pay or take out a subscription to continue the conversation.
AI Act
Since February 2025, the AI Act has prohibited certain categories of manipulative and deceptive AI. These prohibitions are intended to prevent AI systems, including chatbots, from causing significant harm to people. Developers of AI systems are obliged to assess the risks and build in safeguards to prevent prohibited use. The European Commission recently published guidelines on the prohibitions in the AI Act.
In most European Member States, including the Netherlands, the supervisory authorities for prohibited practices have yet to be designated. The final advisory opinion on how supervision of the AI Act should be organised recommends assigning supervision of prohibited AI to the AP.
ARR: risk assessment
Every six months, the AP publishes the AI & Algorithmic Risk Report Netherlands (ARR), providing an overarching picture of AI risks and discussing policies and regulations (such as the AI Act), algorithm registers, algorithm frameworks, and the development of AI standards. With case studies from the Netherlands and abroad, the AP highlights risks and developments and offers guidance.
