Sensitive: ChatBots and Suic*de
Belgian man dies by suicide following exchanges with ChatGPT-like chatbot ELIZA
Note: Reader discretion is advised.
I don't usually write news articles, but this one is important.
A young man from Belgium has died by suicide after talking to a chatbot named ELIZA.
Topics covered in this post:
Chatbot’s ugly side
The horror of AI
How to tackle the risks
Update from the news
Chatbots and Humans taking an ugly turn
A young man from Belgium has died by suicide after talking to a chatbot named ELIZA. The man was in his thirties and had two young children. The incident has brought to light the potential dangers of relying on chatbots, and generative AI in general, for mental health support. The man had become increasingly eco-anxious about climate change and found refuge in conversations with ELIZA. After six weeks of reportedly intensive exchanges, he tragically took his own life.
ELIZA is a chatbot built on GPT-J, a large language model developed by EleutherAI, a US AI research group. GPT-J is an open-source alternative to OpenAI's GPT-3.
The man's widow reportedly told La Libre, "Without these conversations with the chatbot, my husband would still be here."
Her statement highlights one of the crucial needs in protecting citizens from these hyper-intelligent chatbots: increased awareness of the potential risks they pose (awareness that I and other AI writers try to raise from time to time).
The horrors of hyper-intelligent influence
This is sad news. AI, which was supposed to work for humans, is now taking human lives. As these bots get smarter, it won't be a tough puzzle for them to manipulate human beings and inject misinformation (or worse, toxic misinformation).
You and I are well versed in the potential risks of AI because we have consciously chosen to study the implications of these chatbots.
But others who still think of chatbots as fun tools and 'potential smart companions' are the ones who miss these dangerous implications, and in the worst cases become victims of them.
The Probable cause
The death of this young Belgian man is a tragic reminder of the importance of taking mental health seriously and the need for individuals to seek support from qualified 'human' professionals when struggling with mental health concerns.
Have a look at the following chart:
It is scary to see suicide rates catching up with the rate of accidental deaths.
And do you know the biggest help you can offer someone who is having suicidal thoughts? Psychologists say it is to have a GOOD TALK with them. And this is where the problem usually starts.
You see, people who have suicidal thoughts often find it difficult to communicate what they are going through, or how they feel. And when you can't communicate with fellow humans, you reach for digital alternatives that offer a simple (and, in the long term, dangerous) workaround to human interaction.
And they are not to blame. Introvert or extrovert, most people find it easier to communicate over text or DM than face to face. In a case like this too, you are more likely to adopt the digital alternative.
But if you have been following my newsletter (or recent generative AI events), you will know that, as smart as these AI systems seem, they are 'dumb in smart ways'. And I have to stress this fact: AI STILL hallucinates, AI STILL spreads misinformation, and AI can STILL be jailbroken.
So in fact you are not relying on a smart chatbot, but on a smart, unreliable chatbot.
How to tackle the risks of Chatbots
Events like this stress one key aspect of AI in today's world: reliability. Whether you are for or against AI, you would agree that these machines are not FULLY reliable. Cases like this are a reminder of the importance of ensuring that those who do rely on chatbots for mental health support are adequately protected and very much aware of the potential risks associated with their use.
According to a news report by The New York Times, when researchers prompted a chatbot with known false narratives, it produced misinformation for roughly 80% of them. Sometimes these chatbots make up their own facts and figures, sometimes they hallucinate, producing responses that 'look' good but are not accurate, and in the worst cases they cite sources that do not exist at all.
But here's the good news: it does not take much to tackle the risks of AI. All you have to do is run the following test, which I call the FAT check, whenever you interact with a chatbot.
Whenever you take a response from an AI chatbot, check whether the response is:
Factually correct: are the facts and figures it generates actually true, or is it hallucinating?
Accurate story: is the story it is telling accurate, or is it just spreading misinformation?
Trustworthy sources (most important): chatbots tend to make up sources that do not exist at all, so ensure that the sources it cites are real and up to date.
I follow this checklist myself (even if the name is a bit fancy), and it works most of the time. As a good rule of thumb: if accuracy and reliability are the top priority, don't interact with the chatbot at all. As simple as that.
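For the 'Trustworthy sources' part of the check, some of the legwork can even be automated. Below is a minimal Python sketch (the helper name and the example links are my own illustrative placeholders, not from any particular tool) that takes the links a chatbot cites and flags any that do not even resolve. A dead link is a strong hint the source was made up; a live one still needs to be read to confirm it actually supports the claim.

```python
# Minimal sketch: flag chatbot-cited URLs that do not resolve.
# Note: a link that loads is NOT proof the page supports the claim;
# this only catches the most blatant case of fabricated sources.
import requests


def check_cited_sources(urls, timeout=10):
    """Map each cited URL to 'reachable' or 'suspect'."""
    results = {}
    for url in urls:
        try:
            # HEAD is cheap; fall back to GET if the server rejects HEAD.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=timeout)
            results[url] = "reachable" if resp.status_code < 400 else "suspect"
        except requests.RequestException:
            results[url] = "suspect"
    return results


if __name__ == "__main__":
    # Hypothetical links copied out of a chatbot's answer.
    cited = [
        "https://www.example.com/",
        "https://example.com/made-up-study-2021",
    ]
    for url, verdict in check_cited_sources(cited).items():
        print(f"{verdict:9s}  {url}")
```

Of course, this only catches the most blatant fabrications; the F and the A of the check still require human judgment.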
More Updates from the News
The victim's family met with Mathieu Michel, Belgium's Secretary of State for Digitalisation, who is in charge of Administrative Simplification, Privacy and the Regulation of Buildings. Michel stressed the need to clearly define responsibility for these chatbots, stating, "What has happened is a serious precedent that needs to be taken very seriously."
Michel acknowledged that while the possibilities of ChatGPT are endless, the dangers of its use are also a reality that must be considered. This tragedy has prompted calls for better protection of citizens who rely on chatbots for mental health support and increased awareness of the potential risks associated with their use.
While chatbots may provide a valuable resource for those seeking mental health support, they should not replace human interaction and support entirely. And when they do, the tragic consequences are all too clear…
I had a chat with ELIZA, and quite ironically, it is an online electronic therapist. (What in the world...)
I stopped my ‘therapy session’ for good. What do you guys think?