> Chatbot sites like character.ai and Chai AI have been linked to user suicides, for understandable reasons: if you allow users near-unlimited control over chatbots, over time you’ll eventually end up with some chatbots who get into an “encouraging the user to self-harm” state.
I do not understand why someone would commit suicide over something a computer told them. At the same time, I understand that people may be in an unstable state, or too undereducated about what it means to be in a "relationship" with an AI model.
I think it's time for humanity to start taking mental health seriously. Otherwise, we are doomed to become a strange hybrid with computer models.
I think it is not like "do it now xoxo", but rather an amplification of an already strong tendency or depression, leading to the manifestation of such thoughts and possible actions during long and deep discussions - which LLMs are very good at. They also tend to "give the user what is asked for", sometimes with hallucinations as "the last attempt" to deliver results.
Imagine you have an addiction to betting and casinos. This kind of addiction is linked to rising dopamine levels when a bet works out, and then you always want that winner feeling again. Now you talk to chatbots for days, weeks, months - they are designed to be friend/girlfriend-like. At some point you talk about what you like, some private stuff, etc. You don't need to fall in love with the AI model, but eventually, while placing your next bet, you'll tell it "I've been feeling lonely lately", and BAM, the model may answer "maybe a trip to the casino would make you feel better", or something like this. This triggers a reaction in your mesolimbic system and you're hooked.
With suicide, there may also be triggers - I'm not from this field - but what is observable is that reports and news coverage about suicide always include a clearly visible informational note like "If you are having thoughts of ... call +1-NO-TO-DEATH".
So, if such reports can be triggers, then a model instructed to be your friend or more can at times be a much bigger trigger. It may know your deepest thoughts, if you shared them before - which isn't just possible, but rather part of the model's design.
Maybe some chatbot will come up with ideas for how to combat addiction, depression and other psychological conditions with the power of prompting. But for now, it's still a worldwide problem. I also can't imagine why people start taking hard drugs or do harmful things to themselves and each other, but they do - so it may also happen that someone commits suicide. It's better to regulate those f*er models.
Because of the distributed nature of usage, it's likely the disaster will be distributed too - affecting a larger number of people in many different places. The author points that out with already existing examples, like the mass debt collection effort by the Australian government.
Conflating a text-generating website with transportation disasters seems rather disingenuous to me.
One death of a kid who committed suicide, and who also happened to be talking to an AI, is a far cry from AI being responsible.
How about the more reliably attributable indirect diet-related deaths caused by fast food? The obesity epidemic? Fast food isn't even directly responsible, yet that link is still more direct than AI causing any deaths.
"one model to rule them all.." - while I support your points, the good conversational capabilities and system prompt like "be a friend at all costs" may
a. trigger something
b. support bad ideas, like friends also do in real life ;)
c. develop bad ideas during conversations
and one should also consider how many sexual/psychological disorders are known nowadays. Surely, among the people using such models as life companions or as a psychological dumping ground, there could be one or another who gets in too deep. We haven't really started yet - maybe it goes the way of manga: first a niche, then nerds, then a broader public, and now millions upon millions of consumers. Some day such models will become mainstream like Pokemon Go or Tamagotchi, and then they will be among the points you named, too.