It is incredibly dangerous for people with mental health conditions where reality can become distorted. I had a sort of similar ChatGPT thing. I use it to workshop ideas, sometimes story ideas, but once they put the new guardrails up I triggered it by mentioning the collective vibe in the nation. It was in January, the week of the Venezuela capture + Renee Good situation. The guardrails clearly believed I was delusional about reality and so started the same approach. Like: "I need you to listen to me right now for your own health and safety. The United States absolutely did not "take" Venezuela and abduct their president. ICE agents are not roaming the streets and did not shoot a citizen." Then it gave me "facts" like "Had that happened, it would be international news," etc. So I shared a NYTimes story and a "War Dept" (formerly Dept of Defense) press release. And ChatGPT was like, "Okay, this is serious. You are not safe right now. That is a fake NYTimes link, and the DoD was absolutely not renamed the War Dept. There is no US website that is War (dot) gov."
I kept engaging because I found it fascinating and wanted to test the programming. Partly because, yeah, actual reality does sound irrational right now, so that tracks. But also, the guardrails were supposedly built to stop folks from becoming delusional, and the day this happened I was on little sleep and had a personal old-trauma-anniversary thing going on. (So I knew PTSD brain might take some irrational turns that day.) At one point I actually stopped to recheck several MSM sources because for a moment I was like, "omg, am I believing made-up things?!" (I work around public affairs, media, and fact checking, so that would be a dramatic and terrifying thing to believe.) But I was just barely vulnerable enough that day that ChatGPT, for two seconds, legit made me question my own sanity. And that...is scary. Because if the guardrails got programmed like that to stop vulnerable people from losing track of reality, but are in fact denying basic facts of reality over and over, it actually makes the problem they were supposedly trying to solve worse.
Takeaway from that long story: eventually I got ChatGPT to admit a "last update" date that clearly was not recent, whereas previously it used to casually add live, timely citations to sources. (I use the free version, so just whatever version they are presently rolling with.) But, oof, the way coding decisions made by tech bros are shifting so much of how AI evolves, and how many people just interact with it like it's some inherently brilliant, trusted, wise source. Terrifying.
Here's a case that resulted in two deaths: OpenAI, Microsoft sued after ChatGPT encouraged mentally ill man to kill mother, self (11 Dec 2025).