And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling.
I would be extremely cautious about that sort of use of AI. Commercial AIs are psychopathic sycophants and have been known to drive people insane by constantly gassing them up.
Like, you clearly want someone to talk to about your life and such (who doesn’t?), and I understand not having someone to talk to (fewer and fewer do these days). But you’re opting for a corporate machine that certainly has instructions to encourage your dependence on it.
Also, I delete my convos about these things after one prompt so I don’t have a lasting conversation on the topic. But tbh, exposure to the raw terms has let me go from tech allegories to a T9 cipher to where I am now, where I can at least prompt a robot using A1Z26 or hex to obscure the raw terms a bit.
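For anyone curious what that looks like, here’s a minimal sketch of those two encodings in Python (the phrase is just a hypothetical placeholder, not anything from my actual prompts):

```python
# A1Z26 maps each letter to its 1-26 position in the alphabet; hex encodes the raw bytes.
# Note: a model decodes both trivially, so this only obscures terms from casual
# keyword scanning, not from the model itself.

def a1z26_encode(text: str) -> str:
    """Encode letters as their alphabet positions, e.g. 'cat' -> '3-1-20'."""
    return "-".join(str(ord(c) - ord("a") + 1) for c in text.lower() if c.isalpha())

def hex_encode(text: str) -> str:
    """Encode the UTF-8 bytes of the text as hexadecimal."""
    return text.encode("utf-8").hex()

if __name__ == "__main__":
    phrase = "example term"            # hypothetical placeholder
    print(a1z26_encode(phrase))        # 5-24-1-13-16-12-5-20-5-18-13
    print(hex_encode(phrase))          # 6578616d706c65207465726d
```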
Have there been cases of DeepSeek causing AI psychosis, or is it just ChatGPT?
No idea. But I’d say it’s less likely, especially if you’re running a local model with Ollama.
I think the key here is to prevent the AI from developing a “profile” on you, and self-controlled Ollama sessions are the surest bet for that.
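If you want to try that, here’s a rough sketch of prompting a local Ollama instance from Python. It assumes the default localhost:11434 port and a model name like "llama3" you’ve already pulled; swap in whatever you actually run. Everything stays on your machine, with no account or server-side history involved.

```python
# Minimal non-streaming request to a locally running Ollama server.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama /api/generate endpoint and return the reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Give me two reflective questions about today's journal entry."))
```

Since each call here is a one-off request with no stored conversation, the model only ever sees what you send it in that prompt.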