Transcript:
Prof. Emily M. Bender (she/her) @[email protected]
We’re going to need journalists to stop talking about synthetic text extruding machines as if they have thoughts or stances that they are trying to communicate. ChatGPT can’t admit anything, nor self-report. Gah.
I’m happy there’s still one (1) thread of comments from people who actually read articles and don’t form their opinions from an X thumbnail.
I note the victim worked in IT and probably used a popular ‘jailbreaking’ prompt to bypass the safety rules instilled during the chatbot’s training.
It’s a hint that this chat session was embedded in a roleplay prompt.
That’s the dead end of any safety rules. The surface-level intelligence of LLMs can’t detect the true intent of users who deliberately seek out harmful interactions: romantic relationships, unhinged sycophancy, and the like.
I disagree with you on the title. They chose to turn this story into a catchy headline to attract a general audience. In doing so, they reinforce the very kind of thinking the victim fell into, and betray the article’s content.