Yeah, my point was more that this doesn’t have anything to do with AI or the technology itself. I mean, whether AI is good or bad or doesn’t really work… their guardrails did work exactly as intended and flagged the account hundreds of times for suicidal thoughts, at least according to these articles. So it’s more a business decision not to intervene, and it has little to do with what AI is or what it can do.
(Unless the system produces too many false positives. That would be a problem with the technology. But that doesn’t seem to be discussed anywhere.)
Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…
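For what it’s worth, the “weighted random text generator” description can be illustrated almost literally with a toy bigram model: pick each next word at random, weighted by how often it followed the previous word in some text. This is a deliberately crude sketch, not how an actual LLM works; the corpus and function names are made up for illustration.

```python
import random
from collections import defaultdict, Counter

# Made-up toy corpus for illustration only.
corpus = "the cat sat on the mat and the cat ran".split()

# Bigram counts: how often each word follows the previous one.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length, seed=0):
    """Generate text by weighted random choice of each next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Weighted random pick: frequent followers are chosen more often.
        word = rng.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

Scale the same idea up to billions of learned weights over token contexts instead of raw bigram counts, and you get something much more fluent, but still, at bottom, weighted random text generation.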
I like how the computational linguist Emily Bender refers to them: “synthetic text extruders”.
The word “extruder” makes me think of the meat processing that produces stuff like chicken nuggets.
I call it enhanced autocomplete. We all know how inaccurate autocomplete is.
I wonder what a keyboard with that kind of enhanced autocomplete would be like to use… provided, of course, that the autocomplete runs locally and the app is open source.