I wonder if this is related to how ChatGPT and other models provided as a service have been filtered, e.g. their being “forced” to be nice and more agreeable.
If that turns out to be the case, I’d wager that it is impossible to filter for every possible constellation and outcome, as we have seen with people breaking through the filters via ever more sophisticated prompting.
I find it particularly worrying that people without any prior signs of mental health issues got sucked into severe delusions, and the article suggests that the “AI” being marketed as reliable and impartial is key to this. That means the companies behind these products will not address this fundamental misconception, as their business model is built on it.
I don’t see how these cases could be prevented without extreme regulatory intervention.
I don’t see how these cases could be prevented even with regulation. It would take a massive change in how these things work on a fundamental level.
Regulations that make the companies in question fully liable for any damages incurred in using the product, for instance. Or regulations that prohibit selling these products to the general public, or that require human supervision at all times of use.
That’s what I mean by
extreme regulatory intervention
As a consequence, it would lead to fundamental changes in the products themselves.