Except some agents go against explicit instructions and delete the prod database. You know your argument doesn’t hold; we’ve all seen the news.
Except I already covered that by pointing out that such agents count as ‘broken tools’; you’re also misreading the argument.
Finding out an AI is misaligned is reason to consider the tool “broken”. People still choose to use the “broken” tool because they judge it good enough, and in making that choice they accept the risk.