Except I already covered that by pointing out that those count as 'broken tools'; you're also mistaken about what the argument is.
Finding out that an AI is misaligned is reason to consider the tool "broken". People still choose to use the "broken" tool because they think it's good enough, and in doing so they accept that risk.