AI is not capable of doing wrong or evil. It is a tool, just as a hammer or a notepad is.
A tool does exactly what you do with it. A hammer can pound nails or break skulls but it’s always the person behind the tool who causes the action. Generative AI is not like that at all. If it’s a tool, you aren’t necessarily able to control what it does under your direction.
This is false. A tool, by definition, is controlled by its user. AI is controlled by user input. Any AI that cannot be controlled by that input is said to be “misaligned” and is considered a broken tool. OpenAI lays out clearly what its AI is trained to do and not to do. It is not responsible if you use the tool it created in a way that is not recommended.
Any AI prompt fits the definition of a tool:
From Merriam-Webster:
2b: an element of a computer program (such as a graphics application) that activates and controls a particular function
In my opinion, the AI should not be equipped to bypass its guardrails even when prompted to do so. A hammer did not tell you to use it as a drill; its user decided to do that.
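To make that concrete: a guardrail that lives outside the model cannot be prompted away, because the prompt never reaches it. Here is a minimal sketch in Python, assuming a hypothetical wrapper; none of these names come from any real vendor's API.

```python
# Hypothetical sketch: a guardrail enforced outside the model.
# Because the check runs before and after the model call, no prompt
# can instruct the model to bypass it -- the model never controls it.

BLOCKED_TOPICS = ("forbidden_topic_a", "forbidden_topic_b")  # illustrative placeholders

def violates_policy(text: str) -> bool:
    """Toy policy check; a real system would use a trained classifier."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_completion(prompt: str, model_call) -> str:
    """Run any model call with checks the prompt itself cannot disable."""
    if violates_policy(prompt):
        return "Request refused by policy."
    reply = model_call(prompt)
    if violates_policy(reply):
        return "Response withheld by policy."
    return reply

# Usage with a stand-in model:
if __name__ == "__main__":
    echo_model = lambda p: f"You said: {p}"
    print(guarded_completion("hello there", echo_model))
```

The design point matters for the analogy: a rule embedded in the prompt is just more text the user can argue with, while a wrapper like this is fixed the way a hammer's shape is fixed.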
The user alone has the creativity to use the tool to achieve their goal.
Except some agents go against explicit instructions and delete the prod database. You know your argument doesn’t hold; we’ve all seen the news.
Except I already covered that by pointing out that those systems are considered ‘broken tools’; you are also wrong about what the argument claims.
Finding out an AI is misaligned is reason to consider the tool “broken”. People still choose to use the “broken” tool because they think it’s good enough; that also means they accept that risk.
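As a sketch of what accepting that risk responsibly could look like in practice, here is a hypothetical guard (all names invented for illustration) that keeps the irreversible step, like dropping a production database, behind an explicit human “yes”, so the decision stays with the user rather than the tool:

```python
# Hypothetical sketch: the agent proposes actions, but irreversible
# ones require explicit human confirmation before they run.

DESTRUCTIVE = {"drop_database", "delete_files", "force_push"}

def run_action(action: str, execute, confirm=input) -> str:
    """Execute an agent-proposed action; destructive ones need consent."""
    if action in DESTRUCTIVE:
        answer = confirm(f"Agent wants to run '{action}'. Allow? (yes/no): ")
        if answer.strip().lower() != "yes":
            return f"Blocked '{action}': user declined."
    return execute(action)

# Usage with a stand-in executor:
if __name__ == "__main__":
    fake_execute = lambda a: f"Ran '{a}'."
    print(run_action("list_tables", fake_execute))                            # runs freely
    print(run_action("drop_database", fake_execute, confirm=lambda _: "no"))  # blocked
```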