The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and “we don’t fully understand why”.
I’m not sure how it does that; I said that the instructions during training dictate what kind of AI it will be, and that wrapping new instructions around it has profound and unpredictable effects, which I tried to describe.
Nothing I said could imply that there’s no human involvement in the creation of an AI. My point was much broader: these things are made by people, using vast resources, with unpredictable results, and people are trying to make them power everything.
A racist chat LLM is bad. A generalized AI with access to the power grid, defense systems, and drone targeting systems, built on a model that Elon Musk has made or fucked around with, is much, MUCH worse.