The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and “we don’t fully understand why”.
The choice of training data is key to how the final model operates. All sorts of depraved material must be part of the training set; otherwise the model wouldn't be able to generate the text it does, even when coached into it.
It’s clear the “AI race” is all about who gets the power of owning, and therefore influencing, everybody’s information stream. If they couldn’t influence it, there wouldn’t be such a race.
The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and “we don’t fully understand why”.
I'm not sure how it does that. I said that the instructions during training dictate what kind of AI it will be, and that wrapping new instructions around it has profound and unpredictable effects, which I tried to describe.
Nothing I said implies that there's no human involvement in the creation of an AI. My point was broader: these things are made by people using vast resources, with unpredictable results, and people are trying to make them power everything.
A racist chat LLM is bad. A generalized AI with access to the power grid, defense systems, and drone targeting systems, built on a model that Elon Musk has made or fucked around with, is much, MUCH worse.