Well yeah, if you can have it write a script to do the same thing over and over again, that's more efficient and reliable than asking an AI to process similar data multiple times.
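to make that concrete, here's a minimal sketch (the `logs/` directory and the date-extraction task are both made up for illustration): a one-off script gives the same answer every run for basically free, where prompting an LLM per file would be slower, pricier, and nondeterministic.

```python
# hypothetical example: pulling ISO dates out of a pile of log files.
# instead of pasting each file into an LLM, a throwaway script handles
# every file deterministically.
import re
from pathlib import Path

DATE_RE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")  # e.g. 2024-01-31

def extract_dates(path: Path) -> list[str]:
    """Return every ISO-formatted date found in one file."""
    return DATE_RE.findall(path.read_text(errors="ignore"))

if __name__ == "__main__":
    # assumed layout: plain-text logs sitting in ./logs/
    for log in sorted(Path("logs").glob("*.txt")):
        print(log.name, extract_dates(log))
```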
if they’re talking about what i think they are, it isn’t the (very good) point you’ve made about scripting viability, but instead a kind of recursive bluster, exactly like what’s shown in the OP pic.
an example would be: you ask it to write a script to automate something, and it suggests asking an LLM to do it instead of just fucking doing it like it used to for the same basic type of prompt, which is why it looks like enshittification rather than progress.
purely my opinion here, but i suspect it’s got nothing to do with model fidelity or the state of the art, and everything to do with some bs profit-seeking human intervention, or with masking some other bs business decision.
This is it. It feels like they’re all trying to make users use less compute while they keep cranking on training, destroying Yogi Bear’s habitat.