An internal Microsoft memo has leaked. It was written by Julia Liuson, president of the Developer Division at Microsoft and GitHub. The memo tells managers to evaluate employees based on how much they use AI tools.
That’s why I find it problematic to argue that we should resist working with LLMs because we would otherwise train them and enable them to replace us. That would require LLMs to be capable of doing so, and I don’t believe they are (except in very limited domains such as professional spam). This type of AI is problematic because its abilities are completely oversold (and because it robs us of our time, wastes a lot of power, and pollutes the entire internet with slop), not because it is “smart” in any meaningful way.
This has become a thought-terminating cliché all on its own: “They are only criticizing it because it is so much smarter than they are and they are afraid of getting replaced.”
But that’s how it was marketed to the people who buy it. It doesn’t matter that it doesn’t work.