And I don’t mean stuff like deepfakes/Sora/Palantir/anything like that; I’m talking about why the anti-genAI crowd isn’t providing an alternative where you can get instant feedback when you’re journaling.
Yes, in many instances LLMs make mistakes, and if used improperly they can raise the labor time a company expends above what’s socially necessary. I’d even agree with you if you said this is the norm right now. However, SNLT will go down once the actual use cases of AI in general are narrowed down and as AI improves; the sheer fact that the use cases are non-zero necessitates that.
Regarding what may be more or less useful to develop: that’s what I mean when I say capitalism can’t effectively pick and choose what to develop. Once the AI bubble pops and the hysteria settles, we will see where the actual usefulness lies.
As for how liberals see it, I’m not necessarily addressing you; I’m describing how liberals perceive AI at present. My point was about that perception and tendency, which takes a stance similar to the Luddites’: correctly identifying how capitalism uses new machinery to alienate workers and destroy their living standards, but incorrectly identifying the machinery as the problem rather than the capital relations themselves.
I agree with everything you said here; I think I’m just more pessimistic about how narrow the actually useful applications of LLMs will be.
That’s fair. My aim is more to bridge the gap between comrades I see as disagreeing more than they actually do.