• NewOldGuard@lemmy.ml
    6 days ago

    “AI” image and video generation is soulless, ugly, and worthless. It isn’t art; it is divorced from the human experience. It is incredibly harmful to the environment. It is used to displace art & artists and replace them with garbage filler content that sucks even more of the joy from this world. Just incredibly wasteful and aesthetically insulting.

    I think these critiques apply to “GenAI” more broadly, too. LLMs in particular are hot garbage. They are unreliable, with no easy way to verify what is or isn’t accurate, so people fully buy into misinformation created by these things. They also get treated as a source of truth or authority, when the responses you get are literally tailor-made to suit the needs of the organization doing the training, via their training data set, input and activation functions, and the type of reinforcement learning they performed. This leads to people treating output from an LLM as authoritative truth, while it is just parroting the biases of the human text in its training data. They can’t do anything truly novel; they remix and add error to their training data in statistically nice ways.

    Not to mention they steal the labor of the working class in an attempt to mimic and replace it (poorly), they vacuum up private user data at unprecedented rates, and they are destroying the environment at every step of the process. To top it all off, people are cognitively offloading to these tools the same way they did to reliable tech in the past, but due to hallucinations and general unreliability, those doing this are actively becoming less intelligent.
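To make the “remix their training data” point concrete, here’s a toy sketch. This is not how a real LLM works internally; it’s a word-level Markov chain, and the corpus and function names are made up for illustration. The point it shows is narrow: a purely statistical text generator can only recombine continuations it has already seen.

```python
import random
from collections import defaultdict

def train(corpus, order=1):
    """Count which word follows each length-`order` context in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Sample text by repeatedly picking a continuation seen in training."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    out = list(context)
    for _ in range(length):
        nexts = model.get(tuple(out[-len(context):]))
        if not nexts:
            break
        out.append(rng.choice(nexts))  # can only emit words from the corpus
    return " ".join(out)

corpus = "the model repeats the data the model saw in training"
model = train(corpus)
print(generate(model, seed=0))
```

Every word it emits came from the training text; the only “novelty” is the order of recombination plus sampling noise, which is the shape of the objection above.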

    My closing thought is that “GenAI” is a massive bubble waiting to burst. The tech won’t disappear, but it won’t be nearly as accessible after that happens. Companies right now are dumping tens or hundreds of billions a year into training and inference for these models, against annual revenues in the hundreds of millions for these sectors. It’s entirely unsustainable, and they’re all just racing to bleed the next guy white so they can be the last one standing to collect all the (potential future) profits. The cost of tokens for an LLM is rising, despite marketing teams claiming the opposite when they put old models on steep discount while raising prices on the new ones. The number of tokens needed per prompt is also going up drastically with the “thinking”/“reasoning” approach that’s become popular. Training costs are rising with diminishing returns, due to a lack of new data and poor-quality generated data getting fed back in (risking model collapse). The costs will only climb faster, with nothing to show for it. All of this for output you’re going to need to review and edit anyway to ensure any standard of accuracy, so you may as well have just done the work yourself and been better off financially and mentally.
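The model-collapse risk mentioned above can be sketched with a toy feedback loop. This is an assumption-laden cartoon, not real LLM training: fit a simple Gaussian to data, regenerate the data from the model’s own samples, and mimic the preference for high-probability output by dropping the tails each round. Diversity (standard deviation) collapses across generations.

```python
import random
import statistics

rng = random.Random(0)
# Start from "human" data: 1000 samples from a normal distribution, stdev 1.0.
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]

history = []
for gen in range(10):
    # "Train" the next model: fit mean and stdev to the current data.
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    history.append(sigma)
    # "Generate" a new corpus from the fitted model.
    samples = [rng.gauss(mu, sigma) for _ in range(1000)]
    # Generative pipelines favour high-probability output: keep the 80%
    # of samples closest to the mean and discard the tails.
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[:800]

print(f"stdev: gen 0 = {history[0]:.2f}, gen 9 = {history[-1]:.3f}")
```

Each generation trained on the previous generation’s output loses the tails it never saw, so the spread shrinks toward nothing; that narrowing is the “poor-quality generated data fed back in” failure mode.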