• Grimy@lemmy.world · 15 hours ago

    Bad faith comparison.

    The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That’s literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.

    AI is a tool, not a weapon. This is kind of melodramatic.

      • Pup Biru@aussie.zone · 13 hours ago

        then you have little understanding of how genai works… the social impact of genai is horrific, but to argue the tool is wholly bad conveys a complete or purposeful misunderstanding of context

        • considerealization@lemmy.ca · 11 hours ago

          I’m not an expert in AI systems, but here is my current thinking:

          Insofar as ‘GenAI’ is defined as

          AI systems that can generate new content, including text, images, audio, and video, in response to prompts or inputs

          I think this is genuinely bad tech. In my analysis, there are no good use cases for automating this kind of creative activity in the way that the current technology works. I do not mean that all machine assisted generation of content is bad, but just the current tech we are calling GenAI, which is of the nature of “stochastic parrots”.

          I do not think every application of ML is trash. E.g., AI systems like AlphaFold are clearly valuable and important, and in general the application of deep learning to solve particular problems in limited domains is valuable.

          Also, if we first had a genuinely sapient AI, then its creation would be of a different kind, and I think it would not be inherently degenerative. But that is not the technology under discussion. Applications of symbolic AI to assist in exploring problem spaces, or of ML to solve classification problems, also seem genuinely useful.

          But, indeed, all the current tech that falls under GenAI is genuinely bad, IMO.

          • Pup Biru@aussie.zone · 3 hours ago

            things like “patch x out of an image” features allow people to express themselves with their own creative works more fully

            text-based genai has myriad purposes that don’t involve wholesale generation of entirely new creative works:

            using it as a natural language parser in low-stakes situation (think like you’re browsing a webpage and want to add an event to the calendar but it just has a paragraph of text that says “next wednesday at xyz”)
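A rough sketch of that calendar use case, treating a text-generation model as a natural-language parser. The model call here is a stub returning a canned response (any real API endpoint or model name would be an assumption), so this only illustrates the prompt-then-parse shape of the idea:

```python
import json

# Sketch: using a genAI model as a low-stakes natural-language parser.
# `call_llm` is a hypothetical stand-in for whatever chat-completion API
# you use; it is stubbed with a canned reply so the sketch is runnable.

PROMPT = (
    "Extract the event from the text below as JSON with keys "
    '"title" and "weekday". Reply with JSON only.\n\nText: {text}'
)

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted or local model here.
    return '{"title": "xyz", "weekday": "Wednesday"}'

def extract_event(text: str) -> dict:
    raw = call_llm(PROMPT.format(text=text))
    # The model can emit malformed JSON; keep the stakes low enough
    # that a human glances at the result before it matters.
    return json.loads(raw)

event = extract_event("next wednesday at xyz")
```

Because the output is easy to eyeball (one calendar entry), a wrong parse costs the user a few seconds, which is exactly the low-stakes property argued for below.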

            the generative part makes it generically more useful than specialist models (and certainly less accurate most of the time), and people can use them to build novel things on top of rather than be limited to the original intent of the model creator

            everything genai should be used for should be low-stakes: things that humans can check quickly, or where it doesn’t matter if it’s wrong… because it will be wrong some of the time

      • Ifera@lemmy.world · 13 hours ago

        GenAI is a great tool for devouring text and making practice questions, study guides, and summaries; it has been used as a marvelous tool for education and research. Hell, if set up properly, you can get it to give you references and markers into your original data for where to find the answers to the questions on the study guide it made you.
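A hypothetical sketch of that study-guide idea: feed numbered passages to a model and ask for questions that cite the passage they came from, then verify the citations. The model call is stubbed with a canned reply (no real API is assumed), so only the numbering-and-checking pattern is shown:

```python
import json

def call_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion API call.
    return '[{"question": "What is photosynthesis?", "source_passage": 1}]'

def make_study_guide(passages: list[str]) -> list[dict]:
    # Number the source passages so the model has markers to cite.
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Write practice questions as a JSON list of objects with keys "
        f'"question" and "source_passage".\n\n{numbered}'
    )
    questions = json.loads(call_llm(prompt))
    # Keep only questions whose cited passage actually exists -- a cheap
    # sanity check, since models sometimes invent references.
    return [q for q in questions if 1 <= q["source_passage"] <= len(passages)]
```

The citation check is the part that makes this usable in teaching: a bogus question is easy to spot when it must point back into the original material.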

        It is also really good for translation and simplification of complex text. It has its uses.

        But the oversimplification and the massively broad scope LLMs have taken on, plus the lack of proper training for users, are part of the problem Capitalism is capitalizing on. They don’t care for the consumer’s best interest, they just care for a few extra pennies, even if those are coated in the blood of the innocent. But a lot of people just foam at the mouth when they hear “AI”.

        • considerealization@lemmy.ca · 11 hours ago

          Those are not valuable use cases. “Devouring text” and generating images is not something that benefits from automation. Nor is summarization of text. These do not add value to human life and they don’t improve productivity. They are a complete red herring.

          • Ifera@lemmy.world · 11 hours ago

            Who talked about image generation? That one is pretty much useless; for anything that needs to be generated on the fly like that, a stick figure would do.

            Devouring text like that has been instrumental in learning for my students, especially the ones who have English as a Second Language (ESL), so its usability in teaching would be interesting to discuss.

            Do I think general open LLMs are the future? Fuck no. Do I think they are useless and unjustifiable? Neither. I think, at their current state, they are a brilliant beta test of the dangers and virtues of large language models: how they interact with the human psyche, and how they can help minorities, especially immigrants and other oppressed groups (hence why I advocated for a class on how to use them appropriately for my ESL students), bridge gaps in understanding, realize their potential, and have a better future.

            However, we need to solve, or at least reduce, the grip Capitalism has on that technology. As long as it is fueled by Capitalism, enshittification, dark patterns, and many other evils will strip it of its virtues and sell them for parts.