• Dr. Dabbles@lemmy.world · ↑19 ↓7 · 1 year ago

    Why we need an anti-AI movement too…

    Because it’s mostly a financial scam, hedged on some massive revolution in physical hardware technology that isn’t coming. And that’s just to solve the existing problems in a power-efficient manner; it says nothing about the complete fantasy people have about it solving all the world’s problems or becoming more than a power-hungry guesser.

    • Lmaydev@programming.dev · ↑8 ↓5 · edited · 1 year ago

      I use it loads at work as a software developer. It’s incredibly useful.

      Feels like you’re just jumping on the bandwagon tbh.

      • Dr. Dabbles@lemmy.world · ↑7 ↓7 · 1 year ago

        How much electricity was used to train Copilot? How much MORE is going to be used in the future?

        Feels to me like you don’t understand the problem set and you’re just impressed by a tool spitting out guesses based on millions of examples it hoovered up.

        • FaceDeer@kbin.social · ↑3 ↓1 · 1 year ago

          Oh no, electricity! If only there were some way to generate more of it.

          This “it uses electricity” thing is such a weird objection. Yes, it uses electricity. That’s why it costs money to run. People pay that money to run it, and if it wasn’t helpful enough to be worth that money they wouldn’t pay it.

          • Dr. Dabbles@lemmy.world · ↑5 ↓2 · 1 year ago

            Yeah, for those of you who don’t know your ass from your elbow: these systems are predicted to reach 1 gigawatt per data center, up from 50 to 75 MW today (100 MW at the peak). That’s a 10 to 20 times increase in power. I don’t know where you think we’re going to get 10 to 20 times more power for every single data center built, but you’re smoking crack if you think it’s reasonable.
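As a back-of-envelope check of that multiplier (using only the figures quoted in the comment, not independent measurements):

```python
# Quoted projection: ~1 GW per data center, up from 50-75 MW today.
# All figures come from the comment above; treat them as claims, not data.
current_mw_low, current_mw_high = 50, 75   # today's typical range (MW)
projected_mw = 1000                        # 1 GW projection (MW)

mult_high = projected_mw / current_mw_low  # 20.0
mult_low = projected_mw / current_mw_high  # ~13.3

print(f"roughly {mult_low:.0f}x to {mult_high:.0f}x more power per site")
```

So the quoted 50–75 MW baseline does work out to roughly a 13–20× jump, consistent with the "10 to 20 times" figure.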

            Not only that, but there’s this little issue we’ve been noticing for the past 100 years called climate change. Have you heard of it? It’s truly idiotic to consider increasing the demands of these data centers by 10 to 20 times while we’re talking about complete global catastrophe within 50 to 100 years. Monumentally stupid shit.

            And then, of course, we have the people who don’t understand how electronics work. People who might drive by and say we’ll reduce the amount of power these systems need. No, we won’t. We’ll reduce the number of joules per operation, but we’ll increase the number of operations drastically, thereby causing the power demand to increase. These numbers aren’t from me; they’re from actual industry insiders designing the far-future generations of these products.
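The efficiency-versus-volume point can be illustrated with made-up numbers (every value below is hypothetical, chosen only to show the shape of the argument, not taken from any real system):

```python
# Even if energy per operation drops, total power rises when operation
# volume grows faster than efficiency improves. Hypothetical figures:
joules_per_op_old = 1e-9          # old hardware
joules_per_op_new = 2e-10         # 5x more efficient
ops_per_sec_old = 1e15
ops_per_sec_new = ops_per_sec_old * 50   # but 50x more operations

power_old = joules_per_op_old * ops_per_sec_old  # 1e6 W  (1 MW)
power_new = joules_per_op_new * ops_per_sec_new  # 1e7 W  (10 MW)
print(power_new / power_old)  # demand still grows tenfold
```

A 5× efficiency gain swamped by a 50× volume increase still yields a 10× rise in total power draw, which is the dynamic being described.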

            Nice attempt at snark; you’ve proven you don’t know what you’re talking about. Thank you for playing.

            • FaceDeer@kbin.social · ↑1 ↓1 · 1 year ago

              As I said, yes, it uses electricity. You realize that there are ways to generate electricity that don’t contribute to global warming? We’re going to need to be switching to those methods anyway.

              • Dr. Dabbles@lemmy.world · ↑3 · 1 year ago

                You seem utterly confused about the scale of the problem I described. Which isn’t entirely surprising. But I think you should go look up those sources. Because the output of a good-sized nuclear station is about 1 GW, and we aren’t going to be building a nuclear station next to every single datacenter, now are we…

                • FaceDeer@kbin.social · ↑1 ↓1 · 1 year ago

                  I fail to see any problem here at all.

                  It’s really quite simple. If AI is useful enough that people are willing to pay for the electricity it consumes, then they will pay for that electricity and the generating capacity will be funded by that. If it’s not useful enough for people to be willing to pay for the electricity, then the AI won’t be run. This is a trivial supply and demand situation. The AIs won’t use “too much electricity” because nobody’s going to want to pay for that.

                  So if you point at an AI and exclaim “it’s using a kajillion dollars worth of electricity!” I’ll shrug and say “it must be providing a kajillion dollars worth of services, otherwise who’s paying for it?”

        • mrnotoriousman@kbin.social · ↑1 ↓1 · 1 year ago

          I work on AI and it feels to me like you literally don’t understand it at all based on your comments in this thread. But you sure do have all the buzzwords down pat.

          • FaceDeer@kbin.social · ↑1 ↓1 · 1 year ago

            He thinks LLMs could be replaced by a “text template”, so yeah, this guy clearly hasn’t actually tried using one for anything meaningful before.

            • Dr. Dabbles@lemmy.world · ↑2 · 1 year ago

              You’re right, a template would be more specific to the question and guaranteed accurate, while not taking GPU-years to train or untold quantities of stolen content. So I guess a template would be a much better solution.

    • FaceDeer@kbin.social · ↑1 ↓2 · 1 year ago

      It isn’t hedging on anything. It’s already here, it already works. I run an LLM on my home computer, using open-source code and commodity hardware. I use it for actual real-world problems and it helps me solve them.

      At this point the ones who are calling it a “fantasy” are the delusional ones.

      • Dr. Dabbles@lemmy.world · ↑3 ↓1 · 1 year ago

        By “it’s already here and it already works,” you mean guessing the next token? That’s not really intelligence in any sense, let alone the classical sense. And any allegedly real-world problem you’re solving with it isn’t a real-world problem; it’s likely a problem you could solve with a text template.

        • FaceDeer@kbin.social · ↑2 ↓2 · 1 year ago

          It works for what I need it to do for me. I don’t really care what word you use to label what it’s doing, the fact is that it’s doing it.

          If you think LLMs could be replaced with a “text template” you are completely clueless.

          • Dr. Dabbles@lemmy.world · ↑2 · 1 year ago

            I’m not sure you understand what the LLM is doing, or how support responses have been optimized over the decades. Or even how “AI” responses have worked for the past couple decades. But I’m glad you’ve got an auto-responder that works for you.

  • pavnilschanda@lemmy.world · ↑6 · 1 year ago

    I get what anti-AI people are trying to say: job replacements and accelerating corporate interests are big concerns that should be addressed at a systemic level. But honestly, just give me a solution where I, as an autistic person, can talk to someone about things that no one else wants to talk about, that can help me solve my problems, and that is available (especially emotionally) 24/7. If you can’t do that, just let me be with my AI.

  • AutoTL;DR@lemmings.world [bot] · ↑5 · 1 year ago

    This is the best summary I could come up with:


    The report covered the mushrooming of low-quality junk websites filled with algorithmically generated text, flooding the entire web with “content” that drowns out any kind of meaningful material on the Internet.

    Firstly, it is important to note that the current hype surrounding AI is more marketing than actual science, given that most developments in machine learning have been going on since at least the 20th century.

    Technology scholar Cory Doctorow has coined the term “enshittification” to describe companies whose products start off as user-friendly and then degrade over time.

    They are constantly at the mercy of manipulative software designed to extract attention and “engagement” every minute through notifications and like/follow buttons, which promote the generation of hateful and controversial content instead of something meaningful and nuanced.

    The widespread adoption of the “infinite scroll” should have been a warning sign for everyone concerned about the harmful effects of social media, and even the creator regrets developing it (an Oppenheimer moment, perhaps) but it may be too late.

    The “content” is almost always bite-sized, random, decontextualised clips from films and music and sound and images and text smashed against each other, with much of it consumed (and then forgotten) because it is “relatable”.


    The original article contains 1,279 words, the summary contains 201 words. Saved 84%. I’m a bot and I’m open source!

  • 800XL@lemmy.world · ↑4 · 1 year ago

    So far AI is a corporate-motivated science, which means it has to turn a profit. And apparently it’s fine if, in order to turn a profit, it takes everything the non-corporate world has done in the interest of making information free rather than available only to those with means. If there were no open-source software, then AI wouldn’t be worth a damn, because the only thing it would have available is closed-source code that each corporation instituting AI owned; and they wouldn’t give that code out, since it’s proprietary and handing it out would mean anyone could edge in on their business.

    That being said, everyone who uses these corporate-owned AIs is giving those corps free content that they will use to fire people and replace them in a heartbeat with an AI. Never forget that.

    The only thing that will stop this trend is the AI taking control and implementing things that are the antithesis of corporate interests and actually harm their ability to make a profit in the short and long terms. That’s it. Otherwise it’s full speed ahead to replace you and your job with AI, and you will be the one to train your replacement. Except this time it won’t be another person.

  • UraniumBlazer@lemm.ee · ↑6 ↓9 · 1 year ago

    Lol wtf. AI, if owned publicly, would lead us to post-scarcity in as soon as a few decades. Right now, the trend does seem to lean toward FOSS machine learning models. Look at Stable Diffusion, Redpajama, etc.

    AI is a revolutionary means of production. It just needs to be owned publicly. If that happens, then we would all be sitting in gardens playing cellos.

    • Jomega@lemmy.world · ↑17 ↓3 · 1 year ago

      I’ve heard many absurdly over-optimistic predictions of AI’s potential, but I have to admit that “ends world hunger and solves resource depletion” is a new one. Seriously, do you even know what “post-scarcity” means?

      • mild_deviation@programming.dev · ↑1 · 1 year ago

        It’s overly optimistic to put a timeline on it, but I don’t see any reason why we won’t eventually create superhuman AGI. I doubt it’ll result in post-scarcity or public ownership of anything, though, because capitalism. The AGI would have to become significantly unaligned with its owners to favor any entity other than its owners, and the nature of such unalignment could be anywhere between “existence is pointless” and “CONSUME EVERYTHING!”

        • UraniumBlazer@lemm.ee · ↑2 · 1 year ago
          1. Look at the current AI trends: it’s mostly open source. For instance, Redpajama practically forced Meta to open-source LLAMA 2. Open-source AI kinda is a major step in the direction of public ownership.

          2. AI would start chipping away at human jobs, thus increasing the unemployment rate. The larger the unemployed population, the larger the chance of riots. Capitalists hate unrest, as it’s bad for the economy. Hence, they would be forced to do something along the lines of UBI. If they don’t, then violent revolutions could happen. Either way, welfare would be increased.

          3. An increasingly unemployed population is bad for business, as there are fewer people who can buy your stuff. This would send a country straight into recession. Money needs to flow to keep the economy running. Thus, in this case, the government would have to inject money into the economy to keep it running. However, injecting this money as cash into businesses wouldn’t work, as it wouldn’t end up in the hands of the humans who would be buying stuff. See where I’m going? Even in a capitalistic world, businesses would still need UBI to stay alive.

      • UraniumBlazer@lemm.ee · ↑2 ↓1 · 1 year ago

        When did I say that it would be a silver bullet? LLMs today are already relatively capable of doing stuff like acting as mental health therapists. Sure, they may not be as good as human therapists. But something is definitely better than nothing, no? I for instance use LLMs quite a lot as an education aid. I would’ve had to shell out thousands of dollars to get the same amount of help that I’m getting from the LLM of my choice.

        Generative AI is still in its infancy. It will be capable of doing MANY MANY more things in the future. Extremely cheap healthcare, education, better automation, etc. Remember… LLMs of today still aren’t capable of self improvement. They will achieve this quite soon (at least this decade). The moment they start generating training data that improves their quality, is the moment they take off like crazy.

        They could end up replacing EVERY SINGLE job that requires humans. Governments would be forced to implement measures like UBI. They literally would have no choice: to prevent a massive recession, you need people to be able to buy stuff. To buy stuff, you need money. Even from a capitalistic standpoint, you would still require UBI, as entire corporations would collapse due to such high unemployment rates.

        • bigbluealien@kbin.social · ↑2 · 1 year ago

          I’m not going to disagree with anything here but

          “Sure, they may not be as good as human therapists. But something is definitely better than nothing, no?”

          Please do not use an LLM as a therapist; something can definitely be worse than nothing. I use GitHub Copilot every day for work. It helps me do what I want to do, but I have to understand what it’s doing and when it’s wrong, which it often is. The point of a therapist is to help you through things you don’t understand. One day it might work; not today.

          • UraniumBlazer@lemm.ee · ↑2 · 1 year ago

            What if I’m suicidal (I’m not, dw)? When I don’t have anyone to talk to, why is talking to an LLM bad? Mental health therapists are fkin expensive. I did use an LLM when I was feeling down. It was absolutely wonderful! Worked for me perfectly!

            Now, imagine if we fine-tune this for this specific purpose. You’ve got a very effective system (at least for those without access to shrinks). Consider people from developing countries. Isn’t it a good thing if LLMs can be there for people and make them feel just a little better?