Copyright class actions could financially ruin AI industry, trade groups say.

AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They’ve warned that a single lawsuit brought by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a “rigorous analysis” of the potential class and instead based his judgment on his “50 years” of experience, Anthropic said.

  • nickwitha_k (he/him)@lemmy.sdf.org · 19 hours ago

    That’s…a take. And clearly not sounding like a cultist at all. /S

    Giving corpos free rein to exploit whatever they want has never resulted in positive things, generally, just bloodshed and suffering. Pretending that flagrant violation of IP is ok when done to train models doesn’t do much for big companies, but it does obliterate individuals’ ability to support themselves. This is the only reason that this environmentally disastrous and unprofitable tech has been so heavily embraced: to be used as a tool of exploitation.

    AI is not going to save anyone. It is not going to emancipate anyone. Absolutely none of the financial benefits are being shared with the working class. And, if they were, it would have little impact on LLMs’ big-picture value, as they are vastly accelerating the destruction of the planet’s biosphere. When that’s gone, humanity is finished.

    Embracing the current forms of commercialized AI is only to the detriment of humanity and the likelihood of the creation of any artificial sentience.

    • Plebcouncilman@sh.itjust.works · 2 days ago

      Your take is illogical, unless you are arguing for some sort of pre-industrial communism, which is never going to happen, because I think any sane person can agree that technology has vastly improved our lives. It has introduced pains, sure, but everything is a process.

      But assuming that you can admit that technology has improved the quality of life of humans, then it follows that you’ll look at any piece of technology as what it is: a tool. It doesn’t matter what its origins are, really, only what it can do, because every major technology of the last two centuries is a product of capitalism; that is inevitable, since we live in a capitalist world. Would you argue that we should stop using technologies that were created with capitalistic interests? Why don’t you throw out your computer? Should we stop using heavy machinery and power tools?

      Oh, and speaking of computers: did computers and automated production lines destroy the ability of people to make a living? Maybe temporarily, and then new jobs popped up. But ok, maybe this time that doesn’t happen. How do you think the system sustains itself without collapsing? I think it is easy to see that it would trigger some kind of revolution. Certainly a new social contract will be needed. This is capitalism creating the conditions for socialism to exist. Something something internal contradictions. Etc etc.

      Whether it’s environmentally harmful is an argument against all technology, especially in the early stages, as they have all been energy-inefficient. In which case, maybe you should be arguing that we should have never left the caves. Like I said elsewhere, the inevitability of any new technology is that it will be inefficient, and we make it more efficient as we develop it further.

      And listen, I don’t think AI is all that great; it really cannot take most people’s jobs at this point. But it is a step in the direction of full automation.

      But I want to understand exactly where you are coming from, like do you think that we should stop all technological progress and simply maintain our civilization in stasis or roll it back to some other time or what? Because I really cannot understand where this type of argument comes from, as virtually any kind of human activity impacts the environment; that’s literally our adaptation. AI itself is not the issue here, it’s simply where we get our energy from. Thankfully solar, despite the best attempts of the idiots at the White House, continues growing at an unprecedented rate. Because, like I said, everything is a process. I get the impatience, but the reality is that we simply cannot state a destination and hope to be there simply because it is the right place to be; one needs to go through the steps to get there. I don’t know if that makes sense.

      • nickwitha_k (he/him)@lemmy.sdf.org · 11 hours ago

        Your take is illogical, unless you are arguing for some sort of pre-industrial communism, which is never going to happen, because I think any sane person can agree that technology has vastly improved our lives. It has introduced pains, sure, but everything is a process.

        That’s quite a leap. Not all technology is worthwhile or improves the overall human experience. Are you getting there by assuming that the world is black and white: embracing all technology or rejecting all technology? If so, I would recommend re-evaluating such assumptions, because they do not hold up to reality.

        Oh, and speaking of computers: did computers and automated production lines destroy the ability of people to make a living?

        Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane but it is literally what a number of tech billionaires have stated.

        Maybe temporarily, and then new jobs popped up.

        Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.

        Now, we’re at record levels of homelessness too. This isn’t going to result in people magically gaining class consciousness. People are just going to die miserable, preventable deaths.

        But I want to understand exactly where you are coming from, like do you think that we should stop all technological progress and simply maintain our civilization in stasis or roll it back to some other time or what?

        Ok. Yes. It does appear that you are working from a black and white worldview where all technology is “progress” and all implements of technology are “tools”, with no other classification or differentiation of their value to the species, or consideration for how they are implemented. Again, I would recommend reflection, as this view does not mesh well with observable reality.

        Someone else already made the apt comparison between this wave of AI tech and nuclear weapons. Another good comparison would be phosgene gas. When it was first mass-produced, it was used only for mass murder (as the current LLMs’ financial supporters desire them to be used); only the greater part of a century later did the gas get used for something beneficial to humanity, namely doping semiconductors. However, its production and use are still very dangerous to people and the environment.

        In addition to all of this, it really appears that you fail to acknowledge the danger posed by accelerating the loss of the planet’s ability to sustain human life. Again, for emphasis, I’ll state: AI is not going to save us from this. The actions required are already known; it won’t help us to find them. The technology is being used nearly exclusively to worsen human life, make genocide more efficient, and increase the rate of poverty, while accelerating global climate change. It provides no net value to humanity in the implementations that are funded. The only emancipation it is doing is emancipating people from living.

        • Plebcouncilman@sh.itjust.works · 9 hours ago

          To me it seems you are the one who has a black and white view of the world. Tool is used for bad = tool is bad, in your worldview. That’s never the case. Tools are tools; they are neither good nor bad. The moral agency lies in the wielder of the tool. Hence my argument is that, because technologies cannot be uninvented, and all technologies have potentially beneficial uses, we need to focus on shaping policy so that AI is used for those beneficial purposes. For example, nukes are deterrents as much as they are destroyers; is it better that they had never been invented? Sure, but they were invented, they exist, and once the tech exists you need it in order to remain competitive. Meaning not being invaded willy-nilly by a nuclear power, like Ukraine is right now, which would not have happened if they had been a nuclear power themselves.

          Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane but it is literally what a number of tech billionaires have stated.

          They have not stated it in those terms; that’s your interpretation of it. I am aware of Curtis Yarvin, Thiel et al., but they are hardly the only ones in control of the tech. But that’s not even the point. The tech exists; even if that was the express intention, it doesn’t matter, because China will keep pursuing the tech. Which means that we will keep pursuing it, because otherwise they could gain an advantage that could become an existential threat to us. And even if we did stop pursuing it for whatever reason (which would be illogical), the tech would not stop existing in the world, as with nukes, except now all the billionaires would hire their AI workers from China instead of the US. Hardly an appealing proposition.

          Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.

          So your solution is to ban the tech instead of changing policies? Jesus Christ, my guy. Arguments need to be logical, you understand that, right? This entire worldview and rhetoric is so detached from reality that it is downright absurd.

          The problem with the environment, for example, is not that AI exists, but rather that we do not produce enough energy from renewables. Why would the logical solution be to uninvent AI (or ban it entirely, which is essentially the same) instead of changing policy so that energy production comes from renewables? Which, FYI, is what is happening at a faster rate than ever.

          I understand the moral imperative and the lack of patience, but the way the world works is that one thing leads to another; we cannot reach a goal without going through the necessary process to reach it.