• theotherbelow@lemmynsfw.com · +3 · 1 day ago

    Cool, so an AI reading a book is substantially transformative, but college students need to sell plasma to afford nth-edition textbooks.

    Such an efficient society for wealthy crooks.

  • nandeEbisu@lemmy.world · +1 · 1 day ago

    With the law as I understand it (not a lawyer), this seems correct.

    I think this is unhealthy for society as a whole, but it is the legislature’s job to fix that, not the judiciary’s.

    • Pup Biru@aussie.zone · +6 · 2 days ago

      Alsup also said, however, that Anthropic’s copying and storage of more than 7 million pirated books in a “central library” infringed the authors’ copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement.

      US copyright law says that willful copyright infringement can justify statutory damages of up to $150,000 per work.

      this is pretty much what we expected from the decision last week: training on books is legal; pirating books is still piracy… you can train on books you own without asking permission (and, i assume, on ebooks whose DRM you don’t have to circumvent, since circumvention is illegal in a different way)
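
      for scale, a back-of-the-envelope calculation, assuming the statutory maximum were applied to every one of the 7 million works (actual awards would surely be far lower):

      ```python
      # back-of-the-envelope only: statutory cap applied to every work
      works = 7_000_000
      max_statutory_damages = 150_000  # USD per work, willful infringement cap
      print(f"${works * max_statutory_damages:,}")  # $1,050,000,000,000
      ```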

  • ParadoxSeahorse@lemmy.world · +24/-1 · 3 days ago

    This is about physically bought, scanned books. What they’re allowed to do with the resulting model, eg. charging people for access to it, is not covered by this case.

    Maybe controversial, but compared to Meta pirating books, claiming it makes no difference, and claiming each book is individually worthless to the model (while the model is of course worth billions), is it wrong that I’m like “hmm, at least they’re buying books”?

    As others say, there should be specific licensing, so they actually need to pay a per-book cost, set by the publisher, specifically for the right to legally include it in their model: not just shopping like a human, but acting as an LLM’s skin-suit slave.

    • altkey@lemmy.dbzer0.com · +4 · 2 days ago

      Your comment made me think of the LLM pipeline this way (as if it could’ve started out legal):

      1. Shit goes in: over some volume, sourcing material should by default be treated as commercial use, not personal use. Licenses, pricing, fees, etc. already clearly differentiate the two.
      2. Shit goes out: the strictest license of any work in the dataset applies to how the output can be used (a minimal sketch of this rule follows the list). If we can’t discern whether X was in the mix, we can’t say it wasn’t, and therefore assume it’s there.
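
      The sketch for point 2, assuming a purely hypothetical linear restrictiveness ranking (a real scheme would need a proper license lattice, not one scale):

      ```python
      # hypothetical ranking: higher number = more restrictive
      RESTRICTIVENESS = {
          "public-domain": 0,
          "CC-BY": 1,
          "CC-BY-SA": 2,
          "CC-BY-NC": 3,
          "all-rights-reserved": 4,
      }

      def effective_output_license(dataset_licenses):
          """The whole output inherits the strictest license in the dataset."""
          return max(dataset_licenses, key=RESTRICTIVENESS.__getitem__)

      print(effective_output_license(["CC-BY", "public-domain", "CC-BY-NC"]))
      # -> CC-BY-NC
      ```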

      To claim X is not in the dataset, the LLM owner’s dataset should be open, unless parts of it are specifically closed by contract obligations with the dataminer/broker. Given the same parameters, both the open and closed parts should produce the same hash sums, for the datasets and for the resulting weights, as were produced during the training run itself. If the open parts don’t contain said piece of work, the responsibility is on the data providers, and the closed parts get inspected by an unaffiliated party plus the LLM’s owner. Brokers there are interested in showing it’s not on them, and there should be a safeguard against swiftly deleting the evidence: thus the initial trade deal is fixed by some hash once again.
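
      A rough sketch of the fingerprinting step, assuming plain SHA-256 over the files (illustrative, not a spec): hash every file, then hash the sorted per-file digests into one dataset fingerprint that open and closed parts alike could publish without revealing the files themselves.

      ```python
      import hashlib
      from pathlib import Path

      def file_sha256(path):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def dataset_fingerprint(root):
          """One stable hash for the whole dataset: same files in, same
          fingerprint out, regardless of file order."""
          digests = sorted(file_sha256(p) for p in Path(root).rglob("*") if p.is_file())
          return hashlib.sha256("".join(digests).encode()).hexdigest()
      ```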

      A broker caught with someone’s pirated work can’t knowingly sell the same dataset again unless the problematic pieces are deleted. The resulting model can continue learning on additional material, but then a complete relearning should be done on the new, updated datasets; otherwise it’s a crime.

      Failure to provide hashes or other signatures verifying the datasets are the same shifts the blame onto the LLM’s owner. Producing and sharing them in an open and observable manner, and keeping more of one’s data pool public, grants one the right to run it as a business and shields against possible lawsuits.

      Data brokers may not disclose their datasets to the public, but then all direct for-profit piracy charges are on them, not the LLM owner, provided the latter didn’t obtain said content themselves but purchased it from the other party.

      It got longer than I thought.

      • HobbitFoot @thelemmy.club · +3 · 2 days ago

        Except that humans are allowed to make some derivative works under current copyright law. This has been degraded to the point where reaction videos have some defense as derivative works.

        If a reaction video is a derivative work, why can’t an AI trained on that work also count?

        • ParadoxSeahorse@lemmy.world · +2 · 2 days ago

          “Derivative” is less questionable than “work”.

          Eg. AI-gen imagery is mostly not copyrightable; legally it’s closer to plagiarism than to art?

          • HobbitFoot @thelemmy.club · +1 · 2 days ago

            Derivative describes what happened to the copyrighted work, not what slop was churned out by it.

            If the plagiarism is far enough from the original work, it isn’t protected by the original copyright.

      • ParadoxSeahorse@lemmy.world · +2 · 2 days ago

        I really like the idea of signing the model with a dataset hash. Each legally licensable piece of source material could provide a hash, maybe?

        In terms of outputs, it’s really difficult to judge how transformative a model is without transparency of the dataset. We’ve obviously seen prompts regurgitate known works verbatim, and it could be even more prevalent than it appears, hidden by obscurity rather than by transformation. More than meets the eye.
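
        A crude version of that verbatim check, sketched as a plain n-gram scan (the window size and tokenization are arbitrary placeholders; a real check would normalize whitespace, punctuation, and casing):

        ```python
        def verbatim_overlap(known_text: str, model_output: str, n: int = 12):
            """Return every n-word run from a known work that appears verbatim
            in the model output."""
            words = known_text.split()
            ngrams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
            return sorted(g for g in ngrams if g in model_output)
        ```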

        • altkey@lemmy.dbzer0.com · +1 · 2 days ago

          Each legally licensable piece of source material could provide a hash, maybe?

          We may generate a hash sum for every piece, but at first I didn’t see how it would help. The one application I assumed was knowing that between stages A and B a database of many works hasn’t been modified. A hash of a singular piece alone can’t tell us whether that piece was included in the dataset, can’t support prosecuting cases of its misuse, etc. For licensing stuff it wouldn’t hurt to obtain one, I guess, but I didn’t know how it could be applied to prove something. Alas, I think I do now*.
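
          To make the distinction concrete, a toy sketch (placeholder data, plain SHA-256): a single dataset-level hash only proves the set didn’t change between stages, while answering “was work X included?” needs the per-piece manifest itself.

          ```python
          import hashlib

          def digest(data: bytes) -> str:
              return hashlib.sha256(data).hexdigest()

          # per-piece hashes (illustrative placeholder "works")
          manifest = {digest(b"work A"), digest(b"work B")}

          # one dataset-level hash: proves the set is unchanged, nothing more
          dataset_hash = digest("".join(sorted(manifest)).encode())

          # membership questions need the manifest
          print(digest(b"work A") in manifest)  # True
          print(digest(b"work C") in manifest)  # False
          ```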

          In terms of outputs, it’s really difficult to judge how transformative a model is without transparency of dataset.

          True. That’s why I assume everything in the dataset is involved in every creation.

          It is, probably, the level of fight only accessible to the likes of Disney with their endless pockets, but if they do their lawsuit thing frequently enough (correctly assuming the likeness of Mickey is in every graphical dataset), there’s hope that LLM owners and dataset brokers would go more transparent about the data they obtain and use, thus helping everyone.

          One tool I see being created is (here’s the asterisk *) a standard look-up webpage where you can search a closed commercial dataset (or many of them at once) by hash or by providing a file**. Hashes suck here because they naturally change whenever the file is slightly modified. But if it’s a known copy that has circulated the web for a while, the hash can serve as a unique identifier for that one thing.

          Asterisk two**: I imagine that if something like that occurs, it’d be a captcha-, ad-, and js-code-ridden nightmare. If there could be a bill about this whole thing, the look-up site should be included too, with instructions to provide an API for that resource and limits on how awful it can be.
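
          A minimal sketch of such a look-up tool, with an in-memory stand-in for the closed dataset (names and data are made up; exact hashes only match byte-identical copies, which is the caveat above, and near-duplicates would need perceptual hashing):

          ```python
          import hashlib

          # stand-in for a closed commercial dataset: digest -> work title
          MANIFEST = {
              "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae": "Some Registered Work",
          }

          def lookup_by_hash(digest: str):
              return MANIFEST.get(digest)

          def lookup_by_file(path: str):
              # hash the local file, then ask the manifest about it
              with open(path, "rb") as f:
                  return lookup_by_hash(hashlib.sha256(f.read()).hexdigest())
          ```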

    • monogram@feddit.nl · +33/-1 · 3 days ago

      Just because you own a CD doesn’t mean you have a license to play it in a club.

      • amorpheus@lemmy.world · +1 · edited · 1 day ago

        Since when can’t you use knowledge gained from books for personal profit?

        The only difference is scale.

        • jmill@lemmy.zip · +17 · 3 days ago

          In this analogy, the AI uses books like a remix DJ would use bits and pieces of songs from different tracks to splice together their output. Except in the case of AI, it will be much harder to identify the original source.

            • jmill@lemmy.zip · +7 · 2 days ago

              If you made money doing that, it probably would be illegal. You would certainly get sued, in any case.

              • HobbitFoot @thelemmy.club · +1/-9 · 2 days ago

                People make a lot of money summarizing articles behind paywalls and it is generally considered legal as long as it is a summary and not copied text.

          • notfromhere@lemmy.ml · +2/-12 · 2 days ago

            Have you never used bits and pieces of what other people say, what you’ve read in books, riffs you’ve heard, or styles you’ve seen/heard/read, when communicating or creating?

            • jmill@lemmy.zip · +12 · 2 days ago

              Of course. But I’m not a machine churning out an endless spew of those bits and pieces with no further creative input. I’d be on the side of giving any truly conscious entity rights (including creative ones), but LLMs are not, and I don’t think ever could be, conscious. That’s just not how they work, to my understanding anyway.

              • notfromhere@lemmy.ml · +1/-12 · 2 days ago

                If LLMs aren’t conscious, who is using them to churn out an endless spew of those bits and pieces with no further creative input?

                Someone has to be doing it. I guess it could be these newfangled AI Agents I’ve been hearing about, but as far as at least I’m aware, they still require input and/or editing (depending on the medium) from a human.

                • njm1314@lemmy.world · +4 · edited · 2 days ago

                  Okay let’s take a break here cuz I think we need to point something out. They are absolutely not conscious. By any definition of the word. By any stretch of the imagination. It’s important to me that you understand this. What you are describing here is a tool. Not something with consciousness.

      • notfromhere@lemmy.ml · +2/-5 · 2 days ago

        They are likely referring to the training process of populating model weights based on prepared datasets via training algorithms.
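
        For anyone unfamiliar, a toy illustration of what “populating model weights” means: one weight, a squared-error loss, plain gradient descent. Real LLM training is this same loop scaled up to billions of weights and tokens (the numbers here are made up).

        ```python
        # dataset: inputs x with targets y = 2x; "training" should recover w ≈ 2
        data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
        w = 0.0  # the model's single weight, initially empty of any "knowledge"
        for _ in range(100):
            # gradient of mean squared error with respect to w
            grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
            w -= 0.05 * grad  # the training algorithm: one gradient-descent step
        print(round(w, 3))  # ~2.0, a weight populated entirely from the dataset
        ```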