• stickly@lemmy.world
    2 days ago

    If you trace someone’s art

    Not how AI works

    copy their style and were paid to do it

    All artists copy, iterate on, or regurgitate existing work. What does pay have to do with anything? It’s clearly not a deciding factor for anti-AI critics; the original post doesn’t mention payment at all.

    they are capable of making something new as they improve, possibly a style no one has seen or a unique take on an existing style

    This just isn’t how humans create. There’s also nothing stopping a human artist from taking inspiration from AI output (“wow, the combination of X subject in Y style is interesting. How can I improve it”), is there no value in that? Is that line of creativity forever tainted?

    always be generating in the confines of its training data, and getting WORSE as it is trained

    Categorically false for art. AI output quality does get worse when you inbreed it on facts or data grounded in the real world. The only thing it’s really, truly good at is hallucinating, which is a fine way of making art, because there the quality is entirely subjective.

    A model with 60B parameters, each stored in 16 bits, has something like (2^16)^60B possible weight configurations. Just because humans currently lack the creativity to do anything interesting with that space doesn’t mean the tool is slop.
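    A back-of-envelope sketch of that scale (my own illustration; the 16-bit weight size is an assumption, not something stated in the thread). The count is far too large to compute directly, but its order of magnitude is easy:

```python
import math

# Rough scale of the configuration-space claim above. Assuming each of
# 60B parameters is a 16-bit value (a hypothetical precision chosen for
# illustration), the number of distinct weight configurations is
# (2**16) ** 60e9. Too large to evaluate, so compute its base-10 log.
n_params = 60_000_000_000
bits_per_param = 16  # assumed precision
log10_configs = n_params * bits_per_param * math.log10(2)
print(f"roughly 10^{log10_configs:.3e} possible configurations")
```

    That works out to about 10^(2.9 × 10^11) configurations, astronomically more than the ~10^80 atoms in the observable universe.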

    There are real, ethical reasons to dislike our current AI usage. But saying all AI content is bad ipso facto is just reactionary nonsense.

    • starelfsc2@sh.itjust.works
      1 day ago

      I was using tracing as a metaphor for stealing their art without permission and using it as weights in training data, and I said “paid to do it” as in these companies get massive returns off the stolen art. Even with locally generated images, the scraped content was still taken without permission.

      All artists copy, iterate on, or regurgitate existing work. What does pay have to do with anything? It’s clearly not a deciding factor for anti-AI critics; the original post doesn’t mention payment at all.

      I mean, not wrong, but not fully correct either. AI generates specifically from the dataset it has. I’d say the way AI neurons work is similar to humans’, but the AI’s data is literally just images and words. That is like 30% of what a human will experience, and it’s limited to specifically what its dataset contains. It is incapable of generating outside that dataset. A human is also incapable of generating outside their dataset, but a human’s dataset isn’t restricted to experiences and things that have already happened, and those experiences aren’t reflected in just words and images. AI images also tend to average their dataset, so the output ends up more generic on average.

      Categorically false for art. AI output quality does get worse when you inbreed it on facts or data grounded in the real world. The only thing it’s really, truly good at is hallucinating, which is a fine way of making art, because there the quality is entirely subjective.

      Do you have a source for that? Everything I’ve read has said the opposite, such as https://arxiv.org/abs/2311.12202
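      The degradation that paper studies (often called “model collapse”) can be sketched with a toy simulation. This is my own illustration of the general idea, not code from the paper:

```python
import random
import statistics

# Toy sketch of "model collapse": each generation is "trained" only on
# samples drawn from the previous generation's fitted Gaussian. Refitting
# on small synthetic samples systematically loses variance, so outputs
# drift toward a bland, averaged-out point. Real generative models are
# vastly more complex; this only shows the self-consuming feedback loop.
def run_generations(n_generations=200, samples_per_gen=5, seed=0):
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real world" distribution
    for _ in range(n_generations):
        data = [rng.gauss(mu, sigma) for _ in range(samples_per_gen)]
        mu = statistics.fmean(data)      # refit on purely synthetic data
        sigma = statistics.pstdev(data)  # small samples understate spread
    return mu, sigma

_, final_sigma = run_generations()
print(f"spread after 200 self-trained generations: {final_sigma:.6g}")
```

      After enough rounds the fitted spread collapses toward zero: the “model” ends up producing nearly identical outputs, which matches the loss of diversity the paper reports.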

      I absolutely agree with things like using it for inspiration or helping create an open-source project, but weighing the cost-benefit for art, the long term looks negative for artists and consumers both, even if I might not care in the short term.

      • stickly@lemmy.world
        1 day ago

        paid to do it as in these companies get massive returns

        AI companies are famously revenue-negative. Their value is entirely speculative, and it’s doubtful they’ll see any real returns given the plain physical economics of training and running these models. The money currently being made is pocket change earned by grifters (e.g. AI YouTube videos, low-effort articles), and that will dry up as water finds its level, because it’s just so easy to do.

        Looking at it from another perspective, the training of AI and the open-sourcing of the initial models might be the greatest intellectual-property transfer to the public in ~200 years. The stranglehold of Disney (and all litigious artist estates) on works that should be in the public domain has been strongly undermined.

        That is like 30% of what a human will experience

        Visual processing alone takes up about half of our brain. Between that and language you’ve covered most of it; I doubt AI quality would be much improved by giving it taste or smell.

        not restricted in their dataset to experiences and things that have already happened, and the experiences are not reflected in just words and images

        Not sure what exactly you mean here. I can imagine a purple polka-dot parrot only because I have experienced those words in the context of color, pattern, and animal. I can’t imagine an ibcid kcajjd kpal, outside of maybe vaguely attaching the concept of nonsense words to Dr. Seuss. And I suppose an experience could be reflected in, say, a tantric massage, but I’m not judging gen-AI content on its ability to rub my genitals.

        Everything I’ve read has said the opposite

        Losing semantic coherence is exactly what I mean by hallucination. Even as you lose the ability to use input to derive a sane output, the resulting image could still be aesthetically pleasing or interesting. It could also be garbage, but the same problem happens with artists on hallucinogens.

        And I agree, AI is “bad” because of what terrible people think they can do with it, and by extension the economic and environmental damage done by trying to apply it everywhere. But I think people losing sleep over individual pieces of AI content and artistic purity are being a bit silly.