  • Ok, first of all, AI doesn’t “learn” the way humans do. That’s not how AI imaging works. It basically translates images into a form of static computers can read, uses an algorithm to mix those into a new static, then translates it back. That’s entirely different from someone studying what negative space is or learning how to draw hands.

    The comparison to human learning isn’t about identical processes; it’s about function. Human artists absorb influences and styles, often without realizing it, and create new works based on that synthesis. AI models, in a very different but still meaningful way, also synthesize patterns based on what they’re exposed to. When people say AI ‘learns from art,’ they aren’t claiming it mimics human cognition. They mean that, functionally, it analyzes patterns and structures in vast amounts of data, just as a human might analyze color, composition, and form across many works. So no, AI doesn’t learn “what negative space means”; it learns that certain pixel distributions tend to occur in successful compositions. That’s not emotional or intellectual, but it’s not random either.
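    To make that concrete, here is a minimal sketch of a diffusion-style training step. Everything in it is an illustrative assumption: a toy MLP instead of a production network, 8x8 “images,” and a fixed noise blend instead of a real noise schedule. What it shows is the mechanism: the model is trained to recover the static added to an image, and no training image is stored anywhere.

```python
import torch
import torch.nn as nn

# Toy stand-in for a diffusion model: real systems use large U-Nets or
# transformers; this tiny MLP just shows the shape of the idea.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(32, 64)  # toy "dataset": 32 flattened 8x8 images

for step in range(100):
    noise = torch.randn_like(images)    # the "static"
    noisy = 0.7 * images + 0.3 * noise  # blend each image with static
    pred = model(noisy)                 # model tries to recover the noise
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Nothing above stores a training image. The weights end up encoding
# statistical regularities ("certain pixel distributions"), which is what
# lets a trained model turn pure static into a new image at generation time.
```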

    Second, posting a picture implies consent for people to see and learn from it, but that doesn’t imply consent for people to use it however they want. A 16-year-old girl posting pictures of her birthday party isn’t really consenting to people using them to generate pornography based on her body. There’s also the issue of copyright, which is there to protect your works from just being used by anyone. (Yes, it’s abused by corporations, don’t bother trying to bring that up, I’m already pissed at Disney.) But even when people say specifically that they don’t want their art used for AI, even prominent artists like Miyazaki, that doesn’t stop AI companies from taking those images and doing the one thing they don’t consent to: scraping them.

    I agree, posting art online doesn’t give others the right to do anything they want with it. However, there’s a difference between viewing and learning from art versus directly copying or redistributing it. AI models don’t store or reproduce exact images — they extract statistical representations and blend features across many sources. They aren’t taking a single image and copying it. That’s why, legally and technically, it isn’t considered theft. Equating all AI art generation with nonconsensual exploitation like kiddie porn is conflating separate issues: ethical misuse of outputs is not the same as the core technology being inherently unethical.

    Also, re your point on copyright, it’s important to remember that copyright is designed to protect specific expressions of ideas, not general styles or patterns. AI-generated content that does not directly replicate existing images does not typically violate copyright, which is why lawsuits over this remain unresolved or unsuccessful so far.

    (As an aside, trying to compare AI-generated slop to all other art is apples and oranges. There’s much more to art than digital images, so saying that an AI image takes less energy to make than a Ming vase, or literally any other pottery for that matter, is a false equivalence. They are not the same even if they have similarities, so comparing their physical costs doesn’t track.)

    This thread and conversation is specifically talking about AI art, so the comparison and the data are still apt.

    Fourth, I’m not just talking about people using AI to make lies, I’m talking about AI making lies unintentionally, like telling people to put glue on pizza to keep the cheese on, or to eat rocks. AI doesn’t know what’s a joke or misinformation, and will present it as true, and people will believe it if they don’t know any better. It’s inaccurate, and can’t be accurate, because it doesn’t have a filter for its summaries. It’s like typing using only the suggested next word on your cell phone.
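    That “suggested next word” description maps directly onto how these systems generate text. Below is a minimal sketch of the loop; the three-word contexts and probabilities are made-up placeholders (a real model scores a huge vocabulary with a neural network), but the point survives: the loop picks what is likely, with no check on what is true.

```python
import random

# Hypothetical toy "language model": for each recent context, a made-up
# probability distribution over possible next words. Real LLMs compute
# these probabilities with a neural network, but the loop is the same.
next_word_probs = {
    "put glue on": {"pizza": 0.6, "paper": 0.3, "rocks": 0.1},
    "glue on pizza": {"to": 0.7, "keeps": 0.3},
}

def generate(context, steps=2):
    words = context.split()
    for _ in range(steps):
        key = " ".join(words[-3:])  # look at the last three words
        probs = next_word_probs.get(key)
        if probs is None:
            break
        # Sample the next word in proportion to its probability: the model
        # chooses what is *likely*, with no filter for what is *true*.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("put glue on"))  # e.g. "put glue on pizza keeps"
```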

    Concerns about misinformation, environmental impact, and misuse are real. That’s why the responsible use of AI must involve regulation, transparency, and ethical boundaries. But that’s very different from claiming that AI is an ‘eyeball stabbing machine’. That kind of absolutist framing isn’t helpful. It stifles productive discussion about how we can use these tools in ways that are helpful, including in medicine like you mention.

    I didn’t say to get rid of AI entirely, like I said, some applications are great, like breast cancer detection. But to say that the only issues people have with AI are because of capitalism is incorrect. It’s a poorly working machine, and saying that communism will make it magically not broken, when the problems are intrinsic to it, is a false and delusional statement.

    I have never once mentioned capitalism or communism.







  • Most of the data used to train AI, especially image models, came from publicly available content accessible by anyone. Artists have been doing this kind of thing for centuries: looking at existing work, internalizing styles, and creating something new. AI is doing that at scale — it’s not copying, it’s learning patterns. Just like humans do.

    Consent is important, absolutely, but if your art is posted publicly, you’re already consenting to it being seen and learned from. That’s how influence works. If someone draws in your style after following you online, that’s not theft. You might not like it, but it’s not unethical in itself.

    Also, let’s not pretend this conversation is only about artists’ rights. It’s become a catch-all for every fear around new tech. People are worried about the impact of AI on the environment? Understandable and totally valid, although the impact is way less than you might think:

    https://www.nature.com/articles/s41598-024-54271-x

    https://www.nature.com/articles/s41598-024-76682-6

    Misinformation? Agreed, serious concern and one I share. But saying AI is inherently unethical because of how some people use it is like saying the internet is inherently unethical because people post lies.

    We should absolutely talk about regulation, transparency, and compensation, but let’s not throw out the entire field because it challenges the comfort zone of some industries. Ethics matter, yes, but so does clarity. Not everything that feels unfair is a violation.





  • Again, plagiarism isn’t just ‘using others’ work’ — it’s about copying and passing it off as your own, often without transformation. AI doesn’t memorize or intentionally replicate specific works. It generates outputs based on probabilities, not stored text. That’s a big difference in mechanism.

    There’s a meaningful distinction between training and theft. A human artist studies other art — that doesn’t make their work plagiarism, even if it’s derivative.

    As I said, if the outputs are used irresponsibly — like someone passing off AI writing as their own research or using it to flood markets with low-effort content — that’s where it becomes a tool for exploitation. But the problem then is how it’s used, not the tech itself.


  • AI doesn’t reproduce individual works the way plagiarism does. It’s not like it’s pulling out someone’s article and copying it. It’s trained on patterns — like how a person who reads hundreds of books starts to pick up how stories are structured, but doesn’t memorize them word-for-word.

    If AI is trained on copyrighted material without permission, there’s a real argument about whether that’s fair or exploitative, but that’s more of a legal and ethical issue than a plagiarism one.


  • Blaming the AI for plagiarism is like blaming a calculator for a wrong answer. It depends on how it’s used. AI is a tool — and like any tool, it can be misused or used ethically depending on the person behind it.

    Plagiarism involves intent and deception, usually a person taking someone else’s work and claiming it as their own. AI can’t have intent, so it seems like your concerns in this regard should be directed at the user, not the content, for which Lemmy already has tools, i.e. blocking an individual.



  • Which seems like a silly method of comparing emissions, given that the human doesn’t exist for the purpose of creating images. The carbon footprint of the human is still present whether or not they are generating art.

    Whether it’s creating art with AI or via another means, a human must be involved or else the art doesn’t get created. They are an intrinsic part of the process, and so their footprint must be included.

    For an AI, the emissions are an addition to the global carbon footprint.

    For digital art (i.e. Photoshop, etc.), the computer use is in addition to the global carbon footprint. In photography, the construction of a camera is in addition to the global carbon footprint. The list goes on. Either we include the carbon footprint of all the tools involved in the creation of the piece or we don’t include any.
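    To make that bookkeeping concrete, here is a sketch of the two consistent accounting rules. The per-image figures are made-up illustrative placeholders, not measurements; only the structure of the comparison is the point.

```python
# Hypothetical per-image emissions in grams of CO2. These numbers are
# placeholders for illustration only, NOT real measurements.
TOOL_FOOTPRINT = {       # marginal cost of the tool itself
    "ai_image": 2.0,     # datacenter inference for one generation
    "digital_art": 1.0,  # desktop computer time for one piece
    "photography": 1.5,  # amortized camera manufacturing per photo
}
HUMAN_TIME_FOOTPRINT = {  # emissions of the person while creating
    "ai_image": 0.5,
    "digital_art": 60.0,
    "photography": 20.0,
}

def footprint(medium, include_human):
    """Rule A (include_human=False): count only the tools.
    Rule B (include_human=True): count the tools plus the human's time.
    The argument above is that whichever rule you pick, it has to be
    applied to every medium, not humans-for-one and tools-for-another."""
    total = TOOL_FOOTPRINT[medium]
    if include_human:
        total += HUMAN_TIME_FOOTPRINT[medium]
    return total

for medium in TOOL_FOOTPRINT:
    print(medium,
          footprint(medium, include_human=False),
          footprint(medium, include_human=True))
```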

    For the final point, a random social media post isn’t a profit-seeking endeavor, which is why it isn’t expected to pay for any images it uses. The normal accepted practice is to just give credit to the source. The same is not true for news articles, which do care about there being a watermark and are expected to pay for image use. Unless, of course, people start accepting the normal use of AI images, in which case a whole industry is disrupted to provide worse art.

    Whether it’s ‘accepted practice’ or not is irrelevant. Using a watermarked image for anything without permission or license is illegal and fails to reimburse the artist that created it, the very thing you accuse AI of doing.