Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • FinishingDutch@lemmy.world

    Honestly, this sort of thing is what’s killing any enjoyment of these platforms and any progress on them. Between the INCREDIBLY harsh censorship they apply and the way they inject their own spin on things like this, it’s nigh on impossible to get a good result these days.

    I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

    It’s even more annoying that you can’t even PAY to get rid of these restrictions and filters. I’d gladly pay to use one if it didn’t censor any prompt to death…

    • mellowheat@suppo.fi

      I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

      The thing is, if it’s injecting diversity into a place where there shouldn’t be any, that can usually be fixed by being more specific in the next prompt - not by writing ragebait articles about it.

      But yeah, I’d also be happy to be able to use an unhinged LLM once in a while.

      • rambaroo@lemmy.world

        Yeah, this is what people don’t get. These LLMs aren’t thinking about anything; they have zero awareness. If you don’t guide one towards exactly what you want in your prompt, it’s not going to magically know better.

        • FinishingDutch@lemmy.world

          Speaking for myself, it’s definitely not the lack of detail in the prompts. I’m a professional writer with an excellent vocabulary. I frequently run out of room with the prompts on Bing, because I like to paint a vivid picture.

          The problems arise when you use words that it either flags as problematic or misinterprets, or when it just injects its own modifiers. For example, I’ve had prompts with ‘black haired’ rejected on Bing, because… god knows why. Maybe it considered whatever it generated to be problematic. But if I use ‘raven-haired’, I get a good result.

          I don’t mind tweaking prompts to get a good result. That’s part of the fun. But when it just tells you ‘NO’ without explanation, that’s annoying. I’d much prefer an AI with no censorship. At least that way I know a poor result is due to a poor prompt.

        • intensely_human@lemm.ee

          Who says you need awareness to think? People process information subconsciously all the time.

    • Thorny_Insight@lemm.ee

      I couldn’t agree more. I recently read an article that criticized “uncensored AI” because it was capable of coming up with a plan for a Nazi takeover of the world, or something similar. Well, duh - if that’s what you asked for, then it should. If it truly is uncensored, it should be just as capable of plotting a similar takeover for gay furries, as well as countermeasures for both of those plans.

      • intensely_human@lemm.ee

        This points at a very crucial and deep divide in people’s social philosophy, which is how to ensure bad things are minimized.

        One major branch of this theory goes like:

        Make sure people are good people, and punish those who do wrong

        And the other major branch goes like:

        Make sure people don’t have the power needed to do wrong

        It’s a very deep, very serious divide in our zeitgeist, and we never talk about it directly, but I think we really should.

        (Or maybe we shouldn’t, because the conversation could be dangerous in the wrong hands)

        I’m in the former camp. I think people should have power, even if it enables them to do bad things.

    • crimsonpoodle@pawb.social

      Just run ollama locally and download uncensored versions - it runs on my M1 MacBook no problem and is at the very least comparable to ChatGPT-3. I’m unsure about images, but there should be some open-source options for those too. Data is king here: the more you use a platform, the better its AI (generally) gets, so don’t give the corporations the business.
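
      For anyone curious, here’s a minimal sketch of what that looks like, assuming the ollama daemon is installed and running; the model tag is just an example from the ollama library, and the Python client is optional (the plain `ollama run` CLI works just as well):

      ```python
      # Minimal sketch: chat with a locally hosted model via the ollama Python client.
      # Assumes the ollama daemon is running and the model has already been pulled,
      # e.g. with `ollama pull llama2-uncensored` on the command line.
      import ollama

      response = ollama.chat(
          model="llama2-uncensored",  # example tag from the ollama library
          messages=[{"role": "user", "content": "Summarise Moby-Dick in two sentences."}],
      )
      print(response["message"]["content"])
      ```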

      • FinishingDutch@lemmy.world

        I’ve never even heard of that, so I’m definitely going to check it out :D I’d much prefer running my own stuff rather than sending my prompts to god knows where. Big tech already knows way too much about us anyway.

        • intensely_human@lemm.ee

          I love teaching GPT-4. I’ve given permission for them to use my conversations with it as part of future training data, so I’m confident what I teach it will be taken up.

      • intensely_human@lemm.ee

        How powerful is ollama compared to, say, GPT-4?

        I’ve heard GPT-4 uses an enormous amount of energy to answer each prompt. Are the models runnable on personal equipment once they’re trained?

        I’d love to have an uncensored AI

        • crimsonpoodle@pawb.social

          Llama 2 is pretty good, but there are a ton of different models with different pros and cons; you can see some of them here: https://ollama.com/library. However, I would say that as a whole these models are generally slightly less polished than ChatGPT.

          To put it another way: when things are good, they’re just as good, but when things are bad, the AI will start going off the rails - for instance, holding both sides of the conversation, refusing to answer, or just saying goodbye. It’s more “wild west”, but you can also save the chats and go back to them, so there are ways to mitigate it, and things are only getting better.
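
          As a rough sketch of the “save the chats and go back to them” part - assuming the ollama Python client and an example model tag - something like this works:

          ```python
          # Sketch: keep the conversation as a plain list of messages and save it
          # to disk so it can be reloaded and continued later.
          # The model tag and filename are just examples.
          import json
          import ollama

          history = [{"role": "user", "content": "Draft a short sea shanty about debugging."}]
          reply = ollama.chat(model="llama2", messages=history)
          history.append({"role": "assistant", "content": reply["message"]["content"]})

          with open("chat_history.json", "w") as f:
              json.dump(history, f, indent=2)
          ```

          Reloading that JSON and passing it back in as the messages list picks the conversation up where it left off.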

    • barsoap@lemm.ee

      I want the tool to just do its fucking job.

      Download ComfyUI, download a model (I’d say head over to civitai), and have a blast. The only censorship you’ll see along the way is civitai hiding anything sexually explicit unless you have an account; the site becomes a lot more horny once you flip the switch in the settings.
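
      If you’d rather script it than wire up ComfyUI’s node graph, a roughly equivalent local setup is also possible with the Hugging Face diffusers library - to be clear, this is a separate tool, not ComfyUI, and the checkpoint ID below is just an example:

      ```python
      # Minimal local text-to-image sketch using the diffusers library (not ComfyUI).
      # The model ID is an example; any Stable Diffusion checkpoint in diffusers
      # format (including converted civitai checkpoints) can be substituted.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      )
      pipe = pipe.to("cuda")  # use "mps" on Apple Silicon; drop the float16 dtype for CPU

      image = pipe("a raven-haired pirate captain on a storm-lit deck").images[0]
      image.save("output.png")
      ```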

      • FinishingDutch@lemmy.world

        I’ll look into it for sure. I tried Automatic1111 last year with SD and a bunch of add-on stuff… it was finicky and didn’t quite get me what I was looking for.

        Thanks for the tip!

        • barsoap@lemm.ee

          Some stuff will always be finicky and fickle: the more you and the model disagree about what a very basic prompt means, the more work it is to get it to do what you want - and it might not be able to. On the other hand, poking around will likely inspire you to do something else that seems possible. AI as a medium is quite a bit more of a dialogue than oil on canvas: once you’ve mastered oil it becomes passive, not talking back any more, while AI models will keep talking back.

          That said, ComfyUI gives you a ton more control than A1111, and it’s also generally faster and more performant.

    • intensely_human@lemm.ee

      And, by establishing legal precedent that AIs can’t be trained on copyrighted content without purchasing licenses as if the content were going to be redistributed, we’ve ensured that people who aren’t backed by millions of dollars won’t be able to build their own AIs.