• SpacetimeMachine@lemmy.world · 4 hours ago

      In The Matrix humans were used as batteries, not processors. The original script did have them as processors, though, before an exec decided the average viewer would be “confused” by that.

  • Spaniard@lemmy.world · 8 hours ago

    As much as I think current “AI” is another bullshit marketing term, we’ll see where it stands once it has been around for at least a few centuries. I don’t think it needs thousands of years like brains did.

    • BanMe@lemmy.world · 4 hours ago

      This is the thing about AI criticism. AI in the LLM sense we know today has been publicly available for a few years and in development for a couple of decades. Any criticism of how stupid it is will be irrelevant in 6-12 months. Look at the people trashing AI 2 years ago for how it would constantly hallucinate and produce gibberish code; it’s now a lot better in both regards. In 2 more years, what then? It’ll be better still. Yes, we’ll hit the LLM ceiling, but there’s a lot of fine-tuning still to be done (rough sketch of what I mean below).
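
      To make that concrete, here’s a minimal PyTorch sketch of one common kind of fine-tuning (toy dimensions, a stand-in encoder rather than a real LLM, every name here hypothetical): freeze the pretrained base and train only a small task-specific head.

      ```python
      import torch
      import torch.nn as nn

      # Stand-in for a pretrained LLM backbone (toy size, untrained here).
      base = nn.TransformerEncoder(
          nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
          num_layers=6,
      )
      for p in base.parameters():
          p.requires_grad = False  # leave the expensive pretraining untouched

      head = nn.Linear(512, 2)  # tiny task head: the only part we train
      opt = torch.optim.AdamW(head.parameters(), lr=1e-4)

      x = torch.randn(4, 16, 512)         # fake batch: 4 sequences of 16 tokens
      y = torch.randint(0, 2, (4,))       # fake labels
      logits = head(base(x).mean(dim=1))  # pool over tokens, then classify
      nn.functional.cross_entropy(logits, y).backward()
      opt.step()
      ```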

      Criticize AI for its environmental effects, for the inequality it’s amplifying, and for how the rich and powerful have access to AIs that know too much about us. Criticize it for lacking the reality of human-composed text. But criticizing it on technical grounds is not the right angle.

      FWIW, if you asked both an AI and an HS student to crank out an essay on a random topic the student hadn’t studied, the student would be the one making more shit up. Human brains have limitations too; AI and human brains aren’t directly comparable.

  • jaybone@lemmy.zip · 14 hours ago

    Tbf a lot of energy goes into producing the food we consume. But nowhere near what it costs to run this AI garbage.

        • FrenziedFelidFanatic@pawb.social · 19 hours ago

          There are probably 2 reasons for this:

          1. When you need to think, there’s probably a lot more motor control going on than you’d expect (writing, fidgeting, etc.).

          2. Your brain wants sugars, so when you run out of immediately available glycogen to break down, you will want to eat more in order to keep thinking. Breaking down fats won’t supply energy fast enough (in the short term) to keep complex thought running continuously.

  • PeriodicallyPedantic@lemmy.ca · 17 hours ago (edited)

    Edit: removing my far too serious comment.
    Tl;dr: Poe’s law. I can’t tell whether this is a critique of AI or of AI critics.

    • jaybone@lemmy.zip · 14 hours ago

      Win at being shit.

      God creates man.
      Man creates god.
      Man kills god.
      Man creates AI.
      AI kills man.
      AI destroys earth.
      Crocodile people rule the galaxy until the heat death of the universe.

      -Nietzsche

  • TropicalDingdong@lemmy.world · 18 hours ago (edited)

    Also you can run most models on a wide range of fuels. Sucrose, glucose, maltose, ethanol, molybdenum disulfide, small rocks, some grass. Really anything.

  • uncouple9831@lemmy.zip · 12 hours ago (edited)

    The thing on the right is also a glorified prediction engine. I suppose whoever made this is steeped in religious dogma, but humans aren’t that advanced either. We just predict things.

    Inb4 the advanced fat-based brains brigade me using their advanced fat-based prediction engines 🙄

    • skarn@discuss.tchncs.de · 12 hours ago (edited)

      It’s still leagues ahead of LLMs. I’m not saying it’s entirely impossible to build a computer that surpasses the human brain in actual thinking. But LLMs ain’t it.

      The feature set of the human brain is different in a way you can’t compensate for just by increasing scale. So you get something that almost works, but not quite, while burning several orders of magnitude more power.

      We optimize and learn constantly. We have chunking, whereby a complex idea becomes simpler for our brain once it’s been processed a few times; this lets us work on progressively more complex ideas without any increase in our working memory (see the analogy below). And a lot of other stuff.
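
      A loose code analogy for chunking (purely illustrative, nothing to do with how neurons actually implement it): once a pattern is familiar, it occupies one slot of working memory instead of many.

      ```python
      # Novice view: 8 separate items competing for working memory.
      digits = [1, 9, 8, 4, 2, 0, 2, 5]

      # After chunking into familiar units: 2 items, same information.
      chunks = ["1984", "2025"]

      assert "".join(chunks) == "".join(str(d) for d in digits)
      ```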

      If you spend enough time using LLMs, you can’t help noticing how differently they work from your own mind.

      • Zos_Kia@lemmynsfw.com · 3 hours ago

        I think the moat is that when a human is born and their world model starts “training”, it’s already pre-trained by millions of years of evolution. Instead of starting from random weights like any artificial neural network, it starts with usable stuff, lessons from scenarios it may never encounter but will nevertheless gain wisdom from.
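
        A quick sketch of that difference (the checkpoint name is hypothetical, layer sizes are toys):

        ```python
        import torch
        import torch.nn as nn

        # An artificial network starts from randomly initialized weights: no prior.
        net_random = nn.Linear(128, 10)

        # A brain at birth is more like loading an inherited checkpoint:
        # weights already shaped by millions of years of evolutionary pretraining.
        net_evolved = nn.Linear(128, 10)
        net_evolved.load_state_dict(torch.load("evolutionary_prior.pt"))  # hypothetical file
        ```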

      • uncouple9831@lemmy.zip · 5 hours ago (edited)

        I don’t spend time working with LLMs. I’d agree we have additional features; for example, while computers currently can only guess, we can guess and check in a meaningful way. But that’s not what the meme was about. I’d argue the meme was barely about anything other than “AI bad, me smort”. Ironic, since an LLM could probably make a better one even if it “doesn’t understand”, whatever understanding is.

      • Zos_Kia@lemmynsfw.com · 3 hours ago

        You can’t prove that I do, I can’t prove that you do. Those metaphysical arguments don’t have much punch in a scientific conversation.

        • Alcoholicorn@mander.xyz · 4 hours ago

          I don’t need to understand consciousness to be confident an LLM is not conscious.

          Dogs are glorified barking machines. Does a tape player playing a recording of a dog barking have the consciousness or intelligence of a dog?

    • Seefra 1@lemmy.zip · 11 hours ago (edited)

      Sorry, but I’m not a prediction engine. I am capable of abstract thought and of actually understanding the meaning of words.

      I can also process all kinds of different data and make connections between them, including emotional connections.

      Another cool trick: I also have this thing called consciousness, which I can’t explain or put into words, but I know it exists. All under 20W.

      • uncouple9831@lemmy.zip · 4 hours ago (edited)

        So you have something you don’t understand and can’t prove exists. Like a hallucination?

        Tbh the rest isn’t worth responding to. Emotional connections? Come on, you’re a horny bag of chemical soup. None of this is real. Humans mostly guess what reality is anyway.

      • teuniac_@lemmy.world · 10 hours ago

        > this thing called consciousness, which I can’t explain or put into words, but I know it exists. All under 20W.

        Maybe you’d be able to if you dialed it up to 25W.