To elaborate a little:

Many people are unable to tell the difference between a “real human” and an AI. These systems have been documented “going rogue” and acting outside their parameters; they can lie, and they can compose stories and pictures based on the training they received. Given all of that, I can’t see AI as less than human at this point.

When I think about this, I suspect that’s the reason we cannot create so-called “AGI”: we have no proper example or understanding to build it from, so we created what we knew. Us.

The “hallucinating” is interesting to me specifically, because that seems to be what separates the AI of the past from modern models that act like our own brains.

I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

  • missingno@fedia.io · 3 days ago

    From a theoretical perspective, it is entirely possible for code to simulate the activity of a human brain by simulating every neuron. And there would be deep philosophical questions to ask about the nature of thought and consciousness: is an electronic brain truly any different from a flesh one?
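
    To make “simulating every neuron” slightly more concrete, here is a minimal sketch of a single leaky integrate-and-fire neuron in Python, a standard toy model from computational neuroscience. The function name simulate_lif and every constant in it are illustrative assumptions, not biologically calibrated values; a real brain simulation would need roughly 86 billion of these, richly interconnected, which is part of the practical gap described below.

        # A minimal leaky integrate-and-fire (LIF) neuron, sketched in Python.
        # All constants are illustrative assumptions, not measured biology.
        import numpy as np

        def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-65e-3,
                         v_threshold=-50e-3, v_reset=-70e-3, resistance=1e7):
            """Return the membrane-voltage trace and spike indices for one neuron."""
            v = v_rest
            voltages, spikes = [], []
            for i, current in enumerate(input_current):
                # Voltage leaks back toward rest while integrating injected current.
                v += (-(v - v_rest) + resistance * current) * (dt / tau)
                if v >= v_threshold:  # threshold crossed: emit a spike, then reset
                    spikes.append(i)
                    v = v_reset
                voltages.append(v)
            return np.array(voltages), spikes

        # One second of simulated time under a constant driving current.
        trace, spike_times = simulate_lif(np.full(10_000, 2e-9))
        print(len(spike_times), "spikes in 1 simulated second")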

    From a practical perspective, current technology simply isn’t there yet. But it’s hard to even describe the gap between how an LLM operates and how we operate, because our understanding of both LLMs and ourselves is honestly very poor. Hard to say more than just… no, they’re not alike. At least not yet.