• SIGSEGV@sh.itjust.works
    1 year ago

    So when it happens, you’ll change your mind? My point is that what we have today is modeled on the interactions of the human brain: neural networks. You can say, “They’re just guessing the next word based on mathematical models”, but isn’t that exactly what you’re doing?

    Point to the reason why what comes out of your mouth is any different. Is it because your network is bigger and more complicated? If that’s the case, then GPT-4, being a larger model, is closer to being human than GPT-3 was.

    I just don’t get your point at all.
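
    For concreteness, here’s a minimal sketch of what “guessing the next word” means mechanically: the model assigns a score to every token in its vocabulary, turns those scores into probabilities, and samples one. The vocabulary and scores below are invented for illustration; a real model produces the scores with a large neural network.

    ```python
    import math
    import random

    # Hypothetical vocabulary and raw scores (logits) for the next token.
    # In a real LLM these would come from a forward pass through the network.
    vocab = ["the", "cat", "sat", "on", "mat"]
    logits = [2.0, 0.5, 1.2, 0.1, 1.8]

    # Softmax: turn raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sample the next token in proportion to its probability.
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print(next_token)
    ```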

    • PupBiru@kbin.social
      1 year ago

      and if that is indeed the point: that the difference is simply size, then what would such a law look like? surely it would have to specify the size at which a neural network becomes capable of deriving works

      but then that’s just an arbitrary number, because we simply don’t know what it would be

      • SIGSEGV@sh.itjust.works
        1 year ago

        I don’t even think that matters much, right? Current LLMs already outcompete humans at many tasks. I think we’re already past the threshold, at least in some regards. That is to say, I don’t think there is a hard line, because it depends on what your testing criteria are.