• zxqwas@lemmy.world · +84/−2 · 1 day ago

    Either you genuinely believe you are 18 (or 24, or 36, it doesn’t matter) months away from curing cancer, or you’re not.

    What would we, as outsiders, observe if they had told their investors two years ago that they were 18 months away, and the cash now runs out in 3 months?

    Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.

    • agamemnonymous@sh.itjust.works · +1/−4 · 10 hours ago

      The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.

      Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.

      ¹ Logistic growth, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
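
      A minimal numerical sketch of that footnote (not from the original comment; the logistic parameters L, k and t0 are arbitrary, purely illustrative choices): well below its ceiling a logistic curve is practically indistinguishable from an exponential one, and only flattens out near the upper limit.

          import math

          # Illustrative only: arbitrary logistic parameters, not a model of anything real.
          L = 1000.0   # carrying capacity (the "upper limit")
          k = 0.5      # growth rate
          t0 = 20.0    # midpoint of the S-curve

          def logistic(t):
              # Logistic growth: saturates at L for large t.
              return L / (1.0 + math.exp(-k * (t - t0)))

          def exponential(t):
              # Pure exponential matched to the logistic curve's early behaviour.
              return logistic(0) * math.exp(k * t)

          # Well before the midpoint the two are nearly identical;
          # near and past the midpoint the logistic curve flattens out.
          for t in (0, 5, 10, 15, 20, 30):
              print(f"t={t:>2}  logistic={logistic(t):10.2f}  exponential={exponential(t):12.2f}")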

      • Echo Dot@feddit.uk · +2 · 4 hours ago

        The problem with that is they can’t actually point to a metric where, once the number passes some threshold, we’ll have ASI. I’ve seen graphs with a dotted line labelled “ape intelligence” and, a bit higher up, another dotted line labelled “human intelligence”. But there’s no meaningful way they could actually have placed human intelligence on a graph of AI complexity, because brains are not AI and shouldn’t be on the graph at all.

        So even if things do improve exponentially, there’s no way they can know how long it will be until we get AGI.

      • dreugeworst@lemmy.ml · +8 · 9 hours ago

        Why is it very likely to do that? We have no evidence that this is true at all, and several decades of slow, plodding AI research suggest that real improvement comes incrementally, as in other research areas.

        To me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums.

      • SuperNerd@programming.dev · +4 · 9 hours ago

        Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets, and LLMs will not get there.