The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.
Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.
¹ Logistic, actually (an S-curve that eventually flattens out), but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
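(For anyone wondering why “close enough” holds early on, here’s a minimal sketch, assuming the standard logistic form with a hypothetical ceiling L, growth rate k, and midpoint t₀ — illustrative parameters only, not measured quantities.)

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

For $t \ll t_0$ the denominator is dominated by the exponential term, $e^{-k(t - t_0)} \gg 1$, so

$$f(t) \approx L\, e^{k(t - t_0)} = \left(L e^{-k t_0}\right) e^{k t},$$

i.e. plain exponential growth at rate $k$; the flattening toward the ceiling $L$ only becomes visible as $t$ approaches $t_0$.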
The problem with that is that nobody can point to a metric and a threshold such that once the number passes it, we’ll have ASI. I’ve seen graphs with a dotted line labeled “ape intelligence” and, a bit higher up, another labeled “human intelligence.” But there’s no meaningful way to place human intelligence on a graph of AI capability, because brains aren’t AI and shouldn’t be on the graph at all.
So even if things do increase exponentially, there’s no way anyone can know how long until we get AGI.
why is it very likely to do that? we have no evidence that this is true, and several decades of slow, plodding ai research suggest that real improvement comes incrementally, like in other research areas.
to me, your suggestion sounds like the result of the logical leaps made by yudkowsky and the people on his forums
Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers, let alone rockets, and LLMs will not get there.