• jarfil@beehaw.org · 9 months ago

    Well, that particular demo is more of a cockroach than a toddler; the neural network used seems to have fewer than a million weights.

    Moravec’s paradox holds true on two fronts:

    1. Computing resources required
    2. Lack of formal description of a behavior

    But keep in mind that was in 1988, about 20 years before the first 1024-core multi-TFLOP GPU was designed, and that by training an NN we’re brute-forcing away the lack of a formal description of the algorithm.

    We’re now looking toward neuromorphic hardware on the trillion-“core” scale, so computing resources will soon be a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler… right up until you copy the first trained NN to an identical body and re-training costs drop to O(0)… which is far less than training even a million toddlers at once.
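    The “copy instead of re-train” point can be sketched in a few lines. This is a hypothetical toy, not any real robotics stack: the “trained brain” is just a dict of weight matrices standing in for a trained network, and cloning it is a plain data copy rather than another training run.

```python
import copy

# Hypothetical stand-in for a trained NN: a dict of weight matrices.
# Producing these weights (training) is the expensive part.
trained_brain = {
    "layer1": [[0.5, -0.2], [0.1, 0.9]],
    "layer2": [[1.3], [-0.7]],
}

# "Re-training" a fleet of identical bodies reduces to duplicating the weights,
# which costs only memory bandwidth, not more training compute.
clones = [copy.deepcopy(trained_brain) for _ in range(1000)]

# Every clone behaves identically to the original, with no extra training.
assert all(clone == trained_brain for clone in clones)
```

    A toddler has no equivalent operation: each one has to be trained from scratch, which is the asymmetry the comment is pointing at.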