• mindbleach@sh.itjust.works · 19 hours ago

    This is the real future of neural networks. Trained on supercomputers - runs on a Game Boy. Even in comically large models, the majority of weights are negligible, and local video generation will eventually be taken for granted.

    Probably after the crash. Let’s not pretend that’s far off. The big players in this industry have frankly silly expectations. Ballooning these projects to the largest sizes money can buy has been illustrative, but DeepSeek already proved LLMs can be dirt cheap. Video’s more demanding… but what you get out of ten billion weights nowadays is drastically different from six months ago. A year ago, video models barely existed. A year from now, the push toward training on less and running on less will presumably be a lot more pressing.
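
    The "majority of weights are negligible" claim is what magnitude pruning exploits. A toy sketch (illustrative numpy, not any real model; the weight mixture here just mimics how trained layers concentrate magnitude in a minority of weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: as in trained networks, a small minority of weights
# carry most of the magnitude (here ~10% "important" weights).
n = 1000
important = rng.random((n, n)) < 0.10
w = np.where(important,
             rng.normal(0, 1.0, (n, n)),    # the few weights that matter
             rng.normal(0, 0.01, (n, n)))   # the many near-zero weights
x = rng.normal(0, 1, n)

# Magnitude pruning: zero out the 90% of weights closest to zero.
threshold = np.quantile(np.abs(w), 0.90)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# The layer's output barely changes despite dropping 90% of weights.
y, y_pruned = w @ x, w_pruned @ x
corr = np.corrcoef(y, y_pruned)[0, 1]
print(f"kept {np.mean(w_pruned != 0):.0%} of weights, output correlation {corr:.3f}")
```

    Real pruning pipelines then fine-tune the sparse model to recover the small accuracy loss; the point is only that the weight count and the useful capacity are very different numbers.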

    • ThorrJo@lemmy.sdf.org (OP) · 1 hour ago

      I’m very interested in this approach because I’m heavily constrained by money. So I’m going to be looking (in non-appliance contexts) to develop workflows where genAI can be useful when limited to small models running on constrained hardware. I suspect some creativity can yield useful tools within these limits, but I am just starting out.

  • hendrik@palaver.p3x.de · edited 1 day ago

    The network aspect makes smart appliances smart. For example I can program the washing machine to be ready with the laundry when I get home… I don’t think it’s super useful to give it a microphone, computer and several gigabytes of RAM, just so I can go down to the basement and talk to the darn thing… It already has a bunch of LEDs to tell me the status and I think I’m perfectly fine with that.

    I think I’m more for edge computing. Have one smart home hub with the hardware to do voice and AI compute (with a slightly larger AI model on it), and then have one protocol for the appliances to communicate within my own WiFi. Something like Home Assistant does, just more towards general edge compute. I think that’s more useful than spending extra money on every item in the household just to replace the touchscreen with a speaker/mic and fit in the cheapest conversation agent the budget for that specific device allows.
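
    The hub side of that is mostly intent routing: one place does the speech/AI, appliances only receive simple commands over the LAN. A minimal sketch (device names and command topics are made up for illustration; Home Assistant's actual message bus looks different):

```python
# Hypothetical edge-hub routing table: appliance name -> LAN topic,
# loosely MQTT-style. The appliances never need their own AI stack.
APPLIANCES = {
    "washing machine": "home/basement/washer",
    "dishwasher": "home/kitchen/dishwasher",
}

def route(intent: dict) -> tuple[str, str]:
    """Turn a parsed intent from the hub's speech model into
    a (topic, payload) pair to publish on the local network."""
    device = intent["device"]
    if device not in APPLIANCES:
        raise KeyError(f"unknown appliance: {device}")
    return APPLIANCES[device], intent["command"]

# e.g. the hub's model parsed "have the washer done by 17:00"
topic, payload = route({"device": "washing machine",
                        "command": "finish_by:17:00"})
print(topic, payload)
```

    The design point is that the expensive compute (mic array, model, RAM) exists once, and each appliance only needs a cheap network radio and its existing controller.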

    AI chips are welcome, though. And I’m sure we have some useful applications for them. 1 TOPS isn’t much compared to a computer or graphics card. But certainly not bad either. I think we have several single-board computers with similar specs. Just not the Raspberry Pi, which comes without an NPU.
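
    For a rough sense of scale, back-of-envelope arithmetic at 1 TOPS (model sizes below are illustrative ballpark figures, and real utilization is well under 100%):

```python
def inferences_per_second(macs: float, tops: float = 1.0) -> float:
    """Peak inferences/s for a model needing `macs` multiply-accumulates,
    counting 2 ops per MAC, on an NPU rated at `tops` TOPS."""
    return tops * 1e12 / (2 * macs)

# Illustrative workloads (rough MAC counts, not measured figures):
models = {
    "keyword spotter (~5M MACs)": 5e6,
    "MobileNet-class vision model (~300M MACs)": 300e6,
    "1B-param LLM, one token (~1G MACs)": 1e9,
}

for name, macs in models.items():
    print(f"{name}: ~{inferences_per_second(macs):,.0f}/s at full utilization")
```

    So 1 TOPS is plenty for always-on keyword spotting or modest vision, and marginal-but-usable for a small language model, which fits the "useful but not a graphics card" framing.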

  • naeap@sopuli.xyz · edited 1 day ago

    Well, the investments are not only about the massive power consumption of serving AI to all the users; most of it is needed for the research and training.

    If you just take the trained model and run it on some capable hardware, of course it will produce results.

    I don’t even know how this can be a comparison