• adam@kbin.pieho.me · 1 year ago

    ITT people who don’t understand that generative ML models for imagery take up TB of active memory and TFLOPs of compute to process.

    • hotdoge42@feddit.de · 1 year ago

      That’s wrong. You can do it on your home PC with Stable Diffusion.
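
      For the skeptical, here’s a minimal sketch of doing exactly that with the Hugging Face diffusers library (the model ID and prompt are just illustrative, and you’d want a few GB of VRAM):

      ```python
      # Minimal local text-to-image with Stable Diffusion.
      # Assumes: pip install torch diffusers transformers accelerate
      import torch
      from diffusers import StableDiffusionPipeline

      # The fp16 weights are roughly 2 GB and fit on a consumer GPU.
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )
      pipe = pipe.to("cuda")  # or "cpu" with float32, just much slower

      image = pipe("a photo of an astronaut riding a horse").images[0]
      image.save("astronaut.png")
      ```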

      • ᗪᗩᗰᑎ@lemmy.ml · 1 year ago

        And a lot of those require models that are multiple gigabytes in size, which then need to be loaded into memory and processed on a high-end video card that would generate enough heat to ruin your phone’s battery if you could somehow shrink it to fit inside a phone. This just isn’t feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
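
        To put rough numbers on “multiple gigabytes” (the parameter counts below are approximate public figures, not exact): weight size is roughly parameter count times bytes per parameter.

        ```python
        # Back-of-envelope weight footprint for local image models.
        # Parameter counts are rough approximations, not exact figures.
        models = {
            "Stable Diffusion 1.5 (UNet + VAE + text encoder)": 1.1e9,
            "SDXL base": 3.5e9,
        }
        for name, n_params in models.items():
            for bytes_per_param, precision in [(4, "fp32"), (2, "fp16")]:
                gib = n_params * bytes_per_param / 2**30
                print(f"{name}: ~{gib:.1f} GiB at {precision}")
        # SD 1.5 lands around 2 GiB at fp16: gigabytes, not terabytes.
        ```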

        • diomnep@lemmynsfw.com · 1 year ago

          “He’s off by multiple orders of magnitude, and he doesn’t even mention the resource that GenAI models require in large amounts (GPU), but he’s not wrong”

  • weew@lemmy.ca · 1 year ago

    So much for the brilliant AI-specialized Tensor processor.

    It’s basically just a mediocre processor that offloads interesting things to the mothership.

    • DoucheBagMcSwag@lemmy.dbzer0.com · 1 year ago

      Yep. What a joke. Goes to show you that Google could make these Pixel features available to all Android devices if they wanted to.

  • Steeve@lemmy.ca · 1 year ago

    Yeah, obviously. The storage and compute required to actually run these generative AI models are absolutely massive; how would that fit in a phone?

    • Contend6248@feddit.de · 1 year ago

      When a company praises the groundbreaking AI capability of a new SoC they have built, you might get the idea that it’s doing these tasks on said SoC.

      Why would you think otherwise?

      A list of what this phone does offline and what it doesn’t would be great.

  • Mojojojo1993@lemmy.world · 1 year ago

    Isn’t that kinda the dream? We have devices that remote the OS, so we get a super powerful device that keeps getting updated and upgraded. We just need a receiver?

    Isn’t that what we want? It cuts down the bulk of devices: just a slab with a battery, screen, modem, and an SoC that can drive the remote application?

    • botengang@feddit.de · 1 year ago

      Sometimes that’s what people dream about. On the other hand, that hybrid cloud model means giving up the last remnants of computing autonomy and control over the devices we own.

    • 0x2d@lemmy.ml · 1 year ago

      Pixels have never been “stock Android”; instead they have been close to stock, with extra AI features on top (e.g. Magic Eraser, Now Playing, etc.).

      Surprisingly, they’re the easiest phones to degoogle as well.