• Rentlar@lemmy.ca · 3 hours ago

    It should be noted: how much will that affect the lifespan of those GPUs running double duty at 8×?

    AI’s still replaceable, but it will emulate human-like burnout.

    • bcovertigo@lemmy.world · 2 hours ago

      From their linked study:

      “Filling this utilization gap requires us to better saturate each GPU by enabling it to serve requests from multiple models. As such, we aim to conduct effective GPU pooling. By sharing a GPU between as many models as possible without violating the service-level objective (SLO), GPU pooling promises great reductions in operational expenses (OPEX) for concurrent LLM serving.”

      The “saturate each GPU” part seems to support your idea.
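
      To make the pooling idea concrete, here is a toy sketch (my own illustration, not the paper’s system): a single GPU worker drains a shared queue of requests from several models instead of sitting idle on one dedicated model.

      ```python
      # Toy illustration of GPU pooling (assumed names, not the paper's API):
      # one GPU worker serves requests from multiple models via a shared queue.
      import queue
      import threading
      import time

      gpu_queue: "queue.Queue[tuple[str, str]]" = queue.Queue()

      def gpu_worker() -> None:
          # One GPU stays saturated by serving whichever model has work.
          while True:
              model, prompt = gpu_queue.get()
              time.sleep(0.01)  # stand-in for a forward pass
              print(f"[{model}] served: {prompt!r}")
              gpu_queue.task_done()

      threading.Thread(target=gpu_worker, daemon=True).start()

      # Requests from three different models share the same GPU.
      for model in ("model-a", "model-b", "model-c"):
          gpu_queue.put((model, "hello"))
      gpu_queue.join()
      ```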

      • Rentlar@lemmy.ca · 6 minutes ago

        I do expect operational savings from this optimization, but over a fixed time period my guesstimate would be 2-5x savings rather than the reported 9x.
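
        A rough back-of-the-envelope way to see why (all numbers below are my assumptions, not figures from the study): the achievable consolidation ratio is roughly the SLO-safe saturation ceiling divided by the baseline per-model utilization, so a 9x claim implies dedicated GPUs sitting near 10% utilization.

        ```python
        # Back-of-the-envelope consolidation estimate. All numbers are
        # assumptions for illustration, not figures from the study.

        def consolidation_ratio(baseline_util: float, target_util: float) -> float:
            """How many single-model workloads one pooled GPU can absorb,
            assuming loads pack cleanly and the SLO caps usable saturation."""
            return target_util / baseline_util

        for baseline in (0.10, 0.20, 0.40):  # assumed per-model utilization
            ratio = consolidation_ratio(baseline, 0.90)  # assumed 90% SLO-safe ceiling
            print(f"baseline {baseline:.0%} -> ~{ratio:.1f}x fewer GPUs")

        # Output: 10% -> ~9.0x, 20% -> ~4.5x, 40% -> ~2.2x. A 9x saving
        # needs ~10% baseline utilization; 2-5x corresponds to ~18-45%.
        ```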