It should be noted: how much will that affect the lifespan of those GPUs running double duty at x8?
AI's still replaceable, but it will emulate human-like burnout.
From their linked study:
“Filling this utilization gap requires us to better saturate each GPU by enabling it to serve requests from multiple models. As such, we aim to conduct effective GPU pooling. By sharing a GPU between as many models as possible without violating the service-level objective (SLO), GPU pooling promises great reductions in operational expenses (OPEX) for concurrent LLM serving.”
The “saturate each GPU” part seems to support your idea.
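For intuition, here's a minimal sketch of that packing idea, assuming a made-up SLO and a toy latency model (this is my reading of the quoted goal, not the paper's actual scheduler):

```python
SLO_MS = 200  # hypothetical latency target, not from the paper

def estimated_latency_ms(models_on_gpu):
    # Toy cost model: latency grows with contention on the shared GPU.
    return 40 + 25 * len(models_on_gpu)

def pack(models):
    """Greedily add models to a GPU while the estimated latency stays under SLO."""
    gpus = [[]]
    for m in models:
        for gpu in gpus:
            if estimated_latency_ms(gpu + [m]) <= SLO_MS:
                gpu.append(m)
                break
        else:
            gpus.append([m])  # no GPU has headroom; provision another
    return gpus

print(pack([f"model-{i}" for i in range(8)]))
# -> 6 models fit on the first GPU, the remaining 2 spill to a second
```

The point is just that the SLO caps how far you can saturate a GPU, which is also where the lifespan question above comes in.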
I do expect operational savings from this optimization, but over a fixed time period my guesstimate would be a 2-5x savings rather than the reported 9x.
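Rough arithmetic behind that guesstimate (every number here is an assumption of mine, not from the paper): start from the consolidation ratio, then discount it for the extra wear of running fewer GPUs near saturation.

```python
def opex_savings(models=8, pooled_gpus=1, gpu_hourly_cost=2.0,
                 hours=24 * 365, wear_penalty=0.5):
    # Dedicated baseline: one GPU per model for the whole window.
    dedicated = models * gpu_hourly_cost * hours
    # Pooled case, with an extra cost fraction for earlier replacement
    # of GPUs run near saturation (pure assumption on my part).
    pooled = pooled_gpus * gpu_hourly_cost * hours * (1 + wear_penalty)
    return dedicated / pooled

print(f"{opex_savings():.1f}x")               # ~5.3x under these assumptions
print(f"{opex_savings(pooled_gpus=2):.1f}x")  # ~2.7x with looser packing
```

Tweak the wear penalty and the packing density and you land somewhere in the 2-5x range; you'd only see the full 9x if saturating the hardware were free.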