AMD prepares Ryzen 7 9850X3D and Ryzen 9 9950X3D2 CPUs with higher clocks and full 3D V-Cache on all cores. See what improvements are coming.

    • fonix232@fedia.io
      5 hours ago

      No worries mate, we can’t all be experts of every field and every topic!

      Besides, there are other AI models that are relatively small and depend on processing power more than RAM. For example, there’s a bunch of audio analysis tools that don’t just transcribe speech but also diarise it (split it up by speaker), extract emotional metadata (e.g. certain models can detect sarcasm quite well, others spot general emotions like happiness or sadness or anger), and so on. Image categorisation models are also super tiny, though usually you’d want to load them into the DSP-connected NPU of appropriate hardware (e.g. a newer-model “smart” CCTV camera would use a SoC with an NPU to load detection models into, and do the processing for detecting people, cars, animals, etc. onboard instead of on your NVR).

      Also, by my count, even somewhat larger training workloads, such as microWakeWord training, would fit into the 192MB of V-Cache.
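      The fit argument above is easy to sanity-check with arithmetic. A rough sketch (the parameter counts below are illustrative assumptions for typical models in each class, not measured figures, and the 192MB figure assumes the rumored dual 96MB stacked CCDs):

```python
# Back-of-envelope: do small-model working sets fit in ~192 MB of L3?
# Parameter counts and dtypes are assumptions for illustration.
CACHE_MB = 192  # assumed full dual 3D V-Cache L3 (2 x 96 MB)

models = {
    # name: (approx. parameter count, bytes per parameter)
    "wake-word CNN (microWakeWord-scale)": (100_000, 4),    # ~100k fp32 params
    "MobileNetV2-class image classifier":  (3_500_000, 4),  # ~3.5M fp32 params
    "small speech-emotion model":          (5_000_000, 2),  # ~5M fp16 params
}

for name, (params, bytes_per) in models.items():
    mb = params * bytes_per / 1e6
    verdict = "fits" if mb < CACHE_MB else "does not fit"
    print(f"{name}: {mb:.1f} MB -> {verdict} in {CACHE_MB} MB L3")
```

      Even the largest of those comes to ~14 MB of weights, an order of magnitude under the cache size, leaving room for activations and optimizer state during small training runs.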

      • brucethemoose@lemmy.world
        5 hours ago

        Exactly! Not my area of expertise, heh.

        There might even be niches in LLM land, like Mamba SSM states, really tiny draft models, or other “cache”-type things fitting into that much L3. This might already be the case with the EPYC/TR setups some homelab folks use.
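        For the SSM-state idea specifically, the numbers work out well: a Mamba-130M-sized model keeps a tiny recurrent state per sequence. A rough sketch (dimensions assumed from the published Mamba-130M config, ignoring the even smaller per-layer conv state):

```python
# Per-sequence SSM state footprint, Mamba-130M-like dimensions (assumed).
d_model = 768
expand = 2
d_inner = d_model * expand   # 1536
d_state = 16
n_layers = 24
bytes_per = 2                # fp16

state_bytes = n_layers * d_inner * d_state * bytes_per
print(f"SSM state per sequence: {state_bytes / 1e6:.2f} MB")  # ~1.18 MB
```

        At roughly 1.2 MB per sequence, well over a hundred concurrent sequence states would sit entirely in 192 MB of L3.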

        It makes me wonder if the old AMD Radeon RX 6800 XT (with its 128MB of Infinity Cache) would be good at this sort of “small model” thing.