• 2 Posts
  • 92 Comments
Joined 6 months ago
Cake day: April 27th, 2024





  • I just bought a relatively decent folding bike, a Tern Link C8. I have no idea how repairable it is, but it is definitely simpler and better made than my previous bike, a Lectric XP Lite, which was just getting worse and worse after about 800 miles. So I definitely feel like bike makers are following automakers with this stuff, and it sucks.

    What I was hoping for was for electric bikes to get cheaper, lighter, and more efficient over time, but it feels like the opposite. Those ebikes are still like 3 grand, while the shitty, heavy, obtusely designed “cheap” ebikes still cost as much as a decent normal bike. I'm happy avoiding ebikes for now; they're just glorified motorcycles that still can't keep up with shitty impatient drivers.



  • Decent, for now, until ROCm & ZLUDA improve. I use NixOS and run my AI stuff in Docker containers, as that's the easiest way imo because of how fucked up the dependencies are, especially for ROCm.

    Basically, getting AMD working for this stuff right now means making sure certain versions of ROCm, certain versions of the projects, and certain versions of PyTorch all like each other. The most dependency hell of all dependency hells.
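    As a sketch of what that pinning looks like in practice (the index URL is PyTorch's ROCm wheel index; the exact version pairs here are illustrative examples, not an official compatibility matrix):

    ```
    # requirements-rocm.txt -- version pins are illustrative examples
    --index-url https://download.pytorch.org/whl/rocm6.1
    torch==2.4.0
    torchvision==0.19.0
    ```

    The point is that everything — the ROCm runtime, the torch wheel, and the project on top — has to agree on versions, so pinning them explicitly in one place saves a lot of pain.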

    So most projects have a hell of a time supporting ROCm, which means you mostly have to use alternative forks, and even when there is a ROCm version it is so rarely used that half the time no one knows whether it works. I will say you will have the EASIEST time by far if you use a 7900 XT, because most things are built to support that card. Otherwise, good luck. Get used to using environment variables such as:

    HSA_OVERRIDE_GFX_VERSION=11.0.0 (or 10.3.0 if that one doesn't work; I use 11.0.1. These are codes for GPUs supported by ROCm, in case yours isn't.)
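    For example, a common pattern is to export the override in the shell that launches the app (the launched command is just an example):

    ```shell
    # HSA_OVERRIDE_GFX_VERSION makes ROCm treat your card as a supported
    # gfx target: 11.0.0 ~ gfx1100 (RDNA3), 10.3.0 ~ gfx1030 (RDNA2).
    export HSA_OVERRIDE_GFX_VERSION=11.0.0
    # Then launch whatever consumes ROCm from this same shell, e.g. ollama serve.
    echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
    ```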

    TL;DR: it's all a big mess right now, but it does work if you fuck with it a bunch. I got my 7800 XT working nicely with Ollama + OpenWebUI for text generation. For Stable Diffusion it's definitely a shit show, at least for my preferred UI, InvokeAI; it doesn't work at all, it only uses my CPU (also AMD, so maybe some fuckery). However, I don't regret it, as AMD is truly the best, especially on Linux, but definitely not for AI as it currently stands.



  • jaxiiruff@lemmy.zip to Linux@lemmy.ml: ZLUDA's third life
    13 days ago

    Oh hell yeah, this will save us AMD users so much headache. I really don't understand why it was taken down recently.

    From what I understood, AMD had a problem with ZLUDA, but now suddenly it's okay again? Whatever; as long as ROCm is improved or benefits from this, I'm happy.











  • I subbed to Jerma for a long time on Twitch and bought merch when he was more active, and I gave a couple dollars to CallMeKevin when he streamed.

    The cool thing about donating is that they respond to your message, if they aren't a dick who just has TTS going 24/7. For example, Jerma didn't really push donating; he just enjoys reading chat, and I actually got my message read by him once. I feel like using TTS is a cheap, soulless cop-out of interaction that most suckers feel they get value out of just because it was shown or heard on stream.