• 0 Posts
  • 33 Comments
Joined 14 days ago
Cake day: April 10th, 2025




  • I’m also annoyed by how “in your face” it has been, but that’s just how marketing teams have ridden the hype train. I sure do hope it wanes, because I’m just as sick of the “ASI” psychos. It’s just a tool. A novel one, but a tool nonetheless.

    What do you mean “black box”? If you mean [INSERT CLOUD LLM PROVIDER HERE] then yes. So don’t feed sensitive data into it then. It shouldn’t be in your codebase anyway.

    Or run your own LLMs

    Or run a proxy to sanitize the data locally on its way to a cloud provider

    There are options, but it’s really cutting edge so I don’t blame most orgs for not having the appetite. The industry and surrounding markets need to mature still, but it’s starting.
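A sanitizing proxy can be as simple as a redaction pass over every outbound prompt before it leaves your network. A minimal Python sketch of that idea (the patterns, labels, and function names are mine, purely illustrative, not any particular product):

```python
import re

# Illustrative redaction rules -- a real deployment would need far more
# (names, internal hostnames, customer IDs, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def sanitize(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is forwarded to a cloud LLM provider."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(sanitize("Contact bob@example.com, key sk-abcdef1234567890XY"))
# -> Contact [EMAIL], key [API_KEY]
```

The proxy would run this on-prem, so the cloud provider only ever sees the placeholder tokens.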

    Models are getting smaller and more capable, to the point of running on consumer CPUs in some cases. They aren’t the genius chatbots the marketing dept wants to sell you. They won’t mop your floors or take your kid to soccer practice, but applications can be built on top of them to produce impressive results. And we’re still so, so early in this new tech. It exploded out of nowhere, but the climb has been slow since then, and AI companies are starting to shift toward using the tool inside new products instead of just dumping it into a chat window.

    I’m not saying jump in with both feet, but don’t bury your head in the sand either. So many people are reactionary against AI without bothering to be curious. I’m not saying it’ll be existential, but it’s not going away, and I’m going to make sure my family and I are prepared for it, which means keeping myself informed and keeping my skill set relevant.







  • blinx615@lemmy.ml to Fuck AI@lemmy.world · The Perfect Response · 3 days ago · +3 / −7

    This is a myth pushed by the anti-AI crowd. I’m just as invested in my work as ever, but I’m now far more efficient. In the professional world we have code reviews and unit tests to catch mistakes, whether they come from junior devs or a hallucinating AI.
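The guardrails are the same regardless of who wrote the code. A toy sketch (the function and its edge cases are invented for illustration): review-stage unit tests that would catch a plausible hallucinated bug just as readily as a junior dev’s:

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# The edge cases a reviewer asks for, whether the author was a
# junior dev or an LLM: empty input, uneven final chunk, bad size.
assert chunk([], 3) == []
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
try:
    chunk([1], 0)
except ValueError:
    pass
else:
    raise AssertionError("size=0 must be rejected")
```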

    “Vibe coding” (which most people here seem to think is the only way to use AI) is moronic in a professional setting for anything other than a quick proof of concept. It just doesn’t work.







  • I got used EPYC stuff and a 3090, but it’s basically the same template; just a few more resources.

    • CPU: AMD EPYC 7542 (32 cores / 64 threads)
    • Motherboard: Supermicro H12SSL-i
    • Memory: Samsung DDR4 8×32GB
    • GPU: EVGA RTX 3090 FTW3 24GB

    However, I haven’t run into some of the issues you had. With the Proxmox host on wired Ethernet and my laptop on 5GHz Wi-Fi about 10ft from the access point, I can easily play Rocket League at 1440p 120Hz with no noticeable latency. I’m using Sunshine on a Windows VM and Moonlight on Fedora. It did, indeed, take a crapload of fiddling, and I consider myself pretty adept at these things, but it can be done. :D

    I also swap the GPU between two VMs. I have an Ubuntu VM I use for fiddling around with AI workloads. On that one, I just ssh in and the GPU is 100% dedicated to AI. Planning to add another GPU (or a few) in the future.
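For anyone curious how the GPU swap works on Proxmox: both VM configs get the same PCI passthrough line, and you just never boot the two VMs at the same time. A sketch of the relevant config fragment (the PCI address is a placeholder for whatever `lspci` shows on your host):

```
# /etc/pve/qemu-server/<vmid>.conf -- same line in both VM configs;
# only one of the two VMs may be running at once.
hostpci0: 0000:0a:00.0,pcie=1,x-vga=1
```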

    Can’t speak to remote connections, but my previous experience with cloud providers tells me it might be good enough for slow-paced games, but it will fail horribly on anything really latency-dependent. Best case, the latency is off by just enough to make you lose your mind; or worse, you get used to the weird remote latency and then get all screwed up when you play at home.