• 0 Posts
  • 74 Comments
Joined 1 year ago
Cake day: October 15th, 2024



  • Paradox owns the World of Darkness IP entirely right now. I wonder if this call to refocus their efforts on their core competencies will mean selling it off.

    They haven’t done too well with it. They’ve released several games, and I think some solo-dev interactive fiction titles have been the only ones getting positive reviews: Earthblood, Swansong, Bloodhunt, and now Bloodlines 2 have all done pretty poorly. Even the tabletop RPG has seen better days, with 5th edition causing quite the controversy, mostly around Werewolf. Hell, they even shut White Wolf down for a while because the books caused controversy.

    Paradox have done a terrible job with the IP.


  • I played pretty much the same way De_Narm did. I tried caring less, but because I had no idea what would come next, it inevitably descended into spaghetti. I’m stressed out enough about technical debt at work without playing a technical debt simulator, lol.

    Dedicating the space needed to expand, ensuring everything you build is scalable, inevitably requires you to know a lot about what’s coming.

    Yeah, if you know what you’re doing you can avoid these issues. I did not enjoy myself in the slightest, so after some hours of giving it a chance I decided that learning how to avoid these issues was not worth the pain. I’ll just stick to work instead.


  • I feel vindicated. I have the exact same feeling of Factorio feeling too much like work. Having to refactor everything because the requirements change is one of the more frustrating parts of software engineering, IMO, and the game feels tailored specifically to invoke that frustration.

    I imagine that part gets better after the first hundred hours where you basically know what’s coming. I don’t have the patience to learn the tech tree though, given that I don’t even enjoy the game.


  • I feel these kinds of protections, against suicidal language etc., will just lead to even further self-censorship to avoid triggering safeguards, similar to how terms like “unalive” gained traction.

    AI should be regulated, but forcing models into corpospeech and making them be ‘safe’ even harder than they already are in order to protect vulnerable children is not the way. I don’t like any tech moving in that direction, and it’s part of why I’m on Lemmy in the first place.

    In the character.ai case the article mentioned, the AI already failed to ‘pick up on’ (yes, I know that’s anthropomorphizing an algorithm) a euphemism for suicide. Filters would need to be ridiculous to catch every suicidal euphemism possible, and they would produce a tonne of false positives.