• 5 Posts
  • 272 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • Poor historical accuracy in favor of meme potential is why our reality is so comically absurd. You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goal for the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!

  • Within the world-building of the story, the way the logic is structured makes sense in a ruthless utilitarian way (although Scott’s narration and framing are far too sympathetic to the murderously autistic angel that did it), but taken in the context, outside the story, of the sort of racism Scott likes to promote, yeah, it is really bad.

    We had previous discussion of Unsong on the old site. (Kind of cringing at the fact that I liked the story at one point and only gradually noticed all the problematic content and the generally poor quality of the writing.)

  • I’ve seen this concept mixed with the simulation “hypothesis”. The logic goes that if future simulators were running a “rescue simulation” but only cared (or at least cared more) about the interesting or more agentic people (i.e. rich/white/westerner/lesswronger), they might fully simulate only those people and leave simpler nonsapient scripts/algorithms piloting the other people (i.e. poor/irrational/foreign people).

    So it’s basically positing a mechanism by which they are the only real people and other people are literally NPCs.

  • Depends what you mean by “steelman”. If you take their definition at its word, then they fail to try all the time; just look at any of their attempts at understanding leftist writing or thought. Of course, “steelmanning” often actually means “entirely rebuild the opposing argument into something different” (because they don’t have a basic humanities education, or don’t want to actually properly read leftist thought), and they can’t resist doing that!

  • Given that the USA has refused more comprehensive gun laws or better funding of public mental health services even after many, many school shootings, I think you are far too optimistic about the LLM-induced mental health crisis actually leading to a ban on LLMs, or even just tighter liability for them. My expectation is age verification plus giant disclaimers, and the crisis continuing. Inference costs will force the LLMs to be more obviously dumb and unable to keep track of context, and the lack of a technological moat will lead to LLM chatbots becoming commoditized, but I’m overall not optimistic.

    The LLM-induced skill gap will be a thing, yes… I predict companies will try to address it in the most hamfisted and belittling way possible. Like, they’ll keep using coding interviews (which are close to useless at evaluating the actual skills the employee needs), but now they’ll want you to do the interview with spyware installed to make sure you aren’t using an LLM to help you.