• 26 Posts
  • 293 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • When phrased like that, they can’t be disentangled. You’ll have to ask the person whether they come from a place of hate or compassion.

    content warning: frank discussion of the topic

    Male genital mutilation is primarily practiced by Jews and Christians. Female genital mutilation is primarily practiced by Muslims. In Minnesota, female genital mutilation is banned. It’s widely understood that the Minnesota statutes are anti-Islamic and that they implicitly allow for the Jewish and Christian status quo. However, bodily autonomy is a relatively fresh legal concept in the USA, and we have not yet reached consensus that mutilating infants should be forbidden regardless of which genitals happen to be expressed.

    In theory, the Equal Rights Amendment (ERA) has been ratified; Mr. Biden said it’s law but Mr. Trump said it’s not. If the ERA is law, then Minnesota’s statutes are unconstitutionally sexist! This analysis requires a sort of critical gender theory: we have to be willing to read a law as sexist even when it doesn’t mention sex at all. The equivalent for race, critical race theory, has been a resounding success, and there has been some progress on deconstructing gender as a legal concept too. The ERA is a shortcut that would immediately reverberate throughout each state’s statutes.

    The most vocal opponents of the ERA have historically been women; important figures include Alice Hamilton, Mary Anderson, Eleanor Roosevelt, and Phyllis Schlafly. It’s essential to know that these women had little else in common; Schlafly was a truly odious anti-feminist while Roosevelt was an otherwise-upstanding feminist.

    The men’s-rights advocates will highlight that e.g. Roosevelt was First Lady, married to a pro-labor president who generally supported women’s rights; I would point out that her husband didn’t support the ERA either, as labor unions were anti-ERA at the time out of a desire to protect their wages.

    This entanglement is a good example of intersectionality. We generally accept in the USA that a law can be sexist and racist, simultaneously, and similarly I think that the right way to understand the discussion around genital mutilation is that it is both sexist and religiously bigoted.

    Chaser: It’s also racist. C’mon, how could the USA not be racist? Minnesota’s Department of Health explicitly targets Somali refugees when discussing female genital mutilation. The original statute was introduced not merely to target Muslims, but to target Somali-American Muslim refugees.


  • Catching up, and I want to leave a Gödel comment. First, correct usage of Gödel’s incompleteness! Indeed, we can’t write down a finite set of rules that tells us what is true about the world; we can’t even do it for the natural numbers, which is Tarski’s undefinability. These are all instances of the same theorem, Lawvere’s fixed-point theorem; Cantor’s theorem is another instance. As I framed it previously on Awful, postmodernism in mathematics was a movement from roughly 1880 to 1970 characterized by finding individual instances of Lawvere’s theorem. This all deeply undermines Rand’s Objectivism by showing that it must either be uselessly simple, unable to deal with real-world scenarios, or so complex that it has incompleteness and paradoxes which cannot be mechanically resolved.
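    For the curious, here is the shared shape of those results in a very informal LaTeX sketch; the notation is mine, a quick gloss rather than the precise categorical statement:

        % Lawvere, informally: in a cartesian closed category, if
        % phi : A -> B^A is point-surjective, then every f : B -> B
        % has a fixed point.
        \[
          \phi : A \twoheadrightarrow B^{A}
          \;\Longrightarrow\;
          \forall f : B \to B,\ \exists b,\ f(b) = b
        \]
        % Cantor is the contrapositive with B = 2: negation has no
        % fixed point, so no phi : A -> 2^A is point-surjective.
        % Goedel and Tarski arise the same way, with phi a quoting map
        % and f a negated provability or truth predicate.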


  • Complementing sibling comments: Swift requires an enormous amount of syntactic ceremony to get things done, and it lacks a powerful standard library to abbreviate common tasks. The generative tooling does so well here because Swift is designed for an IDE which provides generative tools of the sort invented in the ’80s and ’90s; when the editor already generates most of the boilerplate, predicts the types, and tab-completes the very long method and class names, developers are already on auto-pilot.

    The actual underlying algorithm should be a topological sort, via either Kahn’s algorithm or Tarjan’s. It should take fewer than twenty lines total when ceremony is kept to a minimum; here is the same algorithm for roughly the same purpose in my Monte-in-Monte compiler, sorting modules by their dependencies in fifteen lines. Also, a good standard library should have a routine or module implementing topological sorting and other common graph algorithms; for example, Python’s graphlib.TopologicalSorter was added in 2020 and POSIX tsort dates back to 1979. I would expect students to memorize this algorithm immediately upon grokking it during third-year undergrad, as part of the larger goal of grokking graph-traversal algorithms; Kahn’s idea is merely to repeatedly pick off vertices with no incoming edges and error out if none remain, while Tarjan’s is a depth-first search emitting vertices in reverse postorder. Neither is an easy concept to forget or to fail to rediscover when needed. Congrats, the LLM can do your homework.
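    To make “fewer than twenty lines” concrete, here’s a minimal Python sketch of Kahn’s algorithm; this is my own toy code, not the Monte version mentioned above, and it assumes every dependency also appears as a key:

        from collections import deque

        def toposort(deps):
            """Kahn's algorithm: deps maps each node to the nodes it
            depends on; returns an order where every node appears after
            its dependencies, or raises on a cycle."""
            pending = {n: len(ds) for n, ds in deps.items()}
            dependents = {n: [] for n in deps}
            for n, ds in deps.items():
                for d in ds:
                    dependents[d].append(n)
            ready = deque(n for n, c in pending.items() if c == 0)
            order = []
            while ready:
                n = ready.popleft()
                order.append(n)
                for m in dependents[n]:
                    pending[m] -= 1
                    if pending[m] == 0:
                        ready.append(m)
            if len(order) != len(deps):
                raise ValueError("dependency cycle detected")
            return order

        print(toposort({"app": ["lib", "util"], "lib": ["util"], "util": []}))
        # ['util', 'lib', 'app']

    The stdlib spelling is list(graphlib.TopologicalSorter(deps).static_order()), which performs the same cycle check by raising graphlib.CycleError.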

    If there are any Swifties here: Hi! I love Taytay; I too was born in the late ’80s and have trouble with my love life. Anyway, the nosology here is pretty easy: Swift’s standard library doesn’t include algorithms in general, only algorithms associated with data structures, which are themselves associated with standardized types. Since Swift descends from Smalltalk (via Objective-C), its data structures include Collections, so a reasonable fix would be to add a Graph collection and make topological sorting a method; see Python’s approach for an example. Another possibility is to abuse the built-in sort routine, but that costs O(n lg n) path lookups and is much more expensive; it’s not a long-term solution.


  • One important nuance is that there are, broadly speaking, two ways to express a formal proof: it can be fairly small but take exponential time to verify, or it can be quick to verify but exponentially large. Most folks prefer the former sort of system. However, with extension by definitions, we can have a polynomial number of polynomially-large definitions while still verifying quickly. This leads to my favorite proof system, Metamath, whose implementations measure their verification speed in kiloproofs per second. Given a Metamath database, I can confirm any statement in moments with any of several independent programs, and there is programmatic support for looking up the axioms a statement depends on; I can throw more compute at the problem. While LLMs do know how to generate valid-looking Metamath in context, it’s safe to try to verify their proofs because Metamath’s kernel is literally one (1) string-handling rule.
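    To give a flavor of how small that kernel is, here’s a toy Python sketch of the one rule, simultaneous substitution; this ignores disjoint-variable conditions, hypothesis matching, and everything else a real verifier layers on top:

        def substitute(statement, assignment):
            """Replace each variable in a statement (a list of symbols)
            with its assigned symbol-string; constants pass through.
            Verification amounts to checking that every proof step is
            such a substitution instance of an axiom or prior theorem."""
            out = []
            for sym in statement:
                out.extend(assignment.get(sym, [sym]))
            return out

        # deriving "( p -> p )" from a template "( P -> Q )"
        print(substitute(["(", "P", "->", "Q", ")"],
                         {"P": ["p"], "Q": ["p"]}))
        # ['(', 'p', '->', 'p', ')']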

    This is all to reconfirm your impression that e.g. Lean inherits a “mediocre software engineering” approach. Junk theorems in Lean are laughably bad due to type coercions. The wider world of HOL is more concerned with piles of lambda calculus than with writing math proofs. Lean as a general-purpose language with I/O means that it is no longer safe to verify untrusted proofs, which makes proof-carrying Lean programs unsafe in practice.

    @[email protected] you might get a laugh out of this too. FWIW I went in the other direction: I started out as a musician who learned to code for a dayjob, and now I’m a logician.


  • I don’t have any good lay literature, but get ready for “steering vectors” this year. It seems like two or three different research groups (depending on whether I count as a research group) independently discovered them over the past two years, and they are very effective for guardrailing because they can e.g. make slurs unutterable without compromising reasoning. If you’re willing to read whitepapers, try Dunefsky & Cohan, 2024, which builds that example into a complete workflow, or Konen et al., 2024, which considers steering as an instance of style transfer.
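    The mechanism itself is almost embarrassingly small. A toy PyTorch sketch; the layer choice, scale, and vector recipe here are placeholders of mine, not either paper’s exact method:

        import torch

        def add_steering_hook(layer, vector, alpha=4.0):
            """Attach a forward hook adding alpha * vector to the layer's
            output hidden states; returns the handle for later removal.
            Assumes vector lives on the same device as the activations."""
            def hook(module, inputs, output):
                # many decoder blocks return a tuple whose first entry is
                # the hidden-state tensor of shape (batch, seq, d_model)
                hidden = output[0] if isinstance(output, tuple) else output
                hidden = hidden + alpha * vector.to(hidden.dtype)
                return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
            return layer.register_forward_hook(hook)

        # hypothetical usage with a HuggingFace-style decoder:
        #   vector = acts_with_concept.mean(0) - acts_without_concept.mean(0)
        #   handle = add_steering_hook(model.model.layers[15], vector)
        #   ... generate as usual, then handle.remove() to stop steering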

    I do wonder, in the engineering-disaster-podcast sense, exactly what went wrong at OpenAI because they aren’t part of this line of research. HuggingFace is up-to-date on the state of the art; they have a GH repo and a video tutorial on how to steer LLaMA. Meanwhile, if you’ll let me be Bayesian for a moment, my current estimate is that OpenAI will not add steering vectors to their products this year; they’re already doing something like it internally, but the customer-facing version will not be ready until 2027. They just aren’t keeping up with research!


  • Steve Yegge has created Gas Town, a mess of Claude Code agents forced to cosplay as a k8s cluster with a Mad Max theme. I can’t think of better sneers than Yegge’s own commentary:

    Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.

    If you’re familiar with the Towers-of-Hanoi problem then you can appreciate the contrast between Yegge’s solution and a standard solution; in general, recursive solutions are fewer than ten lines of code.

    Gas Town solves the MAKER problem (20-disc Hanoi towers) trivially with a million-step wisp you can generate from a formula. I ran the 10-disc one last night for fun in a few minutes, just to prove a thousand steps was no issue (MAKER paper says LLMs fail after a few hundred). The 20-disc wisp would take about 30 hours.

    For comparison, solving for 20 discs in the famously-slow CPython programming system takes less than a second, with most time spent printing lines to the console. The solution length is exponential in the number of discs, and that’s over one million lines total. At thirty hours, Yegge’s harness solves Hanoi at fewer than ten lines/second! Also I can’t help but notice that he didn’t verify the correctness of the solution; by “run” he means that he got an LLM to print out a solution-shaped line.
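    For reference, the textbook recursion — the standard algorithm, nothing to do with Yegge’s harness:

        def hanoi(n, src, dst, aux):
            """Print the 2**n - 1 moves solving n-disc Towers of Hanoi."""
            if n == 0:
                return
            hanoi(n - 1, src, aux, dst)
            print(f"move disc {n}: {src} -> {dst}")
            hanoi(n - 1, aux, dst, src)

        hanoi(20, "A", "C", "B")  # 2**20 - 1 = 1,048,575 moves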


  • NEOM is a laundry for money, religion, genocidal displacement, and the Saudi reputation among Muslims. NEOM is meant to replace Wahhabism, the Saudi family’s uniquely violent fundamentalism, with a much more watered-down secularist vision of the House of Saud where the monarchs are generous with money, kind to women, and righteously uphold their obligations as keepers of Mecca. NEOM is not only The Line, the mirrored city; it is multiple different projects, each set up with the Potemkin-village pattern to assure investors that the money is not being misspent. In each project, the House of Saud has targeted various nomads and minority tribes, displacing indigenous peoples who are inconvenient for the Saudi ethnostate, with the excuse that those tribes are squatting on holy land which NEOM’s shrines will further glorify.

    They want you to look at the smoke and mirrors in the desert because otherwise you might see the blood of refugees and the bones of the indigenous. A racing team is one of the cheaper distractions.



  • Nah, it’s more to do with stationary distributions. Most tokens tend to move toward the stationary distribution; only very surprising tokens can move away. (Insert physics metaphor here.) Most LLM architectures are Markov, so once they get near that distribution they cannot escape on their own. There can easily be hundreds of thousands of orbits near the stationary distribution, each fixated on a simple token sequence and unable to deviate. Moreover, since most LLM architectures have some sort of meta-learning (e.g. attention), they can simulate situations where part of a simulation gets stuck while the rest continues, e.g. only one chat participant is stationary and the others are not.
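    A toy numpy sketch of the absorption effect, using a three-state chain of my own invention rather than anything from a real model:

        import numpy as np

        # row-stochastic transition matrix; state 2 is absorbing, a
        # stand-in for a degenerate loop the chain cannot leave alone
        P = np.array([[0.50, 0.49, 0.01],
                      [0.49, 0.50, 0.01],
                      [0.00, 0.00, 1.00]])

        dist = np.array([1.0, 0.0, 0.0])  # start fully in state 0
        for _ in range(1000):
            dist = dist @ P
        print(dist.round(3))  # [0. 0. 1.]: all mass ends in the loop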


  • Today, in fascists not understanding art, a suckless fascist praised Mozilla’s 1998 branding:

    This is real art; in stark contrast to the brutalist, generic mess that the Mozilla logo has become. Open source projects should be more daring with their visual communications.

    Quoting from a 2016 explainer:

    [T]he branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters. And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts… Yes, I absolutely branded Mozilla.org that way for the subtext of “these free software people are all a bunch of commies.” I was trolling. I trolled them so hard.

    The irony of a suckless developer complaining about brutalism is truly remarkable; these fuckwits don’t actually have a sense of art history, only what looks cool to them. Big lizard, hard-to-read font, edgy angular corners, and red-and-black palette are all cool symbols to the teenage boy’s mind, and the fascist never really grows out of that mindset.


  • The author also proposes a framework for analyzing claims about generative AI. I don’t know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

    • Lethality: the bots will kill us all
    • Inevitability: the bots are unstoppable and will definitely be created in the future
    • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
    • Superintelligence: the bots are better than people at thinking

    I would add a P, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.



  • Fundamentally, Chapman’s essay is about how subcultures transition from valuing functionality to valuing aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, and the subculture eventually hollows out into a system which follows the iron law of bureaucracy and becomes non-functional through over-investment in the façade and the tearing down of Chesterton’s fences. Chapman’s not the only person to notice this pattern; other instances of it run the spectrum from right to left.

    I think that seeing this pattern is fine, but worrying about it makes one into Scott Alexander, paranoid about societal manipulation and constantly worrying about in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern’s fundamentally about memes, not humans.

    So, on Chapman. I think that he’s a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can’t confirm or cite that, and I don’t think we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:

    [T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.

    He’s explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I’m familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander’s rejection of neoreaction (source); that’s a somewhat-incoherent view suggesting that he’s politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):

    Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.

    I don’t know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he’s really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn’t take enough LSD. I once was on LSD when I was at the office working all day; I saw the entire structure of the corporation, fully understood its purpose, and — unlike Chapman, apparently — came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.

    Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I’ve gotta do five, so a fifth possibility is that he’s not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.