

formulaic
System prompt: don’t be formulaic. Try to be spontaneous and random, like Natalie Portman in that movie. Not the pedo one, the one with JD from Scrubs
Only Bayes Can Judge Me
YaaS: Yubitsume as a Service
He got up at 5:30 a.m. to show the sunrise to his microphone
No doubt inspired by my favourite holistic wellness woo/Soundgarden song, butthole sun(ning)
Grue? Is this Zork?
Zit crit!
Agree: I don’t have much of an opinion on him either way. If this community is gonna hold him up as a voice of note then it should be able to critically engage with his work.
If saltman knew what quantum gravity was and why LLMs won’t solve it first, maybe he’d have general intelligence
maybe 2, for good measure
This paragraph caught my interest. It used some terms I wasn’t familiar with, so I dove in.
Ego gratification as a de facto supergoal (if I may be permitted to describe the flaw in CFAImorphic terms)
TL note: “CFAI” is this “book-length document” titled “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures”, in case you forgot. It’s a little difficult to quickly distill what a supergoal is, despite it being defined in the appendix. It’s one of two things:
A big picture type of goal that might require making “smaller” goals to achieve. In the literature this is also known as a “parent goal” (vs. a “child goal”)
An “intrinsically desirable” world (end) state, which probably requires reaching other “world states” to bring about. (The other “world states” are known as “subgoals”, which are in turn “child goals”)
Yes, these two things look pretty much the same. I’d say the second definition is different because it implies some kind of high-minded “desirability”. It’s hard to quickly figure out whether Yud ever actually uses the second definition instead of the first, because that would require me to read more of the paper.
is a normal emotion, leaves a normal subjective trace, and is fairly easy to learn to identify throughout the mind if you can manage to deliberately “catch” yourself doing it even once.
So Yud isn’t using “supergoal” on the scale of a world state here. Why bother with the cruft of this redundant terminology? Perhaps the rest of the paragraph will tell us.
Anyway, this first sentence is basically the whole email: “My brain was able to delete ego gratification as a supergoal”.
Once you have the basic ability to notice the emotion,
Ah, are we weaponising CBT? (cognitive behavioral therapy, not cock-and-ball torture)
you confront the emotion directly whenever you notice it in action, and you go through your behavior routines to check if there are any cases where altruism is behaving as a de facto child goal of ego gratification; i.e., avoidance of altruistic behavior where it would conflict with ego gratification, or a bias towards a particular form of altruistic behavior that results in ego gratification.
Yup we are weaponising CBT.
All that being said, here’s what I think. We know that Yud believes that “aligning AI” is the most altruistic thing in the world. Earlier I said that “ego gratification” isn’t something on the “world state” scale, but for Yud, it is. See, his brain is big enough to change the world, so an impure motive like ego gratification is a “supergoal” in his brain. But at the same time, his certainty in AI-doomsaying is rooted in belief in his own super-intelligence. I’d say that the ethos of ego gratification has far transcended what can be considered normal.
I hear GPT-8 will broadcast a dyson sphere circumnavigation race
Opening the sack with this shit that spawned in front of me:
Guess it won’t be true AGI!
Kind of a fluff story (archive) where salty’s douchiness is on full display.
I referenced it because fake book titles are throwaway jokes; you can reference something hyperspecific and not have to worry about whether or not someone will get it, because they might not even notice it at all.
The spectre of Marx nods in approval
Ah, gotcha. fwiw I wasn’t saying that to say “joyless people are bad”; burnout also tends to look like joylessness.
Man, knowing nothing else about your coworker, they sound like a completely joyless person. Coming up with fake titles for things has, like, such a high fun-to-effort ratio. “Creativity and the Essence of Human Experience” by ChatGPT. Boom, there’s one. “Cooking With Olive Oil” by Sam Altman. “IQ184” by Harukiezer Murakowsky. This is so fun and easy that it’s basically hack outside of situations where it is solicited.
Putting cream in my carbonara to see how my 8K 120Hz Nonna reacts
Well unfortunately it’s not diluted enough to be homeopathic, so it’s just off³ broadway
My current iteration of the etymology is that “dath ilani” anagrams to “HD Italian”. As in, the dath ilani are an idealised version of Italians, making dath ilan a utopian version of Italy.
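(For the record, the anagram does check out. A quick sanity-check sketch in Python, with a throwaway is_anagram helper of my own invention, agrees:)

from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    # Compare letter counts, ignoring case, spaces, and punctuation.
    def letters(s: str) -> Counter:
        return Counter(ch for ch in s.lower() if ch.isalpha())
    return letters(a) == letters(b)

print(is_anagram("dath ilani", "HD Italian"))  # prints True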
there’s No Such Feasible Way for that imo
FWIW my understanding is that the “Off”-ness just refers to decreasing theatre size, but it gets (mis)construed as less prestige/fame/quality etc. Fame is probably fine for construing, but prestige/quality is orthogonal. Like, you could conceivably have absolute theatre megastars do an epoch-defining production in an intimate 50-seat theatre or something, and that would still be “Off-off-Broadway”.
Do we have a word for people who are kind of like… AI concern trolls? Like, they say they’re critical of AI, or even against AI, but only ever really put forward pro-AI propaganda, especially in response to actual criticisms of AI. Kind of like centrists or (neo)libs. But for AI.
Bonus points if, for some reason, they also say we should pivot to more nuclear power because, in their words, even though AI doesn’t use as much electricity as we think, we should still start using more nuclear power to meet the energy demands. (ofc this is bullshit)
E: Maybe it’s just sealion