The sole funder is the founder, Saar Wilf. The whole thing seems like a vanity project for him and friends he hired to give their opinion on random controversial topics.
The video and slides can be found here. I watched a bit of it as it happened, and it was pretty clear that Rootclaim got destroyed.
Anyone actually trying to be “Bayesian” should have updated their opinion by multiple orders of magnitude as soon as it was fully confirmed that the wet market was the first superspreader event. Like, at what point does Occam’s razor not kick in here?
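(To spell out the arithmetic with purely made-up numbers, nothing here is an actual estimate of the origins question: a Bayesian update just multiplies your prior odds by the likelihood ratio of the new evidence, so “multiple orders of magnitude” means a likelihood ratio in the hundreds or more.)

```python
# Toy odds update with completely made-up numbers -- just to show what
# "update by multiple orders of magnitude" means mechanically.
prior_odds = 1.0        # hypothetical 1:1 odds between the two hypotheses

# Hypothetical Bayes factor: how much more likely the observed evidence is
# under one hypothesis than the other. 100 is an illustration, not an estimate.
bayes_factor = 100.0

posterior_odds = prior_odds * bayes_factor
print(posterior_odds)   # 100:1, i.e. a two-order-of-magnitude swing
```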
For people who don’t want to go to Twitter, here’s the thread:
Doomers: “YoU cAnNoT dErIvE wHaT oUgHt fRoM iS” 😵💫
Reality: you literally can derive what ought to be (what is probable) from the out-of-equilibrium thermodynamical equations, and it simply depends on the free energy dissipated by the trajectory of the system over time.
While I am purposefully misconstruing the two definitions here, there is an argument to be made, by this very principle, that the post-selection effect on culture yields a convergence of the two.
How do you define what is “ought”? Based on a system of values. How do you determine your values? Based on cultural priors. How do those cultural priors get distilled from experience? Through a memetic adaptive process where there is a selective pressure on the space of cultures.
Ultimately, the value systems that survive will be the ones that are aligned towards the growth of their ideological hosts, i.e. according to memetic fitness.
Memetic fitness is a byproduct of thermodynamic dissipative adaptation, similar to genetic evolution.
Solomonoff induction is a big rationalist buzzword. It’s meant to be the platonic ideal of Bayesian reasoning, which, if implemented, would be the best deducer in the world and get everything right.
It would be cool if you could build this, but it’s literally impossible. The induction method is provably incomputable.
The hope is that if you build a shitty approximation to Solomonoff induction that “approaches” it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.
My metaphor is that it’s like coming to a river you want to cross, and being like “Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I’ll be able to get across”. You aren’t Moses. Build a bridge.
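To make the gap concrete, here’s a deliberately tiny stand-in (my own toy illustration, with made-up hypotheses and description lengths, not anything from an actual implementation): a Bayesian mixture over a hand-picked, finite hypothesis list, each weighted by 2^(-description length). Everything that makes Solomonoff induction “Solomonoff” is exactly what this throws away: the real thing mixes over every program for a universal Turing machine, and deciding which of those programs even halt is the provably uncomputable part.

```python
# Toy Bayesian mixture with a 2^(-description length) prior over a
# hand-picked hypothesis list. NOT Solomonoff induction: the real thing
# sums over all programs, which is exactly the uncomputable bit.

hypotheses = {
    # name: (assumed description length in bits, probability the next bit is 1)
    "always 0":  (2, 0.0),
    "always 1":  (2, 1.0),
    "fair coin": (3, 0.5),
    "mostly 1s": (5, 0.9),
}

def posterior(bits):
    """Weight each hypothesis by 2^-length times the likelihood of the data."""
    weights = {}
    for name, (length, p1) in hypotheses.items():
        likelihood = 1.0
        for b in bits:
            likelihood *= p1 if b == 1 else (1 - p1)
        weights[name] = 2.0 ** (-length) * likelihood
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()} if total else weights

print(posterior([1, 1, 1, 1, 1, 1, 1, 0]))  # "mostly 1s" gets the highest posterior
```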
Ahh, I fucking haaaate this line of reasoning. It’s basically saying “if we’re no worse than average, then there’s no problem”, followed by some discussion of “base rates” of harassment or whatever.
Except that the average rate of harassment and abuse, in pretty much every large group, is unacceptably high unless you take active steps to prevent it. You know what’s not a good way to prevent it? Downplaying reports of harassment, calling the people bringing attention to it biased liars, and explicitly trying to avoid kicking out harmful characters.
Nothing like a so-called “effective altruist” crowing about having a C- passing grade on the sexual harassment test.
I think people are misreading the post a little. It’s a follow-on from the old AI x-risk argument: “evolution optimises for having kids, yet people use condoms! Therefore evolution failed to ‘align’ humans to its goals, therefore aligning AI is nigh-impossible”.
As a commenter points out, for a “failure”, there sure do seem to be a lot of human kids around.
This post then decides to take the analogy further, and be like “If I was hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it’d be really hard! Therefore we can’t get an AI to build us, like, a bridge, without it developing ulterior motives”.
You can hypothetically make this bad argument without supporting eugenics… but I wouldn’t put money on it.
Thanks, I love these answers! I’ll drop a DM on matrix for further questions.
This rather economical recycling allows a living cell to absorb damage that would be catastrophic if you just assumed that everything works forever exactly as you imagined. I don’t have a guess for how much more energy would be expended in reassembling diamondoids; @[email protected] might have an estimate, but I’d guess it’s some 1-2 orders of magnitude more.
The DMS researchers were estimating something on the order of 5 eV for mechanically dropping a single pair of carbon atoms onto the surface of diamond. I’m not sure how to directly compare this to the biological case.
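As a rough sanity check, here’s the back-of-the-envelope conversion I’d do (the ATP figure is the usual textbook number of roughly 30 kJ/mol under standard conditions, not anything from the DMS paper):

```python
# Back-of-the-envelope comparison of ~5 eV per deposited carbon dimer
# against the free energy of ATP hydrolysis. Biology-side numbers are
# my own assumptions, not from the DMS estimate.

EV_TO_KJ_PER_MOL = 96.485      # 1 eV per particle, expressed in kJ/mol

dimer_cost_ev = 5.0            # quoted estimate, per pair of carbon atoms
per_atom_ev = dimer_cost_ev / 2

atp_kj_per_mol = 30.5          # standard free energy of ATP hydrolysis (roughly)
atp_ev = atp_kj_per_mol / EV_TO_KJ_PER_MOL

print(f"{per_atom_ev:.2f} eV per carbon atom "
      f"= {per_atom_ev * EV_TO_KJ_PER_MOL:.0f} kJ/mol")
print(f"one ATP hydrolysis ~ {atp_ev:.2f} eV, "
      f"so each atom costs ~{per_atom_ev / atp_ev:.0f}x one ATP")
```

So the placement step alone comes out to nearly an order of magnitude more per atom than a single ATP hydrolysis, before counting any of the machinery overhead on either side.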
If you post on EA forum or LW, you can crosspost automatically to the other one by clicking one button on the publishing page. The sites are run by essentially the same people.
Hmmm, I wonder who benefits from keeping EA chained to an Eliezer Yudkowsky fan forum…
Hidden away in the appendix:
A quick note on how we use quotation marks: we sometimes use them for direct quotes and sometimes use them to paraphrase. If you want to find out if they’re a direct quote, just ctrl-f in the original post and see if it is or not.
This is some real slimy shit. You can compare the “quotes” to Chloe’s account and see how much of a hit job this is.
Hey, thanks so much for looking through it! If you’re alright with messaging me your email or something, I might consult you on some more related things.
With your permission, I’m tempted to edit this response into the original post, it’s really good. Have you looked over Yudkowsky’s word salad in the EA forum thread? Would be interested in getting your thoughts on that as well.
Thanks! I strive for accuracy, clarity, humility, and good faith. Aka, everything I learned not to do from reading the sequences.
EA as a movement was a combination of a few different groups (this account says Giving What We Can / 80,000 Hours, GiveWell, and Yudkowsky’s MIRI). However, the main source of the early influx of people was the rationalist movement, as Yud had heavily promoted EA-style ideas in the Sequences.
So if you look at surveys, right now a relatively small percentage (like 15%) of EAs first heard about it through LessWrong or SSC. But back in 2014 and earlier, LessWrong was the number one on-ramp into the movement (like 30%). (I’m sure a bunch of the other respondents heard about it from rationalist friends as well.) I think it would have been even more if you go back earlier.
Nowadays, most of the recruiting is independent of the rationalists, so you have a bunch of people coming in and being like, what’s with all the weird shit? However, they still adopt a ton of rationalist ideas and language, and the EA forum is run by the same people as LessWrong. It leads to some tension: someone wrote a post saying that “Yudkowsky is frequently, confidently, egregiously wrong”, and it was somewhat upvoted on the EA forum but massively downvoted on LessWrong.
Do you have any links to this, out of curiosity? I looked a bunch and couldn’t find any successor projects.
what are the other ones?
I guess the rest of the experimental setup that recombines the photon amplitudes. Like, if you put 5 extra beam splitters in the bottom path, there wouldn’t be full destructive interference.
When I’m thinking about a splitter with a π/4 phase shift, I’m thinking about a coupled-line coupler or its waveguide analogue, but I come from microwave land on this one. Maybe this works in fibers?
I’m not sure how you’d actually build a symmetric beam splitter: Wikipedia says you’d need to induce a particular extra phase shift on both transmission and reflection. (I’m purely a theoretical physicist, so I’m not too familiar with the hardware.)
If you want more of this, I wrote a full critique of his mangled intro to quantum physics, where he forgets the whole “conservation of energy” thing.
What I think happened is that he got confused by the half-mirror phase shifts (because there’s only a phase shift if you reflect off the front of the mirror, not the back). Instead of asking someone, he invented his own weird system which gets the right answer by accident, and then refused to ever fix the mistake, saying that the alternate system is fine because it’s “simpler”.
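Since the phase bookkeeping keeps coming up, here’s a quick toy calculation of my own (not from his post or my critique) showing that the convention doesn’t matter as long as you apply it consistently: two common 50/50 splitter conventions, a balanced interferometer modelled as just two splitters back-to-back, fold mirrors and overall phases ignored. Both predict all the light in one output port and full destructive interference in the other.

```python
import numpy as np

# Toy check of two common 50/50 beam splitter conventions in a balanced
# Mach-Zehnder, modelled as two identical splitters back-to-back.
# Port convention: diagonal entries = transmission, off-diagonal = reflection.

# Convention A: symmetric splitter, factor i on every reflection
B_sym = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                     [1j, 1]])

# Convention B: real amplitudes, pi phase shift on one reflection only
# (the "phase shift only off the front of the mirror" bookkeeping)
B_front = (1 / np.sqrt(2)) * np.array([[1, -1],
                                       [1,  1]])

photon_in = np.array([1, 0])   # photon enters input port 1

for name, B in [("symmetric", B_sym), ("front-reflection", B_front)]:
    out = B @ B @ photon_in    # first splitter, then second splitter
    print(name, np.round(np.abs(out) ** 2, 3))
# both print [0. 1.]: everything exits the cross port, nothing in the other
```

The trouble only starts when you mix conventions halfway through and then defend the hybrid as “simpler”.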
My impression is that the toxicity within EA is mainly concentrated in the bay area rationalists, and in a few of the actual EA organizations. If it’s just a local meetup group, it’s probably just going to be some regular-ish people with some mistaken beliefs who are genuinely concerned about AI.
Just be polite and present arguments, and you might actually change minds, at least among those who haven’t been sucked too far into Rationalism.
Obvious reminder: do not assume that anonymous Tumblr posts are accurate. (This is the only post the Tumblr account made.)
Has anyone attempted a neutral unpacking of the mess of claims and counterclaims around Ziz and related parties?
I roll a fair 100-sided die.
Eliezer asks me to state my confidence that I won’t roll a 1.
I say I am 99% confident I won’t roll a 1, using basic math.
Eliezer says “AHA, you idiot, I checked all of your past predictions and when you predicted something with confidence 99%, it only happened 90% of the time! So you can’t say you’re 99% confident that you won’t roll a 1”
I am impressed by the ability of my past predictions to affect the roll of a die, and promptly run off to become a wizard.
Take a guess at what prompted this statement.
Did one side of the conflict confess? Did major expert organizations change their minds? Did new, conclusive evidence arise that had gone unseen for years?
Lol no. The “confirmation” is that a bunch of random people did their own analysis of the existing evidence and decided it was the rebels, based on a vague estimate of rocket trajectories. I have no idea who these people are, although I think the lead author is this guy currently stanning for Russia’s war on Ukraine?