Well, naive Bayesianism, as practiced by the rationalists. Bayesianism itself can be reformed to get rid of most of its problems, though I’ve yet to see a good solution to the absent-minded driver problem.
This is another example of the dangers of wealth inequality. A lot of EAs tried to start a YouTube channel (e.g.), but the only one that could get funded was this one, the one promoting bitcoin and charter cities. Now this is the largest EA channel, attracting more of those types and signalling clearly that if you want to succeed in EA you gotta please the capitalist funders.
I read the article; not a single mention of things like the research on stereotype threat in chess. I wish rationalists would crack open a sociology book at some point in their lives. They’re so interested in social phenomena, but while Less Wrong has a tag for psychology (287 posts), history (245 posts), and economics (462 posts), they seem unwilling to look to sociology for explanations; it doesn’t even have a tag on LW.
How do you find these things? How do you read these things? I’m starting to worry about your health, David; such a continuous stream of highly concentrated horseshit can’t be good for you.
I don’t know; when I googled it, this 80,000 Hours article was one of the first results. It seems reasonable at first glance, but I haven’t looked into it.
Wait, they had Peter’s arguments and sources before the debate? And they’re blaming the format? Having your challenger’s material before the debate, while they don’t have yours, is basically a guaranteed win. You have his material: take it with you to the debate and just prepare answers in advance so you don’t lose $100K! Who gave these idiots $100K?
The way this is categorized, this 18.2% is also about things like climate change and pandemics.
the data presented on that page is incredibly noisy
Yes, that’s why I said it’s “less comprehensive” and why I first gave the better 2019 source which also points in the same direction. If there is a better source, or really any source, for the majority claim I would be interested in seeing it.
Speaking of which,
AI charities (which is not equivalent to simulated humans, because it also includes climate change, nearterm AI problems, pandemics etc)
AI is to climate change as indoor smoking is to fire safety. ‘Nearterm AI problems’ is an incredibly vague and broad category, and I would need someone to explain to me why they believe AI has anything to do with pandemics. Any answer I can think of would reflect poorly on whoever holds such a belief.
You misread, it’s 18.2% for long term and AI charities [emphasis added]
The linked stats are already way out of date
Do you have a source for this ‘majority’ claim? I tried searching for more up-to-date data, but this less comprehensive 2020 data is even more skewed towards global development (62%) and animal welfare (27.3%), with 18.2% for long term and AI charities (which is not equivalent to simulated humans, because it also includes climate change, nearterm AI problems, pandemics, etc.). The utility of existential risk reduction is basically always based on population growth/future generations (aka humans) and not simulations. ‘Digital person’ only has 25 posts on the EA forum (by comparison, global health and development has 2097 posts). It seems unlikely to me that this is a majority belief.
I spend a lot of time campaigning for animal rights. These criticisms also apply to it, but I don’t consider them a strong argument there. EAs spend an estimated 1.8 million dollars per year (less than 1%, so nowhere near a majority) on “other longterm”, which presumably includes simulated humans, but an estimated 55 million dollars per year (or 13%) on farmed animal welfare. (For those who are curious, the largest recipient is global health at 44%, but it’s important to note that the more people are into EA, the less they seem to give to that compared to more longtermist causes.) Farmed animals “don’t resent your condescension or complain that you are not politically correct, they don’t need money, they don’t bring cultural baggage…” yet that doesn’t mean they aren’t a worthy cause. This quote might serve as something members should keep in mind, but I don’t think it works as an argument on its own.
I’m not that good at sneering. ‘EA is when you make Fordlândia’? Idk, you found the original post and you’re much better at it, it’s better if you do it.
When he posted the finished video on youtube yesterday, there were some quite critical comments on youtube, the EA forum and even lesswrong. Unfortunately they got little to no upvotes while the video itself got enough karma to still be on the frontpage on both forums.
He solved the is-ought problem? How did he do that?
what ought to be (what is probable)
Hey guys I also solved the is-ought problem, first we start with is (what we should do)…
people who are my worst enemies - e/acc people, those guys who always talk about how charity is Problematic - […] weird anti-charity socialists
Today I learned that ‘effective accelerationists’ like CEO of Y-combinator Garry Tan, venture capitalist Marc Andreessen and “Beff Jezos” are socialists. I was worried that those evil goals they wanted to achieve by simply trying to advance capitalism might reflect badly on it, but luckily they aren’t fellow capitalists after all, they turned out to be my enemies the socialists all along! Phew!
Well of course, everything is determined by genetics, including, as the EA forum taught me today, things like whether someone is vegetarian. So to solve that problem (as well as any other problem) we need (and I quote) “human gene editing”. /s
When the second castle (bought by ESPR with FTX-money) was brought up on the forum, Jan Kulveit (one of the main organizers of ESPR) commented:
Multiple claims in this post are misleading, incomplete or false.
He then never bothered to explain what the misleading and false claims actually were (and instead implied the poster had doxxed them). Then, under the post this thread discusses, he has the gall to comment:
For me, unfortunately, the discourse surrounding Wytham Abbey, seems like a sign of epistemic decline of the community, or at least on the EA forum.
I guess Jan doesn’t think falsely implying that the person critical of your chateau purchase is both a liar and a doxxer counts as ‘epistemic decline’.
Yeah, I really wouldn’t trust how that book [by Richard Lynn] picks its data. As stated in “A systematic literature review of the average IQ of sub-Saharan Africans”:
For instance, Lynn and Vanhanen (2006) accorded a national IQ of 69 to Nigeria on the basis of three samples (Fahrmeier, 1975; Ferron, 1965; Wober, 1969), but they did not consider other relevant published studies that indicated that average IQ in Nigeria is considerably higher than 70 (Maqsud, 1980a, b; Nenty & Dinero, 1981; Okunrotifa, 1976). As Lynn rightly remarked during the 2006 conference of the International Society for Intelligence Research (ISIR), performing a literature review involves making a lot of choices. Nonetheless, an important drawback of Lynn (and Vanhanen)'s reviews of the literature is that they are unsystematic.
They’re not the only ones who find Lynn’s data selection suspect. Wikipedia describes him as:
Richard Lynn (20 February 1930 – July 2023) was a controversial English psychologist and self-described “scientific racist” […] He was the editor-in-chief of Mankind Quarterly, which is commonly described as a white supremacist journal.
[From earlier in the comment] I can view an astonishing amount of publications for free through my university, but they haven’t opted to include this one, weird… So should I pay money to see this “Mankind Quarterly” publication?
When I googled it, I found that Mankind Quarterly includes among its founders Henry Garrett, an American psychologist who testified in favor of segregated schools during Brown v. Board of Education; Corrado Gini, who was president of the Italian Genetics and Eugenics Society in fascist Italy; and Otmar Freiherr von Verschuer, who was director of the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics in Nazi Germany. Verschuer was a member of the Nazi Party and the mentor of Josef Mengele, the physician at the Auschwitz concentration camp infamous for performing human experimentation on prisoners during World War II. Mengele provided Verschuer with human remains from Auschwitz to use in his research into eugenics. […] Something tells me it wouldn’t be very EA to give money to these people.
(also baller move to publish it on april fools)