

well that headline just filled me with dread


neither did I; reading this article was my first exposure to him


the aforementioned wikipedia page has some criticisms of his work under the critical reception section


this article involves an incredibly eyebrow-raising take from one of the people at METR (the team behind the famous “tasks AI can do doubles every 7 months” graph), saying AI is eventually going to become more impactful than the invention of agriculture, more transformative than the emergence of the human species, and also calling it an intelligent alien species. Immensely funny amongst the other people saying “please stop treating AI like magic”
the Harari guy also seems to be into transhumanism, if a skim of his wikipedia page is correct. The “this is the first time in history that we have no idea what the world will look like in 10 years” thing is also an eyebrow-raiser. I could probably rattle off a couple of counterexamples (e.g. the two world wars)


Nate Soares’s interview with Hank Green seems to engage in a bunch of LLM humanisation, so it doesn’t look like it


OpenAI is probably toast. tl;dr: OpenAI’s financial situation is even more cooked as a big investor shows doubt; WeWork 2 imminent


Deploying chatbots into high-stakes situations with absolutely zero safety regulations. Surely this will have zero consequences and go completely fine.


“we’re scared we’ve crossed a line”
…seeking money from the UAE and Qatar didn’t count as crossing a line?


just one more data centre’s gonna do it! just give me a couple million more bucks!


also completely leaving out important context on the Iran/Stuxnet example: it was a joint effort between two countries, believed to have been in development for five years. The idea that AIs will engage in lightspeed wars and disable all critical infrastructure in a single day while speaking in alien languages and forging alliances is an unreasonable extrapolation of the capabilities. It also completely ignored the segment where the Anthropic team implemented safeguards and communicated with the teams behind the software to patch out the bugs. It’s the most blatant fearmongering ever. Thank god the comments contain reasonable responses and breakdowns of the post. That channel’s way of highlighting papers just pisses me off


community posts have been a thing for, like, two years now? three?




I am worried that you continue to be right


Eliezer’s latest book says humans and chatbots are both “sentence-producing machines” so yes


Everything in this article is some deep cult shit. The tabletop game, the crying session, the debate with Kapoor. It just has such a heavy feel of a cult


yeah, I read the whole article. Being an AI doomer must be absolutely miserable. The whole thing just reeks of cult behaviour, but especially that part. They’re living in a bubble completely separate from reality


Just decide to be sane; sanity is a skill issue.
Yud has officially cured mental health forever, psychologists and therapists in shambles


Eliezer calling himself genre savvy and above tropes as an actual serious coping mechanism is simply too good not to bring back up. The weirdest way to deny being depressed I’ve seen.
Read the whole article; of course Donald Trump is excited about this. Which also guarantees that this will crash and burn in no time at all