

Why not make an evil time travelling robot controlled by the illuminati? bro it’s even called Alexander
Maybe they simply yearn to write Final Fantasy villains


oh no not another cult. The Spiralists???
It’s funny to me in a really terrible way that I have never heard of these people before, ever, even though I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. And wasn’t there another one in California that was very straightforward about being an AI sci-fi cult, and kinda space-themed? I’ve heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up.
some of their communities that somebody collated (I don’t think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/


ah, seems the site doesn’t show the comments by default; change which ones it shows and they turn up
Oh man, I’ve found the old LW accounts of a few weird people and they didn’t have any comments. Now I’m wondering if they did and I just didn’t change the sorting


Gotta love forgetting why games have these features in the first place, so accessibility features get viewed as boring stuff you need to subvert and spice up. Also reminds me of how many games used to (and continue to) include filters for simulating colorblindness as actual accessibility settings, because all the other games did that. Like adding a “Deaf Accessibility” setting that mutes the audio.
Demon’s Souls didn’t have a pause mechanic (maybe because of technical or matchmaking problems, who knows), so clearly hard games must lack a functioning pause feature to be good. Simple. The less pause that you button, the more Soulsier it that Elden when Demon the it you Ring. Our epic new boss is so hard he actually reads the state of the tinnitus filter in your accessibility settings, and then he


Sadly I misremembered and this one wasn’t from LW but I’ll share it anyway. I think I had just finished reading a bunch of the “Most effective aid for Gaza?” reddit drama which was like a nuclear bomb going off, and then stumbled into this shrimp thing and it physically broke me.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
source: https://benthams.substack.com/p/the-best-charity-isnt-what-you-think
Discussion here (special mention to the comment that says “Did the human pet guy write this”): https://awful.systems/comment/5412818


I forget where I heard this or if it was parody or not, but I’ve heard an explanation like this before regarding “why can’t you just put a big red stop button on it and disconnect it from the internet?”. The explanation:
And if you ask “why can’t you do that and also put it in a Faraday cage?”, the galaxy-brained explanation is:


Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611
Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.
. . .
Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.
Taking a wild guess, it’s Yudkowsky. “very knowledgeable people” and “many/most experts” are staying on my AI apocalypse bingo sheet.
Even among people critical of AI (who don’t otherwise talk about it that much), the AI apocalypse angle seems really common, and it’s frustrating to see it normalized everywhere. Though I think I’m more nitpicking than anything, because it’s not usually their most important issue, and maybe it’s useful as a wedge issue just to bring attention to other criticisms of AI? I’m not really familiar with Bernie Sanders’ takes on AI or how other politicians talk about this. I don’t know if that makes sense, I’m very tired.


Some light uplifting news amid *gestures at everything*. I saw this a minute ago from the guy who runs the Coding Horror blog and co-founded Stack Overflow and Discourse: https://www.reddit.com/r/IAmA/comments/1ifd3ys/im_giving_away_half_my_wealth_to_make_the/
No EA stuff! $1M each going to eight great charities and non-profits as far as I can tell: Children’s Hunger Fund, First Generation Investors, Global Refuge, NAACP Legal Defense and Educational Fund, PEN America, The Trevor Project, Planned Parenthood, and Team Rubicon. (from The Trevor Project’s blog post)


I’m in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It’s one of those things where AI bros will go, “Look, it’s so good at poetry!!” but they have no taste and can’t even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. It’s a little more garbled and broken, but the output from a Markov chain generator is a lot more interesting in my experience. Interesting content that’s a little rough around the edges always wins over smooth, featureless AI slop in my book.
Slight tangent: I was interested in seeing how they’d work for open-ended text adventures a few years ago (back around GPT-2 and when AI Dungeon launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (Of course, the tech-optimist-goodthink way of framing this is “small LLMs are really good at creative writing for their size!”)
I don’t think most people can even tell the difference between a lot of these models. There was a snake-oil LLM (more snake oil than usual) called Reflection 70B, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.
Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]
I had a similar idea, interesting to see that it actually works. [ . . . ]
I think that’s cool, if you use a regular system prompt it behaves like regular llama-70b. (??!!!)
It’s the first time I’ve used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I’m very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.
For storytelling or creative writing, I would rather have the more interesting broken-English output of a Markov chain generator, or maybe a tarot deck or a D100 table. Markov chains are also genuinely great for random name generators. I’ve actually laughed at Markov chain output with friends when we throw a group chat into one and see what comes out. I can’t imagine ever getting something like that from an LLM.
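(For anyone who hasn’t played with one: a word-level Markov chain generator really is just a handful of lines. Here’s a rough sketch in plain Python, order-2 chains, nothing fancy; the chat.txt filename is a made-up placeholder for whatever corpus you dump into it.)

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each run of `order` consecutive words to the words seen right after it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    """Walk the chain from a random starting key, picking each next word at random."""
    output = list(random.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(output[-order:]))
        if not followers:  # dead end: jump to a random key and keep going
            followers = chain[random.choice(list(chain.keys()))]
        output.append(random.choice(followers))
    return " ".join(output)

# chat.txt is a placeholder: dump a group chat export (or any text) into it and run
words = open("chat.txt", encoding="utf-8").read().split()
print(generate(build_chain(words)))
```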


Getting flashbacks to the people who thought the GameStop guy was a leftist


caption: “AI is itself significantly accelerating AI progress”

wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree


I’ve seen people defend these weird things as being ‘coping mechanisms.’ What kind of coping mechanism tells you to commit suicide (in like, at least two different cases I can think of off the top of my head) and tries to groom you?


Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.
You see, it’s powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.
At least The Rock’s child molesting robot didn’t require dedicated nuclear power plants


One of my favorite meme templates for all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy


I love the word cloud on the side. What is 6G doing there



Oh wow, Dorsey is the exact reason I didn’t want to join it. Now that he jumped ship maybe I’ll make an account finally
Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames
e: oh god it’s a lot worse than just crypto people and Dorsey. Back to procrastinating


I know this shouldn’t be surprising, but I still cannot believe people really bounce questions off LLMs like they’re talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery
I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, submitted on 22 Jan 2024.
It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.
Then he immediately follows up with:
Then I started to discuss with o1. [ . . . ] It says yes.
Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].
Then I asked o1 [ . . . ], to which it says yes too.
I’m not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
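(If I’m remembering the paper right, and this is my loose paraphrase rather than anything the authors wrote, the core result is a diagonalization argument rather than an empirical point about training data, something like:

```latex
% Loose sketch of the paper's formal claim (my paraphrase, not a quote):
% for any computably enumerable family of LLMs there is a ground-truth
% function they all disagree with on infinitely many inputs.
\[
  \forall\,\{h_i\}_{i\in\mathbb{N}}\ \text{(computable LLMs)}\quad
  \exists\, f\ \text{(ground truth)}:\quad
  \forall i,\ \bigl|\{\, s : h_i(s) \neq f(s) \,\}\bigr| = \infty .
\]
```

so no amount of extra data or cleverer training makes it go away, which makes asking o1 to confirm it even funnier.)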


Cambridge Analytica even came back from the dead, so that’s still around.
(At least, I think? I’m not really sure what the surviving companies are like or what they were doing without Facebook’s API)
Former staff from scandal-hit Cambridge Analytica (CA) have set up another data analysis company.
[Auspex International] was set up by Ahmed Al-Khatib, a former director of Emerdata.


I think he might have ADHD.
Oh no, I don’t think we’re ready for him to start mythologizing autism + ADHD.
Watching my therapist pull up Musk facts on his phone for 40 minutes going “bro check this out you’re just like him frfr” the moment he learned I was autistic was enough for me. Please god don’t let Musk start talking about hyperfocusing.
I’ve seen the same thing and it’s reassuring lol.
I lurk on SubredditDrama and CuratedTumblr, and I feel like the common reaction to LW has gone from a few negative comments and “really? that’s crazy” replies five years ago to being much more aware. Years ago you’d see maybe one person familiar with them, then a couple of people respond who are totally out of the loop, and maybe one crazy rationalist chime in to nuh-uh them. Now, anything rationalist-related usually has a bunch of people bringing up the Harry Potter or acausal robot god stuff right away.
I use the tag feature in RES a lot to keep track of people I like hearing from. Years ago I mostly saw the same names when LW stuff came up, but now there’s always a ton of people I’ve never seen before who are familiar with it.
It’s also reassuring because I really don’t want to be the person to say anything first and it’s easier to chime in on a discussion someone else has already started.