my god imagine being like this
this one was definitely my pleasure
“how can you fools not see that Wikipedia’s utterly inaccurate summary LLM is exactly like digital art, 3D art, and CGI, which are all the same thing and are/were universally hated(???)” is a take that only gets wilder the more you think about it, and it’s one they’ve been pulling out for at least two years
I didn’t catch much else from their posts, cause it’s almost all smarm and absolutely no substance, but fortunately they formatted it like paragraph soup so it slid right off my eyeballs anyway
why would anyone want to play as an attractive Puerto Rican when peak sexiness has already been achieved
god I looked at your post history and it’s just all this. 2 years of AI boosterism while cosplaying as a leftist, but the costume keeps slipping
are you not exhausted? you keep posting paragraphs and paragraphs and paragraphs but you’re still just a cosplay leftist arguing for the taste of the boot. don’t you get tired of being like this?
holy shit I’m upgrading you to a site-wide ban
so many paragraphs and my eyes don’t want any of them
Hinton? hey I have a pretty good post summarizing what’s wrong with Hinton, oh wait it was you two weeks ago
what are we doing here
you want to know what e/acc is? it’s when some fucker comes and makes the stupidest posts imaginable about LLMs and tries their best to sound like a recycled chan meme cause they think that’ll give them a pass
bye bye e/acc
some experts genuinely do claim it as a possibility
zero experts claim this. you’re falling for a grift. specifically,
i keep using Claude as an example because of the thorough welfare evaluation that was done on it
asking the LLM about “its mental state” is part of a very old con dating back to the Mechanical Turk playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.
i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this, though; he’s not exactly a Michael Levin mind philosopher, he just wants to score points by implying it has agency
you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?
Like it has at least the same amount of value as like letting an insect out instead of killing it
that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.
you say you acknowledge the harms done by LLMs, but I’m not seeing it.
centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:
i won’t say that claude is conscious but i won’t say that it isn’t either and it’s always better to err on the side of caution
the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.
claims that LLMs are conscious, in spite of everything we know from computer science and information theory, should be treated like any other pseudoscience being pushed by grifters: as systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.
if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?
schizoposting
fuck off with this
even if it’s wise imo to try not to be abusive to AIs just incase
describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?
it didn’t take me long at all to find the most recent post with a slur in your post history. you’re just a bundle of red flags, ain’t ya?
don’t let that edge cut you on your way the fuck out
no problem at all! I don’t think the duplicate’s too much of an issue, and this way the article gets more circulation on both Mastodon and Lemmy.
E: ah, this is from mastodon. I don’t know how federation etc. works.
yep! any mastodon post whose first line looks like a subject line and which tags the community is treated by Lemmy as a new thread in that community. now, you might think that’s an awful mechanism in that it’s very hard to get right on purpose but very easy to accidentally activate if you’re linking and properly citing an article in the format that’s most natural on mastodon. and you’d be correct!
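to make the footgun concrete, here’s a rough sketch of that heuristic in Python. to be clear, every name and threshold in it is my guess for illustration, not Lemmy’s actual federation code:

```python
# toy sketch of the heuristic described above. all function names,
# field names, and thresholds are invented for illustration; this is
# not Lemmy's real code.

def looks_like_subject_line(line: str) -> bool:
    # "looks like a subject line": short, a single line, and not ending
    # like running prose. the real criteria are a guess on my part.
    return 0 < len(line) <= 100 and not line.endswith((".", "!", "?", ","))

def classify_federated_post(content: str, mentions: list[str],
                            community: str) -> str:
    lines = content.splitlines()
    first = lines[0].strip() if lines else ""
    # tagging the community + a subject-looking first line => new thread.
    # a politely cited article link on mastodon hits both conditions by
    # accident, which is exactly the problem described above.
    if community in mentions and looks_like_subject_line(first):
        return "new thread (first line becomes the title)"
    return "ordinary comment"
```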
I feel like this article might deserve its own post, because I think it’s the first time I’ve ever seen an attempted counter-sneer. it’s written like someone’s idea of what a sneer is (tptacek swears sometimes and says he doesn’t give a shit! so many paragraphs into giving a shit!) but all the content is awful bootlicking and points that don’t stand up to even mild scrutiny? and now I’m wondering if tptacek’s been reading us and that’s why he’s upset, or if this is what an LLM shits out if you ask it to write critihype in the tone of a sneer
congrats, that’s awesome news!
it’s not pseudoscience unless it’s from the “literally studying ghosts” region of crankery, otherwise it’s just sparkling… actually I don’t know what your point is with all this
I agree, you are fucking done. good job showing up 12 days late to the thread expecting strangers to humor your weird fucking obsession with using LLMs for something existing software does better
imagine if you read the article at all instead of posting 6 paragraphs fantasizing about an impossible game, one that LLMs do nothing to enable because they’re stochastic chatbots and don’t understand game systems (just like you!)
you know it’s weird
I looked for established reviews of Suck Up, the perfect local LLM game that isn’t local and is barely a game, and I couldn’t find any
all of the hype for this piece of shit that came out in 2023 and made zero impact was from paid influencers and the game’s dev Gabriel spamming reddit on a regular basis
so I guess what I’m trying to say is: fuck off with this shit, we’re not buying
Weird that you’re downvoting me already. Lol
weird that you’re complaining
The game Suck Up! is the perfect example save for the part where the developers chose to run it server-side on release
the perfect example. yeah, this is barely a game and they couldn’t even make it run locally. all of this shit is just an awful tech demo for an expensive gimmick. none of it is fun, nobody plays it. why in fuck are you even here pumping it?
it’s fucking weird how I only hear about open source LLMs when someone tries to make this exact point. I’d say it’s because the open source LLMs fucking suck, but that’d imply that the commercial ones don’t. none of this horseshit has a use case.