Star Trek: Voyager S2E23 “The Thaw”
Yet an artificial lung can let someone breathe air.

Data can be pretty spooky.
And then I saw a weird movie (“Death to Smoochy”) years prior* where his actor played kind of a gross villain and it was REAL disturbing.
Brent Spiner does do a good villain. Watch “Out to Sea” to see him in a campier antagonistic role.
I’ll have to give it a shot. Certainly not his fault but I pretty much can only see Data acting strange when he does other roles lol
Is it like Lore, or worse?
Definitely weirder. Like this loop every few minutes where my brain forgets and goes “wtf is Data doing…” just subconsciously.
~ Oye cómo va ~
Yeah, when I searched “stng data upset”, I got a lot of great pics. I just wanted mild upset, as I’m not sure if a positronic brain and AI are the same.
Strong disagree. There’s no reason why sufficiently advanced AI couldn’t replace brain function. Note this is ACTUAL AI, not LLMs, which are not intelligence in any way, shape, or form.
I appreciate the original memes, but I could really do without the bouncing text 🙂
Hmm, I have actually had people compliment my bouncing text. There are two different types of bounces here. Which do you dislike?
- When each line of text enters the frame, it sort of bounces into place by overshooting the final position, then overcorrecting, until it finally reaches the final position.
- As each line of text overshoots its final position, it bumps into the line above it, causing the line above to bounce a little bit.
Or are both unwanted?
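For what it’s worth, that overshoot-then-settle motion is basically an under-damped spring. Here’s a minimal Python sketch of that kind of easing curve (function name and parameters are my own invention, just to illustrate; I have no idea how the GIFs are actually rendered):

```python
import math

def spring_position(t, target, freq=3.0, damping=0.6):
    """Under-damped spring easing for t in [0, 1].

    Starts at 0, overshoots `target`, then oscillates with
    shrinking amplitude until it settles at `target`.
    """
    w = 2 * math.pi * freq  # oscillation frequency in radians
    # Classic under-damped step response: 1 - e^(-d*w*t) * cos(w*t)
    return target * (1 - math.exp(-damping * w * t) * math.cos(w * t))
```

Sampling this per frame gives exactly bounce #1: the line shoots past its final spot, corrects back, and dies down. Bounce #2 would be a second spring triggered on the line above when the overshoot crosses it.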
You don’t need to change it on my account! But in my opinion, #2 is the bigger annoyance. #1 might be ok without #2.
You don’t need to change it on my account!
It wouldn’t just be your account, seeing that your comment has a lot of upvotes. If enough people definitely dislike it, I’ll avoid doing it with future posts. But I don’t plan on fixing this one unless people are really that disgusted by it lol
#1 might be ok without #1.
I assume you mean “#1 might be ok without #2”. While I pretty much always do both, here is a previous GIF that I made without doing that for some reason. Better?

I assume you mean “#1 might be ok without #2”.
Oops, yes. Fixed!
I’m still not a fan, personally. I think just sliding in or maybe reducing the bounce would be better. That being said, I think it’s much better without all of the text bouncing.
Seems to me like a program masquerading as intelligence could outsmart Elon Musk
She did say “actual brain functions”…
An LG Smart Fridge that simply agrees with everything he says would outsmart him.
That’s like saying there’s no way a machine can replicate hand sewing.
You’re right, thinking and sewing are exactly as complex.
Complexity isn’t relevant to my analogy.
The lessons learned from the failures and eventual success of machine sewing are. Unless you’re being sarcastic.
Sewing really is surprisingly complex. I was being a little sarcastic 😆 . But I admit I don’t understand the analogy; what does human thought have to do with human sewing?
Sewing machines don’t make stitches the way people do. People tried for decades and failed to build machines that sewed like humans. They work by making their stitches in ways humans never would, or really could. They had to invent a whole new way to get the job done, not remotely the way a person would do it.
AI will very likely be the same. Expecting machine minds to do things the same way a human mind would, to mimic human thought, strikes me as some kind of human centric bias.
Ah, in that case we agree! I also believe that if a genuine AI ever comes about it will be quite alien.
That’s like saying there’s no way a machine can replicate hand sewing.
Gets me thinking there’s no way I could do sewing consistently. My ADHD novelty-seeking creative side (overpowering my autism side) would be switching stitching types constantly, before I gave up on the tedium of it. Could a machine do that?
There are sewing machines that offer different stitching modes. In fact, different use cases have different optimal stitches. Like a decorative stitch can be whatever, and a hem doesn’t need to handle the same kind of forces as a join, which itself might require different strengths (like a dress shirt sleeve vs. a jeans pocket).
It can’t; it’s too perfect, too neat…
/s
Because of the microtubules and quantum effects, right?
1 - Well, it can simulate them, and we will probably never use simulation when we want intelligence. (We will for understanding the brain, though)
2 - It doesn’t matter at all, intelligence doesn’t need to think like us.
3 - We are nowhere close to any general one, and the more investors bet all their money and markets sell all their hardware to the same few companies that will burn out at their local maximum, the further away we will get.
2 - It doesn’t matter at all, intelligence doesn’t need to think like us.
Agreed, but look at the history of how humans have thought about the presumed intelligence (or lack of it) in animals; we seem to be bad at recognizing intelligence that doesn’t mirror our own.
You think we won’t be able to use AI because we can’t recognize intelligence?
Those are two separate questions, I think.
- “You think we won’t be able to use AI” – If there is some day actual artificial intelligence, I have no idea if humans can “use” it.
- “we can’t recognize intelligence?” – I think you can make the case that historically we haven’t been great about recognizing non-human intelligence.
What I am saying is that if we ever invent an actual AGI, unless it thinks and, more importantly, speaks in a way we recognize, we won’t even realize what we invented.
Recognizing the intelligence is something you pushed into the discussion, I just want to know why you think it’s important.
Hm? I was agreeing with your 2nd point. I was merely adding to that by pointing out that we’ve only recently begun to recognize non-human intelligence in species like crows (tool use), cetaceans (language), higher primates (tool use, language, and social organization); which leaves me concerned that, if an AI were to “emerge” that was very different than human intelligence, we’d likely fail to notice it, potentially cutting off an otherwise promising development path.
Oh ok, you have a completely new concern.
I don’t think we will fail to spot intelligence in AIs, since they have advocates, something that animals never had. But we have a problem in that “intelligence” seems to be a multidimensional continuum, so until we solve lots of different kinds of it, there will exist things that fit some form of it but really don’t deserve the unqualified name.