AI bros and their fans.
Show me one “AI bro” claiming LLMs are conscious.
https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/
This was back in 2022, when LLMs were dramatically worse, too.
Go to /r/MyBoyfriendIsAI
Don’t worry, we got bleach for your eyes for when you get back
“They taught AI to talk like a middle manager…” isn’t referring to the people at /r/MyBoyfriendIsAI. Those are users, not the creators of it.
Do you not understand what an AI bro is? Here’s a start: https://www.urbandictionary.com/define.php?term=AI+Bro
In this context, “AI bro” clearly refers to the creators, not the end users - which is what your link is about. Users aren’t the ones who “taught it to speak like a corporate middle manager.” That was the AI company leaders and engineers. When I asked who “they” are, I was asking for names. Someone tried to dodge by saying “AI bros and their fans,” but that phrase itself distinguishes between two groups. I wasn’t asking about the fans.
Let me rephrase: name a person responsible for training an AI to sound like a corporate middle manager who also believes their LLM is conscious.
Alright, I see your angle here. Creators generally try to avoid answering that question because they get more money if they muddy the waters. Thanks for elaborating!
Some more interesting links:
https://the-decoder.com/openai-leaves-the-question-of-ai-consciousness-consciously-unanswered/
https://www.forbes.com/sites/lanceeliot/2024/07/18/why-americans-believe-that-generative-ai-such-as-chatgpt-has-consciousness/
try to avoid answering that question because they get more money if they muddy the waters

I don’t personally think this is quite fair either. Here’s a quote from the first link:

According to Jang, OpenAI distinguishes between two concepts: ontological consciousness, which asks whether a model is fundamentally conscious, and perceived awareness, which measures how human the system seems to users. The company considers the ontological question scientifically unanswerable, at least for now.
To me, as someone who has spent a lot of time thinking about consciousness (the fact of subjective experience), this seems like a perfectly reasonable take. Consciousness itself is entirely a subjective experience. There’s zero evidence of it outside of our own minds, and it can’t be measured in any way. We can’t even prove that other people are conscious. It’s a relatively safe assumption to make, but there’s no conclusive way to prove it; we simply assume they are because they seem like it.
In philosophy there’s a concept called the “philosophical zombie”: a creature that is outwardly indistinguishable from a human but completely lacks any internal experience. This is basically what the robots in the TV series Westworld were - or at least so their creators thought.
This is all to say that there is a point at which an AI system mimics a conscious being so convincingly that it’s not entirely ridiculous to worry that it actually is like something to be that system, and that we’re effectively keeping a conscious being as a slave. If we had a way to prove that it is not conscious, there would be no issue, but we can’t.

People used to justify mistreatment of animals by claiming they weren’t conscious either, and very few people think that anymore. I’m not saying an LLM might be conscious - I’m relatively certain they’re not - but they’re also the most conscious-seeming thing we’ve ever created, and they’ll just keep getting better. I’d say there is a point past which these systems act conscious so convincingly that one would basically need to be a psychopath to mistreat them.
I don’t really agree: acting like a conscious being (because it’s a language model) still doesn’t make it conscious, perceived or not.
Have you read Blindsight by Peter Watts? It’s an interesting book that touches on self-awareness and how we perceive it.