- cross-posted to:
- [email protected]
- [email protected]
Without the paywall
The nature of these discussions and the fact that MS described AGI as “an LLM service that reaches $100 B per year in revenue” is evidence that much of the marketing around “AI” is basically fraudulent.
Clearly LLMs specifically and ML models in general have many powerful use cases, but that doesn’t mean the people involved aren’t running a scheme to profit off the hype.
Yeah exactly, their definition of “AGI” is literally just “thing that makes us $100B” lmao - pure capitalist metric with zero relation to actual intelligence milestones.
I bet this rhetoric goes so hard that the moment we reach an economically useful "AGI", it will be all hands on deck to stop it from going ASI.
They specifically want AI that can follow orders without thinking for itself.
Something like in the video game Detroit: Become Human, where there are androids that are basically a humanized AGI, but when they start to gain consciousness and peacefully seek their rights, humans begin to behave like the Third Reich.
The message in Detroit: Become Human seems to be that we would go full extermination out of fear of losing our spot at the top of the food chain. Which does seem plausible.
What I really didn't like, though, was how the androids were designed to suffer.
They had no ability to shut themselves off, and if you kept abusing them they reacted and responded like real people. They were forced to stay sentient in their single body.
This is also why they had to experience standing in the back of the bus, just for the narrative. They could have just sent a drone, or shut down the body while traveling and done something in cyberspace instead.
If there is one thing certain about digital "life", it's that it won't have the same restrictions as us: a body is optional, basically a set of clothes.
While I did enjoy Detroit, it was designed around its own preset narrative rather than exploring what could emerge organically if machines became sentient. I couldn't really take it seriously because of that.
LLMs cannot think or draw conclusions; they're just guessing based on the content of old Reddit posts.
AI can be way more than just a single LLM, though.
AGI and ASI still mean human-level and beyond-human-level AI.
Whether those concepts are achievable in our lifetime is an entirely different matter.
Negotiating tactic somehow becomes headline
The sooner we can foment OpenAI’s collapse, the better. It will cause the “AI” bubble to pop, and we can move on.
it is openai-over.