False dichotomy.
People using AI to cure cancer are not the people implementing weird chatbots. Doing one has zero effect on the other.
That’s not what it’s saying though. It’s making the very reasonable point that if you (the leader of an AI company) think you’re about to have an AGI that can do sci-fi AGI things, then why the hell would you be developing chatbots, which are significantly less advanced technology, only for your chatbot to be immediately superseded by your AGI?
If you’re about to get access to the world’s largest diamond mine you’re not going to spend the next few months messing around with get-rich-quick schemes.
Pretty sure this is a direct dig against Sam Altman specifically, who is making huge claims despite no evidence that they’re making progress on AGI.
The people actually using AI to cure cancer were probably doing it before OpenAI (remember when we called it Machine Learning?) and haven’t been going to the media and Musking the completion date
First, the team inputted the structure of the cancer target into a generative AI model called RFdiffusion. That model had been trained on known protein structures and their amino acid sequences, the strings of building blocks that fold up into individual proteins. RFdiffusion proposed protein shapes that fit the target like a key fits a lock. A second AI model suggested strings of amino acids that, when folded into 3-D structures, would likely form the proposed shapes. Jenkins and his colleagues then blasted through tens of thousands of protein designs and, with the help of a third AI model that checked all that work, narrowed the designs down to 44 options that they tested in the lab. One appeared to be a winner. In lab experiments, human T cells engineered to have the AI-designed protein on their surface could rapidly kill melanoma cells and prevent the cancer from growing.
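The pipeline described above is essentially a generate-then-filter funnel: one model proposes shapes, a second proposes sequences for those shapes, and a third independently checks the work so only a handful of designs reach the lab. A toy sketch of that funnel, with stand-in functions and made-up numbers (not the real RFdiffusion/sequence-design APIs), might look like this:

```python
import random

# Toy illustration of the three-stage design funnel: propose many candidate
# proteins, then keep only the ones an independent "checker" model scores
# highly. All functions and thresholds here are illustrative stand-ins.

random.seed(0)

def propose_backbone(target):
    # Stand-in for the diffusion model (RFdiffusion in the article):
    # propose a shape that fits the target like a key fits a lock.
    return {"target": target, "shape_id": random.random()}

def design_sequence(backbone):
    # Stand-in for the second model: an amino-acid string expected
    # to fold into the proposed shape (20 standard residues).
    return "".join(random.choice("ACDEFGHIKLMNPQRSTVWY") for _ in range(60))

def check_fold(sequence, backbone):
    # Stand-in for the third model: a confidence score that the sequence
    # actually refolds into the proposed shape. Real pipelines use a
    # structure predictor here; this toy just draws a random score.
    return random.random()

def design_funnel(target, n_designs=10_000, cutoff=0.999):
    shortlist = []
    for _ in range(n_designs):
        backbone = propose_backbone(target)
        seq = design_sequence(backbone)
        if check_fold(seq, backbone) >= cutoff:  # keep top-confidence designs
            shortlist.append(seq)
    return shortlist

hits = design_funnel("melanoma_target")
print(f"{len(hits)} of 10000 candidate designs pass the checker")
```

The point of the structure is the narrowing: tens of thousands of cheap in-silico candidates get cut down to a few dozen worth the cost of wet-lab testing, which matches the 44 designs the team actually synthesized.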
Yeah, the people actually making progress were doing it before Sam Altman. RFdiffusion was made by the same people who released Rosetta@home 20 years ago
🤣🤣🤣
Yeah it’s not a very good one though because it’s predicated on the idea that a company can’t make more than one product.
I also don’t believe OpenAI is anywhere close to AGI, but obviously they can try to make AGI and make horny chatbots at the same time.
No, it’s still a pretty good dig. Their inflated valuation hinges on AGI but the only news they actually provide is that they’re going to let subscribers fuck their chatbots for $200/month (or whatever it costs)
It couldn’t be more obvious that they’re grasping at straws
Well, current AI seems incredibly good as a basic assistant. Just ask it for sources, it will only piss you off 10% of the time.
It’s certainly not useless, I’m not a blanket hater
It’s making fun of things like: “ChatGPT boss predicts when AI could cure cancer”.
sam altman and openai just announced they are allowing erotica for “verified users”
its only a matter of time before they allow full blown pornographic content, the only thing is that you have to verify your ID. so, openai and the “gubment” will know all the depraved shit you will ask for, and i guarantee it will be used against you or others at some point.
it’ll either become extremely addictive for the isolated who want the HER experience, or it will be used to undermine political dissidents and anti-fascist users.
despite what people think, openai does in fact hand over data to the authorities (immediate government representatives) and that information is saved and flagged if they deem it necessary.
basically if you say anything to chatgpt, you can assume at some point it will be shared with law enforcement/government/government-adjacent surveillance corporations like palantir.
they used to say they would refuse to make this type of content, knowing full well the implications of what might happen if they did. now due to “public demand” they are folding.
my advice, get a dumb phone, a digital camera, and a laptop to still have access to the internet and tools. reduce your physical ability to access the internet so readily. it’s saturated with AI, deepfakes, agents, and astroturfing bots that want you plugged in 100% of the time so they can further your addictions, manipulate you, and extract as much data from you as possible.
Fully automated luxury kompromat
That’s why I have all my private chats with deepseek.
I’m an adult, there’s no reason I can’t have the bot talk dirty to me. That’s a lot of text for essentially saying you wish the censorship stayed.
Surveillance state and data extraction are real issues that need to be tackled at the root (which isn’t AI).
i feel like you really need to take the time to research the implications of AI and surveillance more. specifically how the US and virtually every government and tech corporation on the planet is intending/already using them, together.
AI exists because of illegal/nonconsensual data extraction, it’s almost entirely built on it, and there’s no way that will ever stop. you can attempt to regulate it, but the US government deregulated it on purpose for a bribe and the promise of more control.
it won’t happen while money and power still exist at the end of the rainbow.
This is precisely why I’m saying AI and our surveillance state are completely different issues.
Yes the surveillance state is really bad and we need comprehensive laws that protect our personal data.
That being said, what the copyright industry is desperately clinging to has nothing to do with that. Your second paragraph has nothing to do with the first. So I don’t agree with both; I only agree with the first, which is my main point: they are separate issues.
That is absolutely not what he is saying. He is saying that governments across the world are starting to crack down on anything they deem antisocial behavior, and the companies that provide you with those services are 100% willing to sell you out when asked to.
I should be allowed to buy crack cocaine or a prostitute, since it is no one’s business what I do in my free time. Unfortunately Uncle Sam disagrees, so it’s in my best interest to not pay for those services with a credit card that can be traced right back to me.
Yes, AI already helps in oncology research and has for years and years, probably decades.
Think about all of the erotic chatbots that those oncologist phds could have created instead.
and then they could get together with boston dynamics and we’ve got
I mean, at least they’d be smart and horny, right?
My favorite kind of phd
Pound HarDer?
Probably High on Drugs
Maybe that’s what they have been doing all this time: every time someone new comes in thinking they are infallible and will solve the issues, they see the amazing anatomically correct sex chats the scientists have made and get sucked into a world of nonstop orgasms
This is the most sane take in the whole post.
No, the thought here is that they’re going to hit the holy grail.
A super intelligent being would be able to cure cancer from first principles, just like anything else. It would understand the laws of reality so well it would be like magic, coming up with wonders we might never understand
That’s the idea anyways. A digital deity
You’re getting downvoted because of how you put it. Most people do not understand the difference between AI used for research (like protein sequencing) and LLMs.
Also, the people making LLMs are not making protein sequencers.
No, OP is about how OpenAI said they were releasing a chatbot with PhD level intelligence about half a year ago (or was it a year ago?) and now they are saying that they’ll make horny chats for verified adults (i.e. paying customers only).
What happened to the PhD level intelligence Sam?! Where is it?
I agree, for most people ‘AI’ is ChatGPT and their perception of the success of AI is based on social media vibes and karma farming hot takes, not a critical academic examination of the state of the field.
I’m not remotely worried about the opinions of random Internet people, many of whom are literally children just dogpiling on a negative comment.
Reasonable people understand my point and I don’t care enough about the opinions of idiots to couch my language for their benefit.
You’re my role model for the day
Ah I see the misunderstanding. Government pivoting is the problem.
NIH blood cancer research was defunded a few months ago, while around the same time the government announced it will be building $500 billion worth of data centers for LLMs.
“If LLM becomes AGI we won’t need the image-recognition linear algebra knowledge anymore, obviously.”
Researchers are still good and appreciated no matter what annoying company is deploying their work.
Exactly. Gen AI is a very large field.