- cross-posted to:
- [email protected]
- [email protected]
Deep inside Alphabet, the parent company of Google, a secretive lab is working on a promise so audacious it sounds like science fiction: to “solve all diseases.” The company, Isomorphic Labs, is now preparing to start its first human clinical trials for cancer drugs designed entirely by artificial intelligence.
In a recent interview with Fortune, Colin Murdoch, President of Isomorphic Labs and Chief Business Officer of Google DeepMind, confirmed the company is on the verge of this monumental step. For anyone who has watched a loved one battle a devastating illness, the hope this offers is immense. But for a public increasingly wary of AI’s power, it raises a chilling question: can we really trust a “black box” algorithm with our lives?
I’m against general-purpose LLMs, but this is a valid use case for “AI”. There are known constraints, the model is trained on a very specific dataset, and you have human experts in the loop for assurance.
This also isn’t new. Protein folding models have been successfully used, and validated, for a number of years.
It may not be an LLM. Or a general purpose LLM. Or whatever alphabet soup is used this week.
It’s a Google-related product. Google has nobody’s best interests at heart but Google.
Even if it is called Alphabet this week.
Yes. If there’s any one thing that pisses me off about the latest “AI” bubble, it may be that… AI has been around for decades, and has been useful for decades while this “GenAI” scam BS is taking center stage.
I took a course in college named “Introduction to Artificial Intelligence” in like 2005. In that course, I learned about the A* algorithm, which is used, among other purposes, by games to let NPCs navigate from point A to point B, potentially around obstacles or over terrain of different passability. That shit is genuinely useful and bears no resemblance to LLMs or Stable Diffusion. And yet it was called “AI” back in… like the 1960s and was still called “AI” in 2005. Probably still is in college courses around the world.
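For anyone who never took that course: the pathfinding described above can be sketched in a few dozen lines. This is a minimal, self-contained Python sketch of A* on a 4-connected grid; the `a_star` function, the grid encoding, and the Manhattan heuristic are illustrative choices, not anything from the article.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid: 0 = walkable, 1 = obstacle.
    Returns the path from start to goal as a list of (row, col)
    cells, or None if the goal is unreachable."""
    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid,
        # so A* is guaranteed to return a shortest path.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk the came_from links backwards to rebuild the path.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry, a cheaper route was found later
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1  # uniform step cost of 1 per move
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal not reachable
```

Terrain of different passability would just mean replacing the uniform `ng = g + 1` with a per-cell movement cost; the rest of the algorithm is unchanged.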
Now, I haven’t read the article, but I’d have to hope nobody put too much blind faith in the AI’s output here. But the right tools in the hands of sufficiently well-educated scientists, be they called “AI” or not, can certainly assist in things like drug development.
Oh, also, you can call just about anything that’s done with code “AI” even if it really has nothing to do with artificial intelligence. My employer was fairly recently sold an automated customer service tool by a big, well-known software vendor, which another team I work with distantly had to configure and program every step of, from soup to nuts. (There was absolutely no machine learning involved or anything like that. This other team had to decide all the flows the customers could go through.) But you can bet your ass you couldn’t read any three consecutive words in any of their marketing materials about it without at least one of the three being “AI”.
I’m sure there are microwave ovens no more sophisticated than the one I have (spoiler: it’s the dumbest microwave oven I could find) that are being marketed with the term “AI”.
Every pill just kills the patient. They are now technically cured of any and all diseases.
The SCP-049 way.
My cure is most effective.
OK, I’m staying away from hospitals starting today.
Guess I’m going to start stocking up on microwave popcorn packets. If there is no circus as a consequence of this company, then I guess I’ll be stocked up for movie nights.
We’ll have to see what happens if this all goes well or not.
If it goes well, we’ll have to see how effective the drugs are, whether they really are more effective than conventional ones.
If they do prove effective, let’s look at the legal aspect: can products made with AI be patented?
If they can’t be patented, we’ll see many companies launch these drugs as generics and at a lower price, in addition to being more effective. Big Pharma will likely start suing to avoid losing customers, and it will be interesting to see people support these companies just because they are against AI.
Not to mention that this could benefit them, and that they wouldn’t have to pay so many medical bills or pay for such expensive drugs that are actually very cheap to produce.
I’m sure it will have as much success as holistic medicine.
They can’t even solve basic arithmetic, and they’re going to cure all diseases, right…
Different models, different problems. While LLMs are a bit of a parlor trick, this particular application is actually well suited for AI, as it involves the search for patterns in massive amounts of data. It’s not just guessing the next word, but actually seeking probabilistic causes and effects hidden within more data than researchers could possibly search on their own.
You’re misunderstanding what’s happening here. You’re talking about an LLM. That’s not what this is.
I know it’s not an LLM; I’m just saying it’s not solving diseases, it’s finding the most likely protein shapes. It was more of a rant against AI hype and how they present it. Perhaps I should have worded that better.