which fucking sucks, because AI was actually getting good: it could detect tumours, it could work things out fast, it could recognise images as a tool for the visually impaired…
But LLMs are none of those things. All they can do is look like text.
LLMs are an impressive technology, but so far nearly useless and mostly a nuisance.
down in Ukraine we have a dozen or so image analysis projects that can’t catch a break, because all investors can think about is either swarm drones (quite understandably) or LLM nothingburgers that burn through money and evaporate every nine months. Meanwhile those image analysis projects manage to make progress on what is basically scraps and leftovers.
the problem is that technical people can understand the value of different AI tools. But try telling an executive with a business major how mind-blowing it is that the lab whose programs mastered Go and StarCraft went on to solve protein folding (I studied biology in 2010, and they kept repeating how impossible solving protein structures in silico was).
But a chatbot that tells the executive how smart and special they are?
That’s the winner.
yeah, that’s tough to beat
Multimodal LLMs are definitely a thing, though.
yeah, but it’s better to use the right tool for the job than to throw a suitcase full of tools at a problem
That’s not…
sigh
Ok, so just real quick top level…
Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).
This happens because the model has to combine a lot of different pieces of information into one coherent internal representation (what’s called the “latent space”).
This process is medium-agnostic. Given text it will do it with text, given photos it will do it with photos, and given both it will do it with both, fitting the two together at their intersection.
The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.
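To make that concrete, here’s a toy sketch in PyTorch (nothing from any real product; every size and name below is made up for illustration) of how text tokens and image patches can be mapped to the same width and fed through one transformer, so attention mixes both media in a single latent sequence:

    import torch
    import torch.nn as nn

    class TinyMultimodalTransformer(nn.Module):
        """Toy model: one shared latent space for text tokens and image patches."""
        def __init__(self, vocab_size=1000, d_model=64, patch_dim=3 * 16 * 16):
            super().__init__()
            # Each medium gets its own adapter into the same embedding width...
            self.text_embed = nn.Embedding(vocab_size, d_model)
            self.patch_embed = nn.Linear(patch_dim, d_model)
            # ...and a single transformer encoder mixes them, so attention can
            # relate words to image regions directly.
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, token_ids, patches):
            text = self.text_embed(token_ids)        # (B, n_tokens, d_model)
            image = self.patch_embed(patches)        # (B, n_patches, d_model)
            fused = torch.cat([text, image], dim=1)  # one shared sequence
            return self.encoder(fused)               # joint latent representation

    # 8 text tokens + 4 image patches come out as one 12-step latent sequence.
    model = TinyMultimodalTransformer()
    out = model(torch.randint(0, 1000, (1, 8)), torch.rand(1, 4, 3 * 16 * 16))
    print(out.shape)  # torch.Size([1, 12, 64])

Real multimodal models are vastly bigger and add positional and modality embeddings, but the shape of the trick is the same: one sequence, one latent space.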
There’s a pretty big difference, for example, in the UI/UX of code written by multimodal models vs text-only models, or in the utility of sharing a photo and saying what needs to be changed.
The idea that an old-school NN would be better than modern multimodal transformers at any slightly generalized situation is… certainly a position. Just not one that seems particularly in touch with reality.
The main breakthrough of LLMs happened when they figured out how to tokenize words… The transformer architecture was already being tested on various data types and struggled compared to similarly advanced CNNs.
When they figured out word encoding, it created a buzz because transformers could work well with words. They never quite worked as well on images; for those, Stable Diffusion (built around a CNN-style U-Net) has always been better.
It’s only because of the buzz around LLMs that people tried applying them to other data types, mostly because that’s how they could get funding. By throwing in a disproportionate amount of resources, it works… But it would have been so much more efficient to use different architectures.
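And on the tokenization point, for anyone following along: a toy greedy subword tokenizer (purely illustrative; real LLMs use BPE-style vocabularies learned from data) shows why that mattered. Rare words get split into known pieces, so the model never sees an unknown word, only a sequence of subword IDs:

    # Tiny hand-written vocabulary; a real one has tens of thousands of pieces.
    VOCAB = {"un": 0, "break": 1, "able": 2, "fold": 3, "protein": 4, "s": 5}

    def tokenize(word: str) -> list[int]:
        ids, i = [], 0
        while i < len(word):
            # Greedily take the longest vocabulary piece matching at position i.
            for j in range(len(word), i, -1):
                if word[i:j] in VOCAB:
                    ids.append(VOCAB[word[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f"no subword covers {word[i:]!r}")
        return ids

    print(tokenize("unbreakable"))  # [0, 1, 2] -> "un" + "break" + "able"
    print(tokenize("proteins"))     # [4, 5]    -> "protein" + "s"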
go ask chatgpt to fold a protein