- cross-posted to:
- [email protected]
LLMs totally choke on long context because of that O(n^2) scaling nightmare. It’s the core scaling problem for almost all modern LLMs, and it comes straight from the self-attention mechanism.
In simple terms, for every single token in the input, the attention mechanism has to compute a score against every other token in that same input.
So, if you have a sequence with n tokens, the first token compares itself to all n tokens. The second token also compares itself to all n tokens… and so on. This means you end up doing n*n, or n^2, calculations.
This is a nightmare because the cost doesn’t grow nicely. If you double your context length, you’re not doing 2x the work; you’re doing 2^2=4x the work. If you 10x the context, you’re doing 10^2=100x the work. This explodes the amount of computation and, more importantly, the GPU memory needed to store all those scores. This is the fundamental bottleneck that stops you from just feeding a whole book into a model.
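A quick toy sketch of why it’s quadratic (pure Python, names are mine, not any real library): every query token scores against every key token, so the score matrix alone has n*n entries.

```python
import math
import random

def attention_scores(embeddings):
    """Score every token against every other token: n tokens -> n*n scores."""
    d = len(embeddings[0])
    return [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d)
             for k in embeddings]   # inner loop: all n keys...
            for q in embeddings]    # ...for every one of the n queries

rng = random.Random(0)
tokens = [[rng.gauss(0, 1) for _ in range(16)] for _ in range(256)]
scores = attention_scores(tokens)
print(len(scores), "x", len(scores[0]))  # 256 x 256 score matrix
print("double the tokens:", (2 * 256) ** 2, "scores")  # 4x the entries
```

Double the sequence length and the score matrix quadruples; that matrix (and its softmax/activations) is what eats the GPU memory.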
Well, DeepSeek came up with a novel solution to just stop feeding the model text tokens. Instead, you render the text as an image and feed the model the picture. It sounds wild, but the whole point is that a huge wall of text can be “optically compressed” into way, way fewer vision tokens.
To do this, they built a new encoder called DeepEncoder. It’s a clever stack: a SAM-base model for local perception, then a 16x convolutional compressor to crush the token count, then a CLIP model to capture the global meaning. That pipeline lets it handle high-res images without the GPU melting from activation memory.
And the results are pretty insane. At a 10x compression ratio, the model can look at the image and “decompress” the original text with about 97% precision. It still gets 60% accuracy even at a crazy 20x compression. As a bonus, this thing is now a SOTA OCR model. It beats other models like MinerU2.0 while using fewer than 800 tokens when the other guy needs almost 7,000. It can also parse charts into HTML, read chemical formulas, and understands like 100 languages.
The real kicker is what this means for the future. The authors are basically proposing this as an LLM forgetting mechanism. You could have a super long chat where the recent messages are crystal clear, but older messages get rendered into blurrier, lower-token images. It’s a path to unlimited context by letting the model’s memory fade, just like a human’s.
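One way to picture that fading-memory idea (entirely my sketch of the proposal, not anything the authors shipped): give each older turn a smaller vision-token budget, e.g. halving it per step back, down to some floor.

```python
def tokens_for_turn(age, base_tokens=256, decay=2, floor=16):
    """Halve the vision-token budget for each step back in the chat; never go below the floor."""
    return max(base_tokens // (decay ** age), floor)

for age in range(6):
    print(f"turn t-{age}: {tokens_for_turn(age)} vision tokens")
# recent turns stay sharp (256 tokens); old turns fade toward a blurry 16
```

Total context cost becomes a geometric series instead of growing linearly with history, which is the whole "unlimited context" pitch.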
> modern LLMs
Versus the LLMs used by the ancient Greeks?