As someone who is completely blind, I pay for OpenRouter in order to have AI describe images to me. If more people bothered with alt text, I wouldn’t have to. But it is what it is. I suspect there are models I could run locally that would do what I need; on iOS, Apple handles all image descriptions locally on the phone, and they’re perfectly adequate. But on Windows, nobody has created an easy way to get a local model running in the open-source NVDA screen reader (https://www.nvaccess.org/), though there are multiple add-ons that work with OpenRouter. NVDA is entirely written in Python, so it should actually be pretty easy to do. The main reason I haven’t tried it myself is that I have no idea which local model to use. None of the benchmarks really tell me “this model would be good at describing images to blind people,” whereas the giant cloud models are semi-okay at everything, so everyone just uses those. But if we could use a smaller model, we might even be able to fine-tune it for the specific use case of blind people. Maybe someday!
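For anyone curious what the glue code might look like: here is a rough sketch of calling OpenRouter’s OpenAI-compatible chat completions endpoint from Python to get an image description. The model name, prompt wording, and describe_image helper are illustrative assumptions, not recommendations, and a real NVDA add-on would additionally need to capture the image under review and speak the result back to the user (e.g. via ui.message).

    # Rough sketch: send an image to OpenRouter's OpenAI-compatible
    # chat completions endpoint and read back a description.
    # Model name and prompt wording are placeholders, not recommendations.
    import base64
    import requests

    OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
    API_KEY = "sk-or-..."  # your OpenRouter key

    def describe_image(path: str, model: str = "google/gemini-2.5-pro") -> str:
        # Encode the image as a base64 data URL so it can be sent inline.
        with open(path, "rb") as f:
            data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

        payload = {
            "model": model,
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe this image for a blind screen reader user. "
                             "Be concise and read out any text it contains."},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }],
        }
        resp = requests.post(
            OPENROUTER_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]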
What’s your usage like, and how much does it cost? I’ve always thought this is literally the best thing AI is actively doing.
It really depends. For images that are graphs and infographics I use GPT-5 or Gemini 2.5 Pro. For anything with adult content I have to use Grok, because it’s the only model that won’t refuse. For stuff that’s just text in an image, the cheap models from Microsoft are fine. Also, OpenRouter sometimes has limited-time deals where some models are free. I’d say overall I spend between 2 and 5 dollars a month on it. But I do allow OpenRouter to train on the data, so I get a discount of a few percent as well.
Did you have to get somebody to set your gear up for you, or can you somehow do it all yourself?
OpenRouter is pay-per-token, so the cost depends on usage and which model is being used.
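To make that concrete, here is a back-of-the-envelope estimate; the token counts and per-million-token prices below are placeholder assumptions, since the actual rates vary per model on OpenRouter.

    # Back-of-the-envelope cost for one image description (placeholder numbers).
    input_tokens = 1200    # image + prompt tokens; varies with model and image size
    output_tokens = 150    # length of the returned description
    input_price = 1.25     # hypothetical $ per million input tokens
    output_price = 10.00   # hypothetical $ per million output tokens

    cost = (input_tokens * input_price + output_tokens * output_price) / 1_000_000
    print(f"~${cost:.4f} per image")  # a fraction of a cent to a few cents each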