- cross-posted to:
- [email protected]
- [email protected]
Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.
Actually, the way you get it to do better is to put more of the burden of interpreting the context on the LLM instead of using heavy-handed instructions - because LLMs do understand the context.
For example, here’s Gemini answering what the physical characteristics of 1940s soldiers in Germany might have looked like:
I think it could have contextualized the prompt correctly if the instructions gave it that leeway. Instead, what’s happened is that the instructions ask it to modify the prompt behind the scenes, broadly and randomly injecting diversity modifiers into whatever is asked for. So “image of 1940s German soldier” gets rewritten to “image of black woman 1940s German soldier” for one generation and “image of Asian man 1940s German soldier” for another, which leads to less than ideal results. It should instead be encouraged to modify for diversity and representation, but adapted to the context of the request.
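To make the difference concrete, here’s a rough Python sketch of blind injection versus context-aware injection. The `generate_image` function and the `diversity_fits_context` flag are hypothetical placeholders, not anything Gemini actually exposes:

```python
import random

DIVERSITY_MODIFIERS = ["black woman", "Asian man", "Hispanic woman", "white man"]

def generate_image(prompt: str) -> str:
    # Hypothetical stand-in for the actual image-generation backend.
    return f"<image: {prompt}>"

def blind_injection(user_prompt: str) -> str:
    # What the reporting describes: a modifier is spliced in up front,
    # regardless of whether it fits the request.
    modifier = random.choice(DIVERSITY_MODIFIERS)
    return generate_image(f"{modifier}, {user_prompt}")

def context_aware(user_prompt: str, diversity_fits_context: bool) -> str:
    # Preferred approach: first ask the LLM whether varying the depicted
    # demographics is consistent with the request (e.g. historical settings),
    # and only then add a modifier.
    if diversity_fits_context:
        return blind_injection(user_prompt)
    return generate_image(user_prompt)

# "image of 1940s German soldier" should come back with the flag set to False;
# "image of a software engineer" should come back with it set to True.
```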
I think a lot of the improvement will come from breaking the problem down into sub-assistants for specific actions. So in this case, if you’re asking for an image-generation action involving people, an LLM designed and tuned for that exact use case can take over. It’ll be hard to keep an LLM on task if one prompt is trying to cover every possible outcome, but you can make it more specific so it handles sub-tasks more accurately. We could even get an LLM to dynamically create sub-assistants based on the use case. Right now the tech is too slow to do all of this at scale and in real time, but it will get faster. The problem right now isn’t that these fixes aren’t possible, it’s that they’re hard to scale.
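A minimal sketch of that routing idea, with made-up prompts and function names rather than any vendor’s actual API: a fast classifier picks the task type, then a sub-assistant with a narrow system prompt handles it:

```python
# Hypothetical dispatcher: classify the request, then hand it to a
# sub-assistant whose system prompt is tuned for that task type.

SUB_ASSISTANT_PROMPTS = {
    "image_of_people": (
        "You rewrite image prompts involving people. Preserve any historical, "
        "geographic, or contextual constraints before adding diversity or "
        "style modifiers."
    ),
    "image_generic": "You rewrite image prompts for scenes and objects.",
    "text": "You answer general questions.",
}

def classify_request(user_prompt: str) -> str:
    # In practice this would itself be a small, fast LLM call; here it's a
    # placeholder keyword check.
    lowered = user_prompt.lower()
    if "image" in lowered:
        if any(word in lowered for word in ("soldier", "person", "people")):
            return "image_of_people"
        return "image_generic"
    return "text"

def handle(user_prompt: str, call_llm) -> str:
    # call_llm(system_prompt, user_prompt) is whatever completion function
    # you have available.
    task = classify_request(user_prompt)
    return call_llm(SUB_ASSISTANT_PROMPTS[task], user_prompt)
```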
Yes, this is exactly correct. And it’s not actually too slow - the specialized models can run quite quickly, and there are various speedups like Groq.
The issue is just the added cost of multiple passes, so companies are trying to have it be “all-in-one,” even though human cognition isn’t an all-in-one process either.
For example, AI alignment would be much better if it took inspiration from the prefrontal cortex inhibiting intrusive thoughts, rather than trying to prevent the model from generating the equivalent of intrusive thoughts in the first place.
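A rough sketch of that “inhibition” pattern, assuming only a generic `complete(system, prompt)` function rather than any specific API: the base model drafts freely, and a second pass decides whether to pass the draft through, suppress it, or rewrite it:

```python
# Post-hoc inhibition: generate a draft freely, then let a second pass
# (the "prefrontal cortex") allow or revise it.
# complete(system, prompt) is a placeholder for whatever LLM call you use.

def draft(complete, user_prompt: str) -> str:
    return complete("You are a helpful assistant.", user_prompt)

def inhibit(complete, user_prompt: str, draft_text: str) -> str:
    verdict = complete(
        "You review draft responses. Reply ALLOW if the draft is appropriate "
        "for the request, otherwise reply REVISE.",
        f"Request: {user_prompt}\nDraft: {draft_text}",
    )
    if verdict.strip().startswith("ALLOW"):
        return draft_text
    return complete(
        "Rewrite the draft so it stays helpful but removes the problematic parts.",
        f"Request: {user_prompt}\nDraft: {draft_text}",
    )

def respond(complete, user_prompt: str) -> str:
    return inhibit(complete, user_prompt, draft(complete, user_prompt))
```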
Exactly, that’s where the “too slow” part comes in. To get more robust behavior it needs multiple layers of meta-analysis, but that means far more text generation under the hood than a one-shot output requires.
Yes, but in terms of speed you don’t need the same parameter count or quantization for the secondary layers.
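For instance (hypothetical model names and loader, not a real API), the review pass can run on a much smaller, more aggressively quantized model than the one doing the main generation:

```python
# Hypothetical loader and model names; the point is only that the secondary
# pass doesn't need the same parameter count or precision as the primary one.

def load_model(name: str, quantization: str):
    ...  # whatever inference stack you use (llama.cpp, vLLM, etc.)

primary = load_model("big-70b-model", quantization="fp16")    # main generation
reviewer = load_model("small-7b-model", quantization="int4")  # fast secondary checks
```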
If you haven’t seen it, see how fast a very capable model can actually be: https://groq.com/