

The latest twist I’m seeing isn’t blaming your prompting (although they’re still eager to do that); it’s blaming your choice of LLM.
“Oh, you’re using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren’t trying the right models, so allow me to educate you with all my prompt fondling experience. You’re trying to make some general point? Clearly you just need to try another model.”
You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions that are almost as bad as your satirical spiel.)