Very cool. I love nothing more than security critical software written by a statistical text generator.
There is great irony in this post, considering this sensationalism was called out in a response to the maintainer:
…show hostility due to some random article with sensational title like ‘KeePassXC uses vibe coded contributions now without the users knowing’ which I know is not true. A blog article by KeePassXC would greatly avoid such situation.
Using LLMs to do base-level coding with human oversight is fine imo.
I hate AI’s societal consequences and its profit motive. I hate the data collection, I hate the surveillance tech it’s used for. I hate how it shits on artists. I hate that people use LLMs to substitute for human connection, I hate the (current) environmental impact.
But I’m not a luddite. Cat’s out of the bag anyway. We can’t stop it. Same as people couldn’t stop machinery taking over simple work.
We can stop it. It’s not profitable or good.
Identification of cancer, development of vaccines, analysis of weather patterns, and the ability to insta-translate speech and basically create real-life subtitles are just small examples of vast use cases where AI was genuinely successful and a good use of the technology, and there’s so much more.
We got to a point where a 50W specialized all-in-one computing package can handle the workload of a local LLM and outpace my gaming rig which runs DeepSeek 8B. 50W is as much as a lightbulb in the ’90s. So the environmental impact goes away with time as technology progresses.
The issue lies with the corpos, not the technology.
Sure, collectively we could stop it. But those who oppose it are not numerous enough.
Any worthwhile forks or alternatives?
Just quoting this from the linked post:
“I’m a KeePassXC maintainer. The Copilot PRs are a test drive to speed up the development process. For now, it’s just a playground and most of the PRs are simple fixes for existing issues with very limited reach. None of the PRs are merged without being reviewed, tested, and, if necessary, amended by a human developer. This is how it is now and how it will continue to be should we choose to go on with this. We prefer to be transparent about the use of AI, so we chose to go the PR route. We could have also done it locally and nobody would ever know. That’s probably how most projects work these days. We might publish a blog article soon with some more details.”
First I’ve seen this, so I appreciate the post, OP. It’s four months old too, so I have no idea what, if anything, has changed since the quoted post.
Yes the first post sorta goes against the expectation I got from the title.
I think I’m giving them a pass as well. It’s been months since then and everything is still okay. From what I can see it looks like some experiments. With quite a good chunk of manual intervention, review and then changing things around and force-pushing a correct (probably human-written) version. I wonder if it even saved them time. Maybe they reconsidered their approach since; the last of those PRs is from the end of August. At least they seem to be transparent and pay a good amount of attention to what Copilot does.
I think vibe-coding and AI-assisted programming is a bit weird anyway. My own experience is mostly negative. I’ve experimented with it nonetheless. Idk, lots of programmers are clever but also curious people. They’ll try things and figure it out eventually. And it looks to me like they might be roughly on the right track here. And I’ll agree, it doesn’t really matter whether they review pull requests from a 14-year-old, or a Russian hacker in disguise, or AI. It’s always the same process with pull requests and you never know who’s at the other end and what their motivations are. It’s highly problematic if people bury developers in AI slop, but if they choose it themselves, they’re mostly equipped to deal with it. At least in theory and if they’re good at their job.
Yeah not sure what the point is if it’s not saving any time anyway
Some individual motivation… Curiosity. Fascination with new tech. Or the prospect of maybe saving time and then evaluating if that’s the case. Idk, I’ve tried it as well and it doesn’t seem to save me time, but that’s one of the big promises of AI. I think we all know how AI delivers on its promises overall. But learning and experimenting (with some due diligence) is rarely amongst the problematic aspects of something. And it kind of comes first, or you can’t learn the truth.
The same way Fedora is slop now? For fuck’s sake…
Edit: no, Fedora is not slop. The same way KeePass isn’t slop. Slop is made by letting an LLM make something unchecked. KeePass is still reviewing every PR.
Yeah this is really getting exhausting. There’s plenty of real shit to be mad about without getting mad about really petty nothing like this. Also the thing is free and open source. Like the entitlement sometimes with this shit is wild.
AI is used poorly for a great many things but just blanket shitting on every use of it is just as obnoxious.
I use LLMs for coding too. They’re pretty great at generating the code I could have written myself. But that’s the important part: I completely understand the code. As long as we’re transparent and a good developer combs through it, I don’t see why not.
I have used LLMs for coding for work and it’s been really annoying. The technology just burns tokens to end up back at square one.
We might be in the wrong community here to discuss a positive attitude towards AI coding… But anyway… Do you like it? I think I’m more and more coming to the conclusion that I don’t really fancy it. It’s somewhat fulfilling to code something. But my experience with AI is I’ll spend 90 minutes arguing with it and letting it take countless shots at the one problem, and then I end up reading all the code, refactoring it and rewriting snippets, and it’s super tedious. And I’m annoyed, because I like computers for doing exactly what I tell them to do, and now I have to argue with the darn thing about the specifications, or how memory allocation or maths works a certain way, or whether we can pull in random libraries for a simple task… So I’m a bit split on this. At first it was very exciting and fascinating. But I think for coding that kind of got old quickly. At least for me and the stuff I do. These days I’ll use it for quick tech demos, templates, placeholders, to google the documentation, translate Chinese and the like, but I’ve cut down on the actual coding, mostly because it takes the fun out of it and turns it upside down into reviewing and correcting code.
To the extent I have grown more comfortable, it’s accepting that the AI is usually wrong and giving up on trying unless it’s obvious and short. I won’t “argue” with it, I just discard and do it myself. I’ll also click “review my code” and give it a chance to highlight mistakes. Again it is frequently wrong. But once it did catch an inconsistency that I know would have been frustrating when it eventually reared its head.
The thing that I’m thinking of turning off is code completion with tab. Problem is that the lag means I don’t know if the tab key is going to do a normal thing or if, by the time I hit it, an AI suggestion will have popped up and I have to undo the unexpected modification. Also sometimes the suggestions linger and make the actual code hard to read long after I’ve already decided to ignore the suggestion.
Yesterday was a fair amount of tab-completing through excessively boilerplate crap thanks to AI, but most days it’s next to useless as I am in low-boilerplate scenarios. Some frameworks and languages make you type a novel to do something very common, and AI helps with those. I tend to avoid those, but I didn’t have a choice yesterday. Even then the AI made some very bad suggestions, so I have to be on the lookout at all times.
Not OP but I had great success letting it repeat stuff we already have. For example, we have a certain pattern for how we place translations. So I just hardcode everything and in the end tell it, using a pre-written task I can just call up, to take all the hardcoded labels and place them in our system the same way it has already been done. It then reads the code of a few existing components and replicates that. Or I let it extract some code into smaller components. Or move some component around; it can do that better than the IDE’s integrated move action. Completely novel stuff is possible, but I am uncertain whether I am actually not slower using it to achieve that. I mostly do it step by step, really small steps that is.
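For anyone unfamiliar with that kind of refactor, a minimal sketch of the “hardcoded label → translation system” pattern might look like this. (The `t` helper, the key names, and the table shape are all made up for illustration; the commenter’s actual codebase will differ.)

```typescript
// Hypothetical translation table the LLM would populate from hardcoded labels.
type Translations = Record<string, string>;

const translations: Translations = {
  "form.save": "Save",
  "form.cancel": "Cancel",
};

// Minimal lookup helper; falls back to the key itself so missing entries
// are visible in the UI instead of rendering as blanks.
function t(key: string): string {
  return translations[key] ?? key;
}

// Before: <button>Save</button>           (hardcoded)
// After:  <button>{t("form.save")}</button>
console.log(t("form.save"));    // "Save"
console.log(t("form.missing")); // falls back to "form.missing"
```

The point of the workflow described above is that the LLM only has to imitate this mechanical substitution across many files, following existing examples, rather than invent anything new.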
I have to measure my performance at some point, it is certainly possible that I am actually slower than before. But overall I never liked typing out the solution that is in my head, so using it as writer is nice.
Sonnet 4.5 is what I use. Some colleagues like GPT-5, but it struggles real hard to do the most basic things right in my experience. Claude is just miles ahead.
Yeah, I sometimes find myself in the same loop of “this thing just doesn’t understand what I’m asking for” - I’ve had luck with breaking it down into smaller steps, and being specific about the requirements helps. I use Claude Sonnet 4.5, which is pretty decent; the OpenAI models really don’t compare and are pretty bad at coding at best.
Thanks. Yeah I didn’t try Claude. They want my phone number to sign up and I’m not providing that to people. But you’re not the only person suggesting Claude Sonnet, I’ve read that several times now. I wonder if they’re really that much better. I’ll try some more throwaway phone numbers to get in, but seems they’ve blocked most of them.
I’ve tried breaking down things as well. That’s usually a good strategy in programming. Though I guess at some point they’re small enough so I could have already typed it in myself instead of talking about doing it. And I find it often also struggles to find the right balance with the level of detail of a function and whether it’s clever to do a very specific singular thing or do it a bit more general so the function can be reused in other parts of the code. So it’ll be extra work later to revise it, once everything is supposed to come together and integrate/combine.
What model are you using that caused you to “not fancy it?”
I don’t think it’s the model, it’s more the process I don’t like. There is some appeal to programming for me in understanding things and creating something with my mind, not dealing with people but with solid logic problems which are provably right or wrong, and it’ll all come together from a small defined set of very basic things which I combine with some craft to achieve arbitrary things. It’ll come together in my mind and then I type everything down and figure out the rest, and that’s kind of satisfying to me. AI programming feels more like sitting in endless meetings to discuss details and revise things, then fix stuff, and those are all the things I don’t like, and in the meantime we’ve brushed over the one thing I did like. I’d rather it did the tedious things for me. Write specifications, documentation, sift through data and convert it, do correspondence and handle finances. Organize the mess on my computer and bring coffee… Idk. I guess I could break up tasks and delegate if it had the “intelligence” for it.
I think I tried most free commercial ones which didn’t need my phone number… Gemini (AI Studio), ChatGPT, Grok. I’ve experimented a bit with local ones like DeepSeek, but I don’t really have the hardware for that, so it’s smaller variants and it takes ages to ingest the code and write something.
(And btw I think the individual experience also depends on the task and programming language involved. Seems to me AI’s performance varies a lot depending on if it’s currently writing Python, JavaScript or C++ code for an embedded system… And sometimes coding problems are fairly contained and sometimes you need to have some overview of a larger codebase to tackle one single task. So I guess we might get different results depending on the specific project?!)
Got it, yeah I hated all of those too. Claude Max is a game changer if you’re looking for a better experience, since that experience is heavily tied to model choice.
I’ve found that I can spend two hours writing or 30 minutes editing, and so long as I understand every line I commit I’m keeping up my end of the bargain - but often that’s the bit where everyone gets lazy. I am also used to managing a bunch of junior engineers though, so this motion does feel very natural to me.
Best of luck
Thanks, yeah I can understand that. I’ll try to get ahold of an Anthropic account and see what it’s about. It’s not high priority for me, but eventually I’d like to know the truth. I have some random hobby projects with stuff I could throw at it.
Fuck…
Ouch. For something this sensitive, I don’t trust code reviews to catch vulnerabilities. They probably won’t happen overnight, but I don’t want to risk falling victim to the gradual laziness that comes with backseat programming over time.
Time to jump ship.








