cross-posted from: https://lemmy.dbzer0.com/post/50693956
Transcript
A post by [object Object] (@[email protected]) saying: courtesy of @[email protected], Proton is now the only privacy vendor I know of that vibe codes its apps: In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure! I am once again begging anyone who will listen to get off of Proton as soon as reasonably possible, and to avoid their new (terrible) apps in any case. https://circumstances.run/@davidgerard/114961415946154957
It has a reply by the author saying: in an unsurprising update for those familiar with how Proton operates, they silently rewrote their monorepo’s history to purge .cursor and hide that they were vibe coding: https://github.com/ProtonMail/WebClients/tree/2a5e2ad4db0c84f39050bf2353c944a96d38e07f
given the utter lack of communication from Proton on this, I can only guess they’ve extracted .cursor into an external repository and continue to use it out of sight of the public
The anti-AI circlejerk, even here on Lemmy, is now about as bad as the pro-AI circlejerk in the general public: no room for nuance or rational thinking, just dunking on anyone who says anything remotely positive about AI, like when I said I like the autocomplete feature of Copilot.
I’m a pretty big generative AI hater when it comes to art and writing. I don’t think generative AI can make meaningful art because it cannot come up with new concepts. Art is something that AI should be freeing up time in our lives for us to do. But that’s not how it’s shaping up.
However, AI is very helpful for understanding codebases and doing things like autocompletion. This is because code is less expressive than human language and it’s easier for AI to approximate what is necessary.
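That idea can be illustrated with a toy sketch (this is not how Copilot works internally; real tools use large neural models, and the corpus and `suggest` function here are made up for illustration). The point is just that code is repetitive and constrained, so even a crude statistical model of "what token usually comes next" gets surprisingly far:

```python
from collections import Counter

# Toy statistical autocomplete: suggest the most frequent token that
# follows a given prefix in a small corpus of code lines. Purely
# illustrative -- real assistants use neural language models.
corpus = [
    "for i in range",
    "for key in data",
    "for i in range",
    "if key in data",
]

def suggest(prefix_tokens):
    """Return the most common token following prefix_tokens, or None."""
    counts = Counter()
    n = len(prefix_tokens)
    for line in corpus:
        toks = line.split()
        for j in range(len(toks) - n):
            if toks[j:j + n] == prefix_tokens:
                counts[toks[j + n]] += 1
    return counts.most_common(1)[0][0] if counts else None

print(suggest(["for", "i", "in"]))  # -> range
```

Because code has far fewer valid continuations than natural language, even this crude frequency count often picks the right token.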
Natural language processing makes TTS way more usable for people with reading disabilities. But there are absolutely no good uses of AI.
What about cancer research? AI is bad when it’s being used to find cures?
When people just say “AI” nowadays, they usually mean generative AI.
There are a ton of small, single purpose neural networks that work really well, but the “general purpose” AI paradigm has wiped those out in the public consciousness. Natural language processing and modern natural sounding text to speech are by definition AI as they use neural networks, but they’re not the same as ChatGPT to the point that a lot of people don’t even consider them AI.
Also AI is really good at computing protein shapes. Not in a “ChatGPT is good enough that it’s not worth hiring actual writers to do it better” way, in a “this is both faster and more accurate than any other protein folding algorithm we had” way.
Yeah, people don’t realize how huge this kind of thing is. We’ve been trying for YEARS to figure out how to correctly model protein structures of novel proteins.
Now, people have trained a network that can do it and, using the same methods used to generate images (diffusion models), they can also describe an arbitrary set of protein properties/shapes and the AI will generate the string of amino acids most likely to produce it.
The LLMs and diffusion models that generate images are neat little tech toys that demonstrate a concept. The real breakthroughs are not as flashy and immediately obvious.
For example, we’re starting to see AI robotics, which have been trained to operate a specific robot body in dynamic situations. Manually programming robotics is HARD and takes a lot of engineers and math. Training a neural network to operate a robot is, comparatively, a simple task which can be done without the need for experts (once there are pretrained foundation models).
You’re not alone. Nuance is just harder to convey; it takes more effort to post something nuanced. And so people do it less, myself included. But I think truthfully that many people are not so stuck in one or the other circlejerk. It’s lovely to see people in this thread who are annoyed by both.
I’m personally scared of AI (not angry or hateful, actually scared by just how fast it’s advancing) and that definitely clouds my judgement of it and makes nuance difficult.
It’s like a deal with the devil. You see all these amazing benefits but you just know you’re the one being taken advantage of, because, like the devil, AI corporations by definition only think about how you can be of use to them.