- cross-posted to:
- [email protected]
Growing up in the ’70s, “computer programmers” were assumed to be geniuses. Nowadays they are maybe one tier above fast food workers. What a world!
I’m still waiting to be replaced by robots and computers.
If only this weren’t becoming the agenda of big corporations… they are dropping jobs left and right and it’s scary. Robots will be doing most of our jobs sooner rather than later. Look up the Flippy bot; we won’t even have entry-level jobs soon. And the problem is we’re not doing this to become more like Star Trek. They are doing this to add seventeen more marble-gold-diamond pillars to their dogs’ puppies’ houses on their 9,000-acre private islands.
Explicit programmers are needed because the general public has failed to learn programming. Hiding the complexity behind nice interfaces makes it actually more difficult to understand programming.
This all comes from programmers using programs to abstract programming away.
What if the 2030s change the approach and use AI to teach everybody how to program?
Hiding the complexity behind nice interfaces makes it actually more difficult to understand programming.
This is a very important point, that most of my colleagues with OOP background seem to miss. They build a bunch of abstractions and then say it’s easy, because we have one liner in calling code, pretending that the rest of the code doesn’t exist. Oh yes, it certainly exists! And needs to be maintained, too.
I find this to be a real problem with visual shaders. I know how certain mathematical formulas affect an input, but instead of just pressing the Enter key and writing it down, I now have to move blocks around, and oh no, they were nicely logically aligned, now one block is covering another block, oh noo, what a mess and the auto sort thing messes up the logical sorting completely… well too bad.
And I find that most solutions on the internet forget that previous outputs can be reused when using the visual editor. Getting normals from already generated noise without resampling somehow becomes arcane knowledge.
Edit: words.
You can add SQL to that, back in the ’70s. It was created to be human-readable so business people could write SQL queries themselves without programmers.
So is COBOL.
(Is there any sane alternative to SQL?)
(Is there any sane alternative to SQL?)
Yes, no SQL.
Ironically, one of the universal things I’ve noticed in programmers (myself included) is that newbie coders always go through a phase of thinking “why am I writing SQL? I’ll write a set of classes to write the SQL for me!” resulting in a massively overcomplicated mess that is a hundred times harder to use (and maintain) than a simple SQL statement would be. The most hilarious example of this I ever saw was when I took over a young colleague’s code base and found two classes named “OR.cs” and “AND.cs”. All they did was take a String as a parameter, append " OR " or " AND " to it, and return it as the output. Very forward-thinking, in case the meanings of “OR” and “AND” were ever to change in future versions of SQL.
Object Relational Mapping can be helpful when dealing with larger codebases/complex databases for simply creating a more programmatic way of interacting with your data.
I can’t say it is always worth it, nor does it always make things simpler, but it can help.
I used to use ORMs because they made switching between local dev DBs (like SQLite or Postgres) and production DBs usually painless. Especially for Ruby/Sinatra/Rails, since we were writing the model queries in another abstraction. It meant we didn’t have to think as much about joins and all that stuff. Until the performance went to shit and you had to work out why.
The problem with ORMs is that some people go all in on them and ignore pure SQL completely.
In reality an ORM only works well for somewhat simple queries and structures, but at some point you will have to write your own queries in SQL. But then you have some bonus complexity, which comes from two different things filling the same niche. It’s still worth it, but there is no free cake.
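For illustration only, a minimal sketch (assuming Python with SQLAlchemy; the model and queries are made up, not from any particular project): the ORM handles the simple lookup, and you drop to raw SQL when the query gets hairy.

```python
# Hedged sketch: ORM for the simple case, raw SQL for the complex one.
from sqlalchemy import create_engine, select, text, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100))
    email: Mapped[str] = mapped_column(String(255))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Simple lookup: the ORM is concise and readable here.
    user = session.scalars(select(User).where(User.name == "alice")).first()

    # Complex reporting query: raw SQL is often clearer than fighting
    # the ORM's query builder, at the cost of a second "language" in the codebase.
    rows = session.execute(
        text("SELECT name, COUNT(*) AS n FROM users GROUP BY name HAVING COUNT(*) > :min"),
        {"min": 1},
    ).all()
```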
I’ve always seen that as an escape hatch for one of the most typical issues with ORMs, like the N+1 problem, but I never fully bought it as a real solution.
Mainly because in large projects this gets abused (turns out little or none of the SQL has a companion test), and one of the most oversold benefits of ORMs (the possibility of “easily” refactoring the model) goes away.
Since SQL is code and should be tested like any other code, I’d rather ditch the whole ORM thing and go SQL from the beginning. It may be annoying for simple queries but it induces better habits.
I don’t have a lot of experience with projects that use ORMs, but from what I’ve seen it’s usually not worth it. They tend to make developers lazy and create things where every query fetches half the database when they only need one or two columns from a single row.
Yeah. Unless your data model is dead simple, you will end up not only needing to know this additional framework, but also how databases and SQL work to unfuck the inevitable problems.

Describing what they want in plain, human language is impossible for stakeholders.
LLMs need to be trained to work with reptilian language. Problem solved.
‘I want you to make me a Facebook-killer app with agentive AI and blockchains. Why is that so hard for you code monkeys to understand?’
You forgot we run on fritos, tab, and mountain dew.
Maybe he want to write damn login page himself.
Not say it out loud. Not stupid… Just proud.
Getting AI to do a complex problem correctly takes so much detailed explanation that it’s quicker to do it myself.
While it’s possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.
I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?
For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
Your CLAUDE.md might be trash, and maybe you’re using @file wrong and blowing tokens or biasing your context wrong.
LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or tools are compacting them.
- Plan first, using planning modes to help you, and decompose the plan into steps
- Have the model keep track of important context externally (like in markdown files with checkboxes, as in the sketch below) so the model can recover when the context gets fucked up
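A minimal sketch of what such an external plan file might look like (file name and tasks are made up, not from any guide):

```markdown
<!-- PLAN.md — hypothetical externally tracked plan the model can re-read after compaction -->
# Refactor auth module

- [x] Inventory current login/session code paths
- [x] Write characterization tests for existing behavior
- [ ] Extract token validation into its own service
- [ ] Update call sites and remove dead code
- [ ] Run full test suite and update docs
```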
https://www.promptingguide.ai/
https://www.anthropic.com/engineering/claude-code-best-practices
There are community guides that take this even further, but these are some starting references I found very valuable.
So even more work than actual coding.
Early adopters will be rewarded by having better methodology by the time the tooling catches up.
Too busy trying to dunk on me than understand that you have some really helpful tools already.
Yup. It’s insanity that this is not immediately obvious to every software engineer. I think we have some implicit tendency to assume we can make any tool work for us, no matter how bad.
Sometimes, the tool is simply bad and not worth using.
Everyone is a senior engineer with an idiot intern now.
While you’re right that it’s a new technology and not everyone is using it right, if it requires all of that setup and infrastructure to work, then are we sure it provides a material benefit? Most projects never get that kind of attention at all; to require it for AI integration means that, currently, it may be more work than it’s worth.
This is why I say some people are going to lose their jobs to engineers using AI correctly, lol.
“If I need to write boilerplate and learn a new skill, is it really worth it?”
Even writing an RFC for a mildly complicated feature to mostly describe it takes so many words and communication with stakeholders that it can be a full time job. Imagine an entire app.
You want the answer to the ultimate question of life, the universe, and everything? Ok np
Least it’s an improvement over no/low code. You can dig in and unfuck some ai code easily enough but god help you if your no code platform has a bug that only their support team can fix. Not to mention the vendor lock in and licensing costs that come with it.
After shovels were invented, we decided to dig more holes.
After hammers were invented, we needed to drive more nails.
Now that vibe coding has been invented, we are going to write more software.
No shit
Doesn’t matter if they can replace coders. If CEOs think it can, it will.
And now, it’s good enough to look like it works so the CEO can just push the problem down the road and get an instant stock inflation
And then it’ll all go to shit and proper programmers will be able to charge bank to sort it out.
I don’t want so spend my career un-fucking vibe code.
I want to create something fun and nice. If I wanted to clean other people’s mess, I would be a janitor.
If I wanted to clean other people’s mess, I would be a janitor.
I’ll take your share of the slop cleanup if you don’t want it. I wouldn’t mind twice the slop cleanup ~~extortion~~ salary.
Cleaning other people’s mess is okay for a while. But making a career out of it is too much for me.
I do firmware for embedded systems and every mechanical, electronics or general engineering issue is shoved into my court, because it’s easier for people to take shortcuts in the engineering process and say “we’ll fix it in the firmware” since I can change code 100 times a day.
Slop is the embodiment of that on steroids and it will get old pretty fast.
I hope it works like that.
I hope all those companies go bankrupt, people hiring those CEOs lose everything, and the CEOs never manage to find another job in their lives…
But that’s not a bad second option.
The CEOs will get a short-term boost to profits and stock price. They’ll get a massive bonus from it. Then in a few years, when shit starts blowing up, they’ll have already retired with a nice compensation package, leaving the company, employees, and stockholders up shit’s creek from their short-sighted plan.
But the CEO will be just fine on his yacht, don’t worry.
It already does, there are people selling their services to unfuck projects that were built with generated code.
They’ll end up being exploited
What do you think happens already lmao
I don’t get how an MDA would translate to “no programmers needed”. Maybe they meant “coders”?
But really, I feel like the people who use this phrase to pitch their product either don’t know how many people actually find it difficult to break a task down into logical components a computer can work with, or they’re lying.
Software engineering is a mindset, a way of doing something while thinking forward (and I don’t mean just scalability), at least if you want it done with quality. Today you can’t vibe-code anything but proofs of concept, prototypes that are in no way ready for production.
I don’t see current LLMs overcoming this soon. It appears that they’ve reached their limits without achieving general AI, which is what truly would obsolete programmers, and humans in general.
Yeah why is it always coders that are supposed to be replaced and not a whole slew of other jobs where a wrong colon won’t break the whole system?
Like management or the C-suite. Fuck, I’d take ChatGPT as a manager any day.
programmers, and humans in general
With current levels of technology, they would require humans for maintenance.
Not because they don’t have self-replication, because they can just make that if they have a proper intelligence, but because their energy costs are too high and can’t fill AI all the way.
OK, so I didn’t think enough. They might just end up making robots with expert systems, to do the maintenance work which would require not wasting resources on “intelligence”.
And after each of these, there’s been _more_ demand for developers.
LLMs often fail at the simplest tasks. Just this week I had it fail multiple times where the solution ended up being incredibly simple and yet it couldn’t figure it out. LLMs also seem to “think” any problem can be solved with more code, thereby making the project much harder to maintain.
LLMs won’t replace programmers anytime soon, but I can see sketchy companies taking programming projects by scamming their clients through selling them work generated by LLMs. I’ve heard multiple accounts of this already happening and similar things happened with no-code solutions before.
Today I removed some functions and moved some code to separate services and being the lazy guy I am, I told it to update the tests so they no longer fail. The idiot pretty much undid my changes and updated the code to something very much resembling the original version which I was refactoring. And the fucker did it twice, even with explicit instructions to not do it.
I have heard of agents deleting tests or rewriting them to be useless like ‘assert(true)’.
Your anecdote is not helpful without seeing the inputs, prompts and outputs. What you’re describing sounds like not using the correct model, not providing good context, or not using tools with a reasoning model that can intelligently populate context for you.
My own anecdotes:
In two years we have gone from copy/pasting 50-100 line patches out of ChatGPT, to having agent enabled IDEs help me greenfield full stack projects, or maintain existing ones.
Our product delivery has been accelerated while delivering the same quality standards, verified against the internal best practices we’ve codified with deterministic checks in CI pipelines.
The power comes from planning correctly. We’re in the realm of context engineering now, learning to leverage the right models with the right tools in the right workflow.
Most novice users have the misconception that you can tell it to “bake a cake” and get the cake you had in your mind. The reality is that baking a cake can be broken down into a recipe with steps that can be validated. You as the human-in-the-loop can guide it to bake your vision, or design your agent in such a way that it can infer more information about the cake you desire.
I don’t place a power drill on the table and say “build a shelf,” expecting it to happen, but marketing of AI has people believing they can.
Instead, you give an intern a power drill with a step-by-step plan with all the components and on-the-job training available on demand.
If you’re already good at the SDLC, you are rewarded. Some programmers aren’t good at project management and will find this transition difficult.
You won’t lose your job to AI, but you will lose your job to the human using AI correctly. This isn’t speculation either, we’re also seeing workforce reduction supplemented by Senior Developers leveraging AI.
This entire comment reads like lightly edited ai slop.
Well, I typed it with my fingers.
I seriously doubt your quality is maintained when an LLM writes most of your code, unless a human audits every line and understands what and why it is doing it.
If you break the tasks small enough that you can do this each step, it is no longer writing a full application, it’s writing small snippets, and you’re code-pairing with it.
Great? Business is making money. I already explained we have human reviewed PRs on top of full test coverage and other validations.
We’re compliant with security policies at our organization, and we have no trouble maintaining the code we’re generating because it’s based on years of well-defined patterns and best practices that we score internally across the entirety of engineering at our organization.
As more examples in the real world:
Aider has written 7% of its own code (outdated, now 70%): https://aider.chat/2024/05/24/self-assembly.html
https://aider.chat/HISTORY.html
LibreChat is largely contributed to by Claude Code, it’s the current best open source ChatGPT client, and they’ve just been acquired by ClickHouse.
https://clickhouse.com/blog/clickhouse-acquires-librechat
https://github.com/danny-avila/LibreChat/commits/main/
Such suffering from the quality! So much worse than our legacy monolith!
Your product is an LLM tool written with LLM tools. That is hilarious.
If the goal is to see how much middleware you can sell idiots, you’re doing great!
Incorrect, but okay.
We have human code review and our backlog has been well curated prior to AI. Strongly defined acceptance criteria, good application architecture, and unit tests with 100% coverage are just a few ways we keep things on the rails.
I don’t see what the idea of paircoding has to do with this. Never did I claim I’m one shotting agents.
Which IDEs?
Cursor and Claude Code are currently top tier.
GitHub Copilot is catching up, and at a $20/mo price point, it is one of the best ways to get started. Microsoft is slow rolling some of the delivery of features, because they can just steal the ideas from other projects that do it first. VScode also has extensions worth looking at: Cline and RooCode
Claude Code is better than just using Claude in Cursor or Copilot. Claude Code has next-level magic that dispels some of the myths being propagated here about “ai bad at thing,” because of the strong default prompts and validation they have built into it. You can say dumb human ignorant shit, and it will implicitly do a better job than other tools you give the same commands to.
To REALLY utilize Claude Code YOU MUST configure MCP tools… context7 is a critical one that avoids one of those footguns: “the model was trained on older versions of these libraries.”
Cursor hosts models with their own secret sauce that improves their behavior. They hardforked VSCode to make a deeper integrated experience.
Avoid Antigravity (Google) and Kiro (Amazon). They don’t offer enough value over the others right now.
If you already have an OpenAI account, Codex is worth trying; it’s like Claude Code, but not as good.
JetBrains… not worth it for me.
Aider is an honorable mention.
Codex is not bad. I use it for personal projects and Claude at work, so I can directly compare and Codex seems better to me.
In my opinion, Codex is fine, but Copilot has better support across AI providers (more models), and Claude is a better developer.
Thanks!
Sure thing, crazy how anti AI lemmy users are!
One of the rare comments here that is not acid spewing rage against AI. I too went from “copying a few lines to save some time” and having to recheck everything to several hundred lines working out of the box.
deleted by creator
I get it. I was a huge skeptic 2 years ago, and I think that’s part of the reason my company asked me to join our emerging AI team as an Individual Contributor. I didn’t understand why I’d want a shitty junior dev doing a bad job… but the tools, the methodology, the gains… they all started to get better.
I’m now leading that team, and we’re not only doing accelerated development, we’re building products with AI that have received positive feedback from our internal customers, with a launch of our first external AI product going live in Q1.
What are your plans when these AI companies collapse, or start charging the actual costs of these services?
Because right now, you’re paying just a tiny fraction of what it costs to run these services. And these AI companies are burning billions to try to find a way to make this all profitable.
These tools are mostly deterministic applications following the same methodology we’ve used for years in the industry. The development cycle has been accelerated. We are decoupled from specific LLM providers by using LiteLLM, prompt management, and abstractions in our application.
Losing a hosted LLM provider means we proxy LiteLLM to something else without changing contracts with our applications.
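As a rough sketch of what that decoupling looks like (the function, prompt, and model names here are placeholders, not anyone’s actual code): the application calls one completion interface, and swapping providers only changes the model string or the proxy configuration.

```python
# Hedged sketch: provider-agnostic completion via the litellm package.
# The application contract (this function's signature) stays stable while
# the model/provider behind it can change.
import litellm

def summarize(doc: str, model: str = "gpt-4o-mini") -> str:
    response = litellm.completion(
        model=model,  # e.g. "anthropic/claude-3-5-sonnet-20241022" or a self-hosted model behind a proxy
        messages=[{"role": "user", "content": f"Summarize:\n{doc}"}],
    )
    return response.choices[0].message.content

# If a hosted provider goes away, only the model/proxy configuration changes:
# summarize(doc, model="ollama/llama3")
```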
What are your plans when the Internet stops existing or is made illegal (same result)? Or when…
They are not going away. LLMs are already ubiquitous, there is not only one company.
Ok, so you’re completely delusional.
The current business model is unsustainable. For LLMs to be profitable, they will have to become many times more expensive.
What are you even trying to say? You have no idea what these products are, but you think they are going to fail?
Our company does market research and test pilots with customers, we aren’t just devs operating in a bubble pushing AI.
We are listening and responding to customer needs and investing in areas that drive revenue using this technology sparingly.
Get back to us when you actually launch and maintain a product for a few months, then. Because right now you don’t have anything in production.
We use a layered architecture following best practices and have guardrails, observability and evaluations of the AI processes. We have pilot programs and internal SMEs doing thorough testing before launch. It’s modeled after the internal programs we’ve had success with.
We are doing this very responsibly, and deliver a product our customers are asking for, with the tools to help calibrate minor things based on analytics.
We take data governance and security compliance seriously.

Well, have you seen what game engines have done to us?
When tools become more accessible, it mostly results in more garbage.
I’m guessing 4 out of 5 of your favorite games have been made with either unity or unreal. What an absolutely shit take.
Your guess is wrong. :P And anyway, I didn’t say all games using an easy-to-use game engine are shit.
If you use an easy game engine (idk if unreal would even fit this, btw), it is easier to produce something usable at all. Meanwhile, the effort needed to make the game good (i.e. game design) stays the same. The result is that games reach a state of being publishable with a lower amount of effort spent in development.
I barely use AI for work but I gotta say that it’s the first time I can get some very specific tasks done faster.
I currently make it write code generators, I fix them up, and after that I have something better at making boilerplate than these LLMs. Today I had to throw together a bunch of CRUD for a small webapp and it saved me around 1-2 hours.
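Something like this hypothetical throwaway generator (all names made up) is the kind of thing meant here: once it exists, stamping out CRUD stubs is deterministic and free.

```python
# Hedged sketch of a tiny CRUD-boilerplate generator: feed it an entity name
# and a field list, get function stubs back to paste into the webapp.
FIELDS = {"title": "str", "body": "str", "published": "bool"}
ENTITY = "Post"

TEMPLATE = '''\
def create_{name}(db, {args}) -> {entity}:
    ...

def get_{name}(db, {name}_id: int) -> {entity} | None:
    ...

def update_{name}(db, {name}_id: int, {args}) -> {entity}:
    ...

def delete_{name}(db, {name}_id: int) -> None:
    ...
'''

def generate(entity: str, fields: dict[str, str]) -> str:
    # Build the argument list once and substitute it into every stub.
    args = ", ".join(f"{f}: {t}" for f, t in fields.items())
    return TEMPLATE.format(name=entity.lower(), entity=entity, args=args)

if __name__ == "__main__":
    print(generate(ENTITY, FIELDS))
```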
Yeah, forms and very basic HTML it’s good at. Anything complex and you have to take over. Great at saving time, like an intern. But a bit worse, in that the intern will typically get better and the AI hasn’t, really.
That’s a great methodology for a new adopter.
Curious if you read about it, or did it out of mistrust for the AI?
I mistrust because it’s inaccurate.