You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
So how do you tell apart AI contributions to open source from human ones?
for anyone that finds this thread in the future: “check if [email protected] contributed to this codebase” is an easy hack for this test
It’s usually easy, just check if the code is nonsense
To get a bit meta for a minute, you don’t really need to.
The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no conditionals, the PR people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.
Until then it’s probably ok to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same as claims of haunted houses: you don’t really need to debunk every separate witness testimony. It’s self-evident that a world where there is an afterlife that also freely intertwines with daily reality would be notably and extensively different from the one we are currently living in.
There are AI contributions happening all the time, lol. What are you even talking about?
Ask chatgpt to explain it to you.
if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.
Bad code quality is a myth.
Try and get chatgpt to make something longer than ~50 lines without it being complete soup.
Actually don’t, I prefer when people can breathe
the worst programmer is in captivity (banned)
the galaxy is at peace
Good code quality is a myth.
“If <insert your favourite GC’ed language here> had true garbage collection, most programs would delete themselves upon execution.” -Robert Feynman
I’m sorry you work at such a shit job
or, I guess, I’m sorry for your teammates if you’re the reason it’s a shit job
either way it seems to suck for you, maybe you should level your skills up a bit and look at doing things a bit better
GitHub, for one, colors the icon red for AI contributions and green/purple for human ones.
Ah, right, so we’re differentiating contributions made by humans with AI from some kind of pure AI contributions?
yeah I just want to point this out
myself and a bunch of other posters gave you solid ways that we determine which PRs are LLM slop, but it was really hard to engage with those posts so instead you’re down here aggressively not getting a joke because you desperately need the people rejecting your shitty generated code to be wrong
with all due respect: go fuck yourself
It’s a joke, because rejected PRs show up as red on GitHub, open (pending) ones as green, and merged as purple, implying AI code will naturally get rejected.
I appreciate you explaining it. My LLM wasn’t working so I didn’t understand the joke
Jesus Howard Christ how did you manage to even open a browser to type this in