• VagueAnodyneComments@lemmy.blahaj.zone · ↑9 · 32 minutes ago

    Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?

    None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.

    • kadup@lemmy.world · ↑3 · 17 minutes ago

      If the people addicted to AI could read and interpret a simple sentence, they’d be very angry with your comment

    • froztbyte@awful.systems · ↑6 · 55 minutes ago

      the prompt-related pivots really do bring all the chodes to the yard

      and they’re definitely like “mine’s better than yours”

      • swlabr@awful.systems · ↑2 · 37 minutes ago

        Unlike the PHP hammer, the banhammer is very useful for a lot of things. Especially sealion clubbing.

  • frezik@midwest.social · ↑23 · 2 hours ago

    The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.

    That should be a big warning sign that the next generation of developers are not going to be very good. If they’re waist deep in AI slop, they’re only going to learn how to deal with AI slop.

    As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

    What I’m feeling after reading that must be what artists feel like when AI slop proponents tell them “we’re making art accessible”.

    • dwemthy@lemmy.world · ↑8 · 1 hour ago

      Watched a junior dev present some data operations recently. Instead of just showing the SQL that worked, they copy-pasted a prompt into the data platform’s assistant chat. The SQL it generated was invalid, so the dev simply told it “fix” and it made the query valid, much to everyone’s amusement.

      The actual column names did not reflect the output they were mapped to, so there’s no way the nicely formatted results were accurate: the average-duration column was populated with the total-count output. The junior dev was cheerfully oblivious. It produced output shaped like the goal, so it must have been right.
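      To make the failure mode concrete, here’s a sketch with invented column names (not the actual query):

      ```python
      # The query returns (operation, total_count, avg_duration), but the
      # display code unpacks the columns in the wrong order. Everything is
      # "shaped like the goal" and every number is wrong.
      rows = [("checkout", 1523, 4.2), ("login", 88041, 0.3)]

      for operation, avg_duration, total_count in rows:
          # Swapped on output: "avg duration" shows the count, and vice versa.
          print(f"{operation}: avg duration = {avg_duration}, total = {total_count}")
      ```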

    • Dragonstaff@leminal.space · ↑1 ↓1 · 22 minutes ago

      I dunno. I feel like the programmers who came before me could say the same thing about IDEs, Stack Overflow, and high-level programming languages. Assembly looks like gobbledygook to me and they tell me I’m a Senior Dev.

      If someone uses ChatGPT like I use Stack Overflow, I’m not worried. We’ve been stealing code from each other since the beginning. “Getting the answer” and then having to figure out how to plug it into the rest of the code is pretty much what we do.

      There isn’t really a direct path from an LLM to a good programmer. You can get good snippets, but “ChatGPT, build me an app” will be largely useless. The programmers who come after me will have to understand how their code works just as much as I do.

    • rumba@lemmy.zip · ↑1 ↓1 · 48 minutes ago

      All the newbs were just copying lines from Stack Exchange before AI. The only real difference at this point is that the commenting is marginally better.

      • frezik@midwest.social · ↑3 · edited 45 minutes ago

        Stack Overflow is far from perfect, but at least there is some level of vetting going on before it’s copypasta’d.

  • swlabr@awful.systems · ↑22 · 3 hours ago

    The headlines said that 30% of code at Microsoft was AI now! Huge if true!

    Something like MS Word has like 20-50 million lines of code. MS altogether probably has like a billion lines of code. 30% of that being AI-generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.

    • Dragonstaff@leminal.space · ↑1 ↓2 · 17 minutes ago

      30% of code is standard boilerplate: setters, getters, etc. that my IDE builds for me without calling it AI. It’s possible the claim is true, but it’s terribly misleading at best.
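      For what it’s worth, that’s the kind of boilerplate plain tooling already writes. A made-up Python illustration:

      ```python
      from dataclasses import dataclass

      # @dataclass mechanically generates __init__, __repr__, and __eq__:
      # machine-written boilerplate, no "AI" anywhere in sight.
      @dataclass
      class Account:
          owner: str
          balance: float = 0.0
      ```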

      • swlabr@awful.systems · ↑2 · edited 7 minutes ago

        1. Perhaps you didn’t read the linked article. Nadella didn’t claim that 30% of MS’s code was written by AI. What he said was garbled on its way to the eventual headline.
        2. We don’t have to play devil’s advocate for a hyped-up headline that misquotes what an AI glazer said, dawg.
        3. “Existing code generation tools can write 30%” doesn’t imply that AI possibly/plausibly wrote 30% of MS’s code. There’s no logical connection. Please dawg, I beg you, think critically about this.
    • froztbyte@awful.systems · ↑6 · 3 hours ago

      yeah, the “some projects” bit is applicable, as is the “machine generated” phrasing

      @gsuberland pointed out elsewhere on fedi just how much of the VS-/MS- ecosystem does an absolute fucking ton of code generation

      (which is entirely fine, ofc. tons of things do that and it exists for a reason. but there’s a canyon in the sand between A and B)

      • swlabr@awful.systems · ↑5 · 2 hours ago

        All compiled code is machine generated! BRB gonna clang and IPO, bye awful.systems! Have fun being poor

        • frezik@midwest.social · ↑4 · 2 hours ago

          No joke, you probably could make tweaks to LLVM, call it “AI”, and rake in the VC funds.

                • frezik@midwest.social · ↑1 · 5 minutes ago

                  For some definition of “happiness”, yes. It’s increasingly clear that the only way to get ahead is with some level of scam. In fact, I’m pretty sure Millennials will not be able to retire to a reasonable level of comfort without accepting some amount of unethical behavior to get there. Not necessarily Slippin’ Jimmy levels of scam, just stuff like participating in a basic stock market investment with a tax-advantaged account.

  • BlueMonday1984@awful.systems · ↑27 · 4 hours ago

    Baldur Bjarnason’s given his thoughts on Bluesky:

    My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

    I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking

  • TheObviousSolution@lemm.ee · ↑10 · edited 3 hours ago

    Had a presentation where they told us they were going to show us how AI can automate project creation. In the demo, after several attempts at using different prompts, failing and trying to fix it manually, they gave up.

    I don’t think it’s entirely useless as it is. It’s just that people have built a hammer they know produces something useful, and have stuck with iterative improvements that do a lot of compensating beneath the engine. It’s artificial because it is being developed to artificially fulfill prompts, which it does succeed at.

    When people do develop true intelligence-on-demand, you’ll know, because you will lose your job rather than simply gain another tool at your disposal. The prompts and conversation flows people pay to submit to the training are really helping advance the research into their replacements.

    • brygphilomena@lemmy.dbzer0.com · ↑2 ↓2 · 2 hours ago

      My opinion is it can be good when used narrowly.

      Write a concise function that takes these inputs, does this, and outputs a dict with this information.
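      Something like this is the scope I mean (names and fields invented to illustrate):

      ```python
      def summarize_order(order_id: str, items: list[dict], tax_rate: float) -> dict:
          """Takes these inputs, does this, outputs a dict with this information."""
          subtotal = sum(item["price"] * item["qty"] for item in items)
          return {
              "order_id": order_id,
              "subtotal": subtotal,
              "tax": round(subtotal * tax_rate, 2),
              "total": round(subtotal * (1 + tax_rate), 2),
          }
      ```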

      But so often it wants to be overly verbose. And it’s not so smart as to understand much of the project for any meaningful length of time. So it will redo something that already exists. It will want to touch something that is used in multiple places without caring or knowing how it’s used.

      But it still takes someone to know how the puzzle pieces go together. To architect it and lay it out. To really know what the inputs and outputs need to be. If someone gives it free rein to do whatever, it’ll just make slop.

      • swlabr@awful.systems · ↑7 · 2 hours ago

        That’s the problem, isn’t it? If it can only maybe be good when used narrowly, what’s the point? If you’ve managed to corner a subproblem down to where an LLM can generate the code for it, you’ve already done 99% of the work. At that point you’re better off just coding it yourself. At that point, it’s not “good when used narrowly”, it’s useless.

        • brygphilomena@lemmy.dbzer0.com · ↑1 ↓3 · 2 hours ago

          It’s a tool. It doesn’t replace a programmer. But it makes writing some things faster. Give any tool to an idiot and they’ll fuck things up. But a craftsman can use it to make things a little faster, because they know when and how to use it. And more importantly when not to use it.

          • swlabr@awful.systems · ↑3 · 1 hour ago

            The “tool” branding only works if you formulate it like this: in a world where a hammer exists and is commonly used to force nails into solid objects, imagine another tool that requires you to first think of shoving a nail into wood. You pour a few bottles of water into the drain, whisper some magic words, and hope that the tool produces the nail forcing function you need. Otherwise you keep pouring out bottles of water and hoping that it does a nail moving motion. It eventually kind of does it, but not exactly, so you figure out a small tweak which is to shove the tool at the nail at the same time as it does its action so that the combined motion forces the nail into your desired solid. Do you see the problem here?

          • froztbyte@awful.systems · ↑2 · 2 hours ago

            It’s a tool.

            (if you persist with this dogshit idiotic “opinion”:) please crawl into a hole and stay there

            fucking what the fuck is with you absolute fucking morons and not understanding the actual literal concept of tools

            read some fucking history goddammit

            (hint: the amorphous shifting blob with non-reliable output is not a tool; alternatively, please, go off about how using a PHP hammer is definitely the way to get a screw in)

      • frezik@midwest.social · ↑3 · edited 1 hour ago

        There’s something similar going on with air traffic control. 90% of their job could be automated (and it has been technically feasible to do so for quite some time), but we do want humans to be able to step in when things suddenly get complicated. However, if they’re not constantly practicing those skills, then they won’t be any good when an emergency happens and the automation gets shut off.

        The problem becomes one of squishy human psychology. Maybe you can automate 90% of the job, but you intentionally roll that down to 70% to give humans a safe practice space. But within that difference, when do you actually choose to give the human control?

        It’s a tough problem, and the benefits to solving it are obvious. Nobody has solved it for air traffic control, which is why there’s no comprehensive ATC automation package out there. I don’t know that we can solve it for programmers, either.

      • froztbyte@awful.systems · ↑2 · 2 hours ago

        My opinion is it can be good when used narrowly.

        ah, as narrowly as I intend to regard your opinion? got it

  • vivendi@programming.dev · ↑34 ↓4 · 8 hours ago

    No the fuck it’s not

    I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.

    People who think AI codes well are shit at their job

    • V0ldek@awful.systems · ↑15 · 3 hours ago

      In my workflow there is no difference between LLMs and fucking grep for me.

      Well grep doesn’t hallucinate things that are not actually in the logs I’m grepping so I think I’ll stick to grep.

      (Or ripgrep rather)

        • froztbyte@awful.systems · ↑4 · 1 hour ago

          (I don’t mean to take aim at you with this despite how irked it’ll sound)

          I really fucking hate how many computer types go “ugh I can’t” at regex. the full spectrum of it, sure, gets hairy. but so many people could be well served by decently learning grouping/backrefs/greedy match/char-classes (which is a lot of what most people seem to reach for[0])

          that said, pomsky is an interesting thing that might in fact help a lot of people go from “I want $x” as a human expression of intent, to “I have $y” as a regex expression

          [0] - yeah okay sometimes you also actually need a parser. that’s a whole other conversation. I’m talking about “quickly hacking shit up in a text editor buffer in 30s” type cases here
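          to make that concrete, the features I mean, in Python’s re, against an invented log line (the 30-second hack-shit-up case):

          ```python
          import re

          log = "user=alice id=42 id=42 path=/tmp/x.log"

          # grouping + char classes: pull out key=value pairs
          print(re.findall(r"(\w+)=([\w/.]+)", log))

          # backreference: spot a repeated token like "id=42 id=42"
          print(re.search(r"(\bid=\d+) \1", log).group(0))

          # greedy vs lazy: .* grabs as much as it can, .*? stops early
          print(re.match(r"user=(.*) id", log).group(1))   # greedy: "alice id=42"
          print(re.match(r"user=(.*?) id", log).group(1))  # lazy: "alice"
          ```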

          • swlabr@awful.systems · ↑5 · edited 1 hour ago

            Hey. I can do regex. It’s specifically grep I have beef with. I never know off the top of my head how to invoke it. Is it -e? -r? -i? man grep? More like, man, get grep the hell outta here!

      • vivendi@programming.dev · ↑1 ↓4 · 2 hours ago

        Hallucinations become almost a non-issue when working with newer models, custom inference, multishot prompting, and RAG.

        But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual.

        • Architeuthis@awful.systems · ↑6 · 2 hours ago

          If LLM hallucinations ever become a non-issue I doubt I’ll be needing to read a deeply nested buzzword laden lemmy post to first hear about it.

        • scruiser@awful.systems · ↑3 · 1 hour ago

          The promptfarmers can push the hallucination rates incrementally lower by spending 10x compute on training (and training on 10x the data and spending 10x on runtime cost) but they’re already consuming a plurality of all VC funding so they can’t 10x many more times without going bust entirely. And they aren’t going to get them down to 0%, hallucinations are intrinsic to how LLMs operate, no patch with run-time inference or multiple tries or RAG will eliminate that.

          And as for newer models… o3 actually had a higher hallucination rate because trying to squeeze rational logic out of the models with fine-tuning just breaks them in a different direction.

          I will acknowledge that in domains with analytically verifiable answers you can check the LLMs that way, but in that case it’s no longer primarily an LLM: you’ve got an entire expert system or proof assistant or whatever that can operate independently of the LLM, and the LLM is just providing creative input.
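          The generate-and-verify pattern I mean looks roughly like this sketch; llm_generate is a made-up stand-in (here just a noisy guesser), and the point is that only the verifier decides:

          ```python
          import random

          def is_valid(candidate: int, n: int) -> bool:
              # Analytic check: is candidate the integer square root of n?
              return candidate >= 0 and candidate**2 <= n < (candidate + 1) ** 2

          def llm_generate(n: int) -> int:
              # Hypothetical stand-in for a model call: a plausible-looking guess.
              return int(n**0.5) + random.choice([-1, 0, 1])

          def solve(n: int, attempts: int = 5) -> int | None:
              for _ in range(attempts):
                  candidate = llm_generate(n)
                  if is_valid(candidate, n):  # the verifier, not the LLM, is the authority
                      return candidate
              return None  # still hallucinating after the whole budget
          ```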

          • swlabr@awful.systems · ↑3 · 1 hour ago

            We should maximise hallucinations, actually. That is, we should hack the environmental controls of the data centers to be conducive to fungal growth, and flood them with magic mushroom spores. We can probably get the rats on board by selling it as a different version of nuking the data centers.

          • swlabr@awful.systems · ↑2 · 28 minutes ago

            God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.

      • vivendi@programming.dev · ↑2 ↓4 · edited 2 hours ago

        These views on LLMs are simplistic. As a wise man once said, “check yoself befo yo wreck yoself”, so I recommend more education.

        LLM structures are overhyped, but they’re also not that simple.

      • Blackmist@feddit.uk · ↑9 · 6 hours ago

        I’m guessing that if it actually worked for that, somebody would have done it by now.

        But it probably just does its usual thing of bullshitting something that looks like code, only now you’re wasting the time of maintainers as well, who have to confirm that it is bobbins.

        • gens@programming.dev · ↑9 · 5 hours ago

          Yeah, it’s already a problem for security bugs; LLMs just waste maintainers’ time and make them angry.

          They are useless and make more work for programmers, even on Python and JS codebases, which they are trained on the most and which are the “easiest”.

        • Natanox@discuss.tchncs.de · ↑7 ↓3 · 5 hours ago

          It’s already doing that. Some FOSS projects regularly get weird PRs that at first glance look good, but on closer inspection are either total nonsense or riddled with bugs. Especially awful are security-related PRs, although those are never made in good faith; that’s usually grifting (throwing AI at the wall trying to cash in as many bounties as possible). The project lead of curl recently announced that anyone who posts a PR that’s obviously AI, or is made with AI, will get banned.

          Like, it’s really good as a learning tool as long as you don’t blindly believe everything it says given you can ask stuff in natural language and it will resolve knowledge dependencies for you that you’d otherwise get stuck on in official docs, and since you can ask contextual questions you receive contextual answers (no logical abstraction). But code generation… please don’t.

          • V0ldek@awful.systems · ↑5 · 3 hours ago

            Like, it’s really good as a learning tool

            Fuck you were doing so well in the first half, ahhh,

          • froztbyte@awful.systems · ↑7 · 5 hours ago

            it’s really good as a learning tool as long as you don’t blindly believe everything it says given you can ask stuff in natural language

            the poster: “it’s really good as a learning tool”

            the poster: “but don’t blindly believe it”

            the learner: “how should I know when to believe it?”

            the poster: “check everything”

            the learner: “so you’re saying I should just read the actual documentation and/or source?”

            the poster: “how are you going to ask that anything? how can you fondle something that isn’t a prompt?!”

            the learner: “thanks for your time, I think I’m going to find another class”

            • Natanox@discuss.tchncs.de · ↑2 ↓5 · 2 hours ago

              Nice conversation you had right there in your head. I assume you also took a closer look at it to get a neutral opinion and didn’t just ride one of the two waves “blind AI hype” or “blind AI hate”?

              I’ve taken a closer look at Codestral (which is locally hostable), threw stuff at it, and got a sense for what it can and can’t do. The general gist is that its (Python) syntax is basically always correct, but it sometimes messes up the actual code logic or gets the user request wrong. That makes it a good tool for questions aimed at specific features, at how certain syntax in a language works, or for looking up potential alternative solutions for smaller code snippets. However, it should absolutely not be used to create huge chunks of your code logic; that will always backfire.
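              An invented example of that failure mode, flawless syntax with wrong logic:

              ```python
              # Asked for "the second-largest value", a model can happily produce
              # perfectly valid Python that answers a different question:
              def second_largest(values: list[int]) -> int:
                  return sorted(values)[-1]  # runs fine; returns the largest instead
              ```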

              And since some people will read this and think I’m some AI worshipper, fuck no. They’re amoral as fuck; the only models not screwed up through their creation process are those very few truly FOSS ones. But if you hate on something you have to actually know shit about it and understand its appeal and non-hyped usecases (they do have them, even LLMs). Otherwise you’ll end up in a social corner filled with bitterness and, depending on the topic, perhaps even increasingly extreme opinions (not saying we shouldn’t smash OpenAI and other corposcum into tiny pieces; we absolutely should).

              There are technologies that are utter bullshit like NFTs. However (unfortunately?) that isn’t the case for AI. We just live in an economy that’s good at abusing everything and everyone.

              • froztbyte@awful.systems · ↑4 · 1 hour ago

                Nice conversation you had right there in your head

                that you recognize none of this is telling. that someone else got it, more so.

                I assume

                you could just ask, you know. since you seem so comfortable fondling prompts, not sure why you wouldn’t ask a person. is it because they might tell you to fuck off?

                I’ve taken a closer look…

                fuck off with the unrequested advertising. never mind that no-one asked you for how you felt for some fucking piece of shit. oh, you feel happy that the logo is a certain tint of <colour>? bully for you, now fuck off and do something worthwhile

                That makes it a good tool

                a tool you say? wow, sure glad you’re going to replace your *spins the wheel* Punctured Car Tyre with *spins the wheel again* Needlenose Pliers!

                think I’m some AI worshipper, fuck no. They’re amoral as fuck

                so, you think there’s moral problems, but only sometimes? it’s supes okay to do your version of leveraged exploitation? cool, thanks for letting us know

                those very few truly FOSS ones

                oh yeah, right, the “truly FOSS ones”! tell me again how those are trained - who’s funding that compute? are the licenses contextually included in the model definition?

                wait, hold on! why are you squealing away like a deflating balloon?! those are actual questions! you’re the one who brought up morals!

                Otherwise you’ll end up in a social corner filled with bitterness

                I’ve met people like you at parties. they’re often popular, but they’re never fun. and I always regret it.

                There are technologies that are utter bullshit like NFTs. However (unfortunately?) that isn’t the case for AI

                citation. fucking. needed.

                • Natanox@discuss.tchncs.de · ↑1 ↓5 · 1 hour ago

                  Holy shit, get some help. Given how nonsensically off-the-rails you just went you clearly need it.

              • swlabr@awful.systems · ↑5 · 2 hours ago

                Otherwise you’ll end up in a social corner filled with bitterness

                This is a standard Internet phenomenon (I generalize) called a Sneer Club, i.e. people who enjoy getting together and picking on designated targets. Sneer Clubs (I expect) attract people with high Dark Triad characteristics, which is (I suspect) where Asshole Internet Atheists come from - if you get a club together for the purpose of sneering at religious people, it doesn’t matter that God doesn’t actually exist, the club attracts psychologically f’d-up people. Bullies, in a word, people who are powerfully reinforced by getting in what feels like good hits on Designated Targets, in the company of others doing the same and congratulating each other on it.

  • mriswith@lemmy.world · ↑18 · edited 12 hours ago

    You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers.

    Mostly said by tech bros and startups.

    That should really tell you everything you need to know.

  • vga@sopuli.xyz · ↑6 ↓4 · edited 9 hours ago

    So how do you tell apart AI contributions to open source from human ones?

    • Architeuthis@awful.systems · ↑20 · edited 7 hours ago

      To get a bit meta for a minute, you don’t really need to.

      The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no conditionals, the pr people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.

      Until then it’s probably ok to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same as claims of haunted houses; you don’t really need to debunk every separate witness testimony, it’s self evident that a world where there is an afterlife that also freely intertwines with daily reality would be notably and extensively different to the one we are currently living in.

    • self@awful.systems · ↑11 · 8 hours ago

      if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

          • swlabr@awful.systems · ↑4 · 1 hour ago

            “If <insert your favourite GC’ed language here> had true garbage collection, most programs would delete themselves upon execution.” -Robert Feynman

        • froztbyte@awful.systems · ↑11 · edited 5 hours ago

          I’m sorry you work at such a shit job

          or, I guess, I’m sorry for your teammates if you’re the reason it’s a shit job

          either way it seems to suck for you, maybe you should level your skills up a bit and look at doing things a bit better

      • vga@sopuli.xyz · ↑1 ↓5 · 5 hours ago

        Ah, right, so we’re differentiating contributions made by humans with AI from some kind of pure AI contributions?

        • KubeRoot@discuss.tchncs.de · ↑13 · 4 hours ago

          It’s a joke, because rejected PRs show up as red on GitHub, open (pending) ones as green, and merged as purple, implying AI code will naturally get rejected.

          • Mniot@programming.dev · ↑1 · 49 minutes ago

            I appreciate you explaining it. My LLM wasn’t working so I didn’t understand the joke

  • snooggums@lemmy.world · ↑104 · 18 hours ago

    As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

    This is the most entertaining thing I’ve read this month.

    • froztbyte@awful.systems · ↑10 · 9 hours ago

      yeah, someone elsewhere on awful linked the issue a few days ago, and throughout many of his posts he pulls that kind of stunt the moment he gets called on his shit

      he also wrote a 21 KiB screed very huffily saying one of the projects’ CoC has failed him

      long may his PRs fail

    • makeshiftreaper@lemmy.world · ↑50 · 18 hours ago

      I tried asking some chimps to see if the macaques had written a New York Times best-seller, if not Macbeth, yet somehow Random House wouldn’t publish my work.

  • BarrierWithAshes@fedia.io · ↑92 · 18 hours ago

    Man trust me you don’t want them. I’ve seen people submit ChatGPT generated code and even generated the PR comment with ChatGPT. Horrendous shit.

    • ImplyingImplications@lemmy.ca · ↑42 · edited 13 hours ago

      The maintainers of curl recently announced any bug reports generated by AI need a human to actually prove it’s real. They cited a deluge of reports generated by AI that claim to have found bugs in functions and libraries which don’t even exist in the codebase.

      • froztbyte@awful.systems · ↑11 · 9 hours ago

        you may find, on actually going through the linked post/video, that this is in fact mentioned in there already

    • Hasherm0n@lemmy.world · ↑17 · 13 hours ago

      Today the CISO of the company I work for suggested that we should get qodo.ai because it would “… help the developers improve code quality.”

      I wish I was making this up.

      • Rayquetzalcoatl@lemmy.world · ↑15 · 8 hours ago

        My boss is obsessed with Claude and ChatGPT, and loves to micromanage. Typically, if there’s an issue with what a client is requesting, I’ll approach him with:

        1. What the issue is
        2. At least two possible solutions or alternatives we can offer

        He will then, almost always, ask if I’ve checked with the AI. I’ll say no. He’ll then send me chunks of unusable code that the AI has spat out, which almost always perfectly illuminate the first point I just explained to him.

        It’s getting very boring dealing with the roboloving freaks.

      • Aux@feddit.uk · ↑1 ↓6 · 6 hours ago

        90% of developers are so bad, that even ChatGPT 3.5 is much better.

        • froztbyte@awful.systems · ↑9 · 5 hours ago

          wow 90%, do you have actual studies to back up that number you’re about to claim you didn’t just pull out of your ass?

          • Mniot@programming.dev · ↑4 · 52 minutes ago

            This reminds me of another post I’d read, “Hey, wait – is employee performance really Gaussian distributed??”.

            There’s this phenomenon when you’re an interviewer at a decently-funded start-up where you take a ton of interviews and say “OMG developers are so bad”. But you’ve mistakenly defined “developer” as “person who applies for a developer job”. GPT-3.5 is certainly better at solving interview questions than 90% of the people who apply. But it’s worse than the people who actually pass the interview. (In part because the interview is more than just implementing a standard interview problem.)
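            (For concreteness, by “standard interview problem” I mean something like two-sum, which any model has seen a thousand times:)

            ```python
            def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
                # Classic warm-up: return indices of two numbers that sum to target.
                seen: dict[int, int] = {}
                for i, x in enumerate(nums):
                    if target - x in seen:
                        return seen[target - x], i
                    seen[x] = i
                return None
            ```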

            • froztbyte@awful.systems · ↑3 · 47 minutes ago

              your post has done a significantly better job of understanding the issue than a rather-uncomfortably-large amount of programming.dev posters we get, and that’s refreshing!

              and, yep