• vga@sopuli.xyz · 14 hours ago

    So how do you tell AI contributions to open source apart from human ones?

    • Architeuthis@awful.systems · 12 hours ago

      To get a bit meta for a minute, you don’t really need to.

      The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no conditionals, the PR people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.

      Until then it’s probably OK to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same way as claims of haunted houses: you don’t really need to debunk every separate witness testimony, because it’s self-evident that a world where there is an afterlife that freely intertwines with daily reality would be notably and extensively different from the one we are currently living in.

    • self@awful.systems · 13 hours ago

      if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

      • vga@sopuli.xyz · 10 hours ago

        Ah, right, so we’re differentiating contributions made by humans with AI from some kind of pure AI contributions?

        • self@awful.systems · 2 hours ago

          yeah I just want to point this out

          myself and a bunch of other posters gave you solid ways that we determine which PRs are LLM slop, but apparently it was really hard to engage with those posts, so instead you’re down here aggressively not getting a joke because you desperately need the people rejecting your shitty generated code to be wrong

          with all due respect: go fuck yourself

        • KubeRoot@discuss.tchncs.de · 9 hours ago

          It’s a joke: on GitHub, rejected (closed) PRs show up as red, open (pending) ones as green, and merged ones as purple, implying AI code will naturally end up rejected.