• vortic@lemmy.world

    That explains a lot…

    I do use AI to assist my programming, but I treat everything it suggests as likely flawed. It frequently points me in the right direction but is almost never fully correct. I read the answers carefully, throw them away frequently, and never use a solution without modifying it in some way.

    Also, it's terrible at handling more complex tasks. I just use it to help me construct small building blocks while I design and build the larger codebase myself.

    If 30% of my code was written by AI it would be utter trash.

    • Pup Biru@aussie.zone

      AI is like a utils library: it can do well-known boilerplate like sorting very well, but it’s not likely to actually write your code for you

      AI is like fill down in spreadsheets: it can repeat a sequence with slight, obvious modifications but it’s not going to invent the data for you

      AI is like static analysis for tests: it can roughly write test outlines, but they might not actually tell you anything about the state of the code under test
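
      A hypothetical sketch of that last point (the function and test names here are invented for illustration, not taken from any real project): an AI-drafted test can run green while asserting nothing useful about the code under test.

      ```python
      # Hypothetical example: an AI-drafted test that passes but verifies nothing.

      def apply_discount(price: float, percent: float) -> float:
          # Pretend this is the real function under test.
          return price * (1 - percent / 100)

      def test_apply_discount():
          result = apply_discount(100.0, 20.0)
          # The outline is there, but the assertion is vacuous:
          # it never checks the discounted value or any edge case.
          assert result is not None
      ```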

    • jonne@infosec.pub

      And presumably most developers at Microsoft take a similar approach (all the ‘this explains everything’ comments notwithstanding), so it’s ridiculous that they’re even tracking this as a metric. If 30% of the code is AI generated but the devs had to throw away 90% of what the AI produced, that doesn’t mean you could get rid of the developers; they still did a huge amount of work just checking the AI’s output and fixing things after it.

      This metric is misleading and will cause management to make the wrong decisions.