• Echo Dot@feddit.uk · +33/−1 · 2 days ago

    It’s not that developers are switching to AI tools; it’s that Stack Overflow is awful and has been for a long time. The AI tools are simply providing a better alternative, which really demonstrates how awful Stack Overflow is, because the AI tools are not that good.

    • Gsus4@mander.xyzOP · +4/−1 · edited · 2 days ago

      Undoubtedly. But you agree that the crowdsourced knowledge base of existing answers is useful, no? That is what the islop searches and reproduces. It is more convenient than waiting for a rude answer. But I don’t think islop will give you a good answer if someone hasn’t bothered to answer it on SO before.

      islop is a convenience, but you should fear the day you lose the original and the only way to get that info is some opaque islop oracle.

      • Ledivin@lemmy.world · +3 · 2 days ago

        Most answers on SO are either from a doc page, are common patterns found in multiple books, or are mostly opinion-based. Most code AIs are significantly better at the first two without even being trained on SO (which I wouldn’t want anyway - SO really does suck nowadays).

  • eronth@lemmy.dbzer0.com · +49/−1 · 3 days ago

    Honestly just funny to see. It makes perfect sense, based on how they made the site hostile to users.

    • ByteOnBikes@discuss.online · +18 · edited · 2 days ago

      I was contributing to SO in 2014-2017 when my job wanted our engineers to be more “visible” online.

      I was in the top 3% and it made me realize how incredibly small the community was. I was probably answering like 5 questions a week. It wasn’t hard. For some perspective, I’m making like 4-5 posts on Lemmy A DAY.

      What made me really pissed was how often a new person would give a really good answer, then some top 1% chucklefuck would literally take that answer, rewrite it, and have it appear as the top answer. And that happened to me constantly. But again, I didn’t care, since I was just doing this to show my company I was a “good lil engineer”.

      I stopped participating because of how they treated new users. And around 2020(?), SO made a pledge to be not so douchy and actually allow new users to ask questions. But that 1% chucklefuck crew was still allowed to wave their dicks around and stomp on people’s answers. So yeah, less “Duplicate questions”, more “This has been answered already [link to their own answer that they stole]”.

      So they removed the toxic attitude around asking questions, but not the toxicity around answering. SO still let the sweatiest people control responses, including editing/deleting them. And you can’t grow a community like that.

  • Sanctus@anarchist.nexus · +5 · 2 days ago

    What? People would rather have their balls licked by AI rather than have some neckbeard moderator change the entire language of their question and not answer shit? Fuck SO. That shit was so ass to interact with.

  • melfie@lemy.lol · +76 · edited · 3 days ago

    This is not because AI is good at answering programming questions accurately, it’s because SO sucks. The graph shows its growth leveling off around 2014 and then starting the decline around 2016, which isn’t even temporally correlated with LLMs.

    Sites like SO where experienced humans can give insightful answers to obscure programming questions are clearly still needed. Every time I ask AI a programming question about something obscure, it usually knows less than I do, and if I can’t find a post where another human had the same problem, I’m usually left to figure it out for myself.

    • vane@lemmy.world · +12/−1 · 3 days ago

      2016 is probably when they removed freedom by introducing aggressive moderation to remove duplicates and ban people.

      • skisnow@lemmy.ca · +4/−1 · 3 days ago

        It was a toxic garbage heap way before 2016. I remember creating an account to try building karma there back in about 2011 when doing that was seen as a good way to land senior job roles. Gave up very quickly.

  • perry@aussie.zone · +133/−4 · 4 days ago

    I post there every 6-12 months in the hope of receiving some help or intelligent feedback, but usually just have my question locked or removed. The platform is an utter joke and has been for years. AI was not entirely the reason for its downfall imo.

    • BrianTheeBiscuiteer@lemmy.world · +22/−3 · 3 days ago

      Not common I’m sure, but I once had an answer I posted completely rewritten for grammar, punctuation, and capitalization. I felt so valued. /s

      • SleeplessCityLights@programming.dev · +15/−1 · 3 days ago

        The last time I asked a question, I followed the formatting of a recent popular question/post. Someone did not like that and decided to impose their own formatting, then proceeded to dramatically change my posts and updates. Also, people kept giving me solutions to problems I never included in my question. The whole thing was ridiculous.

      • poopkins@lemmy.world · +12/−1 · 3 days ago

        As a mod, this is all I ever did on the platform. Thanks for the appreciation!

      • kazerniel@lemmy.world · +2/−1 · 3 days ago

        haha I ran into this too, someone changed the title of my question on one of their non-programming boards - I was so pissed, I never went back to that particular board (it was especially annoying because it was a quite personal question)

    • chrischryse@lemmy.world · +9 · 3 days ago

      I used to post and had the same thing happen. Then people would insult me for not knowing, like “why do you think I’m asking?”

  • rumschlumpel@feddit.org · +216/−3 · 4 days ago

    TBH asking questions on SO (and most similar platforms) fucking sucks, no surprise that users jump at the first opportunity at getting answers another way.

    • slate@sh.itjust.works · +237/−1 · 4 days ago

      Removed. Someone else already said this before. Also, please ensure you stick to the style guides next time, and be less ambiguous. SO could mean a plethora of things.

      • rumschlumpel@feddit.org · +107 · edited · 4 days ago

        Last time this question was answered was for several years older software versions, and the old solutions don’t work anymore. Whoops!

            • comador@lemmy.world · +4 · 3 days ago

              Shivers…

              I remember when I signed up for SO and was immediately put off by the fact you couldn’t post a conversation asking for help until you had helped others out AND gotten enough positive points.

              I still did it, but damn their moderation system is ass.

              • amateurcrastinator@lemmy.world · +2 · 3 days ago

                Ah yes, the famous: “You need to add more details, maybe a picture”, but you need above 100 reputation before you can add a picture or edit your question.

          • ZILtoid1991@lemmy.world · +8 · 3 days ago

            In a video covering the toxicity of Stack Overflow, it was found that at least some of the admins are also extremely toxic on other sites, in that same exact manner.

    • thebestaquaman@lemmy.world · +40 · 3 days ago

      I will never forget the time I posted a question about why something wasn’t working as I expected, with a minimal example (≈ 10 lines of python, no external libraries) and a description of the expected behaviour and observed behaviour.

      The first three-ish replies I got were instant comments that this in fact does work like I would expect, and that the observed behaviour I described wasn’t what the code would produce. A day later, some highly-rated user made a friendly note that I had a typo that just happened to trigger this very unexpected error.

      Basically, I was thrashed by the first replies, when the people replying hadn’t even run the code. It felt extremely good to be able to reply to them that they were asshats for saying that the code didn’t do what I said it did when they hadn’t even run it.
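      For anyone who hasn’t been bitten by this: the original snippet isn’t shown, but below is a hypothetical ten-line Python example of the kind of one-character typo that triggers behaviour reviewers will swear is impossible until they actually run the code. A stray trailing comma silently turns an int into a tuple:

```python
def apply_discount(price, discount):
    """Expected: plain integer arithmetic."""
    return price - discount

total = 100,  # typo: the trailing comma makes this the tuple (100,)

try:
    print(apply_discount(total, 10))  # "obviously" should print 90...
except TypeError:
    # ...but actually raises: unsupported operand type(s) for -: 'tuple' and 'int'
    print("surprise:", type(total).__name__)
```

      Anyone skimming without running it would “confirm” it prints 90, which is exactly the drive-by commenting described above.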

      • rumschlumpel@feddit.org · +31 · edited · 4 days ago

        I do understand being rigorous about questions, and technical forums were even worse a lot of the time, but SO’s methods led to the site becoming severely outdated. They really should have introduced a mechanism to mark old content as outdated. It should have been obvious like 10 years ago that solutions often stop working come next major version of the programming language, framework or operating system.

  • micka190@lemmy.world · +138/−1 · edited · 4 days ago

    According to a Stack Overflow survey from 2025, 84 percent of developers now use or plan to use AI tools, up from 76 percent a year earlier. This rapid adoption partly explains the decline in forum activity.

    As someone who participated in the survey, I’d recommend everyone take anything regarding SO’s recent surveys with a truckfull of salt. The recent surveys have been unbelievably biased with tons of leading questions that force you to answer in specific ways. They’re basically completely worthless in terms of statistics.

    • chaosCruiser@futurology.today · +62/−11 · 4 days ago

      Realistically though, asking an LLM what’s wrong with my code is a lot faster than scrolling through 50 posts and reading the ones that talk about something almost relevant.

      • Rob T Firefly@lemmy.world · +64/−23 · 4 days ago

        It’s even faster to ask your own armpit what’s wrong with your code, but that alone doesn’t mean you’re getting a good answer from it.

        • MagicShel@lemmy.zip · +40/−11 · 4 days ago

          If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And my experience is that it’s much better than 20%. Though it really depends a lot on the code base you’re working on.

          • chaosCruiser@futurology.today · +38/−2 · 4 days ago

            Also depends on your level of expertise. If you have beginner questions, an LLM should give you the correct answer most of the time. If you’re an expert, your questions have no answers. Usually, it’s something like an obscure firmware bug edge case even the manufacturer isn’t aware of. Good luck troubleshooting that without writing your own drivers and libraries.

            • MagicShel@lemmy.zip · +16 · 3 days ago

              If you’re writing cutting edge shit, then LLM is probably at best a rubber duck for talking things through. Then there are tons of programmers where the job is to translate business requirements into bog standard code over and over and over.

              Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.

              • Zos_Kia@lemmynsfw.com · +11 · 3 days ago

                Yeah the internet seems to think coding is an expert thing when 99.9% of coders do exactly what you described. I do it, you do it, everybody does it. Even the people claiming to do big boy coding, when you really look at the details, they’re mostly slapping bog standard code on business needs.

              • chaosCruiser@futurology.today · +6 · 3 days ago

                Boring standard coding is exactly where you can actually let the LLM write the code. Manual intervention and review is still required, but at least you can speed up the process.

                • Aceticon@lemmy.dbzer0.com · +7/−2 · edited · 3 days ago

                  Code made up of several parts with inconsistent styles of coding and design is going to FUCK YOU UP in the medium and long term, unless you never again have to touch that code.

                  It’s only faster if you’re doing small enough projects that an LLM can generate the whole thing in one go (so, almost certainly, not working as a professional at a level beyond junior) and it’s something you will never have to maintain (i.e. prototyping).

                  Using an LLM is like handing the work to a large group of junior developers, where each time you hand out a task a random one picks it up, and you can’t actually teach them. Even when it works, what you get is riddled with bad practices and design errors that aren’t even consistent between tasks. So when you piece the software together, it’s from the very start the kind of spaghetti mess you normally only see in a project with years in production, maintained by lots of different people who didn’t even try to follow each other’s coding style. And since you can’t teach them things like coding standards or designing for extensibility, it will always be just as fucked up as it was on day one.

            • SkunkWorkz@lemmy.world · +6 · 3 days ago

              Yeah, but in that edge case SO wouldn’t help either, even before the current crash. Unless you were lucky. I find LLMs useful for pushing me in the right direction when I’m stuck and the documentation isn’t helping, not necessarily for giving me perfectly written code. It’s like pair programming with someone who isn’t a coder but has somehow read all the documentation and programming books. Sometimes the left-field suggestions it makes are quite helpful.

              • chaosCruiser@futurology.today · +2 · 3 days ago

                I’ve found some interesting and even good new functions by moaning my code woes to an LLM. Also, it has taken me on some pointless wild goose chases too, so you better watch out. Any suggestion has the potential to be anywhere from absolutely brilliant to a completely stupid waste of time.

          • Avid Amoeba@lemmy.ca · +21/−4 · 3 days ago

            How do you know it’s a good answer? That requires prior knowledge that you might not have. My juniors repeatedly demonstrate they’ve no ability to tell whether an LLM solution is a good one or not. It’s like copying from SO without reading the comments, which they quickly learn not to do because it doesn’t pass code review.

            • MagicShel@lemmy.zip · +10/−2 · edited · 3 days ago

              That’s exactly the question, right? LLMs aren’t a free skill up. They let you operate at your current level or maybe slightly above, but they let you iterate very quickly.

              If you don’t know how to write good code then how can you know if the AI nailed it, if you need to tweak the prompt and try over, or if you just need to fix a couple of things by hand?

              (Below are just skippable anecdotes)


              Couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written but there were some curious artifacts, like there was a specific value being hard-coded to a DTO and it didn’t make sense to me that doing that was in any way security related.

              So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.

              In the meanwhile, I looked into the issue, figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I figured out a proper fix.

              This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.

              So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.

              But he would’ve merged that. Frankly, the incuriosity about the code he’d been handed was concerning. You don’t just accept code from a junior or an LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.


              Shit, a couple of years before that, before any LLMs, I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three-way dependency loop, like A > B > C > A, and it was challenging to reason about; it frequently had to change just to keep things running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.

              He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.

              Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes not counting unit tests — but they were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.
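              The original was Java/Spring and isn’t shown, but a minimal Python sketch of that A > B > C > A constructor loop shows why it’s so nasty: constructing any one class requires constructing all the others first, so construction never terminates.

```python
class A:
    def __init__(self):
        self.b = B()  # A needs a B...

class B:
    def __init__(self):
        self.c = C()  # ...which needs a C...

class C:
    def __init__(self):
        self.a = A()  # ...which needs an A again: the cycle is closed

try:
    A()
except RecursionError:
    print("A -> B -> C -> A: construction never terminates")
```

              Spring can sometimes paper over such cycles (e.g. via setter injection or lazy proxies), which is roughly why the app would only sometimes fail to start; refactoring the dependencies into an acyclic graph, as described above, removes the problem outright.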

            • mcv@lemmy.zip · +4 · 3 days ago

              This is the big issue. LLMs are useful to me (to some degree) because I can tell when an answer is probably on the right track and when it’s bullshit. And still, I’ve occasionally wasted time following one in the wrong direction. People with less experience, or more trust in LLMs, are much more likely to fall into that trap.

              LLMs offer benefits and risks. You need to learn how to use them.

          • PlutoniumAcid@lemmy.world · +7/−1 · 4 days ago

            Also depends on how you phrase the question to the LLM, and whether it has access to source files.

            A web chat session can’t do a lot, but an interactive shell like Claude Code is amazing - if you know how to work it.

    • MoogleMaestro@lemmy.zip · +2 · 2 days ago

      Will the AI still flame me if I ask the wrong question?

      Is nothing sacred anymore?

      Real talk though, it is concerning when it feels like 3 out of 5 times you ask AI something, you get a completely harebrained answer back. SO will probably need to clamp down on non-logged-in browsing and enforce API limits to make sure that AI trainers are paying for the data they need.

      • jaykrown@lemmy.world · +1 · 2 days ago

        Depends on the model, I think Opus 4.5 is the only model that I’ve prompted which is getting close to not just being a boring sycophant.

  • Wispy2891@lemmy.world · +20 · 3 days ago

    Already before the LLMs, posting there was my last resort. The desperation move. It was too toxic, and I would always get pissed when my question was closed for being too similar, or too easy, or whatever. Hey, I wasted 15 minutes typing that; if the other question had solved my problem, I wouldn’t have posted again…

    In the beginning it wasn’t like that…

    I went back to look at my Stack Overflow account, and almost all of the first questions I posted (the ones that gave me 2000 karma) would have been rejected and removed today.

  • BackgrndNoize@lemmy.world · +41 · edited · 3 days ago

    Even before AI, I stopped asking questions (or answering, for that matter) on that website within the first few months of using it. It just wasn’t worth the hassle of dealing with the mods and the neckbeard-ass users, and I didn’t want my account suspended over some BS in case I really needed to ask an actual question in the future. Now I can’t remember the last time I’ve been to any Stack website, and it doesn’t show up in Google search results anymore. They dug their own grave.

    • Buddahriffic@lemmy.world · +13 · 3 days ago

      I stopped using it once I found out their entire business model was basically copyright trolling on a technicality (anyone who answers a question assigns them the copyright to that answer), then using code audits to go after businesses that had copy/pasted code. It left a bad taste in my mouth, and I stopped using it for work even though I wasn’t copy/pasting code.

      And even before LLMs, I found that ignoring Stack Exchange results in a search usually still got me to the right information.

      But yeah, it also had a moderation problem. Give people a hammer of power and some will go searching for nails, and now you don’t have anywhere to hang things because the mod was dumber than the user they thought they needed to moderate. Even Google can figure out that my question is different from the supposed duplicate it was closed as, because it sends me to the closed one, not the tangentially related question the dumbass mod thought was the same thing. Similar energy to people who go to help forums and reply useless shit like RTFM. They aren’t really upset at “having” to take time to respond; they’re excited about a chance to act superior to someone.

    • JackbyDev@programming.dev · +18 · 3 days ago

      The humans of StackOverflow have been pricks for so long. If they fixed that problem years ago they would have been in a great position with the advent of AI. They could’ve marketed themselves as a site for humans. But no, fuckfacepoweruser found an answer to a different question he believes answers your question so marked your question as a duplicate and fuckfacerubberstamper voted to close it in the queue without critically thinking about it.

      • theolodis@feddit.org · +4/−1 · 3 days ago

        I used to moderate and answer questions on SO, but stopped because at some point you see the 500th question about how to use some javascript function.

        Of course I flagged them all as duplicates and linked them to an extensive answer about the specific function, explaining all aspects and edge cases, because I don’t think there need to be 500 similar answers (who’s going to maintain them?).

        But yeah, sorry that I didn’t fix YOUR code sample, and you had to actually do your homework by yourself.

        • JackbyDev@programming.dev · +2 · 3 days ago

          My questions weren’t homework problems with 500 duplicates. Maybe that type of shit being the most common in the vote to close queue is why fuckfacerubberstamper can’t be bothered to actually think about what they’re closing as dupes.

      • ramjambamalam@lemmy.ca · +4/−5 · 3 days ago

        If the alternative is the cesspit that is Yahoo Answers and Quora, I’ll take the heavy-handed moderation of StackOverflow.

          • ramjambamalam@lemmy.ca · +1 · 2 days ago

            Of course there’s a middle ground; in my ideal world it’s much closer to StackOverflow than to Yahoo Answers or Quora.

            • JackbyDev@programming.dev · +2/−1 · 3 days ago

              Like Lemmy? The site we’re all using?

              But no my point wasn’t about a specific site, it’s about the moderation approach. Do you really think there’s no middle ground in approach to moderation between Yahoo Answers and StackOverflow?

              • elephantium@lemmy.world · +1/−1 · 2 days ago

                Like Lemmy? The site we’re all using?

                Cute. Except Lemmy hasn’t helped me solve any programming problems. StackOverflow has.

                And I think you missed my point, so I’ll restate it: If this theoretical middle-ground moderation were actually viable, it would have eaten StackOverflow’s lunch like a decade ago. People were SALTY about SO’s hostility even before the “summer of love” campaign in 2012.

                • JackbyDev@programming.dev · +1/−1 · 1 day ago

                  It’s viable; StackExchange as a company is just shit. See: them never listening to meta, listening more to random Twitter users, and defaming their volunteer moderators.

    • kazerniel@lemmy.world · +8 · edited · 3 days ago

      Hear hear, it was the hostile atmosphere that pushed me away from Stack Exchange years before LLMs were a thing. That very clear impression that the site does not exist to help specific people but a vague public audience, and that every question and answer is subordinated to that. Since then I just ask/answer questions on platforms like Lemmy, Reddit, Discord, or the Discourse forums run by various organisations; it’s a much more pleasant experience.

    • dgmib@lemmy.world · +4 · 3 days ago

      The stupidest part is that their aggressive hostility against new questions means that the content is becoming dated. The answers to many, many questions will change as the tech evolves.

      And since AI’s ability to answer tech questions depends heavily on a similar question being in the training dataset, all the AIs are going to increasingly give outdated answers.

      They really have shot themselves in the foot for at best some short term gain.

    • THE_GR8_MIKE@lemmy.world · +1 · 3 days ago

      This was my issue. The two times I posted real, actual questions that I needed help with, trying to provide as much detail as possible while admitting I didn’t understand the subject, I got clowned on, immediately downvoted into the negative, and got no actual help whatsoever. Now I just hope someone else has had a similar issue.

  • nutsack@lemmy.dbzer0.com · +32 · edited · 3 days ago

    I’ve posted questions, but I don’t usually need to because someone else has posted it before. This is probably the reason AI is so good at answering these types of questions.

    the trouble now is that there’s less of a business incentive to have a platform like stack overflow where humans are sharing knowledge directly with one another, because the AI is just copying all the data and delivering it to the users somewhere else.

    • rumba@lemmy.zip · +23 · 3 days ago

      Works well for now. Wait until there’s something new that it hasn’t been trained on. It needs that Stack Exchange data to train on.

      • nutsack@lemmy.dbzer0.com · +2 · edited · 3 days ago

        Yes, I think this will create a new problem. New things won’t be created very often, at least not by small shops or independent developers, because there will be this barrier to adoption. Corporate-controlled AI will need to learn them somehow.

      • cherrari@feddit.org
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        23
        ·
        3 days ago

        I don’t think so. All AI needs now is formal specs of some technical subject, not even human readable docs, let alone translations to other languages. In some ways, this is really beautiful.

        • SoftestSapphic@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          1
          ·
          edit-2
          3 days ago

          Lol no, AI can’t do a single thing without humans who have already done it hundreds of thousands of times feeding it their data

          • okmko@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            1
            ·
            3 days ago

            I used to push back but now I just ignore it when people think that these models have cognition because companies have pushed so hard to call it AI.

        • 123@programming.dev
          link
          fedilink
          English
          arrow-up
          8
          ·
          3 days ago

          Technical specs don’t capture the bugs, edge cases, and workarounds that come with technical subjects like software.

          • cherrari@feddit.org
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            20 hours ago

            I can only speak for myself, obviously, and my context here is some very recent and very extensive experience applying AI to new software developed internally in the org where I participate. So far, AI has eliminated any need for assistance with understanding it, and it was definitely not trained on this particular software. It’s hard to imagine why I’d ever go to SO to ask questions about this software, even if I could. And if it works so well on such a tiny edge case, I can’t imagine it will do a bad job on something used at scale.

            • 123@programming.dev
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 days ago

              If we go by personal experience: we recently had several people’s time wasted troubleshooting an issue with a very well-known commercial Java app server. The AI overview hallucinated a fake system property for addressing an issue we had.

              The person who proposed the change neglected to mention they got it from AI until someone noticed the setting did not appear anywhere in the official system properties documented by the vendor. Now their personal reputation is that they should not be trusted, and they seem lazy on top of it because they could not use their eyes to read a one-page document.

              • cherrari@feddit.org
                link
                fedilink
                English
                arrow-up
                1
                ·
                20 hours ago

                That’s a very interesting insight. Maybe the amount of hallucination depends on whether the “knowledge” was loaded in the form of a prompt vs. training data? In the experience I’m talking about there’s no hallucination at all, but there are wrong conclusions and hypotheses sometimes, especially with really tricky bugs. But that’s normal; the really tricky edge cases are probably not something I’d expect to find on SO anyway…

        • rumba@lemmy.zip
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          3 days ago

          It can’t handle things it wasn’t trained on very well, or at least not anything substantially different from what it was trained on.

          It can usually apply rules it’s trained on to material covered in its training data: ask it for a list of female YA authors and it does fine. But when you ask it for something its training doesn’t cover directly (like how many R’s there are in certain words), it often fails.

          • webadict@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            3 days ago

            Actually, the Rs issue is funny because it WAS trained on that exact information, which is why it says strawberry has two Rs; it’s actually more proof that it only knows what it has been given data on. The thing is, when people misspelled strawberry as “strawbery”, then naturally, people responded, “Strawberry has two Rs” (meaning in “berry”). The problem is that LLM learning has no concept of context, because it isn’t learning anything. The reinforcement mechanism is whatever the majority of its data tells it. It regurgitates that strawberry has two Rs because that has been reinforced by its dataset.

            • rumba@lemmy.zip
              link
              fedilink
              English
              arrow-up
              2
              ·
              3 days ago

              Interesting story, but I’ve seen the same thing with how many “ass” there are in “assassin”.

              You can probe the stuff it’s bad at, and a lot of it doesn’t line up well with the story that it comes from how people were corrected.

              • webadict@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 days ago

                But that’s exactly how an LLM is trained. It doesn’t know how words are spelled because words are turned into numbers and processed. But it does know when its dataset has multiple correlations for something. Specifically, people spell out words, so it will regurgitate to you how to spell strawberry, but it can’t count letters because that’s not a thing that language models do.

                Generative AI and LLMs are just giant reconstruction bots that take all the data they have and reconstruct something. That’s literally what they do.
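The “words are turned into numbers” point can be sketched in a few lines. This is a toy illustration only: the subword vocabulary and token IDs below are invented for the example, not taken from any real tokenizer.

```python
# Toy illustration of subword tokenization (hypothetical vocabulary and IDs,
# not any real tokenizer): once text becomes token IDs, the model's input
# no longer contains individual letters to count.
toy_vocab = {"straw": 101, "berry": 102}

def toy_tokenize(word):
    """Greedy longest-match split of `word` against toy_vocab."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in toy_vocab:
                tokens.append(toy_vocab[word[i:j]])
                i = j
                break
        else:
            # fall back to a per-character token for unknown pieces
            tokens.append(ord(word[i]))
            i += 1
    return tokens

print(toy_tokenize("strawberry"))  # [101, 102] -- the letters are gone
print("strawberry".count("r"))     # 3 -- trivial when you still have the characters
```

Counting the R’s is a one-liner when you have the characters; the model only ever sees something like `[101, 102]`.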

                Like, without knowing what your answer is for assassin, I will assume that the question is probably “How many asses are in assassin?” But, like, that’s a joke: assassin only has one ass, just like the rest of us. And nobody would ever spell assassin as “assin”, so why would it learn that there are two asses in assassin?

                I’m confused where you are getting your information from, but this is not particularly special behavior.

        • skisnow@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          The whole point of StackExchange is that it contained everything that isn’t in the docs.

    • GamingChairModel@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      ·
      3 days ago

      The hot concept around the late 2000’s and early 2010’s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

      Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.

      • rumba@lemmy.zip
        link
        fedilink
        English
        arrow-up
        8
        ·
        3 days ago

        Probably explains why Quora started sending me multiple daily emails about shit I didn’t care about and removed the unsubscribe buttons from the emails.

        I don’t delete many accounts… but that was one of them

    • Gsus4@mander.xyzOP
      link
      fedilink
      English
      arrow-up
      8
      ·
      edit-2
      3 days ago

      What we’re all afraid of is that cheap slop is going to make Stack go broke/close/get bought/go private, and then it will be removed from the public domain… then they’ll jack up the price of islop when the alternative is gone…

      • NιƙƙιDιɱҽʂ@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        3 days ago

        I do wonder, then, as new languages and tools are developed, how quickly AI models will be able to parrot information on their use if sources like Stack Overflow cease to exist.

        • Gsus4@mander.xyzOP
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 days ago

          I think this is a classic case of privatization of the commons, so that nobody can compete with them later without free public datasets…

        • rumba@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          It’ll certainly be of lesser quality, even if they take steps to make it able to address new things.

          Good documentation and ported open projects might be enough to give you working code, but it’s not going to be able to optimize it without being trained on tons of optimization data.