• jim_v@lemmy.world · 2 points · 39 minutes ago (edited)

      If I were 14 and had an interest in coding, the promise of ‘vibe coding’ would absolutely reel me in. Most of us here on Lemmy are older and more tech-savvy, so it’s easy to forget that we were asking Jeeves for .bat commands and borrowing* from Planet Source Code.

      But yeah, it feels like satire. Haha.

      • JackbyDev@programming.dev · 1 point · 12 minutes ago

        I feel you, and I agree that as a learning tool that’s probably how it’s being used (whether that’s good or bad is a different topic), but the fact that they immediately talk about having to pay a dev makes it sound like someone who isn’t trying to learn but trying to make a product.

  • sol6_vi@lemmy.makearmy.io · 12 points · 14 hours ago

    I’m not a programmer by any stretch, but what LLMs have been great for is getting my homelab set up. I’ve even done some custom UI stuff for work that talks to open-source backend things we run. I think I’ve actually learned a fair bit from the experience, and if I had to start over I’d be able to do way, way more on my own than I could when I first started. It’s not perfect, and as others have mentioned, I have broken things and had to restart projects completely from scratch. But the second time through I knew where the pitfalls were, and I’m getting better at knowing what to ask for and telling it what to avoid.

    I’m not a programmer, but I’m not trying to ship anything either. In general I’m a pretty anti-AI guy, but for the uninitiated who want to get started with a homelab, I’d say it’s damn near instrumental for a quick turnaround, and it’s a fairly decent educational tool.

    • AppearanceBoring9229@sh.itjust.works · 11 points · 13 hours ago

      This is the correct way to do it: use it, see if it works for you, and try to understand what happened. It’s not that different from using examples or Stack Overflow. With time you get better, but you need that last critical-thinking step. Otherwise you will never learn and will just copy-paste hoping it works.

    • gerryflap@feddit.nl · 1 point · 11 hours ago

      As a programmer, I’ve found it infinitely more useful for troubleshooting and setting things up than for programming. When my Arch Linux install nukes itself again, I know I’ll use an LLM; when I find a random old device or game at the thrift store and want to get it working, I’ll use an LLM; etc. For programming I only use the IntelliJ line-completion models, since they’re smart enough to see patterns in the dumb busywork but don’t try to outsmart me most of the time, which would only cost more time.

  • humanspiral@lemmy.ca · 33 points · 22 hours ago (edited)

    As a software developer, I’ve found some free LLMs to provide productivity boosts. It is a hair-pulling experience to try too hard to get a bad LLM to correct itself, and learning to switch quickly away from bad LLMs is a key skill in using them. A good model is still one whose broken code you can fix, and which you can ask to understand why what you provided fixes it. They need a long context window to not repeat their mistakes; Qwen 3 is very good at this. Open source also means a future of customizing to a domain (i.e. language-specific optimizations), plus privacy trust/unlimited use with enough local RAM, with some confidence that the AI is working for you rather than collecting data for others. Claude Sonnet 4 is stronger, but free access is limited.

    The permanent downside of the high-market-cap US AI industry is that it will always be a vector for NSA/fascist empire supremacy, and the Skynet goal, in addition to potentially stealing your input/output streams. The future for users who need to opt out of these threats is local inference, and open source that can be customized to the domains important to users/organizations. Open models are already at close parity, IMO from my investigations, and customization, relatively low-hanging fruit, is a certain path to exceeding parity for most applications.

    No LLM can be trusted to let you do something you have no expertise in. That will remain an optimistic future for longer than you hope.

    • Donkter@lemmy.world · 10 points · 22 hours ago

      I think the key to good LLM usage is a light touch. Let the LLM know what you want, maybe refine it if you see where the result went wrong. But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product. Just abandon it and try to do the thing yourself or get someone who knows what you want.

      They get confused easily, and despite what is being pitched, they don’t really learn very well. So if they get something wrong the first time they aren’t going to figure it out after another hour or two.

        • mad_lentil@lemmy.ca · 5 points · 20 hours ago

        In my experience, they’re better at poking holes in code than writing it, whether that’s green or brownfield.

        I’ve tried to get it to make sections of changes for me, and it feels very productive, but when I time myself I find I spend probably more time correcting the LLM’s work than if I’d just written it myself.

        But if you ask it to judge a refactor, then you might actually get one or two good points. You just have to really be careful to double check its assertions if you’re unfamiliar with anything, because it will lead you to some real boners if you just follow it blindly.

          • lapping6596@lemmy.world · 2 points · 19 hours ago

          At work we’ve got CodeRabbit set up on our GitHub, and it has found bugs that I wrote. Sometimes the thing drives me insane with pointless comments, but just today it found a spot that would have become a big bug in prod in about 3 months.

        • humanspiral@lemmy.ca · 2 points · 20 hours ago

        But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product.

        Yes, kind of. It takes (a couple of days of) experience with LLMs to know that failing to understand your corrections means immediate delete-and-try-another-LLM. The only OpenAI LLM I tried was their 120B open-source release. It insisted that it was correct in its stupidity. That’s worse than LLMs that forget the corrections from 3 prompts ago, though I learned that is also grounds for deletion over any hope of usefulness.

  • UnderpantsWeevil@lemmy.world · 116 up, 2 down · 1 day ago

    It is not useless. You should absolutely continue to vibes code. Don’t let a professional get involved at the ground floor. Don’t bring professional staff in-house.

    Please continue paying me $200/hr for months on end debugging your Baby’s First Web App tier coding project long after anyone else can salvage it.

    And don’t forget to tell your investors how smart you are by Vibes Coding! That’s the most important part. Secure! That! Series! B! Go public! Get yourself a billion dollar valuation on these projects!

    Keep me in the good wine and the nice car! I love vibes coding.

    • vala@lemmy.dbzer0.com · 2 points · 11 hours ago

      Kinda hard to find jobs right now in the midst of all this, but I’m looking forward to the absolutely inevitable decade-long cleanup.

    • sturger@sh.itjust.works · 3 points · 15 hours ago

      Also, don’t waste money on doctor visits. Let Bing diagnose your problems for pennies on the dollar. Be smart! Don’t let some doctor tell you what to do.

      IANAL so: /s

    • Ajen@sh.itjust.works · 12 points · 24 hours ago

      Not me, I’d rather work on a clean code base without any slop, even if it pays a little less. QoL > TC

  • TomMasz@piefed.social · 27 points · 1 day ago

    I’m sure it’s fun to see a series of text prompts turn into an app, but if you don’t understand the code and can’t fix it when it doesn’t work without starting over, you’re going to have a bad time. Sure, it takes time and effort to learn to program, but it pays off in the end.

    • Ledivin@lemmy.world · 2 points · 14 hours ago (edited)

      Yeah, mostly agreed. In my experience so far, an experienced dev who’s really putting time into their setup can greatly accelerate their output with these tools, while an inexperienced dev will end up taking way longer (and understanding less) than if they had worked normally.

  • Two9A@lemmy.world · 34 points · 1 day ago

    So there are multiple people in this thread who state their job is to unfuck what the LLMs are doing. I have a family member who graduated in CS a year ago and is having a hell of a time finding work, how would he go about getting one of these “clean up after the model” jobs?

    • vala@lemmy.dbzer0.com · 2 points · 11 hours ago

      I’ve been an engineer for over a decade and am now having a hard time finding work because of this LLM situation, so I can’t imagine how a fresh graduate must feel.

        • buttnugget@lemmy.world · 2 points · 20 hours ago

          It would be nice if software development were a real profession and people could get that experience properly.

          • sturger@sh.itjust.works · 2 points · 15 hours ago

            It was. Wall St is destroying it, along with everything else in its insatiable drive for more profit. Everything must be sacrificed to the golden idol.

    • OpenPassageways@lemmy.zip · 20 points · 1 day ago

      It makes me so mad that there are CS grads who can’t find work at the same time as companies are exploiting the H1B process saying “there aren’t enough applicants”. When are these companies going to be held accountable?

      • Schadrach@lemmy.sdf.org · 5 points · 1 day ago

        This is in no way new. 20 years ago I used to refer to some job postings as H1Bait, because they’d have requirements that were physically impossible (like 5 years of experience with a piece of software less than 2 years old), specifically so they could claim they couldn’t find anyone qualified (because anyone claiming to be qualified was definitely lying) to justify an H1B, for which they would suddenly be way less thorough about checking qualifications.

        • OpenPassageways@lemmy.zip · 2 points · 1 day ago

          Yeah, companies have always been abusing H1B, but it seems like only recently has it been so hard for CS grads to find jobs. I didn’t have much trouble in 2010, and it was easy for me to hop jobs over the last 10 years.

          Now, not so much.

      • rumba@lemmy.zip · 4 points · 1 day ago

        After they fill up on H1B workers and find out that only 1/10 is a good investment.

        H1B development work has been a thing for decades, but there’s a reason why there are still high-paying development jobs in the US.

    • CodeMonkey@programming.dev · 17 up, 3 down · 1 day ago

      No idea, but I am not sure your family member is qualified. I would estimate that a coding LLM can code as well as a fresh CS grad. The big advantage that fresh grads have is that after you give them a piece of advice once or twice, they stop making that same mistake.

      • mad_lentil@lemmy.ca · 5 points · 20 hours ago

        Where is this coming from? I don’t think an LLM can code at the level of a recent CS grad unless it’s piloted by a CS grad.

        Maybe you’ve had much better luck than me, but coding LLMs seem largely useless without prior coding knowledge.

      • MrRazamataz@lemmy.razbot.xyz · 4 points · 21 hours ago

        What’s this based on? Have you met a fresh CS graduate and compared them to an LLM? Does it not vary person to person? Or fuck it, LLM to LLM? Calling them not qualified seems harsh when it’s based on sod all.

    • immutable@lemmy.zip · 16 points · 1 day ago

      The difficult part is that new engineers are not generally who people think of to unfuck code. Even before LLMs, junior engineers were generally the people who fucked things up.

      It’s through fucking lots of stuff up, unfucking that stuff, and learning how not to fuck things up in the first place that you go from being a junior engineer to a more senior engineer, until you land in a lofty position like staff engineer and your job is mostly to listen to how people want to fuck everything up and say, “maybe let’s try this other way that won’t fuck everything up instead.”

      Tell your family member to network, that’s the best way to get a job. There are discord servers for every programming language and most projects. Contribute to open source projects and get to know the people.

      Build things, write code, open source it on GitHub.

      Drill on LeetCode questions. They aren’t super useful, but in any interview at least part of the assessment is going to be how well they can do on those.

      There are still plenty of places hiring. AI has just made it so that most senior engineers have access to a junior-engineer-level programmer, the AI, that they can hand tasks to at all times. So anything you can do to stand out is an advantage.

    • Zron@lemmy.world · 11 points · 1 day ago

      Answer is probably the same as before AI: build a portfolio on GitHub. These days maybe try to find repos that have vibe code in them and make commits that fix the AI garbage.

      • Alaknár@sopuli.xyz · 3 points · 1 day ago

        Answer is probably the same as before AI: build a portfolio on GitHub

        You really think that using GitHub falls in the usual vibecoding toolbox? As in: would they even know where/how to look?

        • Zron@lemmy.world · 4 points · 1 day ago

          You think vibe coders don’t love the smell of their own shit enough to show it to the world?

    • UnderpantsWeevil@lemmy.world · 8 points · 1 day ago

      My path was working for a consulting firm (Accenture) for a few years, making friends with my clients, and then jumping to freelance work a few years later when I can get paid my contract rate directly rather than letting Accenture take a big chunk of it.

      • rumba@lemmy.zip · 6 points · 1 day ago

        a coding LLM can code as well as a fresh CS grad.

        For a couple of hundred lines of code, they might even be above average. When you split that into a couple of files or start branching out, they usually start to struggle.

        after you give them a piece of advice once or twice, they stop making that same mistake.

        That’s a damn good observation. Learning only happens with re-training and that’s wayyy cheaper when done in meat.

  • rozodru@lemmy.world · 17 points · 1 day ago

    God bless vibe coders, because of them I’m buying a new PC build this week AND I’ve decided to get a PS5.

    Thank you, Vibe Coders; your laziness and sheer idiocy are padding my wallet nicely.

    • kidney_stone@lemmy.world · 19 points · 1 day ago

      My boss is literally convinced we can now basically make programs that take rockets to Mars, and that it’s literally clicks away. For the life of me, it is impossible to convince him that this is, in fact, not the case. Whoever fired developers because “AI could do it” is going to regret it.

      • jkercher@programming.dev · 4 points · 23 hours ago

        Maybe try convincing him in terms he would understand. If it were really that good, it wouldn’t be public; they’d just use it internally to replace every proprietary piece of software in existence. They’d be shitting out their own browser, office suite, CAD, OS, etc. Microsoft would be screwing themselves by making ChatGPT public; they could replace all the Adobe products and drive them out of business tomorrow.

      • Phineaz@feddit.org · 5 points · 1 day ago

        I mean … the first moon landings took a very low number of clicks to make the calculations, technically speaking

      • rumba@lemmy.zip · 3 points · 1 day ago

        it is impossible to convince him that this is, in fact, not the case

        He’s probably an investor.

        The tech economy is struggling. Every company needs 20% more every year, or it’s considered a failure. The big fish have bought up every promising property on the map in search of this. It’s almost impossible to go from small to large without getting gobbled up, and the guys gobbling up already have 7 different flavors of what you’re trying to make on ice in a repo somewhere. There’s no new venture capital flowing into conventional work.

        AI has all the venture capitalists buzzing, handing over money like it’s 1999. Investors are hopping on every hype train because each one has the chance of getting gobbled up and making a good return on investment.

        These mega CEOs have moved their personal portfolios into AI funding, and their companies pushing the product will line their pockets indirectly.

        At some point, that $200/pp/m price will shoot up. They’re spending billions on datacenters, and eventually those investments will be called in for returns.

        When they hit the wall for training-based improvement, things got slippery. Current models cost exponentially more, making several calls for every request. The market’s not going to bear that without an exponential price increase, even if they’re getting good work done.

      • KazuyaDarklight@lemmy.world · 156 points · 2 days ago

        Fake in that it’s almost assuredly written and posted by someone who is actively anti-vibe coding and this is a troll on the true believers.

      • Cethin@lemmy.zip · 14 up, 1 down · 2 days ago

        I love the one guy on that thread who is defending vibe coding, and is “about to launch his first application,” and anyone who tells him how dumb he is is only doing so because they feel threatened.

        • suicidaleggroll@lemmy.world · 25 up, 3 down · 2 days ago (edited)

          Nah I’m on that guy’s side. His experience lines up with my own, namely that vibe coding is not useful for people who don’t know how to program, but it can be useful for people who do know how to program, and simply aren’t familiar with the specific syntax used in a language they’re not an expert in.

          In that case, the queries to the AI model aren’t, “write me a program that can do X”, it’s more like “write me a function in this language that can take A, B, and C as inputs, do operation Y with them, and return Z”, or “what’s the best way to find all of the unique elements in an array and sort it alphabetically in this language”. Then the programmer can take those pieces and build up a proper application with them. The AI isn’t actually writing the program for you, it’s more like a customized Stack Overflow generator, without having to wade through a decade of people arguing back and forth in the comments about inane bullshit.

          Does it save a ton of time? No, but it’s still helpful, and can get you up and running in a new language much faster than the alternative.
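A minimal sketch of that second kind of request (the function name and sample data here are hypothetical, not from the thread); the query “find all of the unique elements in an array and sort it alphabetically” boils down to something like:

```python
def unique_sorted(items):
    """Return the unique elements of items, sorted alphabetically."""
    # set() drops duplicates; sorted() returns them in lexicographic order.
    return sorted(set(items))

# A caller can then build the larger application around pieces like this.
print(unique_sorted(["pear", "apple", "pear", "banana"]))  # ['apple', 'banana', 'pear']
```

The point isn’t that this is hard to write, but that the model hands you the idiomatic one-liner in an unfamiliar language without a trip through search results.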

          • gravitas_deficiency@sh.itjust.works · 11 points · 1 day ago (edited)

            My company is doing a big push for LLM/codegen/“everyday ‘AI’”

            Sorry - threw up in my mouth a little bit there

            And pretty much the only thing I acquiesce to using is the “better autocomplete” feature. Most of the other stuff it seems to offer is essentially useless on a day-to-day basis for me.

            And moreover, it’s actively harmful to the entire practice of engineering, because management and execs see it as this magical oracle/panopticon that can magically make people more productive and churn out 10x more bullshit products that they didn’t consult with engineers on than before. It can’t and it doesn’t. But that doesn’t stop them from thinking it can.

            And then they stop hiring junior levels because “codegen can do that”. And then you have a generational gap in the entire fucking discipline of coding as an art, because the entire fucking tech industry is doing this. And we haven’t even touched on the ecological and infrastructural (as in: water and power, not “which cloud or bare metal do we put this on”) implications and how they’re being blatantly ignored and hand-waved away, or the comical license and usage violations that are perfectly fine when large companies do but you’ve been a naughty boy if you torrent a fucking movie. But I digress.

          • korazail@lemmy.myserv.one · 5 points · 1 day ago

            I really like the description of AI coding as ‘custom stack overflow generator’ because it really sells the flaws as well, to an experienced dev. We go to stack overflow for help with some weird quirk of a language or find an obscure library that solves our specific need.

            I think vibe coding is cobbling together a project from a bunch of stack overflow posts – and they only use the question part of the post.

          • Neshura@bookwyr.me · 4 points · 1 day ago

            I’m just using AI to get me the damn standard library function I want to use but can’t remember. Way faster than clicking through a couple links of a search result for it.

          • KairuByte@lemmy.dbzer0.com · 8 points · 2 days ago

            An HTML class ten years ago isn’t anything close to knowing how to program. It’s like saying “I wrote a bulleted list years ago, so I know how to write a novel.”

          • Serinus@lemmy.world · 4 points · 2 days ago

            I’m currently doing this with an angular project that’s a bit of a clusterfuck. So many layers.

            I’m still having to break it down into much, much smaller chunks and it’s not able to do much, but it is helpful. Most useful thing was that I started with writing a pure SQL query with several joins and told it “turn this into linq using existing entities”.

            I think they’ll completely replace ORMs.
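A rough sketch of that kind of translation, using Python with sqlite3 instead of C#/LINQ (the tables and data are made up for illustration): the raw SQL join and the same query re-expressed over in-memory “entities” should agree.

```python
import sqlite3

# Hypothetical stand-ins for the "existing entities" mentioned above.
users = [(1, "ana"), (2, "ben")]
orders = [(10, 1, 99.0), (11, 1, 25.0), (12, 2, 40.0)]

# The raw-SQL starting point: a join aggregating order totals per user.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL)")
con.executemany("INSERT INTO users VALUES (?, ?)", users)
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders)
sql_result = con.execute(
    "SELECT u.name, SUM(o.total) FROM users u "
    "JOIN orders o ON o.user_id = u.id GROUP BY u.name ORDER BY u.name"
).fetchall()

# The same join and aggregation expressed over plain objects, which is the
# shape of translation the comment describes asking the model for.
totals = {}
for _, user_id, total in orders:
    totals[user_id] = totals.get(user_id, 0.0) + total
entity_result = sorted((name, totals[uid]) for uid, name in users if uid in totals)

assert sql_result == entity_result
```

Having both forms and an equality check is also a cheap way to verify the model’s translation before trusting it.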

          • Cethin@lemmy.zip · 4 up, 2 down · 1 day ago

            Sure, it can be useful for people who do know how to program, though I find it usually takes more effort to get it to create what I want and make sure it works than it takes to just do it myself.

            This guy explicitly says he doesn’t know how to program though. He says he took an HTML (not a programming language, a markup language) class a decade ago. He probably doesn’t remember shit from it, not that it’d be helpful anyway, because writing HTML has nothing to do with writing a program to perform a task.

            Does it save a ton of time? No, but it’s still helpful, and can get you up and running in a new language much faster than the alternative.

            You obviously aren’t a programmer. You either know how to program or you don’t. The language is just syntax, which is trivial to learn. It doesn’t help you get running in a new language because you still need to learn the syntax to make sure it’s writing something reasonable. That time has to be spent no matter what.

            • silasmariner@programming.dev · 7 up, 1 down · 1 day ago

              you obviously aren’t a programmer

              Don’t be a dick, the example is a perfectly reasonable one, and it’s something ppl would’ve used Rosetta code or learnxiny or stack overflow for in the past.

    • MrSmith@piefed.social · 23 points · 2 days ago (edited)

      You should(n’t) watch Quin69. He’s currently “vibe-coding” a game with Claude. He has already spent $3000 in tokens, and the game was in such a shit state that a viewer had to intervene and push an update that dragged it to a “playable” state.

      The game is at the level of a “my first Godot game” that someone who’s learning could’ve made over a weekend.

      • zeropointone@lemmy.world · 12 points · 2 days ago

        I better not watch this, it would make me angry. I hate people who could have hired someone like me for the same or even less money and get a working product. But no, they always throw money at fraudsters. Because wasting resources is their very nature.

        • Danitos@reddthat.com · 8 points · 1 day ago (edited)

          I watched a bit out of curiosity and even vibe-coding aside, he is annoying as fuck. Couldn’t stand him 20 seconds.

    • NoiseColor@lemmy.world · 18 points · 2 days ago

      It’s strange, but I’ve seen lots of comments that aren’t aware this is fake. The AI-hater crowd is using it as proof; the other side is saying he’s using it wrong.

      • zeropointone@lemmy.world · 16 up, 1 down · 2 days ago

        That’s depressing. This is so obviously fake because of how entertainingly it is written and how the conclusion gets shoved in your face. No subtlety.

    • andioop@programming.dev · 8 up, 1 down · 2 days ago

      Is that what the weird extra width on some letters is, artifacts from some AI generating the post?

      • snooggums@lemmy.world · 35 points · 2 days ago

        No, the phrasing makes it clear someone wrote a fictional account of becoming self-aware that the output of vibe coding isn’t maintainable as it scales.

        • andioop@programming.dev · 12 points · 2 days ago (edited)

          I’m entirely too trusting and would like to know what about the phrasing tips you off that it’s fictional. Back on Reddit I remember so many claims about posts being fake, and I was never able to tease out what distinguished the “omg fake! r/thathappened” posts from the ones that weren’t accused of that. I feel this is a skill I should have on some level, although taking an amusing post that wasn’t real as real doesn’t always have bad consequences.

          But I mostly asked because I’m curious about the weird extra width on letters.

            • andioop@programming.dev · 2 points · 20 hours ago (edited)

              That’s a bit difficult because I already go into anything from The Onion knowing it’s intended to be humorous/satirical.

              What I lack in ability to recognize satire or outright deception from posts written online, I make up for by reading comment threads: seeing people accuse things of being fake, seeing people defend it as true, seeing people point out the entire intention of a website is satire, seeing people who had a joke go over their heads get it explained… relying on the collective hivemind to help me out where I am deficient. It’s not a perfect solution at all, especially since people can judge wrong—I bet some “omg so fake” threads were actually real, and some astroturf-type things written to influence others without real experience behind it got through as real.

          • AmidFuror@fedia.io · 35 points · 2 days ago

            When something is too “on the nose,” for example, it’s written in exactly the way that would induce the most cheering and virality because it appeals so much to one group of people, it’s worth considering it may have been written to provoke exactly that reaction.

            • andioop@programming.dev · 10 points · 2 days ago (edited)

              Thanks!

              I really wish people did not do this. This isn’t something I was ever taught to look for, and I like to think I got a good education. I was taught to make sure my source is credible, to consider biases and spin and what things are facts and what is just opinion, but I wasn’t taught to look for a lot of deception people call out online. But I guess I have to live with this and gain the skill to look for deception. Genuinely, thanks for helping me, since I don’t think I ever would have figured out what raises “fake” flags in most peoples’ heads on my own.

              • snooggums@lemmy.world
                link
                fedilink
                English
                arrow-up
                14
                ·
                2 days ago

AmidFuror’s description is on point and I see it as a variant of Poe’s Law. Instead of sarcasm being mistaken for a real belief, it is presenting a fictional account of someone being self-aware that is mistaken for someone actually becoming self-aware.

There are two lines that make me absolutely certain it is written by someone who is not a vibe coder and is leaning into the sarcasm.

                • ‘pulling out my wallet for someone that knows what they are doing’ implies the poster knows they don’t know what they are doing
                • ‘vibe coding is just roleplaying for guys who want to feel like hackers’ is a joke I’ve seen directed at vibe coders more than once

                Keep in mind that not all deception is malicious, but most people see the word deception as having a negative implication. An actor/actress pretending to be someone else is technically deceptive the same way as whoever wrote this hilarious post. They are presenting a fictional account for an audience.

                • andioop@programming.dev
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  2 days ago

                  You are right about the thespian thing, but when you watch TV/film/theatre everyone is in on the “joke” and we all know they’re not really falling in love, getting murdered, or whatever dramatic happening. I’m not sure if OOP is just trying to entertain and expects everyone to realize they’re joking, which would stick them on the thespian side, or if they have other motives. But hey, interesting point to bring up!

          • Windex007@lemmy.world
            link
            fedilink
            arrow-up
            9
            ·
            2 days ago

            r/thatHappened was the worst thing to happen to Reddit and I sincerely hate whoever created that sub

      • zeropointone@lemmy.world
        link
        fedilink
        arrow-up
        17
        ·
        2 days ago

        No, the text itself. No vibe coder would write something like that. The artifacts you mentioned are the result of simple horizontal and vertical upscaling. If you zoom in you can see it better.

  • orca@orcas.enjoying.yachts
    link
    fedilink
    arrow-up
    151
    arrow-down
    1
    ·
    edit-2
    2 days ago

    I don’t really care about vibe coders but as a dev with just under 2 decades in the field:

    1. Your vibe coding shit will not go to prod until humans fully review it
    2. You better review it yourself first before offloading that massive mental drain to someone else (which means you still need to have some semblance of programming skills). Don’t open a PR with 250 files in it and then tell someone else to validate it.
    3. Use more context. Don’t give it vague ass prompts.
    4. Don’t use auto-accept. That’s just lazy asshole shit.

    I can’t stress this enough: if you give me a PR with tons of new files and expect me to review it when you didn’t even review it yourself, I will 100% reject it and make you do it. If it’s all dumped into a single commit, I will whip your computer into the nearest body of water and tell you to go fish it out.

    I don’t care what AI tool wrote your code. You’re still responsible for it and I will blame you.

    • i_stole_ur_taco@lemmy.ca
      link
      fedilink
      arrow-up
      82
      ·
      2 days ago

      When I see a sloppy PR I remind people “AI didn’t write that. You wrote it. Your name is on the git blame.”

    • MajorHavoc@programming.dev
      link
      fedilink
      arrow-up
      11
      ·
      2 days ago

      If it’s all dumped into a single commit, I will whip your computer into the nearest body of water and tell you to go fish it out.

      I’m going to steal this for an update to an internal guidance document for my dev team. Thank you.

      • orca@orcas.enjoying.yachts
        link
        fedilink
        arrow-up
        6
        ·
        2 days ago

        Lmao glad I could help! I hate those big commits. They’re so much harder to traverse and know what’s going on. Developer experience has been big on my mind lately. Working 5 days a week is already hard, but there are moments when we can make tiny bits easier for each other.

    • Adalast@lemmy.world
      link
      fedilink
      arrow-up
      9
      ·
      2 days ago

      I have never used an AI to code and don’t care about being able to do it to the point that I have disabled the buttons that Microsoft crammed into VS Code.

      That said, I do think a better use of AI might be to prepare PRs in logical and reasonable sizes for submission that have coherent contextualization and scope. That way when some dingbat vibe codes their way into a circle jerk that simultaneously crashes from dual memory access and doxxes the entire user base, finding issues is easier to spread out and easier to educate them on why vibe coding is boneheaded.

I developed for the VFX industry and I see the whole vibe coding thing as akin to storyboards or previs. Those are fast and (often) sloppy representations of the final production which can be used to quickly communicate a concept without massive investment. I see the similarities in this: a vibe code job is sloppy, sometimes incomprehensible, but the finished product could give someone who knows what the fuck they’re doing a springboard to write it correctly. So do what the film industry does: keep your previs guys in the basement, feed them occasionally, and tell them to go home when the real work starts. (No shade to previs/SB artists, it is a real craft and vital for the film industry as a whole. I am being flippant about you for comedic effect. Love you guys.)

      • merc@sh.itjust.works
        link
        fedilink
        arrow-up
        3
        ·
        1 day ago

        I think storyboards is a great example of how it could be used properly.

        Storyboards are a great way for someone to communicate “this is how I want it to look” in a rough way. But, a storyboard will never show up in the final movie (except maybe fun clips during the credits or something). It’s something that helps you on your way, but along the way 100% of it is replaced.

        Similarly, the way I think of generative AI is that it’s basically a really good props department.

        In the past, if a props / graphics / FX department had to generate some text on a computer screen that looked like someone was Hacking the Planet they’d need to come up with something that looked completely realistic. But, it would either be something hand-crafted, or they’d just go grab some open-source file and spew it out on the screen. What generative AI does is that it digests vast amounts of data to be able to come up with something that looks realistic for the prompt it was given. For something like a hacking scene, an LLM can probably generate something that’s actually much better than what the humans would make given the time and effort required. A hacking scene that a computer security professional would think is realistic is normally way beyond the required scope. But, an LLM can probably do one that is actually plausible for a computer security professional because of what that LLM has been trained on. But, it’s still a prop. If there are any IP addresses or email addresses in the LLM-generated output they may or may not work. And, for a movie prop, it might actually be worse if they do work.

        When you’re asking an AI something like “What does a selection sort algorithm look like in Rust?”, what you’re really doing is asking “What does a realistic answer to that question look like?” You’re basically asking for a prop.

        Now, some props can be extremely realistic looking. Think of the cockpit of an airplane in a serious aviation drama. The props people will probably either build a very realistic cockpit, or maybe even buy one from a junkyard and fix it up. The prop will be realistic enough that even a pilot will look at it and say that it’s correctly laid out and accurate. Similarly, if you ask an LLM to produce code for you, sometimes it will give you something that is realistic enough that it actually works.

        Having said that, fundamentally, there’s a difference between “What is the answer to this question?” and “What would a realistic answer to this question look like?” And that’s the fundamental flaw of LLMs. Answering a question requires understanding the question. Simulating an answer just requires pattern matching.
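To make the “prop” point concrete, here’s roughly what an answer to that selection sort question looks like in Rust. This is just my own quick sketch for illustration, not anything an LLM produced or a vetted reference implementation:

```rust
// Selection sort: repeatedly find the smallest remaining element
// and swap it to the front of the unsorted region.
fn selection_sort(v: &mut [i32]) {
    for i in 0..v.len() {
        let mut min = i;
        for j in (i + 1)..v.len() {
            if v[j] < v[min] {
                min = j;
            }
        }
        v.swap(i, min);
    }
}

fn main() {
    let mut v = [5, 2, 9, 1, 7];
    selection_sort(&mut v);
    println!("{:?}", v); // prints [1, 2, 5, 7, 9]
}
```

And that’s exactly the trap: an LLM’s version of this will usually look just as plausible, and whether it actually compiles and sorts correctly is the part you still have to verify yourself.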

        • Adalast@lemmy.world
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          14 hours ago

See, I agree with everything up to the end. There you are getting into the philosophy of cognition. How do humans answer a question? I would argue, for many, the answer for most topics would be “I am repeating what I was taught/learned/read.” An argument could be made that your description of responding with “What would a realistic answer to this question look like?” is fundamentally symmetric with “This is what I was taught.” Both are regurgitating information fed to them by someone who presumably (hopefully) actually had a firm understanding of the material themselves. As an example: we are all taught that 2+2=4, but most people are not taught WHY 2+2=4. Even fewer are taught that 2+2=11 in base 3 or how to convert bases at all. So do people “know” that 2+2=4 or are they just repeating the answer that they were told was correct?

          I am not saying that LLMs understand or know anything, I am saying that most humans don’t either for most topics.

          • merc@sh.itjust.works
            link
            fedilink
            arrow-up
            2
            ·
            13 hours ago

How do humans answer a question? I would argue, for many, the answer for most topics would be “I am repeating what I was taught/learned/read.”

Even children aren’t expected to just repeat verbatim what they were taught. When kids are being taught verbs, they’re shown the pattern: “I run, you run, he runs; I eat, you eat, he eats.” They’re told that there’s a pattern, and it’s that the “he/she/they” version has an “s” at the end. They now understand some of how verbs work in English, and can try to apply that pattern. But, even when it’s spotting a pattern and applying the right rule, there’s still an element of understanding involved. You have to recognize that this is a “verb” situation, and you should apply that bit about “add an ‘s’ if it’s he/she/it/they”.

            An LLM, by contrast, never learns any rules. Instead it ingests every single verb that has ever been recorded in English, and builds up a probability table for what comes next.

            but most people are not taught WHY 2+2=4

            Everybody is taught why 2+2=4. They normally use apples. They say if I have 2 apples and John has 2 apples, how many apples are there in total? It’s not simply memorizing that when you see the token “2” followed by “+” then “2” then “=” that the next likely token is a “4”.

            If you watch little kids doing that kind of math, they do understand what’s happening because they’re often counting on their fingers. That signals that there’s a level of understanding that’s different from simply pattern matching.

            Sure, there’s a lot of pattern matching in the way human brains work too. But, fundamentally there’s also at least some amount of “understanding”. One example where humans do pattern matching is idioms. A lot of people just repeat the idiom without understanding what it really means. But, they do it in order to convey a message. They don’t do it just because it sounds like it’s the most likely thing that will be said next in the current conversation.

            • Adalast@lemmy.world
              link
              fedilink
              arrow-up
              1
              ·
              10 hours ago

              I wasn’t attempting to attack what you said, merely pointing out that once you cross the line into philosophy things get really murky really fast.

You assert that LLMs aren’t taught the rules, but every word is not just a word. The tokenization process includes part of speech tagging, predicate tagging, etc. The ‘rules’ that you are talking about are actually encapsulated in the tokenization process. The tokenization process for LLMs, at least as of a few years ago when I read a textbook on building LLMs, is predicated on the rules of the language. Parts of speech, syntax information, word commonality, etc. are all major parts of the ingestion process before training is done. They may not have had a teacher giving them the ‘rules’, but that does not mean it was not included in the training.

And circling back to the philosophical question of what it means to “learn” or “know” something, you actually exhibited what I was talking about in your response on the math question. Putting two piles of apples on a table and counting them to find the total is a naïve application of the principles of addition to a situation, but it is not describing why addition operates the way it does. That answer does not get discussed until Number Theory in upper division math courses in college. If you have never taken that course or studied Number Theory independently, you do not know ‘why’ adding two numbers together gives you the total, you know ‘that’ adding two numbers together gives you the total, and that is enough for your life.

Learning, and by extension knowledge, have many forms and processes that certainly do not look the same by comparison. Learning as a child is unrecognizable when compared directly to learning as an adult, especially in our society. Non-sapient animals all learn and have knowledge, but the processes for it are unintelligible to most people, save those who study animal intelligence. So to say the LLM does or does not “know” anything is to assert that their “knowing” or “learning” will be recognizable and intelligible to the lay man. Yes, I know that it is based on statistical mechanics, I studied those in my BS for Applied Mathematics. I know it is selecting the most likely word to follow what has been generated. The thing is, I recognize that I am doing exactly the same process right now, typing this message. I am deciding what sequence of words and tones of language will be approachable and relatable while still conveying the argument I wish to levy. Did I fail? Most certainly. I’m a pedantic neurodivergent piece of shit having a spirited discussion online, I am bound to fail because I know nothing about my audience aside from the prompt you gave me to respond to. So I pose the question: when behaviors are symmetric, and outcomes are similar, how can an attribute be applied to one but not the other?

      • orca@orcas.enjoying.yachts
        link
        fedilink
        arrow-up
        5
        ·
        edit-2
        2 days ago

        I think this is great. I like hearing about your experience in the VFX industry since it’s unfamiliar to me as a web dev. The storyboard comparison is spot on. I like that people can drum up a “what if” at such a fast pace, but vibe coders need to be aware that it’s not a final product. You can spin it up, gauge what works and what doesn’t, and now you have feasibility with low overhead. There’s real value to that.

        Edit: forgot to touch on your PR comment.

        At work, we have an optional GitHub workflow that lets you call Claude in a PR and it will do its own assessment based on the instructions file we wrote for it. We stress that it’s not a final say and will make mistakes, but it’s been good in a pinch. I think if it misses 5 things but uncovers 1 bug, that’s still a win. I’ve definitely had “a-ha” moments with it where my dumb brain failed to properly handle a condition or something. Our company is good about using it responsibly and supplying as much context as we possibly can.

      • DesertCreosote@piefed.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 days ago

        I like your previs analogy, because that’s how I’ve been thinking of it in my head without really knowing how to communicate it. It’s not very good at making a finished project, but it can be useful to demonstrate a direction to go in.

        And actually, the one time I’ve felt I was able to use AI successfully was literally using it for previs; I had a specific idea of design I wanted for a logo, but didn’t know how to communicate it. So I created about a hundred AI iterations that eventually got close to what I wanted, handed that to my wife who is an actual artist, told her that was roughly what I was thinking about, and then she took the direction it was going in and made it an actual proper finished design. It saved us probably 15-20 iterations of going back and forth, and kept her from getting progressively more annoyed with me for saying “well… can you make it like that, but more so?”