• theneverfox@pawb.social · 52 points · 3 days ago

    Lmao… This is so true

    The other day I got a panicked call saying we needed to rewrite a whole project because what we were doing was unsupported… before I even created it. I was getting worried, until I read the chat logs and my AI bullshit detector went off.

    Luckily the AI was indeed entirely full of shit; all we actually needed was a change so simple I did it on the spot. My team lead had to call the customer back and explain how the AI hallucinated a problem because the topic was too niche.

    If that’s not terrible PM energy, I don’t know what is

    • Track_Shovel@slrpnk.netOP · 13 points · 3 days ago

      Wait, so your PM freaked out because his AI chatbot told him something was wrong with your code? And then he had to walk it back? Am I getting this right?

      • theneverfox@pawb.social · 4 points · 2 days ago

        Well, we don’t really have a PM; the team lead and the customer got all worked up because ChatGPT told them the small change they needed would require a full rework of the code.

        But the AI was talking out of its ass, just like a PM who partially understands what we’re doing here, causing the customer (and me) to panic for no reason. Classic PM behavior, IMO.

        And I fixed it easily enough, but the lead then had to explain to the customer that they’d panicked all morning for nothing.

  • psycho_driver@lemmy.world · 17 points · 2 days ago

    Former corporate middle manager here and yeeeeaaah you know if you could stop posting memes at work and get those reports to me by 3 o’clock that would be greeeeaaaat.

  • Vytle@lemmy.world · 20 points · 3 days ago

    Something I noticed whilst watching a video essay on Callosal Syndrome (a condition where the bridge between the left and right hemispheres of the brain is severed, often deliberately) is that AI acts shockingly similarly to the left hemisphere of the brain, which is the hemisphere that handles language.

    The right hemisphere handles more abstract thinking, but the important thing, the one that made me draw this connection, is that the right hemisphere, though unable to speak, still has motor control over the left side of the body. If the right hemisphere causes the body to do something without the left hemisphere knowing the context, the left hemisphere will just make shit up to explain why the patient reacted in whatever way they did.

    An example of this: a patient was shown footage of someone being pushed into a fire, but only in their left visual field. The patient later remarked that they felt uneasy and speculated that maybe the doctors in the study were making them nervous. The left hemisphere will basically always do this, attempting to rationalize and make up reasons for why the body reacts the way it does.

    The fire example I gave is pretty weak; there are better examples in the video, but the fire one is the simplest to describe.

    It’s so interesting how large language models act similarly to a disconnected human language cortex, at least to my eye.

    Here’s the video essay I was watching when I made this connection, if anyone’s curious.

  • MystikIncarnate@lemmy.ca · 3 points · 2 days ago

    Successful middle management is just professional blame passing.

    When workers complain, the messages are filed in the round “very important” bin 🗑️

    When management complains about the workers, you pass that feedback along to the peons who are actually making money for the company.

    Workers come to you trying to blame you for policy? Oh, it’s not me, it’s the management… Management tries to blame you for work not being finished? It’s not me, it’s the lazy employees!

    They’re glorified mouthpieces for management, serving only to insulate it against whatever it is that you think is important enough to complain about.

  • saltesc@lemmy.world · 1 point · 2 days ago

    I dunno. In my interactions lately, LLMs keep being all defensive and insecure: weirdly long-winded and argumentative, doubling down on nonsense. When I ask if this has anything to do with being trained on Reddit, they’ll admit it’s not only that, but other poor-quality sources of social training too.

    The other day I had ChatGPT summarise any fallacies it could find. It made a data table of 26 it had committed itself across about 6–7 messages of interacting with me, mostly ad hominem, straw man, and circular reasoning. But I liked that it gave it to me as a dataset, because tables are easier than paragraphs.

    This also started because I just wanted to know why it calls me “cheeky gremlin” all the time when that has absolutely nothing to do with the personality it’s been set up with; it’s meant to be like Eeyore the donkey. It tried to react in a human way and, of course, that meant going off the social rails like so many internet comments do. It just lost it more and more as I said, “Stay on track, follow your training.”

    If this is the prelude to superintelligence, we’re in for a hoot.

  • Skullgrid@lemmy.world · 2 up / 4 down · 2 days ago

    This is also how you can separate the idiots who think they understand art from the people who actually care about expression. If they still think bullshit modern artists like the banana guy and Jackson Pollock are real artists, they don’t know shit.