Many Gen Z employees say ChatGPT is giving better career advice than their bosses: Nearly half of Gen Z workers say they get better job advice from ChatGPT than from their managers, according to a recent survey.

  • 1984@lemmy.today · ↑27 ↓1 · 8 months ago

    I have never, ever asked my boss, or ChatGPT, for career advice. :)

  • Contramuffin@lemmy.world · ↑25 ↓2 · 8 months ago

    Asking ChatGPT for advice about anything is generally a bad idea, even though it might feel like a good idea at the time. ChatGPT responds with what it thinks you want to hear, just phrased in a way that sounds like actual advice. And especially since ChatGPT only knows as much information as you are willing to tell it, its input data is often biased. It’s like an r/relationshipadvice or r/AITA thread, but on steroids.

    You think it’s good advice because it’s what you wanted to do to begin with, and it’s phrased in a way that makes your decision seem like the wise choice. Really, though, sometimes you just need to hear the ugly truth that you’re making a bad choice, and that’s not something that ChatGPT is able to do.

    Anyway, I’m not saying that bosses are good at giving advice, but I think ChatGPT is definitely not better at it than bosses are.

    • GBU_28@lemm.ee · ↑13 · 8 months ago

      I’m not touting the merits of “prompt engineering”, but this is a classic case.

      Don’t ask “How can I be a more attractive employee?” Ask “I am a manager at a company. Describe the features and actions of a better candidate/employee.”

      You will get very different answers.
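The reframing above can be sketched as two prompt builders. This is a hypothetical illustration (the function names and message format are my own, following the common chat-completion message shape), not code from the thread:

```python
# Sketch: the same underlying question framed two ways. A first-person
# plea and a manager's-eye-view question pull from different slices of
# the model's training data. Function names are made up for illustration.

def naive_prompt(question: str) -> list[dict]:
    """Ask directly, in the first person."""
    return [{"role": "user", "content": question}]

def role_framed_prompt(topic: str) -> list[dict]:
    """Reframe the question from a hiring manager's perspective."""
    return [{
        "role": "user",
        "content": (f"I am a manager at a company. Describe the features "
                    f"and actions of a better {topic}."),
    }]

direct = naive_prompt("How can I be a more attractive employee?")
framed = role_framed_prompt("candidate/employee")
```

Both message lists could then be sent to whichever chat API you use; the point is only that the phrasing, not the underlying question, changes what comes back.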

    • realharo@lemm.ee · ↑7 · 8 months ago

      For these kinds of generic questions, ChatGPT is great at giving you the common fluff you’d find in a random “10 ways to improve your career” YouTube video.

      Which may still be useful advice, but you can probably already guess what it’s going to say before hitting enter.

      • kautau@lemmy.world · ↑2 · 8 months ago

        Yeah, for questions like that, take the top 10 results on Google, throw them into a blender, and that will be ChatGPT’s answer.

    • marcos@lemmy.world · ↑8 ↓1 · 8 months ago

      Well, yes, but let’s get real here… Asking your boss for career advice is very often worse.

      You’re better off with useless random information collected from the internet than with advice finely tailored against you.

      • SoleInvictus@lemmy.world · ↑5 · 8 months ago

        Don’t forget well-meaning advice from someone incompetent who failed upwards but still lacks the self-awareness to see it. I’ve had a few of those.

    • fidodo@lemmy.world · ↑5 · edited · 8 months ago

      It’s great for brainstorming and getting started on a problem, but you need to keep all that in mind the whole time and verify its output. I’ve found it really good for troubleshooting. It’s wrong a lot of the time, but it does lead you in the right direction, which is very helpful for problems where it’s hard to know where to even start.

    • daddy32@lemmy.world · ↑3 ↓1 · 8 months ago

      Nonsense. Not “about anything”. ChatGPT gives correct advice in many fields, some of which are directly verifiable - for example programming.

      • calcopiritus@lemmy.world · ↑6 · 8 months ago

        Because of the way you phrase it.

        You only tell ChatGPT your side of the story, and ChatGPT is just a word predictor. If you offer it two options, and you describe one of them in words that are on average 20.69% more positive than the other, ChatGPT just fills in the blanks, sees that that option reads as more positive, and will probably recommend it.

        ChatGPT has no intelligence or reasoning; it’s just a word predictor. It doesn’t use logic. It won’t do an analysis of the impact of each alternative; it just takes some input and is asked to predict what the next word will be.
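The “word predictor” idea can be shown with a deliberately tiny toy: a bigram model that predicts the next word purely from counts of which word followed which in its training text. A real LLM is incomparably larger and predicts tokens with a neural network, but the training objective is the same shape (this sketch and its miniature corpus are mine, for illustration only):

```python
from collections import Counter, defaultdict

# Toy "word predictor": for each word, count which words followed it in
# the training text, then always predict the most frequent follower.
corpus = ("that option sounds great . that option is clearly better . "
          "the other option has problems .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("that"))  # -> "option"
```

Note there is no notion of analysis or logic anywhere in this loop; the model only reflects the statistics of the text it was fed, which is the commenter’s point about positively-worded options.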

        • Euphoma@lemmy.ml · ↑3 · 8 months ago

          Yeah, I noticed this when I started having ChatGPT write more sentences in essays I was working on. When you have ChatGPT write the next sentence in a paragraph, 9/10 times it just rewrites what you wrote in a different way.

      • barsoap@lemm.ee · ↑4 · 8 months ago

        Ask like an engineer, it will answer like an engineer. Ask like a moron, it will answer like a moron – all that is inherent in the training data, in the question/answer pairs the thing was trained on. Ask it to impersonate a Vulcan and it will get better at maths: my armchair analysis is that Vulcans talk quite formally, so you’re getting more from the engineer training set and less from the moron one.

        • fidodo@lemmy.world · ↑3 · 8 months ago

          I actually saw an article about researchers who found it answers better if you ask it to answer as if it were in Star Trek.

          • barsoap@lemm.ee · ↑1 · 8 months ago

            Which definitely isn’t because Star Trek technobabble makes sense, is what I’m saying; the language just mirrors what you see on an engineering forum, so the increased accuracy smears over.

            Somewhat relatedly, if you want to talk about real-world warp engines (there are some physicists with ideas, or maybe better put, speculations), it’s probably going to start talking in Star Trek technobabble. Less “turn it off and on again”, more “reinitialise the primary power coupling”.

    • PriorityMotif@lemmy.world · ↑2 ↓7 · 8 months ago

      Man, shut the fuck up. I bet you say Wikipedia and Google aren’t reliable either. Just use some damn sense.

      • tigeruppercut@lemmy.zip · ↑4 · 8 months ago

        I mean, Google search kinda famously sucks these days because it’s been SEO’d and ad-promoted to death.

        • PriorityMotif@lemmy.world · ↑1 ↓1 · 8 months ago

          Right, but if you have enough knowledge to search for what you’re looking for, then you should have enough sense to know whether a site is bullshit or not. People trust sites like Stack Overflow all the time.

  • Billegh@lemmy.world · ↑17 · 8 months ago

    Unsurprising. Managers have their own goals in mind and how you fit into them. They don’t care so much about where you end up as about what you can help them with.

    ChatGPT just wants to be loved.

  • FluffyPotato@lemm.ee · ↑5 · 8 months ago

    Well, yeah, if you ask some giant tobacco company and ChatGPT whether you should take up smoking, you can guess who gives the better answer. ChatGPT makes up a lot of shit, but it’s not as self-interested as the vast majority of bosses.

    • stoly@lemmy.world · ↑1 · 8 months ago

      Yep. Weekly 1:1 meetings for the first year at least. Go to every two weeks after that if preferred.

    • kromem@lemmy.world · ↑0 · 8 months ago

      On the other hand, one of mine thinks she deserves a promotion because her electricity bill was so much higher in winter than she was expecting.

      Maybe what she needs help with from ChatGPT is translating her actual request to language you’d better understand.

      “Hi person who has undue influence over my well-being, it turns out that the increased amount of work I’ve been doing at home has led to an unexpected increase in incurred personal costs. Ideally, these should be offset by work. Given I am skeptical you’d authorize any kind of reimbursement or a relative pay raise to cover these costs for their own sake, I’m instead coming to you to suggest a promotion prompted by my sudden increased incurred costs which are in part on your behalf. This is also justified based on my work history and the ways in which my pay hasn’t kept track with market rates for comparable labor. I would encourage you to consider the transactional costs of finding a replacement at current market rates and factor those into the value you put on my retention as you consider this request, as without being able to pay my bills I may be forced to seek other employment which you will only know about if I succeed at max two weeks out from my disappearance.”

  • Rikudou_Sage@lemmings.world · ↑3 · 8 months ago

    Well, I don’t have any experience asking it for career advice, but I have worked with it quite a bit, and it’s quite shitty once you get to anything resembling complexity. This is definitely not a tool I’d go to for any advice beyond the simplest.

    • TempermentalAnomaly@lemmy.world · ↑1 · 8 months ago

      I actually wonder if that’s a benefit for young people just starting out on their career journey. It’s mostly about feelings and a general sense of direction, not specific opportunities to advance a career. In a lot of ways, a well-established manager who’s from another generation is not in tune with those feelings or with the difficulty of navigating them in a complex corporate environment.

    • glowie@h4x0r.host · ↑1 ↓2 · 8 months ago

      Surprisingly, I’ve had the opposite experience: it has increased my productivity tenfold and has helped with code review and confirming various logic, etc. Although I wouldn’t necessarily take what it tells me as gospel from a recommendation standpoint in terms of my career as a whole. I’ve definitely caught it being wrong numerous times, but the inaccuracies pale in comparison to what it gets right, imo.

      • Rikudou_Sage@lemmings.world · ↑0 ↓1 · 8 months ago

        Don’t get me wrong, it saved me a ton of time. Just recently I needed some coding help that would probably take me hours of searching. Doesn’t mean I’d trust it with advice, that’s something entirely different than spitting out code that works half of the time.

          • Passerby6497@lemmy.world · ↑2 · 8 months ago

            That’s my biggest issue with AI. I’ve tried using it to help me code, and it’s wrong way more often than not. It’s great for doing find/replace, or guessing what I want a function to do and giving me a skeleton that I can change to do what I want. But anytime I try to do something a bit advanced, it chokes.

            Like this week I needed help with a regex match pattern, and it straight up gave me wrong code multiple times in a row. And not even multiple wrong answers, the same goddamned wrong answer 3 or 4 times in a row.
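The comment doesn’t say what the actual pattern was, so as a purely hypothetical stand-in, here is the kind of match pattern in question, verified by hand against known inputs rather than taken on trust from a model:

```python
import re

# Hypothetical example pattern (not the one from the comment): extract
# semantic-version-like strings, with an optional leading "v", from text.
version_re = re.compile(r"\bv?(\d+)\.(\d+)\.(\d+)\b")

line = "upgraded service from v1.4.12 to 2.0.0"
matches = [m.group(0) for m in version_re.finditer(line)]
print(matches)  # -> ['v1.4.12', '2.0.0']
```

Checking generated regexes against a handful of concrete strings like this is cheap, and it is exactly how you catch the “same wrong answer 3 or 4 times in a row” failure mode.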

          • 7fb2adfb45bafcc01c80@lemmy.world · ↑0 ↓1 · 8 months ago

            Same here. The most I get out of it might be a pointer to a module that could be a better approach, but the code I get from ChatGPT is usually worthless.

            I treat it as my water cooler talk, and maybe I’ll come away with a few new ideas.

    • kromem@lemmy.world · ↑0 ↓1 · 8 months ago

      There’s something to be said for the abilities of a tool reflecting its wielder.

      In research circles, the most advanced prompting pipelines have a 90% success rate on tasks the same model only gets right around 30% of the time with naive zero-shot prompting.

      At a minimum, people should be familiar with chain-of-thought prompting when using these models. It’s very easy to incorporate and makes a huge difference on complex problems.
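Chain-of-thought prompting really is as easy to incorporate as the comment suggests; in its simplest form it is just an instruction to reason step by step before answering. A minimal sketch (the function name and example question are mine, and the message format follows the common chat-completion shape rather than any specific vendor’s SDK):

```python
# Minimal chain-of-thought prompt builder: no API call is shown, only
# the messages you would send to a chat-style model.
def with_chain_of_thought(question: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "Reason step by step before giving a final answer."},
        {"role": "user",
         "content": f"{question}\n\nLet's think step by step."},
    ]

messages = with_chain_of_thought(
    "If a project needs 120 hours and two people each work 15 hours "
    "per week, how many weeks does it take?")
```

More elaborate variants (self-consistency, the SELF-DISCOVER framework quoted below) build on this same idea of making the model externalize intermediate reasoning.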

      Though for anyone actually building serious pipelines for these products, the best technique I’ve seen to date was this one from DeepMind:

      We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.

      So yes, maybe you aren’t getting a lot out of the models. But a lot of people are, and the difference between your experiences and theirs may just boil down to experience in using the tool. If I just started using Photoshop for an hour or two I might complain about how the software sucks at making good looking images. But we both know it wouldn’t be the software’s fault.

      • Rikudou_Sage@lemmings.world · ↑1 · 8 months ago

        Well, one more comment like that and I guess I’m gonna have to edit my original comment, because I don’t want to keep explaining. I’m getting quite a lot out of LLMs (GPT-4, to be specific); it’s just that they’re very stupid. When they don’t straight up lie, they don’t know stuff. It’s quite simple, really: I usually deal with very complex problems that few people have dealt with, the AI has (close to) no data on them, so it runs in circles and isn’t able to help.

        But when presented with questions it has training data on, it’s brilliant. Recently I needed to use reflection to get all types implementing an interface in .NET, with the caveat that the interface is generic. GPT-4 solved that problem by the third message in the conversation, while I’m pretty sure it would have taken me hours, because I’d need to learn a lot of .NET’s internal workings before arriving at the quite simple solution.

        So, good career advice: which one do you feel it is? A simple question with a single correct solution, or a complex and nuanced issue where there isn’t one general truth? Because the only correct answer to a request for career advice by someone who doesn’t know your situation extensively is (a version of) “I don’t know; what’s your situation in detail?”. Knowing GPT, it didn’t ask that question.

        So yes, LLMs are great! Just learn which use cases they excel at, and don’t ask them for complex advice.

  • shani66@ani.social · ↑2 ↓1 · 8 months ago

    I don’t see how that’s news; management is incompetent by default. You could ask a frog and get better answers than you would from most management teams.

    • stoly@lemmy.world · ↑2 · 8 months ago

      This really depends on whether the manager has a degree in business or in something useful.