• HugeNerd@lemmy.ca · ↑2 · 33 minutes ago

    Old people’s opinions are often the result of failing biological hardware, or wetware if you prefer. He should focus on anti-aging instead of forcing people to use comical tools, the kind used to make farting fat men, for anything serious or important.

    Maybe he should fly on an AI-piloted airplane until it lands right.

  • Tigeroovy@lemmy.ca · ↑16 · 8 hours ago

    Honestly at this point I could kill these rich old fucks with my bare hands and feel nothing.

  • LiveLM@lemmy.zip · ↑13 · edited · 8 hours ago

    urged workers to keep relying on AI tools even when they fall short. If AI does not yet work for a particular task, he said, employees should “use it until it does”

    My honest reaction:

  • titanicx@lemmy.zip · ↑20 ↓1 · 14 hours ago

    Typical CEOs who have no idea what the fuck AI actually is and just want you to use AI.

    • Nalivai@lemmy.world · ↑19 ↓1 · 9 hours ago

      Jensen is actually pretty savvy; he knows exactly what it is and what its purpose is.
      It’s whatever runs on Nvidia chips, and its purpose is to sell Nvidia chips.

      • chonglibloodsport@lemmy.world · ↑4 · 5 hours ago

        Yeah. It’s just the current bandwagon he jumped on. Crypto was the previous one and gaming was the one before that.

        Anything to sell more chips.

        • Valmond@lemmy.world · ↑1 · 1 hour ago

          Seen like that, it did help destroy the video game market. But with gamers cheering it on.

          Diablo 2, C&C, Red Alert, Commandos, point-and-click games, … the best games are not about 3D graphics, fight me!

            • Valmond@lemmy.world · ↑1 · 49 minutes ago

              A shame there are no AAA games anymore though, in the original sense of the term. I bet it would work too, but no, microtransactions and dark patterns it is.

  • pulsewidth@lemmy.world · ↑40 ↓2 · 20 hours ago

    Billionaire… For now.

    If Nvidia tanks he’ll be a measly hundred-millionaire.

    P.S. Large margin of error; I have not deeply analysed Jensen Huang’s stock holdings for a shitpost.
    • Blackmist@feddit.uk · ↑9 ↓1 · edited · 14 hours ago

      That’s a pretty low net worth considering Nvidia’s market cap.

      I assumed he could afford to start Jurassic Park for real by now, purely to skin a T-rex for his jacket collection.

  • Sundray@lemmus.org · ↑73 · 1 day ago

    If AI does not yet work for a particular task, he said, employees should “use it until it does”

    Uh, using the AI doesn’t train the AI, bud.

    • Jhex@lemmy.world · ↑5 · 8 hours ago

      I think the lesson Jensen is pushing here is “use it until you learn to stop complaining about it”

    • llama@lemmy.zip · ↑8 · 14 hours ago

      Well, that logic perfectly explains why some people keep making AI-generated content even when others say they don’t want it: keep pushing slop until we do!

    • WallsToTheBalls@lemmynsfw.com · ↑21 · 20 hours ago

      Genuinely, this is the driving misconception people have about AIs right now: that somehow everybody using them is making them smarter, when really it’s leading to model collapse.

      • moonshadow@slrpnk.net · ↑2 · 17 hours ago

        Can you help correct this for me? Don’t you feed them valuable training data and exposure to real world problems in the process of using them?

        • freddydunningkruger@lemmy.world · ↑9 ↓1 · 16 hours ago

          No. AI models are pre-trained; they do not learn on the fly. What you are describing is closer to artificial general intelligence, which is what they are hoping to discover. The problem is that they don’t fully understand how training works: while engineers understand the overall architecture, the specific “reasoning” or decision-making pathways within the model are too complex to fully interpret, leading to a gap between how it works and why it makes a particular decision.

          • moonshadow@slrpnk.net · ↑9 · 16 hours ago

            My assumption wasn’t that they learned on the fly; it was that they were trained on previous interactions. E.g. the team developing them would use data collected from interactions with model v3 to train model v4. Seems like juicy, relevant data they wouldn’t even have to go steal and sort.

            • selfAwareCoder@programming.dev · ↑4 · 13 hours ago

              That’s true to an extent, but the interactions are only useful for training if you can mark them as good/bad etc. (which is why apps will sometimes ask you whether a response was useful). But the ‘best’ training data, like professional programming, is usually sold at a premium tier with a promise not to use your data for training (since corporations don’t want their secrets getting out).
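              (A rough illustrative sketch of that feedback-filtering idea: only interactions a user explicitly marked as helpful are kept as fine-tuning candidates. The field names and the "up"/"down" labels are assumptions made up for illustration, not any provider’s actual pipeline.)

```python
# Toy filter: keep only logged interactions a user explicitly marked as
# helpful, so they could later be reviewed as fine-tuning candidates.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Interaction:
    prompt: str
    response: str
    feedback: Optional[str]  # "up", "down", or None if never rated


def finetuning_candidates(logs):
    """Return prompt/completion pairs from explicitly upvoted interactions."""
    return [
        {"prompt": i.prompt, "completion": i.response}
        for i in logs
        if i.feedback == "up"
    ]


if __name__ == "__main__":
    logs = [
        Interaction("fix this SQL query", "SELECT id FROM users;", "up"),
        Interaction("write a haiku", "Chips hum in the night...", None),
        Interaction("debug my driver", "Have you tried rebooting?", "down"),
    ]
    # Only the thumbs-up example survives the filter.
    print(finetuning_candidates(logs))
```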

            • TRBoom@lemmy.zip · ↑2 ↓2 · 15 hours ago

              You can’t train AI on AI output. It causes degradation in the newly trained model.

              • FooBarrington@lemmy.world · ↑4 ↓1 · 15 hours ago

                First: that’s wrong, every big LLM uses some data cleaned/synthesized by previous LLMs. You can’t solely train on such data without degradation, but that’s not the claim.

                Second: AI providers very explicitly use user data for training, both prompts and response feedback. There’s a reason businesses pay extra to NOT have their data used for training.

  • TemplaerDude@sh.itjust.works · ↑76 · 1 day ago

    It wasn’t that long ago that Jensen was just the funny guy who’d come out, announce a bunch of cards, and say some wacky shit; everyone outside of the tech press would ignore him, and that was that.

    Kinda miss those days

  • Sal@lemmy.world · ↑160 ↓1 · 1 day ago

    I would not be surprised in the slightest if the reason Nvidia drivers have been absolute dogshit lately is that this absolute fucking MORON is forcing the devs to use AI to code the DRIVERS. THE ONE THING YOU DO NOT WANT FUCKING AI MEDDLING WITH.

    If Microsoft did it with Windows 11 (which resulted in various SSD failures for a bunch of people), I have no doubt in my mind that Nvidia does it too.

    • very_well_lost@lemmy.world · ↑12 · 17 hours ago

      THE ONE THING YOU DO NOT WANT FUCKING AI MEDDLING WITH

      I mean… I can think of a lot more than just that one thing

      • Sal@lemmy.world · ↑1 · 13 hours ago

        …yeah, that list is way too extensive to mention, but I also said that because if you are programming something that literally makes your hardware understood by the operating system, it should not contain code that wasn’t written by a human.

    • WIZARD POPE💫@lemmy.world · ↑3 · 17 hours ago

      Yeah, I stopped updating my drivers after several updates in a row just broke everything. Fuck that, I’ll stick with a version I know works.

      • Truscape@lemmy.blahaj.zone · ↑66 · 1 day ago

        In a word - instability. Downgrading driver versions for Nvidia cards is not an uncommon troubleshooting step now, which is not ideal.

        • 87Six@lemmy.zip · ↑14 · 19 hours ago

          AMD had poor drivers because they were inexperienced

          Nvidia has poor drivers because they’re shoving AI in all their holes

        • zurohki@aussie.zone · ↑18 · 1 day ago

          That’s binary blob drivers for you: you just try different versions and hope it gets better someday.

          One of the big advantages to open source drivers is that you can do a bisect to track some new breakage back to a specific patch. Sure, most people don’t know how to do that, but there’s a lot of people who can. And then the problem gets fixed for everybody.
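          (For the curious, a rough sketch of that bisect workflow, driven from Python. The refs "v1.0-good" and "v1.1-bad" and the test_driver.sh script are placeholders for a known-good build, a known-bad build, and a test that exits 0 when the driver works; this assumes the tree builds and tests cleanly at each step.)

```python
# Sketch: let git bisect binary-search the history between a known-good
# and a known-bad ref, using a test command to judge each midpoint.
import subprocess


def git(*args: str) -> subprocess.CompletedProcess:
    """Run a git command, raising if it fails, and capture its output."""
    return subprocess.run(["git", *args], check=True, text=True,
                          capture_output=True)


def bisect_regression(good: str, bad: str, test_cmd: str) -> str:
    git("bisect", "start", bad, good)        # mark the endpoints (bad ref first)
    result = git("bisect", "run", test_cmd)  # git tests each midpoint for us
    git("bisect", "reset")                   # return to the original HEAD
    return result.stdout                     # includes "<sha> is the first bad commit"


if __name__ == "__main__":
    print(bisect_regression("v1.0-good", "v1.1-bad", "./test_driver.sh"))
```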

  • toast@retrolemmy.com · ↑99 ↓1 · 1 day ago

    “I want every task that is possible to be automated with artificial intelligence to be automated with artificial intelligence,” he added. “I promise you, you will have work to do.”

    I think every task that can be automated with LLMs is already automated. It’s just that it’s a very short list.

      • kopasz7@sh.itjust.works · ↑50 · 1 day ago

        While I agree CEOs don’t do much work, we still need a person there, because code can’t be held accoun- nevermind, that ship has long sailed.

        • Truscape@lemmy.blahaj.zone · ↑31 · 1 day ago

          “A computer can never be held accountable. Therefore a computer must never make a management decision.”

          • IBM training presentation, 1979

          Although at this point our computing philosophies from that era are ancient history according to these shortsighted people.

  • mysticpickle@lemmy.ca · ↑48 · 1 day ago

    Moment of Irony:

    For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.

    🫠

  • Archangel1313@lemmy.ca · ↑47 · 1 day ago

    “I want every task that is possible to be automated with artificial intelligence to be automated with artificial intelligence,” he added. “I promise you, you will have work to do.”

    Well, yeah. Obviously, someone is going to have to unfuck everything AI fucked up…so, in a way, using AI is kind of like adding a layer of job security to your job.

      • zurohki@aussie.zone · ↑18 · 1 day ago

        Reminds me of when companies offshored their whole dev team and just sent requirements to them, thinking they’d get code made cheaper.

        I mean, it was cheaper. It’s just that it was also awful. It was basically like firing all your senior devs and giving their work to randos who can’t code, but with plausible deniability.

        • Boomer Humor Doomergod@lemmy.world · ↑1 · 8 hours ago

          Yes, it was cheaper, even after they had to re-hire the folks they fired so they could fix the code that came back from a dev farm overseas given just a vague set of requirements and no oversight.

          But the same sort of attitude is happening with AI, as if the code it’s making won’t require review and testing.