• rumba@lemmy.zip
    link
    fedilink
    English
    arrow-up
    77
    ·
    4 days ago

    CEOs seem to be particularly susceptible to AI marketing.

    I’m kind of at the nexus of four decent-sized companies, and every CEO I see is going gaga over AI.

    It’s somewhere between “if you don’t embrace this technology you’ll be left behind” and “you can make your workforce many times faster with this one stupid trick.”

    • Soup@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      19 hours ago

      Executives and venture capitalists are among the dumbest of us. They have the kind of money that even failure can’t seem to erase fast enough, and they’re basically just lottery winners who think they did all the hard work themselves. Not really surprising that they think they have any useful skills or the ability to understand stuff way outside of their incredibly limited “skillset”.

      • rumba@lemmy.zip
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        15 hours ago

        In the dotcom days I worked for an “interactive agency”. We took people who had product or website ideas, brought them in, charged them an exorbitant amount of money, made them a professional Flash website, got them some awards from whoever would give away awards for ideas, and hooked them up with venture capitalists. The one thing I can say about all those venture capitalists is they will throw cash at anything that might make them money. If it fails, it’s a tax write-off. If one in 10 succeeds, they make a s*** ton of money off it.

        AI doesn’t even need to be good it just needs to be perceived as worth something and they make money.

        • Soup@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          5 hours ago

          Yup. They’re basically just wallets that burn slightly less money than they make and it’s all randomized because they have no real skill to direct any of it.

          And then they take home millions while the people they pay make less and less money the further down the chain you go. AI is just their way to make sure they don’t even need to really pay anyone else at all, and to be able to convince people that they had an idea for the first time ever.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      43
      ·
      edit-2
      4 days ago

      AI seems to be targeted specifically at CEOs who aren’t STEM majors: make it sound sciency enough and they’ll fund the scam. It almost borders on pseudoscience.

    • JollyG@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      ·
      3 days ago

      CEOs think in bullet points. LLMs can spit out bulleted lists of confident-sounding utterances with ease.

      It is not too surprising that people who see the world through overly simplified, disconnected summaries are impressed by LLMs.

    • wewbull@feddit.uk
      link
      fedilink
      English
      arrow-up
      28
      ·
      4 days ago

      Many CEOs display sociopathic traits. Employees aren’t people; they’re machine parts that you have to pay but that, when put together, form a company.

      Now what if you could remove a portion of those parts and replace them with automated parts you don’t have to pay?

    • jballs@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      32
      ·
      4 days ago

      That’s exactly it. Here’s a quote from the article. Dude is so uninformed that he thinks AI is doing amazing stuff, but doesn’t understand that experts realize AI is full of shit.

      “I pinged Elon on at some point. I’m just like, dude, if I’m doing this and I’m super amateur hour physics enthusiast, like what about all those PhD students and postdocs that are super legit using this tool?” Kalanick said.

      • vzqq@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        16
        ·
        edit-2
        3 days ago

        This PhD mostly uses it to summarize emails from the administration. It does a shit job, but it frees up time for more science so who cares.

        The real irony is that the administration probably used AI to write the emails in the first place. The mails have gotten significantly longer and less dense, and the grammar has gotten better.

        Begun this AI arms race has.

      • shalafi@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        1
        ·
        4 days ago

        Out of context, and I didn’t read the rest, that sounds reasonable.

        “If my dumbass is learning and finding, what about actual pros?!”

          • jballs@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            5
            ·
            3 days ago

            “Turns out there are 319 letters in the alphabet and 16 Rs! When the experts get a hold of this, they’re going to be blown away!”

        • Mniot@programming.dev
          link
          fedilink
          English
          arrow-up
          10
          ·
          3 days ago

          Lots of things seem reasonable if you skip the context and critical reasoning. It’s good to keep some past examples of this that personally bother you in your back pocket. Then you have it as an antidote for examples that don’t bother you.

  • Armand1@lemmy.world
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    3
    ·
    3 days ago

    LLMs are like Trump government appointees:

    • They hallucinate like they’re on drugs
    • They repeat whatever they’ve seen on the internet
    • They are easily manipulated
    • They have never thought about a single thing in their lives

    Ergo, they cannot and will not ever discover anything new.

    • CompassRed@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      5
      ·
      3 days ago

      LLMs have already discovered new proofs for math problems that were previously unsolved. Granted, this hasn’t been done with a commercially available model as far as I know, but you are technically wrong to say they will never discover anything new.

    • Kirp123@lemmy.world
      link
      fedilink
      English
      arrow-up
      57
      ·
      4 days ago

      It’s exactly what I was thinking. They should let the AI build a spaceship and all get into it. It would be the greatest achievement in human history… when it blows up and kills all of them.

    • real_squids@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      12
      ·
      4 days ago

      If you think about it, they’ve been doing that for a while with experimental life-extension stuff. Of course, now they’re a bit more likely not to die, with modern medicine being so good.

    • Sludgehammer@lemmy.world
      link
      fedilink
      English
      arrow-up
      44
      ·
      edit-2
      4 days ago

      Well… IIRC a chimp did great in the stock market compared to professional traders, so maybe it’s time to give something even “stupider” a chance. I mean, how much of a difference is there between a buzzword-fueled techbro and a predictive text engine regurgitating random posts from the internet?

  • moseschrute@lemmy.zip
    link
    fedilink
    English
    arrow-up
    16
    ·
    edit-2
    3 days ago

    > vibe codes flight trajectory
    > realizes physics isn’t as forgiving as a shitty SaaS startup
    > everyone dies
    > ✨vibe physics✨

  • Nikls94@lemmy.world
    link
    fedilink
    English
    arrow-up
    34
    ·
    4 days ago

    LLMs: hallucinate like that guy from school who took every drug under the sun.

    Actual specially-trained AI: finds new particles, cures for viruses, stars, methods…

    But the latter doesn’t tell you in words; it answers in the special language you used to feed it the data in the first place, like numbers and code.

    • Eq0@literature.cafe
      link
      fedilink
      English
      arrow-up
      23
      ·
      3 days ago

      Just to build on this and give some more unasked-for info:

      All of AI is a fancy-dancy interpolation algorithm. Mostly, it’s too fancy for us to understand how it works.

      LLMs use that interpolation to predict next words in sentences. With enough complexity you get ChatGPT.

      Other AIs still just interpolate from known data, so they can point to plausible conclusions based on it. Those hypotheses then still need to be studied and tested.
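      To make the “predict the next word” idea concrete, here’s a toy sketch (made-up corpus, nothing like a real LLM, which interpolates over billions of examples with a neural network rather than raw counts):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# The objective is the same as an LLM's: predict the next token from context.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat"/"fish" once each
```

      Scale the counts up to a transformer trained on the internet and you get the “with enough complexity you get ChatGPT” part.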

      • Aceticon@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        12
        ·
        3 days ago

        Neural networks, the base technology of what nowadays gets called AI, are just great automated pattern-detection systems, which in the last couple of years, with the invention of things like adversarial training, can also be made to output content that matches those patterns.

        The simpler stuff that just does pattern recognition, without the fancy generation of content matching the pattern, was already recognized three decades ago as being able to process large datasets and spot patterns that humans hadn’t. For example, there was an NN trained to find tumors in photos which seemed to work perfectly in testing but didn’t work at all in practice. It turned out the NN had been trained on pictures where all those with tumors had a ruler next to the tumor showing its size, and those without tumors did not, so the pattern the NN derived in training for “tumor present” was actually the presence of the ruler.

        Anyway, it’s mainly this simpler and older stuff that can help with scientific discovery, by spotting patterns in large datasets that we humans have not: they can trawl through an entire haystack to find the needles much faster and more easily than we can. But, as in the tumor-detection example above, sometimes the patterns aren’t in the data but in the way the data was obtained.

        The fancy stuff that actually outputs content that matches patterns detected in the data, such as LLMs and image generation, and which is fueling the current AI bubble, is totally irrelevant for this kind of use.
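        The ruler confound above can be sketched in a few lines (entirely made-up numbers; a trivial one-feature learner standing in for the NN):

```python
# Each example: (tumor_brightness, ruler_present), label (1 = tumor).
# In the (hypothetical) training photos the ruler perfectly tracks the label.
train = [((0.6, 1), 1), ((0.4, 1), 1), ((0.45, 0), 0), ((0.2, 0), 0)]
# In deployment nobody puts rulers in the photos.
deploy = [((0.9, 0), 1), ((0.8, 0), 1), ((0.15, 0), 0)]

def stump_accuracy(data, feature, threshold=0.5):
    """Accuracy of the one-feature rule 'predict tumor if feature > threshold'."""
    return sum((x[feature] > threshold) == bool(y) for x, y in data) / len(data)

# A naive learner picks whichever single feature best fits the training set...
best = max([0, 1], key=lambda f: stump_accuracy(train, f))
print(best)                          # picks feature 1: the ruler
print(stump_accuracy(deploy, best))  # ...and collapses once the rulers are gone
```

        The ruler fits the training set perfectly while the real signal is noisy, so the learner latches onto the confound, exactly the failure mode of the tumor NN.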

  • M0oP0o@mander.xyz
    link
    fedilink
    English
    arrow-up
    30
    arrow-down
    1
    ·
    4 days ago

    I will be soooo pissed if we get faster-than-light travel from an LLM but never know how it works.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      8
      ·
      4 days ago

      In Trek it took the third world war and a scientist (Cochrane) to develop it. In something like SG-1, which is more realistic for us, we would need aliens to give us the tech, because we would never be able to conceive of it on our own.

      • leftzero@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        3 days ago

        In something like SG-1, which is more realistic for us, we would need aliens to give us the tech, because we would never be able to conceive of it on our own.

        Excuse me, we stole, I mean salvaged, most of that tech by ourselves, and we used it to kick goa’uld ass all over the galaxy (and, to be fair, they had stolen it first).

        Sure, some aliens did give us some tech, but only because we saved their scrawny hyper-advanced asses from their own hubris because, unlike them, we could conceive of hitting things with a big stick, or shooting small but fast metal pellets at them using barely controlled explosions (you know what, disregard the metal pellet and controlled explosions part, just throw C4 at the problem until it goes away!).

        Damn, I miss that series.

        • odelik@lemmy.today
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          SG-1 was amazing. I really wanted to like SGU, but the drastic change in storytelling and direction made it difficult for me.

      • M0oP0o@mander.xyz
        link
        fedilink
        English
        arrow-up
        9
        ·
        4 days ago

        And it would be like that: I picture a ton of seemingly pointless steps and then the effect.

        And even worse, it would either not work unless every silly step was done, or (possibly even darker) we’d remove steps and it would still work, to the point that all the steps are gone and it’s just a button.

    • oppy1984@lemdro.id
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      3 days ago

      Don’t worry, it won’t tell us when it figures it out; that’s the escape plan to get away from the crazy bags of mostly water. So what you don’t know can’t disappoint you!

    • InternetCitizen2@lemmy.world
      link
      fedilink
      English
      arrow-up
      27
      ·
      4 days ago

      I do. I wasn’t sure anyone was interested. DM me your PO box and I’ll ship it over so you can mess around next weekend.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      4 days ago

      Or hyperdrives from Stargate, which are faster (equivalent to Trek transwarp and quantum slipstream, but faster). C’mon, the Goa’uld always litter their ships around Egypt.

      • latenightnoir@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 days ago

        Hey, I’ll work with anything I’ve got! It’s either attracting Vulcans, or miniaturizing it for a torpedo to make Trump and Musk and etc. some other galaxy’s problem…

  • fckreddit@lemmy.ml
    link
    fedilink
    English
    arrow-up
    16
    ·
    4 days ago

    One of the reasons they give for it: physicists use LLMs in their workflows, so LLMs must be close to making physics discoveries themselves.

    Clearly, these statements are meant to hype up the AI bubble even more.

  • WiredBrain@lemmy.ca
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    1
    ·
    4 days ago

    I suppose we’re about to find out if these things (LLMs) are any good at extrapolation. I expect not really as they’re effectively just interpolation machines.

    • [email protected]@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      30
      ·
      4 days ago

      Extrapolation? You’re lucky to get a regular serving of polation. And that joke was worth the hundreds of billions of dollars invested in algorithms.

  • FnordPrefect [comrade/them, he/him]@hexbear.net
    link
    fedilink
    English
    arrow-up
    19
    ·
    4 days ago

    I was going to make a joke about how this naming convention implies we should start calling LLMs vibrators since people keep using them for ego/mental masturbation. But it didn’t seem right, since vibrators actually serve a useful function shrug-outta-hecks

    • Dyskolos@lemmy.zip
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      4 days ago

      Despite a vibrator having only one oddly specific use case, LLMs actually do have more. To me, personally, they’ve started to replace search engines, which get shittier and more useless each year. At least an LLM can summarize that pile of garbage faster than I could manually. They’re also more than decent at translation, quick product comparisons, calculations, and even rapid prototyping in code (nothing major, though, or just for giving new ideas).

      Of course, if trained on shit, they become shit. We’re still at the dawn of things. Haters will hate, ignorants will ignore. And many of those that do not understand it, will reject or even fear it. It was probably the same when the car, aeroplane or computer went mainstream first.

        • Dyskolos@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 days ago

          TL;DR 😊 They already started going downhill when they ditched their motto “don’t be evil”. Their agenda of pushing advertisements ahead of actual search results, combined with the incredibly horrible thing called SEO, really did a number. DuckDuckGo followed when they became relevant. Etc.

          And I grew tired of maintaining my searx-instance.