• Voroxpete@sh.itjust.works · 17 hours ago

    Fun fact, this is also true of crypto and NFTs. The more people know about them, the more skeptical they become.

    Almost like they’re all scams.

    • Tartas1995@discuss.tchncs.de · 16 hours ago

      This is shockingly true.

      I work in IT, and a coworker who is a Windows server admin (please take a moment to pray for his soul) is, according to him, making money with crypto. I was quite excited about crypto before I looked into it, but the reality left me disappointed; I reject crypto precisely because I looked into it.

      So he is a bit of a fan and I am the opposite, and I decided to talk with him about his perspective on my issues with crypto. Before we even got there, I had to realise that he was unfamiliar with even the basics of crypto.

  • Thorry@feddit.org · 21 hours ago

    I recently read a cool book and wanted to know what other people thought about it. I had no idea where to find out, probably obscure forums or something. But with search engines being shit these days, I could only find one-line reviews. I was looking for something a little more in-depth.

    So I thought, hey, let’s try some kind of LLM-based solution, this is something it should be able to do, right? So I told ChatGPT: hey, I read this book and I liked it, what are some common praises and criticisms of that book? And the “AI” faithfully did as told. A pretty good summary of pros and cons, with everything explained properly without becoming too verbose. Some of the points I agreed with, others less so. Wow, that’s pretty neat.

    But then alarm bells started ringing in my head. Time for a sanity check. So in a new chat I posed the exact same question, word for word. However, I replaced the name of the book and the name of the author with something completely made up. Real-sounding in context, not obviously fake, but weird enough to give a human pause. And of course, not similar to anything that actually exists. The damn thing proceeded to give a very similar result to before. Different points, but the same format and gist. In-depth points about the pacing and predictability of a book I made the fuck up just seconds earlier.

    I almost fell into the trap of thinking LLMs could be useful in some cases. But in fact they are bullshit generators that just happen to be right some of the time.

    • Hazzard@lemmy.zip · 15 hours ago

      The way I imagine it in my head is like a text autocomplete trying to carry on a story about a person talking to a brilliant AI.

      If something is real, of course the hypothetical author would try to get those details correct, so as not to break the illusion for educated readers. But if something is fake (or the LLM just doesn’t know about it), well, of course the all-knowing fictional AI it’s emulating would know about it. This is a fictional story; whatever your character is asking about is probably just part of the setting. It wouldn’t make sense for the all-knowing AI in this story to just not know.

      Obviously, OpenAI or whoever would try to prompt their LLMs to believe they’re not in a fictional setting, but LLMs are trained on as much fiction as non-fiction, and fiction doesn’t usually break the fourth wall to tell you it’s fiction; it often does the opposite. And even in non-fiction there aren’t many examples of people saying they don’t know things. I wouldn’t write a book review just to say I haven’t heard of the book. Not to mention the non-fiction examples of people confidently being wrong or flat-out lying.

      Simply based on the nature of human writing, I frankly wouldn’t ever expect LLMs to be immune to writing fiction. I expect that it’s fundamental to the technology, and “hallucinations” (a metaphor that gives far too much credit, IMO) and jailbreaks won’t ever be fully stamped out.
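      The autocomplete framing above can be sketched with a toy model. This is purely illustrative, nothing like a real LLM: a bigram counter over a made-up mini-corpus of "reviews" that always continues a prompt with something plausible-sounding and has no way to say "this book doesn’t exist."

```python
# Toy illustration of the "autocomplete" framing: a bigram model that
# continues any prompt with whatever is statistically plausible.
# The corpus is invented; the model has no notion of real vs. fake books.
from collections import Counter, defaultdict

corpus = (
    "the book was praised for its pacing and criticized for its predictable "
    "ending . the book was praised for its characters and criticized for its "
    "pacing . the book was praised for its prose ."
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(prompt, length=8):
    """Greedily append the most likely next word; never refuses."""
    words = prompt.split()
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # a real LLM backs off rather than stopping, but the point stands
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# Whether the "book" is real or invented, the model produces a fluent review:
print(continue_text("the"))
```

      Ask it about any book, real or made up, and it happily generates review-shaped text, which is the behaviour described above, just at a vastly smaller scale.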

    • leftytighty@slrpnk.net · 17 hours ago

      the only time they’re useful is when assisted by an algorithmic search that provides good contextual information for it to summarize and more importantly link to for verification…

      if you’re struggling to find good results online it will absolutely not be helpful, but if you’re struggling to read the results then it might help you home in on an area and save you time.

      however, chances are you’ll continue to get worse at independent information gathering
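      a rough sketch of that search-assisted pattern, with placeholder functions (keyword_search and llm_summarize are made up, not real APIs): the search supplies the context, the model only summarizes it, and every claim carries a link you can verify.

```python
# Hypothetical sketch of search-assisted summarization: an ordinary search
# supplies the context, the model only condenses it, and each claim links
# back to a source. Both helper functions are stand-ins, not real APIs.

def keyword_search(query):
    # Placeholder: a conventional index lookup returning (url, snippet) pairs.
    return [("https://example.org/review1", "Praised for pacing ..."),
            ("https://example.org/review2", "Criticized as predictable ...")]

def llm_summarize(question, sources):
    # Placeholder for an LLM call, constrained to the retrieved snippets.
    cited = "; ".join(f"{snippet} [{url}]" for url, snippet in sources)
    return f"Based on the sources: {cited}"

def answer(question):
    sources = keyword_search(question)
    if not sources:
        # No retrieved context: refuse instead of letting the model invent one.
        return "No sources found; not answering."
    return llm_summarize(question, sources)

print(answer("common criticisms of the book"))
```

      the key design choice is the refusal branch: with no retrieved sources, nothing reaches the model, so there’s nothing for it to confabulate about.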

  • halcyoncmdr@lemmy.world · 22 hours ago

    As soon as hallucinations were an option, it proved it could never be trusted as anything other than a toy. That takes literally no knowledge about it other than the fact that it can tell you lies. Anyone who thinks otherwise is clearly an idiot.

    • shalafi@lemmy.world · 2 hours ago

      LLMs are generally OK if you can craft an unbiased question that demands facts. I’ve never seen ChatGPT get one of those wrong. But it’s stunning how easily you can manipulate them just a couple of prompts deep.

      Thing is, most people, in America anyway, didn’t get the science training I got in 70s elementary school, and even though I’m barely above average IQ, I was a star science student. Imagine those people who don’t understand empiricism using LLMs. The mind boggles.

    • chaogomu@lemmy.world · 16 hours ago

      Generative AI actually does have a few real uses. Most notable is in the generation of new protein sequences.

      Not long ago, you had PhDs whose entire careers were built on understanding a single protein sequence. Now we can generate thousands of properly folded proteins. Millions. Stuff nature never thought of.

      Due to patent law, the biotech revolution is still a few years out, but it’s coming.

      You want an enzyme that breaks apart plastic? We can design one now and have yeast producing it within a day or two.

      And there are millions more that we can now play with.

      Anyway, there are a few more niche uses for generative AI. But then idiot CEOs decided to shove that shit into everything, with decidedly mixed success.

  • Vanilla_PuddinFudge@infosec.pub · 20 hours ago

    Yeah, you just figure out what it’s doing and eye-roll a bit.

    I had that moment.

    Oh, in no way is this sentience. AI is just a Google search with extra steps… Google would say less, but, I mean, it really depends.

  • cm0002@piefed.world · 23 hours ago

    Whaaaat? You mean to tell me that as a person learns a new tool they become more and more aware of its downsides‽

    Crazy man lmao