Pressing the Copilot button to instantly bring up a text box where you can interact with an LLM is amazing UI/UX for productivity. LLMs are by far the best way to retrieve information (that doesn’t need to be correct).

If this had been released with agentic features that let it search the web, run tool scripts (fetching the time/date and other data from the OS; see the sketch below), use Recall, and properly integrate with the Microsoft app suite, it would be game changing.

We already have proof that this is a popular feature for users, since it’s been integrated into every mobile phone for the past 10 years.
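
To make the tool-script idea concrete, here is a minimal Python sketch of an OS date/time tool an assistant could call. Every name and the schema shape here are illustrative assumptions modeled on common function-calling APIs, not anything Copilot actually ships:

```python
# Hypothetical "tool script" an assistant could invoke instead of guessing.
# The schema shape mimics common function-calling APIs; none of this is
# Copilot's real interface.
import json
from datetime import datetime, timezone

def get_datetime() -> str:
    """Return the current OS time, one fact an LLM can never recall."""
    return datetime.now(timezone.utc).isoformat()

# What the model would see: the tool's name, purpose, and (empty) arguments.
TOOLS = {
    "get_datetime": {
        "description": "Current date and time from the operating system",
        "parameters": {"type": "object", "properties": {}},
        "handler": get_datetime,
    }
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to its local handler."""
    tool = TOOLS[tool_call["name"]]
    args = json.loads(tool_call.get("arguments", "{}"))
    return tool["handler"](**args)

# Simulate the model deciding it needs the time before answering.
print(dispatch({"name": "get_datetime", "arguments": "{}"}))
```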

  • Onomatopoeia@lemmy.cafe · 12 days ago

    Clearly you haven’t worked with one.

    It’s great for getting detailed references on code, or finding sources for info that would take a LOT longer otherwise.

    • wizardbeard@lemmy.dbzer0.com · 12 days ago

      Not who you’re responding to, but I used one extensively in a recent work project. It was a matter of necessity: I didn’t know how to word my question in the technical terms specific to the product, and it was exactly the kind of query where search engines go “I think you actually mean this completely different thing”. There was also a looming deadline.

      Being able to search using natural language, especially when you know conceptually what you’re looking for but not the product- or system-specific technical term, is useful.

      Being able to get, in one spot, disparate information that is related to your issue but spread across multiple pages of documentation is good too.

      But detailed references on code? Reliable sources?

      I have extensive technical background. I had a middling amount of background in the systems of this project, but no experience with the specific aspects this project touched. I had to double check every answer it gave me because what I was working on was so critical.

      Every single response I got had a significant error, oversight, or massive concealed footgun. Some were resolved by further prompting. Most were resolved by me using my own knowledge to work backward from what it gave me to things I could search on my own, and then finding ways to non-destructively confirm the information or poke around in it myself.

      Maybe I didn’t prompt it right. Maybe the LLM I used wasn’t the best choice for my needs.

      But I find the attitude of singing praises without massive fucking warnings and caveats to be highly dangerous.

      • CompactFlax@discuss.tchncs.de · 12 days ago

        Great response.

        It’s great until you realize it’s led you down the garden path and the stuff it’s telling you about doesn’t exist.

        It’s horrendously untrustworthy.

    • FinjaminPoach@lemmy.world · 12 days ago

      “or finding sources for info that would take a LOT longer otherwise.”

      Maybe. It adds to the list of sources you have to check, but I’ve found I still have to manually verify that it’s actually on topic rather than only tangentially related to what I’m writing about. But that’s fair enough, because otherwise it’d be like cheating, having whole essays written for you.

      “It’s great for getting detailed references on code”

      I know it’s perhaps unreasonable to ask, but if you can share examples/anecdotes of this, I’d like to see them, to better understand how people are utilising LLMs.

    • pimento64@sopuli.xyz · 12 days ago

      Skill issue. I’m better at retrieving and then actioning real and pertinent information than you and an AI combined, guaranteed.

    • Jerkface (any/all)@lemmy.ca · 10 days ago

      I’ve spent many, many hours working with LLMs to produce code. Actually, it’s an addictive loop, like pulling a slot machine lever. You forget what you’re actually trying to accomplish; you just need the code to work. It’s kinda scary. But the deeper you get, the worse the code gets. And eventually you realize the LLM doesn’t know what it’s talking about. Not sometimes, ever.

      • nandeEbisu@lemmy.world · 10 days ago

        It has been useful for me with poorly documented libraries, though I don’t have it generate more than code snippets or maybe small utilities.

        It’s more of an API search engine to me. I find it’s about 80% correct, but it’s easier to search for a specific method and make sure it does what you expect than to scroll through pages of generated class documentation, half of which look like internal implementation details I won’t need to care about unless I’m really digging in as a power user.

        Also, even if the method isn’t correct or is more convoluted than a more direct one, it’s usually in the same module as the correct one.
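
        For what it’s worth, that “make sure it does what you expect” step is cheap to script. A minimal Python sketch using only the standard library as a stand-in; “parse” is an invented example of the kind of plausible method name an LLM might hallucinate:

        ```python
        # Treat the LLM's suggestion as a lead, then confirm against the real
        # library before trusting it. json here is a stand-in for any module.
        import inspect
        import json

        # "loads" is real; "parse" is the kind of name an LLM might invent.
        for name in ("loads", "parse"):
            attr = getattr(json, name, None)
            if callable(attr):
                print(f"json.{name} exists: {inspect.signature(attr)}")
            else:
                print(f"json.{name} not found; back to the real docs")
        ```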