Pressing the Copilot button to instantly bring up a text box where you can interact with an LLM is amazing UI/UX for productivity. LLMs are by far the best way to retrieve information (that doesn't need to be correct).
If this had been released with agentic features that let it search the web, use tool scripts (like fetching the time/date and other info from the OS), use Recall, and properly integrate with the Microsoft app suite, it would be game-changing.
We already have proof that this is a popular feature for users, since it's been integrated into every mobile phone for the past 10 years.


An LLM is just a slow way to do things that already have better ways of doing them.
Or to have an expensive autocorrect do your thinking.
Upvoted. It’s utterly useless.
So you agree that pressing a button to bring up a box you can query with natural language is a good feature; you just think the LLM part is slow and computationally inefficient? I could agree with that if something better were proposed. I see an LLM as a good fit for this because of how dynamic it is, and with the addition of tools to handle specific tasks in a deterministic fashion, it's a powerful tool for users.
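To make the "LLM plus deterministic tools" idea concrete, here's a minimal sketch of the dispatch pattern: the model only decides *which* tool to call, and the tool itself is ordinary, deterministic code. The names here (`ask_llm`, `TOOLS`, `get_current_datetime`) are hypothetical, not any real assistant's API; `ask_llm` is a stand-in for a real model call so the sketch stays runnable.

```python
# Sketch of tool dispatch: the LLM routes the request, deterministic code answers it.
# ask_llm / TOOLS / get_current_datetime are illustrative names, not a real API.
from datetime import datetime

def get_current_datetime() -> str:
    """Deterministic tool: read the clock from the OS instead of letting
    the model guess (and hallucinate) the current date."""
    return datetime.now().isoformat(timespec="seconds")

TOOLS = {"get_current_datetime": get_current_datetime}

def ask_llm(prompt: str) -> dict:
    """Stand-in for a real model call; here it just routes time/date
    questions to the tool so the example runs without a model."""
    if "time" in prompt.lower() or "date" in prompt.lower():
        return {"tool": "get_current_datetime"}
    return {"answer": "(model free-text answer)"}

def assistant(prompt: str) -> str:
    decision = ask_llm(prompt)
    if "tool" in decision:
        return TOOLS[decision["tool"]]()  # exact value from the OS, not generated text
    return decision["answer"]

print(assistant("What's the date today?"))
```

The point of the pattern is that anything with a single correct answer (clock, file system, calendar) bypasses the model's text generation entirely, so that part of the response can't be wrong.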
Clearly you haven’t worked with one.
It's great for getting detailed references on code, or finding sources for info that would take a LOT longer otherwise.
Not who you’re responding to, but I used one extensively in a recent work project. It was a matter of necessity, as I didn’t know how to word my question in the technical terms specific to the product, and it was something that was just perfect for search engines to go “I think you actually mean this completely different thing”. There was also a looming deadline.
Being able to search using natural language, especially when you know conceptually what you're looking for but not the product- or system-specific technical term, is useful.
Being able to get disparate information that is related to your issue but spread across multiple pages of documentation in one spot is good too.
But detailed references on code? Reliable sources?
I have extensive technical background. I had a middling amount of background in the systems of this project, but no experience with the specific aspects this project touched. I had to double check every answer it gave me due to how critical what I was working on was.
Every single response I got had a significant error, oversight, or massive concealed footgun. Some were resolved by further prompting. Most were resolved by me using my own knowledge to work from what it gave me back to things I could search on my own, and then find ways to non-destructively confirm the information or poke around in it myself.
Maybe I didn’t prompt it right. Maybe the LLM I used wasn’t the best choice for my needs.
But I find the attitude of singing praises without massive fucking warnings and caveats to be highly dangerous.
Great response.
It’s great until you realize it’s led you down the garden path and the stuff it’s telling you about doesn’t exist.
It’s horrendously untrustworthy.
That’s some funny shit.
Clearly you’ve not been fact checking the shit it hallucinates.
Skill issue. I’m better at retrieving and then actioning real and pertinent information than you and an AI combined, guaranteed.
Maybe. It adds to the list of sources you have to check from, but I've found I still have to manually check to see if it's actually on topic rather than only tangentially related to what I'm writing about. But that's fair enough, because otherwise it'd be like cheating, having whole essays written for you.
I know it's perhaps unreasonable to ask, but if you can share examples/anecdotes of this I'd like to see them, to better understand how people are utilising LLMs.
I’ve spent many many hours working with LLMs to produce code. Actually, it’s an addictive loop, like pulling a slot machine. You forget what you’re actually trying to accomplish, you just need the code to work. It’s kinda scary. But the deeper you get, the worse the code gets. And eventually you realize, the LLM doesn’t know what it’s talking about. Not sometimes, ever.
It has been useful for me with poorly documented libraries, though I don't use it to generate more than code snippets or maybe small utilities.
It's more of an API search engine to me. I find it's about 80% correct, but it's easier to search for a specific method to make sure it does what you expect than to scroll through pages of generated class documentation, half of which look like internal implementation details I won't need to care about unless I'm really digging in as a power user.
Also, even if the method isn't correct or is more convoluted to use than a more direct one, it's usually in the same module as the correct one.