Pressing the Copilot button to instantly bring up a text box where you can interact with an LLM is amazing UI/UX for productivity. LLMs are by far the best way to retrieve information (that doesn't need to be correct).
If this had been released with agentic features that let it search the web, call tool scripts (fetching the time/date and other info from the OS), use Recall, and properly integrate with the Microsoft app suite, it would have been game-changing.
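The tool-script idea above can be sketched as a minimal tool registry an OS assistant might expose to a model. The function names and dispatch shape here are illustrative assumptions, not any real Copilot API:

```python
import datetime
import platform

# Hypothetical tool registry (names are illustrative, not a real API).
# Each tool returns plain text the model can fold into its answer.
TOOLS = {
    "get_datetime": lambda: datetime.datetime.now().isoformat(timespec="seconds"),
    "get_os_info": lambda: f"{platform.system()} {platform.release()}",
}

def dispatch(tool_name: str) -> str:
    """Run a registered tool and return its result as text for the model."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name]()
```

In practice the model would emit a structured tool-call request and the OS shell would route it through something like `dispatch` before feeding the result back into the conversation.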
We already have proof that this is a popular feature for users, since assistants like it have been integrated into every mobile phone for the past ten years.


No, it's not. Firstly, 99% of people have no idea what that button is.
Secondly, opening a web browser, going to Google, typing in your question, pressing "I'm Feeling Lucky", and then searching through the webpage is way slower than hitting the Copilot button, typing your question, and getting a quick, direct answer.
Then write yourself a desktop plugin, an icon, an input box, anything, to take you to the first Google search result. What the fuck does this have to do with an LLM? How does that justify using gallons of water, gigawatts of electricity, and petabytes of stolen training data?
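The "anything that takes you to the first result" suggestion really is a few lines: Google's "I'm Feeling Lucky" behavior is triggered by the `btnI` query parameter (assuming Google keeps honoring it; it sometimes interposes a redirect notice):

```python
import urllib.parse
import webbrowser

def lucky_url(query: str) -> str:
    """Build Google's 'I'm Feeling Lucky' URL (btnI parameter) for a query."""
    return "https://www.google.com/search?" + urllib.parse.urlencode(
        {"q": query, "btnI": "1"}
    )

# Bind this to a hotkey or tray icon to jump straight to the top result:
# webbrowser.open(lucky_url("python datetime isoformat"))
```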
OK, so your main complaint is that it's too energy-intensive? Would you concede that an OS assistant is a good feature if the per-query computation cost were lowered? Because I'd argue it already is, and the cost of an LLM query isn't unreasonable. The large power costs come from model training, and the per-query cost is negligible.
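The amortization argument is simple arithmetic. All figures below are illustrative assumptions for the sketch, not measured values for any real model; the point is only that a one-off training cost spread over billions of queries shrinks to a tiny per-query share:

```python
# Back-of-envelope sketch; every number here is an assumed placeholder.
TRAINING_ENERGY_KWH = 10_000_000   # assumed one-off training energy
QUERIES_PER_DAY = 100_000_000      # assumed daily query volume
DAYS_IN_SERVICE = 365              # amortize over one year

# Training energy amortized per query:
amortized_kwh = TRAINING_ENERGY_KWH / (QUERIES_PER_DAY * DAYS_IN_SERVICE)
print(f"amortized training energy per query: {amortized_kwh * 1000:.4f} Wh")
```

Under these assumptions the training share works out to a fraction of a watt-hour per query; the real dispute is what the assumed volumes and training figures actually are.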
Also, I won't make an argument about the copyright of the training data, because I don't respect copyright.