Just heard this today - a co-worker used an LLM to find him some shoes that fit. Basically he prompted it to find specific shoes that fit wider/narrower feet, and it just scraped reviews and told him what to get. I guess it worked perfectly for him.
I hate this sort of thing - however, this is why normies love LLMs. It’s going to be the new way every single person uses the internet. Hell, on a new Win 11 install the first thing that comes up is Copilot saying “hey use me i’m better than google!!”
Frustrating.
I’m a student, and I’ve strictly removed myself from this fucked-up story. Albeit, in my CS class of about 125 students, I’d say 87% of them use ChatGPT, probably even more. It’s easy to see that almost everyone in the class cheats: I spend about 5 days on a project or lab, they spend only 1 day. It’s not that they’re good at CS; this is, after all, an “introductory” CS class, and at the moment we’re learning data structures. Everyone hates data structures, and the people I talked to said “yeah, I spent a day on the linked list lab and got 100.” It is so rare for me to get a 100 in this class, and I spent 5 days on that lab.
I’ve completely removed myself from using ChatGPT and other LLMs. For fuck’s sake, I’m using a query-based search engine called Marginalia Search, because even the internet has been fucked with unreliable information.
I love it, because now I don’t have everything at my fingertips. I do have a few things with the search engine, but you have to be economical with the words you use, and not everything will pop up. For example, you can’t just search “how to concatenate numbers in c++”; you need to say “string concatenation c++”, because otherwise you’re using too many keywords.
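(For reference, the answer that kind of query is chasing is tiny anyway; a minimal sketch of one common way to concatenate numbers in C++ - not the only way - looks something like this:)

```cpp
#include <iostream>
#include <string>

int main() {
    // "Concatenating numbers" usually means: turn each number into a string,
    // then glue the strings together with operator+.
    int a = 42;
    int b = 7;
    std::string joined = std::to_string(a) + std::to_string(b);  // "427"
    std::cout << joined << '\n';
    return 0;
}
```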
Because of this, I’ve started checking out library books from my university, and what’s so funny is that the books I check out aren’t due for months because no one checks out books anymore. I got a C/C++ reference guide and it isn’t due till January of next year.
You are my spirit animal. I’d hire you just based on this comment.
Ok, but how much are people willing to pay an “AI” to find sneakers?
How long can “AI” grifters dump money/resources into these free services?
If “normal people” had to pay the actual costs, I’m sure that many of them would look for their own sneakers.
Personally I don’t need/want a machine to pick my shoes for me in the first place.
Anyway I guess my wider point is that people will stop using “AI” so much when they actually have to pay for it.
People do not pay for online services they’re used to getting for free. That means there’ll always be free LLMs available, but as a result of cost considerations they’ll be even worse than what we have today.
I have found no use for it, even for trivial stuff, as I don’t trust what it outputs. If I have to verify its answer anyway, I might as well just research it on my own; it’s literally more work to ask and then research than to just research.
At work, the only use I have found is providing fake data so I can run some tests, since I work on confidential stuff I cannot use directly. For example, the other day I asked for a table of 30 superhero first names, last names, genders, and DOBs just to pick from. I could easily have done it by hand, since it doesn’t have to be accurate at all, but it was faster to prompt, and I didn’t care if it missed the prompt (a very rare scenario IMO).
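(For illustration, a rough sketch of what hand-rolling that kind of throwaway table might look like; the hero names and dates below are made up, just like the ones I asked the prompt for:)

```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <random>
#include <string>

// A made-up row shaped like the table described above: first name, last name,
// gender, date of birth. Accuracy doesn't matter; it's only test input.
struct FakeRow {
    std::string first;
    std::string last;
    char gender;
    std::string dob;  // YYYY-MM-DD
};

int main() {
    // A small pool of fabricated "superhero" rows to pick from at random.
    const std::array<FakeRow, 5> pool = {{
        {"Clark", "Kent", 'M', "1978-06-18"},
        {"Diana", "Prince", 'F', "1976-03-22"},
        {"Bruce", "Wayne", 'M', "1972-02-19"},
        {"Barbara", "Gordon", 'F', "1989-09-23"},
        {"Peter", "Parker", 'M', "2001-08-10"},
    }};

    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> pick(0, pool.size() - 1);

    // Emit a few random rows as CSV for whatever test needs feeding.
    for (int i = 0; i < 3; ++i) {
        const FakeRow& r = pool[pick(rng)];
        std::cout << r.first << ',' << r.last << ',' << r.gender << ',' << r.dob << '\n';
    }
    return 0;
}
```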
Sponsored search results suck, but sponsored LLM results are gonna be wild.
LLMs are good at templates or starting points for standard documents, communications, and coding examples.
BUT … you have to double-check every single word.
This happens because search engines have become worse and worse over the years, unless you’ve installed ad blockers or have a Pi-hole. Not to mention that many search results Google returns are just AI-generated websites. And the average person isn’t going to pay for Kagi to get a better search engine.
Enshittification squared. Create a service that customers come to rely on. Then turn the service into shit to squeeze more profit out of it. Then create a new service that replicates the functionality of the old service customers relied on. Then enshittify that. And so on.
Yes. That is how the internet will be used, to a significant degree. Even today AI represents an extremely powerful and astonishingly human-adapted user interface.
The internet brought a vast quantity of knowledge together and made it accessible to anyone, in theory. In practice you need arcane knowledge to get what you want. You need to wiggle the mouse just so, need to know the abstract structure of the internet, the peculiarities of search terms, … it’s eminently doable but it’s not natural or intuitive. You must be taught how to use it.
If you put a medieval Tamil farmer in a room with a ChatGPT audio interface they could use it and have access to all of that internet knowledge.
I understand a lot of the backlash against AI but I don’t get hating on it because of how good of an interface it makes.
Unfortunately you’re right. If it weren’t completely owned by massive corps pushing techno-fascist ideology and mass surveillance, I could maybe see it as a positive thing. But it will not be used for good in the long run.
Right. So that is what we need to change.
You can’t change that, short of sabotaging data centers or getting rid of billionaires.
Better to rally against it and not use it whenever possible.
Yeah, and how does that Tamil farmer fact check their black box audio interface when it tells them to spray Roundup on their potatoes, or warns them to buy bottled water because their Hindu-hating Muslim neighbors have poisoned their well, or any other garbage it’s been deliberately or accidentally poisoned with?
One of the huge weaknesses of AI as a user interface is that you have to go outside the interface to verify what it tells you. If I search for information about a disease using a search engine, and I find an .edu website discussing the results of double blind scientific studies of treatments for a disease, and a site full of anti-Semitic conspiracy theories and supplement ads telling me about THE SECRET CURE DOCTORS DON’T WANT YOU TO KNOW, I can compare the credibility of those two sources. If I ask ChatGPT for information about a disease, and it recommends a particular treatment protocol, I don’t know where it’s getting its information or how reliable it is. Even if it gives me some citations, I have to check its citations anyway, because I don’t know whether they’re reliable sources, unreliable sources, or hallucinations that don’t exist at all.
And people who trust their LLM and don’t check its sources end up poisoning themselves when it tells them to mix bleach and vinegar to clean their bathrooms.
If LLMs were being implemented as a new interface to gather information - as a tool to enhance human cognition rather than supplant, monitor, and control it - I would have a lot fewer problems with them.
Why would anybody hate on endless free CPU cycles? It’s like handing out candy. Just gotta keep that “investor” cash flowing…
I don’t think people realize that what’s quickly going to happen is that the people making the models will start extorting brands for better rankings: you want our model to recommend your brand, then pay us $X. Then any perceived utility over reading reviews vanishes, kind of like with fake reviews today.
Worse than that, people and brands are going to enshittify the internet in an effort to get their products and brands into the training data with a more positive context.
Just use one AI to create hundreds of thousands of pages of bullshit about how great your brand is and how terrible your competitors brands are.
Then every AI scraping those random pages trying to harvest as much data as possible folds that into their training data set. And it doesn’t just have to be things like fake product reviews. Fake peer-reviewed studies and fake white papers. It doesn’t even have to be on the surface. It can be buried in a 1000 web servers accessible to scrapers but not to typical users.
Then all the other brands will have to do the same to compete, all of this enshittifying the models themselves more and more as they go.
Self-inflicted digital brain tumors.
Most AI company executives have already spoken openly about how that’s their plan for future financial growth: advertisements delivered naturally in the output with no clear division between ads and the content.
Oh, 100% - we already see what fElon programmed HitlerBot to do. It’s going to be an ultra-capitalist’s wet dream once the internet is destroyed and people only have access to a Corpo LLM for the cost of 3 pints of blood a month!
Arguably this is already happening. AIs are trained mostly by web scraping, and specifically by scraping Reddit, which has a known astroturfing problem. So it’s already being fed non-genuine inputs, and it likely isn’t being paired with tools to flag reviews as fake.
Already happening. I was using ChatGPT to make a script to download my liked videos from YouTube Music, and it keeps giving me a pop-up with the message “use Spotify instead”.
This isn’t so dystopian if open-weight LLMs keep their momentum. If anyone can host them, hosts for models become commodities rather than brands that capture users, and each host has less power for extortion.
You seem to imply that they would care about that at all. They won’t. It’s already happening in the shops they frequent (Amazon) and they don’t care.
There are already reports of software companies building features that an AI hallucinated, because people search for them and demand them.
Around me, people use LLMs for pretty much any question they cannot answer, ranging from tech support to life advice.
For free? That won’t last forever.
Ofc not. They’re paying with their data.
hey use me i’m better than google
I don’t use copilot (or windows), nor do I believe that LLMs are appropriate for use as a search engine replacement, but to be fair, google is really bad now, and I wouldn’t be surprised if people are having a better experience using LLMs than google.
I have a feeling google is slowly reducing its own quality so using llms become the norm, since it’s even easier to inject ads. Might be paranoia tho.
Google has already been caught doing this. They reduced the quality of search results and placed ads and SEO-gamed results (from companies that pay to be first in the rankings) ahead of other results. This was happening before they had a gen-AI LLM.
Its intent is to keep you on the search page longer, viewing ads, so they can get more ad revenue.
They’re an ad aggregation company first and foremost and search (along with their other suite of products) is how they serve those ads.
It’s also easier to inject propaganda. Look at Grok - extreme example, sure, but it shows exactly what these are designed to do.
Nah, that’s silly. Google’s search ranking is definitely just as easy for them to manipulate to push specific content, if not easier, than dealing with a non-deterministic LLM.
Both are propaganda machines, but traditional search algorithms are way more direct than LLMs.
I think the other half of this is the confidence with which it’s programmed to give the answer. You don’t have to go through a couple individual answers from different sites and make a decision. It just tells you what to do in a coherent way, eliminating any autonomy you have in the decision process/critical thinking, of course. “Fix my sauce by adding lemon? Ok! Add a bay leaf and kill yourself? Can do!”
I spend significantly more time keeping AI at bay than using it.
- It is not good enough to write code at the expertise level I’m usually required to work at. I’ve spent more time fixing generated code than it would have taken me to write it from scratch.
- It hallucinates too much for me to trust any information provided by one
- Security of AI companies is about the same as IoT companies with the difference being that if IoT leaks my data it’s going to be incompetence and not malice - I don’t trust AI with any of my local data.
- AI agents require being given even more access and permissions, and that’s just not happening.
- I contact support only when I’ve exhausted what I can do myself, and as a result AI chatbots are an annoying obstacle that can’t help me and that I have to waste my time getting through to reach a person who actually has the power to help me.
This is how my younger brother already is. Whenever something goes wrong, his first instinct is to open the ChatGPT app and ask it what to do.
Car engine making weird noises? Ask ChatGPT. Botched dinner recipe? Ask ChatGPT how to fix it.
Evidently, he sees nothing wrong with this, and does not consider the fact that the answers regurgitated by the LLM may not even be relevant/correct/up-to-date. I imagine this is how most people today treat LLMs.
Does your brother pay for these services? If not, how much would he be willing to pay?
Eventually these “AI” grifters will run out of “investor” money to hand out.
People are literally outsourcing their critical thinking to LLMs that constantly hallucinate.
What could go wrong?
I feel a major reason people do this is that they don’t want to waste the time and effort of going to one or two websites to actually learn about something.
Honestly, this is sad. Modern media are conditioning their brains for instant gratification, and LLMs provide just that with quick and direct answers - which may not be the correct ones. But people are willing to overlook that fact.
This is totally it.
I don’t care, I’ll continue researching actual data made by humans on websites and deciding for myself whether they’re trustworthy.
The simple solution here is to make the internet usable. I shouldn’t have to dig through three pages of search results to find something vaguely related to what I am asking for. Not to mention, with search engines there is no way to correct them. I can tell an LLM to find me something, and if it gets the wrong item, I can just tell it that and it will fix it. I’m not about to do mental gymnastics to figure out the perfect search query to type into Google for the same result, minus the wrong item. ALL THE TIME it thinks I’m talking about something else, and Google refuses to show me the thing I am trying to find. Yet when their LLM messes up my search, I tell it it screwed up and it tries again WITHOUT SHOWING ME THE EXACT SAME WRONG RESULT.
I fucking hate how unusable Google has become. And don’t tell me to try DDG, it’s just as bad. Yandex is aight for porn but that’s about it. The entire internet has become completely unusable. And yeah, you can inject propaganda or artificially promote a certain company’s items to the top. But you know what I hate a lot worse than being forced into buying one company’s item? Not being able to buy that item at all because I can’t find it.
That’s what Google was
My mother is constantly googling things and reading me the AI overview. And I know LLMs make shit up all the time, and I don’t want AI hallucinations to infect my brain and slowly fuck up my worldview. So I always have to drop everything and go confirm the claims from the AI overview. And I’ve caught plenty of inaccuracies and hallucinations. (One I remember: she googled for when the East Wing of the White House was originally built and the AI overview told her the year of a major renovation, claiming it was the year it was built, but it had been built much earlier.)
I never understood how people order shoes online unless they already own that exact pair. I go to a real-world shop and try 10 pairs, and honestly, 8 of them aren’t so great. I wouldn’t know how that works with Amazon or an AI.
I’d never buy any kind of clothing online. I’m not especially vain, I don’t even wear makeup, but if I can’t tell how it’s going to feel and look on my body, why would I trust it?
I just got a pair of Adidas on the Costco website for $16.76. I took a gamble and they are NICE. I can’t believe I got that deal. I’ll chance it at that cost.
Most shoe sizes are the same. I wear 11.5; I’ve been wearing it for 20 years and have probably had hundreds of shoes of that size.
The only time I ever have a different shoe size is for leather dress shoes or boots, which are typically sized down from a standard shoe size.
Yeah, I’d say size is about right. Not entirely - I have shoes plus or minus half a size, especially sport shoes (I don’t own more than one pair of dress shoes), and some brands are off, though they seem to be consistent about it. But mostly size works out for me as well; they seem to do a good job with that number. I meant more generally how they fit. They’d all be the right size, and I’ll try them and walk around the aisles in the shop, and some shoes are just way more comfortable to walk in than others.
What? As a person with wide feet, there are sizes that are specifically made for wide feet. This isn’t anything they couldn’t have learned from a normal search engine.
Are you kidding? Normies would love nothing more than to never learn how to do anything ever again and have all their needs catered to them.
Windows won’t even have a keyboard in 5 years, you’ll have to call a service team for that. How antiquated! We will all speak to “computer” like Star Trek!
Windows won’t even have a keyboard in 5 years… We will all speak to “computer” like Star Trek!
It’s called Alexa and it’s extremely mediocre.