This is the main reason I completely ditched Reddit. If you use the new Reddit interface instead of the old one (old.reddit.com), you’ll see constant requests being made to “https://www.reddit.com/svc/shreddit/events” (open your DevTools > Network tab; for some reason I can’t see it on Firefox).
The problem is, if you add this to your uBlock Origin filters, the website won’t load properly; that’s why the uBO team hasn’t blocked it already.
You’ll notice this request isn’t only made on an interval but also whenever you take basically any action on the site, like pausing or resuming a video (it sends timestamps of when you paused or resumed).
It sends other kinds of data too, like what subjects you were viewing when you closed a tab, or the related subjects of a post you click. All of this can be used to build a detailed profile of you and the things you like.
You can avoid that by using old.reddit.com, but it still has the same kind of tracker, even though there you can block it without major issues.
From my analysis, the old Reddit interface does the same thing, but to a URL path that always starts with “reddit.com/api/…”. Ex.: reddit.com/api/friends. So you can block anything that starts with “www.reddit.com/api” in your custom filters (after all, you’re using old.reddit.com), and then you’re mostly free from Reddit trackers (more or less). The side effect is that you won’t be able to use the chat in the old interface.
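For reference, a custom uBO static filter matching that description could look like the following (illustrative, not a filter from the uBO team; `||` anchors the pattern at the hostname in uBO’s filter syntax):

```
! Illustrative: block requests whose URL starts with www.reddit.com/api,
! while leaving old.reddit.com/api (used for actual site features) alone.
||www.reddit.com/api/
```

As noted above, expect side effects like chat breaking in the old interface.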
Reddit sucks for many reasons and I refuse to use it, but as a software engineer, this hardly looks nefarious. It looks like a pretty typical event gateway, which networked applications use for all kinds of things that keep a platform running. We have one in our application, and it’s not used for any kind of privacy-invasive tracking. We use it for things like bulk data processing for userbase-level analytics (e.g., how many users are using this feature?), or for billing purposes (since we bill based on usage).
And calls to /api/* routes are absolutely normal for any SPA (single-page app), and are required for them to function. There’s certainly a technical argument to be made against SPAs in favor of more traditional server-side rendering (augmented by tools like https://htmx.org/ for dynamic content), which could avoid these kinds of API calls (and, in fact, it’s a model I’m very much in favor of), but that kind of architecture is far from the norm these days. The SPA model is the current (IMO bad, from a technical perspective) standard.

We have many reasons to shit on Reddit and their behavior, but this honestly isn’t one of them.
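To make the SPA point concrete, here is a minimal sketch (the endpoint name and shape are invented for illustration, not Reddit’s actual API) of how a single-page app fetches everything it displays:

```javascript
// Illustrative only: an SPA ships a mostly empty HTML shell, then pulls all
// of its content from /api/* endpoints. Block those and there's nothing to render.
function feedUrl(page) {
  return `/api/feed?page=${page}`;
}

async function loadFeed(page = 1) {
  const res = await fetch(feedUrl(page)); // a blocked request means an empty app
  if (!res.ok) throw new Error(`feed request failed: ${res.status}`);
  return res.json();
}
```

This is why blanket-blocking /api/* breaks an SPA outright, while a server-rendered page merely loses interactivity.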
The point isn’t the endpoint call; the point is that while you’re on old.reddit.com, it’s making calls to reddit.com. I inspected it closely: reddit.com/api serves no purpose when you’re using old.reddit.com. The calls that make the site work go to old.reddit.com/api; the calls for tracking go to reddit.com/api. And old Reddit isn’t an SPA; you can access it with JS disabled.
I know old reddit is not an SPA, but that’s entirely the point. New reddit is clearly written as an SPA. Old reddit was created before SPAs were super common, so it uses a different architecture.
Yes, so when we’re talking about calls to /api/* we’re talking about old.reddit.com. I didn’t say anything about calls to this endpoint on new Reddit. The problem with new Reddit is the calls made to https://www.reddit.com/svc/shreddit/events with a lot of trackers. Those can’t be blocked.

Right, and I explained that this looks like a very common event-gateway kind of architecture, which has many legitimate uses.
Now, it’s entirely possible that Reddit is also using it for tracking shit (because of who they are), but the mere fact that an event gateway exists isn’t evidence of that. Here’s the Wikipedia article on the architecture: https://en.m.wikipedia.org/wiki/Event-driven_architecture.
If you want to believe Reddit is using it merely to make their website work properly, be my guest.
I have no doubt Reddit is doing shitty things (as evidenced by, well, everything in the last several years), but that’s entirely unrelated to what kind of architecture is involved. You can do shitty stuff with regular JS, cookies, etc. on webpages.
I simply don’t want people thinking that this is actual evidence of wrongdoing, because it isn’t.
You really can’t tell that this isn’t being used for evil practices; personal info is leaving your machine via client-side requests, end of story. You can use your judgement, but factually, you can’t really tell anything. I wouldn’t trust Reddit; if you trust them, good for you.
I use Redlib to access Reddit, especially because I constantly get the “your request was blocked by network security” issues
Sad. There was a time when Reddit, despite all its flaws, was great for connecting with obscure interests and getting inside knowledge on niche topics. It made small communities feel big. I find myself on Reddit occasionally, usually just on the “you’ve been blocked by network security ;-)” page, but I don’t feel the same way about Reddit anymore. The more I hear about their behavior in the background, the less I want to read or contribute in the foreground. Lemmy captures some of the vibes of Reddit, but the in-depth, obscure communities were kind of shattered. I’m slowly discovering some cool forums and stuff that have weathered the web-centralization era.
I’m a bit confused here. API calls aren’t trackers. It’s how the frontend communicates with the backend. Of course Shreddit wouldn’t work if you blocked them, Reddit doesn’t live on your PC.
I’m actually surprised Old Reddit still works with them blocked. Are you sure it actually works, or does it just look like it’s working locally? If you save a post, for example, does it stay saved if you close the window and open it again?

Edit: Yeah, just tested it: even on Old Reddit, saving, for example, doesn’t work if you block reddit.com/api/*. Voting looks like it works, but if you reload the page, your vote is gone, because it only ever existed client-side.

The filter should explicitly start with “www.reddit”, or it’ll also target “old.reddit.com/api”, which is used for site features. Then you’ll see everything works as usual, no issues, but this request (the trackers) is blocked.
I use www.reddit, so there are no old.reddit calls happening. You can change in the settings whether www is Old Reddit or Shreddit (until you clear your cookies).
Anyway, my point was more that this is just general frontend-backend communication. If you turn that off, even old Reddit won’t work anymore. It’s not something you should be blocking.
The old reddit will work fine my friend, the new one won’t.
This entire post is just a massive misunderstanding of how web software works. This is a normal, everyday thing, and every single non-static (and even sometimes static) site out there does this, or something parallel to it.
They don’t need to make api calls to track you, you’re logged in or at the very least connecting in a way that lets them follow your session.
They don’t even need api calls to fingerprint your browser, they can just throw that fingerprinting into every post when you go to get more content.
Bottom line: This ain’t it, chief.
Inspect this request yourself and you’ll see: they send info about everything; this is not normal. If this were normal, all the uBlock Origin team’s work would be nonsense. Why bother blocking trackers on the client side if, according to you, they can track you the same way server-side?
Blocking this request is a layer of protection, not a silver bullet against being tracked; other means should be used for that.
99% of that is stuff they already know. What api endpoint is this?
They can’t know at what minute you paused or resumed a video via the server side (this is sick to me)… Actually, this doesn’t even make sense: if they can learn all of that server-side, why send redundant data from the client?
I took a look, and besides the amount of info they send, what drew my attention is that they send an accurate flag for whether you’re using an adblocker or not. Not sure why, but they know who uses one and who doesn’t; they just choose to do nothing.
There’s way more if you inspect it closely.
It’s the endpoint of new reddit that I mentioned in the post, if you block it the feed won’t load anymore.
Their video player streams, not unlike YouTube. Sending the “the user paused” action can easily be explained by them wanting to tell the server to stop sending data when the client isn’t even attempting to view it.
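For illustration, here is roughly what benign player telemetry looks like on the client. The field names and endpoint below are invented for the sketch, not Reddit’s actual schema:

```javascript
// Hypothetical sketch: build a small event when playback state changes,
// so the server can stop streaming segments to a paused player.
function buildPlayerEvent(action, positionSeconds) {
  return {
    event: action,                         // e.g. 'pause' or 'play'
    position: Math.floor(positionSeconds), // where in the video we are
    ts: Date.now(),                        // client timestamp
  };
}

// In a real page this would be wired to the <video> element, e.g.:
// video.addEventListener('pause', () => navigator.sendBeacon(
//   '/events', JSON.stringify(buildPlayerEvent('pause', video.currentTime))));
```

The timestamp that looks like tracking is also exactly what a streaming backend needs to resume playback from the right position.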
Look, I get that you’re gung ho about privacy, but you’re going to see damn near exactly these kinds of things with any API. Sometimes it’s nefarious, sure. But the vast majority of it is simply how web-based software works.
I mean, just look at the majority of that data. What feed you’re in? You pulled the feed data from their server; they don’t need to gather that for tracking purposes. What media is open? Obviously they can track that when you pull the media info down from their servers. How long that media is? The size of the media? When that post was posted?
I’m seeing virtually nothing in there they don’t already have. This is just an API being fed information it needs to run. The fact that it’s all information they can already track is kinda proof of that. Why go to all this trouble tracking the information a second time?
That’s like warning me there’s bacteria in the shitting bucket
Why is nobody using the special room for taking a shit? I was just in there and the newspaper we use to clean up with is the same date as the day we installed it.
Why are you shitting in a bucket?
What, you’re too good for the bucket?
I mean…
Elitist everyone! Get him!
I am actually using the federated outhouse now, but still like to keep up to date with the sufferings of people who are so used to the bucket they can’t let go
The new American dream.
It’s a guilty little pleasure of mine to read about how shitty it is for those who freely choose to remain there. It’s tragic, but I can’t help shaking my head in disbelief at such stupidity. Why people put up with shit like that totally blows my mind.
it’s got all the content and users
IMO it had all the content and users some time ago. I follow mostly specific interest/hobby subreddits, but lately even they have devolved into illiterates asking the same exact questions repeatedly; some strange attention-seeking posts such as pictures captioned “getting started”/“this just arrived” or “what should I do with this thing that I got?”; and really dumb stuff such as “I inhaled solder fumes, will I get lead poisoning?” (These are at least entertaining in a way.)
I mean it depends on what you use it for. Lemmy instances might not have as much content, but the question is if they have enough content. If I can’t scroll to the end of reddit or Lemmy, what’s the difference? Lemmy doesn’t have the depth of knowledge, though; I’m not going to find 12 year old troubleshooting tips for my GBA on Lemmy.
This is good to know - thanks for sharing. I dread that old.reddit.com’s days are numbered…
Same, pretty sure they’ll completely shut down old reddit soon.
It’s been acting really squirrely on my phone (no, I don’t use the app), to the point where I just don’t bother with it anymore. It seems like they’re changing stuff, and I’m done with it.
The problem is, if you add this to your uBlock Origin filters, the website won’t load properly
This is unfortunately true for many many many sites.
Use non-commercial web pages & internet stuff as much as possible.
Many non-commercial websites still need to talk to their backend to function. Lemmy for example has the same exact “issue” OP is pointing out about Reddit here. It also makes those API calls whenever you do something, and if you block them, the site doesn’t work anymore.
weird. old reddit loads fine for me with js completely disabled.
That’s what I miss about the good old web. Websites that just work, without JavaScript
Indeed, it’ll work, you just won’t be able to interact: upvote/downvote, join subreddits, comment, etc.
If you’re not going to interact at all, it’s better to use Redlib (a private frontend, e.g. red.artemislena.eu); I think it even hides your IP address (not sure though).
yeah, those and old reddit (also available as a tor onion site) are good.
at least a few years ago one could set some cookie to a weird value and bypass old reddit’s VPN restriction page, not sure if it’s been patched now…
Could the fact that you don’t see the requests in Firefox be a sign that uBO actually works better in Firefox and worse in Chrome? Possibly something about manifest V3 being implemented? Or do you think the Firefox DevTools are just less advanced? Or something else?
I’m only speculating, but I believe uBO behaves differently between Chromium-based and Gecko-based browsers because Chrome is restricting the abilities of extensions like uBO to do their thing.
My point is, when you open both browsers without any extensions you can see the HTTP requests in DevTools > Network, so it isn’t related to uBO. For some reason I can’t see this specific request in Firefox, but I can see it in Chromium-based browsers.
I know the request is happening in Firefox because I tested with a MITM proxy, so I can see all my network traffic; I noticed the request being made, but Firefox DevTools doesn’t show it for some reason.
I tested without uBO installed of course, but anyway, requests blocked by uBO are shown in red with “Blocked by uBO” in the “Transferred” column.
Yeah, IIRC Chrome killed WebExtensions Manifest V2, which is what uBO was originally built on.
I still don’t get why people didn’t leave Chrome and Chromium browsers en masse when they did this.
Users are so fucking dumb at this point… Like, do they not want any power or ownership? This shit is so easy to fight against, yet they just sleep on it.
For me personally, it’s about a cost-vs-benefit tradeoff. The benefits of staying with Edge outweigh the cost of doing so, especially considering my adblocker hasn’t gotten noticeably worse.
Firefox has always run like ass for me (worse performance and more resources used), and is missing a lot of features I’ve come to love from Edge.
The move from Chromium sends a bad message, but considering my adblocker still works as well as ever, and Edge still does not have a date for when it drops MV2 support (though it’s deprecated), the cost of switching is far heavier than the benefit right now for me.
What if there’s no alternative by the time it drops MV2 support, because Chrome/Chromium worked well enough for people that the alternatives couldn’t justify funding?
Also, it’s weird that Firefox runs like ass for you. I hear this a lot, but I’ve never had problems with it since I switched back. I used Chrome primarily for about a decade, from 2013 until 2023 or so.
I get you. But it’s the same with WhatsApp, Windows, and the rest. People won’t leave their comfort zone, which I can understand, but the argumentation drives me crazy sometimes. I fucking despise the ad industry today, so it’s a no-brainer for me, but people don’t seem to mind seeing the same slop again and again.
Maybe it’s time for me to delete it and step away for good then. The “algorithmic” corpoweb as a whole makes me feel nauseous.
I think it would depend on why you were visiting reddit. If you wanted to comment or pm someone you’d need to visit the site. If you’re just reading the site, you could use one of the anonymous front ends out there.
Do you know if there’s anything that can be blocked from the Javascript side of things? (uMatrix lets me handle that on a case-by-case basis)
It seems you can’t block this request. I tested using a MITM proxy, which lets you hold requests before they’re sent: when I held it, the Reddit home page stopped loading, and if I resumed but discarded the request (the same thing as blocking it), the page never loaded.
Back when I noticed this, I asked on the uBO subreddit, and the uBO team told me there’s nothing they can do; they tried, and all their attempts broke the site. That’s why I said this tracker is unblockable.
I suggested they intercept the request, replace the parameters with fake ones, and then let it proceed, so the tracker receives wrong info; they said they had already tried that, but it didn’t work.
Okay, thanks much for looking in to that!
What I know for sure-- when “old” Reddit is eventually abolished, that’ll be my last participation with the site, other than maybe when useful search engine results come up.
The problem is, if you add this to your uBlock Origin filters, the website won’t load properly; that’s why the uBO team hasn’t blocked it already.
How do I add this to uBlock? I added it with double vertical bars, but I’m not noticing a difference.
Screenshot:
Add this:
It must explicitly start with www.reddit, or all of old Reddit won’t work.
You should see this when you open the Network tab of DevTools (F12):
The first one has no effect if you’re using old.reddit.com
Awesome. Looks like it’s working now. Thanks for the guidance!
Keep in mind, adding that means you can’t interact with Reddit anymore. Like, you can still load pages on Old Reddit of course, but you can’t vote or do anything else that needs to go through the API.
Mentioning this because Old Reddit isn’t super clear on this. If you upvote something while that filter is on, for example, it’ll look upvoted. But when you reload the page, it’ll be gone. Because your upvote got blocked from reaching the server and only ever existed client-side.