

WTF is up with that title. Jesus.


I use Wayland now but there are still apps I run in X mode. Notably mpv and Firefox, because I cannot for the life of me configure them sensibly in Wayland, and I don’t want to write arcane KWin scripts just to get window sizing/positioning to stay the way I want on launch. I tried; it was extremely frustrating and still not quite functional.
Perhaps there are other window managers that would make my life easier. I haven’t tried many, but in principle, there is no way for the window manager to know the correct size and location of new windows for arbitrary applications, so I doubt it. I consider this a user-hostile design choice in Wayland and I pray it will change in the future.


In practice they’re cheap. I saw the Pixel 9 on sale for under $400 before the 9a was even released.
MSRP is an absolute joke, but most people either get it for much cheaper than that, or think they’re getting it for much cheaper through obfuscated costs with carrier deals.
Also, brand reputations tend to outlive reality by a decade or more, so people still think Pixels have great software and Samsung is bloated as hell. The reality is that Samsung and Google have met in the middle.
I can’t fucking wait for a non-Pixel GrapheneOS phone. So tired of Google’s shit.


Yeah, there is no consensus on quantum gravity. There are competing theories, none of which have any viable path to test.
Here’s the abstract from a 2006 paper at https://arxiv.org/pdf/gr-qc/0601043 (PDF, unfortunately):
Freeman Dyson has questioned whether any conceivable experiment in the real universe can detect a single graviton. If not, is it meaningful to talk about gravitons as physical entities? We attempt to answer Dyson’s question and find it is possible to concoct an idealized thought experiment capable of detecting one graviton; however, when anything remotely resembling realistic physics is taken into account, detection becomes impossible, indicating that Dyson’s conjecture is very likely true. We also point out several mistakes in the literature dealing with graviton detection and production.
Edit: That said, the paper does address this. They cover a variety of QG theories and try to address the fundamental requirements any theory must meet.
As we do not have a fully consistent theory of quantum gravity, several different axiomatic systems have been proposed to model quantum gravity [Witten 1985; Ziaeepour 2021; Faizal 2024; Bombelli 1987; Majid 2017; D’Ariano 2016; Arsiwalla 2021]. In all these programs, it is assumed a candidate theory of quantum gravity is encoded as a computational formal system ℱ_QG = {ℒ_QG, Σ_QG, ℛ_alg}.
It’s over my head, personally.


The majority of people will trail behind by 5-10 years, same as always. As long as a small minority at the cutting edge continue to use and develop better things, everyone will have access to them eventually.


They announced that they’re working with an OEM to support new non-Pixel phones (perhaps even shipped with GOS).
The Pixel 9 series will be supported for another 6 years, and GOS support for the Pixel 10 is probably coming after Google releases QPR1 source. Hopefully there will be viable replacements by then.
Google is obviously going to keep making this more difficult but the rest of the world isn’t going to just sit still.


Other terms that made the shortlist of finalists for this year’s Word of the Year included “agnetic,”
Surely they mean “agentic”, right? Right???
I searched for “agnetic” to see if I was out of the loop and it’s kind of funny, kind of sad. I found a lot of what I guess is AI slop that took a typo and just ran with it. Like this one: https://www.linkedin.com/pulse/clash-intelligences-agnetic-ai-vs-agent-explained-robin-biwre
Agnetic AI is a newer conceptual framework that extends beyond the traditional agent-based model. The term “Agnetic” is derived from the word “magnetic,” signifying its dynamic, adaptive nature.
https://www.agnetic.ai/ also looks like slop, but it’s realllllly hard to distinguish between AI bullshit and traditional tech marketing bullshit.


The actual paper presents the findings differently. To quote:
Our results clearly indicate that the resolution limit of the eye is higher than broadly assumed in the industry
They go on to use the iPhone 15 (461ppi) as an example, saying that at 35cm (1.15 feet) it has an effective “pixels per degree” of 65, compared to “individual values as high as 120 ppd” in their human perception measurements. You’d need the equivalent of an iPhone 15 at 850ppi to hit that, which would be a tiny bit over 2160p/UHD.
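To make the arithmetic explicit: pixels per degree scale linearly with pixel density at a fixed viewing distance, so the ~850ppi figure falls out of a simple ratio. A minimal sketch, using only the figures quoted above (my own back-of-the-envelope, not anything from the paper itself):

```python
# ppd is proportional to ppi at a fixed viewing distance,
# so the required density is just a ratio of the quoted figures.
iphone15_ppi = 461   # iPhone 15 pixel density
quoted_ppd = 65      # effective pixels per degree quoted for it above
max_ppd = 120        # highest individual acuity the paper reports

required_ppi = iphone15_ppi * max_ppd / quoted_ppd
print(round(required_ppi))  # ~851 ppi to saturate a 120 ppd observer
```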
Honestly, that seems reasonable to me. It matches my intuition and experience that for smartphones, 8K would be overkill, and 4K is a marginal but noticeable upgrade from 1440p.
If you’re sitting the average 2.5 meters away from a 44-inch set, a simple Quad HD (QHD) display already packs more detail than your eye can possibly distinguish
Three paragraphs in and they’ve moved the goalposts from HD (1080p) to 1440p. :/ Anyway, I agree that 2.5 meters is generally too far from a 44" 4K TV. At that distance you should think about stepping up a size or two. Especially if you’re a gamer. You don’t want to deal with tiny UI text.
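For the TV case, here’s a rough sketch of the same kind of calculation using standard small-angle geometry. This is my own formula and numbers, not the paper’s model, so treat it as approximate:

```python
import math

def pixels_per_degree(ppi: float, distance_m: float) -> float:
    # One degree of visual angle spans roughly distance * tan(1 degree) at the screen.
    distance_in = distance_m / 0.0254          # metres -> inches
    return ppi * distance_in * math.tan(math.radians(1))

# Hypothetical 44" 16:9 QHD (2560x1440) TV viewed from 2.5 m:
width_in = 44 * 16 / math.hypot(16, 9)         # ~38.4" wide
qhd_ppi = 2560 / width_in                      # ~67 ppi
print(round(pixels_per_degree(qhd_ppi, 2.5)))  # ~115 ppd, right around the ~120 ppd ceiling
```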
It’s also worth noting that for film, contrast is typically not that high, so the difference between resolutions will be less noticeable, at least if you are comparing videos with similar bitrates. If we’re talking about Netflix or YouTube or whatever, they compress the hell out of their streams, so you will definitely notice the difference, if only by virtue of the different bitrates. You’d be much harder-pressed to spot the difference between a 1080p Blu-ray and a 4K Blu-ray, because 1080p Blu-rays already use a sufficiently high bitrate.


Does it do that even if you set it to “use device MAC” for the wi-fi network you’re on?
The exact location might depend on brand/OS, but in stock Android it’s in Settings > Network & Internet > Internet > gear icon next to active wi-fi network > Privacy.


The only thing I would use such a thing for is installing an ad blocker for the real world.


It’s been a while since I ran a full-fat VM. What’s the go-to these days?


On the one hand, yes, I agree with you, calculus is not that complicated, but at the same time, I think you’d be hard-pressed to teach even the basic concepts to your average adult today.
I loved that line in whichever TNG episode it was, because it was just an off-hand joke that shows how much humanity has advanced.


If 8-year-olds can understand calculus, I think 5-year-olds can understand basic self-preservation.


If you’re maga ~~and a star trek fan~~ then you’re just a fucking idiot.
FTFY


Just saw this article today on ArsTechnica, which seems relevant: https://arstechnica.com/science/2025/10/believing-misinformation-is-a-win-for-some-people-even-when-proven-false/


Well, telling time is one more feature than most jewelry has, and jewelry is what mechanical watches really are. That’s not even very expensive as far as watches go.
I sure wouldn’t buy one myself, but I won’t judge anyone for their taste in fashion accessories. In this case I will absolutely judge them, but for entirely different reasons.


Robots commuting to the moon.
Robots.
Commuting.
To the moon.
This is the most extreme case of affluenza I’ve ever seen. Let’s pray that it’s terminal.


I’ll speculate.
My money’s on Asus. Asus is a bit more mainstream than Nothing but still enough of an underdog that I think they should see the value in a partnership. They already target an enthusiast niche with the ROG line.
The Nothing Phone 3 uses an SD 8s Gen 4, which is not Qualcomm’s “flagship” SOC, and it would be stretching the definition of “major” OEM, but who knows? This seems the most likely after Asus.
Moto’s only flagship Snapdragon phone is the Razr Ultra, which I guess is possible. It’d be weird, but hey, I’d buy one.
OnePlus has been moving in the opposite direction for years now, locking things down more and more. I think they’re too big for their britches at this point.
Sony’s flagships are crazy expensive, well beyond the price of Pixels. They also don’t cover the US market, though I’m not sure how important that is to the Graphene devs.
HMD doesn’t make any phones with flagship SOCs. I think their best is the Skyline, with a 7s gen 2, Qualcomm’s fourth-tier SOC line (the “s” stands for shitty).
Fairphone doesn’t use flagship Snapdragons and GOS has had some pretty nasty things to say about them in the past.
Samsung is a pipe dream. They’d have no motivation. The entire GOS user base would be a rounding error to them.
On a global scale, Xiaomi would be a huge get. Not sure I see any of the Chinese OEMs focusing on this though.
Lenovo and Blackberry…might still exist? I think?


Representation…in AI image generation?
The idea that this is something anyone should want is hard to wrap my head around.
If I could opt out of being deepfake-able, I would.


It makes sense to me IF it actually works.
Having extra capacity when a device is brand-new isn’t a huge boon, but having stable capacity over the long term would be. At least for me.
Of course this will depend on your habits. If you replace your phone every year, then it doesn’t matter. If you’re a light user and only go through a couple charge cycles per week, it’ll matter less than if you go through 1-2 cycles per day.
Personally I’m at around 1 cycle per day on my current phone, and after nearly 3 years (over 1000 charge cycles now) the battery life is shit — much worse than just 80% of its original battery life. Performance also suffers. With my last phone, I replaced the battery after 3 years and I was amazed at how much faster it was. I didn’t realize throttling was such a big problem.
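For a sense of scale, here’s a trivial sketch of how usage patterns turn into cycle counts (my own illustrative numbers, nothing battery-specific):

```python
# Charge cycles accumulated under different usage patterns (illustrative only).
def cycles(per_week: float, years: float) -> float:
    return per_week * 52 * years

print(round(cycles(per_week=2, years=3)))     # light user:       ~312 cycles
print(round(cycles(per_week=7, years=3)))     # ~1 cycle/day:    ~1092 cycles
print(round(cycles(per_week=10.5, years=3)))  # ~1.5 cycles/day: ~1638 cycles
```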
I might replace my current battery, but it’s such a pain, and it costs more than my phone is realistically worth.