This requires local access, and presently an hour or two of uninterrupted processing time on the same CPU as the encryption algorithm.
So if you’re like me, using an M-chip based device, you don’t currently have to worry about this, and may never have to.
On the other hand, the thing you have to worry about has not been patched out of nearly any algorithm:
https://xkcd.com/538/
The second comment on the page sums up what I was going to point out:
“I’d be careful making assumptions like this; the same was true of exploits like Spectre until people managed to get it efficiently running in JavaScript in a browser (which did not take very long after the Spectre paper was released). Don’t assume that because the initial PoC is time-consuming and requires a bunch of access that it won’t be refined into something much less demanding in short order.”
Let’s not panic, but let’s not get complacent, either.
That’s the sentiment I was going for.
There’s reason to care about this but it’s not presently a big deal.
I mean, unpatchable vulnerability. Complacent, uncomplacent, I’m not real sure they look different.
Can’t fix the vulnerability, but can mitigate by preventing other code from exploiting the vulnerability in a useful way.
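For what it’s worth, one mitigation that’s been floated for crypto libraries on M1/M2 is to keep secret-dependent work off the performance cores, since the prefetcher at issue reportedly isn’t active on the efficiency cores. A very rough, macOS-flavored sketch of that idea, assuming the QoS hint actually lands the thread on an efficiency core (it’s a scheduler hint, not a guarantee), and with do_constant_time_crypto() standing in for the real work:

```c
/* Hypothetical sketch, not a vetted mitigation: run secret-dependent work
 * with a background QoS class so the scheduler prefers the efficiency cores,
 * where the data memory-dependent prefetcher reportedly isn't active. */
#include <pthread.h>
#include <pthread/qos.h>
#include <stdio.h>

/* Placeholder for the real constant-time crypto routine. */
static void do_constant_time_crypto(void) {
    /* ... RSA/ECDH/etc. would happen here ... */
}

static void *crypto_thread(void *arg) {
    (void)arg;
    /* Hint that this thread should be scheduled as background work.
     * On Apple Silicon that usually means an efficiency core. */
    pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    do_constant_time_crypto();
    return NULL;
}

int main(void) {
    pthread_t t;
    if (pthread_create(&t, NULL, crypto_thread, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(t, NULL);
    return 0;
}
```

The M3 reportedly also exposes a data-independent-timing bit that turns the prefetcher off for a stretch of code, but that doesn’t help M1/M2 owners.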
Sure. Unless law enforcement takes it, in which case they have all the time in the world.
Yup, but they’re probably as likely to beat you up to get your passwords.
No way! Even the evil ones will try to avoid jail.
Meanwhile they might have a friggin budget for the GrayKey, the Stingray…
Definitely believe rights are more likely to be violated when they can just plug in or power on without getting their gloves dirty.
It still requires user-level access, which means they have to bypass my login password first, which would give them most of that anyway.
Am I missing something?
Ah yes, good old Rubber-hose cryptanalysis.
So if someone somehow gets hold of the device then it is possible?
It depends; some M-devices are iOS and iPadOS devices, which would have this hardware issue but don’t have actual background processing, so I don’t believe it’s possible to exploit it the way described.
On a Mac, if they have enough access to your device to set this up, they likely have other, easier ways to get what they want than going through this exploit.
But if they had your device and uninterrupted access for two hours then yes.
Someone who understands it all more than I do could chime in, but that’s my understanding based on a couple of articles and discussions elsewhere.
So it’s been a while since I had my OS and microcomputer architecture classes, but it really looks like GoFetch could be a real turd in the punch bowl. It looks like it could be on par with the Intel vulns of recent years.
“which would have this hardware issue but don’t have actual background processing”
So I’ve read the same about iOS only allowing one user-space app in the foreground at a time, but… that still leaves the entirety of kernel-space processes allowed to run at any time they want. So it’s not hard to imagine a compromised app running in the foreground, all the while running GoFetch and trying to mine keys, while the OS shuffles crypto keys in the background on the same processor cluster.
The other thing I’d like to address is that you’re assuming this code would necessarily require physical access to compromise a machine. That is certainly one vector, but I’d posit there are other, simpler ways to do the same. The two that come to mind right away are (1) a compromised app distributed via official channels like the App Store, or, even scarier, (2) malicious JavaScript hidden on compromised websites. The white paper indicates this code doesn’t need root; it only needs to be executed on the same cluster that the crypto keys happen to pass through, so either of these vectors seems like a very real possibility to me.
Edit to add:
I seem to recall reading a paper claiming the stock TikTok app was actually a polyglot, in that the app would download a binary after installation, so that what’s executed on an end user’s machine is not what went through the app store scanners. I’ve read the same about other apps using a similar technique for mini-upgrades, which is a useful way to avoid going through app store approval every time you need to roll out a hotfix or the latest minor feature.
If these mechanisms haven’t already been smacked down by Apple/Google, or worse, aren’t detectable by Apple/Google, this could be a seriously valuable tool for state-level actors able to pull off the feat of hiding it in plain sight. I wonder if this might be part of what Congress was briefed about recently, and why it was a near-unanimous vote to wipe out TikTok. “Hey Congress people, all your iPhones are about to be compromised… your Tinder/Grindr/OnlyFans kinks are about to become blackmail fodder.”
Doesn’t it require a separate process to be using the cryptographic algorithm in the first place in order to fill the cache in question?
If it’s done in-process of a malicious app that you’re running, why wouldn’t the app just steal your password and avoid all of this in the first place?
An efficient and fast version of this in JavaScript would be worrisome. But as-is it’s not clear whether this can be optimized to take less than 1–2 uninterrupted hours of processing, so hopefully that doesn’t end up being the case.
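To make the JavaScript worry concrete: the building block these attacks need is nothing more exotic than telling a cache hit from a cache miss by timing your own memory accesses, which any unprivileged program can do. A rough native sketch of that primitive is below (my illustration, not the GoFetch PoC; the real attack layers a lot of prefetcher-specific machinery on top of this, and browsers deliberately coarsen their timers to make the JavaScript version harder):

```c
/* Illustration only: shows that an unprivileged process can observe cache
 * behavior purely by timing its own memory accesses. This is the primitive
 * that cache side channels (GoFetch included) are ultimately built on. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define STRIDE 128          /* keep each hop on its own cache line */
#define HOPS   (1 << 20)    /* number of dependent loads to time */

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Build a random cyclic pointer chain over n slots, then time a walk over it.
 * A chain that fits in cache is fast per hop; one much larger than the cache
 * is dominated by misses, and the difference is visible from plain userspace. */
static double ns_per_hop(size_t n) {
    uint8_t *buf = calloc(n, STRIDE);
    size_t *order = malloc(n * sizeof *order);
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {                 /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    for (size_t i = 0; i < n; i++)                       /* each slot points to the next */
        *(void **)(buf + order[i] * STRIDE) = buf + order[(i + 1) % n] * STRIDE;

    void *p = buf + order[0] * STRIDE;
    uint64_t t0 = now_ns();
    for (long i = 0; i < HOPS; i++) p = *(void **)p;     /* dependent loads */
    uint64_t t1 = now_ns();

    volatile void *sink = p; (void)sink;                 /* keep the loop from being optimized out */
    free(order);
    free(buf);
    return (double)(t1 - t0) / HOPS;
}

int main(void) {
    printf("small, cache-resident chain: %.1f ns per load\n", ns_per_hop(1 << 9));
    printf("large, cache-busting chain : %.1f ns per load\n", ns_per_hop(1 << 20));
    return 0;
}
```

The gap between the two numbers is the signal an attacker works with; everything else in GoFetch is about steering the prefetcher so that gap depends on the victim’s key bits.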
Fetching remote code isn’t allowed on the Play Store, at least, though I’m not sure how well they’re enforcing that.
That’s the reason Termux isn’t updated in the Play Store anymore, IIRC; it has its own package manager that downloads and runs code.
I want to say “passkeys” but if I’m honest, that too is susceptible to this attack.