I would say this is a nice and clever attack vector: inferring pixels from rendering time, i.e., side channeling. Kudos to the researchers, though it would take a lot of time to capture the pixels, even for Google Authenticator. My worry now is how much of this could be reproduced to steal OTPs from messages.
Given the rise of well-crafted templates for email phishing (for example, vibe-coded designs that accurately mimic GitHub notification emails), I have literally stopped clicking links in email, and now I've stopped launching apps directly from intents (say, "open with") too. Better to open the app manually and perform the operation there, and to remove useless apps, but people underestimate the attack surface (it can come through SDKs or web-page intents).
My takeaway:
Do not install apps. Use websites.
Apps have way too many permissions, even when they have "no permissions".
No OS vendor wants you to do that, unless you're using a desktop, and then Google wants you to use Chrome. They all want a 30% cut of revenue and/or platform lock-in. They'll rely on dark patterns and nerfing features to push you to their app stores.
Similarly, software vendors want you to use apps for the same reason you don't want to use them. They'll rely on dark patterns to herd you to their native apps.
These two desires influence whether it's viable to use the web instead of apps. I think we need legislation in this area: apps should be secondary to the web services they rely on, and companies should not be allowed to purposely make their websites worse in order to push you onto their apps.
With JS disabled!
The unfortunate truth is that so many things these days require a dedicated mobile app to use.
I don't own or carry a smart phone. I'm still able to get by without one, but just barely.
I am not familiar with this type of side-channel attack, but the article says they use GPU.zip, which is exploitable through Chrome:
https://www.hertzbleed.com/gpu.zip/
I wish Uber or Lyft allowed me to use a website. I hate having to find a regular taxi or rely on the kindness of others to use their app.
I'm no expert in security, but I'm guessing if you install an app on a Windows Desktop computer it can do more chaos faster and more discreetly than pixnapping can on Android.
If you use the same password on two websites, either of them can use it to log you in on the other (if the other doesn't have an extra layer of security).
On paper, security is pretty weak, yet in practice these attacks are not very common or easy to pull off.
>but I'm guessing if you install an app on a Windows Desktop computer it can do more chaos faster and more discreetly than pixnapping can on Android.
On desktop, apps aren't sandboxed. On mobile, they are. Breaking out of the sandbox is a security breach.
On desktop, people don't install an app for every fast food chain. On mobile, they do.
inb4 "graphene solves this"
Note that for TOTP the attack is only feasible if the font and pixel-perfect positions on the screen are known:
> The attacks described in Section 5 take hours to steal sensitive screen regions—placing certain categories of ephemeral secrets out of reach for the attacker app. Consider for example 2FA codes. By default, these 6-digit codes are refreshed every 30 seconds [38]. This imposes a strict time limit on the attack: if the attacker cannot leak the 6 digits within 30 seconds, they disappear from the screen
> Instead, assuming the font is known to the attacker, each secret digit can be differentiated by leaking just a few carefully chosen pixels
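A toy sketch of that second quoted point: if the attacker knows the font, only a handful of pixel positions need to be leaked to identify each digit. Everything below is invented for illustration (a made-up 5x3 bitmap font and a greedy probe-selection pass); the real attack recovers each probed pixel through a GPU timing side channel rather than by reading bitmaps.

```python
# Toy illustration: with a known font, a few pixel probes distinguish all ten
# digits. The 5x3 "font" below is invented for the demo.
DIGITS = {
    "0": ["111", "101", "101", "101", "111"],
    "1": ["010", "110", "010", "010", "111"],
    "2": ["111", "001", "111", "100", "111"],
    "3": ["111", "001", "111", "001", "111"],
    "4": ["101", "101", "111", "001", "001"],
    "5": ["111", "100", "111", "001", "111"],
    "6": ["111", "100", "111", "101", "111"],
    "7": ["111", "001", "010", "010", "010"],
    "8": ["111", "101", "111", "101", "111"],
    "9": ["111", "101", "111", "001", "111"],
}

def signature(digit: str, pixels: list) -> tuple:
    """The on/off values of `digit` at the chosen pixel positions."""
    return tuple(DIGITS[digit][r][c] for r, c in pixels)

def discriminating_pixels() -> list:
    """Greedily pick pixel positions until every digit has a unique signature."""
    candidates = [(r, c) for r in range(5) for c in range(3)]
    chosen = []
    while len({signature(d, chosen) for d in DIGITS}) < len(DIGITS):
        # add the position that splits the most still-ambiguous digits
        chosen.append(max(
            candidates,
            key=lambda p: len({signature(d, chosen + [p]) for d in DIGITS}),
        ))
    return chosen

pixels = discriminating_pixels()
print(f"{len(pixels)} pixel probes suffice: {pixels}")
```

Since ten digits need at least ceil(log2(10)) = 4 binary probes, the greedy pass lands on only a few positions, which is why a 30-second TOTP window can be enough despite each leaked pixel being slow to steal.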
You know it's serious because it's got a domain and a logo. Even security researchers gotta create engagement and develop their brand.
I'd say it's _not_ serious when they need to market it.
Anyone remember the OG Heartbleed?
It's not exactly a new technique, but it's effective for super-targeted attacks. Honestly, if you were inclined and able to get a specific app onto the user's phone, you might as well just work off an Android app that's already been delivered to the user's phone. Like Facebook.
Throw a privacy notice at the users: "This app will take periodic screenshots of your phone." You'd be amazed how many people will accept it.
> Did you release the source code of Pixnapping?

We will release the source code at this link once patches become available: https://github.com/TAC-UCB/pixnapping
It's not exactly impossible to reverse-engineer what's happening here. You could have waited until it was patched, but it sounds like you wanted attention as soon as possible.
A patch for the original vulnerability is already public: https://android.googlesource.com/platform/frameworks/native/... and explicitly states in the commit message that it tries to defeat "pixel stealing by measuring how long it takes to perform a blur across windows."
The researchers aren't releasing their code because they found a workaround to the patch.
Then there's a bunch of "no GPU vendor has committed to patching GPU.zip" and "Google has not committed to patching our app list bypass vulnerability. They resolved our report as “Won’t fix (Infeasible)”."
And their original disclosure was on February 24, 2025, so I don't think you can accuse them of being too impatient.
As for "This app will take periodic screenshots of your phone", you still need an exploit to screenshot things that are explicitly excluded from screenshots (even if the user really wants to screenshot them.)
If genuine, this finger-pointing is an interesting approach to a security vulnerability. The last time I read arguments like these was 20 years ago, from a different firm in California, and it did not work to their advantage.
P.S.: where did you see this discussion?
TFA: https://www.pixnapping.com
The initial disclosure to Google was on February 24, 2025. They had more than enough time.
I was looking for a nice browser game, just judging by the name.
The best defence seems to be to configure your 2FA app to require biometrics. I'm not sure why they didn't mention this option.
Biometrics can't be changed if someone ever figures out how to duplicate them.
I think it's a fair point, but it still triggered this reaction in me: "the only way to prevent more of my data from being stolen is to give Android more of my data".
Modern devices are simply too complex to be completely secure.
We have this tendency to add more and more "features", more and more functionality, 85% of which nobody asked for or has any use for.
I believe there will be a market for a small, bare-bones, secure OS in the future, akin to how FreeBSD is run.
Would love a terminal and make world while on the go (-;
Bunnie's Precursor? It sounds cool, but it's also expensive as fuck. If you thought $100 for a graphing calculator was a ripoff, the Precursor is a similar form factor and level of computational power, but costs $1000 and can't be used in maths exams.
https://www.bunniestudios.com/blog/2020/introducing-precurso... (currently down, might be up later)
Discussion: https://news.ycombinator.com/item?id=45574613
In the previous discussion everyone seemed happy that it had been patched and that there was nothing to worry about (even though Android devices mostly don't run anything like the latest Android).
But in this write-up they say the patch doesn't fully work.
The bigger issue is the side channel itself, which leaks information from secure windows, even from protected buffers, potentially including DRM-protected content.
While the blurs make the side channel easier to use because they provide a clear signal, given that you can predict the exact contents of the screen, I feel like you could get away with just a mask.
I'm not a phone designer, but could we imagine a new class of screen region that is excluded from screen grabs, draw-overs, and soft-focus masks, and then have notifications that display OTPs or PINs subscribe to it?
App developers can already dynamically mark their windows as secure which should prevent any other app from reading the pixels it rendered. The compositor composites all windows, including secure windows and applies any effects like blur. No apps are supposed to be able to see this final composited image, but this attack uses a side channel they found that allows apps on the system to learn information about the pixels within the final composition.
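To make the side-channel mechanics concrete, here is a schematic pure-Python simulation. The timing model is entirely invented for illustration; the real channel comes from data-dependent GPU compression (GPU.zip), where operations like a blur over the composited frame take measurably different time depending on pixel content. The attacker times the operation repeatedly and averages away the noise to recover one pixel bit:

```python
import random

# Invented timing model: an operation over the composited frame takes slightly
# longer when the target pixel is "on" (non-uniform data compresses worse).
# Units are arbitrary.
def simulated_render_time(pixel_on: bool, rng: random.Random) -> float:
    base = 1.00
    data_dependent = 0.30 if pixel_on else 0.0
    noise = rng.gauss(0.0, 0.10)  # measurement jitter
    return base + data_dependent + noise

def leak_pixel(pixel_on: bool, samples: int = 200, seed: int = 0) -> bool:
    """Recover one pixel bit by averaging many noisy timing samples."""
    rng = random.Random(seed)
    avg = sum(simulated_render_time(pixel_on, rng) for _ in range(samples)) / samples
    return avg > 1.15  # threshold halfway between the two expected means

print(leak_pixel(True), leak_pixel(False))
```

Each recovered bit corresponds to one probed pixel of the victim window, and each probe costs real wall-clock time, which is why the researchers report that whole screen regions take hours while a few well-chosen pixels can fit inside a 30-second 2FA window.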
The attack needs you to be able to alter the blur of pixels in a secure window; this could be forbidden. A secure window should draw 100% as requested or not at all.
The blur happens in the compositor. It doesn't happen in the secure windows.
>A secure window should draw 100% as requested or not at all.
Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
> Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
That seems well worth the trade to me.
These sort of restrictions also often interfere with accessibility and screen readers.
Either the screen reader is built into the OS as signed + trusted (and locks out competition in this space), or it's a pluggable interface, that opens an attack surface to read secure parts of the screen.
Yes; that is a perfect example of where I would prefer security over not looking out of place.
Right but night mode is built into the OS so you can easily make an exception (same for things like toasts). Are there use cases where you need a) a secure window, and b) a semi-transparent app-controlled window on top of it?
interesting
Things like this make me wonder if the social media giants use attacks like these to gain certain info about you and advertise to you that way.
Either that, or Meta's ability to track and influence emotional state through behaviour is so good that they can advertise to me things I've only thought of and never uttered or searched for anywhere.
Consider that your thoughts are a consequence of what you've consumed. They're not guessing what you think, they're influencing it.
Similar people thinking similar thoughts I'd wager
Are you sure that isn't just the horoscope effect?
Huh. I don't know that I've seen a whole domain name registered for a paper on a single CVE before.
It's quite standard for "big" CVEs nowadays
I'd say that it started with heartbleed.
Maybe Linus has a point
>"It looks like the IT security world has hit a new low," Torvalds begins. "If you work in security, and think you have some morals, I think you might want to add the tag-line: "No, really, I'm not a whore. Pinky promise" to your business card. Because I thought the whole industry was corrupt before, but it's getting ridiculous," he continues. "At what point will security people admit they have an attention-whoring problem?"
https://www.techpowerup.com/242340/linus-torvalds-slams-secu...
It started at least as far back as https://www.heartbleed.com/, if not earlier.
Interesting. Looks like I upset someone. Not sure why admitting to ignorance is so offensive. Maybe because it's so rare, hereabouts?