[css-fonts-4] Detection-prevention approach to the local font privacy issue #11648
That would be ideal, of course. I agree that (largely as a consequence of the extensive discussion between the two horizontal groups) this is currently framed as an inherently at-odds situation which is solved by users adjusting a single slider on a privacy–i18n axis. @drott @jfkthame @hober @fantasai for additional input. Thanks so much @noamr for suggesting this new option to explore!
This strikes me as the part that needs the most fleshing-out before it could be experimentally tested. The rest of it looks straightforward, and likely to work.
Given that one of the important use-cases for local fonts is to allow users access to fonts that support uncommon languages and writing systems, including perhaps those that are under ongoing development, I don't think we can realistically handle this by just providing a predefined list of local→webfont mappings. Web authors and users need to be able to specify and use fonts that we as standards authors and browser developers have never heard of; that perhaps did not even exist at the time we created our "stock list". So that suggests the "equivalent web-font" needs to be specified by the author. This is exactly what an author can already do today, by listing both `local()` and `url()` sources in the `src` descriptor of an `@font-face` rule.

However, I don't think this works to prevent fingerprinting, because a malicious site that wants to use local fonts as a fingerprinting vector can simply provide the wrong URL for the webfont source. (How would the browser know?) Then the page will trivially be able to tell whether a local font or the (completely different) webfont was used.
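For reference, a minimal sketch of that existing `@font-face` mechanism — the font name and URL here are placeholders, not values from this discussion:

```css
@font-face {
  font-family: "Padauk";
  /* Try the locally installed font first; fall back to the web font. */
  src: local("Padauk"),
       url("https://example.com/fonts/padauk.woff2") format("woff2");
}
```

The browser consults the `src` list in order, which is exactly why a page can observe whether the `local()` entry was used: rendering differs depending on which source won.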
Agreed, but those uncommon fonts are also less likely to be useful for fingerprinting at scale.
We could use that "wrong" font as the local font next time for that origin, or do something like quickly download some of the metrics for the font to compare, or avoid rendering this local font for this origin going forward? Origins that use valid URLs for their font won't suffer from this. I'm sure we can come up with solutions to some of these problems.
Also, the list could be improved over time. Once we had the feature in place to make use of an initial list, it would be very easy to add new entries to make this mechanism that much more effective.
Thank you @noamr for the novel suggestion! A couple of thoughts, questions, and reactions:
I think requiring these kinds of annotations is at root a useful direction for the problem. You'll still need some kind of trusted list to make sure the font the URL points to matches the font that's on the disk, though (otherwise I can still learn whether the visitor / client has font X installed by i. precomputing the width of some text when rendered in X, ii. pointing the annotation-URL to some very different-looking font, and then waiting a while and seeing if the rendered text looks more like X or my fake, very-different-sized X). But maintaining such a list seems very doable, and I think a useful direction for the group to dig into!
I understand this intuition, but in practice, unfortunately, this is not a safe assumption (and part of what makes fingerprinting such a difficult problem in general). Fingerprinting bits that are rare are in some ways less worrying (since they're less likely to occur), but in other ways they're far more worrying (since when they do occur, or are found by the attacker, they're highly identifying). In general, you need to protect against common and uncommon fingerprinting inputs alike.
For fingerprinting detection, you'd be making users download a large number of fake fonts. They'd have to be big to effectively race with a delayed loading of a local font.
The intuition is not to leave these less-common fonts behind, but that a problem at this scale might require several solutions that cover different aspects of it, rather than a single silver bullet. If we find a way to protect users of the more common fonts, and at the same time make it much more difficult to detect users of the less common fonts (by making those requests async, and delaying them if the font is not painted in a visible place), we can compartmentalise this problem further and perhaps make room for an additional solution in the future for the remaining aspects.
Looking at #5421 and #11571, as well as the F2F, it seems like a lot of the discussion assumes that privacy and i18n needs are a zero-sum game when it comes to local fonts. Perhaps this is not a given, and we can find a win-win solution that doesn't compromise either?
The privacy issue around local fonts doesn't come from the fonts being renderable, but rather from the local availability of the fonts being detectable by the document. See https://github.com/jakesmo/fingerprintjs2/blob/master/fingerprint2.js for how this is used:

- render a `<span>` with that font
- compare the `<span>`'s width with a known width
- position the `<span>` so that it's not visible to the user (this is crucial)

What if, instead of making those local fonts unavailable altogether, we'd make them indistinguishable from an equivalent cached web font?
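The width-comparison trick can be sketched as follows. This is a hedged illustration, not fingerprint2.js's actual code: `measure` stands in for rendering an offscreen `<span>` with a given font-family stack and reading its `offsetWidth`, and is supplied by the caller.

```javascript
// Generic fallback families whose baseline widths we compare against.
const GENERIC_FAMILIES = ['monospace', 'serif', 'sans-serif'];

// `measure(fontFamilyStack)` is assumed to return the rendered width of a
// fixed test string in that stack (in a real page: a hidden <span>'s
// offsetWidth). Returns true if `fontName` appears to be installed locally.
function detectLocalFont(fontName, measure) {
  // Baseline widths of the test string in each generic family alone.
  const baselines = GENERIC_FAMILIES.map((generic) => measure(generic));
  // Request the candidate font with each generic family as the fallback.
  // If any measured width differs from its baseline, the browser substituted
  // the locally installed font for the fallback, so the font is present.
  return GENERIC_FAMILIES.some(
    (generic, i) => measure(`'${fontName}', ${generic}`) !== baselines[i]
  );
}
```

This is why a `local()`-with-fallback `src` list alone doesn't hide availability: whichever source wins changes the measured widths, and the page can observe the difference.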
Something along the lines of the following:
Looking at existing fingerprinting code that uses local fonts, such as the above, a solution along these lines would make that style of fingerprinting very expensive and much less usable, and could be tweaked further for the more difficult cases in the future.