theandrewbailey · 5 months ago
CSP is really great at plugging these kinds of security holes, but it flummoxes me that most developers and designers don't take it seriously enough to implement it properly (styles must only be set through <link>, and JS likewise exists only in external files). Doing any styling or scripting inline should be frowned upon as hard as table-based layouts.
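
For reference, this is the sort of policy that enforces it (a sketch; 'self' limits scripts and styles to same-origin files, so anything inline is refused):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self'
```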
chrismorgan · 5 months ago
> Doing any styling or scripting inline should be frowned upon as hard as table-based layouts.

I strongly disagree: inlining your entire CSS and JS is absurdly good for performance, up to a surprisingly large size. If you have less than 100KB of JS and CSS (which almost every content site should be able to manage, most trivially, and almost all should aim for), there’s simply no question about it: I would recommend deploying with only inline styles and scripts. The threshold where it becomes more subjective is, for most target audiences, possibly over half a megabyte by now.

Seriously, it’s ridiculous just how good inlining everything is for performance, whether for first or subsequent page load; especially when you have hundreds of milliseconds of latency to the server, but even when you’re nearby. Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.

It’s also a lot more robust. Fetching external resources is much more fragile than people tend to imagine.

theandrewbailey · 5 months ago
It's called Content Security Policy, not Content Performance Policy. My thoughts:

1. Inlining everything burns bandwidth, even at 100KB per page load. (I hope your cloud hosting bills are small.) External resources can be cached across multiple page loads.

2. Best practice is to load CSS files as early as possible in the <head>, and to load (and defer) all scripts at the end of the page; the browser can request the CSS before it finishes loading the page (see the sketch after this list). If you're inlining scripts, you can't defer them.

3. If you're using HTTP/2+ (it's 2025, why aren't you?[0]), the connection stays open long enough for the browser to parse the document and request external resources over it, cutting down on round trips. If you have only one script and one CSS file, both loaded from the same server as the HTML, the hit is small.

4. As allan_s mentioned, you can use nonce values, but those feel like a workaround to me, and the values should change on each page load.
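
A sketch of what point 2 looks like in practice (file names are placeholders):

```
<head>
  <link rel="stylesheet" href="/styles.css"> <!-- requested as soon as it's parsed -->
</head>
<body>
  ...
  <script src="/app.js" defer></script> <!-- fetched in parallel, runs after parsing -->
</body>
```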

> Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.

Source? I'd really like to know how and when slow caches can happen, and possibly how to prevent them.

[0] Use something like nginx, HAProxy, or Cloudflare in front of your server if needed.

bgirard · 5 months ago
> If you have less than 100KB of JS and CSS (which almost every content site should be able to, most trivially, and almost all should aim to), there’s simply no question about it

Do you have data to back this up? What are you basing this statement on?

My intuition agrees with you for the reasons you state, but when I tested this in production, my workplace found the break-even point to be, surprisingly, around 1KB. Unfortunately we never shared the experiment and data publicly.

allan_s · 5 months ago
Note that for inline styles/scripts, as long as you're not using `style=''` or `onclick=''`, you can use `nonce=` (or a hash) so that, to my understanding, newly added inline scripts will not be tolerated, letting you have the best of both worlds.
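
A sketch of how that looks (the nonce value is illustrative and must be freshly generated for every response):

```
Content-Security-Policy: script-src 'nonce-d2FsZG8'

<script nonce="d2FsZG8">/* runs: nonce matches the header */</script>
<script>/* blocked: no nonce, so an injected script never executes */</script>
```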
eru · 5 months ago
I think that's a limitation of our implementations. In principle, it's just bytes that we're shoving down the pipe to the browser, so it shouldn't matter for performance whether those bytes are 'inline' or in 'external resources'.

In principle, you could imagine the server packing all the external resources that the browser will definitely ask for together, and just sending them together with the original website. But I'm not sure how much re-engineering that would be.

athanagor2 · 5 months ago
Honest question: I don't understand how forbidding inline scripts and styles improves security. Also, it would be a serious inconvenience to the way we distribute some of our software right now lol
theandrewbailey · 5 months ago
CSP tells the browser where scripts and styles may come from (not just inline vs. not, but origins/domains, too). Let's pretend that an attacker can inject something into a page directly (like a SQL injection, but HTML). That script can do just about anything: steal data from any form on the page (login, address, or payment details), or substitute your elements for theirs. If inline resources are forbidden, the damage can be limited or stopped.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP

flir · 5 months ago
Cross-Site Scripting. If a user injects a malicious script into the page, it doesn't get run.
allan_s · 5 months ago
Forbidding inline scripts protects you from

```
<h3> hello $user </h3>
```

with $user being equal to `<script>/* sending your session cookie out, or the value of the tag #credit-card etc. */</script>`

You would be surprised how many template libraries that supposedly escape things for you are actually vulnerable to this, so "React escapes for me" is not something you should rely on 100%. In a company I was working for, the common vulnerability found was

`<h3> {{ 'hello dear <strong>$user</strong>' | translate | unsafe }}`, with `unsafe` deactivating the auto-escape, because people wanted the feature released and thinking of a way to translate a string intermixed with HTML was too time-consuming.
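
The usual fix is to keep the markup out of the translated string so auto-escaping can stay on; a sketch in the same template style as above:

```
<h3> {{ 'hello dear' | translate }} <strong>{{ user }}</strong> </h3>
```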

For inline styles: they can hide elements so that you input a sensitive value into the wrong field, or load a background image (which will 'ping' a recipient host).

With CSP activated the vulnerability may exist, but the JavaScript/style will not be executed/applied, so it's a safety net to cover the 0.01% case of "somebody has found an exploit in …"

bryanrasmussen · 5 months ago
Sounds weird to me too, although I guess there could be a script that was not allowed to do CORS that then created an inline script and did its CORS stuff in that script; about the only way I can think of it being bad.
sebazzz · 5 months ago
> it flummoxes me that most developers and designers don't take them seriously enough to implement properly (styles must only be set though <link>, and JS likewise exists only in external files). Doing any styling or scripting inline should be frowned upon as hard as table-based layouts.

At our place we do abide by those rules, but we also use 3rd-party components like Telerik/Kendo, which require unsafe-inline for both scripting and styling. Sometimes you have no choice but to relax your security policy.

pocketarc · 5 months ago
> should be frowned upon as hard as table-based layouts

I absolutely agree with you. I've been very, very keen on CSP for a long time; it feels SO good to know that that vector for exploiting vulnerabilities is plugged.

One thing that's very noticeable: it seems to block/break -a lot- of web extensions. Basically every error I see in Sentry is of the form "X.js blocked" or "random script eval blocked", stuff that's all extension-related.

midtake · 5 months ago
Why? If you're the content owner, you should be able to inline. If you factor out inline code, you will likely just trust your own other domain; when everything is on a CDN, this can lead to less security, not more.

Do you mean people should be banned from inlining Google Analytics or Meta Pixel or Index Now or whatever, which makes a bunch of XHRs to who knows where? Absolutely!

But nerfing your own page performance just to make everything CSP-compliant is a fool's errand.

myko · 5 months ago
I find not being able to use inline styles extremely frustrating
davidmurdoch · 5 months ago
Firefox really needs to fix their CSP handling for extensions before doing this kind of thing.

Here is the 9 year old bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1267027

And their extension store does not permit workarounds, even though they themselves have confirmed it's a bug.

evilpie · 5 months ago
While this is definitely annoying, most of the time extensions can work around it without resorting to changes that themselves weaken security.

For example I helped uBlock Origin out in 2022 when they ran into this: https://github.com/uBlockOrigin/uBlock-issues/issues/235#iss...

KwanEsq · 5 months ago
And it's worth noting that, as for your comment later in that thread about sandbox being an issue, that's been fixed too as of Firefox 128: https://bugzilla.mozilla.org/show_bug.cgi?id=1411641
davidmurdoch · 5 months ago
Thanks for this! I'll look into implementing it soon.
Semaphor · 5 months ago
Having fewer permissions for extensions than one might want seems far less important than making the browser more secure…
joshuaissac · 5 months ago
Arguably, it can make it less secure by reducing the user's control over what content the browser loads or what scripts it executes. For example, users may be using extensions to selectively replace harmful content (like intrusive JavaScript, tracking) with benign content. It is a balance between security for the user and security for the website owner.
raxxorraxor · 5 months ago
In the current browser landscape I would think not. Firefox is no less secure than Chrome or Safari and both are subject to economic incentives. You could even argue these issues negatively relate to security as well.
gear54rus · 5 months ago
One of the possible workarounds would be to just remove the damn header before it causes any further inconvenience. I think they do allow `webRequest` API usage in the store, don't they?
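
A sketch of what that would look like (untested, Firefox MV2-style blocking `webRequest`):

```
// strip CSP from every response before the page sees it
browser.webRequest.onHeadersReceived.addListener(
  (details) => ({
    responseHeaders: details.responseHeaders.filter(
      (h) => h.name.toLowerCase() !== 'content-security-policy'
    ),
  }),
  { urls: ['<all_urls>'] },
  ['blocking', 'responseHeaders']
);
```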
evilpie · 5 months ago
Removing security headers like Content-Security-Policy is forbidden by the addons.mozilla.org policy.

https://extensionworkshop.com/documentation/publish/add-on-p...

davidmurdoch · 5 months ago
We modified the CSP to inject a per-user generated nonce that exempts the extension's script from the policy.

They said this was not allowed and removed it from the extension store.

pama · 5 months ago
Wouldn’t fixing this bug reduce security?
shakna · 5 months ago
If you are using filter scripts to block specific domains or script payloads, that extension's scripts can't load on a properly secured CSP page. And that page may be using CSP to protect itself while throwing up ads... or malware.
davidmurdoch · 5 months ago
No, it's explained more in the issue. An extension is part of the "User Agent". The CSP header in FF is applied to extensions almost arbitrarily.
lol768 · 5 months ago
This is an entire class of vulnerabilities that would've never been possible with XUL, is that correct?

I appreciate they had to move for other reasons, but I also really don't like the idea that the DevTools and the browser chrome itself now have all of the same security issues/considerations as anything else "web" does. It was bad with Electron (XSS suddenly becoming an RCE) and it makes me pretty nervous here too :(

emiliocobos · 5 months ago
XUL would've had the same issues.
WorldMaker · 5 months ago
XUL would have had worse issues, because it could make arbitrary XPCOM calls into all sorts of native components, exposing nearly the full gamut of native-component issues in code written mostly in C/C++.

XUL was in many ways always a ticking time bomb.

sebazzz · 5 months ago
It still surprises me that parts of Firefox still use XUL.
myfonj · 5 months ago
I am surprised there is no policy that would allow inline event handlers set in the initial payload (or in stuff emitted by document.write), but neuter any set after the initial render via `….setAttribute('on…', …)`.

That would keep "static form" helpers still functional, but disable (malicious) runtime templating.
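
To illustrate the distinction (handler names are made up):

```
<!-- in the initial payload: would stay allowed under such a policy -->
<button onclick="validateForm()">Submit</button>

<script>
  // added after the initial render: would be neutered
  document.querySelector('button').setAttribute('onclick', 'exfiltrate()');
</script>
```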

yanis_t · 5 months ago
CSP is great in mitigating a whole bunch of security concerns, and it also forces some good practices (e.g. not using inline scripts).

I recently implemented a couple of tools to generate[1] and validate[2] a CSP. Would be glad if anybody tries them.

[1] https://www.csphero.com/csp-builder [2] https://www.csphero.com/csp-validator

CamouflagedKiwi · 5 months ago
I can't help but wonder if this HTML-based setup is actually more trouble than it's worth. It seems there's a very complex ecosystem in there that is hard to reason about in this way, and it's a top-level requirement for a browser to sandbox the various bits of code being executed from a web page.

Obviously hard to say what those tradeoffs are worth, but I'd be a bit nervous about it. The work covered by this post is a good thing, of course!

bbarnett · 5 months ago
Do this, and then use Firefox's profiles to have weaker instances without these configs.

Why? Some sites implement this and then break it, sadly.

I have extremely locked down instances for banks and so on. On Linux I have an icon which lets me easily launch those extra profiles.

I also use user.js, which means I can just drop in changes, write comments for each config line, and keep it version-controlled. Great for cloning to other devices, too.
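
If it helps, user.js is just one pref per line; a sketch with the kind of comments I mean (the pref choices are illustrative):

```
// user.js, dropped into the profile directory
user_pref("privacy.resistFingerprinting", true);         // hardened banking profile only
user_pref("browser.contentblocking.category", "strict"); // strict tracking protection
```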

SebFender · 5 months ago
CSP is a soothing cream, but it is usually easily bypassed by other simple attacks relying on poor DOM management and security. To this day, my team has never found so many web vulnerabilities as by just going into the DOM...
sixaddyffe2481 · 5 months ago
Their blog has a lot of posts on trying to attack Firefox. If it's so simple, why are you not in the bug bounty hall of fame? :)
h4ck_th3_pl4n3t · 5 months ago
The problem with CSP is that it's fixing the effect, not the cause.

It is also made optional (the "never break the web" mentality), so what happens in practice is the same as with CORS: allow all, because web devs don't understand what to do and don't have time to read the RFC.

For example: try getting a web page to run that uses a web assembly binary _and_ an external JS library. Come back after 2 weeks of debugging and let me know what your experience was like, and why you eventually gave up on it.
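
In my experience the missing piece is usually the `'wasm-unsafe-eval'` source keyword; a sketch (the CDN host is a placeholder):

```
Content-Security-Policy: script-src 'self' https://cdn.example.com 'wasm-unsafe-eval'
```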

SebFender · 5 months ago
Professional limits...