Imagine for a second that you are browsing the web, at home behind your nice router or at work behind a set of firewalls, routers and switches. Imagine that this network hosts some private services, like databases or an internal web server. Now the amazing remote site you are visiting includes in its HTML source a script tag or an image tag that targets some resource in your local network. What does the browser do? It just loads it.
Take this HTML as an example (the local addresses are made up for illustration):

<html>
  <body>
    Test
    <img src="http://192.168.1.1/status.png" />
    <script src="http://10.0.0.1/app.js"></script>
  </body>
</html>
Which security layer has been bypassed here?
One day I heard that this is an intrinsic feature of the web.
If you think for a moment about the impact of that feature, or if you look at some well-known JavaScript attacks, you will realize that you can do things like:
- Request internal resources such as JavaScript files, web services or JSON configuration files
- DDoS a local network
- Map your local network infrastructure
And all of these attacks can be executed even if you have a DMZ in place. A minimal sketch of the last one, network mapping, follows.
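The mapping idea can be sketched with image load timing: a malicious page asks the browser to load images from local addresses and measures how quickly each request settles. The address range, path and timing interpretation below are illustrative assumptions, not a definitive implementation:

// Probe a local host by loading an image from it and timing the result.
function probe(host) {
  var img = new Image();
  var start = Date.now();
  img.onload = img.onerror = function () {
    // A fast load or error usually means something answered at that address;
    // non-existent hosts tend to hang until a much longer network timeout.
    console.log(host + ' answered after ' + (Date.now() - start) + ' ms');
  };
  img.src = 'http://' + host + '/favicon.ico';
}

// Sweep a typical home network range.
for (var i = 1; i < 255; i++) {
  probe('192.168.1.' + i);
}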
Similarities
A number of engineers have been working on a similar issue, but from another perspective. The result of all that work is one of the best lines of research in web security in recent years: Content Security Policy (CSP).
This technology allows websites to protect their own context against unwanted content loaded from other origins. It is very useful to fight XSS attacks, mixed content, and so on.
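For example, a site that only wants to run its own scripts plus those from one trusted host could send a header like this (the hostname is made up):

Content-Security-Policy: default-src 'self'; script-src 'self' scripts.example.com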
My question here is: who protects the user from a malicious website that loads content from an unauthorized resource? The browser is the best actor to handle this.
Solution
My proposal to fight this issue is quite simple and can reuse a large amount of existing code in current browsers. The idea is to have a default CSP at the browser level that refuses to load content from local networks.
This could be set by default in the browser, or perhaps at the domain level if you are in a corporate environment.
What are the main steps to get this technology working in your browser?
- Improve the request flow: apply this new default CSP rule in the browser's request flow.
- Intersect CSPs: we need an algorithm to intersect the website's CSP with the default browser CSP, giving priority to the browser settings (a sketch follows this list).
- Negative rules: to be able to reject unwanted accesses we need to extend the CSP specification to allow negative rules, something like !<source>. Example: !192.168.*.*
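Here is a minimal sketch of one possible intersection algorithm, assuming both policies have already been parsed into maps from directive to source lists. The function names, the parsed representation and the handling of wildcards are all my own assumptions:

// Turn a negative rule like '!192.168.*.*' into a host-matching pattern.
function negativeToRegExp(rule) {
  var pattern = rule.slice(1).replace(/\./g, '\\.').replace(/\*/g, '[^.]*');
  return new RegExp('^' + pattern + '$');
}

// Merge the site policy with the browser policy, letting the browser win.
function intersect(sitePolicy, browserPolicy) {
  var merged = {};
  Object.keys(sitePolicy).forEach(function (d) {
    merged[d] = sitePolicy[d].slice();
  });
  Object.keys(browserPolicy).forEach(function (d) {
    var negatives = browserPolicy[d];
    // Drop any site source that a browser negative rule explicitly forbids,
    // e.g. '127.0.0.1' against '!127.*.*.*' ...
    var kept = (merged[d] || []).filter(function (src) {
      return !negatives.some(function (neg) {
        return negativeToRegExp(neg).test(src);
      });
    });
    // ... then append the negative rules themselves.
    merged[d] = kept.concat(negatives);
  });
  return merged;
}

How a site wildcard like * should interact with the negative rules is an open design question; in this sketch the negative rules are simply kept alongside it, and they would be checked first when a request is evaluated.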
Hypothetical case
Imagine a website (e.g. example.com) responding to a request with this CSP header:
Content-Security-Policy: default-src 'self'; img-src * 127.0.0.1; script-src scripts.example.com
and imagine this default browser CSP setting:
Content-Security-Policy: img-src !127.*.*.* !192.168.*.* !10.*.*.*; script-src !127.*.*.* !192.168.*.* !10.*.*.*;
The final policy applied for the current website would be:
Content-Security-Policy: default-src 'self'; img-src !127.*.*.* !192.168.*.* !10.*.*.*; script-src scripts.example.com !127.*.*.* !192.168.*.* !10.*.*.*;
In this case the browser policy has higher priority and overrides any rule set by the current website. This behaviour lets the browser reject suspicious accesses to local networks while still trusting other URLs, like the ones below (see the snippet after this list):
- http://127.0.0.1/test.png (Rejected)
- http://10.12.3.2/assets/javascript/app.js (Rejected)
- http://192.168.3.23:3306/test (Rejected)
- http://scripts.example.com/javascripts/application.js (Accepted)
- http://i.imgur.com/aBrAx.jpg (Accepted)
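To double-check those outcomes, here is a quick sketch that tests each URL's host against the browser's negative rules. Real CSP source matching is more involved, and the host extraction here is deliberately naive:

var negatives = ['!127.*.*.*', '!192.168.*.*', '!10.*.*.*'].map(function (rule) {
  var pattern = rule.slice(1).replace(/\./g, '\\.').replace(/\*/g, '[^.]*');
  return new RegExp('^' + pattern + '$');
});

['http://127.0.0.1/test.png',
 'http://10.12.3.2/assets/javascript/app.js',
 'http://192.168.3.23:3306/test',
 'http://scripts.example.com/javascripts/application.js',
 'http://i.imgur.com/aBrAx.jpg'].forEach(function (url) {
  var host = url.split('/')[2].split(':')[0]; // strip scheme and port
  var rejected = negatives.some(function (re) { return re.test(host); });
  console.log(url + ' -> ' + (rejected ? 'Rejected' : 'Accepted'));
});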
