How to prevent an external JS request - javascript

I'm using a service that automatically constructs and hosts launch pages. In the body of their code, they have a call to jQuery:
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js?cache=2015-09-22-09"></script>
Unfortunately, since I'm in China, googleapis.com is blocked and the page has to wait for this request to time out before it will render. This portion is autogenerated as part of the template and I can't change it. However, I can insert custom JavaScript in the head and the body.
Is there any way I can prevent this code from making the request to googleapis.com, or force it to abort after the request has already been made?
-EDIT-
Here's a screen cap of the network tab when I try to load the page. As you can see, the call to googleapis.com hangs for 1.4 minutes until it times out, at which point DOMContentLoaded fires and the entire page loads.

Right, if you are able to put HTML in the head of the document, not just execute JavaScript, you can use a meta tag to block external script loading:
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">
From the Mozilla Content-Security-Policy Meta Tag Docs:
Authors are strongly encouraged to place meta elements as early in the document as possible, because policies in meta elements are not applied to content which precedes them. In particular, note that resources fetched or prefetched using the Link HTTP response header field, and resources fetched or prefetched using link and script elements which precede a meta-delivered policy will not be blocked.
So the meta tag will only work in the head, and it must come before the script which loads jQuery. You can also whitelist URLs by adding them to the content attribute of the meta tag.
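For example, a hypothetical whitelist that allows your own scripts plus one external CDN (cdn.example.com is a placeholder, not from the original question):
<meta http-equiv="Content-Security-Policy" content="script-src 'self' https://cdn.example.com">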
If you can only execute JavaScript, you can add the meta tag dynamically. Unfortunately, the browser has probably already decided on its policies by the time the tag is added. Nevertheless, it can be added with:
var meta = document.createElement('meta');
meta.httpEquiv = "Content-Security-Policy";
meta.content = "script-src 'self'";
document.getElementsByTagName('head')[0].appendChild(meta);
More interesting reading material for solving the 'prevent an external JS request' mystery:
Use JavaScript to prevent a later `<script>` tag from being evaluated?
Good Luck!

Consider using a Content Security Policy. It would be an unusual use case, but with the CSP you can tell the browser that it is not allowed to access googleapis.com before your government even gets a say in the matter. In this way, the browser won't even try to load it, and the page will not hang.

Yeah, @Niet's suggestion seems nice. To add to his answer, here's how you can block loading scripts from the googleapis domain using CSP:
Content-Security-Policy: script-src 'self';
This header instructs the browser to only execute scripts served from your own domain.

If you have access to the web server and it's running Windows, you could add an entry to the hosts file to redirect the Google address to the local web server and serve the JavaScript from there (if you match the folder structure of the JavaScript link):
c:\windows\system32\drivers\etc\hosts
127.0.0.1 ajax.googleapis.com
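For the redirect to actually serve the file, the local web root would need to mirror the CDN's folder structure, e.g. (a hypothetical layout; <webroot> stands for your server's document root):
<webroot>/ajax/libs/jquery/2.1.1/jquery.min.js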

Related

How do I ignore "Blocked loading mixed active content" [duplicate]

This morning, upon upgrading my Firefox browser to the latest version (from 22 to 23), some of the key aspects of my back office (website) stopped working.
Looking at the Firebug log, the following errors were being reported:
Blocked loading mixed active content "http://code.jquery.com/ui/1.8.10/themes/smoothness/jquery-ui.css"
Blocked loading mixed active content "http://ajax.aspnetcdn.com/ajax/jquery.ui/1.8.10/jquery-ui.min.js"`
among other errors caused by the latter of the two above not being loaded.
What does the above mean and how do I resolve it?
I found this blog post which cleared up a few things. To quote the most relevant bit:
Mixed Active Content is now blocked by default in Firefox 23!
What is Mixed Content?
When a user visits a page served over HTTP, their connection is open for eavesdropping and man-in-the-middle (MITM) attacks. When a user visits a page served over HTTPS, their connection with the web server is authenticated and encrypted with SSL and hence safeguarded from eavesdroppers and MITM attacks.
However, if an HTTPS page includes HTTP content, the HTTP portion can be read or modified by attackers, even though the main page is served over HTTPS. When an HTTPS page has HTTP content, we call that content “mixed”. The webpage that the user is visiting is only partially encrypted, since some of the content is retrieved unencrypted over HTTP. The Mixed Content Blocker blocks certain HTTP requests on HTTPS pages.
The resolution, in my case, was to simply ensure the jQuery includes were as follows (note the removal of the protocol):
<link rel="stylesheet" href="//code.jquery.com/ui/1.8.10/themes/smoothness/jquery-ui.css" type="text/css">
<script type="text/javascript" src="//ajax.aspnetcdn.com/ajax/jquery.ui/1.8.10/jquery-ui.min.js"></script>
Note that the temporary 'fix' is to click on the 'shield' icon in the top-left corner of the address bar and select 'Disable Protection on This Page', although this is not recommended for obvious reasons.
UPDATE: This link from the Firefox (Mozilla) support pages is also useful in explaining what constitutes mixed content and, as given in the above paragraph, does actually provide details of how to display the page regardless:
Most websites will continue to work normally without any action on your part.
If you need to allow the mixed content to be displayed, you can do that easily:
Click the shield icon Mixed Content Shield in the address bar and choose Disable Protection on This Page from the dropdown menu.
The icon in the address bar will change to an orange warning triangle Warning Identity Icon to remind you that insecure content is being displayed.
To revert the previous action (re-block mixed content), just reload the page.
It means you're calling http from https. You can use src="//url.to/script.js" in your script tag and it will pick the page's protocol automatically.
Alternatively, you can use https in your src even if you will be publishing it to an http page. This avoids the potential issue mentioned in the comments.
In the absence of a whitelist feature you have to make the "all" or "nothing" choice. You can disable mixed content blocking completely.
The Nothing Choice
You will need to permanently disable mixed content blocking for the current active profile.
In the "Awesome Bar," type "about:config". If this is your first time you will get the "This might void your warranty!" message.
Yes you will be careful. Yes you promise!
Find security.mixed_content.block_active_content. Set its value to false.
The All Choice
iDevelApp's answer is awesome.
Put the <meta> tag below into the <head> section of your document to force the browser to replace insecure connections (http) with secure connections (https). This can solve the mixed content problem if the connection is able to use https.
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
If you want to block then add the below tag into the <head> tag:
<meta http-equiv="Content-Security-Policy" content="block-all-mixed-content">
The error is given because of security. To resolve it, please use "https" not "http" in the website URL.
For example :
"https://code.jquery.com/ui/1.8.10/themes/smoothness/jquery-ui.css"
"https://ajax.aspnetcdn.com/ajax/jquery.ui/1.8.10/jquery-ui.min.js"
On any page that makes a mixed-content call (https to http) which is then blocked, we can add the following entry to the <head> and get rid of the mixed content error.
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
If you are consuming an internal service via AJAX, make sure the URL points to https; this cleared up the error for me.
Initial AJAX URL: "http://XXXXXX.com/Core.svc/" + ApiName
Corrected AJAX URL: "https://XXXXXX.com/Core.svc/" + ApiName
Simply changing HTTP to HTTPS solved this issue for me.
WRONG :
<script src="http://code.jquery.com/jquery-3.5.1.js"></script>
CORRECT :
<script src="https://code.jquery.com/jquery-3.5.1.js"></script>
I had this same problem because I bought a CSS template that grabbed an external JavaScript file via http://whatever.js.com/javascript.js. I went to that URL in my browser, changed it to https://whatever..., and it still worked over SSL, so in my HTML script tag I just changed the URL to use https instead of http and it worked.
To force a redirect to the https protocol, you can also add this directive to the .htaccess in the root folder:
RewriteEngine on
RewriteCond %{REQUEST_SCHEME} =http
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
@Blender's comment is the best approach. Never hard-code the protocol anywhere in the code, as it will be difficult to change if you move from http to https: you would need to manually edit and update all the files.
A protocol-relative URL is always better, as it automatically detects the protocol:
src="//code.jquery.com
I've managed to fix this using the following:
For Firefox user
Open a new tab and enter about:config in the address bar to go to the configuration page.
Search for security.mixed_content.block_active_content
Change TRUE to FALSE.
For Chrome user
Click the Not Secure Warning next to the URL
Click Site Settings on the popup box
Change Insecure Content to Allow
Close and refresh the page
I found that if you have issues including or mixing your page with something like http://www.example.com, you can fix it by using //www.example.com instead.
I faced the same problem when my site went from http to https. We had added a rule redirecting all requests from http to https.
You need to add the redirection rule for intra-site requests, but remove it for external js/css references.
I just fixed this problem by adding the following code in header:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
@if (env('APP_DEBUG'))
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
@endif
This is Laravel Blade syntax (the directive markers are @if/@endif). Remember to use it for debugging only, to avoid MITM attacks and eavesdropping.
Also, switching
http -> https
in Ajax calls, normal JS scripts, or CSS references will likewise solve the issue.
If your app server is WebLogic, make sure a WLProxySSL ON entry exists (and is not commented out) in the weblogic.conf file in the web server's conf directory. Then restart the web server; it will work.
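For reference, a minimal sketch of what that entry might look like in an Apache-style weblogic.conf (the host and port are assumptions, not from the answer):
<IfModule mod_weblogic.c>
    WebLogicHost backend.example.com
    WebLogicPort 7001
    WLProxySSL ON
</IfModule>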

How to fetch JavaScript from server, track download progress, and not use `unsafe-eval` in Content-Security-Policy?

I have a heavy JavaScript file on the server (> 3MB). I want to load the page fast and show a loading progress bar to the user. Currently, I am using fetch and WritableStream to download the data and to track the download progress as:
let resource = await fetch('heavy_file.js')
resource.clone().body.pipeTo(new WritableStream({
    write(t) { on_receive(t.length) }
}))
And then I am using the Function constructor to evaluate it. This has several problems. How can I:
1. Load the script, preferably using fetch? (I'm using the same method to load WASM files, I want to track their download progress as well, and the WebAssembly.compileStreaming API requires the use of fetch.)
2. Track the download progress in a way that works across modern browsers?
3. Use this solution without enabling script-src 'unsafe-eval' in Content-Security-Policy?
PS.
Of course, currently, we need to use script-src 'wasm-eval' in Chrome when loading WASM files until the bug is fixed.
Option 1
If you can calculate the hash of the script you're loading in advance (i.e. it's not generated dynamically), then a simple option to avoid enabling script-src 'unsafe-eval' in Content-Security-Policy would be to add just the hash of that specific script to the CSP header. This still ensures that you're not executing any untrusted code, while allowing you to load and execute the script manually.
MDN has some more examples of implementing such CSP policies here.
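As an illustration, the hash can be computed ahead of time, e.g. with a small Node script (a sketch; heavy_file.js stands in for whatever script you actually serve):
// Compute the base64-encoded SHA-256 of a script for use in a CSP header.
const crypto = require('crypto');
const fs = require('fs');

const body = fs.readFileSync('heavy_file.js');
const hash = crypto.createHash('sha256').update(body).digest('base64');
console.log(`script-src 'sha256-${hash}'`);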
As for loading itself, you have two different paths from here:
Option 1.1
Combine the hash with the CSP3 'unsafe-hashes' policy, which will allow you to keep using Function or eval as you currently do, while still limiting execution to trusted code.
For example, if you have a script like
alert('Hello, world.');
then your CSP header should contain
Content-Security-Policy: script-src 'unsafe-hashes' 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng='
Unfortunately, CSP Level 3, or at least this option, is supported only in Chromium at the time of writing.
Option 1.2
Instead of using Function or eval, you can dynamically create a script tag from JavaScript, populate its textContent with the response content, and inject it into the DOM:
let resource = await fetch('heavy_file.js')
resource.clone().body.pipeTo(new WritableStream({
    write(t) { on_receive(t.length) }
}));
resource.text().then(res => {
    let s = document.createElement('script');
    s.textContent = res;
    document.head.appendChild(s);
});
In this case you only need to add the hash of the script to the CSP and it will work across all browsers:
Content-Security-Policy: script-src 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng='
Option 2
If you do want to support dynamically generated scripts, then your only option might be to move your progress-tracking code to a Service Worker, and then use Client.postMessage to communicate progress to a script on the page.
This will work for any content served from your origin, but only once the Service Worker is installed - usually on a subsequent page load - so it might not help if the large script is part of the first page a user visits on your website.
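A rough sketch of what the Service Worker side could look like (the file name and message shape are assumptions, not a definitive implementation):
// sw.js - stream each response through, reporting progress to the page.
self.addEventListener('fetch', event => {
  event.respondWith((async () => {
    const response = await fetch(event.request);
    const total = Number(response.headers.get('Content-Length')) || 0;
    const client = await self.clients.get(event.clientId);
    const reader = response.body.getReader();
    let loaded = 0;
    const stream = new ReadableStream({
      async pull(controller) {
        const { done, value } = await reader.read();
        if (done) { controller.close(); return; }
        loaded += value.length;
        // Report progress to the page that issued the request.
        if (client) client.postMessage({ url: event.request.url, loaded, total });
        controller.enqueue(value);
      }
    });
    return new Response(stream, { headers: response.headers });
  })());
});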

Modify <meta> tag with JS (chrome extension) on response receiving

I have a Chrome extension that adds a panel to the page in the floating iframe (on extension button click). There's certain JS code that is downloaded from 3rd party host and needs to be executed on that page. Obviously there's XSS issue and extension needs to comply with content security policies for that page.
Previously I had to deal with CSP directives that are passed via response headers, and was able to override those by setting a hook in chrome.webRequest.onHeadersReceived. There I was adding my host URL to the content-security-policy headers. It worked: headers were replaced, and the new directives applied to the page.
Now I have discovered websites that set the CSP directives via a <meta> tag instead of response headers. For example, app pages in iTunes (https://itunes.apple.com/us/app/olympics/id808794344?mt=8) do this. There is also an additional meta tag named web-experience-app/config/environment (?) that somewhat duplicates the values set in the content of the tag with http-equiv="Content-Security-Policy".
This time I am trying to add my host name to the meta tag inside chrome.webNavigation.onCommitted or onCompleted event listeners (vanilla JS via chrome.tabs.executeScript). I also experimented with running the same code from the webRequest onCompleted listener (the last step of the lifecycle, according to https://developer.mozilla.org/en-US/Add-ons/WebExtensions/API/webRequest).
When I inspect the page after it loads, I see the meta tags have changed. But when I click on my extension to load the iframe and execute JS, the console prints the following errors:
Refused to frame 'https://myhost.com' because it violates the following Content Security Policy directive: "frame-src 'self' *.apple.com itmss: itms-appss: itms-bookss: itms-itunesus: itms-messagess: itms-podcasts: itms-watchs: macappstores: musics: apple-musics:".
I.e. my tag update was not effective.
I have several questions: first, am I doing this right? Am I doing the update at the proper event? When in the page lifecycle is the content of the meta tags read? Will it be re-applied automatically after the tag content changes?
As of March 2018, Chromium doesn't allow extensions to modify the responseBody of a request: https://bugs.chromium.org/p/chromium/issues/detail?id=487422#c29
"WebRequest API: allow extensions to read response body" is a ticket from 2015. It is not on a path to being resolved and needs some work/help.
--
Firefox has a webRequest filter implementation that allows modifying the response body before the page's meta directives are applied:
https://developer.mozilla.org/en-US/Add-ons/WebExtensions/API/webRequest/filterResponseData
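A rough Firefox-only sketch of that approach (the URL pattern, host, and the directive being rewritten are illustrative assumptions):
browser.webRequest.onBeforeRequest.addListener(details => {
  const filter = browser.webRequest.filterResponseData(details.requestId);
  const decoder = new TextDecoder('utf-8');
  const encoder = new TextEncoder();
  let body = '';
  filter.ondata = event => { body += decoder.decode(event.data, { stream: true }); };
  filter.onstop = () => {
    // Hypothetical rewrite: append our host to the frame-src directive in the CSP <meta> tag.
    filter.write(encoder.encode(body.replace("frame-src 'self'", "frame-src 'self' https://myhost.com")));
    filter.close();
  };
}, { urls: ['https://itunes.apple.com/*'], types: ['main_frame'] }, ['blocking']);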
BUT, my problem is focused on fixing the Chrome extension. Maybe Chrome picks this up one day.
--
In general, Chrome's extension framework does not seem like a reliable path for building long-lived software: browser vendors change the rules frequently as they react to newly discovered threats, and there is no up-to-date, supported cross-browser standard.
--
In my case, a possible way around this issue is to move all the JS code into the extension's source base, so that there is no 3rd party to fetch and execute JS from (and thereby conflict with or violate the CSP rules). I haven't explored this yet, as I expected to reuse the code and interactive components I am using in my main browser application.
I've been interested in the same things and here are a few aspects that perhaps could help:
The chrome.debugger extension API with the Fetch (or Network) domain can be used to modify the responseBody: https://chromedevtools.github.io/devtools-protocol/tot/Fetch/
This is an example implementation: https://github.com/mr-yt12/Debugger-API-Fetch-example-Chrome-Extension
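A condensed sketch of that flow (untested; tabId and the body rewrite are placeholders):
chrome.debugger.attach({ tabId: tabId }, '1.3', () => {
  // Pause responses so their bodies can be read and replaced.
  chrome.debugger.sendCommand({ tabId: tabId }, 'Fetch.enable', {
    patterns: [{ urlPattern: '*', requestStage: 'Response' }]
  });
});
chrome.debugger.onEvent.addListener((source, method, params) => {
  if (method !== 'Fetch.requestPaused') return;
  chrome.debugger.sendCommand(source, 'Fetch.getResponseBody', { requestId: params.requestId }, body => {
    const original = body.base64Encoded ? atob(body.body) : body.body;
    const modified = original; // rewrite the <meta> CSP here
    chrome.debugger.sendCommand(source, 'Fetch.fulfillRequest', {
      requestId: params.requestId,
      responseCode: 200,
      body: btoa(modified) // Fetch.fulfillRequest expects a base64-encoded body
    });
  });
});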
However, I'm facing a problem with Fetch.requestPaused not firing on the first page load: Chrome Extension Debugger API, Fetch domain attaches/enables too late for the response body to be intercepted
I haven't found a solution to this yet, besides first redirecting the request to 'http://google.com/gen_204' and then updating the tab. But this creates flicker, and I'm also not sure whether such a redirect is possible with Manifest V3.
When using the debugger API, Chrome shows a warning bar at the top, which changes the page's size and doesn't go away (it disappears about 5 seconds after the debugger is detached, or if the user clicks "cancel"). This means it's mostly only good for personal use, or when distributing as a developer-mode extension. Launching Chrome with the --silent-debugger-extension-api flag disables this warning.
I've tried injecting my script at document_start (when the meta tag is not yet created). Then I used a MutationObserver (and also tried other methods) to wait for the meta tag and modify it before it's applied or fully created. Somehow it succeeded once or a few times, but that could be a coincidence or a misinterpretation of the results. Perhaps it's worth experimenting with, as in the sketch below.
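For what it's worth, a sketch of that document_start + MutationObserver experiment (myhost.com is a placeholder; as noted, the timing makes this unreliable):
// Runs at document_start, before the <meta> tag exists.
const observer = new MutationObserver(mutations => {
  for (const m of mutations) {
    for (const node of m.addedNodes) {
      if (node.nodeName === 'META' && node.httpEquiv === 'Content-Security-Policy') {
        node.content += ' https://myhost.com'; // attempt to widen the policy before it applies
        observer.disconnect();
      }
    }
  }
});
observer.observe(document.documentElement, { childList: true, subtree: true });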
Another idea (which I don't think I got working, but which may be possible) is to use window.stop() at document_start and then rewrite the HTML content programmatically. This needs more research.
It seems that a meta-tag CSP is applied once the meta tag is created (or while it's being created), and there is then no way to cancel what has been applied. More research is needed on how to prevent it from applying, or how to modify the tag before it's fully created or applied.

Mixed Content warning on Chrome due to iframe src

Somewhere in the code, over a secure site, the following snippet is used:
var iframe = document.createElement("IFRAME");
iframe.setAttribute("src", "pugpig://onPageReady");
document.documentElement.appendChild(iframe);
iframe.parentNode.removeChild(iframe);
iframe = null;
The iframe src attribute set here actually triggers a callback, but it causes Chrome (version 54) to complain about "Mixed Content", as the src attribute is interpreted as a non-https URL on an https:// domain, and that version of Chrome does not present users with an easy option to allow mixed content to load anyway (e.g. a shield icon in the address bar).
Changing the Chrome version / using a different browser / starting chrome with the --allow-running-insecure-content switch is not an option for certain reasons so my question is, is there a way to make the "pugpig://onPageReady" part be perceived as an https url?
You can try this:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests" />
Or
<meta http-equiv="Content-Security-Policy" content="block-all-mixed-content" />
Paste it between the <head>...</head> tags.
The HTTP Content-Security-Policy (CSP) block-all-mixed-content directive prevents loading any assets using HTTP when the page is loaded using HTTPS.
All mixed content resource requests are blocked, including both active and passive mixed content. This also applies to <iframe> documents, ensuring the entire page is mixed content free.
The upgrade-insecure-requests directive is evaluated before block-all-mixed-content, and if the former is set, the latter is effectively a no-op. It is recommended to set one directive or the other, not both.
As far as I know, no, there's not. If there were, it would be considered a security flaw and would be fixed.
mixed content explanation

How to check if site has allowed me to inject Javascript with my bookmarklet?

I have a bookmarklet that injects Javascript into a page and opens up an iframe with an HTML page I have created that allows a user to subscribe to a page directly from my bookmarklet.
Issue is, certain domains (Twitter and Facebook being two) do not allow me to inject Javascript, so I have to pop up a window instead.
Javascript console when on Facebook:
Refused to load the script script name because it violates the following Content Security Policy directive: "script-src https://*.facebook.com http://*.facebook.com https://*.fbcdn.net http://*.fbcdn.net *.facebook.net *.google-analytics.com *.virtualearth.net *.google.com 127.0.0.1:* *.spotilocal.com:* chrome-extension://lifbcibllhkdhoafpjfnlhfpfgnpldfl 'unsafe-inline' 'unsafe-eval' https://*.akamaihd.net http://*.akamaihd.net *.atlassolutions.com".
Right now in my bookmarklet I am just checking to see if the URL matches those domains before I try to inject JS, and if it does, I pop open a new window. For obvious reasons, this is not a good practice.
What is a good method of checking if a Javascript function was allowed to run on the current page or not, and if not, to open a new window?
There is no way to know with absolute certainty that an external script failed to load. Even when there is no security policy, an external script can fail to load because of other problems. The only thing you can really do is set a timeout, and if the script hasn't completed some action before the timeout expires, assume it failed to load.
EDIT: I stand corrected by Sean below. His suggestion also worked in Chrome and Firefox on Windows. The solution is something like this:
newScript.addEventListener('error', function(){ console.log('script failed to load') });
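Putting the pieces together, a sketch that combines the error listener with the timeout fallback described above (openFallbackWindow and the script URL are hypothetical):
var newScript = document.createElement('script');
var settled = false;
newScript.addEventListener('load', function () { settled = true; });
newScript.addEventListener('error', function () {
  settled = true;
  openFallbackWindow(); // hypothetical: open the popup window instead
});
// Fallback in case neither event fires before the timeout.
setTimeout(function () { if (!settled) openFallbackWindow(); }, 5000);
newScript.src = 'https://example.com/subscribe.js'; // hypothetical script URL
document.head.appendChild(newScript);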
For the specific question of how to check if the page header returns a Content Security Policy which will block your external script, the only solution I know of is to check the HTTP header using AJAX.
Here is some example code. I've tested this on Facebook.
var req = new XMLHttpRequest();
req.onreadystatechange = function () {
    if (req.readyState == 4) console.log(req.getResponseHeader('content-security-policy'));
};
req.open("HEAD", document.location.href);
req.send();
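The same check with the more modern fetch API might look like this (a sketch under the same same-origin assumption):
fetch(document.location.href, { method: 'HEAD' })
  .then(res => console.log(res.headers.get('content-security-policy')));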
Bookmarklets should be allowed to run regardless of the security policy. If not, it is a browser bug - at least as I understand it. But the bookmarklet may be forbidden from doing some things. If you use try-catch, you can find out whether an action was allowed or not.
