"undefined" randomly appended in 1% of requested urls on my website since 12 june 2012 - javascript

Since 12 June 2012 11:20 UTC, I have been seeing very weird errors in my Varnish/Apache logs.
Sometimes, when a user has requested a page, I see a similar request several seconds later, but everything after the last / in the URL has been replaced by "undefined".
Example:
http://example.com/foo/bar triggers a http://example.com/foo/undefined request.
Of course these "undefined" pages do not exist, and my 404 page is returned instead (a custom page with the standard layout, not a plain Apache 404).
This happens on any page (from the homepage to the deepest), with various browsers (mostly Chrome 19, but also Firefox 3.5 to 12, IE 8/9...), but only for about 1% of the traffic.
The headers sent with these requests are normal headers (and there are no AJAX headers).
For a given IP, this seems to occur randomly: sometimes on the first page visited, sometimes on a random page during the visit, sometimes on several pages during the visit...
Of course it looks like a JavaScript problem (I'm using jQuery 1.7.2 hosted by Google), but I have changed absolutely nothing in the JS/HTML or the server configuration for several days, and I never saw this kind of error before. And of course, there are no such links in the HTML.
I also noticed some interesting facts:
the undefined requests never appear as the referer of other pages; instead, the "real" pages are used as the referer for the following requests from the same IP (the user can still use the normal menu on the 404 page)
I did not see any trace of these pages in Google Analytics, so I assume no javascript has been executed (tracker exists on all pages including 404)
nobody has contacted us about this, even when I mentioned the problem on the website's social networks
most of the users continue the visit after that
All these facts make me think the problem occurs silently in the browser, probably triggered by a buggy add-on, an antivirus, a browser toolbar or some crappy manufacturer software integrated into browsers that was updated yesterday (but I didn't find any add-on released yesterday for Chrome, Firefox or IE).
Has anyone here noticed the same issue, or does anyone have a more complete explanation?

There is no simple straight answer.
You are going to have to debug this, and it is probably JavaScript, given the 'undefined' word in the URL. However, it doesn't have to be AJAX: it could be JavaScript creating any URL that is automatically resolved by the browser (e.g. JavaScript that sets the src attribute on an image tag, sets a CSS background-image, etc.). I use Firefox with Firebug installed most of the time, so my directions will be written with that in mind.
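As a hypothetical illustration of that non-AJAX case (the object and property names here are made up), code like this makes the browser itself fetch the bad URL with ordinary image headers:
// 'photos' has no 'thumb' property, so the concatenation yields ".../undefined"
// and the browser silently requests it as a plain image, with no AJAX headers.
var photos = { full: 'bar.jpg' };
var img = new Image();
img.src = 'http://example.com/foo/' + photos.thumb; // requests http://example.com/foo/undefined
document.body.appendChild(img);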
Firebug Initial Setup
Skip this if you already know how to use Firebug.
After installing Firebug and restarting Firefox, you will need to enable most of Firebug's 'panels'. To open Firebug, click the little bug/insect-looking icon in the top right corner of your browser, or press F12. Click through the Firebug tabs 'Console', 'Script' and 'Net', and enable each by opening it and reading the panel's information. You might have to refresh the page to get them working properly.
Debugging User Interaction
Navigate to one of the pages that has the issue with Firebug open and the Net panel active. In the Net panel there will be a few options: 'Clear', 'Persist', 'All', 'Html', etc. Make sure ALL is selected. Don't do anything on the page and try not to mouse over anything on it. Look through the requests. The request for the invalid URL will be red and probably have a status of 404 Not Found (or similar).
See it on load? Skip to the next part.
Don't see it on initial load? Start using your page and continue here.
Start clicking on every feature, mouse over everything, etc. Keep your eyes on the Net panel and watch for any requests that fail. You might have to be creative, but continue using your application till you see your browser make an invalid request. If the page makes many requests, feel free to hit the 'Clear' button at the top left of the Net panel to clear it up a bit.
If you submit the page and see a failed request go out really quick but then lose it because the next page loads, enable persistence by clicking 'Persist' in the top left of the Net panel.
Once it does, and it should, consider what you did to make that happen. See if you can make it happen again. After you figure out what user interaction is making it happen, dive into that code and start looking for things that are making invalid requests.
You can use the Script tab to set up breakpoints in your JavaScript and step through them. Investigate event handlers attached via $(element).bind/click/focus/etc or via old-school event attributes like onclick=""/onfocus="" etc.
If the request is happening as soon as the page loads
This is going to be a little harder to peg down. You will need to go to the Script tab and start adding break points to every script that runs on load. You do this by clicking on the left side of the line of JavaScript.
Reload your page and your break points should stop the browser from loading the page. Press the 'Continue' button on the script panel. Go to your net panel and see if your request was made, continue till it is found. You can use this to narrow down where the request is being made from by slowly adding more and more break points and then stepping into and out of functions.
What you are looking for in your code
Something that is similar to the following:
var url = workingUrl + someObject['someProperty'];
var url = workingUrl + someObject.someProperty;
Keep in mind that someObject might be an object {}, an array [], or any of the internal browser types. The point is that a property will be accessed that doesn't exist.
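For instance (names invented for illustration), a simple typo in a property name is enough to put the literal string "undefined" into the final URL:
var workingUrl = 'http://example.com/foo/';
var someObject = { page: 'bar' };
var url = workingUrl + someObject.pgae; // typo: this property doesn't exist
// url is now "http://example.com/foo/undefined" and any request built from it will 404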
I don't see any 404/red requests
Then whatever is causing it isn't being triggered by your tests. Try exercising more of the page. The point is that you should be able to make the request happen somehow; you just don't know how yet. It has to show up in the Net panel. The only time it won't is when you aren't doing whatever triggers it.
Conclusion
There is no super easy way to pin down what exactly is going on. However, using the methods I outlined, you should at least be able to get close. It is probably something you aren't even considering.

Based on this post, I reverse-engineered the "Complitly" Chrome plugin/malware, and found that this extension injects an "improved autocomplete" feature that was throwing "undefined" requests at every site that has an input text field with a NAME or ID of "search", "q" and many others.
I also found that the enable.js file (one of Complitly's files) checks a global variable called "suggestmeyes_loaded" to see if it has already loaded (like a singleton). So, setting this variable to true disables the plugin.
To disable the malware and stop "undefined" requests, apply this to every page with a search field on your site:
<script type="text/javascript">
window.suggestmeyes_loaded = true;
</script>
This malware also redirects your users to a "searchcompletion.com" site, sometimes showing competitors' ads. So, it should be taken seriously.

You have correctly established that the undefined points to a JavaScript problem, and if your site's users haven't complained about seeing error pages, you could check the following.
If JavaScript is used to set or change image locations, it sometimes happens that an undefined makes its way into the URI.
When that happens, the browser will happily try to load the image (no AJAX headers), but it will leave hints: it sets a particular Accept: header; instead of text/html, text/xml, ... it will use image/jpeg, image/png, ....
Once such a header is confirmed, you have narrowed down the problem to images only. Finding the root cause will possibly take some time though :)
Update
To help debugging you could override $.fn.attr() and invoke the debugger when something is being assigned to undefined. Something like this:
(function($, undefined) {
    var $attr = $.fn.attr;
    $.fn.attr = function(attributeName, value) {
        // Covers both the .attr('src', value) and .attr({ src: value }) setter forms
        var v = null;
        if (attributeName === 'src' && arguments.length > 1) {
            v = value;
        } else if (typeof attributeName === 'object' && attributeName && 'src' in attributeName) {
            v = attributeName.src;
        }
        if (v === undefined || String(v).indexOf('undefined') !== -1) {
            alert("Setting src to undefined");
        }
        return $attr.apply(this, arguments);
    };
}(jQuery));
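If the code sets image sources via .prop('src', ...) rather than .attr() (possible with jQuery 1.6+), that call bypasses the override above, so it may also be worth shadowing $.fn.prop the same way; a sketch along the same lines:
(function($) {
    var $prop = $.fn.prop;
    $.fn.prop = function(name, value) {
        // only inspect explicit setter calls for 'src'
        if (arguments.length > 1 && name === 'src' &&
                (value === undefined || String(value).indexOf('undefined') !== -1)) {
            alert("Setting src (via .prop) to undefined");
        }
        return $prop.apply(this, arguments);
    };
}(jQuery));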

Some facts that have been established, especially in this thread: http://productforums.google.com/forum/#!msg/chrome/G1snYHaHSOc/p8RLCohxz2kJ
it happens on pages that have no javascript at all.
this proves that it is not an on-page programming error
the user is unaware of the issue and continues to browse quite happily.
it happens a few seconds after the person visits the page.
it doesn't happen to everybody.
happens on multiple browsers (Chrome, IE, Firefox, Mobile Safari, Opera)
happens on multiple operating systems (Linux, Android, NT)
happens on multiple web servers (IIS, Nginx, Apache)
I have one case of googlebot following the link and claiming the same referrer. They may just be trying to be clever and the browser communicated it to the mothership who then set out a bot to investigate.
I am fairly convinced by the proposal that it is caused by plugins. Complitly is one, but that one doesn't support Opera. There may be others.
Though the mobile browsers weigh against the plugin theory.
Sysadmins have reported a major drop off by adding some javascript on the page to trick Complitly into thinking it is already initialized.
Here's my solution for nginx:
location ~ undefined/?$ {
    return 204;
}
This returns "yeah okay, but no content for you".
If you are on website.com/some/page and you (somehow) navigate to website.com/some/page/undefined the browser will show the URL as changed but will not even do a page reload. The previous page will stay as it was in the window.
If for some reason this is something experienced by users then they will have a clean noop experience and it will not disturb whatever they were doing.

This sounds like a race condition where a variable is not getting properly initialized before getting used. Considering this is not an AJAX issue according to your comments, there will be a couple of ways of figuring this out, listed below.
Hook up a JavaScript exception logger: this will help you catch just about every random JavaScript exception in your log. Most of the time, programmatic errors will bubble up here. Put it before any other scripts. You will need to catch these on the server and print them to your logs for analysis later. This is your first line of defense. Here is an example:
window.onerror = function(m, f, l) {
    var e = window.encodeURIComponent;
    new Image().src = "/jslog?msg=" + e(m) + "&filename=" + e(f) + "&line=" + e(l) + "&url=" + e(window.location.href);
};
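In browsers that support the newer five-argument form of window.onerror you can also capture the column number and the error object (and its stack) with the same pattern; a sketch, still posting to the same hypothetical /jslog endpoint as above:
window.onerror = function(msg, file, line, col, err) {
    var e = window.encodeURIComponent;
    // truncate the stack so the logging URL stays a reasonable length
    var stack = (err && err.stack) ? String(err.stack).slice(0, 500) : '';
    new Image().src = "/jslog?msg=" + e(msg) + "&filename=" + e(file) + "&line=" + e(line) +
        "&col=" + e(col || '') + "&stack=" + e(stack) + "&url=" + e(window.location.href);
};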
Search for window.location: for each of these instances you should add logging or check for undefined concats/appenders to your window.location. For example:
function myCode(loc) {
    // window.location.href = loc; // old
    typeof loc === 'undefined' && window.onerror(...); // new
    window.location.href = loc;                        // new
}
or the slightly cleaner:
window.setLocation = function(url) {
    /undefined/.test(url) ?
        window.onerror(...) : window.location.href = url;
};

function myCode(loc) {
    // window.location.href = loc; // old
    window.setLocation(loc); // new
}
If you are interested in getting stacktraces at this stage take a look at: https://github.com/eriwen/javascript-stacktrace
Grab all unhandled undefined links: besides window.location, the only thing left is the DOM links themselves. The third step is to check all unhandled DOM links for your invalid URL pattern (you can attach this right after jQuery finishes loading; the earlier the better):
$("body").on("click", "a[href$='undefined']", function() {
window.onerror('Bad link: ' + $(this).html()); //alert home base
});
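Since the bad requests may also come from images rather than anchors (no AJAX headers, and an image Accept header, as noted in another answer), a similar sweep over img elements once the DOM is ready can catch those too; a sketch:
$(function() {
    $("img").each(function() {
        if (/\/undefined$/.test(this.src)) {
            window.onerror('Bad image src: ' + this.src); // alert home base
        }
    });
});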
Hope this is helpful. Happy debugging.

I'm wondering if this might be an adblocker issue. When I search through the logs by IP address it appears that every request by a particular user to /folder/page.html is followed by a request to /folder/undefined

I don't know if this helps, but my website is replacing one particular *.webp image file with undefined after it's loaded in multiple browsers. Is your site hosting webp images?

I had a similar problem (but with /null 404 errors in the console) that #andrew-martinez's answer helped me to resolve.
Turns out that I was using img tags with an empty src field:
<img src="" alt="My image" data-src="/images/my-image.jpg">
My idea was to prevent the browser from loading the image at page load and then load it manually later by setting the src attribute from the data-src attribute with JavaScript (lazy loading). But when combined with iDangerous Swiper, that method caused the error.
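One way to keep the lazy-loading idea while avoiding the empty src (a sketch, not specific to any particular slider library; the placeholder is just the usual 1x1 transparent GIF data URI) is to point src at an inline placeholder until the real URL is copied over from data-src, and to guard against assigning a missing value:
<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
     alt="My image" data-src="/images/my-image.jpg">

// later, when the image should actually be loaded:
$('img[data-src]').each(function() {
    var real = $(this).data('src');
    if (real) {          // guard: never assign undefined or '' to src
        this.src = real;
    }
});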

Related

Javascript running only once on Internet Explorer, runs properly when developer tools is open

We have an unusual problem with javascript running on IE 11. I tried it on one of our servers running IE8 and the same problem occurs. However, it runs fine on Chrome and Mozilla.
Here's the code in question:
SetGuideFatningCookie(fid); //set a cookie according to user choice
var validFatningCombo = ValidFatningCheck(); //ask server if user choice is valid using XMLHttpRequest GET request
if(validFatningCombo)
window.location.href = GetGuideTilbehoerURL(); //if valid redirect user to next page
else
popAutoSizeFancy("#GLfancy"); //if not show a fancybox with error text
The user chooses one of 7 choices. Then they click a button that runs the above code. The code sets a cookie containing the user's choice and asks the server if the choice is valid. If valid - we redirect the user and if not, we open a fancybox that contains some error text and two buttons - "Try again"(closes box and they can try again) and "Send us a message"(redirects user to our "ask us a question" page).
The code runs fine the first time the user goes to this process.
However, if they have chosen an invalid choice, they close the fancybox and try to choose another choice and continue -> then the fancy box appears ALWAYS, regardless of what the user chooses.
If they choose a valid choice and continue, get redirected to next page, then come back to this page and choose an invalid choice and press continue -> then they can continue to the next page without fancybox ever coming up.
However, if IE's developer tools are opened, the code runs correctly every single time.
I have found many threads describing this as a problem with console.log. I have removed every single reference to console.log from all our .js files. It could be one of the external libraries that we are using, like jQuery, Modernizr, fancyBox and menucool's tooltip library.
Therefore I tried including a console fallback function for IE, like this thread suggests:
Why does JavaScript only work after opening developer tools in IE once?
I am currently trying with this one, and I have tried every single other fallback console replacement from the thread I link to.
if (!window.console) window.console = {};
if (!window.console.log) window.console.log = function () { };
I tried including it:
Somewhere in our .js files
script element in head after loading all our .js files and all external libraries
script element in head before loading all our .js files and all external libraries
Inside $(document).ready(function() {}); , in a script element in head after loading all other js
So far, none of the fallback pieces of code I have tried in any of these 4 locations have solved the problem. It always behaves the same way in IE. I couldn't find any explanation other than the "console" one for this problem so far, so if anyone has any insight on it, it would be greatly appreciated.
EDIT: I will include some more info:
The very act of opening Developer Tools removes the unwanted behaviour. No errors are ever shown in console.
I checked the server side to see if the server is getting the call from ValidFatningCheck(). It turns out that the call is made only the first time (or, if Developer Tools is open, every time), which is rather mysterious, since the redirect/fancybox line comes after the server call and it doesn't fail to run, even if it runs wrongly.
function ValidFatningCheck() {
    var requestUrl = '/Tools.ashx?command=validscreen';
    var req = new XMLHttpRequest();
    req.open('GET', requestUrl, false);
    req.send(null);
    var res = "";
    if (req.readyState == 4)
        res = req.responseText;
    if (res == "true")
        return true;
    return false;
}
UPDATE: Problem solved by adding a timestamp to my XMLHttpRequest, as multiple replies suggested. I didn't realize that XMLHttpRequest is AJAX, so I overlooked it as a probable cause of the problem.
(I put this in the comments but will make it an answer now, as it appears to have solved the problem.) GET requests are cached by IE, but when the developer console is open it does not use this cache.
three ways to fix:
add a timestamp to the request to trick the browser into thinking it is making a new request each time
var requestUrl = '/Tools.ashx?command=validscreen&time='+new Date().getTime();
set the response header to no-cache
make a POST request as these are not cached
(as pointed out by #juanmendes, this is not ideal since you are not editing a resource)
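For completeness, here is a sketch of the question's function with the first option applied (same /Tools.ashx endpoint, just with a timestamp appended so IE cannot serve a cached response):
function ValidFatningCheck() {
    // cache-buster: IE caches GET XMLHttpRequests unless the URL changes each time
    var requestUrl = '/Tools.ashx?command=validscreen&time=' + new Date().getTime();
    var req = new XMLHttpRequest();
    req.open('GET', requestUrl, false); // synchronous, as in the original
    req.send(null);
    return req.readyState == 4 && req.responseText == "true";
}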

Unable to post message to http://www.youtube.com. Recipient has origin https://www.youtube.com

My app at http://beta.billboard.fm is producing errors in my normal browsing session after playing a single song.
If I reload the page in incognito, the app works fully. I only recently started experiencing these issues. I have completely cleared the cache and it works again, but only temporarily before throwing the same errors.
Additionally, I have disabled all browser extensions.
But no matter what I do, I can't stop this error from being thrown by the YouTube API:
Unable to post message to http://www.youtube.com. Recipient has origin https://www.youtube.com
It looks like there is a mismatch in the security protocols. I tried changing them to https, or just removing "http:" altogether, on my side. But it did not resolve the issue.
Any one have an idea what is happening here?
It is quite clear to me at this point that this is a major bug in Google/YouTube's API. They have written some bad code somewhere. This bug is not a consistent thing. This is well documented by the fact that everybody's code works just fine for an extended period of time, and then they discover that all of a sudden their sites stop working properly. Additionally, all of my websites that had this problem last week are now working without a glitch - again, without me altering code.
So while it sucks to say this - the onus is on Google & YouTube to fix this and provide APIs that actually work as advertised... It doesn't look to me like there's anything we can do about it on our own :(
I am having the same problem - I also tried changing my links from http: to https: and vice versa with no luck. I found this thread on Google Groups, but so far there has been no response. https://code.google.com/p/gdata-issues/issues/detail?id=4697
Clearing my cache allowed the player to work for a few videos, but after 3 or 4, the same error pops back up.
UPDATE 2 - Dec. 24, 2013: This solution has not actually fixed the problem at all:
After following a thread that poulified referred me to in his answer, a user in the forum posted the following solution which seems to be doing the trick for me (UPDATE: Still experiencing issues on random page loads :/):
Hi all,
It works when you replace http:// with https://
example: http://jsfiddle.net/8tkgW/29/
Please make sure of the following:
load iframe api https://www.youtube.com/player_api
load iframe src path: https://www.youtube.com/embed/0GN2kpBoFs4?rel=0
If you load the player via new YT.Player, you must check the iframe src path:
setTimeout(function() {
    var url = $('#iframe_youtube').prop('src');
    if (url.match('^http://')) {
        $('#iframe_youtube').prop('src', url.replace(/^http:\/\//i, 'https://'));
    }
}, 500);
Please refer to my GitHub project:
https://github.com/appleboy/js-video-player/blob/master/js/jsplayer.js#L120

Safari incorporating URL # fragment into its browser caching

I'm working on a solution to speed our website up. I'm having the client first ajax load the expected next page of the application:
$.ajax({url: '/some/real/path', ...});
The server responds to this and includes in the header:
Cache-Control => 'max-age=20'
which marks the response as being cachable.
The client-side application then waits to see if its prediction was correct and, upon finding that it was, transitions the browser to that same page, but adds a few bits of information into the URL as a # fragment; this info is available to us only once the user has actually committed their action (i.e. it is not predictable):
location.href = '/some/real/path#additionalInfoInFragement';
When the browser transitions to the page, the additional info in the fragment is picked up by that page's JavaScript and used to achieve some effect there.
For all browsers, including Safari, the response to the initial AJAX request IS properly inserted into the browser cache.
And then, for all browsers except Safari, the browser pulls that content out of the cache when we effect the location.href transition to that page. This avoids the server hit and is the basis for our speed-up.
Safari though is not using the cache to re-serve the content. It seems to get tripped up by the '#additionalInfoInFragment' part of the transition. It is including the fragment in its construction of the cache key it uses to check for existing cached content. Here are the entries from Safari's cache.db file, which I dumped via sqlite:
* ajax request: INSERT INTO "cfurl_cache_response" VALUES(3260,0,-1982644086,0,'http://localhost:8080/TomcatScratchPad/EmptyPage','2012-05-14 07:01:10');
* location.href transition: INSERT INTO "cfurl_cache_response" VALUES(3276,0,-230554366,0,'http://localhost:8080/TomcatScratchPad/EmptyPage#wtf','2012-05-14 07:01:20');
Also notable is the fact that Chrome is behaving correctly, even though both share a tremendous amount of WebKit code.
I would really appreciate any ideas the community has. Thanks!
I see only a couple of options:
File a bug report with Apple and don't worry about it. :-) Your caching stuff will still work for other browsers. Overall, Safari has a very small market share, although of course if your site is targeted at (say) iPad or iPhone users, that rather changes the nature of the stats for your specific site. :-) (You presumably know from your logs how big your Safari audience is.)
Sub-category: If Safari is a big part of your target market and this really bothers you, see if it's a bug in any of the open source parts of it and, if so, offer a patch.
Don't use the fragment identifier to pass the information, use something else (a cookie perhaps) instead.
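If you take the second route, one lightweight variant (a sketch; 'additionalInfo' here just stands for whatever you were encoding in the fragment) is to stash the data in sessionStorage right before navigating, so the URL Safari uses as its cache key stays identical:
// before the transition, instead of appending the fragment:
sessionStorage.setItem('additionalInfo', JSON.stringify(additionalInfo));
location.href = '/some/real/path';

// on the target page:
var additionalInfo = JSON.parse(sessionStorage.getItem('additionalInfo') || 'null');
sessionStorage.removeItem('additionalInfo');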

Stop mobile network proxy from injecting JavaScript

I am using a mobile network based internet connection and the source code is being rewritten when they present the site to the end user.
In the localhost my website looks fine, but when I browse the site from the remote server via the mobile network connection the site looks bad.
Checking the source code, I found that a piece of JavaScript code is being injected into my pages which disables some of the CSS and makes the site look bad.
I don't want image compression or bandwidth compression at the expense of my well-designed CSS.
How can I prevent or stop the mobile network provider (Vodafone in this case) from proxy injecting their JavaScript into my source code?
You can use this on your pages. It still compresses and puts everything inline, but it won't break scripts like jQuery because it will escape everything based on W3C standards:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
On your server you can set the cache control header:
"Cache-Control: no-transform"
This will stop ALL modifications and present your site as it is!
Reference docs here
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
http://stuartroebuck.blogspot.com/2010/08/official-way-to-bypassing-data.html
Web site exhibits JavaScript error on iPad / iPhone under 3G but not under WiFi
You're certainly not the first. Unfortunately many wireless ISPs have been using this crass and unwelcome approach to compression. It comes from Bytemobile.
What it does is have a proxy recompress all the images you fetch to a smaller size by default (making image quality significantly worse). Then it crudely injects a script into your document that adds an option to load the proper image for each recompressed image. Unfortunately, since that script is horribly written 1990s-style JS, it craps all over your namespace, hijacks your event handlers and stands a high chance of messing up your own scripts.
I don't know of a way to stop the injection itself, short of using HTTPS. But what you could do is detect or sabotage the script. For example, if you add a script near the end of the document (between the 1.2.3.4 script inclusion and the inline script trigger) to neuter the onload hook it uses:
<script type="text/javascript">
bmi_SafeAddOnload= function() {};
</script>
then the script wouldn't run, so your events and DOM would be left alone. On the other hand the initial script would still have littered your namespace with junk, and any markup problems it causes will still be there. Also, the user will be stuck with the recompressed images, unable to get the originals.
You could try just letting the user know:
<script type="text/javascript">
    if ('bmi_SafeAddOnload' in window) {
        var el = document.createElement('div');
        el.style.border = 'dashed red 2px';
        el.appendChild(document.createTextNode(
            'Warning. Your wireless ISP is using an image recompression system '+
            'that will make pictures look worse and which may stop this site '+
            'from working. There may be a way for you to disable this feature. '+
            'Please see your internet provider account settings, or try '+
            'using the HTTPS version of this site.'
        ));
        document.body.insertBefore(el, document.body.firstChild);
    }
</script>
I'm surprised no one has put this as an answer yet. The real solution is:
USE HTTPS!
This is the only way to stop ISPs (or anyone else) from inspecting all your traffic, snooping on your visitors, and modifying your website in flight.
With the advent of Let's Encrypt, getting a certificate is now free and easy. There's really no reason not to use HTTPS in this day and age.
You should also use a combination of redirects and HSTS to keep all of your users on HTTPS.
Your provider might have enabled a Bytemobile Unison feature called "clientless personalization". Try accessing the fixed URL http://1.2.3.50/ups/ - if it's configured, you will end up on a page which will offer to let you disable any features you don't like, including JavaScript injection.
Good luck!
Alex.
If you're writing your own websites, adding a header worked for me:
PHP:
Header("Cache-Control: no-transform");
C#:
Response.Cache.SetNoTransforms();
VB.Net:
Response.Cache.SetNoTransforms()
Be sure to use it before any data has been sent to the browser.
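If your backend happens to be Node.js instead (not listed above), the same header can be set with the built-in http module, again before any body data is written; a sketch:
var http = require('http');

http.createServer(function (req, res) {
    res.setHeader('Cache-Control', 'no-transform'); // must be set before any body output
    res.end('<html>...</html>');
}).listen(8080);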
I found a trick. Just add:
<!--<![-->
After:
<html>
More information (in German):
http://www.programmierer-forum.de/bmi-speedmanager-und-co-deaktivieren-als-webmaster-t292182.htm#3889392
The BMI JS isn't only on Vodafone. Virgin Media UK and T-Mobile UK also give you this extra feature, enabled by default and for free. ;-)
On T-Mobile it's called "Mobile Broadband Accelerator".
You can Visit:
http://accelerator.t-mobile.co.uk
or
http://1.2.3.50/
to configure it.
In case the above doesn't apply to you or for some reason it's not an option
you could potentially set up your own local proxy (Polipo, with or without Tor),
There is also a Firefox addon called "blocksite"
or, as a more drastic approach, reset TCP connections to 1.2.3.0/24:80 on your firewall.
But unfortunately that wouldn't fix the damage.
Funnily enough, T-Mobile and Virgin Media mobile/broadband support are not aware of this feature! (2011.10.11)
PHP: Header("Cache-Control: no-transform"); Thanks!
I'm glad I found this page.
That injector script was messing up my PHP page's source code, making me think I had made an error in my PHP coding when viewing the page source. Even though the script was blocked with the Firefox NoScript add-on, it was still messing up my code.
Well, after that irritating dilemma, I wanted to get rid of it completely, not just block it with the Adblock or NoScript Firefox add-ons or only on my PHP pages.
STOP http://1.2.3.4 completely in Firefox: Get the add-on: Modify Headers.
Go to the modify header add on options... now on the Header Tab.
Select Action: Choose ADD.
For Header Name type in: cache-control
For Header Value type in: no-transform
For Comment type in: Block 1.2.3.4
Click add... Then click Start.
The 1.2.3.4 script will not be injected into any more pages! yeah!
I no longer see 1.2.3.4 being blocked by NoScript. cause it's not there. yeah.
But I will still add: PHP: Header("Cache-Control: no-transform"); to my php pages.
If you are getting it on a site that you own or are developing, then you can simply override the function by setting it to null. This is what worked for me just fine.
bmi_SafeAddOnload = null;
As for getting it on other sites you visit, you could probably open the devtools console and enter that there to wipe it out if a page is taking a long time to load. I haven't tested that yet, though.
OK, nothing worked for me. So I replace the image URLs every second, because whenever my DOM updates the problem comes back. Another solution is to only use background-image styles included in the pages. Neither is clean.
setInterval(function() { imageUpdate(); }, 1000);

function imageUpdate() {
    console.log('######imageUpdate');
    var image = document.querySelectorAll("img");
    for (var num = 0; num < image.length; num++) {
        if (stringBeginWith(image[num].src, "http://1.1.1.1/bmi/***yourfoldershere***")) {
            var str = image[num].src;
            var res = str.replace("http://1.1.1.1/bmi/***yourfoldershere***", "");
            image[num].src = res;
            console.log("replace " + str + " by " + res);
            /*
            Another solution is to keep the real image URL in a data-src attribute
            and, after the DOM has loaded, copy each data-src into its img src:
            image[num].src = image[num].getAttribute('data-src');
            */
        }
    }
}

function stringEndsWith(string, suffix) {
    return string.indexOf(suffix, string.length - suffix.length) !== -1;
}

function stringBeginWith(string, prefix) {
    return string.indexOf(prefix) === 0;
}
An effective solution that I found was to edit your hosts file (/etc/hosts on Unix/Linux type systems, C:\Windows\System32\drivers\etc on Windows) to have:
null 1.2.3.4
Which effectively maps all requests to 1.2.3.4 to null. Tested with my Crazy John's (owned by Vodafone) mobile broadband. If your provider uses a different IP address for the injected script, just change it to that IP.
Header("Cache-Control: no-transform");
Use the above PHP code in each of your PHP files and you will get rid of the 1.2.3.4 code injection.
That's all.
I too was suffering from the same problem; now it is fixed. Give it a try.
I added to /etc/hosts
1.2.3.4 localhost
Seems to have fixed it.

Firefox Error: Permission Denied to get Window.Document

I have a standard 3-frame layout; "fnav" on the left, "fheader" at the top and "fcontent" below the header. All files are located locally on the hard drive.
This is the JS function that is throwing the error:
function writeHeaderFrame() {
    try {
        var headerFrame = window.top.frames['fheader'];
        var headerTable = document.getElementById('headerTable');
        if (headerFrame && headerTable) {
            headerFrame.document.body.style.backgroundColor = "Black";
            var headerFrameBody = headerFrame.document.documentElement.childNodes[1];
            headerFrameBody.innerHTML = headerTable.innerHTML;
        } else if (headerTable) {
            // there is a headerTable, but no headerFrame
            headerTable.style.display = 'inline'; // show the headerTable
        }
    } catch (e) {
        alert('from header.js, writeHeaderFrame(): ' + e.message);
    }
}
Clicking on a link in fnav (or initially loading the frameset) loads content into fcontent, then a JS file in fcontent loads the "header" frame... or it is supposed to, anyway. The Javascript runs fine initially, but whenever a link is clicked I get the following error:
Permission Denied To Get Window.document
I am unable to determine why. Any and all suggestions would be appreciated.
First off, please post the code being run when you click those links, and their html.
Secondly, did you have a typo there? Window.document should be window.document, shouldn't it? (lowercase w)
Edit response to changes in OP question
Without the html it's a little hard to say, but if I were taking a stab in the dark, I'd say this line:
headerFrame.document.body.style.backgroundColor = "Black";
is causing the error. It looks like headerFrame is on a different domain and you don't, for security reasons, have permission to modify the contents of that frame. Of course, some of the following lines will also have the same issue.
Also see http://userscripts.org/topics/25029 and http://www.webdeveloper.com/forum/showthread.php?t=189515 for similar cases.
Edit 2
From Mozilla Development Center
Note: Firefox 3 alters the security for windows' documents so that only the domain from which it was located can access the document. While this may break some existing sites, it's a move made by both Firefox 3 and Internet Explorer 7, and results in improved security.
(see https://developer.mozilla.org/En/DOM/Window.document)
I would guess you're trying to manipulate the window or document from a different origin. HTML5 (and all modern browsers, even IE :D ) enforce (or attempt to enforce) what is called "The Same-Origin Policy". Basically JS from one origin cannot interact with the DOM of a document or window from a different origin.
What is an origin? At a basic level you could substitute domain for origin and almost be right, but the full set of rules is:
You must have the same domain
The same port (e.g. code on example.com:80 cannot reference the DOM of a page at example.com:8080)
The same protocol (e.g. http://example.com is a different origin from https://example.com)
Lastly, redirects also matter (http://example.com -> http://example.com/?redirect=http://evil.com, with the server responding with a 3xx redirect to http://evil.com, will result in a different origin)
In all likelihood, Firefox has merely tightened up one area where it did not apply the same-origin policy in the past.
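For completeness: when two frames really are on different origins, the supported way for them to communicate (instead of reaching into each other's document) is window.postMessage; a minimal sketch using the question's 'fheader' frame name, with a made-up message format:
// in the sending frame (use a specific target origin instead of '*' in real code)
window.top.frames['fheader'].postMessage('setBackground:Black', '*');

// in the receiving frame (fheader)
window.addEventListener('message', function (event) {
    // in real code, check event.origin before trusting the data
    if (event.data === 'setBackground:Black') {
        document.body.style.backgroundColor = 'black';
    }
}, false);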
Apparently, the user in question updated his installation without changing the following setting to "false", which allows local documents to have access to all other local documents.
pref("security.fileuri.strict_origin_policy", true);
Which explains why I was unable to duplicate the error on my machine.
Many thanks to all for your assistance.
Have you tried installing Firebug and figuring out which line is throwing the error? I'm guessing that since the question is tagged Firefox you are seeing this occur in it.
It'd be most helpful if you could post a template HTML page using this Javascript.
Is the script/frame pages all on the same domain? If not, this is expected. You can't access window.document from another window if they are not on the same domain.
