Tag Archives: javascript

No more jsonp for Google geocoding webservice?

I needed to do some reverse geocoding in a javascript webapp. The Google Maps API worked flawlessly, but it seemed overkill to load all that javascript just to do one lousy reverse geocoding lookup (esp. on a mobile device, my target platform). I searched some more and found the Google geocoding webservice, which is invoked with a simple HTTP GET request and returns the response in JSON. Version 2 of this service works great, as you can specify a callback-function to do jsonp (a simple method to allow for cross-domain ajax requests).

This example request for v2, http://maps.google.com/maps/geo?q=51,4&sensor=false&output=json&callback=parseme, results in a response containing a call to your own “parseme”-function, with the json-object as the payload;

parseme && parseme( {
  "name": "51,4",
  "Status": {
    "code": 200,
    "request": "geocode"
  },
  "Placemark": [ {
    "id": "p1",
    "address": "Brukkelen 191, 9200 Dendermonde, Belgium",
    ...
  } ]
} )
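
For completeness, this is roughly how you'd consume that v2 endpoint with jsonp from your own page; a minimal sketch (the callback-name parseme matches the example above):

// minimal jsonp sketch for the v2 geocoding webservice
// the callback-name "parseme" matches the example response above
function parseme(response) {
  if (response.Status.code === 200) {
    alert(response.Placemark[0].address);
  }
}

function reverseGeocodeV2(lat, lng) {
  // injecting a script-tag is all it takes, the browser happily loads it cross-domain
  var script = document.createElement('script');
  script.src = 'http://maps.google.com/maps/geo?q=' + lat + ',' + lng +
               '&sensor=false&output=json&callback=parseme';
  document.getElementsByTagName('head')[0].appendChild(script);
}

reverseGeocodeV2(51, 4);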

But a month ago Google announced a new version of their geocoding webservice and the documentation for V2 mentioned that it was deprecated in favor of the Geocoding V3 Web Service, so I switched to the new API, only to discover that callbacks aren’t supported any more.

An example request for v3, http://maps.google.com/maps/api/geocode/json?latlng=51,4&sensor=false&callback=parseme, results in nothing but the bare json-object:

{
  "status": "OK",
  "results": [ {
    "types": [ "street_address" ],
    "formatted_address": "Brukkelen 191, 9200 Dendermonde, Belgium",
    ...
  } ]
}

And that, my dear fellow travellers, sucks big time. JSON without the P means we’re back to useless proxy-scripts on our servers. So until Google lets Jason pee (sorry folks, I just had to write this) in the V3 geocoding webservice, I’ll continue using the deprecated V2 with sweet -but undocumented- callback!
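
For those who do need v3 already, the detour would look something like this; a minimal sketch, assuming you have a proxy-script at the hypothetical path /geoproxy.php on your own server that simply forwards the request to the v3 endpoint:

// v3 has no callback, so we go through a same-origin proxy (hypothetical /geoproxy.php)
function reverseGeocodeV3(lat, lng, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/geoproxy.php?latlng=' + lat + ',' + lng + '&sensor=false', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var response = JSON.parse(xhr.responseText); // or eval() on really old browsers
      if (response.status === 'OK') {
        callback(response.results[0].formatted_address);
      }
    }
  };
  xhr.send(null);
}

reverseGeocodeV3(51, 4, function (address) { alert(address); });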

Avoid iframe-scrollbars with squeezeFrame.js

I know, this seems to have become an obsession of mine, but here I am again with a follow-up on my iframes-tips blogpost. You might remember I advised against disabling scrollbars on iframes, because;

Disabling them will render the iframe partially inaccessible for some of your users, because the size your iframe-content needs depends on things outside your control such as operating system & version (e.g. font & screen resolution), browser (e.g. css-implementation) and browser configuration (e.g. non-default font-size).

But what if you could resize (generally: zoom out) the iframe-content to perfectly fit the available width and height, thus avoiding vertical and especially horizontal scrollbars? Well, that is exactly what squeezeFrame.js tries to do (using css zoom and -moz-transform:scale in Firefox)! Just include the javascript-file in the iframe-content page and set a few options if you want to change the default behavior (which is: zoom in/out for width only, max. + or – 5%).
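
To give you an idea of the kind of logic involved; a rough sketch of the zoom-to-fit trick (not the actual squeezeFrame.js source), comparing the width the content wants with the width the iframe offers:

// rough illustration of the zoom-to-fit idea (not the actual squeezeFrame.js source)
function squeezeToWidth() {
  var docEl = document.documentElement;
  var wanted = docEl.scrollWidth;                          // width the content needs
  var available = docEl.clientWidth || window.innerWidth;  // width the iframe gives us
  if (!wanted || !available) return;
  var factor = available / wanted;
  factor = Math.max(0.95, Math.min(1.05, factor));         // default: max. + or - 5%
  if (factor === 1) return;
  document.body.style.zoom = factor;                            // IE, Safari, Chrome
  document.body.style.MozTransform = 'scale(' + factor + ')';   // Firefox
  document.body.style.MozTransformOrigin = '0 0';
}

window.onload = squeezeToWidth;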

squeezeFrame.js was tested successfully in Firefox 3.6, IE6, Safari 4 and Chrome 4, but does not work in Opera 10.5. More info (including some “known issues”) can be found on the demo-page and in the javascript-code of course.

As always; reply in the comments or contact me if you find bugs or have problems.

Fix iframe-positioning problem with frameMagic.js

A short follow-up on my previous post about iframes; as I happen to like simple drop-in solutions, I updated the javascript that handles the ‘blank 2nd page in an iframe’ bug to automagically work upon inclusion in the html.

So if you happen to have problems with the positioning of 2nd (or later) pages in iframes (due to the top part of the iframe not being visible in the ‘viewport’), just upload frameMagic.js to your webserver and add the following to the head of your html to ease your iframe-blues;

<script type="text/javascript" src="path/to/frameMagic.js"></script>

Optionally you can specify which iframes are to be treated this way (excluding the other ones) by doing

<script type="text/javascript">
var fM_conf="iFrame1,iFrame3";
</script>
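
In case you’re wondering how a drop-in script can pick that configuration up; a rough sketch of the auto-initialization idea (not the actual frameMagic.js source), assuming the fix boils down to scrolling the iframe back into view whenever it loads a new page:

// rough illustration of the drop-in idea (not the actual frameMagic.js source);
// assumption: the fix is to scroll the iframe back into view on every load
(function () {
  function init() {
    var frames = document.getElementsByTagName('iframe');
    for (var i = 0; i < frames.length; i++) {
      // honor the optional fM_conf variable ("iFrame1,iFrame3") if it is set
      if (typeof fM_conf === 'undefined' ||
          (',' + fM_conf + ',').indexOf(',' + frames[i].id + ',') !== -1) {
        frames[i].onload = function () {
          this.scrollIntoView(true); // bring the top of the iframe back into the viewport
        };
      }
    }
  }
  if (window.addEventListener) {
    window.addEventListener('load', init, false);
  } else {
    window.attachEvent('onload', init);
  }
})();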

You can find more information and examples on http://futtta.be/frameMagic.

Embedding YouTube HTML5-video with newTube

With all the discussions about the place of Flash on the ever-evolving web and the excitement following Google’s announcement about YouTube going HTML5, one would almost forget that YouTube is only at the very start of their “open video” endeavor. The limitations of the current implementations are numerous; there’s no OGG (damn), no ads (yeah!) and no embedding either (damn) for example.

After looking into ways to call the YouTube mp4-file from within a Video for Everybody html-block (which is not possible, Google protects raw video-files using what seems to be a session-based hash that has to be provided in the URL), I decided to take another (dirty) approach; faking it!

The solution is entirely javascript-based and is as un-elegant as it is simple; create an html-file with a script include of http://futtta.be/newTube/newTube.js and a div with “id=newTube” containing a link to a YouTube-page, and the script automagically takes care of the rest. Check out http://futtta.be/newTube/ to see it in action.

The result is an embedded YouTube player which will display the HTML5-version if you’re running a browser which supports mp4/h264 playback (i.e. a recent version of Chrome or Safari) and if you enrolled in the beta. If either of these preconditions isn’t met, you’ll just see the plain old Flash-player.
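
To make the “faking it” a bit more concrete; a rough sketch of the approach (not the actual newTube.js source), with the mp4-URL deliberately left as a placeholder since that’s exactly the part Google protects:

// rough illustration of the newTube idea (not the actual newTube.js source)
function newTube() {
  var container = document.getElementById('newTube');
  if (!container) return;

  // extract the video-id from the YouTube-link inside the div
  var link = container.getElementsByTagName('a')[0];
  var match = /[?&]v=([\w-]+)/.exec(link.href);
  if (!match) return;
  var videoId = match[1];

  // can this browser play mp4/h264 natively (e.g. a recent Chrome or Safari)?
  var video = document.createElement('video');
  var canPlayMp4 = video.canPlayType &&
      video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== '';

  if (canPlayMp4) {
    // placeholder src: the real mp4-URL is protected by a session-based hash
    container.innerHTML = '<video controls width="480" height="385" ' +
        'src="PLAYABLE-MP4-URL-FOR-' + videoId + '"></video>';
  } else {
    // fall back to the plain old Flash-player
    container.innerHTML = '<embed type="application/x-shockwave-flash" ' +
        'width="480" height="385" src="http://www.youtube.com/v/' + videoId + '"></embed>';
  }
}

window.onload = newTube;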

Don’t get your hopes up too high; newTube is probably not as obvious a choice as normal YouTube embeds (for reasons I’ll get into in a follow-up post, when I have some time to spare that is). You’ll have to wait for someone (YouTube, Dailymotion, Vimeo, … are you listening?) to offer real embeddable html5-video (with support for both mp4/h264 and ogg/theora).

But I did have fun creating the very first html5-capable embedded YouTube-player ;-)

New: cross-document messaging

With new versions of our trusted browsers coming out, web developers who like living on the edge can start using some of the new features that are becoming available. One such goody is cross-document messaging, which is part of the HTML5 draft spec.

Cross-document messaging allows children of a window (think iframes, popups, …) to communicate using JavaScript, even if they originate from a different domain. This means that instead of just iframing an external application, without being able to integrate further, your page can send and receive messages to/from it. PostMessage could even be used to do cross-domain XHR (a hidden iframe on the same domain as a remote datasource can be used as a proxy to perform XmlHttpRequests on that remote domain) until the real thing hits the streets as well.

The two additions that allow you to perform such messaging are window.postMessage and an eventlistener for events of the “message” type to handle the message. A pretty straightforward example of this can be found on the site of jQuery’s John Resig (see also his latest blog entry about postMessage). As cross-domain javascript can be a potentially big security risk, taking into account some precautions is really really really really really necessary. Really!
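
A bare-bones example of both sides of the conversation, as described in the draft spec; a minimal sketch, with the domains and the element-id being placeholders:

// parent page (say http://example.com), talking to an iframe from another domain
var frame = document.getElementById('externalApp');
frame.contentWindow.postMessage('hello from the parent', 'http://other.example.org');

// inside the iframed page on http://other.example.org
window.addEventListener('message', function (event) {
  // always check the origin, this is one of those really necessary precautions
  if (event.origin !== 'http://example.com') return;
  alert('got message: ' + event.data);
  event.source.postMessage('hello back', event.origin); // answer the sender
}, false);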

On the downside (as if security is not a problem): this brand new feature is only available in Firefox 3 for now. My own little test (a copy of John Resig’s example with some minor tweaks) worked in Opera 9.2x (and 9.5b) as well, but postMessage seems to have been dropped from the final Opera 9.5, as the tests on Opera Labs don’t seem to work any more either. Support for postMessage is also available in Webkit (Safari‘s backbone) nightly builds and in Microsoft’s IE8 beta (with the event being ‘onmessage’ instead of ‘message’ and some other quirks, but hey, this is beta, no?).

So expect postMessage to be available in all major browsers by the end of the year. But why wait, if you know that Facebook is already using postMessage in their chat application? I wonder what they fall back to if it is not available though …

Firefox 3rc1 shines in Javascript benchmark

As the official release of Firefox 3 is getting closer, with Release Candidate 1 being available since May 17th, I decided to boldly go where codinghorror has gone before and do a quick-and-dirty Javascript-performance comparison of the different browsers I’ve got installed on my Dell Latitude D620 laptop, using WebKit’s SunSpider benchmark.

Let’s start with the results for the browsers on my Windows XP SP2 installation, ordered from slowest to fastest. Each test was executed twice; clicking on the results will teleport you to the detailed results, where you can paste the URL of another test to compare.

The MSIE7-results are probably not entirely representative, as I use Tredosoft’s standalone IE7. This is a bit of a hack to have IE7 on my otherwise MSIE6-based system. Moreover, my corporate Windows-installation is infested with crapware; notably McAfee OAS and ZoneAlarm seem to slow things down enormously. The codinghorror-tests indeed show significantly better results for this browser, although IE does have serious issues with string concatenation, which should be fixed in IE8.

On the same hardware, but booting Ubuntu 8.04 (Linux) from my external USB HD (a.k.a. my ‘disktop‘), I got the following results:

Firefox 3 RC1 seems slightly slower than b5, but maybe the Ubuntu b5-version is compiled with optimizations? Firefox is also faster on Ubuntu, but the anti-virus-bloat is probably messing with our heads here (although Opera is slower on Linux, go figure).

The general conclusion however: Firefox 3 is a huge step forward as far as Javascript-performance is concerned. Users of javascript-heavy web-applications such as Gmail, Google Reader, Zoho Office and Zimbra should benefit enormously from this. It would however be very interesting to perform similar tests with regards to ‘normal page rendering’ (html/css). Does anyone know of such benchmarks?

Are you doing Web2.0 the wrong way?

According to Jakob Nielsen, jumping on the web2.0-bandwagon often implies adding unneeded complexity instead of getting the basics right. I tend to agree; in our quest for sexier, more responsive web-sites and -applications, we seem to easily forget boring stuff such as accessibility and sometimes even usability. How much time did it take you to find your way around the new LinkedIn-UI? Have you tried to use a web2.0-site on a PDA or smartphone? With your keyboard instead of your mouse? Using a screenreader instead of your normal display? Or with JavaScript disabled (I installed the Firefox NoScript-extension to guard against XSS and other JS-attacks)? And if you have ever tried loading Gmail on a slow connection, you’ll surely have noticed that they automatically fall back to their more accessible “Web 1.0”-version.

Lately I’ve been reading a number of interesting articles on this subject and at work we’re carefully applying some Web2.0-techniques as well. Based on that, here are a few suggestions on how to build better websites and -applications:

  1. Don’t use GWT or Flex unless you’re building a complex desktop-style webapp and are willing to invest in a second “basic html”-version as e.g. Gmail does. Let’s face it, most websites and even -applications do not have the level of complexity that warrants the use of these RIA-frameworks.
  2. Develop your core functionality in “web1.0”-style, using semantic HTML (structure) and CSS (presentation) only and apply your usability- and accessibility-wisdom here.
  3. Sprinkle JavaScript (behavior) over that static foundation using an established JavaScript-framework to define event-handlers for objects and to modify the content of divs (but test if you can already write to the object, or you’ll get those ugly “operation aborted” errors in MSIE).  Give users a chance to opt out of this enhanced version, even if they do have JavaScript enabled. With this progressive enhancement (a.k.a. hijax) you add extra functionality for those people who can make use of it, without punishing users who can’t because of physical or technological impairments. Read more about progressive enhancement on London-based Christian Heilmann’s site.
  4. Only load your JavaScript-functions when you really need them, creating a kind of stub for the methods of an object and only loading the real method when it is needed. This technique is dubbed “lazy loading” and can help make your pages load & initialize much faster. You can learn more about the concept of “lazy loading” on digital-web.com (a rough sketch of the idea follows after this list).
  5. Use <noscript>-tags if you absolutely have to use JavaScript without having a meaningful object already in place in your static version. Explain what is going on and provide a link to a normal page where the same information or functionality can be found.
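
As promised in point 4; a rough sketch of the lazy-loading stub idea, with the reports-object, the loadScript-helper and the js/reports-full.js file all being made-up names for the sake of the example:

// rough illustration of the "lazy loading" stub idea; the real code is only
// fetched the first time the method is actually called
var reports = {
  generate: function () {
    var args = arguments, self = this;
    loadScript('js/reports-full.js', function () {
      // assumption: reports-full.js has replaced reports.generate with the real implementation
      self.generate.apply(self, args);
    });
  }
};

// made-up helper that appends a script-tag and fires a callback once it has loaded
function loadScript(src, onload) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onload;
  document.getElementsByTagName('head')[0].appendChild(s);
}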

Of course these tips won’t guarantee you a perfectly usable and accessible website or -application, but when done right, they will get you 80% of the way. A good functional analysis and thorough testing, both keeping usability and accessibility in mind, will push you towards 99.99%.