Firefox 3 RC1 shines in JavaScript benchmark

As the official release of Firefox 3 is getting closer, with Release Candidate 1 being available since May 17th, I decided to boldly go where codinghorror has gone before and do a quick-and-dirty JavaScript-performance comparison of the different browsers I’ve got installed on my Dell Latitude D620 laptop, using WebKit’s SunSpider benchmark.

Let’s start with the results for the browsers on my Windows XP SP2 installation, ordered from slowest to fastest. Each test was executed 2 times; clicking on the results will teleport you to the detailed results, where you can paste the URL of another test to compare.

The MSIE7-results are probably not entirely representative, as I use Tredosoft’s standalone IE7. This is a bit of a hack to have IE7 on my otherwise MSIE6-based system. Moreover, my corporate Windows-installation is infested with crapware; notably McAfee OAS and ZoneAlarm seem to be slowing things down enormously. The codinghorror-tests indeed show significantly better results for this browser, although IE does have serious issues with string concatenation, which should be fixed in IE8.

On the same hardware, but booting Ubuntu 8.04 (Linux) from my external USB HD (a.k.a. my ‘disktop‘), I got the following results:

Firefox 3 RC1 seems slightly slower than b5, but maybe the Ubuntu b5-version is compiled with optimizations? Firefox is also faster on Ubuntu, but the anti-virus-bloat is probably messing with our heads here (although Opera is slower on Linux, go figure).

The general conclusion, however: Firefox 3 is a huge step forward as far as JavaScript-performance is concerned. Users of JavaScript-heavy web-applications such as Gmail, Google Reader, Zoho Office and Zimbra should benefit enormously from this. It would however be very interesting to perform similar tests with regards to ‘normal page rendering’ (html/css). Does anyone know of such benchmarks?

Are you doing Web2.0 the wrong way?

According to Jakob Nielsen, jumping on the web2.0-bandwagon often implies adding unneeded complexity instead of getting the basics right. I tend to agree; in our quest for sexier, more responsive web-sites and -applications, we seem to easily forget boring stuff such as accessibility and sometimes even usability. How much time did it take you to find your way around the new LinkedIn-UI? Have you tried to use a web2.0-site on a PDA or smartphone? With your keyboard instead of your mouse? Using a screenreader instead of your normal display? Or with JavaScript disabled (I installed the Firefox NoScript-extension to guard against XSS and other JS-attacks)? And if you have ever tried loading Gmail on a slow connection, you’ll surely have noticed that it automatically falls back to the more accessible “Web 1.0”-version.
Lately I’ve been reading a number of interesting articles on this subject and at work we’re carefully applying some Web2.0-techniques as well. Based on that, here are a few suggestions on how to build better websites and -applications:

  1. Don’t use GWT or Flex unless you’re building a complex desktop-style webapp and you’re willing to invest in a second “basic html”-version as e.g. Gmail does. Let’s face it, most websites and even -applications do not have the level of complexity that warrants the use of these RIA-frameworks.
  2. Develop your core functionality in “web1.0”-style, using semantic HTML (structure) and CSS (presentation) only and apply your usability- and accessibility-wisdom here.
  3. Sprinkle JavaScript (behavior) over that static foundation, using an established JavaScript-framework to define event-handlers for objects and to modify the content of divs (but test whether you can already write to the object, or you’ll get those ugly “operation aborted” errors in MSIE). Give users a chance to opt out of this enhanced version, even if they do have JavaScript enabled. With this progressive enhancement (a.k.a. hijax) you add extra functionality for those people who can make use of it, without punishing users who can’t because of physical or technological impairments. Read more about progressive enhancement on London-based Christian Heilmann’s site. A minimal sketch of this approach follows below the list.
  4. Only load your JavaScript-functions when you really need them, by creating a kind of stub for the methods of an object and only loading the real method when it is needed. This technique is dubbed “lazy loading” and can help make your pages load & initialize much faster. You can learn more about the concept of “lazy loading” on digital-web.com; a sketch of the stub idea also follows below the list.
  5. Use <noscript>-tags if you absolutely have to use JavaScript without having a meaningful object already in place in your static version. Explain what is going on and provide a link to a normal page where the same information or functionality can be found.
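
To make point 3 a bit more concrete, here is a minimal sketch in plain JavaScript (no framework, and the element id’s and file names are made up for the example): the static page already contains a working link, and we only layer extra behaviour on top of it once the page has loaded.

    // the static page contains a normal link <a href="comments.html" id="show-comments">
    // that keeps working without JavaScript; we only enhance it after the page has loaded
    function enhanceCommentsLink() {
         var link = document.getElementById('show-comments');
         var target = document.getElementById('comments');
         if (!link || !target) {
              return; // nothing to enhance, the basic page still works
         }
         link.onclick = function() {
              target.innerHTML = 'Loading comments …';
              // an XMLHttpRequest call to fetch and insert the comments would go here
              return false; // cancel the normal navigation for JS-enabled visitors
         };
    }
    // attach only after the load event, so we never write to objects that do not exist yet
    if (window.addEventListener) {
         window.addEventListener('load', enhanceCommentsLink, false);
    } else if (window.attachEvent) {
         window.attachEvent('onload', enhanceCommentsLink); // MSIE's event model
    }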
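
And a sketch of the “lazy loading” stub idea from point 4 (again with hypothetical file- and object-names): the heavyweight script is only downloaded the first time somebody actually needs it.

    // stub for a hypothetical photo gallery: the real code lives in /js/gallery-full.js
    // and is only fetched the first time someone actually opens the gallery
    function loadScript(src, onLoaded) {
         var done = false;
         var script = document.createElement('script');
         script.src = src;
         script.onload = function() { if (!done) { done = true; onLoaded(); } };
         script.onreadystatechange = function() { // for older MSIE
              if (!done && (this.readyState == 'loaded' || this.readyState == 'complete')) {
                   done = true;
                   onLoaded();
              }
         };
         document.getElementsByTagName('head')[0].appendChild(script);
    }

    var gallery = {
         show: function(id) {
              // gallery-full.js is expected to overwrite the gallery object with the real thing
              loadScript('/js/gallery-full.js', function() { gallery.show(id); });
         }
    };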

Of course these tips won’t guarantee you a perfectly usable and accessible website or -application, but when done right, they will get you 80% of the way. A good functional analysis and thorough testing, both keeping usability and accessibility in mind, will push you towards 99.99%.

Don’t let Google fool you; tame your Flash!

As I wrote some time ago, Google indeed does index Flash. Great? Well, maybe not: you might even lose valuable site visits this way! This is because Google does not direct users to the page on your site that contains the Flash (as it does for image search results), but to the standalone SWF-file. That means that people doing a normal search in Google, will in some circumstances get high-ranking links straight to your Flash-files instead of to your website and that these prospects will not be drawn into your website’s sales funnel at all. So much for SEO.
Solutions? Well, you could refrain from using Flash altogether; there’s too much of that stuff around anyhow. Or you could prohibit googlebot (and others) from indexing swf-files by specifying this in your robots.txt-file.
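Such a robots.txt rule could look something like this (a sketch; googlebot understands the * and $ wildcards, but not every crawler does):

    # keep Googlebot from fetching and indexing standalone Flash files
    User-agent: Googlebot
    Disallow: /*.swf$
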
These are both great remedies, but somehow I think not everyone will agree. So what if we could perform some ActionScript-magic to make a Flash-file force itself in its correct context? Although I am not a (Flash-)developer by any account, I hacked together a small demo (in ActionScript 2) to show how that could be done.

And indeed, if you point your browser to the standalone swf-file, you’ll see you are redirected to this page. How is this accomplished? Quite easily, actually:

  1. add a flashvar (e.g. “embedded=true”) to the object/embed tags in the html (see the example markup below this list)
  2. check for the value of that flashvar in the first frame of your movie:
    // "embedded" is the flashvar passed in via the object/embed tags (see step 1)
    var embedTrue:String = _root.embedded;
    if (embedTrue == "true") {
         // all normal flashyness goes here
         stop();
    }
    else {
         // standalone SWF: jump to frame 2, which does the redirect
         gotoAndPlay(2);
         stop();
    }
  3. and in frame 2 we do the redirect to the correct url:
    var runOnce:Boolean = false;
    onEnterFrame = function() {
         // onEnterFrame keeps firing, so make sure the redirect only runs once
         if (!runOnce) {
              runOnce = true;
              var targetUrl:String = "http://blog.futtta.be/2008/05/06/";
              getURL(targetUrl); // note: getURL, ActionScript 2 is case-sensitive
         }
    };
    stop();
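
For completeness, the markup for step 1 could look something like this (a simplified sketch; “demo.swf” and the dimensions are made up, in practice you keep whatever attributes your publish template generates and just add the FlashVars to both the object- and the embed-tag):

    <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="400" height="300">
         <param name="movie" value="demo.swf" />
         <param name="FlashVars" value="embedded=true" />
         <embed src="demo.swf" FlashVars="embedded=true" width="400" height="300"></embed>
    </object>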

I’m sure the code stinks and it indeed is only ActionScript 2 and not 3, but maybe someone out there can build something useful out of this general idea. And if not, I’ve had fun learning some (very) basic ActionScripting. Now off to “Add/remove software” to get rid of that hog of an IDE! 🙂

No microformats (to be seen) in Firefox3!

Although support for microformats in FF3 was talked about early 2007, no reference to them can be found in the recent beta’s user interface. As I think microformats could be an important part of the future semantic web, I mailed 2 Firefox microformats evangelists to ask what happened. Alex Faaborg, User Experience Designer at Mozilla, replied:

“The API for accessing microformatted content on a page will be included in Firefox 3, however a user interface for acting on microformatted content was unfortunately pushed back to the next release.

We are hoping that the API leads to several innovative Firefox extensions for microformats that we can use to help determine what the best user experience is for interacting with data on a page.  You can learn more about the API here: http://developer.mozilla.org/en/docs/Using_microformats

The reason for microformats not making it into the FF3 user interface apparently was a lack of agreement with regards to how they should be visualized. According to feedback received from Michael Kaply, author of the Operator microformats FF-extension, using a toolbar was too intrusive, they didn’t want to clutter the URL bar and they couldn’t come up with the right way to make the microformats-functionality extendable.
All in all it’s a good thing the microformats-functionality exists under the hood, but very unfortunate that Firefox3 will not allow ‘average users’ to see and interact with microformatted content. Maybe we should tell everyone to just install Operator for now?
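
For those who have never actually seen a microformat: it is nothing more than plain (X)HTML with some agreed-upon class names. A minimal hCard could look like this (name and URL just as an example):

    <div class="vcard">
         <span class="fn">Frank Goossens</span>,
         <a class="url" href="http://blog.futtta.be/">blog.futtta.be</a>
    </div>

Operator detects markup like this and offers to e.g. export the contact, which is exactly the kind of interaction that will not be in the default FF3 user interface.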

Ubuntu Hardy upgrade a breeze!

I upgraded my Ubuntu “disktop” from 7.10 to the new Ubuntu 8.04 (aka Hardy Heron) today. The entire process took around 1h30 (the download of packages was rather slow) and was incredibly straightforward (as described in the upgrade docs). Everything seems to work flawlessly as far as I can tell.
Hardy is running Firefox 3 beta 5, but Ubuntu/Canonical will provide upgrades as FF3 goes through its final release-stages. Strange as including a beta might seem, this actually is a wise thing. FF3b4 and b5 have proven to be quite stable (I switched from FF2 approx. 2 months ago and haven’t looked back since). Moreover, Hardy is a “Long term support”-release, with the Desktop-version receiving support until 2011 and the Server-version until 2013, and using FF3 ensures Ubuntu of continued support (read: security updates) from the Mozilla-team in the years to come.
Check out the release notes for more details about Ubuntu 8.04 LTS.


Attentio tracking buzz, but language is a bitch

I am important! Or rather; some bloggers are important. Or better still; some advertisers, marketeers and PR-officers consider blogs to be an important channel to communicate with and through. High-profile blogs (which this one is not by any measure) can indeed be instrumental in launching geeky products, kick-starting viral campaigns and in some cases even influencing the public debate. But what you can’t measure doesn’t exist, and that’s where buzz tracking tools such as the one from Brussels-based Attentio come into play.
Attentio spiders blogs, forums and news-sites and indexes all that content in what must be a super-sized database. In front of that database sits a combined data-mining application and website, which allows communication pros to follow up on the positive and negative buzz around their products, product features and competitors on the “Brand dashboard” in real time.
As straightforward as this may seem, collecting all that content, filtering out the garbage (e.g. splogs and NSFW-content) and creating a blazingly fast web-based application to publish these reports on-the-fly is quite a feat. The demo I got last week during the Emakina/Reference Academy by Amaia Lasa and Kalina Lipinska was impressive enough to make me want to try the application myself in between sessions. Attentio’s Linda Margaret patiently “tomtommed” me through the interface (thanks Linda!), giving me a better overview of all the available graphs and screens. All in all an impressive product with a lot of potential, especially for multinationals that have a lot of blog-visibility.
A lot of potential? Yes, because there is room for improvement (isn’t there always?). Attentio is great for buzz-quantification, for showing how many blogs discuss your products, but I had the impression that reports that try to extract more than these “simple” quantifications were still rough around the edges. This seems largely due to what is the basic building block of a blog: language.
There is, for example, a report which allows you to see buzz per region or country. For this qualification the domain-name and/or geo-location of the IP-address are used. But as anyone can choose a TLD of their liking (lvb.net and blog.zog.org to name but two Flemish A-list bloggers) and as hosting abroad is no exception (lvb.net is hosted in the USA and this blog is on a server in Germany), a considerable amount of blogs in the reports I saw were not attributed to a country or region, but were instead classified by their language (Dutch/ French) in the same graph. Attentio intends to use information disclosed in the blog content itself to better pinpoint location.
Extracting non-quantitative information from blogs, forums and news-sites requires techniques from the fields of computational linguistics and artificial intelligence. One of the most exciting reports in the Brand Dashboard is the “sentiments”-report, which tries to categorize buzz as positive, neutral or negative. Up until now this is done using hard-coded rules which only allow content in English to be qualified (hence my writing this post in English, curious if this rings a bell on their own Brand Dashboard). Indeed Attentio is working on this, as witnessed by the description of the specialties of the smart Attentionistas on their “company info” page. They disclosed they’re working with the K.U. Leuven on new AI-based classification software (using Bayesian text classification, one would suspect) which will be released into production later this year. I’m pretty sure this new software could be used for more than just extracting the “sentiment” of a blogpost, so I’ll certainly be keeping an eye on what these smart boys and girls are doing!
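To give an idea of what such Bayesian text classification boils down to (this is purely illustrative and most certainly not Attentio’s or the K.U. Leuven’s actual implementation), here is a toy naive Bayes classifier in a few lines of JavaScript: count how often words appear in positive and negative example texts and score new texts with Bayes’ rule, assuming the words are independent.

    // toy naive Bayes sentiment classifier: train on labelled example sentences,
    // then pick the label with the highest (log) probability for a new sentence
    function NaiveBayes() {
         this.wordCounts = { positive: {}, negative: {} };
         this.totalWords = { positive: 0, negative: 0 };
         this.docCounts  = { positive: 0, negative: 0 };
    }
    NaiveBayes.prototype.train = function(label, text) {
         var words = text.toLowerCase().split(/\W+/);
         this.docCounts[label]++;
         for (var i = 0; i < words.length; i++) {
              if (!words[i]) continue;
              this.wordCounts[label][words[i]] = (this.wordCounts[label][words[i]] || 0) + 1;
              this.totalWords[label]++;
         }
    };
    NaiveBayes.prototype.classify = function(text) {
         var words = text.toLowerCase().split(/\W+/);
         var totalDocs = this.docCounts.positive + this.docCounts.negative;
         var bestLabel = null, bestScore = -Infinity;
         for (var label in this.wordCounts) {
              // log prior + sum of log likelihoods, with add-one smoothing
              var score = Math.log(this.docCounts[label] / totalDocs);
              for (var i = 0; i < words.length; i++) {
                   if (!words[i]) continue;
                   var count = this.wordCounts[label][words[i]] || 0;
                   score += Math.log((count + 1) / (this.totalWords[label] + 1000)); // 1000 = rough vocabulary-size guess
              }
              if (score > bestScore) { bestScore = score; bestLabel = label; }
         }
         return bestLabel;
    };

    // usage
    var nb = new NaiveBayes();
    nb.train('positive', 'impressive product with a lot of potential');
    nb.train('negative', 'rough around the edges and quite slow');
    nb.classify('the dashboard is impressive'); // returns 'positive'

Real-world systems obviously need far larger training sets, language detection, negation handling and so on, which is exactly where the hard work is.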
For those of you who would like to create some buzz-tracker graphs, Attentio offers basic functionality for free on http://www.trendpedia.com. Happy tracking!

Cache header magic (or how I learned to love http response headers)

Is your site dead slow? Does it use excessive bandwidth? Let me take you on a short journey to the place where you can lessen your worries: the http-response-headers, where some wee small lines of text can define how your site is loaded and cached.
So you’re interested and you are going to read on? In that case let me skip the foolishness (I’m too tired already) and move on to the real stuff. There are 3 types of caching-directives that can be put in the http response: permission-, expiry- and validation-related headers:

  1. Permission-related http response headers tell the caching algorithm if an object can be kept in cache at all. The primary way to do this (in HTTP/1.1-land) is by using cache-control:public, cache-control:private or cache-control:no-cache. No-cache should be obvious, private indicates individual browser caches can keep a copy but shared caches (mainly proxies) cannot and public allows all caches to keep the object. Pre-HTTP/1.1 this is mainly signalled by the presence of a Last-Modified-date (Last-Modified; although that in theory is a validation-related cache directive) combined with an Expires-date set in the future.
  2. The aim of expiry-related directives is to tell the caching mechanism (in a browser or e.g. a proxy) that upon reload the object can be reused without reconnecting to the originating server. These directives can thus help you avoid network roundtrips while your page is being reloaded. The following expiry-related directives exist: expires, cache-control:max-age and cache-control:s-maxage. Expires sets a date/time (in GMT) at which the object in cache expires and will have to be revalidated. Cache-control:max-age and cache-control:s-maxage (both of which take precedence over expires if used in conjunction with it) define how old an object may get in cache (the age being taken from the ‘Age’ http response header or calculated from the ‘Date’ response header). s-maxage is only used by shared caches (and takes precedence over max-age there), whereas max-age is used by all caches (you could use this to e.g. allow a browser to cache a personalised page, but prohibit a proxy from doing so). If neither expires, cache-control:max-age nor cache-control:s-maxage is defined, the caching mechanism is allowed to make an estimate (this is called “heuristic expiration“) of the time an object can remain in cache, based on the Last-Modified-header (the true workhorse of caching in http/1.0).
  3. Validation-related directives give the browser (or caching proxy) a means by which an object can be (re-)validated, allowing for conditional requests to be made to the server and thus limiting bandwidth usage. Response-headers used in this respect are principally Last-Modified (date/timestamp the object was … indeed modified the last time) and ETag (which should be a unique string for each object, only changing if the object got changed). An example of these headers in action follows below this list.
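
To make this a bit more tangible, here is what such headers could look like for a static image that every cache may keep for a week, followed by the conditional request a browser would make once the object has gone stale (an invented but hopefully plausible example):

    HTTP/1.1 200 OK
    Date: Mon, 19 May 2008 10:00:00 GMT
    Cache-Control: public, max-age=604800
    Expires: Mon, 26 May 2008 10:00:00 GMT
    Last-Modified: Fri, 16 May 2008 08:30:00 GMT
    ETag: "abc123"
    Content-Type: image/png

    [a week later the browser revalidates instead of re-downloading:]

    GET /images/logo.png HTTP/1.1
    If-Modified-Since: Fri, 16 May 2008 08:30:00 GMT
    If-None-Match: "abc123"

    HTTP/1.1 304 Not Modified

Cache-Control covers permission and expiry, Expires is there for HTTP/1.0 caches, and Last-Modified/ETag make the cheap 304-revalidation possible.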

And there you have it, those are the basics. So what should you do next? Perform a small functional analysis of how you want your site (html, images, css, js, …) to be cached at proxy or browser-level. Based on that, tweak the settings of your webserver (for static files served from the filesystem, mostly images, css and js) to allow for caching. The application that spits out html should include the correct headers for your pages as well, so these can be cached too (if you want that to happen, of course). And always keep in mind that no matter how well you analyze your caching-needs and how well you set everything up, it all depends on which http-standard (be it HTTP/1.0 or 1.1) the caching application follows (so you probably want to include directives for both HTTP/1.0 and 1.1) and how well it adheres to that standard … Happy caching!

(Disclaimer: there might well be some errors in here, feel free to correct me if I missed something!)

Wall Street Journal: more murders in Belgium than in the US? Wrong!

On the 22nd of August, renowned journalist and neo-conservative columnist Bret Stephens wrote an editorial (“The Many Faces of Belgian Fascism”) for the Wall Street Journal in which he stated:

“Belgium’s per capita murder rate, at 9.1 per 100,000 is nearly twice that of the U.S.”.

After some websearching, this figure proved utterly wrong. An extract of the mail I sent Mr. Stephens in that respect:

The figure of 9,1 per 100.000 is not correct, as it is based on figures that include attempted -i.e. unsuccessful- murders. Indeed, in 2005 the following statistics applied:

  • Total number of ‘successful’ murders: 174
  • Total attempted (i.e. ‘unsuccessful’) murders: 770
  • Total ‘successful and unsuccessful’ murders: 944

(source: figures in http://www.polfed-fedpol.be/crim/crim_statistieken/2005/reports/fr/2005/nat/fr_etats_2005_nat.pdf a French-language pdf from the site of the Belgian police. See page 2 under “Infr. contre l’integrite physique”, where ‘Acc.’ stands for successful and ‘Tent.’ for attempted, unsuccessful).
So when calculating the murder ratio based on approximately 10.500.000 inhabitants (174 / 10.500.000 × 100.000 ≈ 1,66), this figure is not 9 but 1,7 per 100.000. And 1,7 is -not coincidentally- the exact number a UN-report mentioned for the homicide-ratio in 1996 in Belgium (as mentioned on http://www.haciendapub.com).
This means that the correct comparison between murder rates in Belgium and the USA (5,5/100.000 in 2004, cf. http://www.disastercenter.com/crime/uscrime.htm) is not “Belgium’s per capita murder rate is nearly twice that of the USA” but “… is 3 times lower than that of the USA”, which of course places the “pervasive and growing sense of lawlessness” you mention in an entirely different perspective.
I hope this information can be of further use to you and your sources. Do not hesitate to contact me in case you have further remarks or questions about this matter.
Kind regards,
Frank Goossens

I do not agree with most of what Mr. Stephens writes in the rest of his column, but this mainly boils down to a -vast- difference in political beliefs. It is a pity however that, being the high-profile journalist he is, Mr. Stephens did not check the ‘facts’ in his editorial better than he did. He may need to question the reliability of his sources, even if he shares their ideology …