WordPress 2.6 has been pushed out the door at Automattic and, as usual, it contains some exciting new goodies. So I fired up my trusty upgrade script, but got an ugly PHP error when accessing the database update page:
Parse error: syntax error, unexpected T_SL in wp-includes/widgets.php on line 464
Turns out that the wp_widget_search function in wp-includes/widgets.php contained some remnants of an SVN merge (presumably the leading "<<<<<<<" of a conflict marker, which PHP chokes on as the T_SL shift operator). I don't know whether it was a sync problem on my side or whether the faulty code was on the SVN server (it isn't now), but I ended up copy/pasting the correct function from a fresh tarball I downloaded.
With new versions of our trusted browsers coming out, web developers who like living on the edge can start using some of the new features that are becoming available. One such goody is cross-document messaging, which is part of the HTML5 draft spec. Cross-document messaging allows children of a window (think iframes, popups, …) to communicate using JavaScript, even if they originate from a different domain. This means that instead of just iframing an external application without being able to integrate any further, your page can send and receive messages to/from it. postMessage could even be used to do cross-domain XHR (a hidden iframe on the same domain as a remote data source can be used as a proxy to perform XmlHttpRequests on that remote domain) until the real thing hits the streets as well.

The two additions that allow you to perform such messaging are window.postMessage and an event listener for events of the "message" type to handle incoming messages. A pretty straightforward example of this can be found on jQuery author John Resig's site (see also his latest blog entry about postMessage). As cross-domain JavaScript can be a big security risk, taking some precautions is really, really necessary. Really!

On the downside (as if security were not a problem): this brand new feature is only available in Firefox 3 for now. My own little test (a copy of John Resig's example with some minor tweaks) worked in Opera 9.2x (and 9.5b) as well, but postMessage seems to have been dropped from the final Opera 9.5, as the tests on Opera Labs don't seem to work any more either. Support for postMessage is also available in WebKit (Safari's backbone) nightly builds and in Microsoft's IE8 beta (with the event being 'onmessage' instead of 'message' and some other quirks, but hey, this is beta, no?). So expect postMessage to be available in all major browsers by the end of the year. But why wait, when Facebook is already using postMessage in their chat application? I wonder what they fall back to if it is not available though …
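To give you an idea, here is a minimal sketch of how the API is specced (the URLs and element id are made up, and the exact argument and property names have been shifting between the early implementations, so treat this as illustrative rather than gospel): the parent page posts a message to an iframe on another domain, and the framed page checks the sender's origin before acting on it:

// on the parent page (e.g. http://blog.futtta.be), talking to an iframe from another domain
var frame = document.getElementById("externalWidget"); // hypothetical iframe id
frame.contentWindow.postMessage("hello from the parent", "http://widgets.example.org");

// inside the framed page on http://widgets.example.org
window.addEventListener("message", function(event) {
    // always verify the origin, or anyone who frames you can feed you messages
    if (event.origin !== "http://blog.futtta.be") return;
    alert("parent says: " + event.data);
    // and answer the sender
    event.source.postMessage("hello back", event.origin);
}, false);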
[UPDATE June 2009: this is solved in WordPress 2.8] Having a fair amount of experience with WordPress installation and configuration, I wanted to install trusty old WP 2.5.1 on an idle desktop (WinXP + XAMPP) at work to do some blogging on our intranet. The installation itself went smoothly (how hard can unpacking a zip file be), but after some time the damn thing stopped working, producing nasty timeout errors caused by, among others, wp-includes/update.php and wp-admin/includes/update.php. The problem is that WordPress tries to open an internet connection (using fsockopen) to see if updates are available. Great, except when you're trying to run WordPress on an intranet behind a proxy, without a (direct) connection to the internet. After some unsuccessful fiddling in multiple WordPress PHP files, I ended up disabling fsockopen in php.ini (disable_functions)!
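For the record, the "fix" boils down to a single line in php.ini (where php.ini lives depends on your XAMPP install, and any functions already listed in disable_functions should of course be kept), followed by an Apache restart:

disable_functions = fsockopen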
Disabling! Fsockopen! In php.ini! Just to have a working WP?
I mean, come on guys, why doesn't WordPress provide configuration options where you can specify if and how (what type of proxy, what address to find it on, …) it should try to connect to the internet? I even made this truly amazing UI mock-up, which you guys can just like copy/paste straight into your code:
( ) No internet connection available (WordPress won't be able
to warn you about updates!)
________________________________________________________________________________
I'll be at WebScene 2008 today and if all goes well, I'll be bringing you live updates of the event (as I did last year). So watch this space if you're interested! Being the commuter I am, I took the train to Asse and rode my bike from Asse to Affligem (passing Asbeek and Edingen, very nice!) to arrive here at 9h00. So I'm at the conference center, scored wifi access and I'm ready to watch and learn.

Bart Van Herreweghe (blog) kicked off with a talk about the new Belgium.be. The Kanselarij van de Eerste Minister worked together with Fedict for the production of the new portal, which was built by a multitude of companies such as IBM, Amplexor, Panoptic, Netway and Internet Architects. Because of the large amount of information published on the portal, Internet Architects and Netway played a very important role in information and user-centric interface design, introducing the idea of "doormat" navigation, which could be compared to (part of) a sitemap being displayed on a (theme) homepage. Technology-wise, belgium.be uses Tridion as WCMS, with templates that contain validated XHTML and a strong focus on accessibility, aiming at AnySurfer Plus compliance. The search module, which will spider a great number of federal websites, is based on Lucene and was developed by Panoptic (Cronos) with LBi.

Panoptic's Ines Vanlangendonck (blog) talked about the importance of usable web content management. Choosing a good foundation (WCM product) and customizing it to the (backend) users' needs (e.g. adding or removing DAM functionality, rich text editor functionality, online translation, …) should help get your users (content owners, editors, …) on board. Looking at the poor adoption rate of the web content management tool chosen at a certain telco company a few years ago, she couldn't be more spot-on.

Ex-colleague Philip Achten from The Reference presented the implementation of the new Thomas Cook website. This travel website is an e-commerce business platform first and foremost, with on average 15,000 unique visitors/day in 2007 and an estimated growth of 50% in 2008. One of the main goals of the new website was to allow the content team (15 people) and the travelling reporters to manage web content in a decentralized way. The Reference implemented Sitecore 5.3 for this purpose, a powerful Microsoft ASP.NET-based WCM solution, deployed on a load-balanced environment (2 web servers with IIS and 1 MS SQL database server). Next to the pure content management, a number of applications have been built, like the destination search, newsletter, user registration and personalisation, and of course the crucial booking application (connecting to the backend booking engine). In a next phase, building on the user authentication application, user-generated content functionality will be added, allowing registered visitors to add text, pictures and video.

Ektron's Norman Graves held a talk titled "Key Technologies and how they impact some real world examples". He talked about taxonomy and how it's used in search, geomapping and personalisation in Ektron CMS 400.NET.

Lunchtime has come and gone, time for the afternoon tracks. I started with the presentation about Arte+7, the Arte media portal. The website and presentation were done by CoreMedia, who also provided the CMS and DRM infrastructure. Videos are published in FLV and WMV formats, with geolocalisation to limit the countries from which one can watch the content. The same technology is also used in the Arte VOD site, for which Arte+7 is a teaser. Kinda nice, but lots of JavaScript and Flash on that site, not really accessible.

For the 2nd session I moved to track 5, where U-sentric's Tara Schrimpton-Smith talked about "Guerilla Usability Tests? User testing on a shoestring". Her advice: use friends of friends, somewhere between 2 and 5 users (with 2 testers you should be able to find 50% of usability issues, with 5 users 85%), and limit the number of tasks you'll be testing. She concluded the session with a live example: someone shouted the name of her website, someone else volunteered, and the task was 'what is the address of the headquarters'. Judging by the time it took the test person to find this information, there are some usability issues on barry-callebaut.com. A fun session!

Next up: Robin Wauters (blog) about "Social media is not an option". Not much stuff to learn here (Robin talked about Technorati, Attentio, involving 'influential bloggers', blogging to showcase knowledge, "Dell hell", buzz, virals, …), but it's nice to be able to put a face on the guy behind Plugg and Edentity. And we'll finish off with AGConsult's Karl Gilis with "9 tips to help users find what they're looking for on your website". So let's create an ordered list for that purpose:
ensure the accessibility of your site (it should work in all common browsers/OSes, don't misuse technology, make sure Google can crawl your site)
speed up page load times, the user decides in half a second if (s)he’ll stay or not
make navigation easy to use (structure, terminology, placement)
provide clear overview pages (example: belgium.be and its doormats)
your search should be as good as Google's (depends on technology and content!)
use an intuitive page lay-out
make your text legible (Verdana 10pt, Arial if you’re adventurous)
write for the web
make sure the info is there (do user needs analysis)
A fun session as well, those usability guys and girls know how to entertain! My conclusion: this was not an uninteresting day, but the focus was clearly less technical than in the previous year's edition. Content management (around which much of this event was focused) is slowly but surely becoming a commodity, and vendors are having a hard time differentiating themselves from their competitors. It is my feeling that the bigger changes and challenges with regards to "the web" are more on the application front, where backend integration (SOA, web services, …) and RIAs (using Ajax, GWT, Flex, …) are today's hot topics. The fact that WebScene 2008 did not explore these new frontiers (and their implications with regards to business, marketing, usability and accessibility) is a missed opportunity, really. Let's hope they reconnect with the webtech trends next year! And maybe I'll be there to let you know?
I went to a Dolmen-organized seminar about RIAs today, where some smart people talked about GWT, Flex and JavaFX. I hooked up with an old acquaintance there (he was a customer of my previous employer actually, working in banking and finance) and we exchanged ideas about when and, more importantly, when not to use RIA technologies. I just received a mail from him as well, in which he wrote (roughly translated from Dutch):
I’ll keep you posted on our findings concerning RIA as well, but when I tried to visit www.parleys.com at work just now, all I saw was a black screen. In that case I prefer those PIA’s; they might not be that fancy, but they do work.
Let's start with the results for the browsers on my Windows XP SP2 installation, ordered from slowest to fastest. Each test was executed twice; clicking on the results will teleport you to the detailed results, where you can paste the URL of another test to compare.
The MSIE7 results are probably not entirely representative, as I use Tredosoft's standalone IE7. This is a bit of a hack to have IE7 on my otherwise MSIE6-based system. Moreover, my corporate Windows installation is infested with crapware; notably McAfee OAS and ZoneAlarm seem to be slowing things down enormously. The codinghorror tests indeed show significantly better results for this browser, although IE does have serious issues with string concatenation, which should be fixed in IE8.
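The classic workaround for slow string concatenation, by the way, is to collect the fragments in an array and join them in one go rather than repeatedly appending with +=; a contrived sketch:

// naive version: repeated += on an ever-growing string (painfully slow in older IE)
var str = "";
for (var i = 0; i < 10000; i++) {
    str += "chunk " + i;
}

// workaround: push the parts into an array and join them once at the end
var parts = [];
for (var i = 0; i < 10000; i++) {
    parts.push("chunk " + i);
}
var str2 = parts.join("");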
On the same hardware, but booting into Ubuntu 8.04 (Linux) from my external USB HD (a.k.a. my 'disktop'), I got the following results:
Konqueror 4: not tested yet, results were supposed to follow later today, but I can't get the test to run to completion; any KDE user want to give this a try?
Firefox 3 RC1 seems slightly slower than b5, but maybe the Ubuntu b5 version was compiled with optimizations? Firefox is also faster on Ubuntu, but the anti-virus bloat on my Windows install is probably messing with our heads here (although Opera is slower on Linux, go figure).
The general conclusion, however: Firefox 3 is a huge step forward as far as JavaScript performance is concerned. Users of JavaScript-heavy web applications such as Gmail, Google Reader, Zoho Office and Zimbra should benefit enormously from this. It would however be very interesting to perform similar tests with regards to 'normal' page rendering (HTML/CSS). Does anyone know of such benchmarks?
According to Jakob Nielsen, jumping on the Web 2.0 bandwagon often implies adding unneeded complexity instead of getting the basics right. I tend to agree; in our quest for sexier, more responsive websites and applications, we seem to easily forget boring stuff such as accessibility and sometimes even usability. How much time did it take you to find your way around the new LinkedIn UI? Have you tried to use a Web 2.0 site on a PDA or smartphone? With your keyboard instead of your mouse? Using a screen reader instead of your normal display? Or with JavaScript disabled (I installed the Firefox NoScript extension to guard against XSS and other JS attacks)? And if you have ever tried loading Gmail on a slow connection, you'll surely have noticed that it automatically falls back to the more accessible "Web 1.0" version.

Lately I've been reading a number of interesting articles on this subject, and at work we're carefully applying some Web 2.0 techniques as well. Based on that, here are a few suggestions on how to build better websites and web applications:
Don't use GWT or Flex unless you're building a complex desktop-style web app and you're willing to invest in a second "basic HTML" version as e.g. Gmail does. Let's face it, most websites and even web applications do not have the level of complexity that warrants the use of these RIA frameworks.
Develop your core functionality in "web 1.0" style, using semantic HTML (structure) and CSS (presentation) only, and apply your usability and accessibility wisdom here.
Sprinkle JavaScript (behavior) over that static foundation, using an established JavaScript framework to define event handlers for objects and to modify the content of divs (but test whether you can already write to the object, or you'll get those ugly "operation aborted" errors in MSIE). Give users a chance to opt out of this enhanced version, even if they do have JavaScript enabled. With this progressive enhancement (a.k.a. hijax) you add extra functionality for those people who can make use of it, without punishing users who can't because of physical or technological impairments (see the sketch after this list). Read more about progressive enhancement on London-based Christian Heilmann's site.
Only load your JavaScript functions when you really need them: create a kind of stub for an object's methods and only load the real method when it is needed. This technique is dubbed "lazy loading" and can help make your pages load and initialize much faster; the sketch after this list shows the basic idea. You can learn more about the concept of "lazy loading" on digital-web.com.
Use <noscript>-tags if you absolutely have to use JavaScript without having a meaningful object already in place in your static version. Explain what is going on and provide a link to a normal page where the same information or functionality can be found.
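To make the progressive-enhancement and lazy-loading points a bit more tangible, here is a small sketch in plain JavaScript (the element ids, file name and loadScript helper are made up for the example; in real life you'd probably lean on your JavaScript framework instead): a plain link keeps working as a normal page request, gets an onclick handler only if the element actually exists, and only pulls in the heavy script the first time it is needed:

// run after the static page has loaded, so the elements we enhance actually exist
window.onload = function() {
    var link = document.getElementById("showDetails"); // in the static version this is a plain link to a normal page
    if (!link) return; // nothing to enhance: the "web 1.0" page keeps working as-is

    // lazy-loading stub: the real renderer is only downloaded the first time it is called
    var renderDetails = function(container) {
        loadScript("/js/details-renderer.js", function() {
            renderDetails = window.renderDetails; // assume the loaded file defines the real window.renderDetails
            renderDetails(container);
        });
    };

    link.onclick = function() {
        renderDetails(document.getElementById("detailsContainer"));
        return false; // cancel normal navigation only when the enhanced path is taken
    };
};

// minimal helper: append a script tag and call back once it has loaded
function loadScript(src, callback) {
    var script = document.createElement("script");
    script.src = src;
    script.onload = callback; // good enough for this sketch; older IE would need onreadystatechange
    document.getElementsByTagName("head")[0].appendChild(script);
}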
Of course these tips won't guarantee you a perfectly usable and accessible website or application, but when done right they will get you 80% of the way. A good functional analysis and thorough testing, both with usability and accessibility in mind, will push you towards 99.99%.
As I wrote some time ago, Google indeed does index Flash. Great? Well, maybe not: you might even lose valuable site visits this way! This is because Google does not direct users to the page on your site that contains the Flash (as it does for image search results), but to the standalone SWF file. That means that people doing a normal search in Google will, in some circumstances, get high-ranking links straight to your Flash files instead of to your website, and that these prospects will not be drawn into your website's sales funnel at all. So much for SEO.

Solutions? Well, you could refrain from using Flash altogether; there's too much of that stuff around anyhow. Or you could prohibit googlebot (and others) from indexing SWF files by specifying this in your robots.txt file. These are both great remedies, but somehow I think not everyone will agree. So what if we could perform some ActionScript magic to make a Flash file force itself into its correct context? Although I am not a (Flash) developer by any account, I hacked together a small demo (in ActionScript 2) to show how that could be done.
And indeed, if you point your browser to the standalone SWF file, you'll see that you are redirected to this page. How is this accomplished? Quite easily actually:
add a flashvar (e.g. "embedded=true") to the object/embed tags in the html (a sample snippet follows after the frame-2 code below)
check for the value of that flashvar in the first frame of your movie:
// frame 1: check the flashvar that was set in the html
var embedTrue = _root.embedded;
if (embedTrue == "true") {
    // all normal flashyness goes here
    stop();
} else {
    gotoAndPlay(2);
    stop();
}
and in frame 2 we do the redirect to the correct url:
onEnterFrame = function() {
    // weird, stupid test to avoid this from being executed more than once
    // (onEnterFrame keeps firing every tick, even when the playhead is stopped)
    if (!runOnce) {
        var targetUrl = "http://blog.futtta.be/2008/05/06/";
        getURL(targetUrl); // note: getURL, not geturl; ActionScript 2 is case-sensitive
        runOnce = true;
    }
};
stop();
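For completeness, this is roughly what step 1 (the flashvar in the object/embed tags) looks like in the html; the file name and dimensions are made up, only the FlashVars/flashvars parts matter:

<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="400" height="300">
    <param name="movie" value="demo.swf" />
    <param name="FlashVars" value="embedded=true" />
    <embed src="demo.swf" flashvars="embedded=true" type="application/x-shockwave-flash" width="400" height="300"></embed>
</object>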
I'm sure the code stinks and it indeed is only ActionScript 2 and not 3, but maybe someone out there can build something useful out of this general idea. And if not, I've had fun learning some (very) basic actionscripting. Now off to "Add/remove software" to get rid of that hog of an IDE! 🙂
“The API for accessing microformatted content on a page will be included in Firefox 3, however a user interface for acting on microformatted content was unfortunately pushed back to the next release.
We are hoping that the API leads to several innovative Firefox extensions for microformats that we can use to help determine what the best user experience is for interacting with data on a page. You can learn more about the API here: http://developer.mozilla.org/en/docs/Using_microformats"
The reason microformats did not make it into the FF3 user interface apparently was a lack of agreement on how they should be visualized. According to feedback received from Michael Kaply, author of the Operator microformats extension for Firefox, using a toolbar was too intrusive, they didn't want to clutter the URL bar, and they couldn't come up with the right way to make the microformats functionality extensible. All in all it's a good thing the microformats functionality exists under the hood, but it's very unfortunate that Firefox 3 will not allow 'average users' to see and interact with microformatted content. Maybe we should tell everyone to just install Operator for now?
I upgraded my Ubuntu "disktop" from 7.10 to the new Ubuntu 8.04 (aka Hardy Heron) today. The entire process took around 1h30 (download of the packages was rather slow) and was incredibly straightforward (as shown in the upgrade docs). Everything seems to work flawlessly as far as I can tell. Hardy ships with Firefox 3 beta 5, but Ubuntu/Canonical will provide upgrades as FF3 goes through its final release stages. Strange as including a beta might seem, this actually is a wise thing: FF3b4 and b5 have proven to be quite stable (I switched from FF2 approximately 2 months ago and haven't looked back since). Moreover, Hardy is a "long term support" release, with the desktop version receiving support until 2011 and the server version until 2013, and using FF3 ensures Ubuntu of continued support (read: security updates) from the Mozilla team in the years to come. Check out the release notes for more details about Ubuntu 8.04 LTS.