spring + groovy + grails = ?

Just read that SpringSource (“Weapons for the War on Java Complexity”!), Rod Johnson’s company, has acquired G2One, the company behind Groovy and Grails. It looks like there will be two major Java development stacks: all things JSR/J2EE on one hand and Spring/Groovy/Grails on the other. Which train should you hop onto when having to choose a new web development framework?

The deredactie Journaal in your media player too?

Some positive reactions came in here and there on my ‘deredactie Journaal player’. The latest was from Wouter, who shared a Perl script on his blog that generates an m3u playlist from the Atom feed, so you can watch the VRT news in VLC.
A fantastic idea from Wouter, which I promptly crammed into my Atom-parsing script. So from now on you can watch the deredactie Journaal not just in your browser, but in your favourite media player too;
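The idea behind such a script is simple enough. A minimal sketch (the `{title, url}` item shape is an assumption for illustration; the real script extracts titles and mp4 URLs from the VRT Atom feed):

```javascript
// Build an extended-M3U playlist from a list of news items.
// The {title, url} shape is made up for this sketch; in reality these
// values would come from parsing the VRT Atom feed.
function buildM3u(items) {
  const lines = ['#EXTM3U'];
  for (const item of items) {
    lines.push(`#EXTINF:-1,${item.title}`); // -1: duration unknown
    lines.push(item.url);
  }
  return lines.join('\n') + '\n';
}

console.log(buildM3u([
  { title: 'Headlines', url: 'http://example.com/journaal/headlines.mp4' }
]));
```

Feed the resulting text to VLC (or any player that understands extended M3U) and it will play the clips one after the other.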

I even tested all of this (superficially) and it seems to work correctly in Windows Media Player, VLC, Totem and Winamp. Apple QuickTime, on the other hand, doesn’t seem to play along; sound but no picture with the mp4/m3u version, while the individual mp4s do play correctly. M3U is of course originally audio-oriented, so maybe QuickTime trips over that and I’ll have to throw an extra playlist format at it? There are at least as many playlist formats as there are video codecs, but SMIL seems the obvious candidate?
For the web version I upgraded JW FLV (the Flash video player) to the new 4.2 release, which mainly benefits the table of contents; thumbnail proportions are now respected and the visual playlist scrolls along while you watch. Nifty guy, that JW!

Is the web too fat for your iPhone?

So you have a spiffy mobile phone with a top-notch browser that does a decent job of displaying “desktop-oriented” websites, and you use it to surf the web regularly, visiting some of the bigger news sites in Belgium. What does that mean in terms of data transfer and bandwidth usage?

Data usage for 4 pages on 5 sites (click the image for more; methodology below)

That sure is a lot of data, Captain! What does that mean?

  1. You will have to be patient, because downloading 1 or 2 MB for that initial page will probably be gruesomely slow (especially if you’re on EDGE because there’s no 3G coverage)
  2. You will end up paying good money for all that data transfer, because data is money when you’re on mobile time
  3. You might even curse your handset or crashing browser (more on Google), because all that data ends up in RAM, and these devices do not come with tons of that.

In these broadband-times, website builders seem to have completely forgotten about best practices for download size of complete web pages (html + all js/css/images/…). This means that a lot of websites should be considered non-accessible on mobile devices.
If you want your normal website to be usable on iPhones, HTCs and other Nokias, you’ll have to start taking download size into account again. That means taking some technical measures (using mod_deflate and mod_expires, for example) and making hard functional choices to remove some stuff (on this blog, dropping the rather useless MyBlogLog widget saved me 210 KB; going from 10 to 7 posts per page saved another 200). And if you want to target mobile users specifically, you’d better invest in a mobile-specific version of your site!
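For the Apache measures mentioned above, something along these lines could be a starting point (a sketch, assuming mod_deflate and mod_expires are enabled; adjust the MIME types and lifetimes to your own site):

```apache
# Compress text resources on the fly (mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript

# Let browsers cache static assets instead of re-downloading them (mod_expires)
ExpiresActive On
ExpiresByType image/png "access plus 1 week"
ExpiresByType text/css "access plus 1 week"
ExpiresByType application/javascript "access plus 1 week"
```

Compression alone typically shrinks HTML, CSS and JavaScript to a fraction of their original size, which is exactly where those megabytes per page come from.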


The methodology followed to measure these download sizes;

  • disable Flash (there’s no such thing on mobiles; with Flash these figures would have been far worse)
  • disable memory cache (in about:config), because it can’t be cleared easily
  • clear disk cache
  • open up firebug and click on ‘net’ to monitor downloads
  • download the homepage, a random 2nd page, a random 3rd page and the homepage again

The spreadsheet (on google docs) contains more data (compare above results with those for 2 mobile-specific sites)

Fun with caching in PHP with APC (and others)

After installing APC, I looked through the documentation on php.net and noticed 3 interesting functions with regard to session-independent data caching in PHP: apc_store, apc_fetch and apc_delete.

When talking about caching, apc_delete might not be that important, as apc_store allows you to set the TTL (time to live) of the variable you’re storing. If you try to retrieve a stored variable which exceeded the TTL, APC will return FALSE, which tells you to update your cache.
All this means that adding 5 minutes worth of caching to your application could be as simple as doing;

if (($stringValue = apc_fetch($stringKey)) === FALSE) {
    // cache miss (or TTL expired): recompute and store for 300 seconds
    $stringValue = yourNormalDogSlowFunctionToGetValue($stringKey);
    apc_store($stringKey, $stringValue, 300);
}

From a security point of view however (esp. in a shared environment) the APC functions should be considered extremely dangerous. There are no mechanisms to prevent a denial of service; everyone who “does PHP” on a server can fill the APC cache entirely. Worse yet, using apc_cache_info you can get a list of all keys, which you can in turn use to retrieve all the associated values, meaning data theft can be an issue as well. But if you’re on a server of your own (and if you trust all PHP scripts you install on there), the APC functions can be sheer bliss!
And of course other opcode caching components such as XCache and eAccelerator offer similar functionality (although it’s disabled by default in eAccelerator because of these security concerns).

Trading eAccelerator for APC

Yesterday I somewhat reluctantly removed eAccelerator from my server (Debian Etch) and installed APC instead. Not because I wasn’t satisfied with eAccelerator’s performance, but because the packaged version wasn’t in the Debian repositories (Andrew McMillan provided the debs), and those debs weren’t upgraded at the same pace, which broke my normal upgrade routine. Moreover, APC will apparently become a standard part of PHP6 (making the Alternative PHP Cache the default opcode cache component). Installation was as easy as doing “pecl install apc” and adding apc to php.ini. Everything seems to be running as smoothly as it did with eAccelerator (as most tests seem to confirm).

Finally found: the Journaal on deredactie.be

I hacked together an alternative “Journaal player” for the VRT, because deredactie.be doesn’t give me what I’m looking for. Mind you, I am a fan of the VRT news service. Really! But there is just too much on deredactie.be screaming for my attention. Too much video, too much news ticker, on every page, across the whole site. If I were an ordinary little article, I too would timidly retreat into a corner of the page, quietly hoping someone would still notice me.
And strangely enough, the Journaal itself hardly features there at all. The Journaal, you know, that television programme in which they glue all those clips from that video strip together? A certain “Sponzen Ridder” recently wanted to watch the VRT news online (something about banks or so?) and it went like this;

So I went to www.vrtnieuws.net. There I found a strip of Flash clips floating past, but no Journaal. At the very top of the page I clicked “nieuws”. Nothing. Five centimetres lower and three centimetres further to the right I clicked “journaal 7”. Clips. No overview, nothing. Clips sliding by, without any insight into the structure, or any kind of table of contents. Then I went to www.vtm.be, clicked “nieuwsuitzendingen” and could neatly and selectively watch the different segments of an entire week of news broadcasts.

Couldn’t agree more! deredactie.be should have a big “Watch the Journaal” button which takes you to a page with nothing on it but a player and a table of contents. But since that isn’t there, I went to work myself.
Some extra information for the “technically gifted”;

  • that terribly intrusive video strip at the top of deredactie.be fetches an XML file (Atom) to find out what is available for publication (thank you, Firebug)
  • that Atom file points to programme-specific Atom files containing the title, links to FLV, MP4 and WMV files in low and high quality, and a thumbnail
  • from that second Atom file I generate a media-RSS file with, per item, the title, a link to the image and a link to one video file (I chose the MP4, low quality by default)
  • that media-RSS file is then read, without batting an eyelid, as a playlist by the magnificent JW FLV player, which automatically builds the Flash interface, “table of contents” included.
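The third step above boils down to simple templating. A sketch (the entry fields `title`, `thumbnail` and `mp4` mirror the description above and are not the real VRT feed schema):

```javascript
// Turn parsed Atom entries into a minimal media-RSS playlist for the
// JW FLV player. Field names are assumptions made for this sketch.
function toMediaRss(entries) {
  const items = entries.map(e =>
    '    <item>\n' +
    `      <title>${e.title}</title>\n` +
    `      <media:thumbnail url="${e.thumbnail}" />\n` +
    `      <media:content url="${e.mp4}" type="video/mp4" />\n` +
    '    </item>'
  ).join('\n');
  return '<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">\n' +
         '  <channel>\n' + items + '\n  </channel>\n</rss>\n';
}
```

Point the JW FLV player at the generated file as its playlist and it builds the visual “table of contents” from the titles and thumbnails on its own.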

Whack your Flash-crazy boss on the head with his iPhone 3G!

Whatever you may think about the iPhone-hype, you’ll have to admit that the fact that it doesn’t do Flash makes for great ammunition in the discussion against developing your site’s core functionality in Flash.
Next time your CEO or marketing manager wants a Flex-only website, you won’t have to talk about some obscure geek who doesn’t want to install the Flash plugin, about that poor blind woman who is not able to “read” those Flash animations, or about how Google indexing SWF files might be more of a problem than a solution. No, instead you’ll only have to point out that it won’t work on his iPhone (*). Period.

(*) It won’t work on other mobile devices either; Flash Lite, which ships on e.g. Symbian and Windows Mobile powered devices, is not able to display those millions of fancy animations out there on the WWW either.


Live from WebScene 2008

I’ll be at WebScene 2008 today and, if all goes well, I’ll be bringing you live updates of the event (as I did last year). So watch this space if you’re interested!
Being the commuter I am I took the train to Asse and rode my bike from Asse to Affligem (passing Asbeek and Edingen, very nice!) to arrive here at 9h00. So I’m at the conference center, scored Wifi-access and I’m ready to watch and learn.
Bart Van Herreweghe (blog) kicked off with a talk about the new Belgium.be. The Kanselarij van de Eerste Minister worked together with Fedict on the production of the new portal, which was built by a multitude of companies such as IBM, Amplexor, Panoptic, Netway and Internet Architects. Because of the large amount of information published on the portal, Internet Architects and Netway played a very important role in information and user-centric interface design, introducing the idea of “doormat” navigation, which could be compared to (part of) a sitemap being displayed on a (theme) homepage. Technology-wise, belgium.be uses Tridion as WCMS, with templates that contain validated XHTML and a strong focus on accessibility aiming at AnySurfer Plus compliance. The search module, which will spider a great number of federal websites, is based on Lucene and was developed by Panoptic (Cronos) with LBi.
Panoptic’s Ines Vanlangendonck (blog) talked about the importance of usable web content management. Choosing a good foundation (WCM product) and customizing it to the (backend) users’ needs (e.g. adding or removing DAM-functionality, rich text editor functionality, online translation, …) should help get your users (content-owners, editors, …) on board. Looking at the poor adoption rate of the web content management tool chosen at a certain telco company a few years ago, she couldn’t be more spot-on.
Ex-colleague Philip Achten from The Reference presented the implementation of the new Thomas Cook website. This travel website is an e-commerce business platform first and foremost, with on average 15,000 unique visitors/day in 2007 and an estimated growth of 50% in 2008. One of the main goals of the new website was to allow the content team (15 people) and the travelling reporters to manage web content in a decentralized way. The Reference implemented Sitecore 5.3 for this purpose, a powerful Microsoft ASP.NET-based WCM solution, deployed in a load-balanced environment (2 web servers with IIS and 1 MS SQL database server). Next to the pure content management, a number of applications have been built, such as the destination search, newsletter, user registration and personalisation, and of course the crucial booking application (connecting to the backend booking engine). In a next phase, building on the user authentication application, user-generated content functionality will be added, allowing registered visitors to add text, pictures and video.
Ektron’s Norman Graves held a talk titled “Key Technologies and how they impact some real world examples”. He talked about taxonomy and how it’s used in search, geomapping and personalisation in Ektron CMS 400.NET.
Lunchtime has come and gone; time for the afternoon tracks. I started with the presentation about Arte+7, the Arte media portal. The website and presentation were done by CoreMedia, who also provided the CMS and DRM infrastructure. Videos are published in FLV and WMV formats, with geolocation used to restrict the countries from which the content can be watched. The same technology is also used on the Arte VOD site, for which Arte+7 is a teaser. Kinda nice, but there’s a lot of JavaScript and Flash in that site, not really accessible.
For the 2nd session I moved to track 5, where U-sentric’s Tara Schrimpton-Smith talked about “Guerilla Usability Tests? User testing on a shoestring”. Her advice: use friends of friends, somewhere between 2 and 5 users (with 2 users you should be able to find 50% of usability issues, with 5 users 85%), and limit the number of tasks you’ll be testing. She concluded the session with a live example: someone shouted the name of her website, someone else volunteered, and the task was ‘what is the address of the headquarters’. Judging by the time it took the test person to find this information, there are some usability issues on barry-callebaut.com. A fun session!
Next up: Robin Wauters (blog) with “Social media is not an option”. Not much to learn here (Robin talked about Technorati, Attentio, involving ‘influential bloggers’, blogging to showcase knowledge, “Dell hell”, buzz, virals, …), but it’s nice to be able to put a face on the guy behind Plugg and edentity.
And we’ll finish off with AGConsult‘s Karl Gilis with “9 tips to help users find what they’re looking for on your website”. So let’s create an ordered list for that purpose:

  1. ensure the accessibility of your site (it should work on all common browsers/OSes, don’t misuse technology, make sure Google can crawl your site)
  2. speed up page load times; the user decides in half a second whether (s)he’ll stay or not
  3. make navigation easy to use (structure, terminology, placement)
  4. provide clear overview pages (example: belgium.be and its doormats)
  5. your search should be as good as Google’s (depends on technology and content!)
  6. use an intuitive page lay-out
  7. make your text legible (Verdana 10pt, Arial if you’re adventurous)
  8. write for the web
  9. make sure the info is there (do user needs analysis)

A fun session as well, those usability-guys and girls know how to entertain!
My conclusion: this was not an uninteresting day, but the focus was clearly less technical than the previous year’s edition. Content management -around which much of this event was focused- is slowly but surely becoming a commodity, and vendors are having a hard time differentiating themselves from their competitors. My feeling is that the bigger changes and challenges with regard to “the web” are on the application front, where backend integration (SOA, web services, …) and RIAs (using Ajax, GWT, Flex, …) are today’s hot topics. The fact that WebScene 2008 did not explore these new frontiers (and their implications for business, marketing, usability and accessibility) is a missed opportunity really. Let’s hope they reconnect with the web-tech trends next year! And maybe I’ll be there to let you know?

RIA or POIA?

I went to a Dolmen-organized seminar about RIAs today, where some smart people talked about GWT, Flex and JavaFX. I hooked up with an old acquaintance there; he was actually a customer of my previous employer, working in banking and finance. We exchanged ideas about when, and more importantly when not, to use RIA technologies. I just received a mail from him in which he wrote (roughly translated from Dutch);

I’ll keep you posted on our findings concerning RIA as well, but when I tried to visit www.parleys.com at work just now, all I saw was a black screen. In that case I prefer those POIAs; they might not be that fancy, but they do work.

I couldn’t agree more, Poor Plain Old Internet Applications for president!

Are you doing Web2.0 the wrong way?

According to Jakob Nielsen, jumping on the Web 2.0 bandwagon often implies adding unneeded complexity instead of getting the basics right. I tend to agree; in our quest for sexier, more responsive websites and web applications, we easily forget boring stuff such as accessibility and sometimes even usability. How much time did it take you to find your way around the new LinkedIn UI? Have you tried to use a Web 2.0 site on a PDA or smartphone? With your keyboard instead of your mouse? Using a screen reader instead of your normal display? Or with JavaScript disabled (I installed the Firefox NoScript extension to guard against XSS and other JS attacks)? And if you have ever tried loading Gmail on a slow connection, you’ll surely have noticed that it automatically falls back to the more accessible “Web 1.0” version?
Lately I’ve been reading a number of interesting articles on this subject, and at work we’re carefully applying some Web 2.0 techniques as well. Based on that, here are a few suggestions on how to build better websites and web applications:

  1. Don’t use GWT or Flex unless you’re building a complex desktop-style web app and you’re willing to invest in a second “basic HTML” version as e.g. Gmail does. Let’s face it, most websites and even web applications do not have the level of complexity that warrants the use of these RIA frameworks.
  2. Develop your core functionality in “web1.0”-style, using semantic HTML (structure) and CSS (presentation) only and apply your usability- and accessibility-wisdom here.
  3. Sprinkle JavaScript (behavior) over that static foundation using an established JavaScript-framework to define event-handlers for objects and to modify the content of divs (but test if you can already write to the object, or you’ll get those ugly “operation aborted” errors in MSIE).  Give users a chance to opt out of this enhanced version, even if they do have JavaScript enabled. With this progressive enhancement (a.k.a. hijax) you add extra functionality for those people who can make use of it, without punishing users who can’t because of physical or technological impairments. Read more about progressive enhancement on London-based Christian Heilmann’s site.
  4. Only load your JavaScript functions when you really need them: create a kind of stub for an object’s methods and only load the real method when it is needed. This technique is dubbed “lazy loading” and can help make your pages load and initialize much faster. You can learn more about the concept of “lazy loading” on digital-web.com.
  5. Use <noscript>-tags if you absolutely have to use JavaScript without having a meaningful object already in place in your static version. Explain what is going on and provide a link to a normal page where the same information or functionality can be found.
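Points 3 and 4 above can be sketched in a few lines. The element id, behaviour and render function below are made up for illustration; the point is the pattern of guarding before you enhance, and of swapping a stub for the real implementation on first use:

```javascript
// Progressive enhancement: only attach behaviour when the element is
// actually in the document (this also guards against writing to objects
// that aren't there yet, the MSIE "operation aborted" problem).
// The 'comments' id and the class toggle are made-up examples.
function enhance(doc) {
  const el = doc.getElementById && doc.getElementById('comments');
  if (!el) return false;            // no DOM or no element: static version stays usable
  el.onclick = () => { el.className = 'expanded'; };
  return true;
}

// Lazy loading: expose a cheap stub and swap in the real (expensive)
// implementation only when it is first called.
const widget = {
  render(x) {
    widget.render = realRender();   // replace the stub on first use
    return widget.render(x);
  }
};
function realRender() {
  // pretend this body was fetched from a separate .js file on demand
  return (x) => `<div>${x}</div>`;
}
```

The stub trick means the initial page load only pays for a few bytes per method; the heavy code is pulled in the moment someone actually uses the feature.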

Of course these tips won’t guarantee you a perfectly usable and accessible website or application, but when done right, they will get you 80% of the way. A good functional analysis and thorough testing, both keeping usability and accessibility in mind, will push you towards 99.99%.