Category Archives: performance

Warning WordPress plugin users about their old PHP

After my initial disbelief about the number of WordPress installations still on the slow and vulnerable PHP 5.2.17 (or older), I decided to warn users of my plugin with a non-dismissable warning on the plugin’s settings-page (and only there, so it’s not a default WordPress admin notice cluttering the entire backend):

[Screenshot: the PHP 5.2 warning on Autoptimize’s settings-page]

This is going in AO 2.0.2 (out later today) and will in the future also be added to WP YouTube Lyte and WP DoNotTrack (both of which have a smaller reach).

If you’re a plugin or theme developer and want to warn your users as well (without blocking them), here’s the code I used (do change the translation-domain from “autoptimize” into one that is applicable to your plugin):

<?php if (version_compare(PHP_VERSION, '5.3.0') < 0) { ?>
    <div class="notice-error notice">
        <?php _e('<strong>You are using a very old version of PHP</strong> (5.2.x or older) which has <a href="http://blog.futtta.be/2016/03/15/why-would-you-still-be-on-php-5-2/" target="_blank">serious security and performance issues</a>. Please ask your hoster to provide you with an upgrade path to 5.6 or 7.0','autoptimize'); ?>
    </div>
<?php } ?>

Why would you still be on PHP 5.2?

For Autoptimize 2.0.1 I declared a pretty complex regex to extract font-faces from CSS using the nowdoc-syntax, which is supported from PHP 5.3 onwards. Taking into account that the first PHP 5.2 release was over 9 years ago and that support ended with the release of 5.2.17 over 5 years ago, I assumed using a nowdoc would not be a problem for anyone. How naive I was; several people contacted me with this ugly error-message PHP 5.2 throws:

Parse error: syntax error, unexpected T_SL in /wp-content/plugins/autoptimize/classes/autoptimizeStyles.php on line 396
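For those who haven’t used it: a nowdoc is like a heredoc without variable interpolation, so a complex regex needs no extra escaping. A minimal illustration (with a much simplified regex, not the actual one in Autoptimize):

<?php
$css = '@font-face { font-family: "Foo"; src: url(foo.woff); } body { color: red; }';

// nowdoc (PHP >= 5.3): a single-quoted heredoc, so no interpolation or
// escaping; the regex can be written down exactly as-is
$fontface_regex = <<<'REGEX'
~@font-face\s*\{.*?\}~is
REGEX;

preg_match_all($fontface_regex, $css, $matches);
print_r($matches[0]); // array containing the @font-face block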

There is a workaround and even a more fundamental fix for that already, but who would still want to run PHP 5.2, which has this huge list of security issues? Moreover, PHP 5.5 and 5.6 seem approximately twice as fast as 5.2 according to these test results, and PHP 7.0 is even over three times as fast as 5.2! And still almost 9% of all WordPress sites are running on that old version (so I could have known this was coming really, bugger).

If you are one of those, do urge your hosting company to urgently provide you with an upgrade path to PHP 5.6 (or even 7.0)!

Hyper Cache hooking up with Autoptimize

Stefano Lissa, the developer of Hyper Cache, just released a version which hooks into Autoptimize (the autoptimize_action_cachepurged action) to automatically purge the page-cache if Autoptimize’s cache gets cleared. Thanks Stefano, it’s no coincidence Hyper Cache is one of my favorite page-caching plugins!
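For other cache-plugin developers wanting to do the same, the integration boils down to hooking into that action (a minimal sketch; my_plugin_flush_page_cache is a hypothetical callback name):

<?php
// purge our own page-cache whenever Autoptimize purges its cache;
// the action name comes from Autoptimize, the callback is hypothetical
add_action('autoptimize_action_cachepurged', 'my_plugin_flush_page_cache');

function my_plugin_flush_page_cache() {
    // clear this plugin's page-cache here
}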

By the way, the Gator Cache developer is also working on a new version which will do the same.

HTTP/2 & JS/CSS optimization: eBay’s approach

Quick follow-up to my previous post about HTTP/2 and Autoptimize; I just read “Packaging for Performance”, an interesting article on Performance Calendar by eBay’s Senthil Padmanabhan. Well worth the read, but the summary: their research confirms bundling of JS/CSS still has clear performance benefits, but they did stop bluntly aggregating everything into one file to improve cache-ability. This leaves them with:

  • one optimized JS and one optimized CSS file for the core libraries, used throughout eBay, high cache-ratio & payload
  • one optimized JS and one optimized CSS file for the “domain constants”, used on specific eBay segments, medium cache-ratio & payload
  • one optimized JS and one optimized CSS file for the “domain variables” containing fast changing code for specific segments, having lowest cache-ratio and payload

So yeah, I see a bright future for Autoptimization in the coming age of HTTP/2! :-)

Making Autoptimize faster

One of the big changes in Autoptimize 2.0 (estimated release between Christmas & New Year) is a significant improvement in minification speed (30% faster should be no exception). As a quick reminder, this is what Autoptimize did until now:

  1. extract code from HTML & remove original references
  2. aggregate all code into one string
  3. check if a minified version of that string exists in cache
  4. if not in cache;
    1. minify that string
    2. store the result in cache
  5. inject reference to cached autoptimized code in HTML

It is the actual minification in step (4) which can slow Autoptimize down (hence the importance of making sure your cached files are reusable). In Autoptimize 2.0 the above logic was changed to improve performance (a simplified sketch follows the list below):

  1. extract code from HTML & remove original references
  2. aggregate all unminified code into one string, but only put a reference to already minified files (*min.css and *min.js)
  3. check if a minified version of that string exists in cache
  4. if not in cache;
    1. minify that string
    2. replace references to minified files with (slightly optimized) contents
    3. store the result in cache
  5. inject reference to cached autoptimized code in HTML
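In very simplified terms the new flow could be sketched like this (hypothetical code; the function names, placeholder mechanism and helpers cache_get, cache_set and run_minifier are illustrative, not Autoptimize’s actual implementation):

<?php
// Hypothetical sketch of the "inject minified code late" idea;
// names and helpers are illustrative, not Autoptimize's actual code.
function aggregate_and_minify(array $files) {
    $aggregated = '';
    $already_minified = array();

    foreach ($files as $file) {
        $code = file_get_contents($file);
        if (preg_match('~min\.(js|css)$~', $file)) {
            // already-minified file: aggregate only a placeholder, keeping
            // its contents out of the (expensive) minification pass
            $placeholder = '%%MINIFIED_' . md5($file) . '%%';
            $already_minified[$placeholder] = $code;
            $aggregated .= $placeholder;
        } else {
            $aggregated .= $code;
        }
    }

    $minified = cache_get(md5($aggregated));
    if (false === $minified) {
        $minified = run_minifier($aggregated);            // 4.1 minify
        $minified = strtr($minified, $already_minified);  // 4.2 inject min late
        cache_set(md5($aggregated), $minified);           // 4.3 store in cache
    }
    return $minified;
}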

As the to-be-minified string is smaller, the JS- & CSS-minifiers have less code to optimize, indeed speeding up the process significantly. Additionally this also reduces the chances of problems with the re-minification of already minified code. So nothing but advantages, right?

Now this was tested rather thoroughly and all known kinks have been ironed out, but if this “inject minified code late”-approach does not work in your context, you can simply disable it by hooking into the API and setting the autoptimize_filter_js_inject_min_late and/or autoptimize_filter_css_inject_min_late filters to false (use a code-snippets plugin rather than adding this to your functions.php):

// disable the "inject minified code late" optimization for both JS and CSS
add_filter('autoptimize_filter_js_inject_min_late','no_late_inject');
add_filter('autoptimize_filter_css_inject_min_late','no_late_inject');
function no_late_inject() {
	return false;
}
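As both filters just need to return false, WordPress’ built-in __return_false helper can be used as well, without declaring your own callback:

add_filter('autoptimize_filter_js_inject_min_late','__return_false');
add_filter('autoptimize_filter_css_inject_min_late','__return_false');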

HTTP/2, CSS/JS concatenation and Autoptimize

The web performance world is abuzz with HTTP/2, which should (among other improvements) do away with the latency that each separate HTTP-request introduces, thus rendering the aggregation of e.g. CSS & JS an anti-pattern. But there’s at least one in-depth, facts-and-figures-based article that is not ready to dismiss “packaging” just yet. So: testing, testing, testing!

Autoptimize will in the not too distant future very likely have a “don’t aggregate, just minify”-option, but the proof of the pudding will always be in the testing; sometimes it will be better to aggregate and minify as we do now, sometimes only minifying will be the better approach. And maybe (often?) a combination of those will make the most sense: suppose you have a site on which 90% of pages share 90% of JS code. In that case it will likely (testing, testing, testing!) help performance to aggregate & minify that shared 90% of JS while excluding all other JS from aggregation (and only minifying it). Sounds like the new whitelist-filters in Autoptimize’s API will come in handy, no? ;-)
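A sketch of what that could look like (assumption: the whitelist-filter is named autoptimize_filter_js_whitelist and expects a comma-separated list of files that should be aggregated; do verify this against the actual API before using):

<?php
// Hypothetical: only aggregate the JS that is shared across most pages,
// everything else is left out of the aggregate. Filter name and the
// comma-separated format are assumptions, check the Autoptimize API.
add_filter('autoptimize_filter_js_whitelist', 'my_js_whitelist');
function my_js_whitelist($whitelist) {
    return 'jquery.js, shared-core.js';
}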