ClamAV flagging CSS as Html.Exploit.CVE_2016_0108

So I had a bit of a scare yesterday, when a couple of users posted on the Autoptimize support forum that their hosting company had warned them about malware in an autoptimized CSS file. ClamAV flagged those files as being infected with Html.Exploit.CVE_2016_0108, which turned out to be an MS IE 11-specific memory corruption issue.
As Autoptimize only aggregates CSS and never adds any of its own, I was not too worried, but I set out to investigate anyway (I’m curious like that). I soon found similar reports from users who were not on Autoptimize, and some people kindly copy/pasted their “infected” CSS on pastebin. A quick inspection showed no signs of anything abnormal going on, so I submitted the files as false positives on ClamAV’s site. This evening I got a (vague) automated mail from ClamAV confirming that my

submissions have been processed and published

I just reached out to a user on AskUbuntu who had the same issue, to check whether his CSS was no longer flagged, upon which he replied;

I can confirm that the CSS files no longer trigger a false positive!

So all’s well that ends well. I’m convinced ClamAV is doing a great job, but boy do I hate false positives!

Heads up: Autoptimize minor release

Autoptimize 2.0 was a pretty successful release, if only because there were no major defects that forced me to quickly follow up with a bugfix release. That, of course, does not mean there were no issues at all or that no further improvements were possible. Hence Autoptimize 2.0.1 will be released within the next 2 weeks (or so), with the following changes:

  • Autoptimize now also tries to purge WP Engine cache when AO’s cache is cleared
  • Bail for AMP pages (which are pretty optimized anyway) to avoid issues with “inline & defer” and with AO adding attributes to link-tags that are not allowed in AMP HTML
  • Better support for renamed wp-content directories
  • Improvements to the page cache purging mechanism
  • Multiple fixes for late-injected CSS/ JS (changes in those files not being picked up, fonts or background images not being CDN’ed, …)
  • Re-enable functionality to move non-aggregated JS if “also aggregate inline JS” is active
  • Misc. other fixes & improvements, go read the commit-log on GitHub if you’re that curious

If you want to test this release out, you can download the beta from wordpress.org. Do ping me here if you think you’ve stumbled across a bug or simply to confirm all works just fine (esp. the WP Engine cache purge is a hard one to test for me, as I’m not hosted there)  🙂

Hyper Cache hooking up with Autoptimize

Stefano Lissa, the developer of Hyper Cache, just released a version which hooks into Autoptimize (the autoptimize_action_cachepurged action) to automatically purge the page cache when Autoptimize’s cache gets cleared. Thanks Stefano, it’s no coincidence Hyper Cache is one of my favorite page-caching plugins!
The Gator Cache-developer is also working on a new version which will do the same by the way.
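For what it’s worth, this is roughly what such an integration boils down to; a minimal sketch with hypothetical function names (so not Hyper Cache’s actual code):

// purge the page cache whenever Autoptimize clears its own cache, so cached
// pages don't keep referencing autoptimized CSS/JS files that no longer exist
add_action( 'autoptimize_action_cachepurged', 'my_pagecache_purge_all' );
function my_pagecache_purge_all() {
    // replace this with your page-caching plugin's own "flush everything" routine
    my_pagecache_flush();
}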

Autoptimize Power-Up sneak peek; Noptimize

In case you’re wondering if those Autoptimize Power-Ups are still coming and, if so, what they’ll look like and what they’ll do;
(screenshot: AO Power-Up sneak peek: Noptimize)
So some quick pointers;

  • Power-Ups will be installed as separate plugins and will obviously require some sort of registration, payment and license key activation (still to be developed, will either be EDD or Freemius-based)
  • Once installed, they will appear as tabs on Autoptimize’s settings page (no clutter in your menus!)
  • You can actually see 2 Power-Up-tabs in this screenshot;
    1. the active one is “Noptimize” and will allow you to configure which URLs shouldn’t be optimized (either entirely or just CSS or JS).
    2. The inactive tab is for “SpeedUp” which … speeds up the creation of uncached autoptimized files immensely.
  • Other Power-Ups that are on the table are
    1. “Critical CSS” to enable you to define “above the fold CSS” for different types of pages (posts, pages, homepage, archives, …)
    2. “Whitelist” which lets you specify what JS or CSS to optimize (“known good”), making sure unknown code is never autoptimized
    3. “HTTP/2” which will have logic to take the most advantage of what HTTP/2 has to offer, although (part of) this might go into AO proper.

Next steps for me; register my “secondary activity as independent” (as I still have an official day-time job), get in touch with an accountant, decide on EDD vs Freemius, set up shop on optimizingmatters.com (probably including bbpress for support) and determine pricing (I’m actually thinking a Euro/month for each Power-Up; what do you think?).
Exciting times ahead …

Autoptimize: why are .php,jquery excluded from JS optimization?

If you see .php,jquery added to the JS optimization exclusion list and wonder why that is the case; these are added by WP Spamshield to ensure optimal functioning of that plugin, and when you remove them they will be re-added automatically by WP Spamshield.
I am currently exchanging information with Scott (WP Spamshield’s developer) to potentially improve this, as the jquery entry causes all JS with “jquery” in it to be excluded from optimization, which is sub-optimal from a performance point of view.
If you know what you are doing (i.e. if you are willing to do the trial-and-error dance) you can trick WP Spamshield into leaving the JS optimization exclusion list as is by entering:

.php,jquery.js

which reduces the exclusion to just jquery.js (and .php, but that doesn’t really matter), which based on a (too) quick test could be enough for WP Spamshield to function. Or entering:
.php,jquery.IknowwhatImdoing

which effectively invalidates the jquery exclusion entirely (and might work with WP Spamshield if you ‘force JS in head’).
If you do decide to trick WP Spamshield this way, this is entirely and only your responsibility and you should test this thoroughly. Don’t ask Scott (or me) for support for such ugly hacks 😉
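For completeness; if you’d rather not fight over the settings field at all, you could also force the exclusion list in code via Autoptimize’s API. A minimal sketch, assuming the autoptimize_filter_js_exclude filter receives the comma-separated exclusion string (do check the AO API examples to confirm the exact signature):

add_filter( 'autoptimize_filter_js_exclude', 'my_ao_override_js_exclude' );
function my_ao_override_js_exclude( $exclude ) {
    // only exclude jquery.js (and .php), instead of everything containing "jquery"
    return '.php, jquery.js';
}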

WordPress Plugin releases: who needs a big bang anyway?

On January 1st Mika Epstein blogged about releasing/ updating software for large projects, advising against releasing software during the festive season;

With the increasing prevalence of online stores and websites for everyone, pushing a change when we know that the majority of the world is celebrating something between Nov 15th and January 15th is reckless. […] picture what happens when an update has a small bug that takes down […] 1/1000 of 1/4th of the entire Internet. […] It may be time to call a year end moratorium on updates to our systems and apps.

Working in corporate IT myself I could only agree. In theory that is, because a couple of days before I had purposely pushed out a major Autoptimize release in the last week of December, on a Saturday. Why? While inching closer to Autoptimize 2.0’s release, I was becoming worried about the impact some of the bigger changes could have. Impact as in “what if this breaks half of the sites AO is installed on“. One way to limit such impact, I thought, is by releasing at a moment when people are bound to be less busy with their websites. So by releasing on Boxing Day, I assumed fewer people would see & install the update on day 0, limiting the damage a major bug could do.
Now I do agree this approach is very clumsy, but being able to limit the number of people seeing/ installing a plugin (or theme) update on day 0 could help prevent disasters such as the ones that plagued, for example, Yoast SEO. The idea of “throttled releases” is not new; it already exists for Android apps, with Google allowing developers to flag an update for a “staged rollout”:

You can release an app update to production using a staged roll-out, where you release an app update to a percentage of your users and increase the percentage over time. New and existing users are eligible to receive updates from staged roll-outs. […] During a staged roll-out, it’s a good idea to closely monitor crash reports and user feedback.

Pushing an update to a percentage of users and monitoring feedback, allowing you to catch problems without the risk of impacting your entire install base? I want that for my WordPress plugins! So how could we get that to work?
What if an extra header were included in readme.txt, e.g. an optional “throttled release” flag? With that flag set, the percentage of people seeing the update in their wp-admin screens would be low on day one and increase every day, for example;

Day after release    % of people seeing release in dashboard
day 0                5%
day 1                10%
day 2                20%
day 3                40%
day 4                80%
day 5                100%
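
To make the idea more tangible; such a flag could simply be an extra field in the plugin’s readme.txt header block (the “Throttled release” field is hypothetical, it does not exist today):

=== Autoptimize ===
Stable tag: 2.0.1
Tested up to: 4.4
Throttled release: yes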

This could be accomplished by having https://api.wordpress.org/plugins/update-check/ (against which WordPress installs check for updates) “lie” about updates being available if the “throttled release”-flag is set by e.g. simply introducing randomness in plugins/update-check/;

$showupdate = false;
// mt_rand(1,40) has a 1 in 40 (2.5%) chance of being 1, i.e. of being smaller than 2
$randomness = mt_rand(1,40);
// on the release day itself, only ~2.5% of update-check requests get to see the new version
if ( ($throttledrelease === true) && ($datenow === $pluginreleasedate) && ($randomness < 2) ) {
    $showupdate = true;
}

(The “magic” in the above code is in the random value between 1 and 40, which has a 1 in 40 (or 2.5%) chance of being smaller than 2 (i.e. being 1), so in 2.5% of requests $showupdate would be true. This translates to 5% of requesting WordPress instances per day, as each instance checks for updates every 12h, so twice per day. Obviously on $pluginreleasedate+1d the condition would change, with the random value having to be smaller than 3 (so being either 1 or 2, i.e. approx. 5% of cases x 2 = 10%), on +2d smaller than 5 (1, 2, 3 or 4 = 10% x 2 = 20%) and so on. This randomness-based approach allows plugins/update-check not to have to keep tabs on how many people saw/ did not see the update on a given date.)
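Generalizing that day-by-day ramp-up, a minimal sketch could look like this (hypothetical variable names, and certainly not actual api.wordpress.org code):

// days since the plugin release: 0 on release day, 1 the day after, ...
$days_since_release = (int) floor( ( time() - strtotime( $pluginreleasedate ) ) / 86400 );
$showupdate = true;
if ( ($throttledrelease === true) && ($days_since_release < 5) ) {
    // the threshold doubles every day: 1, 2, 4, 8, 16 out of 40 per check, i.e. 2.5%, 5%,
    // 10%, 20%, 40% per check, or roughly 5%, 10%, 20%, 40%, 80% of installs per day
    // (as every install checks twice a day); from day 5 onwards everyone sees the update
    $showupdate = ( mt_rand(1,40) <= pow(2, $days_since_release) );
}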
This obviously is just a simplistic idea that does not take into account any of the smart stuff undoubtedly going on in plugins/update-check/ (such as caching, most likely), but I’m pretty sure the wordpress.org people responsible for that code could implement something along these lines. And I do think this would very much be worth the trouble, as it would allow Yoast & other major plugin developers to release without the fear of breaking hundreds of thousands of WordPress sites within a couple of hours. And I would not have to release on Boxing Day, leaving me and the users of my plugins the time to digest that Christmas dinner peacefully. What’s not to like?
Blogpost updated (code example + explanation) on 13/01/2016 to reflect the fact that a WordPress instance checks for updates every 12 hours, which impacts the randomness.

Crunching 2015’s numbers

So this was 2015 in numbers:

  1. blog;
  2. My WordPress plugins:
    • wp-youtube-lyte: pushed out 3 minor and 1 major release, getting 40264 downloads pushing the total to 250545 and having +10000 active installs
    • wp-donottrack: no releases for this one (except for some small readme.txt changes),
      downloaded 2355 times bringing the total to 14364 and +2000 active installs.
    • autoptimize: 2 minor and 1 major release, downloaded 265299 times this year, bringing the total to 506930 and +100000 active installs

That was 2015. For 2016 my main goal is to work on Optimizing Matters.

HTTP/2 & JS/CSS optimization: eBay’s approach

Quick follow-up to my previous post about HTTP/2 and Autoptimize; I just read “Packaging for Performance”, an interesting article on Performance Calendar by eBay’s Senthil Padmanabhan. Well worth the read, but in summary; their research confirms bundling of JS/CSS still has clear performance benefits, but they did stop bluntly aggregating everything into one file, to improve cache-ability. This leaves them with;

  • one optimized JS and one optimized CSS file for the core libraries, used throughout eBay, high cache-ratio & payload
  • one optimized JS and one optimized CSS file for the “domain constants”, used on specific eBay segments, medium cache-ratio & payload
  • one optimized JS and one optimized CSS file for the “domain variables” containing fast changing code for specific segments, having lowest cache-ratio and payload

So yeah, I see a bright future for Autoptimization in the coming age of HTTP/2! :-)

Making Autoptimize faster

One of the big changes in Autoptimize 2.0 (estimated release between Christmas & New Year) is a significant improvement in minification speed (30% faster should be no exception). As a quick reminder, this is what Autoptimize did until now;

  1. extract code from HTML & remove original references
  2. aggregate all code into one string
  3. check if a minified version of that string exists in cache
  4. if not in cache;
    1. minify that string
    2. store the result in cache
  5. inject reference to cached autoptimized code in HTML

It is the actual minification in step (4) which can slow Autoptimize down (hence the importance of making sure your cached files are reusable). In Autoptimize 2.0 the above logic was changed to improve performance;

  1. extract code from HTML & remove original references
  2. aggregate all unminified code into one string, but only put a reference to already minified files (*min.css and *min.js)
  3. check if a minified version of that string exists in cache
  4. if not in cache;
    1. minify that string
    2. replace references to minified files with (slightly optimized) contents
    3. store the result in cache
  5. inject reference to cached autoptimized code in HTML

As the to-be-minified string is smaller, the JS- & CSS-minifiers have less code to optimize, indeed speeding up the process significantly. Additionally this also reduces the chance of problems with the re-minification of already minified code. So nothing but advantages, right?
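Conceptually (and very much simplified; this is not Autoptimize’s actual code) the placeholder trick looks something like this, assuming a marker the minifier passes through untouched and a my_js_minifier() stand-in for the real minifier:

$late_injects = array();
$aggregated = '';
foreach ( $scripts as $path => $code ) {
    if ( preg_match( '/(-|\.)min\.js$/', $path ) ) {
        // already minified: aggregate a short placeholder instead of the full code
        $marker = '%%INJECTLATER_' . md5( $path ) . '%%';
        $late_injects[ $marker ] = $code;
        $aggregated .= $marker . "\n";
    } else {
        $aggregated .= $code . ";\n";
    }
}
// the minifier only sees the unminified code plus the small markers ...
$minified = my_js_minifier( $aggregated );
// ... and only now are the contents of the already-minified files injected
$minified = str_replace( array_keys( $late_injects ), array_values( $late_injects ), $minified );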
Now this was tested rather thoroughly and all known kinks have been ironed out, but if this “inject minified code late”-approach does not work in your context, you can simply disable it by hooking into the API and setting the autoptimize_filter_js_inject_min_late and/ or autoptimize_filter_css_inject_min_late filters to false (use code snippets rather than adding it to your functions.php);

add_filter('autoptimize_filter_js_inject_min_late','no_late_inject');
add_filter('autoptimize_filter_css_inject_min_late','no_late_inject');
function no_late_inject() {
	return false;
}

HTTP/2, CSS/JS concatenation and Autoptimize

The web performance world is abuzz with HTTP/2, which should (among other improvements) do away with the latency that each separate HTTP request introduces, thus rendering aggregation of e.g. CSS & JS an anti-pattern. But there’s at least one in-depth, facts-and-figures-based article that is not ready to dismiss “packaging” just yet. So: testing, testing, testing!
Autoptimize will in the not too distant future very likely have a “don’t aggregate, just minify”-option, but the proof of the pudding will always be in the eating (read: testing); sometimes it will be better to aggregate and minify as we do now, sometimes only minifying will be the better approach. And maybe (often?) a combination of those will make the most sense: suppose you have a site on which 90% of pages share 90% of JS code. In that case it will likely (testing, testing, testing!) help performance to aggregate & minify the shared 90% of JS while excluding all other JS from aggregation (and only minifying that). Sounds like the new whitelist-filters in Autoptimize’s API will come in handy, no? 😉
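By way of illustration, such a whitelist could boil down to something like the sketch below; the filter name and file names are assumptions on my part, so do check the AO API examples for the exact name and signature once 2.0 is out:

add_filter( 'autoptimize_filter_js_whitelist', 'my_ao_js_whitelist' );
function my_ao_js_whitelist( $whitelist ) {
    // only the JS that (almost) all pages share gets aggregated & minified;
    // everything else is left out of the aggregate (and can be minified separately)
    return 'shared-libs.js, shared-site.js';
}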