Tag Archive | "Optimisation"

Why you need to de-fluff your Javascript


I don’t know if it’s the same for you, but in recent times I’m finding that the number of javascript files my projects rely on is growing arms and legs. It’s not uncommon to discover that you’re soon using maybe a dozen jQuery components, the excellent shadowbox plugin (complete with multiple javascript files), and more often than not, some mapping (cue the very handy geoxml project). That’s before we even start on the code we will be writing ourselves. Things got so bad on a recent project (23 – yes, TWENTY THREE included javascript files) that I decided it really was time to get a grip on things.

The mission was on: a) cut the bandwidth taken up in delivering these files to the users’ browsers, b) reduce the number of HTTP requests and thereby improve page loading times, and c) potentially help out with SEO by getting content closer to the top of the page. That last one is a bit of a reach, but it never does any harm for a page to have a higher proportion of real content nearer the top.

The strategy was:

1) Ensure Javascript was being served compressed (gzip/deflate)

2) Minify / Pack / otherwise compress the source javascript

3) Suture the compressed javascript files together into one large file

1) The starting point was to ensure that Apache was correctly serving javascript in compressed form to compatible browsers. This can bring dramatic benefits: research shows an average 75% SAVING in bandwidth for html/css and javascript when delivered in compressed form. It’s easy, it’s an instant fix and you really ought to be doing it. Sadly some hosting companies out there still won’t turn on compression for you, using one lame excuse or another – find a better host, I say!
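For Apache with mod_deflate the change is only a few lines of configuration. A minimal sketch (the exact list of MIME types you need may vary with your setup, so treat this as a starting point rather than gospel):

<IfModule mod_deflate.c>
    # Compress text-based responses (html, css, javascript) before sending them
    AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript application/x-javascript
</IfModule>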

2) The next step was to reduce the source file sizes before they are served, using various compression techniques. You have probably come across this before – either as minified jQuery files, or perhaps as javascript that has been packed using Dean Edwards’ packer tool – but because I wanted to script the conversion on my own server, I chose the excellent JavaScript::Packer CPAN module by Merten Falk. The nice thing about this module is that I can choose from different sorts of compression – minify and pack being but two of them.
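Here is roughly how I drive it from a script. This is a minimal sketch: the file names are made up, and the compress options are as I remember them from the module’s documentation, so check the current JavaScript::Packer docs before relying on it.

#!/usr/bin/perl
use strict;
use warnings;
use JavaScript::Packer;

my $packer = JavaScript::Packer->init();

# Slurp a source file (hypothetical name), squash it, write it back out
open my $in, '<', 'jquery.someplugin.js' or die "read: $!";
my $js = do { local $/; <$in> };
close $in;

# 'clean' just strips comments and whitespace; 'shrink' and 'obfuscate' go further
$packer->minify( \$js, { compress => 'shrink' } );

open my $out, '>', 'jquery.someplugin.min.js' or die "write: $!";
print $out $js;
close $out;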

OK so by now we’re making improvements, but not actually addressing the main problem. Our files are much shorter (90% shorter in some cases) having been minified. They are reaching the user faster than before as we have tuned our webserver to deliver the javascript in compressed form, but we still have a huge number of files being downloaded, and that means additional HTTP requests and THAT is the real performance killer. Remember, between 40% and 60% of visitors to your site are likely to have an empty cache, so this is a very important issue – in fact it is the most important issue affecting page performance.

So on to:

3) Suture the files together into one.
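In practice this step is just careful concatenation. A minimal sketch (the file list here is hypothetical):

#!/usr/bin/perl
use strict;
use warnings;

# Stitch the already-minified files into one big download
my @minified = qw( jquery.min.js shadowbox.min.js site.min.js );

open my $out, '>', 'all.min.js' or die "write: $!";
for my $file (@minified) {
    open my $in, '<', $file or die "$file: $!";
    my $js = do { local $/; <$in> };
    close $in;
    print $out $js, ";\n";    # a trailing semicolon guards against files that end without one
}
close $out;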

And this is where all my problems began.

The site stopped working. Or at least various bits of it did. I could often control which bits worked by swapping the order of the stitched files, and then I twigged that perhaps the minifying or packing was failing in some way, and that putting all the files together in one file was simply exposing the problem.

And this is the crux of the matter and the point of my post. NOT ALL FILES WILL SURVIVE being minified or packed. Removing white space and line breaks, which is inevitably part of the compression process, can semantically change the meaning of poorly written javascript and break it, even though the original ran exactly as desired in the browser. The classic example is a statement that relies on a line break rather than a semicolon to terminate it: the browser quietly inserts the missing semicolon in the original, but once the line break has been stripped out the two statements run into one another.

So what to do … ?

This is what led me to discover the joys of JSLint, a tool that you can run over your javascript to give it a clean bill of health – or otherwise. Indeed it’s the “or otherwise” most of the time, to be honest, as the default options are rather strict, but the great thing about JSLint is that it will tell you exactly which coding rules are being broken AND (usually) how to fix them.

After working through all the files and correcting all the faults reported by JSLint (barring some stylistic ones I chose to ignore), the files are now all minified / packed and correctly sutured together into one large file. The site works as intended and page loading times have dropped by almost 70% for new visitors on a decent connection.

Conclusion

JSLint might seem a bit pointless at first, since all it does is nag you about lots of seemingly petty coding rules, but I would say BEAR WITH IT: learn why it is making the suggestions it makes and, if need be, turn off some of the stricter rules if you can justify why. Your patience will be rewarded, as your files will now survive minifying and packing and you will be able to stitch them all together into one large file to dramatically improve your website’s performance.

Roger


Memcached and Clouds


I’ve been spending a lot of time with Amazon’s cloud database SimpleDB recently, and was invited to give a presentation in July on how Memcached and SimpleDB work really well together. We’ve made a copy of the presentation available online and you can view it at the link below. If you want a PDF copy of the presentation, you can order it for free on the same page.

www.mindsizzlers.com/presentations/amazon_simpledb/
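For a flavour of why the two pair so well, here is a minimal cache-aside sketch in Perl using Cache::Memcached. The fetch_from_simpledb() routine is a hypothetical placeholder for whatever SimpleDB client code you use; SimpleDB stays the system of record, while memcached absorbs the repeated reads.

#!/usr/bin/perl
use strict;
use warnings;
use Cache::Memcached;

# Hypothetical placeholder: swap in your real SimpleDB query code here
sub fetch_from_simpledb {
    my ($item_name) = @_;
    # ... call SimpleDB and return the attributes for $item_name ...
    return { itemname => $item_name };
}

my $memd = Cache::Memcached->new( { servers => ['127.0.0.1:11211'] } );

sub get_item {
    my ($item_name) = @_;

    # Try the cache first; on a miss, fall back to SimpleDB and prime the cache
    my $item = $memd->get("item:$item_name");
    unless ($item) {
        $item = fetch_from_simpledb($item_name);
        $memd->set( "item:$item_name", $item, 300 );    # cache for five minutes
    }
    return $item;
}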


Boost performance with LibXML


We’re working on a project at the moment that has a lot of XML flying about; for example, we wrap data coming out of Amazon SimpleDB in XML and then consume that data in the rest of the program.

I’ve been using XML::XPath to extract the data from the XML, so I can write this sort of thing:

use XML::XPath;

my $xp = XML::XPath->new( xml => $xml );
foreach my $walk ($xp->findnodes('/walks/walk'))
{
    my $walkid = $walk->findvalue('./@itemname');
    # etc ...
}


It’s easy to write, easy to read and works well. However, recently I’ve begun noticing that the project has become a bit, well, sluggish. I was kind of hoping that XML::XPath would be using the C (and hence very fast) LibXML parser under the hood, since I had recently installed it on the system, but the lack of speed led me to think this might not be the case.

Reading around, I discovered that XPath support is already built in to XML::LibXML, and so I was able to rewrite my code as follows:

use XML::LibXML;

my $parser = XML::LibXML->new();
my $doc = $parser->parse_string($xml);
my $xp = XML::LibXML::XPathContext->new($doc->documentElement());

foreach my $walk ($xp->findnodes('/walks/walk'))
{
    my $walkid = $walk->findvalue('./@itemname');
    # etc ...
}


Note how it is just the setup that has changed; the actual data processing stays the same (in most cases).

This makes things *MUCH* speedier, as you would expect. My perception is that it is perhaps as much as 10 times faster for large XML files, but I haven’t done any quantitative analysis.
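If you do want numbers, Perl’s core Benchmark module makes a quick comparison easy. A rough sketch that slurps a sample document (the walks.xml file name is made up) and times the same query both ways:

use strict;
use warnings;
use Benchmark qw( cmpthese );
use XML::XPath;
use XML::LibXML;

# Slurp a sample document to parse repeatedly
my $xml = do { local $/; open my $fh, '<', 'walks.xml' or die $!; <$fh> };

cmpthese( -5, {
    'XML::XPath' => sub {
        my $xp = XML::XPath->new( xml => $xml );
        $xp->findnodes('/walks/walk');
    },
    'XML::LibXML' => sub {
        my $doc = XML::LibXML->new()->parse_string($xml);
        my $xp  = XML::LibXML::XPathContext->new( $doc->documentElement() );
        $xp->findnodes('/walks/walk');
    },
} );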

BEWARE though: it’s not a completely transparent drop-in, as the parser in LibXML has some quirks. For example, if there is a namespace declared in the XML file then you will get no data back unless you correctly register it on the XPath context.

So, when writing an Atom parser, note the registerNs line:

use XML::LibXML;

my $PARSER = XML::LibXML->new();
my $DOC = $PARSER->parse_string($xml);
my $XP = XML::LibXML::XPathContext->new($DOC->documentElement());
$XP->registerNs( xatom => "http://www.w3.org/2005/Atom" );

foreach my $data ($XP->findnodes('//xatom:entry/xatom:content[@type="text/xml"]'))
{
    # ... process each matching <content> node ...
}

This despite the fact that inside the Atom feed NO namespace prefix is explicitly used on the elements. The Atom file contains <entry> and NOT <xatom:entry>, yet you MUST register a namespace prefix to be able to read the data. You can choose any prefix you like; I picked xatom, but it could just as well have been fred. The reason is that the Atom elements sit in a default namespace, and XPath treats unprefixed names in a query as belonging to no namespace at all, so the only way to match them is to register a prefix for that namespace and use it in the query.


