
Broadband and web2.0 applications

Singapore is one of the most wired countries in the world, with broadband penetration exceeding 100% (Korea, take that). I recently switched to Starhub from SingTel on one of their midrange plans (not the alleged 100 Mbps one), so I was curious and headed over to SpeedTest to check how I'm doing. Locally I got 8.42 Mbps; overseas 5.37 Mbps (Boston), 4.76 Mbps (San Francisco) and 5.3 Mbps (Texas).
[Screenshots: local broadband result and result to California]
Should be ample power for any web-based application, shouldn't it? Not so fast. Just recall how most web 2.0 applications with their Ajax goodness actually work:
[Diagram: Ajax request sequence]
(Image shamelessly borrowed from JustGoodDesign.com)

You will realize that while you want all the bandwidth you can get for watching videos or downloading files, you also want fast reactions to your requests. Reaction time on the internet is called latency: how long it takes the other side to react. Many factors influence latency: physical distance, quality of lines, the number of nodes to traverse, things happening on those nodes (packet inspection, firewall activity) etc. In a local network you can expect latency in the range of less than a millisecond up to 3-4ms. Once firewalls get in between, figures get higher. Here the Singapore figures (measured using ) look different:
[Ping results: US East Coast and US West Coast]
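If you want to get a rough feel for latency without a ping tool, you can time a plain TCP handshake. This is a minimal sketch (the function name `tcp_latency_ms` is mine, just for illustration), and a TCP connect slightly overstates true round-trip time, but it shows the orders of magnitude involved:

```python
import socket
import time

def tcp_latency_ms(host, port=80, timeout=5):
    """Rough round-trip estimate: time how long a TCP handshake takes."""
    start = time.perf_counter()
    # create_connection completes the three-way handshake before returning
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0
```

Run it against a nearby host and a far-away one, and you will see the same local-versus-overseas gap as in the ping screenshots.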
Locally I got 31ms (just 15x slower than a 2ms local network); overseas between 200ms and 350ms (100x to 175x slower than the LAN). You can imagine what that does to your "chatty" web 2.0 application. While 100 sequential calls in the LAN take just a fifth of a second, you would need to wait 20 seconds on the slow connection. Now travel to places surrounded by great walls or to exotic destinations and your app will suck big time. My recommendation for all web 2.0 developers: schedule some time in a remote development facility (yes, people actually write code there) but leave your servers at home.
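The back-of-the-envelope math above can be written down as a tiny Python helper (the function name is mine, purely for illustration) — with sequential requests, every round-trip pays the full latency, so total time scales linearly with the number of calls:

```python
def total_time_ms(calls, latency_ms):
    """Total wall-clock time for sequential round-trips: each call
    pays the full network latency before the next one can start."""
    return calls * latency_ms

# 100 calls on a 2ms LAN: 200ms, i.e. a fifth of a second
# 100 calls at 200ms overseas latency: 20,000ms, i.e. 20 seconds
```

This is why reducing the number of round-trips (batching, consolidating resources) helps a chatty application far more than a fatter pipe does.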

Posted by on 14 March 2010 | Comments (2) | categories: Buying Broadband


  1. posted by Erik Brooks on Sunday 14 March 2010 AD:
    Excellent points. This is actually a very significant challenge for us with respect to XPages.

    In 8.5.1 a bare XPage with a date picker results in 60+ hits on CSS files:

    { Link }

    Even on broadband it takes about 20 seconds to open the example page if your latency is > 100ms.

    Thankfully the JS files seem to be consolidated (during the 8.5.0 beta they weren't, and there were 60+ JS files too). I remember somebody at Lotusphere mentioning that for 8.5.2 CSS would be merged also.
  2. posted by Tony Austin on Monday 15 March 2010 AD:
    When I was at IBM, in a previous life, we were talking about issues like this way back 20 or 30 years ago, Stephan. The only thing that has changed is that the various infrastructure elements have become faster: faster processors, faster disks, faster communications links, etc. But "all things are relative" and the principle that you're highlighting is still more or less the same.

    A transaction is made up of a chain of actions, some relatively slower than others; add them all up (including the various queuing times spent waiting for resources to free up at the various stages) to come up with the overall transaction time. Then carefully analyze them to see where the major slow steps are. You'll get by far the best returns from speeding up the slowest/lengthiest steps.

    You mentioned all this in the context of "Web 2.0" (which means different things to different people), which doesn't seem to be getting the same exposure these days as it was a year or two ago. It's "cloud computing" that's getting more mention now, and it's salutary to keep the same performance principles in mind when considering cloud computing performance. I reckon that unless there is widespread very-high-speed broadband available wherever you are, cloud computing will be very disappointing. Microsoft, for example, hasn't committed to an Azure data centre in Australia, so I wouldn't be too keen on relying heavily on good Azure performance right across Australia (especially in the outback).