wissel.net

Usability - Productivity - Business - The web - Singapore & Twins

How much bandwidth does a Domino server need?


We get this question quite often lately, obviously driven by who-does-not-need-to-be-named claiming lower bandwidth requirements. Of course that is utter nonsense. There is no intrinsic bandwidth requirement in Domino. After all, Notes servers were happily running on 4800 Baud modem connections. Bandwidth is speed, so when you rephrase the question you see it is missing half of it: "How fast should [insert-your-favorite-land-transport-vehicle] be?" The logical reply: to do *what*? So when looking for bandwidth requirements you need to know: how much data do I have, and in what amount of time do I need (or want) this data to be delivered? So step 1 is to compute these values:
  • Average requirements:
    [Average/Median number of messages per hour] * [Average/Median size of message] / (3600 * [Acceptable average delivery time in seconds])
    The 3600 converts the hour into seconds.
  • Peak requirements:
    [Peak number of messages per hour] * [Peak size of message] / (3600 * [Expected maximum delivery time in seconds])
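To make the arithmetic concrete, here is a minimal sketch of the two formulas above in Python. The function name, the units and the sample figures are purely illustrative assumptions, not recommendations.

    # Rough sketch of the average/peak bandwidth formulas above.
    # Units are an assumption: message sizes in kilobytes, result in KB/s.
    def required_bandwidth(msgs_per_hour, msg_size_kb, delivery_time_s):
        # messages/hour * size, spread over 3600 seconds and relaxed by
        # the acceptable delivery time (mirrors the formula above)
        return (msgs_per_hour * msg_size_kb) / (3600 * delivery_time_s)

    # Hypothetical example figures:
    average = required_bandwidth(1000, 100, 5)    # median load, 5 s acceptable delay
    peak = required_bandwidth(4000, 500, 30)      # peak load, 30 s maximum delay
    print(f"average: {average:.1f} KB/s, peak: {peak:.1f} KB/s")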
These formulas are product independent. Now you can apply additional factors: e.g. when messages are transmitted using SMTP/MIME, attachments swell by 34% due to the MIME encoding. Notes compresses documents, data and network communication and can save 5-70% of the transmission size. Why this big spread? Well, when you transmit an already compressed archive file there is little left to squeeze out of it; an old MS-Office document, on the other hand, can lose 80% of its size when compressed. There are a few caveats of course (a small sketch after this list puts rough numbers on these factors and the first caveat):
  • Corporate habit: we see very often that 80-90% of messages are retrieved in just 2 of 24 hours. So when you calculate with 24,000 messages/day you miscalculate your average to be 1,000 messages/hour, while your true average is 9,600 in the relevant hours.
  • You underestimate your growth. What might have been enough 3 months ago might not be good enough one year in the future. (IBM internally seems to be quite different: using Lotus Quickr and Lotus Connections we actually see a decline in message volume.)
  • Management (or user) expectations: they expect prompt delivery even for the biggest messages at peak time (which ties back a little to the first point).
  • Bandwidth availability: this is mostly an issue on VPN connections. The nominal speed my ISP bills me for is far higher than what I am ever able to get.
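Putting rough numbers on the adjustment factors and the busy-hour caveat above (all figures here are illustrative assumptions; the 34% MIME swell comes from the text, while the 30% compression saving is just one point inside the quoted 5-70% range):

    # MIME encoding inflates attachments by roughly a third.
    MIME_SWELL = 1.34
    # Assume Notes compression saves ~30% (the text quotes a 5-70% range).
    NOTES_COMPRESSION = 0.70

    def effective_size_kb(raw_kb, via_smtp=True, compressed=True):
        # Adjust a raw message size for transport encoding and compression.
        size = raw_kb * (MIME_SWELL if via_smtp else 1.0)
        return size * (NOTES_COMPRESSION if compressed else 1.0)

    # Busy-hour caveat: 24,000 messages/day looks like 1,000 per hour,
    # but if 80% of them arrive within 2 hours the relevant rate is 9,600.
    msgs_per_day = 24000
    naive_rate = msgs_per_day / 24          # 1000.0 messages/hour
    busy_rate = msgs_per_day * 0.8 / 2      # 9600.0 messages/hour
    print(naive_rate, busy_rate, effective_size_kb(100))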
What's your bandwidth experience?

Posted by Stephan H. Wissel on 23 July 2009 | Comments (5) | categories: Show-N-Tell Thursday

Comments

  1. posted by Bill on Thursday 23 July 2009 AD:
    Ohhh, good question.

    Back in 1997, when doing a very large scale Notes consolidation, we found that a '6k' per user figure worked. Near the end of the project (2001 or so), this had already climbed to '12k'.

    We used this figure to work out the minimum network pipe for a fully interactive 'normal' Notes user (and there's several PAGES of assumptions there to start with!).

    So an office with 100 users would need a pipe of approx 100 * 12k = 1.2m. So far so good.

    However, mail attachment size has ballooned, and Notes' ability to compress mail and compress network traffic has helped. All variables pushing up or pulling down this figure.

    So after 15+ years of corporate experience doing very large enterprise consolidations I have one and only one recommendation.

    Go talk to Wouter Aukema of Trust Factory, have them come in and do an audit of your existing environment, and he will TELL you what size of pipes you need. He'll also tell you where your environment works and doesn't work.

    He helped a multinational customer decide WHERE to site application servers, for instance, so that the 100k users accessing these servers got best use of their existing network.


    { Link }

    Disclaimer: I've worked for them, I went to Wouter's wedding. It's still an awesome product.

    ---* Bill

  2. posted by Stephan H. Wissel on Thursday 23 July 2009 AD:
    @Bill, thx for the comment. We actually found in a recent analysis of another multi-national that 2-day averages are between 170k and 600k. Of course that's inaccurate since it doesn't distinguish between messages with and without attachments.
    :-) stw
  3. posted by Charles Robinson on Thursday 23 July 2009 AD:
    Stephan, you're only taking into consideration e-mail. Since Domino is an application platform, that needs to be taken into account, too. Anyone considering an alternate platform also needs to consider application bandwidth. The bandwidth issue could be worse than it is with Domino, since separate applications that are unaware of each other will usually end up stepping on each other. Or so I hear. *rolleyes*

    Anyway, could you offer some guidance for doing that? In the past I've tried using Wireshark and the Notes platform stats but I never was able to come up with anything meaningful. It would be good to hear how others have tackled this.
  4. posted by Keith Brooks on Thursday 23 July 2009 AD:
    You forgot to include Sametime, which also encourages less email.
    But Domino will use as little or as much as your network guys let it. If you think Domino is using all your bandwidth, fire your network monkeys, because they never did their job and managed the bandwidth.
    I have had Domino replicate for 3 days on one link without a drop at a paltry 1200 baud (if that).
    Notes clients however do require more bandwidth to pull down emails, but even then one could enable replication to only pull down 2k or 5k (like my phone) and only pull more when required.
    Lots of ways to answer this, but usually I put it back in the client's court and ask their network team to show proof.
    I have yet to receive any, in over 15 years, showing that Domino or Notes was a bandwidth hog.

  5. posted by Wouter Aukema on Wednesday 19 August 2009 AD:
    Indeed an interesting question Stephan (and thanks for the nice compliments Bill!).

    I couldn't resist placing a rather large comment/answer:

    We analyze Domino environments, mainly from multi-national customers, where we collect data about all activities performed by all end users over a 7-day period and process that inside our data warehouse in The Hague. This allows us to perform in-depth analysis and compare customers with each other. So far, we've analyzed the behavior of more than half a million end users.

    1. In my experience there is no 'normal' user. Every user has its own footprint and corresponding network demand. Customer averages range from as low as 1.89 to as high as 24.8 kilobits per second per user. So it really is not a good idea to use averages (or assumptions) for planning network capacity. Better to measure and analyze the actual traffic, fact-based.

    2. Network compression really works! That is, if you enable it. We see many companies that did not enable compression on their desktops. At most companies we observe compression scores between 35-40%, without significant cost to the CPU. As with many nice features that IBM keeps adding to Notes & Domino, many customers fail to make use of them. This makes them an easy target for that company who-does-not-need-to-be-named.

    3. Another question is how Notes clients are/will be deployed. Typically, we see that remote users (with local mail file replicas) score far higher numbers than online users. Their sessions are much shorter and so is the load on the mail server. Downside to local mail file replicas is the famous morning peaks in network traffic, because all clients start replicating at the same time. And not to forget the additional load from using the Mobile Directory Catalog.

    4. Strange users, bad applications and interesting mis-configurations: we often see things out there that blow your mind. In line with Bill's famous 'Worst Practices' talk, we often see how very few users destroy the picture for the entire company. We've developed a method to identify such culprits, using cluster analysis { Link } . Typical savings between 30-80% (!)

    5. Application versus mail access: perhaps stating the obvious, but they behave very differently, especially in network traffic and bandwidth, and very much depending on the developer. Imagine an application performing uncached @DbLookup calls to obtain a list of country keywords, 30 times in one form. Add some more latency to the network link and the poor user is waiting ages for the form to appear. (A very effective way to reduce network bandwidth consumption.)

    6. Network monitoring (e.g. HP OpenView / MRTG): sometimes customers tell me they already have an in-depth view of what's happening on TCP/IP port 1352 (the Notes protocol). While this is very good, it unfortunately is not going to help you much. The traffic you'll see on a switch is server-to-server traffic such as mail routing and replication. Traffic from end users - especially in an unconsolidated environment - takes place primarily on the local area network and therefore remains 'unseen'. Consolidate an office server into a data center and that server-to-server traffic ceases to exist. It is the end user traffic that now needs to go across the corporate network. Furthermore, although it may be interesting to see the amount of traffic taking place today, you'll need to know in detail what's happening inside (who's doing what, how, and on which databases) in order to successfully reduce the network traffic footprint.

    7. I am currently producing a comparison between the two competing mail platforms, using a research paper published by this company 'who-does-not-need-to-be-named'. Applying the metrics/assumptions & formulas recommended in their document to the real world (I mean the statistics we have on real customers) turns out to be extremely dangerous. You'll end up severely over-sizing or under-sizing your network links.

    Feel free to drop me a note if you're interested in the results (I expect to have it ready by September), or would like to discuss this network bandwidth topic in more detail (wouter dot aukema at trustfactory dot com).

    My advice is to not use formulas and assumptions to predict existing or future network load. Instead, take a good look at what, when and how your end users, applications and servers are doing today.