About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.
Reach out to me via:
Follow notessensei on Twitter


Last usage of a mail file - for all users

My admin skills are getting a little rusty. When I was asked: "How can I identify a dormant mailbox?" I couldn't name a place in the admin client to look. Of course, there is the NotesDatabase.LastModified property, but that gets updated on a design refresh too. So I asked innocently: "Dormant, how?" After an amusing two-hour discussion between the stakeholders (only the popcorn was missing) we concluded that a mailbox is dormant when either
  • No document has been created for a while (including incoming)
  • No email has been sent for a while
Calendar entries or todos were not part of the consideration, but that's just a question of which view you look at. For "any" document it is ($All) and for sent documents ($Sent). The LotusScript code is just a few lines. It needs to be run with sufficient access rights, so it can open all mailboxes.
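The decision logic itself is tiny. Here is an illustrative plain-JavaScript sketch (the actual agent is LotusScript and would read the newest entry dates from the ($All) and ($Sent) views of each mail file; the function name and threshold are my assumptions):

```javascript
// Sketch of the dormancy test: a mailbox counts as dormant when neither
// view has seen a new document within the threshold period.
// lastAnyDoc / lastSentDoc would come from the newest entries in the
// ($All) and ($Sent) views of each mail file.
function isDormant(lastAnyDoc, lastSentDoc, thresholdDays, now) {
  var DAY = 24 * 60 * 60 * 1000;
  var ageInDays = function (d) { return Math.floor((now - d) / DAY); };
  return {
    noNewDocuments: ageInDays(lastAnyDoc) > thresholdDays,
    noMailSent: ageInDays(lastSentDoc) > thresholdDays
  };
}
```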
Read More


Providing user information in JSON

In the MUSE project we encountered the need to retrieve user information in JSON format. Easily done, one would think. The trouble starts when you have multiple directories and you need reasonable speed. Sometimes falling back to @Formulas gives you what you need, fast and easy. @NameLookup knows where to look and you don't need any extra configuration. A simple call to a form will give you all you need: https://yourserver/somedb.nsf/namelookup?Readform for yourself, or add &User=John Doe for any other user. This will return:
{
  "QueryName": "John Doe",
  "NotesName": "CN=John Doe/OU=ThePitt/O=GIJoe",
  "AllNames": [
    "CN=John Doe/OU=ThePitt/O=GIJoe",
    "John Doe/ThePitt/GIJoe",
    "John Doe"
  ],
  "eMail": "",
  "MailDomain": "SACMEA",
  "MailServer": "CN=PittServer42/OU=ThePitt/O=GIJoe",
  "MailFile": "mail/jdoe.NSF",
  "Empnum": "0815",
  "Empcc": "4711"
}

The form makes extensive use of @NameLookup and looks quite simple in DXL.
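When calling the lookup programmatically, the URL can be assembled like this (a plain-JavaScript sketch; server and database names are the placeholders from the example above, and note that user names with spaces need URL encoding):

```javascript
// Build the namelookup URL described above; a user name like "John Doe"
// must be URL-encoded when the request is issued from code.
function nameLookupUrl(server, dbPath, user) {
  var base = "https://" + server + "/" + dbPath + "/namelookup?Readform";
  return user ? base + "&User=" + encodeURIComponent(user) : base;
}
```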
Read More


Building a shared approval frontend in XPages

The saying goes: "God would not have been able to create the world in seven days if there had been an installed base to take care of". As much as we wish for divine powers, we need to make do with less and look after an installed base. Case in point: you have a set of approval applications in classic Notes client code, like: Travel, Leave, Expenses, Gifts, Training, BoM changes etc. You are challenged to provide a web and/or mobile interface for them.
Considering your way forward, you have to decide between these options:
  1. Dump the applications and rebuild them on a new platform. Hope that it will take long enough, so nobody will ask you to migrate any of the existing data
  2. Enable them one-by-one
  3. Hire Redpill to do an asymmetric modernization
  4. Build a lateral approval screen across your applications and leave the full web enablement for later
While #1 would allow you to silently dump all your technical debt, it will take too long and a potential data migration monster would lurk in the dark. #2 is what non-techies would expect, but again you spend too much time and you will carry forward new technical debt. After all, the applications are more similar than different. Hiring Redpill would be a sensible option, but that requires board approval, or it is against your "we-do-it-inhouse" policy, or you simply want to expand your skill horizon.
This leaves you with #4. To do this successfully, you need to define your scope clearly. In the rest of the article I will work with the following assumptions; if they don't fit your situation, adjust your plan:
  • The goal is to provide the decision making screen only, not a complete workflow engine (it could evolve to that)
  • Request submission happens in the existing application, providing a new interface for that is outside the current scope
  • The flow logic (how many approvers per level etc.) is determined on submission by the original application
  • Someone from a different department has indicated interest in using the approval screen from a completely different system (that is the "scope creep" that happens in any project)
When you look at workflow systems while planning your application, you will notice that there are two types of data: the flow state/sequence and the specific request data. The flow state is stuff like: what is the current status, what are the potential decisions (usually: Yes, No, More info please), and who are the past, current and future approvers. The specific request data is stuff like: duration and type of leave, price of items to purchase, customer involved etc.
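To make the split concrete, a flow record in the approval application might carry fields like these (a sketch only; every field name here is my invention for illustration, not taken from an actual design):

```javascript
// Illustrative flow-state record: everything the approval screen needs,
// nothing that is specific to one request type.
var flowDoc = {
  status: "pending",                            // current flow state
  decisions: ["Yes", "No", "More info please"], // potential decisions
  approvers: {
    past: ["jane@example.com"],
    current: ["boss@example.com"],
    future: ["cfo@example.com"]
  },
  sourceApp: "Leave",     // which application submitted the request
  sourceUnid: "4FA2C0DE"  // pointer back to the request document
};
```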
In your existing applications the two are stored together (and will stay like that), but you need to extract the flow data to your new approvalCentral application. On the back of a napkin you draw this sequence:
Your approval workflow
(Image created with JS Sequence Diagrams and postprocessed with InkScape)

The green part of the application exists, the first question is: how to design the submission. There are a number of options you could choose from:
  • Have the Approval Central application poll participating applications on a schedule. That option is least efficient, but might be your only choice if you can't touch the existing apps at all (this happens more often than you think; when admins are paranoid, policies are set very cautiously)
  • Have each application create a flow document directly in the workflow central application. You could use an inherited LotusScript library for that. While that might have been the preferred choice a decade ago, it has two problems: first, you limit your participating applications to Notes only (and there are other applications with decision requirements), and second, you would expose the inner workings of your application prematurely
  • Use a web service interface to implement contract-first development. While REST is the current IT fashion, you pick SOAP. The simple reason: LotusScript can talk SOAP quite well via the web service consumer design element, while REST would require your notification code to be written in Java. So a SOAP service it is

Read More


{{Mustache}} Helper for Domino and XPages

Previously we had a look at how to use Mustache with the CKEditor to provide an editing environment for templates. I glossed over where to store these templates and how to use them. Let me continue there.
I'll store the template in Notes documents and use an application managed bean to transform a document (and later other things) using these templates.
I have 2 use cases in mind: one is to allow the configuration of HTML notification messages in applications and the other to configure the display of workflow details (more on Workflow in a later post). The approach has two parts:
  1. loading (or reloading) the existing templates
  2. using the templates
The first one should only need to run once in the application's lifecycle. Since managed beans can't have constructors with parameters, I created a separate load function, so inside the bean there is nothing that depends on the XPages runtime environment. You therefore can use that bean in an agent, an application or a plug-in.
I also opted to load all templates into memory at once, but only compile them when actually needed. If you plan to have lots of them, you want to look for a different solution.
To make the class that does the work easy to use, I wrap it into a managed bean and a small SSJS helper function:
(The faces-config.xml entry that registers the managed bean is not shown in full here; as its comment notes, in a production system the bean scope should be application.)

function applyTemplate(doc, templateName) {
	try {
		// Check for init, load templates on first use
		if (!Mustache.isInitialized()) {
			var templateView:NotesView = database.getView("templates");
			Mustache.initFromView(session, templateView);
		}
		return Mustache.renderDocumentToString(templateName, doc);
	} catch (e) {
		return e.message;
	}
}
With this done, transforming a document becomes a one-liner in SSJS: return applyTemplate(document1.getDocument(),"Color");. To demo how it works, I created a sample database.
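For readers new to Mustache, here is a minimal plain-JavaScript sketch of what the render step conceptually does (the real helper compiles templates and reads item values from the Notes document; this toy version just substitutes placeholders):

```javascript
// Toy logic-less renderer: replace every {{field}} with the matching
// value from the fields object; unknown fields render as empty string.
function renderTemplate(template, fields) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, name) {
    return Object.prototype.hasOwnProperty.call(fields, name)
      ? String(fields[name]) : "";
  });
}
```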
Read More


Karma and Wealth

Karma has gotten some attention lately, and I have been musing about it for some time. The common short explanation is "What goes around comes around", or in other words: "Every action (or inaction) you take has consequences you ultimately will be confronted with". We are reminded by various spiritual traditions that nothing good can come from a bad deed and that good deeds will yield (in mysterious ways, at some point) good results.
This is where the trouble starts. Our perception of what is "good" and what is "bad" differs greatly from that of the mystics. Pop literature links wealth, affluence and influence to "good Karma". Someone poor or suffering easily gets dismissed as "(s)he has bad Karma".
I think that misses the point completely.
Taking a step back: Buddhists (and others in similar ways) believe we are trapped in a cycle of Samsara that leads us through many lifetimes. Ending Samsara and suffering is the goal of enlightenment (I'm simplifying here). The main force holding us in Samsara is Maya: the illusion of existence. Now, adding nice things to our life binds us deeper to Maya, making liberation more remote, so I doubt that this is a good thing per se. And happiness somehow works differently anyway.
If luxury were the answer, the road to enlightenment would lead through the god realm, which Buddhists believe is a detour. Some scholars argue that our western civilization is god-realm-like (I'll add a link when I rediscover it), with plenty of leisure for displaying compassion and joyful effort.
Looking at it from a different angle might explain it better: the currency of Karma is compassion. Compassion for all living beings. That includes yourself, so there is no point in moving under the bridge since "it isn't real". Looking after yourself is a requirement, so you can sustainably look after others. IMHO good Karma is what makes it easier to be compassionate. Your good deeds will make it easier for you to do good deeds in the future. Any ulterior motive might disqualify your actions as good deeds. So if you think improvements in your financial situation are the result of good Karma, you mix up cause and effect (which anyway only exist interdependently). An improvement in your personal situation isn't the reward, but the enablement for greater compassion - and while it makes you happy for a while, happy people are contagious.
This also reconciles Karma and free will: contrary to the common perception "It was Karma that this happened", you end up with "Life offered a situation, I made a decision, now I'm presented with the consequences". Of course, all consequences turn into offerings of new situations. I think it is folly to conclude that hardship is automatically an indication of bad Karma (though it might be).
The best analogy: there is a weight of 100kg you are supposed to lift (quite a hardship for most of us)! So what's the conclusion? Bad Karma? Nope: if you have trained hard, that might be the final test and the reward for mastering your training, and you will lift it. Same with life: a difficult situation could be anything: the result of a bad deed or an invitation to show your skilfulness: maintain compassion no matter what.
In the words of a beloved teacher: "Life dealt you cards; you make your Karma by how you play them."


A peek in my JavaScript Toolbox

Every craftsman has a toolbox, except developers: we have many. For every type of challenge we use a different box. Here's a peek into my web front-end programming collection. It works with any of your favorite backends. In no specific order:
  • AngularJS

    one of the popular data binding frameworks, created by Google engineers. With a focus on extensibility, testability and clear separation of concerns, it allows you to build clean MVC-style applications
  • Data Driven Documents

    short: D3JS. If anything needs to be visualized d3js can deliver. Go and check out the samples. There are a set of abstractions on top of it that make things simpler. I consider d3js the gold standard of what is possible in JS visualizations
  • Mustache

    Logic-less templating for any language. I use it where I can't/won't use AngularJS' templating
  • PivotTable.js

    We love to slice and dice our data. Instead of downloading and spreadsheet processing them, I use this JavaScript library.
  • Angular-Gantt

    Timeline-bound data loves Gantt charts. This component makes it easy to visualize them
  • TemaSYS

    A wrapper around WebRTC. It allows you to add voice and video to your application in an instant, no heavy backend required
  • PredictionIO

    PredictionIO is an open source machine learning server for software developers to create predictive features, such as personalization, recommendation and content discovery. Competes with IBM's Watson
  • Workflow

    I'm not a big fan of graphical workflow editors. You end up spending lots of time drawing stuff. I'd rather let the system do the drawing
    • Sequence Diagrams
      Visualize how the interaction between actors in a system flows. Great to show who did what to whom in Game of Thrones
    • JS Flowchart
      Visualize a flow with conditional branches. I contributed the ability to color code the diagram, so you can show: current path, branches not taken, current step and undecided route. (there are others)
  • Reporting

    Reports should be deeply integrated into the UI, not standalone.
  • Card UI

    While not exactly JavaScript, designing with cards is fashionable. I like how Google's material design explains cards
    • Bootcards
      Twitter Bootstrap meets card UI. Lots of quality details to generate a close-to-native experience
    • Swing
      Swipe left/right for Yes/No answers
  • Tools

    I haven't settled for an editor yet. Contestants are Geany, Eclipse (with plug-ins), Webstorm, Sublime or others. Other tools are clearer:
    • JSHint
      Check your JavaScript for good style
    • Bower
      JavaScript (and other front-end matters) dependency management. It is like mvn for front-ends
    • Grunt
      A JavaScript task runner. It does run a preview server, unit tests, package and deployment tasks. Watching its competitor Gulp closely
    • Yeoman
      Scaffolding tool that combines Grunt, Bower and typical libraries
    • Cloudant
      NoSQL, JSON-friendly database. Works for offline-first scenarios together with a JavaScript browser database
    • GenyMotion
      Fast Android emulator


Custom experience for IBM Connections Cloud - Project Muse

The old saying goes: "You can't have your cake and eat it too". When organisations move computing from servers they control into a SaaS (the artist formerly known as ASP) environment, they swap customisability for configurability and standardisation. The idea is that a vendor-controlled cloud environment benefits both from the economy of scale and from the frequent updates a DevOps model promises.
But you can have both. One of IBM's best-kept secrets are the assets that IBM Software Services for Collaboration (ISSC) has been and is building. One of my all-time favourites has been Atlas for Connections, which predated Watson Analytics by half a decade.
Now I have a new darling: IBM ISSC Project Muse. This is the internal code name; no official name has been set, nor has any decision been made to make this an official offering. However, you can ask ISSC nicely and they will use Muse technology in your project (that you awarded to them/us).
What does it do?
IBM Connections, both on-premises and in the cloud, is built around a set of APIs. These HTTPS APIs give and take XML and/or JSON. On top of them sits the regular UI. In the cloud, that UI is customizable or extendable only to a small extent. The Muse engine therefore talks directly to the API and renders an alternate user experience. This alternate experience can include custom application data or (what I liked a lot) a blend of your activity stream with your messaging. This is how it works:
Muse in a Public Cloud setting
Of course, the devil sits in the details: script libraries, UI components, authentication and the application engine need to be tuned to work together, with proper caching and in a scalable (both in devices and user base) manner.
Your average IBM seller will not know about the offering; you need to find the right Distinguished Engineer and his wingman.
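To give a flavour of the APIs Muse talks to: the activity stream, for example, is plain JSON over HTTPS. The sketch below shows the general shape; the endpoint path follows the OpenSocial pattern Connections uses, and the "list" response field is an assumption to verify against your deployment's API documentation:

```javascript
// Build the activity stream URL and pull entry titles out of a parsed
// response object; purely illustrative, no authentication shown.
function activityStreamUrl(host) {
  return "https://" + host +
    "/connections/opensocial/rest/activitystreams/@me/@all";
}
function entryTitles(response) {
  // Assumed response shape: { list: [ { title: "..." }, ... ] }
  return (response.list || []).map(function (e) { return e.title; });
}
```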


LifeBEAM helmet first impressions

After quite a wait (they were sold out) my LifeBEAM helmet arrived today. The helmet is based on a Lazer design. Here are my unpacking impressions:
A big black box
There are no experiments with the box. Black has always been the new black.
Read More


Enterprise architecture - from Silos to Layers

In a recent discussion with a client, the approaches and merits of Enterprise Architecture took center stage. IBM has for a very long time proposed SOA (service-oriented architecture), which today mostly gets implemented in a cloud stack. While it looks easy from high enough above, the devil is in the details - mostly in the details of how to get there. The client had an application landscape that was segmented along full-stack development platforms, with little or no interaction between the segments or silos:
Silo based Enterprise Architecture
The challenges they are facing are:
  • No consistency in user experience
  • Difficult to negotiate interfaces point-to-point
  • No development synergies
  • Growing backlog of applications
In the discussion I suggested to first look at adopting all the principles needed to successfully pass the Spolsky test. Secondly, transform their internal infrastructure to be cloud based, so that when the need arises, workloads can easily be shifted to public cloud providers. The biggest change would be to flip the silos and provide a series of layers that are used across the technologies. A very important aspect of the layer architecture is the use of design-by-contract principles. The inner workings of a layer are, as much as sensible, hidden behind an API contract. So when you e.g. retrieve customer information, it could come from SAP, Notes, an RDBMS or NoSQL; you wouldn't know and you wouldn't care.
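The design-by-contract idea can be sketched in a few lines (all names here are illustrative): the caller codes against the contract, and the backing store can be swapped without the caller noticing:

```javascript
// The contract: getCustomer(id) returns { id, name } or throws.
// Which backend fulfils it (SAP, Notes, RDBMS, NoSQL) stays hidden.
function CustomerService(backend) {
  this.backend = backend;
}
CustomerService.prototype.getCustomer = function (id) {
  var raw = this.backend.findCustomer(id);
  if (!raw) { throw new Error("Customer not found: " + id); }
  return { id: raw.id, name: raw.name }; // normalized, backend-agnostic shape
};
```

Swapping the Notes-backed implementation for an SAP-backed one then only means handing a different backend object to the service.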
Read More


Custom REST service in XPages using a service bean

Talking to your backend using JSON and REST is all the rage in contemporary development. Domino has supported this access, at least for reading, for quite a while using ?ReadViewEntries[&OutputFormat=JSON]. Using Domino Access Services (DAS), this has been extended to read/write support for documents as well.
However, as a result, your front-end application now needs to deal with the Domino way of presenting data, especially the odd use of @ in JSON keys (which e.g. jQuery isn't fond of). Contemporary approaches mandate that you minimize the data you send over the wire and send data in your business structure, not in your database format. Furthermore, when data is sent back to the server, you want to validate it and act on it.
In the Extension Library there is the REST control that you can use instead of the DAS service. It allows you to define what you want to expose as XML or JSON. There are a number of predefined services, but my favorite is the customRestService. When you use the custom service, you can write JavaScript for all events happening: doGet, doPost, doPut and doDelete, but you also can use a service bean. A service bean is not a managed bean, so you don't need to specify anything in your faces-config.xml. However, it is a little special. A sample XPage could look like this:
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core"
	xmlns:xe="http://www.ibm.com/xsp/coreex">
	<h1>This is the landing page of the orgSearch Service</h1>
	<p>Please use "search.xsp/json" for the actual query</p>

	<xe:restService id="JSONSearch" pathInfo="json" state="false">
		<xe:this.service>
			<xe:customRestService contentType="application/json">
				<!-- service bean and event code omitted in this excerpt -->
			</xe:customRestService>
		</xe:this.service>
	</xe:restService>
</xp:view>
If your page name is demo.xsp, then the access to the service, based on the pathInfo property, is demo.xsp/json.
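The point of the custom service is that you decide the JSON shape. Here is a plain-JavaScript sketch of what a doGet handler might assemble (field names are invented for illustration; in the real service this string would be written to the response):

```javascript
// Shape the search result as business-level JSON instead of raw Domino
// view data - no @-prefixed keys for the front-end to fight with.
function buildSearchResult(entries) {
  return JSON.stringify({
    count: entries.length,
    results: entries.map(function (e) {
      return { name: e.name, mail: e.mail }; // only the fields clients need
    })
  });
}
```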
Read More


This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2014 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.