
It's just HTML, CSS and JavaScript isn't it?

Web development is easy, isn't it? After all, it is just the open standards of HTML, CSS and JavaScript.
That's roughly the same fallacy as saying: it's easy to write a great novel, since it is just words and grammar.
The simple truth: it is hard.
I'll outline my very opinionated version of the skills you need to master front-end programming. The list doesn't include the skills to figure out what to program, just how. So here it goes:


Jeffrey Veen, whom I had the pleasure to meet in Hong Kong once, summarized it nicely in his book The Art and Science of Web Design as early as 2000: "HTML provides structure, CSS layout and JavaScript behavior". Just as writing English requires a style guide (and the mastery of it), web development has several guides, besides the plain definitions mentioned in the beginning.


  • The standard today is HTML5. So ignore any screams of "But it must run in IE6!" and see what HTML5 can do. There's a handy list of compatible elements you can check for reference
  • The predominant guideline for structure is Twitter Bootstrap. It defines a layout and structure language, so your page structures become understandable to many developers. Twitter Bootstrap contains CSS and JavaScript too, since they are intertwined
  • If you don't like Bootstrap, there are other options, but you have been warned
  • The other HTML structure to look at is the Ionic framework. Again, it contains more than HTML only


Most frameworks include CSS, so when you have picked one above, you are pretty much covered. However, understanding CSS well takes a while, so that you can modify the template you started with. My favourite place to get a feel for the power of CSS is still CSS Zen Garden. The W3C provides a collection of links to tutorials to deepen your knowledge. My strong advice to developers: don't touch the CSS initially. Use a template. Once your application is functional, revisit the CSS (or let a designer do that).


Probably the most controversial part. IBM uses Dojo big time, ReactJS is gaining traction and Aurelia is up and coming. So there's a lot to watch out for. But that is not where you get started.
  • Start learning JavaScript, including ES6. Some like CoffeeScript too, but not necessarily for starters
  • The most popular core library is jQuery, so get to know it. The $ operator is quite convenient and powerful (see the little sketch below this list)
  • To build MVC-style single page applications I recommend AngularJS. It has a huge body of knowledge, a nice form module and an active community. Version 2.0 will bring a huge boost in performance too. Make sure you know John Papa's style guide and follow it
  • And again: have a look at Ionic and Cordova (used in Ionic) for mobile development. Ionic uses AngularJS under the hood
There are tons of additional libraries around for all sorts of requirements, which probably warrants another article.
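To make the jQuery point concrete, here is a minimal sketch (the selectors and the /api/status endpoint are illustrative, not from a real application): select elements, chain operations, fetch JSON.

// Run once the DOM is ready
$(function () {
    $('.alert').hide(); // hide every element with class "alert"
    $('#save').on('click', function () {
        // fetch JSON from a (hypothetical) endpoint and show the result
        $.getJSON('/api/status', function (data) {
            $('#status').text(data.message);
        });
    });
});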


With all the complexity around, you don't have to start from scratch. There are plenty of templates around that, free or for a small fee, give you a head start. Be very clear: they are an incredible resource once you know how it all works together, but they don't relieve you from learning the skill. Here are my favourites

Tools and resources

Notepad is not a development tool. There are quite a few tools you need if you want to be productive
  • You need a capable editor; luckily you have plenty of choices, or you can opt for one or the other IDE
  • Node.js: a JavaScript runtime based on Google's V8 engine. It provides the runtime for all the other tools. If you don't have a Node.js installation on your developer workstation, where were you hiding the last 2 years?
  • Bower: a dependency manager for browser files like CSS and JS with their various frameworks. You add a dependency (e.g. Bootstrap) to the bower.json file and Bower will find the right version for you
  • Grunt: a task runner application, used for the assembly of web sites/applications (or any other stuff). Configured using a Gruntfile.js, it can run all sorts of steps: copy files, combine and minify, check code quality, run tests etc. (see the sketch below this list)
  • Yeoman: an application scaffolding tool. It puts all the tools and needed configurations together. It uses generators to blueprint different applications, from classic web to reveal applications to mobile hybrid apps. The site lists hundreds of generators and you are invited to modify them or roll your own. I like generator-angular and mcfly
  • GIT: the version control system. Use it from the command line, your IDE or a version control client
  • Watch NPM for new developments
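To give you a taste of Grunt, here is a minimal Gruntfile.js sketch (the task names and paths are illustrative, not from a real project):

module.exports = function (grunt) {
    grunt.initConfig({
        // copy static assets from app/ into the distribution folder
        copy: { main: { expand: true, cwd: 'app/', src: ['**'], dest: 'dist/' } },
        // combine and minify the JavaScript
        uglify: { main: { files: { 'dist/app.min.js': ['app/js/**/*.js'] } } }
    });
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.registerTask('default', ['copy', 'uglify']);
};

Running grunt without arguments then executes the copy and uglify steps in order.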

Those are the tools to master when you "just" want HTML, CSS and JavaScript.


Multitenancy - a blast from the past?

Wikipedia defines multitenancy as: "Software Multitenancy refers to a software architecture in which a single instance of a software runs on a server and serves multiple tenants ... Multitenancy contrasts with multi-instance architectures, where separate software instances operate on behalf of different tenants "
Looking at contemporary architectures like Docker, OpenStack, Bluemix, AWS or Azure, I can't see many actual payloads being multi-tenant. Applications by and large run single-tenant. The prevalent (and IMHO only currently valid) use case is the administrative component that manages the rollout of the individual services. So instead of building that one huge system, you "only" need a management layer (and enough runtime).
Multi-Tenant architecture
The typical argument for a multitenancy deployment is rooted in the experience that platforms are expensive and hard to build. So once a platform is ready, it is better if everybody uses it. With virtual machines, containers and PaaS this argument becomes weaker and weaker. The added complexity in code outweighs the cost of spinning up another instance (which can be done in minutes).
A multi-instance architecture mitigates these efforts:
Multi-Instance architecture
The burden of managing the tenants lies with the identity and entitlement management. Quite often (see Bluemix) those are only accessed by admin and development personnel, so their load is lighter. So next time someone asks for multitenancy, send them away thinking.


The lowdown on Notes application web and mobile enablement

There are millions (if not billions) of lines of code written for the Notes client in small, large, simple, complex, epidermal and business-critical applications. With the rise of browsers, tablets and mobile devices there is a need to web- and mobile-enable these applications, not least since IBM won't let the C code of the Notes client run on iOS (which it could).
Whether we like it or not, that body of code can be considered technical debt.
The strategy I see is mostly the same: classify applications by size, business criticality and perceived complexity. Then retire as much as possible and start a pilot for one (either insignificant or overly complex - mostly both) application using XPages. From there it goes nowhere.
Guess what. There's a better way.
Keep the part where you decide if an application can be retired. If you need to retain read access you could check out LDC Via, who remove the need to maintain a Domino infrastructure for those applications.
Then go and collect evidence (I know evidence-based management isn't in fashion right now). There are elements in Notes that translate easily into web or mobile, parts that are hard, and some stuff browsers can't do.
Collect how much of each you actually use; it makes decisions easier.

Design element and difficulty

  • View navigation:
    generally easy, it's just data, and Domino makes an excellent server for JSON data (I would render my own style, not ?ReadViewEntries&OutputFormat=JSON, but that's another story - see the sketch after this list)
  • Categorised views:
    medium. The twisties we got so used to in Notes are not a native construct on the web (show me one that is web native). But with a little effort you can devise interaction patterns, go peek here.
  • View actions:
    HARD. They can be anything. Worst when code for front-end and back-end classes gets mixed
  • Forms (read mode):
    Easy to medium. You could simply open a form using a browser and see something - or craft some XPage to show it
  • Forms (edit mode):
    medium to hard (see below): you need to get your validation right and all the nice hide-when conditions
  • RichText:
    HARD to impossible. RichText != HTML (and that's not even !==). For rendering you want to have Ben's enhancements as a start. RichText can contain attachments, wild formatting, OLE objects, sections, hidden stuff, tabbed or timed tables etc. All of that doesn't really translate to HTML. To really know how RichText was used in your application and organisation you need to scan the actual data and analyse how RichText was used in each application
  • Code in forms:
    HARD. There is no LotusScript running in your browser, so you need to rewrite that logic. So better know how many functions and lines of code you are looking at. Do your CoCoMo analysis. Code in fields (on enter/exit) needs to be re-thought (a nice word for retired)
  • Attachments:
    TEDIOUS. HTML doesn't know attachments, so you have to bridge between user expectations (being able to put an attachment anywhere in the text) and the storage reality of HTML: simulate this by placing an image and a link. You might consider adding WebDAV for a round-trip editing experience
  • Reader & Author fields:
    Easy. Works
  • Encryption and Signature:
    (popular in approval applications) wait for Domino 9.0.2 - it supposedly will work there
  • Business logic in hide-when formulas:
    hard to find in Domino Designer, DXL to the rescue
This list is probably not complete, but you get the gist
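As promised above, a sketch of what consuming the built-in JSON view feed looks like - and why you might prefer rendering your own style (hedged: the database path, view name and column positions are assumptions, and the exact entrydata/text shape varies between Domino versions, so verify against your server):

function columnValue(entry, n) {
    var col = entry.entrydata && entry.entrydata[n];
    // text holds the rendered column value; its exact shape differs by column type
    return (col && col.text) ? (col.text['0'] || col.text) : null;
}

function loadViewEntries(dbPath, viewName) {
    var url = '/' + dbPath + '/' + viewName + '?ReadViewEntries&OutputFormat=JSON';
    return fetch(url).then(function (res) {
        return res.json();
    }).then(function (raw) {
        // flatten Domino's view structure into plain objects for the front-end
        return (raw.viewentry || []).map(function (entry) {
            return {
                unid: entry['@unid'],
                subject: columnValue(entry, 0),
                status: columnValue(entry, 1)
            };
        });
    });
}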

The approach

I would advocate (after the retirement of old applications):
  • Look at the whole application portfolio, not just one pilot. The best analogy: remember how long it took to drive your first 100 m in a car (and I'm not talking about the 18-year wait to come of age). You had to take driving lessons (at least in civilized countries you have to), learn for the test, get a car (bargain with a parent?). So it took you around 20-40 h, let's say 20 h. By that measure you would conclude that getting to the supermarket, 3 km away, will take you 600 h - so driving is out, you'd rather walk. The same applies to the "pilot": the first application is the hardest, so your experience with it will skew the estimates.
    Furthermore, looking at all applications gives you new synergy potential and economies of scale
  • Create one module that handles view navigation for all applications (that probably would be an OSGi plug-in).
    When initially opening a document, it would point to a notes:// URL (you could have a little icon indicating that). Within a few weeks you have a working application giving some access to your Notes data (I would use JSON, AngularJS and Ionic for that app)
  • Next, generate the forms using angular-formly or some {{mustache}} templates, so you have read-only access (DXL and XSLT skills come in handy; see the sketch after this list)
  • Add a "request" functionality that lists the functions (action buttons) an application once had and lets users propose "I want this", so you get some priorities sorted out
  • Transition to contemporary development with a Domino back-end
  • Do not fall into the trap of "it must look and work like the Notes client" (we had that with the Java applets in R5 - last century). If someone is happy with the look of the Notes client, give them the IBM Client access plug-in (PC only)
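As promised above, a hedged sketch of the {{mustache}} idea for read-only forms (the field names and values are illustrative; in practice the template would be generated from the form's DXL):

var Mustache = require('mustache'); // npm install mustache

// a template generated (once) from the Notes form definition
var formTemplate =
    '<div class="form-readonly">' +
    '{{#fields}}<label>{{label}}</label><span>{{value}}</span>{{/fields}}' +
    '</div>';

// the data for one document, e.g. served as JSON from Domino
var noteDoc = {
    fields: [
        { label: 'Subject', value: 'Budget approval Q3' },
        { label: 'Status', value: 'Pending' }
    ]
};

console.log(Mustache.render(formTemplate, noteDoc));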

Why no magic button?

You could ask: why doesn't IBM provide this as an automatic module? After all, they wrote LotusScript, the formula language and the Notes client. The short answer: IBM tried; we even had a prototype of LotusScript running on the JVM. The Notes client is such an incredibly forgiving environment that any quality and mix of code "simply runs"™. I've seen code happily running (yes, I'm talking about you, 8000-lines-function) that was a melange of unspeakables.
Browsers on the other hand are fragile and unforgiving (not talking about quirks mode here). Any attempt to automate the conversion never reached the stage of "point-and-shoot" - so it will not become an offering.

Why not just default and rewrite?

Good idea - until you look closer. Domino's declarative security (ACL, Reader and Author fields, anyone?) is hard to replicate. The schema-free Domino NoSQL store maps neither to SharePoint nor to an RDBMS (CouchDB could work), so you suddenly deal with data migration - a nightmare that can eat 80% of your total project cost. Lastly, the incremental approach outlined here allows you to continue using the applications (and eventually enhance them along the way), so disruption is minimised. Also, the working Notes client provides a fallback for missing or malfunctioning features, which substantially lowers application risk.

Business partners to the rescue

You are not alone on this journey. IBM business partners and IBM Software Services for Collaboration are here to help. If you like the approach "I see Notes as one huge pool", you might want to talk to Redpill, who have perfected it.

As usual YMMV.


Domino, Extlib, GRUNT, JSON and Yeoman


With a few tweaks and a clever setup, you can have web developers deliver front-ends for Domino without ever touching it.
Contemporary web development workflows separate front-end and back-end through a JSON API over HTTP (that's the 21st-century 3270 for you). The approach in these workflows is to treat the web server as a source of static files (HTML, CSS, JS), with JSON payload data being shuffled back and forth. This article describes how my development setup makes all this work with Domino and Domino Designer.

The full monty

The front-end tools I use are:
  • Node.js: a JavaScript runtime based on Google's V8 engine. It provides the runtime for all the other tools. If you don't have a Node.js installation on your developer workstation, where were you hiding the last 2 years?
  • Bower: a dependency manager for browser files like CSS and JS with their various frameworks. You add a dependency (e.g. Bootstrap) to the bower.json file and Bower will find the right version for you (see the sample bower.json below)
  • Grunt: a task runner application, used for the assembly of web sites/applications (or any other stuff). Configured using a Gruntfile.js, it can run all sorts of steps: copy files, combine and minify, check code quality, run tests etc.
  • Yeoman: an application scaffolding tool. It puts all the tools and needed configurations together. It uses generators to blueprint different applications, from classic web to reveal applications to mobile hybrid apps. The site lists hundreds of generators and you are invited to modify them or roll your own
  • GIT: the version control system
The list doesn't include an editor, IDE or version control client, since these choices don't influence the workflow steps or setup. Pick what you like.
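A sample bower.json sketch (the package name and version ranges are illustrative):

{
  "name": "domino-frontend",
  "version": "0.1.0",
  "dependencies": {
    "bootstrap": "~3.3.5",
    "angular": "~1.4.3"
  },
  "ignore": ["node_modules", "bower_components"]
}

A simple bower install then pulls matching versions into bower_components.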
My general directory layout starts with a project folder that has entries for code, documentation, nsf and a misc folder. I only put the code folder into GIT version control.
Separation of concerns for front-end and back-end development
The front-end developer doesn't need any of the Domino components to create user interface and interaction models. The back-end developer exclusively uses the REST controls from the Domino XPages Extension Library (which is part of a default current Domino server install). The two directory structures are linked in a GIT project, but that isn't mandatory. One step that made my life much easier: take the dist directory and link it to WebContent. This saves me the step of copying things around. The fine-tuning of the setup is where the fun starts.
Read More


There's a plug-in for that! Getting started with Cloud Foundry plug-ins

Since IBM Bluemix is built on top of Cloud Foundry, all the know-how and tooling available for the latter can be used in Bluemix too.
One of the tools I'm fond of is the Cloud Foundry command line cf. The tool is a thin veneer over the Cloud Foundry REST API and is written in Go. "Thin veneer" is a slight understatement, since the cf command line is powerful, convenient and - icing on the cake - extensible.
A list of current plug-ins can be found in the CF plug-in directory. The installation instructions haven't kept up with the cf releases, so here are the steps you need:
  1. Head to the CF Command Line release page and make sure you have the latest release installed.
    At the time of this writing that would be 6.12.2
  2. Add the community repository to your installation using this command:
    cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org/
    This command is only available in cf versions > 6.10. A lot of blog entries and even the GitHub documentation suggest downloads or even golang installs. With the availability of the repository these steps are no longer necessary (but you are free to use them)
  3. Now listing all available plug-ins is as simple as cf repo-plugins
  4. To install a specific plug-in you issue the command:
    cf install-plugin '[name of plugin as listed]' -r CF-Community
    If the name doesn't contain white space, the quotes can be omitted
  5. After installation cf help provides the short instructions on how to use the modules
The interesting question now is: which plug-ins are worth looking at?
Read More


Let there be a light - Angular, nodeRED and Websockets

NodeRED has conquered a place in my permanent toolbox. I run an instance on Bluemix, on my local machine and on a Raspberry Pi. I built a little demo where a light connected to a Particle lights up based on an event reaching a NodeRED instance. However, I don't carry my IoT gear everywhere (got lots of funny looks at airport security for it), but I still want to demo the app. The NodeRED side is easy: I just added a websocket output node and the server side was ready to roll.
Web socket in NodeRED
On the browser side I decided to use angular.js and one of its web socket libraries, ng-websocket. The application code is just about 50 lines, so here it goes:
'use strict';

var websocketEndpoint = 'wss://'+window.location.hostname+'/ws/bulb';

console.log('Application loading ...');
// Declare app level module which depends on views, and components
var myApp = angular.module('myApp', ['ngWebsocket','ngRoute']);

myApp.config(['$routeProvider', function($routeProvider) {
    console.log('Routes loading... ');
    $routeProvider.when('/bulbon', {
        templateUrl: 'bulbs/bulb-on.html'
    }).when('/bulboff', {
        templateUrl: 'bulbs/bulb-off.html'
    }).when('/bulbunknown', {
        templateUrl: 'bulbs/bulb-unknown.html'
    }).otherwise({redirectTo: '/bulbunknown'});
}]);

myApp.run(function ($websocket, $location) {
    var ws = $websocket.$new({
        url: websocketEndpoint,
        reconnect: true
    }); // instance of ngWebsocket, handled by $websocket service

    ws.$on('$open', function () {
        console.log('Websocket connection open');
    });

    ws.$on('$message', function (data) {
        console.log('data arrived');
        var newlocation = '#/bulbunknown';
        if (data.bulb === 1) {
            newlocation = '#/bulbon';
        } else if (data.bulb === 0) {
            newlocation = '#/bulboff';
        }
        // switch the route to match the reported bulb state
        window.location = newlocation;
    });

    ws.$on('$close', function () {
        console.log('Websocket connection closed');
    });
});
The HTML is simple. I split it into the main file and 3 status files. One could easily put the statuses into a script template section or inside the app.
Read More


Identity in the age of cloud

Who is using your application and what they can do has evolved over the years. With cloud computing and the distribution of identity sources, the next iteration of this evolution is pending. A little look into the history:
  • In the beginning was the user table (or flat file) with a column for user name and (initially unencrypted) password. The world was good - of course only until you had enough applications and users complained bitterly
  • In the next iteration, access to the network or intranet was moved to a central directory. A lot of authentication became: if (s)he can get to the server, we pick up the user name
  • Other applications started to use that central directory to look up users, either via LDAP or proprietary protocols. So while the password was the same, users entered it repeatedly
  • Along came a flood of single sign-on frameworks that mediated identity information between applications, based on a central directory. Some used an adapter approach (like Tivoli Access Manager), others a standardised protocol like LTPA or SPNEGO
All of these have a few concepts in common:
  • All solutions are based on a carefully guarded and maintained central directory (or more than one). These directories are under full corporate control and usually defended against any attempt to alter or extend their functionality. (Some of them can't roll back schema changes and are built on fragile storage)
  • Besides user identity they provide groups and sometimes roles. Often these roles are limited to directory capabilities (e.g. admin, printer-user) and applications are left to their own devices to map users/groups to roles (e.g. the ACL in IBM Notes)
  • The core directory is limited to company employees and is often synchronised with an HR directory. Strategies for the inclusion of customers, partners and suppliers tend to be point solutions
In a cloud world this needs to be re-evaluated. The first stumbling block is multiple directories that are no longer under corporate control. Customers have come to expect functionality like "Login with Facebook", and security experts are fond of two-factor authentication - something the LDAP protocol has no provision for. So a modern picture looks more like this:
Who and What come from different sources

Core tenets of Identity

  • An Identity Provider (IdP) is responsible for establishing identity. The current standard for that is SAML, but that's not the only way. The permission delegation known as OAuth can, to the confusion of many architects, be used as an authentication substitute (I allow you to read my name and email from my LinkedIn profile via OAuth, and you then take those values as my identity). In any case, the cloud applications shouldn't care; they just ask the respective service "Who is this person?". Since the sources can be very different, that's all the SSO will (and shall) provide
  • The next level is basic access control. I would place that responsibility on the router, but depending on cloud capability each application needs to check itself. Depending on the security need, an application might show a home page with information on how to request access, or deflect to an error (possibly hidden behind a 404). In a micro-service world an application access service could provide this information. A web UI could also use that service to render a list of available applications for the current user
  • It is getting more interesting now. In classical applications any action (including reading and rendering data) is linked to a user role or permission. These permissions live in code. In a cloud micro-service architecture the right to act is better delegated to a rule service. This allows for more flexibility and user control. To obtain a permission, a context object is handed to the rule service (XML or JSON) that at its bare minimum contains just the user identity. In more complex cases it could include value, project name, decision time etc. The rule engine then approves or denies the action (see the sketch after this list)
  • A secondary rule service can go and look up process information (e.g. who is the approver for a 5M loan to ACME Inc). Extracting the rules from code into a rule engine makes them more flexible and business-accessible (you want to be good at caching)
  • The rule engine and other services use a profile service to look up user properties. This is where group memberships and roles live.
    One could succumb to the temptation to let that service depend on an LDAP corporate directory, only to realise that the guardians will lock it down (again). The better approach: the profile service synchronises properties that are maintained in other systems and presents the union of them to requesting applications
  • So a key difference between the IdP and the profile service: the IdP has one task and one only, which it performs through lookups. The profile service provides profile information (in different combinations) that it obtained through direct entry or synchronisation
  • The diagram is a high-level one; the rule engine and profile service might themselves be composed of multiple micro services. That is beyond this article
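To make the rule service idea tangible, here is a hedged sketch of such a context object (all property names and the endpoint are illustrative, not a defined API):

// the context carries the user identity plus whatever the decision needs
var permissionRequest = {
    action: 'approve',
    resource: 'loan/4711',
    context: {
        user: 'jane.doe@acme.com',      // the bare minimum: who is asking
        value: 5000000,                 // decision-relevant facts
        project: 'ACME expansion',
        requestedAt: new Date().toISOString()
    }
};
// POSTed to a (hypothetical) rule service, which answers with allow or deny:
// fetch('/rules/evaluate', { method: 'POST', body: JSON.stringify(permissionRequest) })
//     .then(function (res) { return res.json(); }); // -> { decision: 'allow' }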
Another way to look at it is the distinction between Who & What
The who and what of identity
As usual YMMV


Adding Notes data to WebSphere Content Manager via RSS

WebSphere Content Manager (WCM) can aggregate information from various sources in many formats. A quick and dirty way to add Domino data is to use RSS. The simplest way is to add a page (not an XPage, a classic Page), define its content type as application/rss+xml and add a few lines to it:
<rss version="2.0">
	<channel>
		<title><Computed Value></title>
		<link><Computed Value>feed.xml</link>
		<description>Extraction of data from the Audience Governance database</description>
		<lastBuildDate><Computed Value></lastBuildDate>
		[Embedded view here]
	</channel>
</rss>
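Each row of the embedded view contributes one item element, roughly like this sketch (the placeholders are illustrative):

	<item>
		<title>[Column value: subject]</title>
		<link>[Column value: document URL]</link>
		<pubDate>[Column value: created date]</pubDate>
	</item>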

Thereafter create a view with passthrough HTML that renders all the values for an item element. Of course that is super boring, therefore you can use the following code to speed things up.
Read More


Midget Imaginaire - Scheinzwerg

The imaginary midget

This is an attempt to transpose a concept deeply rooted in the German cultural context into another language. Bear with me.
Over at Omnisophie, Professor Dueck has a column titled "Scheinzwerge" (loosely translated: imaginary dwarfs|midgets|gnomes), dealing, amongst other things, with the Greek crisis.
It draws heavily on a very German childhood classic, "Jim Button and Luke the Engine Driver". In this famous children's book and string puppet play we meet Mr. Tur Tur, who is an imaginary giant (Scheinriese). From a distance Mr. Tur Tur looks like a huge giant, but the closer you come, the more normal he appears, until you are close enough to see that he's a normal person.
Now the "Scheinzwerg" in Dueck's article is just the opposite: the further away you are, the smaller it appears. Once you are close, you see the real dimension, which tends to be way bigger than estimated, imagined or even feared.
In real life that doesn't refer to people but rather to tasks, problems or missions.
We are all familiar with "Scheinriesen": that impossibly huge-looking task (learning to swim, to cycle, to play an instrument, or asking for permission) that shrank once we got close.
The other type is just as common, but hidden in plain sight. So I shall name it "Midget Imaginaire", short MI - which is the accepted abbreviation for what it turns into when you get close enough: "Mission Impossible". Now if you happen to be Ethan Hunt, all is good. For the rest of us, some samples:
  • We will grow double digits, faster than the market
  • Just change the application architecture the week before go-live
  • The [insert-crisis] can be easily solved by [insert 140 characters or less]
  • There are just 5 little changes; the deadline must not be moved
  • Become world champion, we know how: run 100 m in 8 sec
  • Hire 9 women to give birth to one child in a month
The MIs are the single biggest source of eternal tension between management (corporate and political) and executing experts (anyone: "I don't want to hear problems, I want solutions, you have 10 minutes").
Since management is (necessarily?) distant from operations (the big picture needs a vantage point to be seen), a lot of MIs appear really tiny (let's just hire the right talent, never mind that pay, reputation and the market don't make it available to us), and stuttering in execution is interpreted as incompetence or defiance.
In return, the "Gods from Olympus" are seen as living in heavenly spheres (also known as the management reality distortion field).

The solution is simple (I hope you can see the irony in this statement): we need to add "watching out for Midgets Imaginaire" to our professional code of conduct.

Read More


Validating JSON objects

One of the nice tools for rapid application development in Bluemix is Node-RED, which escaped from IBM Research. One passes a msg JSON object between nodes that process (mostly) the msg.payload property. A feature I like a lot is the http input node, which can listen to a POST on a URL and automatically translates the posted form into a JSON object.
The conversion runs indiscriminately, so any field that is added to the form will end up in the JSON object.
In a real-world application that's not a good idea; an object shouldn't have unexpected properties. I had asked about this before, so it wasn't too hard to derive a function I could use in Node-RED:
Cleaning up an incoming object's properties
this.deepclean = function(template, candidate, hasBeenCleaned) {
	// 'node' below refers to the Node-RED node instance this function is attached to
	var cleandit = false;
	for (var prop in candidate) {
		if (template.hasOwnProperty(prop)) {
			// We need to check strict clean and recursion
			var tProp = template[prop];
			var cProp = candidate[prop];
			// Case 1: strict checking and types are different
			if (this.strictclean && ((typeof tProp) !== (typeof cProp))) {
				delete candidate[prop];
				cleandit = true;
			// Case 2: both are objects - recursion needed
			} else if (((typeof tProp) === "object") && ((typeof cProp) === "object")) {
				cleandit = node.deepclean(tProp, cProp, (hasBeenCleaned || cleandit));
				candidate[prop] = cProp;
			}
		// Case 3: property is not there
		} else {
			delete candidate[prop];
			cleandit = true;
		}
	}
	return (hasBeenCleaned || cleandit);
};
The function is called with the template object, the incoming object and false as the initial parameter. While the function could easily be used inside a function node, the better option is to wrap it into a node of its own, so it is easy to use anywhere. The details of how to do that can be found on the Node-RED website. The easiest way to try the function: add your Node-RED project to version control, download the object cleaner node and unzip it into the nodes directory. It works in Bluemix and in a local Node-RED installation.
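For illustration, a hedged usage sketch (it presumes the deepclean function above is in scope on this, with node pointing at the same object, as it does inside the node implementation; the template and payload are made up):

this.strictclean = true;                      // also drop values of the wrong type
var template  = { name: '', email: '', age: 0 };
var candidate = { name: 'Jane', email: 'jane@example.com', age: 42, isAdmin: true };

var changed = this.deepclean(template, candidate, false);
// changed === true, and the injected isAdmin property has been deleted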

Read More

