About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.

20/05/2015

Your API needs a plan (a.k.a. API Management)

You drank the API Economy Kool-Aid and created some neat HTTPS-addressable calls using Restify or JAX-RS. Digging deeper into the concept of microservices, you realize that an HTTPS-callable endpoint alone doesn't make an API. There are a few more steps involved.
O'Reilly provides a nice summary in the book Building Microservices, so you might want to add that to your reading list. In a nutshell:
  • You need to document your APIs. The most popular tools here seem to be Swagger and WSDL 2.0 (I also like Apiary)
  • You need to manage who is calling your API. The established mechanism is to use API keys. Those need to be issued, managed and monitored
  • You need to manage when your API is called. Depending on the ability of your infrastructure (or your ability to pay for scale out) you need to limit the rate your API is called by second, hour or billing period
  • You need to manage how your API is called. In which sequence, is the call clean, where does it come from
  • You need to manage versions of your API, so innovations and improvements don't break existing code
  • You need to manage the grouping of your endpoints into "packages" like: free API, freemium API, partner API, pro API etc. Since the calls will overlap, building code for the bundles would lead to duplicates
And of course, all of this needs statistics and monitoring. Adding that to your code creates quite some overhead, so I would suggest: use a service for that.
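To see why rolling your own is overhead, here is what even a minimal per-key rate limiter looks like. A token bucket is one common technique; this is purely an illustrative sketch, not how IBM API Management implements it:

```java
/**
 * Minimal token-bucket rate limiter sketch - illustrative only,
 * not how any particular API management product works internally.
 * One instance would be kept per API key.
 */
public class TokenBucket {
    private final long capacity;        // maximum burst size
    private final double refillPerNano; // tokens added per nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, long tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    /** true if the call is allowed, false if the caller is over the limit */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // refill proportionally to elapsed time, capped at the burst size
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```

Multiply that by key issuance, billing periods, statistics and monitoring, and the case for buying rather than building becomes obvious.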
In IBM Bluemix there is the API Management service. This service isn't a new invention, but the existing IBM Cloud API management made available in a consumption-based pricing model.
Your first 5,000 calls are free, as is your first developer account. After that it is less than 6 USD (pricing as of May 2015) per 100,000 calls. This provides a low-investment way to evaluate the power of IBM API Management.
API Management IBM Style
The diagram shows the general structure. Your APIs only need to talk to the IBM cloud, removing the headache of security, packet monitoring etc.
Once you build your API, you then expose it back to Bluemix as a custom service. It will appear like any other service in your catalogue. The purpose of this is to make it simple to use those APIs from Bluemix - you just read your VCAP_SERVICES.
But you are not limited to using these APIs from Bluemix. You can call IBM API Management directly (your API partners/customers will like that) from whatever has access to the Intertubes.
There are excellent resources published to get you started. Now that you know why, check out the how. If you're not sure about that whole microservices thing, check out Chris' example code.
As usual YMMV

09/05/2015

The Rise of JavaScript and Docker

I used "JavaScript" loosely in this headline to refer to a set of technologies: node.js, Meteor, Angular.js (or React.js). They share a commonality with Docker that explains their (pun intended) meteoric rise.
Let's take a step back:
JavaScript on the server isn't exactly new. The first server-side JavaScript was implemented in 1998, and the union mount that made Docker possible dates back to 1990. Client-side JavaScript frameworks are plenty too. So what made the mentioned ones so successful?
I make the claim that it is the machine-readable community. This is where these tools differ. node.js is inseparable from its package manager npm. Docker is unimaginable without its registry, and Angular/React (as well as jQuery) live on cushions of myriads of plug-ins and extensions. While the registries/repositories are native to Docker and node.js, the front-ends take advantage of tools like Bower and Yeoman that make all the packages feel native.
These registries aren't read-only, which is a huge point. By providing the means of direct contribution and/or branching on GitHub, the process of contribution and consumption became two-way. The mere possibility to "give back" created a stronger sense of belonging (even if that sense might not be fully conscious).
The machine-readable community is a natural evolution born out of the open source spirit. For decades developers have collaborated using chat (IRC anyone?), discussion boards, Q&A sites and code sharing places. With the emergence of Git and GitHub as the de facto standard for code sharing, the community was ready.
The direct access from scripts and configurations to the source repository replaced the flow of "human vetting, human download, human unpack and copy to the right location" with "specify what you need and the machine will know where to get it". Even this idea wasn't new: in the Java world, the Maven plug-in has provided that functionality since 2002.
The big difference now: Maven wasn't native to Java, and it required a change of habit - things are done differently with it than without. npm, on the other hand, is "how you do things in node.js". Configuring a Docker container is done using the registry (and you have to put in extra effort if you want to avoid that).
So all the new tooling uses repositories as "this is how it works" and complements the human-readable community with a machine-readable one. Of course, there is technical merit too - but that has been discussed elsewhere at great length.

12/04/2015

Cloud with a chance of TAR balls (or: what is your exit strategy)

Cloud computing is here to stay, since it does have many benefits. However, even unions made "until death do us part" come with prenuptial agreements these days. So it is prudent for your cloud strategy to contemplate an exit strategy.
Such a strategy depends on the flavour of cloud you have chosen (IaaS, PaaS, SaaS, BaaS) and might require adjusting the way you on-board in the first place. Let me shed some light on the options:

IaaS

Whether you rent virtual machines from a book seller, a complete box from a classic hosting provider, or a mix of bare metal and virtual boxes from IBM, the machine part is easy: can you copy the VM image over the network (SSH, HTTPS, SFTP) to a new location? When you have a bare-metal box, that won't work (there isn't a VM after all), so you need a classic "move everything inside" strategy.
If you drank the Docker Kool-Aid, the task might just be broken down into manageable chunks, thanks to the containers. Be aware: Docker welds you to a choice of host operating systems (and Windows isn't currently on the host list).
There are secondary considerations: how easy is it to switch the value-added services (DNS, CDN, management console etc.) on/off or to another vendor?

PaaS

Here you need to look separately at the runtime and the services you use. Runtimes like Java, JavaScript, Python or PHP tend to be offered by almost all vendors; .NET and C# not so much. When your cloud platform vendor has embraced an open standard, it is most likely that you can deploy your application code elsewhere too, including back into your own data center or onto a bunch of rented IaaS devices.
It gets a little more complicated when you look at the services.
First look at persistence: is your data stored in a vendor-proprietary database? If yes, you probably can export it, but you will need to switch to a different database when switching cloud vendors. This means you need to alter your code and retest (but you do that with CI anyway, right?). So before you jump onto DocumentDB or DynamoDB (which run in a single vendor's PaaS only), you might want to check out MongoDB, CouchDB (and its commercial siblings Cloudant and Couchbase), Redis or OrientDB, which run in multiple vendor environments.
The same applies to SQL databases and blob stores. This is not a recommendation for a specific technology (SQL vs. NoSQL or Vendor A vs. Vendor B), but an aspect you must consider in your cloud strategy.
The next check point are the services you use. Here you have to distinguish between common services, that are offered by multiple cloud vendors: DNS, auto scaling, messaging (MQ and eMail) etc. and services specific to one vendor (like IBM's Watson).
Taking the stand "if a service isn't offered by multiple vendors, we won't use it" can help you avoid lock-in, but it will also stifle your innovation. After all, you use a service not for the sake of the service, but to solve a business problem and to innovate.
The more sensible approach is to check whether you can limit your exposure to that vendor to those special services only, should you decide to move on. This gives you the breathing space to then look for alternatives. Adding a market watch to see how alternatives might evolve improves your hedging.
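Limiting exposure usually means hiding the vendor SDK behind an interface you own. A hypothetical sketch (the interface, the in-memory stand-in and all names are made up for illustration; a CloudantStore or DynamoStore would implement the same contract using the vendor SDK):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/**
 * Hypothetical sketch: hide a vendor-specific document store behind
 * your own interface, so only one class changes when you switch vendors.
 */
public class StoragePortability {

    /** The only contract your business code is allowed to depend on. */
    public interface DocumentStore {
        void put(String id, String json);
        Optional<String> get(String id);
    }

    /** In-memory stand-in for local development and tests. */
    public static class InMemoryStore implements DocumentStore {
        private final Map<String, String> docs = new HashMap<>();

        public void put(String id, String json) {
            docs.put(id, json);
        }

        public Optional<String> get(String id) {
            return Optional.ofNullable(docs.get(id));
        }
    }
}
```

Your business code talks only to DocumentStore; swapping vendors means writing one new implementation, not touching every caller.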
Services are the "Damned if you do, damned if you don't" area of PaaS. All vendors scramble to provide top performance and availability for the common platform and distinction in the services on top of that.
After all, one big plus of the PaaS environment is the services that enable "composable businesses" - and save you the headache of coding them yourself. IMHO the best risk mitigation, and incidentally state of the art, is sound API management a.k.a. microservices.
Once you are there, you will learn that a classic monolithic architecture isn't cloud native (those architectures survive inside virtual machines) - but that's a story for another time.

SaaS

Here you deal with applications like IBM Connections Cloud S1, Google Apps for Work, Microsoft Office 365, Salesforce, SAP SaaS, but also Slack, Basecamp, GitHub and gazillions more.
Some of them (e.g. eMail or documents) have open-standard or industry-dominating formats. Here you need to make sure you get the data out in that format. I like the way Google approaches this task: Google Takeout tries to stick to standard formats and offers all data, any time, for export.
Others have at least machine-readable formats like CSV, JSON or XML. The nice challenge: getting data out is only half the task. Is your new destination capable of taking it back in?

BaaS

In business process as a service (BaaS) the same considerations as in the SaaS environment come into play: can I export data in a machine-readable, preferably industry-standard format? E.g. you used a payroll service and want to bring it back in-house or move to a different service provider. You need to make sure your master data can be exported and that you have the reports for historical records. When covered in reports, you might get away without transactional data. Typical formats are CSV, JSON and XML.

As you can see: not rocket science, but a lot to consider. For all options the same questions apply: do you have what it takes to move? Is there enough bandwidth (physical and mental) to pull it off? So don't get carried away with the wedding preparations and check your prenuptials.

12/04/2015

email Dashboard for the rest of us - Part 2

In Part 1 I introduced a potential set of Java interfaces for the dashboard. In this installment I'll have a look at how to extract this data from a mail database. There are several considerations to take into account:
  • The source needs to supply data only from a defined range of dates - I will use 14 days as an example
  • The type of entries needed are:
    • eMails
    • replies
    • Calendar entries
    • Followups I'm waiting for
    • Followups I need to action
  • Data needs to be available in detail and in summary (counts)
  • People involved come as Notes addresses, groups and internet addresses; all of them need to be dealt with
Since I have more than a hammer, I can split the data retrieval across different tooling. Dealing with names vs. groups is something best done with LDAP code or lookups into an address book, so I leave that to Java later on. Also, running a counter when reading individual entries works quite well in Java.
Everything else, short of the icons for the people, can be supplied by a classic Notes view (your knowledge of formula language finally pays off).
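The counting part mentioned above is straightforward once the view delivers individual entries. A sketch of that summary step - the Entry type and the enum values are hypothetical stand-ins for what a Notes view would supply:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of the summary-count step: once the view has
 * delivered individual entries, Java tallies them per type within
 * the 14-day window.
 */
public class MailSummary {

    public enum EntryType { MAIL, REPLY, CALENDAR, WAITING_FOR, ACTION_NEEDED }

    /** Stand-in for one entry read from the mail database view. */
    public static class Entry {
        public final EntryType type;
        public final LocalDate date;

        public Entry(EntryType type, LocalDate date) {
            this.type = type;
            this.date = date;
        }
    }

    /** Count entries per type, keeping only those inside the date window. */
    public static Map<EntryType, Integer> summarize(List<Entry> entries,
            LocalDate from, LocalDate to) {
        Map<EntryType, Integer> counts = new EnumMap<>(EntryType.class);
        for (Entry e : entries) {
            if (!e.date.isBefore(from) && !e.date.isAfter(to)) {
                counts.merge(e.type, 1, Integer::sum);
            }
        }
        return counts;
    }
}
```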
Read More

11/04/2015

email Dashboard for the rest of us - Part 1

One of the cool new features of IBM Verse is the Collaboration Dashboard. Unfortunately not all of us can switch to Verse overnight, so I asked myself: can I have a dashboard in the trusted old Notes 9.0 client?
Building a dashboard requires 3 areas to be covered:
  1. What data to show
  2. Where does the data come from
  3. How should the data be visualised, including actionable options (that's where preferences between users will differ strongly)
For a collaboration dashboard I see 3 types of data: collaborators (who), summary data (e.g. number of unread eMails) and detail data (e.g. the next meeting). Eventually there could be a 4th type: collections of summary data (e.g. number of emails by category). In a first iteration I would like to see:
  • Number of unread eMails
  • Number of meetings left today
  • Number of waiting for actions
  • Number of action items
  • List of top collaborators
  • List of today's upcoming meetings
  • List of top waiting for actions
  • List of top action items
I'm sure more numbers and lists will come up when thinking about it, but that's a story for another time.
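The three data types could map onto Java interfaces roughly like this - my sketch of the idea, not the actual interfaces from the series (those are behind the Read More link):

```java
import java.util.List;

/**
 * Hypothetical interfaces for the dashboard data - a sketch of the idea,
 * not the actual interfaces introduced in the article series.
 */
public class DashboardSketch {

    /** Summary data, e.g. "Unread eMails: 12". */
    public interface SummaryData {
        String getLabel();
        int getCount();
    }

    /** Detail data, e.g. the next meeting. */
    public interface DetailData {
        String getTitle();
        String getWhen();
    }

    /** A collaborator (the "who" dimension). */
    public interface Collaborator {
        String getName();
        String getEmail();
    }

    /** The dashboard pulls all three together. */
    public interface CollaborationDashboard {
        List<SummaryData> getSummaries();
        List<DetailData> getDetails(String category);
        List<Collaborator> getTopCollaborators(int max);
    }
}
```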
Read More

05/03/2015

XPages XML Document DataSource - Take 2

For a recent project I revisited the idea of storing XML documents as MIME entries in Notes - while preserving some of the fields for use in views and the Notes client. Jesse suggested I have a look at annotations. Turns out it is easier than it sounds. To create an annotation that works at runtime, I need only a one-liner:
@Retention(RetentionPolicy.RUNTIME) public @interface ItemPathMappings { String[] value(); }
To further improve usefulness, I created a "BaseConfiguration" my classes will inherit from; it contains the common properties I want all my classes (and documents) to have. You might want to adjust it to your needs:
package com.notessensei.domino;
import java.io.Serializable;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
/**
 * Common methods implemented by all classes to be Domino-serialized
 */
@XmlRootElement(name = "BaseConfiguration")
@XmlAccessorType(XmlAccessType.NONE)
public abstract class BaseConfiguration implements Serializable, Comparable<BaseConfiguration> {
    private static final long serialVersionUID = 1L;
    @XmlAttribute(name = "name")
    protected String          name;

    public int compareTo(BaseConfiguration bc) {
        return this.toString().compareTo(bc.toString());
    }
    public String getName() {
        return this.name;
    }
    public BaseConfiguration setName(String name) {
        this.name = name;
        return this;
    }
    @Override
    public String toString() {
        return Serializer.toJSONString(this);
    }
    public String toXml() {
        return Serializer.toXMLString(this);
    }
}

The next building block is my Serializer support class with a couple of static methods that make dealing with XML and JSON easier.
package com.notessensei.domino;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

/**
 * Helper class to serialize / deserialize from/to JSON and XML
 */
public class Serializer {

    public static String toJSONString(Object o) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            Serializer.saveJSON(o, out);
        } catch (IOException e) {
            return e.getMessage();
        }
        return out.toString();
    }

    public static String toXMLString(Object o) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            Serializer.saveXML(o, out);
        } catch (Exception e) {
            return e.getMessage();
        }
        return out.toString();
    }

    public static void saveJSON(Object o, OutputStream out) throws IOException {
        GsonBuilder gb = new GsonBuilder();
        gb.setPrettyPrinting();
        gb.disableHtmlEscaping();
        Gson gson = gb.create();
        PrintWriter writer = new PrintWriter(out);
        gson.toJson(o, writer);
        writer.flush();
        writer.close();
    }

    public static void saveXML(Object o, OutputStream out) throws Exception {
        JAXBContext context = JAXBContext.newInstance(o.getClass());
        Marshaller m = context.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(o, out);
    }

    public static org.w3c.dom.Document getDocument(Object source) throws ParserConfigurationException, JAXBException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        org.w3c.dom.Document doc = db.newDocument();
        JAXBContext context = JAXBContext.newInstance(source.getClass());
        Marshaller m = context.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(source, doc);
        return doc;
    }

    @SuppressWarnings("rawtypes")
    public static Object fromByte(byte[] source, Class targetClass) throws JAXBException {
        ByteArrayInputStream in = new ByteArrayInputStream(source);
        JAXBContext context = JAXBContext.newInstance(targetClass);
        Unmarshaller um = context.createUnmarshaller();
        return targetClass.cast(um.unmarshal(in));
    }
}

The key piece for the XML serialization/deserialization to work is the abstract class AbstractXmlDocument. That class contains the load and save methods that interact with Domino's MIME capabilities, as well as executing the XPath expressions that populate the Notes fields. Implementations of this abstract class carry annotations that combine the Notes field name, the type and the XPath expression. An implementation would look like this:
package com.notessensei.domino.xmldocument;
import javax.xml.bind.JAXBException;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.Session;
import com.notessensei.domino.ApplicationConfiguration;
import com.notessensei.domino.Serializer;
import com.notessensei.domino.xmldocument.AbstractXmlDocument.ItemPathMappings;

// The ItemPathMappings are application specific!
@ItemPathMappings({ "Subject|Text|/Application/@name",
					"Description|Text|/Application/description",
					"Unid|Text|/Application/@unid",
					"Audience|Text|/Application/Audiences/Audience",
					"NumberOfViews|Number|count(/Application/Views/View)",
					"NumberOfForms|Number|count(/Application/Forms/Form)",
					"NumberOfColumns|Number|count(/Application/Views/View/columns/column)",
					"NumberOfFields|Number|count(/Application/Forms/Form/fields/field)",
					"NumberOfActions|Number|count(//action)" })
public class ApplicationXmlDocument extends AbstractXmlDocument {

    public ApplicationXmlDocument(String formName) {
        super(formName);
    }

    @SuppressWarnings("unchecked")
    @Override
    public ApplicationConfiguration load(Session session, Document d) {

        ApplicationConfiguration result = null;
        try {
            result = (ApplicationConfiguration) Serializer.fromByte(this.loadFromMime(session, d), ApplicationConfiguration.class);
        } catch (JAXBException e) {
            e.printStackTrace();
        }
        try {
            result.setUnid(d.getUniversalID());
        } catch (NotesException e) {
            // No Action Taken
        }
        return result;
    }

    @SuppressWarnings("unchecked")
    @Override
    public ApplicationConfiguration load(Session session, Database db, String unid) {
        Document doc;
        try {
            doc = db.getDocumentByUNID(unid);
            if (doc != null) {
                ApplicationConfiguration result = this.load(session, doc);
                doc.recycle();
                return result;
            }

        } catch (NotesException e) {
            e.printStackTrace();
        }

        return null;
    }
}
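Since AbstractXmlDocument itself is behind the Read More link, here is a guess at how it might read those "FieldName|Type|XPath" triplets back at runtime. The annotation follows the article's one-liner; the Mapping class and the parser are my own illustrative additions:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of how the mapping triplets might be read back at runtime.
 * The annotation follows the article; the parser is my guess at the
 * mechanics, not the actual AbstractXmlDocument code.
 */
public class MappingReader {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface ItemPathMappings { String[] value(); }

    /** One "FieldName|Type|XPath" entry, split into its parts. */
    public static class Mapping {
        public final String itemName;
        public final String itemType;
        public final String xpath;

        Mapping(String itemName, String itemType, String xpath) {
            this.itemName = itemName;
            this.itemType = itemType;
            this.xpath = xpath;
        }
    }

    /** Read the annotation from a class and split each triplet. */
    public static List<Mapping> mappingsFor(Class<?> clazz) {
        List<Mapping> result = new ArrayList<>();
        ItemPathMappings ann = clazz.getAnnotation(ItemPathMappings.class);
        if (ann != null) {
            for (String raw : ann.value()) {
                String[] parts = raw.split("\\|", 3);
                result.add(new Mapping(parts[0], parts[1], parts[2]));
            }
        }
        return result;
    }

    /** Demo class carrying one mapping, mirroring the article's example. */
    @ItemPathMappings({ "Subject|Text|/Application/@name" })
    public static class Demo {}
}
```

The save method would then iterate over these mappings, evaluate each XPath against the XML document and write the result into the named Notes item.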


Read More

03/03/2015

Develop local, deploy (cloud) global - Java and CouchDB

Leaving the cosy world of Domino Designer behind and venturing into IBM Bluemix, Java and Cloudant, I'm challenged with a new set of tasks to master. Spoiled by Notes, where Ctrl+O gives you instant access to any application, regardless of whether it is stored locally or on a server, I struggled a little with my usual practice of

develop local, deploy (Bluemix) global

The task at hand is to develop a Java Liberty based application that uses CouchDB/Cloudant as its NoSQL data store. I want to be able to develop/test the application while being completely offline and then deploy it to Bluemix. I don't want any code to contain offline/online conditions, but rather use the configuration of the runtimes for it.
Luckily I have access to really smart developers (thx Sai), so I succeeded.
This is what I found out I needed to do. The list serves as a reference for myself and others living in a latency/bandwidth challenged environment.
  1. Read: There are a number of articles around, that contain bits and pieces of the information required. In no specific order:
  2. Install: This is a big jump forward. No more looking for older versions, but rather bleeding edge. Tools of the trade:
    • Git. When you are on Windows or Mac, try the nice GUI of SourceTree, and don't forget to learn git-flow (best explained here)
    • A current version of the Eclipse IDE (Luna at the time of writing, the Java edition suffices)
    • The Liberty profile beta. The beta is necessary since it contains some of the features, notably couchdb, which are available in Bluemix by default. Use the option to drag the link onto your running Eclipse client
    • Maven - the Java way to resolve dependencies (guess where bower and npm got their ideas from)
    • CURL (that's my little command line ninja stuff, you can get away without it)
    • Apache CouchDB
  3. Configure: Java loves indirection. So there are a few moving parts as well (details below)
    • The Cloudant service in Bluemix
    • The JNDI name in the web.xml. Bluemix will discover the Cloudant service and create the matching entries in the server.xml automagically
    • A local profile for a server running the Liberty 9.0 profile
    • The configuration for the local CouchDB in the local server.xml
    • Replication between your local CouchDB instance and the Cloudant server database (if you want to keep the data in sync)
The flow of the data access looks like this
Develop local, deploy global
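The "no conditions in code" principle can be boiled down to one idea: the code asks its environment for the database location and falls back to the local instance. The real setup described above uses server.xml and JNDI for this; the helper below is a deliberately simplified stand-in (the CLOUDANT_URL variable name is made up for illustration):

```java
import java.util.Map;

/**
 * Simplified sketch of "configuration decides, not code": the database
 * URL comes from the environment; locally the fallback wins. The real
 * setup in the article uses server.xml / JNDI, not this helper.
 */
public class CouchConfig {

    static final String LOCAL_COUCH = "http://localhost:5984";

    /** Pick the CouchDB/Cloudant URL from the environment, if present. */
    public static String couchUrl(Map<String, String> env) {
        // CLOUDANT_URL is a hypothetical variable name for this sketch
        String fromCloud = env.get("CLOUDANT_URL");
        return (fromCloud != null && !fromCloud.isEmpty()) ? fromCloud : LOCAL_COUCH;
    }
}
```

The application code never checks "am I on Bluemix?"; it simply uses whatever URL the configuration hands it.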
Read More

10/01/2015

Double-O Bike Light Review

I backed the Double-O bike light Kickstarter project and have been using the lights for a while now. This is my verdict:
TL;DR: Highly functional light with clever features, some teething problems.
If you like to ride through the unlit woods at night, the Double-O isn't for you - it is a commuter light with a rather clever design.
DoubleO front and back open

The Good

  • It is huge. With the LEDs arranged in a circle you get a big patch of light, a much bigger surface than the bike lights you commonly find in the market. That alone improves visibility quite a bit.
  • Two of the 3 light modes are what you would expect: blinking and steady. The third one I haven't seen elsewhere: the odd- and even-numbered LEDs blink alternately. This is quite clever. Someone looking at it (that is: the other traffic participants) sees something moving, creating similar attention as blinking, but for the rider it is a steady light, since the same number of LEDs is on at any given time. That's especially useful for the front light, lighting the path (to some extent) in front of you.
  • The case is sturdy, and the threaded cover that you screw open and closed (to change batteries) reliably keeps moisture out (trust me, my environment has plenty of that).
  • The rubber-band mechanism and the rubberised back make fixing the light to a handlebar or a seat stem very fast and reliable. My package even included a set of spare rubbers.
  • Locking the lights with your bike lock works as advertised (but someone could steal the batteries if they figure out what is hanging there).
  • The low-power LEDs make the batteries last quite long.

The Bad

The original specifications proposed USB-chargeable fixed batteries and a magnetic mount for the light. The rechargeable option was abandoned in favour of longer-lasting standard batteries (with rechargeable batteries in the Kickstarter delivery). While I generally understand the design decision for a general market offering, I would have found USB charging suiting my personal style (I'm used to having a zoo of devices to charge after my rides: the Garmin, the Bluetooth entertainment, the phone and the helmet).
The rubber band fixing the light to the bike works and is more efficient to produce, but the magnet solution has a way bigger cool factor. Also: when there's no pipe (read: saddle stem) available, or you want to fix it to your pannier rack, the rubber isn't the best fit. Eventually Double-O might release the fixture design files, so I can print one for my purpose.

The Ugly

The battery holding mechanism (see the picture above) is flimsy. For a hipster ride that might be sufficient, but in the somewhat rugged environment where I ride (kerbstone jumps, potholes, very uneven surfaces, the occasional trail) the vibrations make the batteries move, in my case even to the extent of bending the electrical contact latches. The batteries lose contact and the light goes off (or won't switch on).
I haven't found a solution yet, but I'm contemplating a rubber ring around battery and latch, or holding the pieces in place with a little dense sponge rubber.

10/01/2015

Now that the client is gone - what to do with all those notes:// links?

In your mind you drank the Kool-Aid and got onto IBM Verse, or went to the dark side. You web-enabled all of your applications (smartly) and you are ready to turn off the Notes clients.
But then you realize: there are notes:// URLs (or their file equivalents, NDL files) and doclinks spread all over your organisation: in eMails, in office documents, in applications, in personal information vaults.
So you wake up and go back to the drawing board.

What is the issue at hand?

Lotus Notes was one of the first applications (as I recall, the first commercial one) that made large-scale use of hyperlinks (yep, a hyperlink doesn't have to start with http).
Creating, storing and exchanging links to information was one of the key success components of groupware and, later, the world wide web (which Notes predates). A hyperlink is unidirectional, so the link target has no knowledge of where the hyperlinks pointing to it are located. Inspecting a target therefore gives no indication of a potential source.
When you move a target, e.g. change or decommission the server, you create link rot, the digital equivalent of Alzheimer's disease.
Removing the Notes client adds a second problem: each hyperlink not only has a target, but also a protocol - the part in front of the colon.
Most of the time you think you deal only with http or https, but there are many more: mailto (for eMail), ftp (for file transfer), cifs (for file servers), ssh (for secure shell connections), call (for VoiP), chat (for chat), sap (to call the SAP client).
Each protocol has a handler application. When you don't have a handler application installed for a specific protocol, your operating system will throw an error when you try to use it.
Like a demo? Try it for yourself!
So what does it take to replace a protocol handler application (in our case the Notes client)?

Solution

You need to do 2 things:
  1. In your web enablement / data migration project, capture all source and target mappings. For some stuff simple formulas (regex) might do, but in most cases a key-value store is required. Playing with different web technologies on wissel.net and notessensei.com, while rendering the same content, I tested the hypothesis and found key-value stores suitable.
    In a nutshell: links need to live forever, come sing along
  2. Create a replacement for the notes:// URL handler (and the NDL file type), so any link starting with notes:// will be looked up in your key-value store and rendered by the appropriate application.
    The little special challenge here: when you web-enable applications over a period of time, you still might have a Notes client, but that specific URL is gone. When you keep your NSF and just add the web UI, this won't be an issue. But it is a problem you will face when switching platforms.
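The core of such a handler replacement is a plain lookup: old link in, new web location out. A minimal sketch - the store is an in-memory map here (a real deployment would back it with a persistent key-value store), and all URLs are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/**
 * Sketch of the lookup a notes:// handler replacement would perform:
 * resolve an old notes:// link to its migrated web location.
 * In-memory map as a stand-in for a persistent key-value store.
 */
public class LinkRedirector {

    private final Map<String, String> store = new HashMap<>();

    /** Register a migrated link: old notes:// URL, new web URL. */
    public void addMapping(String notesUrl, String webUrl) {
        store.put(normalize(notesUrl), webUrl);
    }

    /** Resolve an old notes:// link to its web replacement, if known. */
    public Optional<String> resolve(String notesUrl) {
        return Optional.ofNullable(store.get(normalize(notesUrl)));
    }

    /** Tolerate case differences and trailing slashes in incoming links. */
    private String normalize(String url) {
        return url.toLowerCase().replaceAll("/+$", "");
    }
}
```

Registered as the handler for the notes:// protocol (and the NDL file type), this lookup would open the resolved URL in the browser; unknown links would fall through to a "this document has moved or was retired" page.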

Denial

Alternatively you can claim ignorance is bliss and succumb to digital dementia. The usual arguments not to act:
  • the information is old, nobody needs it
  • we have great search in the new application, so users will find it
  • it is a waste of money and time
  • We have it archived, so what's the problem
  • We keep one machine with a client around, so users can look it up
If that's the prevalent line of thought in the organisation, welcome to binary lobotomy. If not, read on.
Read More

09/01/2015

View driven accordion

The Extension Library features a Dojo accordion control. In the sample application the panels and their content are created statically using xe:basicContainerNode and xe:basicLeafNode. A customer asked: "Can I populate the accordion using view entries?"
Working my way backwards, I first designed the format of the SSJS structure I need for the menu, and then how to generate this format from a categorized view. The format that suited this need looks like this:
[
   { name: "Level 1", items: ["red 1", "blue 1", "green 1"] },
   { name: "Level 2", items: ["red 2", "blue 2"] },
   { name: "Level 3", items: ["red 3", "green 3"] }
]

In a live application, the items would rather be objects with a label and an action (items: [{"label": "red 2", "action": "paintred"}, {"label": "blue 2", "action": "playBlues"}]) or a URL, but for the demo a string is sufficient. I created an accordion with a xe:repeatTreeNode containing a xe:basicContainerNode as its sole child element (which would repeat).
This child then contains one xe:repeatTreeNode that again contains one xe:basicContainerNode as its sole child (which again would repeat).
In theory that all looks brilliant, except that the inner repeatTreeNode does not get access to the repeat variable from the parent basicContainerNode.
This has been acknowledged as a bug and is tracked as SPR # MKEE9SKENK
(I am, inside IBM but outside the development team, the undisputed XPages SPR champion).
So a little hack was necessary to make that menu work.
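Generating the menu format from a categorized view boils down to a grouping step. A plain-Java sketch of that step (category/label string pairs stand in for actual view entries; the real code runs in SSJS against a Notes view):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of building the menu structure from categorized
 * (category, label) pairs - a stand-in for walking a categorized
 * Notes view, where the category becomes the panel name.
 */
public class MenuBuilder {

    /** Group labels under their category, preserving view order. */
    public static Map<String, List<String>> buildMenu(List<String[]> entries) {
        Map<String, List<String>> menu = new LinkedHashMap<>();
        for (String[] e : entries) {
            // e[0] = category (panel name), e[1] = item label
            menu.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
        }
        return menu;
    }
}
```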
Read More

Disclaimer

This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2015 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.