
17/09/2014

Tracking down slow internet on SingTel Fibre to the home

SingTel makes big claims about the beauty of their fibre offering, but I don't experience the claimed benefits. So I'm starting to track down what is happening. Interestingly, when you visit SpeedTest, it shows fantastic results. I smell a rat.
So I ran a test with Pocketinet in Walla Walla, WA. SpeedTest claims a 5ms ping response, but when I issue a ping -c5 www.pocketinet.com immediately before or after such a test, I get results in the range of 200-230ms instead.
Shame on anyone who thinks evil of it!
While this evidence isn't strong enough to accuse anyone of tampering, it points to the need to investigate why the results are so different (IDA, are you listening?). So I started looking a little deeper. Using traceroute with the -I parameter (which uses the same packets as ping), I checked a bunch of websites. Here are the results (I stripped out the boring parts):
traceroute -I www.pocketinet.com
  9   203.208.152.217  3.913ms  3.919ms  4.033ms 
 10   203.208.149.30  204.256ms  203.208.172.226  170.493ms  171.314ms

traceroute -I www.economist.com
  9   203.208.166.57  4.316ms  4.882ms  4.680ms 
 10   203.208.151.246  193.164ms  203.208.151.214  188.148ms  203.208.182.122  196.526ms

traceroute -I www.cnn.com
  9   203.208.171.242  4.772ms  4.679ms  5.160ms 
 10   203.208.182.122  171.006ms  203.208.151.214  187.336ms  203.208.182.122  171.447ms 

traceroute -I www.ibm.com
 9   203.208.153.142  4.385ms  5.857ms  3.853ms 
 10   203.208.173.106  178.135ms  203.208.151.94  183.842ms  203.208.182.86  181.097ms

Something is rotten in the state of Denmark, er, international connectivity! (Sorry, Wil)
Only Google bucks that pattern, but that's because their DNS sends me to a server in Singapore. So the huge jump in latency happens in the 203.208.182.* and 203.208.151.* subnets.
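To make such spot checks repeatable, a small shell loop does the trick - a minimal sketch (host list and hop counts are mine, adjust as needed):

#!/bin/bash
# Compare command line latencies for a few hosts in one go
hosts="www.pocketinet.com www.economist.com www.cnn.com www.ibm.com"
for host in $hosts; do
    echo "=== $host ==="
    ping -c5 "$host" | tail -1        # the rtt min/avg/max summary line
    traceroute -I "$host" | tail -5   # the last hops are where the jump shows
done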
whois tells me:
 whois 203.208.182.86
% [whois.apnic.net]
% Whois data copyright terms    http://www.apnic.net/db/dbcopyright.html

% Information related to '203.208.182.64 - 203.208.182.127'

inetnum:        203.208.182.64 - 203.208.182.127
netname:        SINGTEL-IX-AP
descr:          Singapore Telecommunications Pte Ltd

So, might the servers be at a SingTel overseas location? InfoSniper places them in Singapore (you can try other geolocation services, with the same result). Now I wonder: is the equipment undersized, wrongly configured, or is something else happening on that machine that eats up the time?
Looking for an explanation.

Update 19 Sep 2014: Today the speedtest.net site shows ping results that are similar to traceroute/ping from the command line - unfortunately not because the command line results got better, but because the speedtest.net results got worse, now ranging around 200ms. I wonder what happened. Checking the effective up/down speed will be a little more tricky. Stay tuned.

12/09/2014

Foundation of Software Development

When you learn cooking, there are a few basic skills that need to be in place before you can get started: cutting, measuring, stirring and an understanding of temperature's impact on food items. These skills are independent of what you want to cook: Western, Chinese, Indian, Korean or Space Food.
The same applies to software development. Interestingly, we try to delegate these skills to UI designers, architects, project managers, analysts or infrastructure owners. To be a good developer, you don't need to excel in all of those skills, but you should at least develop a sound understanding of the problem domain. There are a few resources I consider absolutely essential, and they are pretty independent of what language, mechanism or platform you actually use, so they provide value to anyone in the field of software development.
As usual: YMMV.

03/09/2014

Flow, Rules, Complexity and Simplicity in Workflow

When I make the claim "Most workflows are simple", I'm hit with bewildered looks and the assertion: "No, ours are quite complex". My little provocation is quite deliberate, since it serves as an opening gambit to discuss the relation between flow, rules and lookups. All workflows begin rather simply. I'll take a travel approval workflow as a sample (any resemblance to workflows of existing companies would be pure coincidence).
The workflow as described by the business owner
The explanation is simple: "You request approval from management, finance and compliance. When all say OK, you are good to go". Translating that into a flow diagram is quite easy:
With a simple straight line, one might wonder what the fuss about workflow systems and all the big diagrams is about. Looking closer, though, each box "hides" quite some complexity. Depending on different factors, some of the approvals are automatic. "When we travel to a customer to sign a sizable deal and we are inside our budget allocation, we don't need to ask finance". So the workflow actually looks a little more complex:
Different levels need different approvals
The above diagram shows the flow in greater detail, with a series of decision points, both inside a functional unit (management, finance and compliance) and between them. The questions still stay quite high level: "required Y/N", "1st line manager approves". Since the drawing tools make it easy - and it looks impressive - the temptation to draw further is strong. You could easily model the quest for the manager inside a workflow designer. The result might look a little like this:

Read More

02/09/2014

Rethinking the MimeDocument data source

Tim (we miss you) and Jesse had the idea to store beans in Mime documents, which became an OpenNTF project.
I love that idea and was musing how to make it more "Domino like". In its binary format, a serialized bean can't be used for showing view data, nor can one be sure that it can be transported or deserialized by anything other than the same class version as the creator's (this is why Serializable wants a serialVersionUID).
With a little extra work, that actually becomes quite easy: enter JAXB. Serializing a bean to XML (I hear howling from the JSON camp) allows for a number of interesting options:
  • The MIME data generated in the document becomes human readable
  • If the class changes a little, de-serialization will still work; if it changes a lot, it can still be deserialized into an XML Document
  • Values can be extracted using XPath to write them into the MIME header and/or regular Notes items - making them accessible for use in views
  • Since XML is text, full text search will capture the content
  • Using a stylesheet, a fully human readable version can be stored with the original MIME (good for eMail)
I haven't sorted out the details, but let's look at some of the building blocks. Whoever has seen me demo XPages will recognize the fruit class. The difference here: I added the XML annotations for a successful serialization:
package test;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Fruit", namespace = "http://www.notessensei.com/fruits")
@XmlAccessorType(XmlAccessType.NONE)
public class Fruit {
    @XmlAttribute(name = "name")
    private String  name;
    @XmlElement(name = "color")
    private String  color;
    @XmlElement(name = "taste")
    private String  taste;
    @XmlAttribute(name = "smell")
    private String  smell;
    
    public String getSmell() {
        return this.smell;
    }

    public void setSmell(String smell) {
        this.smell = smell;
    }

    public Fruit() {
        // Default constructor
    }

    public Fruit(final String name, final String color, final String taste, final String smell) {
        this.name = name;
        this.color = color;
        this.taste = taste;
        this.smell = smell;
    }

    public final String getColor() {
        return this.color;
    }

    public final String getName() {
        return this.name;
    }

    public final String getTaste() {
        return this.taste;
    }
    
    public final void setColor(String color) {
        this.color = color;
    }

    public final void setName(String name) {
        this.name = name;
    }

    public final void setTaste(String taste) {
        this.taste = taste;
    }
}

The function (probably in a manager class or instance) to turn that into a Document is quite short. JAXB could serialize directly into a Stream or String, but we need the intermediate XML Document to be able to apply XPath expressions later.
public org.w3c.dom.Document getDocument(Fruit fruit) throws ParserConfigurationException, JAXBException {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    DocumentBuilder db = dbf.newDocumentBuilder();
    org.w3c.dom.Document doc = db.newDocument();
    JAXBContext context = JAXBContext.newInstance(fruit.getClass());
    Marshaller m = context.createMarshaller();
    m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
    m.marshal(fruit, doc);
    return doc;
}
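A quick usage sketch, turning the resulting w3c Document into text with the standard JDK Transformer (my illustration, not part of the original post):

// Serialize the w3c DOM to a String - handy for debugging and MIME storage
public String fruitAsXml(Fruit fruit) throws Exception {
    org.w3c.dom.Document doc = this.getDocument(fruit);
    javax.xml.transform.Transformer t =
            javax.xml.transform.TransformerFactory.newInstance().newTransformer();
    java.io.StringWriter out = new java.io.StringWriter();
    t.transform(new javax.xml.transform.dom.DOMSource(doc),
            new javax.xml.transform.stream.StreamResult(out));
    return out.toString();
}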

The little catch here: there is a Document class in the lotus.domino package as well as in the org.w3c.dom package, and you need to take care not to confuse the two. Saving it into a Notes document, including the style sheet (to make it pretty), is short too. The function receives the stylesheet as a w3c Document and the list of fields to extract as a key (the field) - value (the XPath) map. Something like this:
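The full listing sits behind the "Read More" link; as a rough sketch (method and item names are my assumptions, stylesheet handling omitted), the save step could look like this:

// Sketch only: XPath values become Notes items, the XML becomes the MIME body
public void saveToNotes(lotus.domino.Session session, lotus.domino.Document notesDoc,
        org.w3c.dom.Document xml, String xmlText,
        java.util.Map<String, String> fields) throws Exception {
    javax.xml.xpath.XPath xpath = javax.xml.xpath.XPathFactory.newInstance().newXPath();
    for (java.util.Map.Entry<String, String> field : fields.entrySet()) {
        // key = Notes item name, value = XPath expression
        notesDoc.replaceItemValue(field.getKey(), xpath.evaluate(field.getValue(), xml));
    }
    session.setConvertMime(false); // keep Domino from flattening the MIME to RichText
    lotus.domino.Stream stream = session.createStream();
    stream.writeText(xmlText);
    lotus.domino.MIMEEntity body = notesDoc.createMIMEEntity("Body");
    body.setContentFromText(stream, "application/xml; charset=UTF-8",
            lotus.domino.MIMEEntity.ENC_NONE);
    stream.close();
    notesDoc.closeMIMEEntities(true, "Body");
    notesDoc.save(true, true);
    session.setConvertMime(true);
}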

Read More

02/09/2014

Bikepad SmartPhone mount review

This is my field impression of the Bikepad SmartPhone mount after using it for a few weeks on my Montague Paratrooper Pro.

TL;DR: The Bikepad is a highly functional accessory that keeps your phone within reach on your bike, fully functional. It has quality craftsmanship and a sleek design. If I had an editor-refuses-to-give-it-back award to give (I actually paid for it), I would award it.

I cycle for longer durations and over some rough spots, so I like to keep a phone in reach - not least to keep SWMBO updated. When I learned about Bikepad and their claim "Basically the Bikepad creates a vacuum between its surface and the device. The vacuum is strong enough to hold the device", I had to give it a try. Here's the verdict:
  • The Good
    Works as designed. The surface indeed creates a gecko-feet-like suction that firmly holds the phone in place; you actually need some force to pull it out. The aluminium base is sleek and solidly built. I like the minimal design: everything is there for it to function and nothing more, form follows function at its best. The half-pipe shaped aluminium connector can be easily fixed on a stem (I didn't try a handlebar mount) and secured with a Velcro strap. It comes with a foam pipe segment to adjust to different pipe diameters.
    I tried to shake the phone off in off-road and city conditions, including falling off the bike and hitting the ground (that part unplanned), but it stayed nicely in place.
  • The Bad
    Works as designed. For the vacuum to build, close contact between phone and surface is needed. For your iShiny® (that was my main test unit) or a Nexus 4 (the other phone) that isn't a problem. For anything that doesn't have a flat back, or has buttons on the back, you need to test it. The various phone cases also need to be checked carefully. I tested an Otterbox case, which has a ridge running around the back. This keeps the case body from making contact with the pad; only the ridge touches, which doesn't provide enough suction.
  • and The Ugly
    When it rains it pours. If the pad gets wet, it gets slippery and loses its suction. When the phone already sticks to it and gets rained on, it will gradually lose its grip. Luckily the pad comes with a little shower-cap style rain cover. With the cover it looks a little funny, not as cool anymore, but it does the job. Anyway, you wouldn't want to expose your iShiny® to the bare elements. Another little challenge: my stem is quite thick, so the provided foam pipe is too thick to squeeze between stem and half pipe. I was left with a little cutting exercise or alternative means; I opted for Sugru, which holds everything in place.
To see more, check out some of their videos. In summary: a keeper.

14/08/2014

Long Term Storage and Retention

Time is relative, and not just since Einstein. For a human brain, anything above 3 seconds is long term. In IT this is a little more complex.

Once a work artifact is completed, it runs through legal vetting and goes to either medium or long term storage. I'll explain the difference in a second. This logical flow manifests itself in multiple ways in concrete implementations: journaling (both eMail and databases), archival, backups, write-once copies. Quite often all artifacts go to medium term storage anyway and only make it into long term storage when the legal criteria are met. Criteria can be:
  • Corporate & Trade law (e.g. the typical period in Singapore is 5 years)
  • International law
  • Criminal law
  • Contractual obligations (e.g. in the airline industry all plane related artifacts need to be kept at least until the last plane of that family has retired; the Boeing 747 family, for example, has been in service for more than 40 years)
For a successful retention strategy three challenges need to be overcome:
  1. Data Extraction

    When your production system doesn't provide retention capabilities, how do you get the data out? In Domino that's not an issue, since it has provided robust storage for 25 years (you still need to BACKUP data). However, if you want a cross application solution, have a look at IBM's Content Collector family of products (of course other vendors have solutions too, but I'm not on their payroll)
  2. Findability

    Now that an artifact is in the archive, how do you find it? Both navigation and search need to be provided. Here a clever use of meta data (who, what, when, where) makes the difference between a useful system and a bit graveyard. Meta data isn't an abstract concept, but the ISO 16684-1:2012 standard. And - YES - it uses the Dublin Core, not to be confused with Dublin's ale
  3. Consumability / Resilience

    Once you have found an artifact, can you open and inspect it? This very much boils down to: do you have software that can read and render this file format?
The last item (and to some extent the second) makes the difference between mid-term and long-term storage. In a mid-term storage system you presume that, short of potential version upgrades, your software landscape doesn't change and the original software is still actively available when a need for retrieval arises. Furthermore, you expect your retention system to stay the same.
On the other hand, in a long-term storage scenario you can't rely on a specific software for either search or artifact rendering. So you need to plan a little more carefully. Most binary formats fall short of that challenge. Furthermore, your artifacts must be able to "carry" their meta data, so a search application can rebuild an access index when needed. That is one of the reasons why airline maintenance manuals are stored in DITA rather than an office format (note: docx is not compliant with ISO/IEC 29500 Strict).
The problem domain is known as Digital Preservation and has a reference implementation and congressional attention.
In a nutshell: keep your data as XML, PDF/A or TIFF. MIME could work too; it is good with meta data after all, and it is native to eMail. The MIME trap to avoid are MIME parts that are proprietary binary (e.g. your attached office document). So proceed with caution.
Neither PST, OST nor NSF are suitable for long term storage (you can still use the NSF as the search database).
To be fully sure, a long term storage system would retain the original format (if required) as well as a vendor independent format.

Read More

14/08/2014

Time stamped encrypted archives

Developers use Version Control, business users Document management and consultants ZIP files.
From time to time I feel the need to safeguard a snapshot in time outside the machine I'm working with. Since "storage out of my control" isn't trustworthy, I encrypt data. This is the script I use:
#!/bin/bash
############################################################################
# Saves the given directory ($1) in an SSL encrypted zip file ($2) within
# the secure location folder. The name of the ZIP file needs to be without
# zip extension but might already contain the date. Destination can be $3
############################################################################
# Adjust these three values to your needs. Don't use ~ otherwise it doesn't
# work when you use sudo
tmplocation=/home/user/temp/
keyfile=/home/user/.ssh/pubkey.pem
privatekey=/home/user/.ssh/privkey.pem
if [ -z "$3" ]
  then
    securelocation=./
else
    securelocation=$3
fi
fullzip=$tmplocation$2.zip
fulldestination=$securelocation$2.szip
securesource=$1

# If the final file exists we decrypt it first to update it
if [ -f "${fulldestination}" ]
then
    echo "Decrypting ${fulldestination}..."
    openssl smime -decrypt -in "${fulldestination}" -binary -inform DER -inkey "$privatekey" -out "${fullzip}"
    # Update the ZIP with the directory's current content
    echo "Updating from ${securesource}"
    zip -ru "$fullzip" "$securesource"
else
    echo "Creating from ${securesource}"
    zip -r "$fullzip" "$securesource"
fi

# Encrypt it
echo "Encrypting ${fulldestination}"
openssl smime -encrypt -aes256 -in "$fullzip" -binary -outform DER -out "$fulldestination" "$keyfile"
# Remove the temp file
shred -u "$fullzip"
notify-send -t 1000 -u low -i gtk-dialog-info "Secure backup completed: ${fulldestination}"

To make that work you need encryption keys, which you can create yourself (see the sketch after the next script). A typical script to call the script above would look like this:
#!/bin/bash
############################################################################
# Save the Network connections from /etc/NetworkManager/system-connections
# in an SSL encrypted zip file
############################################################################
securesource=/etc/NetworkManager/system-connections
# Save one version per day
now=$(date +"%Y%m%d")
# Save one version per month
#now=$(date +"%Y%m")
zipfile=networkconnections_$now
securelocation=/home/user/allmyzips/
zipAndEncrypt $securesource $zipfile $securelocation
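The openssl smime calls expect an X.509 certificate as the "public key" and a matching private key. A minimal way to create a self-signed pair yourself (a sketch - pick your own key size, lifetime and subject):

# Create a self-signed certificate/key pair for the backup scripts
openssl req -x509 -nodes -days 3650 -newkey rsa:4096 \
        -subj "/CN=backup" \
        -keyout /home/user/.ssh/privkey.pem \
        -out /home/user/.ssh/pubkey.pem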

When you remove the decryption part (one time creation only, no update), you only need access to the public key, which you could share, so someone else can provide you with a zip file encrypted just for you.
As usual: YMMV.

08/08/2014

Designing a REST API for eMail

Unencumbered by standards designed by committees, I'm musing how a REST API for eMail might look.
A REST API consists of 3 parts: the URI (~ URL for browser access), the verb and the payload. Since I'm looking at browser only access, the structured data payload format clearly will be JSON with the prose payload delivered in MIME format. I will worry about calendar and social data later on.
The verbs in REST are defined by the HTTP standard: GET, POST, PUT, and DELETE. My base URL would be http://localhost:8888/email and then continue with an additional part. Combined with the 4 horsemen verbs I envision the following action matrix:
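To give a flavour of the style, a few hypothetical calls against that base URL (the paths are my own illustration; the actual matrix follows after the break):

# Illustrative only - not the matrix from the full post
curl http://localhost:8888/email/inbox                  # GET: list messages
curl -X PUT --data-binary @message.mime \
     http://localhost:8888/email/drafts/42              # PUT: store/replace a draft
curl -X DELETE http://localhost:8888/email/inbox/42     # DELETE: remove a message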
Read More

06/08/2014

Running vert.x with the OpenNTF Domino API

In the first part I got vert.x 3.0 running with my local Notes client. The challenges mastered there were 32-bit Java for the Notes client and the usual adjustment of the path variables. The adoption of the OpenNTF Domino API required a few more steps:
  1. Set two environment variables:
    DYLD_LIBRARY_PATH=/opt/ibm/notes
    LD_LIBRARY_PATH=/opt/ibm/notes
  2. Add the following parameter to your Java command line:
    -Dnotes.binary=/opt/ibm/notes -Duser.dir=/home/stw/lotus/notes/data -Djava.library.path=/opt/ibm/notes
    Make sure that it is one line only. (Of course you will adjust the paths to your environment, won't you?)
  3. Add 4 JAR files to the classpath of your project runtime:
    • /opt/ibm/notes/jvm/lib/ext/Notes.jar
    • /opt/ibm/notes/framework/rcp/eclipse/plugins/com.ibm.icu.base_3.8.1.v20080530.jar
    • org.openntf.domino.jar
    • org.openntf.formula.jar
    I used the latest build of the latter two JARs from Nathan's branch, so make sure you have the latest. The ICU plug-in is based on the International Components for Unicode project and might get compiled into a future version of the Domino API. (A combined launch sketch follows after this list.)
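Putting the pieces together, launching a standalone app could look roughly like this - a sketch where your application JAR (myapp.jar) and the main class (com.example.MyVerticleRunner) are placeholders:

#!/bin/bash
# Environment variables from step 1 - adjust paths to your install
export DYLD_LIBRARY_PATH=/opt/ibm/notes
export LD_LIBRARY_PATH=/opt/ibm/notes

# Parameters from step 2, JARs from step 3 on the classpath
java -Dnotes.binary=/opt/ibm/notes \
     -Duser.dir=/home/stw/lotus/notes/data \
     -Djava.library.path=/opt/ibm/notes \
     -cp "/opt/ibm/notes/jvm/lib/ext/Notes.jar:\
/opt/ibm/notes/framework/rcp/eclipse/plugins/com.ibm.icu.base_3.8.1.v20080530.jar:\
org.openntf.domino.jar:org.openntf.formula.jar:myapp.jar" \
     com.example.MyVerticleRunner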
Now the real fun begins. The classic Java API is conceptually single threaded, with all Domino actions wrapped between NotesThread.sinitThread(); and NotesThread.stermThread(); to gain access to the Notes C API. For external applications (the ones running neither as XPages/OSGi nor as agents), the OpenNTF API provides the DominoExecutor.
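For reference, the classic standalone pattern looks like this (a minimal sketch using the standard lotus.domino classes):

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class ClassicPattern {
    public static void main(String[] args) throws Exception {
        NotesThread.sinitThread(); // attach this thread to the Notes C API
        try {
            Session session = NotesFactory.createSession();
            System.out.println("Running as " + session.getUserName());
        } finally {
            NotesThread.stermThread(); // always detach, even after errors
        }
    }
}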
Read More

24/07/2014

Workflow for beginners, Standards, Concepts and Confusion

The nature of collaboration is the flow of information. So naturally I get asked about workflows and their incarnation in IT systems a lot. Many of the questions point to a fundamental confusion about what workflow is, and what it isn't. This entry will attempt to clarify concepts and terminology.
Wikipedia sums it up nicely: "A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information. It can be depicted as a sequence of operations, declared as work of a person or group, an organization of staff, or one or more simple or complex mechanisms".
Notably absent from the definition are: "IT system", "software", "flowchart" or "approval". These are all aspects of the implementation of a specific workflow system, not the whole of it. The Workflow Management Coalition (WfMC) has all the specifications, but they might appear as being written in a mix of Technobabble and Legalese, so I sum them up in my own words here:
  • A workflow has a business outcome as a goal. I personally find that quite a narrow definition, unless you agree that "spring cleaning is serious business". So I would say: a workflow is a collection of steps, designed to be repeatable, that make it easier to achieve an outcome. A workflow is an action pattern, the execution of a process. It helps to save time and resources when it is well designed, and can be a nightmare when mis-fitted
  • A workflow has an (abstract) definition and zero or more actual instances where the workflow is executed. Like: "Spring cleaning requires: vacuuming, wiping, washing" (abstract) vs. "Spring cleaning my apartment on March 21, 2014" (actual). Here lies the first challenge: can a workflow instance deviate from the definition, and by how much? How are cases handled when the definition changes in the middle of the flow execution? How to handle workflow instances that require more or fewer steps? When is a step mandatory for regulatory compliance?
  • A workflow has one or more actors. In a typical approval workflow the first actor is called requestor, followed by one or more approvers. But actors are not limited to humans and the act of approving or rejecting. A workflow actor can be a piece of software that adds information to a flow based on a set of criteria. A typical architecture for automated actors is SOA
  • Workflow systems have different magnitudes. The flagship products orchestrate flows across multiple independent systems and eventually across corporate boundaries, while I suspect that the actual bulk of (approval) flows runs in self-contained applications, which might use coded flow rules, internal or external flow engines
  • On the other end of the scale sits eMail, where the flow and sequence are hidden in the heads of the participants or scribbled into freeform text
  • Workflows can be described in Use Cases, where the quality depends on the completeness of the description, especially the exception handling. A lot of Business Process Reengineering that is supposed to simplify workflows fails due to incomplete exception handling and people start to work "around the system" (eMail flood anyone?)
  • A workflow definition has a business case and describes the various steps. The number of steps can be bound by rules (e.g. "the more expensive, the more approvers are needed" or "if the good transported is HazMat, approval by the environmental agency is needed") that get interpreted (yes/no) in the workflow instance
  • Determining the next actor(s) is a task that combines the workflow instance step with a role resolver. That's the least understood and most critical part of a flow definition (see the sketch after this list). Let's look at a purchase approval flow definition: "The requestor submits the request for approval by the team lead, to be approved by the department head for final confirmation by the controller". There are 4 roles to resolve. This happens in the context of the request and the organisational hierarchy. The interesting question: if a resolver returns more than one person, what to do? Pick one at random, round robin, or something else?
  • A role resolver can run on submission of a flow or at each step. People change roles, delegate responsibilities or are absent, so results change. Even if a (human) workflow step already has a person assigned, a workflow resolver is needed: that person might have delegated a specific flow, for a period (leave) or permanently (workload distribution). So Jane Doe might have delegated all approvals below sum X to her assistant John Doe (not related), but that doesn't get reflected in the flow definition, only in the role resolution
  • Most workflow systems gloss over the importance of a role resolver. Often the role resolver is represented by a rule engine, which gets confused with the flow engine. Both parts need to work in concert. We also find role resolution coded as a tree crawler along an organisational tree. Role resolving warrants a complete post of its own (stay tuned)
  • When workflow is mentioned to NMBU (normal mortal business users), two impressions pop up instantly: approvals (perceived as the bulk of flows) and graphical editors. This is roughly as accurate as "it is only Chinese food when it is made with rice". Of course there are ample examples of graphical editors and visualizations. The challenge: the shiny diagrams distract from role definitions, invite overly complex designs and contribute less to a successful implementation than sound business cases and complete exception awareness
  • A surprisingly novel term inside a flow is SLA. There's a natural resistance against a superior (approver) being bound by an action of a subordinate to act in a certain time frame. Quite often, making SLAs part of a workflow initiative provides an incentive to look very carefully at making processes complete and efficient
  • Good process definitions are notoriously hard to write and document. A lot of implementations suffer from a lack of clear definitions. Even when the what is clear, the why gets lost. Social tools like a wiki can help a lot
  • A good workflow system has a meta flow: a process to define a process. That's the part where you usually get blank stares
  • Read one or the other good book to learn more
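As mentioned in the role resolver point above, here is a minimal sketch of what such a contract could look like (interface and names are my illustration, not a specific product's API):

import java.util.Collection;
import java.util.Map;

// A workflow step asks the resolver for concrete people instead of
// hard-coding them. Implementations might crawl an org tree, query a
// directory or evaluate delegation rules.
public interface RoleResolver {
    // roleName:  e.g. "team lead" or "controller"
    // requestor: the user the flow instance belongs to
    // context:   instance data the rules may need (amount, country, ...)
    // Returns one or more candidate users - the flow engine still has to
    // decide what to do when there is more than one.
    Collection<String> resolve(String roleName, String requestor, Map<String, Object> context);
}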
There is more to say about workflow, so stay tuned!

Disclaimer

This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2014 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.