What’s on the Kanban board?

When adopting an Agile development process, the team should start by visualising their work using a Kanban board. At the start, the items on the board will probably cover a whole range of work, hopefully including a few User Stories. If the team then uses these items to try to create flow, they will probably end up modelling their entire development process (from idea to product) on the Kanban board, or worse still, trying to apply Sprint planning to it.

This is especially true for platform teams that move to become cross-functional teams. The problem is that a Kanban board can only really handle the flow of one type of object (i.e. model one process) and that object has to have a clear “Definition of Done”. But hey, the whole point of using Kanban is to visualise the work so the team can do something about it to improve flow.

As I said, the Kanban board works best if you supply it with right-sized work items, and in software development these items are usually User Stories. But using Kanban to manage the flow of User Stories is only going to capture one part of the work that the team do to deliver the right thing to customers. The other part is the preparation of these valuable right-sized User Stories and their acceptance criteria, what is known as the Product Discovery process. Product Discovery and Product Development form a highly integrated dual-track development process.

Product Discovery follows a different process from User Story development: ideas are analysed and discarded, or morph into something else. (It is a fluid process, but Kanban can still be used even here to visualise it.) So even though the team could be spending 50% of their time breaking down the problem, defining an MVP and creating good User Stories, this effort will not be visible on the User Story Kanban board. Or more accurately, the time spent is not part of the User Story lifecycle, since the story can’t start its journey across the Kanban board until the discovery process is complete.

This is not to say that the team cannot visualise discovery work, it just means that it cannot be attached to any particular User Story. Instead the team can represent it using another object: the Task. Unlike Stories, Tasks do not deliver customer value. Tasks could cover any kind of activity: a spike, an analysis, purchasing a license, setting up an environment, etc. Even if there is no customer value involved, the team should still strive to create well-defined or time-boxed tasks: what is the spike attempting to prove? When is the setup complete? In other words, Tasks must also have a clear “Definition of Done”.

While the Kanban board can be used to track Tasks, the team should only use it for well-defined tasks. Activities like meetings and discovery sessions form part of the team’s work that should not be quantified using Kanban. After all, the goal is to deliver valuable right-sized User Stories to the customer, not to document the completion of Tasks.

Lastly, the team can benefit from being able to visualise which User Stories belong to which MVP. In Jira (for example) the team can group related User Stories using the Epic Issue Type. Thus, an Epic can be used to represent an MVP. If the team want to track activities related to breaking down the MVP, then those Tasks can also be associated with the Epic. Finally, the team can also create a separate Epic Kanban Board which tracks the flow of Epics: they are bigger objects that move more slowly but should still have a clear Definition of Done.
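As a rough sketch of the object model described in this article (the names and fields here are illustrative, not tied to Jira or any other tool), the relationships could be expressed like this:

```typescript
// Illustrative only: every item type has its own Definition of Done,
// Stories deliver customer value, Tasks do not, and an Epic groups the
// Stories (and discovery Tasks) that make up one MVP.
interface WorkItem {
  title: string;
  definitionOfDone: string;
}

interface UserStory extends WorkItem {
  acceptanceCriteria: string[]; // describes the customer value being delivered
}

interface Task extends WorkItem {
  timeBox?: string; // e.g. "2 days" for a spike
}

interface Epic extends WorkItem {
  stories: UserStory[];   // the right-sized Stories that make up the MVP
  discoveryTasks: Task[]; // breakdown and discovery work for the MVP
}
```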

Further reading

How to set up a Product Discovery Process?

Technical debt vs. Maintenance debt

Agile software development is about adapting to change, about continuously learning what the customer wants. The team breaks down the solution into small pieces, delivering working software in every iteration that the customer can evaluate and give feedback on. In order to be able to learn faster, the initial releases may sacrifice best practice (in a controlled way), for example, a lack of abstraction, using tightly-coupled modules, hard-coded values, a crude data model, etc. In extreme cases, the developers “build one to throw away” (Fred Brooks). These are all forms of Technical debt (as Ward Cunningham defined the term); deliberate shortcuts in the implementation that we then remove when our understanding has improved.

The key point is that this is a debt that has to be paid back as soon as possible; a short-term loan if you will. Ward describes repaying the debt as using the experience gained from customer feedback to refactor the code to reflect the team’s new understanding of the problem. This chimes well with what Fred Brooks describes as maintaining the Conceptual Integrity of the product. Martin Fowler describes this as Prudent Deliberate debt. Henrik Kniberg calls it Good Technical Debt.

Unfortunately, the world is not perfect and Technical debt is not always repaid (in full). This results in the phenomenon of software entropy: the gradual disorder that arises as the code is modified over time. Martin describes the ways the team can default on their debt in his Technical Debt Quadrant: repayment of Reckless Deliberate debt will have to be postponed, and some or all Inadvertent debt will be discovered too late to do anything about it within the timeframe of the project. Henrik calls this Bad Technical Debt.

This unpaid Technical Debt is now added to the long-term maintenance backlog of the product. This backlog also includes work that results from advances in technology and the deprecation or obsolescence of existing technologies, amongst other things. This means that even if we could develop the best possible solution using state-of-the-art technologies today, we would still incur debt in the long term because of obsolescence. This is a long-term loan that must also be repaid as we develop new features.

This long-term debt has also, confusingly, become labelled as Technical debt, a result of semantic diffusion. Uncle Bob addresses this in A Mess is not Technical Debt. This unclear distinction is understandable: from the point-of-view of the Product Owner who is trying to deliver a new feature, it makes no difference if the debt is short-term or long-term, it still has to be paid whether it was incurred as part of the project or as a result of longer-term software entropy or (for example) obsolescence. However, I would argue that Technical Debt is contracted debt, debt that the PO and development team have agreed on incurring as part of the learning process. In that case, long-term debt could be seen as a form of implicit rent.

What can we call this phenomenon of long-term debt? Let’s look at what constitutes it:

  • Obsolescence: new versions or EOL for third-party software, engineering competence not available, etc.
  • Paradigm shifts that reduce accidental complexity (Fred Brooks, The Mythical Man-Month) (or more intuitively called “incidental complexity”), e.g. Garbage collection
  • Software entropy (Unpaid Technical debt): a lot of Reckless Deliberate debt, nearly all Inadvertent technical debt (frequently labelled incorrectly as “accidental complexity”).

All these phenomena affect the viability or vitality of the software solution, in other words we are talking about the technical durability of the solution. If nothing is done we will reach an inflection point (debt ceiling) where it costs too much to develop new features in the existing solution and a fresh start will be needed. Perhaps an appropriate name for all this long-term debt is simply Maintenance Debt.

As Martin wrote, it is the usefulness of these terms that is relevant. We want to distinguish between the Prudent deliberate debt incurred by the project and everything else that requires maintenance. By naming the longer-term debt as Maintenance Debt we can return Technical debt to its original definition: a short-term debt that is deliberately incurred within the context of the project, and which is budgeted for.

In contrast, Maintenance debt encompasses long-term debt over which the team have little control, which may or may not be part of any feature development, and which most likely has not been budgeted for.

An introduction to Agile

In this article I will discuss how to get started with Agile in the most hands-on way possible, with no discussion of frameworks and methodologies. I believe it is important to understand the essence of Agile first, as it is easy to be overwhelmed with all of the techniques and tools that have evolved from it (Scrum, XP, SAFe, etc.).

The goal then is to create an iterative software development process that can be improved upon continuously. The only tool you are going to need is a stack of post-its and some wall space or a whiteboard where the team can work together.

Start small, which means starting at the team level. Learning to work in an Agile way will also require some experimentation as every team works differently. The point is that you will need to create some slack in the team’s schedule if you want to change the way they work. Finally, you or the team lead will take on the role of Agile Team Coach.

I should also mention that there’s lots of help out there: blogs, forums and books. One excellent resource is the Agile and Lean Software Development Group on LinkedIn. Now let’s get started!

Step 1: Visualisation

First, the team should start by visualising their work; this is especially important in software development, which by nature is very abstract. By visualisation, I am not referring to traditional documentation which tries to capture an entire scope such as requirements or test cases. What you want to visualise here is what the team is doing right now. For this you use post-its. Every team member writes down what they are working on, big or small, together with their initials in the corner, and sticks it onto a whiteboard or wall.

Now the team have an opportunity to discuss the work, make adjustments, and add or remove post-its. The team can try to group related activities, for instance. Spend about 5-10 minutes on this, no more; just enough time to smooth out the rough edges.

Step 2: Create flow

The next thing the team need to consider is what the definition of “Done” is for each post-it note. By “Done” we mean that the team is finished with the work item; it could be putting software in production, writing a manual, upgrading a database, etc. In reality, a lot of teams starting out with Agile do not have a clear definition of Done for their work items, so don’t sweat it too much yet. Fixing this will be part of the improvement process mentioned later on.

Create three columns on the whiteboard or wall and label them: Backlog, Doing, Done. Now each team member places each of their Post-its into one of the three columns. Finished? Great! You have now created your first Kanban board. (Here we introduce the most elementary and useful of Agile tools, the Kanban Board.)

So now the team has visualised their work and created a flow of work from left to right on the board. Congratulations!

Step 3: Reflection

The whole exercise above shouldn’t take more than an hour for a team of 10 people. Stand back and take a look at it. It may be obvious that some items on the board have unclear scope and some items are very large (or small). We’ll come back to these issues later.

One final exercise: sum the number of post-its in the Backlog and Doing columns and divide the total by the number of members on the team. This will give you some indication of how much multitasking is going on and how much overhead is being created due to context-switching.

Step 4: Focusing on the goal

OK, the team have taken the important first steps in becoming Agile. And they will continue taking small steps, applying well-proven techniques that will improve the flow of work. But let’s discuss the goal; where is the team trying to get to? In the book The Phoenix Project, Bill is inspired by Lean manufacturing techniques used on production lines. Bill’s goal becomes the creation of a factory production line for his IT Department. As stated in Agile Principle #8:

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

In other words, the team will create a process that they can reuse to build whatever software solutions the organisation or customers need now and in the future. The team can now take their new Kanban board and visualised workflow and use it to build a software factory!

Step 5: Breaking down the work

In our first iteration the team’s work items had unclear scope and different sizes. The team should deal with the scope problems first. This can be solved by breaking these work items into smaller items, each with a clear definition of “Done”.  Spend 1-2 hours on this step, starting with the most important work items.

A classic problem is that a work item involves input from people outside the team. The Kanban board should not contain items that are assigned to “outsiders”. If this external work is a prerequisite for completing a team work item, then it should be recorded as a dependency on that work item, not as an item of its own. It is essential that the team have control over the work items on their board, even if they are currently blocked by external dependencies. The Kanban board should be used to focus on the team’s work!

Sizing of work items is about creating items that are of roughly equal size. As a rule-of-thumb, a work item should take about 2-3 days to complete, up to a maximum of two weeks. There are techniques for standardising work sizes, but for now I recommend a simple consensus from the team on whether an item is large or small or somewhere in between. Remember, if the definition of “Done” is software in production, then this must include coding, testing, etc.

In the worst case, a work item is so badly scoped and sized that it may not be possible to continue working on it in its current state, and more analysis (of requirements or architecture) is needed. If work stops altogether on such an item, then it should be moved back into the backlog. This is one of the hardest things to do in Agile, but knowing when a work item is really ready for execution is one of the great benefits Agile brings.

A clear definition of Done for each item together with creating items of roughly equal size will build team confidence. By breaking down work items into smaller chunks and visualising them on the Kanban board it becomes possible for every team member (and stakeholders!) to understand what the team is going to deliver. And getting items to Done will make everyone happy.

Step 6: Limiting Work in process (WIP)

At this point the team have broken down the work into similar size chunks and this probably means that there are many more post-its on the board. (There are many tools available for creating digital Kanban boards, but this is still a low priority for now; wait 2-4 weeks before taking that step.) What the team needs to focus on next is WIP. This exercise should take about 30-60 minutes.

Earlier the team calculated how many work items were being done per team member. Ideally, each team member should be working on one item at a time, i.e. sequentially; so for a team of 10, the number of work items in the “Doing” column would be 10. In practice the figure is higher and the team need to think about what that number is.

In Agile terms, we are talking about the team capacity. We use this figure to set a work in progress (or process) limit (WIP limit). In other words, the team cannot start a new work item until they have finished a work item that is already in progress (unless an item is blocked). Remember, the team have a clear definition of Done for every work item, so they are supposed to be able to complete them before starting something new.

WIP limits are extremely important in creating flow. If the team tries to complete 20 work items at the same time, each item will take roughly twice as long to reach Done as it would if they were working on just 10 items.

For now, there is just one column with work in progress (“Doing”). The team should try to estimate how many man-days of work is in that column. Anything more than 30-40 days (3-4 days x 10 people) worth of work should be moved to the Backlog, and this means prioritising what needs to get done first. Prioritising is the responsibility of the Product Owner or Business Manager responsible for the product being developed, so naturally they need to be involved. Agile creates visibility for both the team and stakeholders!

Step 7: Daily stand-ups

Book 10-15 minutes with the team every morning for a stand-up in front of the Kanban board to discuss the day’s activities. The stand-up is for the team only, but guests can be invited on occasion. Longer discussions should be saved for break-out sessions with those involved. The focus of the stand-up is to make sure everybody knows what they are doing, if there are any blockers that need to be escalated, and to check that the Kanban board is up-to-date.

In case it’s not obvious, the Kanban board has now become the most important tool the team have for organising and visualising their work. Well done!

Conclusion

The team have made great progress! They have managed to visualise their work, create flow, size their work items and limit their work in progress. This demonstrates the concept of Continuous Improvement (“Kaizen”) as preached in Lean manufacturing, meaning that the team are constantly looking for ways to improve the flow of work.

In Agile we use Retrospectives to specifically discuss how well the flow of work is, well, working. All the team are involved in suggesting improvements, and then some or all of the team are responsible for implementing at least one improvement right away. Process automation (e.g. test automation) is a classic example of improving flow.

There are many, many other techniques that are used as part of Agile, such as User Stories, Storyboarding, Minimum Viable Products (MVPs), backlog refinement and measuring velocity, to create an iterative software development process. Scrum is a subject unto itself. But these are topics for another article.

Further reading

I highly recommend the following books:

  • The Phoenix Project by Gene Kim
  • User Story Mapping by Jeff Patton
  • Lean from the trenches by Henrik Kniberg
  • Accelerate by Nicole Forsgren, Jez Humble and Gene Kim

The Agile factory

In the book The Phoenix Project there is a part where Bill is discussing lead times with Wes and Patty. Even though a certain task only takes 30 seconds, the lead time is still hours due to the time spent waiting for a resource to become available (Brent). Bill draws a graph to demonstrate the problem: if a resource is fully (100%) utilised then wait times become very long.

This statement threw me for a bit until I did some research. The graph definitely nails the problem with trying to maximise utilisation of resources. I mean, it is counter-intuitive for a manager to allow resources to idle. So what is the graph actually showing?

The graph shows that when the average utilisation goes over 80-90% then wait times become very long. So over time, a very high average utilisation will cause the queue to grow very long, increasing lead times dramatically. In other words, if a resource is very busy then it cannot cope with a workload that varies. The team must be allowed some slack so that lead times are still reasonable even when the workload is sometimes higher than average. In short, there is a trade-off between workload variability and resource utilisation.

This is well understood in the manufacturing industry, but it applies just as much to software development. Everything that is built in software development is a one-off, unique. This creates huge variability in the job times of the development process; no two projects are ever the same. The challenge then is to reduce this variability so that we can create a more predictable workload and push up utilisation.

In Agile we use techniques such as storyboarding, MVPs and backlog grooming to manage variability and ensure that we maintain flow. WIP limits and Velocity are our KPIs that let us know how well we are succeeding in maintaining flow. Flow refers both to managing the variability of the arrival time of work (i.e. breaking down the work into smaller deliverables) and the execution time of the job (e.g. sizing of User Stories).

The science

Back to Bill’s graph. Where does it come from? It is actually based on Kingman’s formula, which comes from the domain of queueing theory. In layman’s terms, the wait time is made up of three parts: a utilisation factor, a variability factor and the job time.
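In its standard form (sometimes written as the VUT formula), Kingman’s approximation combines these three parts as:

$$ W_q \;\approx\; \underbrace{\frac{\rho}{1-\rho}}_{\text{utilisation}} \times \underbrace{\frac{c_a^2 + c_s^2}{2}}_{\text{variability}} \times \underbrace{t_s}_{\text{job time}} $$

where ρ is the utilisation, c_a and c_s are the coefficients of variation of the arrival and job times, and t_s is the mean job time.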

So what Bill is actually saying is that, given a certain variability and a certain job time, the wait time will be a function of the utilisation, as shown in the graph above. Bill wants to focus on utilisation, so he normalises the other parameters (variation and job time).
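With the variability and job-time factors normalised to 1, only the utilisation term remains:

$$ W_q \;\propto\; \frac{\rho}{1-\rho} $$

which grows without bound as the utilisation ρ approaches 100%, which is exactly the shape of Bill’s graph.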

For an excellent explanation of Kingman’s formula have a look at EuroLEAN+’s YouTube tutorials.

Reducing variability

Variation is the norm in software development, so it has to be dealt with. There are several ways to mitigate variability. Reducing utilisation is one option, but we can do better than having our developers and testers idling.

Kingman’s formula shows that adding more work centres reduces sensitivity to variation (as well as increasing capacity, obviously). However, this is probably more feasible in manufacturing than in software development, because a work centre (i.e. a development team) often has domain expertise; no two teams have exactly the same capabilities. But this approach may be more applicable in larger organisations.

We have already mentioned reducing variability using Agile techniques such as storyboarding, MVPs and backlog grooming, and this should be the primary focus of the team coach in creating and optimising flow.

Another option available to development teams is to have a technical backlog containing lower-priority work that can be used to fill idle time and bring utilisation closer to 100%. The kind of tasks in this backlog should be small and independent. For example, it could involve refactoring, writing automated tests, learning about a new technology, and so on.

In summary, it is the combination of these techniques that allows development teams to be fully utilised. What Lean teaches us is that the same discipline and structure that is used to optimise manufacturing flows applies even more so to software development.

Tracking the team’s velocity gives us insights into both utilisation and the amount of planned work vs. unplanned work. We can also track the velocity of both Stories and Epics to see how good we are at sizing our MVPs. (An Epic is always an MVP in my book; this makes it clear what the definition of Done is for an Epic).

Skipping the queue

One of the early problems Bill had to deal with was departments trying to skip the queue. This is the result of a chronic failure of the development process. If lead times become unacceptably long (due to high utilisation, high variability or both), then eventually people will try to find shortcuts. This just makes a bad problem even worse, and represents a total breakdown in the chain of command. That kind of short circuit has to be dealt with before any other improvements have a chance of succeeding. Hence the need to start by visualising all of the work in process.

Inventory

I was almost going to write that inventory doesn’t cost anything in software development, after all it is virtual. We don’t have to purchase raw materials and we don’t have to store anything in warehouses. (Yes, GitHub costs something, but it is a negligible cost in this context.)

But there is still inventory in software. The raw materials are just ideas, one-liners that take up virtually no space at all and until the team commits to building (analysing, developing and testing) something, the backlog can be reorganised and priorities changed as often as desired.

The rest of the inventory is in the queues between work centres, e.g. when handing over from development to test. This inventory does represent an investment in time and effort, e.g. breaking down the problem, defining an MVP and coding a solution. The cost of holding this inventory is that knowledge about the solution disappears over time; no amount of documentation can replace the shared understanding that existed when the team were working actively on the solution. Furthermore, time-to-market (TTM) is probably the single most important factor for success nowadays. So to sum up, a lot of inventory, or WIP, is bad in software development. Watch those WIP limits!

Cycle time

Here is a good article describing Lead times and Cycle times. However, the difference between the two is not very clear in my opinion. A new Initiative will contain an unknown amount of work; that’s why we analyse it and break it down into reasonably sized chunks, reducing variability and minimising risk. A task (e.g. a User Story) is only added to the backlog when it is somewhat well-defined, and so the Lead time for every deliverable is a reasonably well-understood and managed parameter. Otherwise, Lead times just become guesswork, and that is not so useful.

But the backlog doesn’t only contain planned work, it also contains unplanned work; bugs and outages which must be dealt with immediately. This increases the Lead time for planned work in ways that can be hard to manage. While unplanned work cannot be avoided completely, it can be mitigated using a small iterative release process, i.e. continuous delivery, continuous improvements to the delivery process, as well as detective and preventative security controls.

So ideally, Lead Time only applies to tasks that are MVP-sized, and we should also have a WIP limit on new work to control Lead time. It does not make sense to fill up the backlog with Tasks that will be delivered years from now. Doing this, we gain an understanding of the team’s capacity, and long Lead times then indicate the need for an increase in team capacity, the need for more teams, or a change in priorities.

My agile development team was involved in storyboarding and backlog grooming for all new tasks, not just development and testing. The team were constantly managing the flow of deliverables at all stages on the Kanban board, both when there were too few tasks in a queue and when there were too many. So the difference between “Task created” and “Work started” was really very small, and therefore Cycle time should be uninteresting.

In queueing theory there is a formula known as Little’s Law which is used to calculate the Cycle time. So does this formula still have relevance even if we are not interested in Cycle time?
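For reference, the form of Little’s Law used here is simply:

$$ \text{Cycle time} = \frac{\text{WIP}}{\text{Throughput}} $$

which is the classic L = λW rearranged, with WIP as the number of items on the board and throughput as the rate at which items reach Done.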

The term “Cycle time” is somewhat non-intuitive. But if you think about it, the cycle time is also the average time it will take to deliver everything that’s on your Kanban board right now. For example, if your WIP is 10 and your average throughput is 2 tasks/day, then your cycle time is 5 days/task. Or put another way, the team can deliver everything on the Kanban board within the next 5 days. Now that’s a rather powerful statement. So Cycle time is also the Turnover time for all WIP.

The better the team get at breaking down the backlog into equal-sized chunks (i.e. minimising variability), the more relevant the turnover figure becomes.

And so if Cycle times converge with Lead times, then we are much more sure of our commitments to the business side of the organisation. Roll-on Big Room Planning!

Integration bloat

Integration platforms create a useful abstraction layer and are a prerequisite for building a Service Oriented Architecture. The integration platform is often the domain of an “Integration team” which may reside in-house or be out-sourced.

When building new services, one of the first things that has to be done is to create the service specification, which defines how the integration platform will publish your service. For SOAP web services this is done using WSDL. The integration team is then responsible for translating messages between systems, mapping fields, etc.

In some cases the integration work involves packaging specific functionality of an existing legacy service and publishing it as a more intuitive and lightweight service that can be more easily consumed by modern clients. If the clients are under development, then the scope for the integration team may not be 100% specified. To compensate, the integration team may include any mappings that might conceivably be needed. This can result in a service that contains more functionality than is strictly necessary to create a working solution.

When end-to-end testing is performed any problems found will be fixed, but only for that portion of the new service that is actually used by the client. Furthermore, the integration team may not have tested all or indeed any of the features of the service they created, instead relying on the end-to-end testing to find problems.

The result is an integration service that fulfils the client’s requirements but includes features that are untested. The integration team document the entire service but have no idea how much of the service has actually been verified to work. This creates a maintenance headache when the service must be modified.

The presence of superfluous fields is an obvious problem. A more subtle issue is fields that support specific values (like enums) where clients use some values but not all. The service provider might allow values A, B, C, D, E and F, the integration documentation might only advertise A, B, C and D, and the client might only use A and B. In reality, the integration may allow all values if no validation is applied; however, all that has been tested are A and B. Since the integration team do not have in-depth knowledge of the client behaviours, they have no alternative but to rely on their own code and documentation to understand the scope of the service.

In conclusion, once a service has been created that is too big for purpose, it is difficult if not impossible to reduce its functionality. Ideally, the service should be built up incrementally in an agile way of working; this ensures that the client and the integration are fully meshed. This method may not be possible with out-sourced integration teams. Another alternative is for the integration team to create a mock client that verifies the whole service, even if no real client exists that will use all of the service’s functionality. This would at least impose a cost constraint on the integration team that discourages the creation of services that are larger than necessary. Tools such as SoapUI and Postman can be used for this purpose.
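As a sketch of what such a mock client could look like (the endpoint, payload field and value list below are purely illustrative assumptions, not taken from any real service):

```typescript
// Illustrative mock client: exercise every documented value of a field,
// not just the ones the current client happens to use.
// The URL and payload shape are made-up placeholders.
const DOCUMENTED_VALUES = ["A", "B", "C", "D", "E", "F"];

async function verifyAllValues(serviceUrl: string): Promise<void> {
  for (const value of DOCUMENTED_VALUES) {
    const response = await fetch(serviceUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ category: value }),
    });
    if (!response.ok) {
      throw new Error(`Service rejected documented value '${value}': ${response.status}`);
    }
  }
  console.log("All documented values were accepted by the service");
}

// Usage (hypothetical endpoint):
// await verifyAllValues("https://integration.example.com/orders");
```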

The Delivery Storyboard

Storyboarding, or user story mapping, as described by Jeff Patton, is a central part of our Agile development process. We use it whenever we are doing feature discovery and it helps us structure our ideas without constraining the discussion. A lot of what we do involves integrations with other teams’ deliveries; usually two or more systems need to interact and will use a service layer as a communication broker, SOA in a nutshell.

Our team’s definition of Done is getting stuff into production. We do that quite well because we only put stuff in our backlog that we can deliver; our dependencies are managed elsewhere, usually on the backlog of the team we depend on.

But when it comes time to deliver the complete solution there are a lot of moving parts to keep track of, and one of my roles is to coordinate amongst the teams and make sure that each team knows what actions they need to take in order for the roll-out of the entire solution to be successful. Often this process takes weeks to complete because there can be data migrations, third-party upgrades, etc. My primary focus is the order things need to happen in, and which steps involve more than one team to make them work. Optimising the schedule of events comes later.

I immediately found it natural to extend user story mapping to planning product roll-outs. This gives us all the benefits of visualisation and discovery that happen when we do product discovery. All the teams can see how the roll-out will be done and where everyone is involved. I call this a Delivery Storyboard. The Delivery Storyboard is completely separate from the User Storyboard we use for product discovery.

The delivery storyboard features a backbone (blue post-its) and describes a flow from left-to-right as usual. The flow in this case is the flow of execution of the activities needed to complete the roll-out. Each backbone activity is broken down into tasks (yellow post-its) that are placed underneath. An example of a task could be “Deploy component X to server” or “Import data file to System A”.

Now for the cool part. Each column is independent of every other column, whereas everything in the column has to be executed more or less at the same time. In other words, we focus on executing all the tasks in one column until completion. The next column can be executed an arbitrary time later, but then all of the tasks in that column must be executed together as well. Repeat the process until the last column is executed and the roll-out is complete.

When all the tasks in any column are completed, the production environment should be left in a stable state and not dependent on other tasks in other columns for the time being. The challenge then is creating an execution flow that is flexible and does not have dependencies or hard time limits between the tasks in one column and the tasks in the next. Of course this can’t always be avoided, but one of the goals here is to visualise these types of constraints!

Another problem I have encountered is that some tasks in the middle of the flow need to be (or can be) executed first. Either the columns need to be reordered or the storyboard is trying to meet more than one goal. In the latter case, try writing down the original goal on a post-it and see if all of the tasks on the board are needed for that goal. Then write another goal on another post-it for the remaining tasks, and so on. Each goal then deserves its own storyboard (big or small). This mirrors the concept of the MVP (Minimum Viable Product) that Jeff Patton describes in his excellent book User Story Mapping.

And as always, hold regular stand-ups with all the teams involved, usually with one representative from each team if there are many teams. Depending on where in the execution flow the roll-out is, not everyone needs to be at every stand-up. The teams walk through the delivery process, breaking down the work into concrete activities with clear responsibilities.

Each task is the responsibility of a specific team, and every task is tagged with a coloured sticker to indicate the team responsible. During product development, tasks, user stories, etc. are usually maintained in the team’s product backlog, and this may still be so for some of the tasks on the Delivery Storyboard, but they are duplicated here because we want to visualise dependencies on other teams and where they will feature in the roll-out plan. If they have a JIRA issue number then write that on the post-it too.

A column with tasks that have different colour tags visually indicates where teams need to coordinate closely. That is pretty neat. Participants in the stand-ups can talk to each other about how they should collaborate to get the backbone item delivered successfully. During storyboarding sessions with the teams we can easily reorganise the tasks to minimise risk, reduce lead time and reduce downtime. The tasks in the column can also be ordered top-down to indicate the order of execution if meaningful.

When a task is completed you should mark it somehow, for example by crossing it out with a green marker. This provides a visual cue to focus on the remaining tasks, and green is a positive colour.

The Delivery Storyboard can be complemented with dates for when certain columns and/or tasks are to be executed, which is useful for planning to meet deadlines. However, the main focus is on the sequence of events, who does what, and where teams need to coordinate their deliveries. Finally, the board should contain only tasks that will be executed; hopefully we are not doing product discovery at this late stage.

A place in Wikipedia

For years I have been reading and writing in Wikipedia. Some time ago I created a page for my home village Kilcloon. Village, or parish or maybe census town? I revisited the Wikipedia article numerous times and was keen to expand it. During my research about the history of Kilcloon it became obvious that Kilcloon could refer to many things, the most common of which is the parish of Kilcloon as stated at the beginning of the Wikipedia article.

There are other definitions, such as the postal town of Kilcloon which applies to some, but not all, of the parish. For me, growing up near the centre of the parish, the postal town was synonymous with the parish name, but apparently this is not so for everybody. Do people still identify themselves as living in Kilcloon if they have a different postal town in their address? Nowadays people moving into an area do not automatically associate themselves with the parish they are in. Parishes and parish boundaries are managed by the Catholic Church, not the state.

More definitions

So how does the Irish state define Kilcloon? This depends on which authority you ask, and the answers are many! The postal service is run by An Post, and Kilcloon is the name of the postal town covering just part of the parish, as mentioned above. A direct question to An Post about which townlands were part of the Kilcloon postal town did not provide a very satisfactory answer, but all was not lost.

Ireland has recently introduced postal codes (eircodes), unique to each address, and these will replace the existing address system of townlands and postal towns, though the two systems are aligned for the time being since it is not yet mandatory to include an eircode when writing an address. It turns out that the areas covered by each of the eircode routing keys have been published on Google Maps. Kilcloon is now part of the A85 (Dunshaughlin) routing key and is actually a very distinct appendage to this routing key, as seen on the map. This, I believe, provides a definitive answer to what Kilcloon is from the postal service point of view.

Kilcloon also features in the Central Statistics Office (CSO) statistics as a “census town” or “settlement”. The Kilcloon settlement can be seen clearly on the CSO Small Area Population (SAP) map. This can be compared to Meath County Council’s definition of Kilcloon, which takes the form of four physical signposts centred around Ballynare Crossroads. This is the geographically smallest definition of Kilcloon that exists and could be defined simply as the “village” of Kilcloon, which is much smaller than the census town and contains only a fraction of the people who consider themselves to be living in “Kilcloon”.

Some history

And so back to the parish of Kilcloon. The modern parish of Kilcloon comprises several smaller medieval parishes, one of which was called “Kilclone”. My research shows that the medieval parish was often referred to as “Kilcloon” and this was used to name the modern parish. Every medieval parish was made up of townlands, one of which bore the same name as the parish; thus there exists a townland of Kilclone in the medieval parish of Kilclone. While the medieval parish names have disappeared, the townlands prevail and are a central part of the postal address system mentioned above. The local post office is called Kilclone Post Office, for instance, precisely because it is in the townland of Kilclone.

The townlands themselves have also been transformed through the ages and the modern townland boundaries differ to varying degrees from the boundaries as they were when the parishes were first formed. This is the subject of some amazing research and the results are available on townlands.ie. It has also provided the inspiration to create the maps I would use to illustrate the multitude of definitions of the place known as Kilcloon.

Maps

Based on all of this research, there were five definitions of Kilcloon that I wanted to create maps for: the parish, the townland (Kilclone), the postal town, the census town and the village!

The townlands website uses the fantastic OpenStreetMap together with the Leaflet JavaScript library to create maps of all of the Irish townlands, baronies and much more! The data is publicly available and I could extract the coordinates from the web page to create unique maps for the Kilcloon Wikipedia article. These first maps showed which townlands the modern parish of Kilcloon included, as well as which baronies the townlands were originally part of.

Medieval parishes and their associated baronies

The Routing Key map data could also be downloaded and used to render the Kilcloon postal area. Leaflet could overlay the A85 routing key onto the parish to see how they lined up!

Leaflet naturally allows points-of-interest to be displayed, so I created several maps showing the most important features of the parish. Finally, the trickiest maps to create were the parish and census town maps. The Kilcloon census town map is available on the CSO SAP map, but not the underlying data; still, I managed to extract the data through visual inspection. The village is defined only by physical signposts on the roads leading into Ballynare Crossroads, but I combined the positions of the signposts with property boundaries in the area to create a theoretical village boundary and added the coordinates to a Leaflet map.
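As a simplified sketch of how one of these boundary maps can be rendered with Leaflet (the coordinates below are placeholders, not the real boundary data):

```typescript
// Sketch only: render an area boundary as a polygon on an OpenStreetMap base layer.
import * as L from "leaflet";

const map = L.map("map").setView([53.46, -6.57], 12); // roughly the Kilcloon area

L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "© OpenStreetMap contributors",
}).addTo(map);

// Boundary coordinates extracted from the source data (placeholder values shown here).
const boundary: L.LatLngExpression[] = [
  [53.47, -6.59],
  [53.47, -6.55],
  [53.45, -6.55],
  [53.45, -6.59],
];

L.polygon(boundary, { color: "green" })
  .addTo(map)
  .bindPopup("Kilcloon (illustrative boundary)");
```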

Maintenance

Creating the maps required some straightforward JavaScript. I wanted the code to be open source since the maps must be maintained along with the Wikipedia page, so I added a simple index page to the code base that renders each map in turn and checked everything into GitHub.

Links

Kilcloon on Wikipedia
Kilcloon maps on GitHub

Scalable Observer Pattern

When developers talk about publish-subscribe design patterns I immediately think of the newspaper analogy. As described in Head First Design Patterns:

  1. A newspaper goes into business and begins publishing newspapers.
  2. You subscribe to a particular publisher, and every time there’s a new edition it gets delivered to you. As long as you remain a subscriber you get new newspapers.
  3. You unsubscribe when you don’t want papers anymore, and they stop being delivered.
  4. While the publisher remains in business, people, hotels, airlines, and other businesses constantly subscribe and unsubscribe to the newspaper.

As a software design pattern, this is known as the Observer Pattern. In this pattern the publisher is called the Subject and the subscribers the Observers.
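A minimal sketch of the Observer Pattern in TypeScript, using the newspaper analogy (the names are my own, not taken from the book), might look like this:

```typescript
// Observer Pattern: the Subject keeps direct references to its Observers
// and notifies each of them synchronously when a new edition is published.
interface Observer {
  update(edition: string): void;
}

class NewspaperSubject {
  private observers: Observer[] = [];

  subscribe(observer: Observer): void {
    this.observers.push(observer);
  }

  unsubscribe(observer: Observer): void {
    this.observers = this.observers.filter(o => o !== observer);
  }

  publish(edition: string): void {
    // The Subject calls every Observer directly: the coupling is to concrete
    // object references, and all delivery happens in the publisher's process.
    this.observers.forEach(o => o.update(edition));
  }
}
```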

Comparison of Observer and Pub-Sub patterns

The Observer Pattern has some limitations, notably poor scalability and tight coupling. Unlike the physical world of newspapers, it is possible to build an improved subscription service that does scale and is loosely coupled. This improved pattern is called the Publish-Subscribe Pattern (or “pub-sub”).
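A minimal pub-sub sketch, by contrast, introduces a broker and topics so that publishers and subscribers never reference each other directly (again an illustrative sketch, not any particular messaging product):

```typescript
// Publish-Subscribe: a broker sits between publishers and subscribers.
type Handler = (message: string) => void;

class Broker {
  private topics = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const handlers = this.topics.get(topic) ?? [];
    handlers.push(handler);
    this.topics.set(topic, handlers);
  }

  publish(topic: string, message: string): void {
    // Delivery is decoupled from the publisher; in a real system the broker
    // could queue messages and distribute them across processes or machines.
    (this.topics.get(topic) ?? []).forEach(h => setTimeout(() => h(message), 0));
  }
}

// Usage: publisher and subscriber only share the broker and a topic name.
const broker = new Broker();
broker.subscribe("news", msg => console.log(`Received: ${msg}`));
broker.publish("news", "Morning edition");
```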

Now you’re wondering: why name the pattern “publish-subscribe” when it does not behave like a newspaper? This has caused a lot of consternation in my discussions with other system architects. Unless both parties are aware of the naming convention used for these patterns, it can happen that one person is talking about pub-sub while the other thinks they’re talking about newspapers.

It would have been more intuitive to call pub-sub something like the Scalable Observer Pattern.

Information model vs. data model

As a software developer or architect you will probably have had at least one discussion about the difference between information models and data models. Why do we want to make this distinction? In practice drawing an information model is much the same as drawing a data model; both use the entity-relationship model for describing the world. ER-diagrams are easily transformed into the SQL used to create the table structure in relational databases (MySQL, MSSQL, etc.). So when do we need to create information models? Let’s look at an example.

Startup

ACME Trading has started a business selling pencils to its customers. They have set up a very basic ordering system to handle orders and ship goods to their customers. They designed a data model that will support the business software by examining the process (reality) of ordering goods and came up with the following:

The model has just two entities, one for the customer and one for the orders. These entities contain all the attributes needed to fulfil an order.
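As a sketch, the two entities might look like this as simple types (the attribute names are assumptions for illustration; the original diagram is the authoritative model):

```typescript
// Illustrative ordering-system data model: one Customer entity and one
// Order entity, with just enough attributes to fulfil an order.
interface Customer {
  customerId: number;
  name: string;
  deliveryAddress: string;
}

interface Order {
  orderId: number;
  customerId: number; // reference to the Customer placing the order
  product: string;    // e.g. "Pencil, HB"
  quantity: number;
  orderDate: string;  // ISO date, e.g. "2024-01-31"
}
```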

Growing

ACME is doing alright but they want to grow the business faster so they try doing some marketing. Again they build a simple application to support this business function. Examining the real world again they design the following data model.

The model contains just two entities; the customer again, this time with different attributes, and an entity called Contact Method.

Boom times

The marketing strategy is a success and ACME soon have to expand their operations and need to develop their existing systems to better handle the increased volume of customers and orders for pencils.

But now it’s becoming a hassle to have to create the customer in two systems and wouldn’t it be great if all customers created in the ordering system were also added to the marketing system automatically?

This shouldn’t be a problem as long as the two systems have compatible data models. In other words, a customer entity in the ordering system can map to a customer entity in the marketing system. But if it’s not possible, which system do we change? The ordering system is business critical so we may not want to mess with that one too much. However, ACME are thinking long-term and realise that they need a more robust representation of reality, one that the company can grow into.

At this point they go back to their view of reality and create a model that is independent of any system, a reference model if you will. This is called an information model. Or as Wikipedia explains:

An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas.

The information model now serves two purposes. First, to aid future software design in creating robust data models, for example by supporting different customer address types. Secondly, to enforce a common terminology across the system landscape and in the documentation, e.g. a mobile phone number is to be called “Mobile number” when writing user stories, test cases, defining class names and methods, creating database tables, etc.

In order for the Ordering system and the Marketing system to be able to exchange information, they can try to map their data models to the information model. All the existing data models and information models are modelling reality so the differences really arise from how faithful or granular the data model is compared to reality.

An organisation can have many data models, usually one per system, but should only have one information model. Different parts of the organisation may only be interested in certain entities and relationships and may create an information model for the parts of reality they are interested in, but these partial information models are really all part of the same organisation-wide information model, even if a complete information model does not yet exist. In very large companies this may not be practical or desirable especially where autonomy between divisions is encouraged.

An information model is almost never implemented as-is in a system. Firstly, an information model will often contain more entities and attributes than any one system needs to implement. The reverse is also true: data models will contain application-specific artefacts as well, such as entities needed to handle many-to-many relationships. Secondly, data models are optimised for the specific system that utilises them, meaning the developers have combined entities and attributes in ways that improve the performance of the database. Again, information models should not constrain the implementation of the data model.

Going global

ACME have now decided to establish operations in Europe and have opened a sales and support office in Sweden. The company is now multilingual. While the reality of ordering, shipping and marketing goods is the same globally, each country uses their own language to describe it.  
So when the Swedish sales office starts sending Requests for Change back to HQ, they are using words like Kund for Customer and Beställning for Order. They are referring to the same thing, but it is hard for the Swedish sales people to discuss the changes needed with the English-speaking developers.

The different language groups need to agree on a common terminology, and this can be neatly reflected in the information model, which also does not expose implementation details the way a data model does.
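Based on the terms already mentioned, the bilingual part of the model might simply record pairs such as:

  • Customer – Kund
  • Order – Beställning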

We can generalise and say that if English is the lingua franca of programming and programming languages, then there will always be a need to agree on the terminology in more than one language in non-English speaking countries. Put another way, the information model provides a useful bridge between the technical and business sides of the organisation which can often use different languages. While there are many tools that can be used to create information models, few have support for multiple languages in the same model unfortunately.

Conclusion

The difference between information models (IMs) and data models (DMs) can be summarised as follows:

  • IMs provide a formal description of the organisation’s view of reality.
  • There should only be one IM per organisation, but there can be many DMs, usually one per system.
  • IMs define the terminology that should be used in documentation and software development.
  • DMs are optimised for the application that needs them. IMs help future-proof the solution but should not constrain the DM.
  • IMs can support multilingual organisations where the business units are using another language than English.

In future articles I hope to discuss how information models can be used in integration platforms to aid the definition of canonical data formats when performing data mapping and also enforcing data access controls. Another area where information models are very important is Master Data Management and in the use of Data Standards.

Information models are also a visualisation of ubiquitous language which is an important part of Domain-driven design (DDD) and Behaviour Driven Development (BDD).

Business Process Modelling with BPMN

Having moved away from software development and design and more towards management of IT processes and services, I have found that Business Process Modelling is more applicable than UML to describing the kinds of processes I am encountering. This is not surprising, as UML is more IT-centric, and I needed more flexibility to capture the realities of how things work in real life. Yes, you can use a combination of UML diagrams to capture a real-world process, but this is not as intuitive to non-IT people, whom I encounter more often these days.

My first attempts at modelling a business process used activity diagrams, sequence diagrams and use case models. The use case model defined all of the actors involved (both people and systems), and the sequence diagram showed the message flow between them.

Figure 1 – Use Case Model Diagram
Figure 2 – UML Activity Diagram

However, this was still too low-level and I needed something that would capture the “big picture”. After all, a high-level process (e.g. a sales process) can naturally be broken down into sub-processes. Each level of detail provides meaning to the different layers of the organization as appropriate. Of course, UML is still important for helping to formally describe the resulting IT systems implementation.

The nice thing about BPMN is that you can practice it all the time. With UML you generally want to be working on something IT related, but BPM can be applied to any process. For instance, how do people get something to eat for lunch? Do they eat out or have they brought a lunch box? This process can be described using BPMN.

Figure 3 – Process for eating lunch using BPMN

If BPM interests you and you are reading this article, the chances are that you are a pioneer in your organization. BPMN is an industry-standard notation, so if you are learning BPMN, then the quicker you learn the rules and follow best practice, the more rewarding the result will be. I highly recommend the following two books:

Spending time formally documenting a process may seem like a waste of time in some ways. In the real world, situations change and people adapt or take shortcuts and the process model may be out-of-date in no time, but your BPMN model should not try to capture every detail or variation. More importantly, modelling a process using BPMN is an excellent aid to understanding how a given process currently works (even if it is dysfunctional). This process analysis can be much more complete when using a comprehensive notation like BPMN – if it can’t be modelled in BPMN then there is probably some wrong assumption or something hidden in the process that needs to be investigated. BPMN gives you the confidence to pursue a process analysis to its proper conclusion.

I will finish with an example of a process model I was grappling with recently. Systems integration is often done using messaging, typical of a Service Oriented Architecture. Files are transferred from one server to another and then imported into the recipient software system. (As this is an IT-centric problem I could of course have used UML to model this.) File transfer is either push or pull, in this case push. The sender places files on the recipient’s file system. The receiver checks for new files every few seconds and if it finds any it processes them.
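For concreteness, the receiving side described above amounts to something like the following polling loop (a sketch only; the directory names and interval are assumptions):

```typescript
// Sketch of the receiver: poll an inbox directory every few seconds and
// process any files found. Paths and polling interval are illustrative.
import { readdir, rename } from "node:fs/promises";
import { join } from "node:path";

const INBOX = "/data/inbox";
const PROCESSED = "/data/processed";

async function poll(): Promise<void> {
  const files = await readdir(INBOX);
  for (const file of files) {
    // "Process" the file, then move it so it is not picked up again.
    console.log(`Importing ${file}`);
    await rename(join(INBOX, file), join(PROCESSED, file));
  }
}

// Keep polling for as long as the service is available.
setInterval(() => poll().catch(console.error), 5_000);
```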

In BPMN, modelling system interaction is called a collaboration. The collaboration is named after the process, in this case “File transfer”, and the lanes are named after the actors. The first thing I had to figure out was whether to use events to show that a message had arrived. At the same time the recipient is busy polling the directory looking for files, and will continue to do so as long as the service is available.

The sender and receiver are modelled as two separate processes. The sender sends the file using a message activity with a message flow symbol attached.

Figure 4 – File transfer using BPMN

The message is sent to the recipient’s polling subprocess which can generate a non-interrupting escalation event (ooh!) (the little arrow in the dotted circle) to trigger the next activity that processes the files. The subprocess is looped (the little circular arrow), so it will continue to run after the escalation occurs (forever in this case).

So how did I know how to use a non-interrupting escalation? Well, the non-interrupting part just says that the event does not interrupt the subprocess flow, i.e. polling will still continue after files have been found. The escalation part just means that the polling process has found files and needs someone else to deal with them, so it notifies the parent process (escalation).

The diagrams were produced using Visio Professional 2016 which includes a function to validate the diagram according to BPMN 2.0 (“Check diagram”).