Strategic choices

Bringing together the necessary disciplines to form high-performance cross-functional teams is a key management decision when creating good software products. All software products are one-offs, unique, so having the product owner, designer, developers and testers in the same team is the most effective way to make the essential trade-offs when delivering “good enough” solutions. Add to this the importance of early feedback that only comes from doing iterative releases, and an empowered cross-functional team becomes the best vehicle for success in a highly competitive marketplace.

True product teams do exist, but more often teams are really just delivery teams, or at best feature teams. The distinction can be captured with the question: Are we giving the teams problems to solve or solutions to build? True product teams are trusted to come up with the best solutions to meet business objectives. Lack of trust is often a big issue, but so is a lack of maturity: product teams take time to form and usually need coaching. Management should also focus on the teams’ outcomes and allow them to do their jobs without too much interference.

So an alternative organisation is to give the product discovery activities to a separate “Product Team” with the necessary competencies: POs, designers, business analysts, etc. This team should then come up with the winning product concepts for the cross-functional teams to build. The other teams are then reduced to being delivery teams doing exactly what the “Product Team” decides. One side-effect of this is that there is now no room for doing experiments in the teams since the course is already plotted.

The problem with this model is that any team the company puts together will naturally be given a purpose with the expectation to deliver something useful. There is then pressure on this “Product Team” to come up with guaranteed money-spinners for the company, and they work hard to describe a viable product solution, often using high-fidelity prototypes. This results in a large chunk of work, essentially a requirements document, even if it is in graphical format, that must be handed over to the feature/delivery teams, who must then start over, making the necessary trade-offs and reorganising the work into iterations. The delivery teams may well use Agile techniques and tools to build the solution, but they are operating within a big waterfall process.

The best products are built by teams that care about the products they build and the customers that use them. Naturally, they will have insights and ideas about improvements (experiments) that can be made to the product, and in a true product team this is how discovery and delivery are combined to deliver just enough software to satisfy the customers’ needs. However, in the waterfall process described above the ability of the delivery/feature team to influence the product is limited because a) the waterfall process is one-way and b) the “Product Team” see concept work as being their sole responsibility. This is a major cause of frustration for the teams and as a result there is a big risk that the most engaged team members will leave to find companies where they can have an impact on the product.

The PO is a key member of the cross-functional team. If the team is a true product team then the PO will take total ownership of the product and be involved in all aspects of the product lifecycle, from concept and feature development to back-office processes and support tools, legal requirements, and more. However, in the waterfall process above, the PO essentially has only two roles: one as a delivery manager in the delivery team, the other as a feature expert in the “Product Team”. Neither role covers the totality of a true Product Owner role; in fact, the waterfall process is really just driving a feature factory. Of course, delivering customer value is the most important thing the team can do, but it is not the only thing; the problem is that the waterfall process does not support delivering other types of value.

By giving responsibility for the two closely interlinked processes of discovery and delivery to different teams, management must ensure that a good relationship exists between the teams, and that the one-way waterfall process is instead a two-way exchange of ideas between partners. In the worst case, management are just prioritising feature delivery over every other type of work, ignoring the fact that different types of value exist, value which can have just as much impact on the company fortunes as feature development.

Companies already in the situation described above can try to improve it, but a sense of trust has to exist between all of the teams to do so, because changing how we work requires that we trust that the changes are for the good of the company and not only for the good of one team. This requires management support.

JIRA: Workflow transition rules vs. workflow-triggered automation rules

Jira now provides a powerful way to build automated process flows using Automation rules. These rules can be triggered in different ways; one such way is when an issue is transitioned. However, automation rules are not the only thing that can be triggered on an issue transition: Jira Workflows have Post functions that trigger when an issue transitions from one state to another. So what is the order of execution in that case?

The post functions always contain a “Fire a Generic Event event” function or similar, which Automation rules can listen for, but regardless of where in the order of post functions the event is fired, the automation rules are always executed after all the post functions have been executed. I learned this while discussing Automation rule behaviour in the Jira support forums: specifically, all the post functions in a transition are executed as a single atomic transaction.

On reflection this is not so strange: triggering and listening for events is an asynchronous process, meaning that the process triggering the event will not wait for the listener (or listeners) to act. And if the post functions were not executed as an atomic transaction, then there would be a risk that an automation rule could trigger at some arbitrary point during the execution of the post functions, creating a race condition with undesirable consequences.
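
As a purely conceptual sketch (Python, not how Jira is actually implemented, and with made-up function names), the ordering can be pictured as: run every post function first, then hand the event to any listeners, such as automation rules.

from typing import Callable, List

def run_transition(post_functions: List[Callable[[], None]],
                   listeners: List[Callable[[str], None]]) -> None:
    # 1. Execute every post function first (in Jira these run as a single
    #    atomic transaction).
    for post_function in post_functions:
        post_function()
    # 2. Dispatch the event only after all post functions have run, so
    #    listeners (e.g. automation rules) always see the state left by the
    #    post functions, regardless of where in the list the event was fired.
    for listener in listeners:
        listener("Generic Event")

run_transition(
    post_functions=[lambda: print("post function: update issue"),
                    lambda: print("post function: fire Generic Event (queued)")],
    listeners=[lambda event: print(f"automation rule triggered by: {event}")],
)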

Fidelity

In this post I want to talk about fidelity, meaning the resolution we give to our product prototypes. Marty Cagan talks a lot about the different types of prototypes that can and should be used during discovery. A prototype should only just be good enough (i.e. have high enough fidelity) to verify an idea. We don’t want to spend any more time making it than necessary, because in the end we will throw it away once we build the real product.

To understand if a feature will be valuable and usable, the designers can create wireframes/mock-ups/prototypes in a tool like Figma. It allows the designers to create realistic screens and also run simulations of the real app (just the flows, no data). The risk is that the designers go all-in, creating high-fidelity prototypes that the developers must treat essentially as a requirements document, with all the difficulties that that entails:

  • The developers have to build in increments (screen-by-screen), rather than being able to deliver iteratively (starting with a simple story and elaborating).
  • Changes to the design mean that a large prototype has to be frequently updated, requiring the team to spend time figuring out how to maintain the prototype rather than spending time collaborating on delivery of the next story.

Also, the more waterfall the process is, the more overloaded the prototype becomes with all the information the developers need to build the product. The result is a very high-fidelity prototype that actually contains many different types of information:

  1. The flow (process) the user follows
  2. The structure of the information presented in each screen
  3. The graphical details (colour scheme, copy, etc.)

The Agile way

So how should we design prototypes as part of an iterative development process? There is still the need to capture the conceptual integrity; the team still need prototypes to verify the value and usability of potential solutions. The answer lies in the fidelity of the prototypes.

To verify the conceptual integrity of the solution, it is enough to capture the flow, name the activities and identify the states the customer or feature is in. High-fidelity prototypes would be replaced with simple boxes, arrows and labels, and would actually more closely resemble the process diagrams created using BPMN.

If more fidelity is needed at this stage, in order to verify usability for instance, then it should be added to those screens where it is needed, rather than the whole prototype. But even if creating a full-scale high-fidelity prototype is justified at this stage, it should still not be delivered as a big-bang to the development team for the reasons stated above.

The story starts here

Once the team have identified a valuable, usable, feasible feature that can be built, the next step is to break down the work into smaller pieces, eventually arriving at INVEST-type stories that can be used to create potentially shippable software at every iteration. These stories should be supported by corresponding prototypes which contain all the structure and graphical details needed by the developers to be able to build the feature. The key here is that the designer creates specific prototypes for each story that the team have defined, rather than just referring to some combination of screens in an existing full-scale high-fidelity prototype.

Now the team will have a story-size high-fidelity prototype that only contains enough information for the story they will work on next. Even if the details change, it will be on a much more manageable scale. In fact, creating this small high-fidelity prototype should not be the end of the collaboration between the designer and the developers. They should continue to work closely together during development and make changes directly to the product rather than updating the prototype (which will be obsolete as soon as the story is finished). This avoids the need for any elaborate maintenance procedures.

The flip-side of this is that the earlier full-scale low-fidelity prototype will also be easier to maintain because it only represents the flows, which also should be quite stable even as discovery continues during the development phase. In other words, it is the structure of information and graphical details that are most volatile and should therefore be modelled as close in time to development as possible (“just-in-time”).

Summary

Using this just-in-time approach, the designers would still do about the same amount of work as before, with the difference that the more volatile design elements would be created in collaboration with the team, and the most volatile elements would not be captured in Figma at all but added directly to the product in collaboration with the developers, e.g. using pair programming.

The importance of a reference architecture

The purpose of a reference architecture is to identify the architectural principles that apply when creating a sustainable and scalable solution. As new additions are made to the solution, the reference architecture becomes the yardstick that all solution proposals can be measured against, and so enables a fair comparison of ideas.

A key test of the principles used to create an architectural model is whether they display conceptual integrity. Proposed solutions and additions to the solution must respect the conceptual integrity, show where they deviate from these principles, and motivate why it was necessary to do so.

Deviating from these architectural principles is often (well-)motivated for reasons of time and cost when implementing the solution. However, without an explicit reference architecture one cannot measure the effect of these compromises, compromises that generally result in higher maintenance costs. And the larger the deviations from the reference architecture, the greater the risk of higher maintenance costs and the harder it will be to continue developing and scaling up the solution in future.

Over time, the tendency is always towards greater and greater divergence between the solution and the model, until finally the conceptual integrity of the solution can no longer be discerned. How then do we preserve the conceptual integrity? Through training and information dissemination, what the Agile world calls shared understanding. The development team must share the same picture of the reference architecture if it is to be maintained going forward; and it will also be much easier to explain how a solution works if it adheres to architectural principles than if it does not.

An essential part of a reference architecture is the creation of architectural artefacts, such as information models, state machines, process diagrams and so on. Using standard modelling notations such as UML and BPMN reduces the risk of ambiguity and makes knowledge sharing that much easier.

Where to start? Try to identify what type of solution it is you are building. Does it fit into a known pattern? Try to find the appropriate technical literature (in book format and online) that provides a frame of reference (including vocabulary), and use it to create a reference architecture which can then be applied to the solution.

Further reading

Martin Fowler: Software Architecture Guide

Agile and well-being

I recently read an article about how to help someone get back to work after a long absence, perhaps due to illness or burnout. There was lots of good advice, such as keeping colleagues informed about adjustments to working hours and limiting responsibilities, among other things. But what struck me was how much of the advice reminded me of the Agile way of working:

  • create clearly defined tasks
  • allow space to work on one thing at a time
  • provide support with prioritising
  • do not set short deadlines
  • set a clear plan for the week and a review at the end of the week
  • ensure delegation of tasks is done via a single channel

This could be from the Kanban playbook. To put it in Agile terms:

  • Tasks should have a clear definition of done
  • Developers should pull tasks, not have them pre-assigned
  • The Product Owner prioritises all work
  • Focus on outcomes not deadlines
  • Set clear goals and use daily stand-ups to ensure progress
  • Nobody outside the team can assign work to the developers

So you could say that creating flow does not just improve the team’s efficiency, it also contributes to the continued well-being of your employees.

INVESTeD

I use the INVEST criteria to help teams define good User Stories. This would normally be sufficient to get any one story into production, but in the case of a new feature (or MVP) this has frustratingly not been enough; the stories just pile up in a feature branch until the team feel there are enough of them to deliver real value to the customer.

I have discussed with the teams how they can enlist the help of alpha and beta testers to get early feedback on new features that are not functionally complete. Here I add that the feature should still possess conceptual integrity. For instance, the first story in the feature might just allow the customer to log in and log out. This does not deliver any real value to the customer, but it is testable, and it possesses conceptual integrity.

There are some obvious signs when this early testing doesn’t happen: the team hasn’t released anything for a month or two, the stories have been piling up in the Done column, and the PO is feeling a bit stressed. In these situations, the coach can ask the team:

What is stopping the team releasing something tomorrow to customers, friendly or otherwise?

This always starts an interesting discussion and the team usually identifies a (short) list of things to do to get the unfinished feature in front of some friendly customers. This gives the team a much-needed feeling of achievement, but more importantly they can start getting real feedback on the new feature.

This is a win for the coach, but it is still a reactive process. How can I make this a proactive part of the software delivery process? What I want is to encourage the team to really think about their Definition of Done much earlier. What I am hoping for is that the team will set a goal that goes something like this:

The stories the team prepare during backlog refinement must be delivered to customers (internal users, early adopters, etc.) as soon as each story is finished.

So from now on I will include “delivery” in my discussions with the teams by extending the definition of the criteria for a good User Story: Independent, Negotiable, Valuable, Estimable, Small, Testable and Deliverable; or INVESTeD for short.

This builds on a definition that is already familiar to the teams, and so it will be natural to think about how to meet these criteria right from the start.

Delivering early

“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”

The Agile Manifesto

When I am working with cross-functional development teams, I use the storyboarding technique to help the team break down their work. This usually involves identifying an MVP and creating a prioritised backlog. I also encourage the team to use the INVEST criteria to create good quality stories. So far so good.

What happens next is that the team start development with the goal of delivering the MVP to customers. The problem is that the MVP is still a finished product that usually contains many stories (i.e. several weeks of work) and there is little or no customer collaboration once the concept phase is over and development starts.

Delivering an unfinished MVP is often not considered valuable for customers, but it should always be considered valuable for the development team, because getting feedback is critical to building the right thing. There are three major obstacles to making this happen.

Lack of alpha/beta users

Even the very earliest releases of an MVP (e.g. “Hello world”) can be delivered; it is just a matter of defining who the customer is and setting the right expectations. These can be real customers who are willing to test features under development in return for some discount later, or else they can be people in the organisation (but not in the team) who are interested in the success of the product; stakeholders are good candidates, for example.

The team of course should continue to do demos of the product, both within the team and the organisation, but this should not be used as a substitute for hands-on customer testing.

Packaging unfinished features

Ideally, each story adds some piece of value while maintaining the conceptual integrity of the product. In other words, the customer should still always have a good user experience, and can readily distinguish limited functionality from buggy software. This is extremely important because what the developers are interested in is valuable feedback.

If the developers do not package the story in a good way, leaving broken links etc., then it will end up wasting the time of the customers and the developers. It also affects customer engagement negatively; customers want to know that the developers treat their time as valuable. Customers are also less likely to test exhaustively if they know that some things are broken (they just don’t know which ones).

Overhead of intermediate releases

“Deliver working software frequently” is one of the principles of the Agile Manifesto. Automating the release/deployment process is essential to achieving this. The harder it is to make releases, the less frequently the team will want to make them, and in the case of unfinished features the reluctance will only be greater.

Overcoming inertia

The combination of these three obstacles can create a huge inertia in teams to make early releases. So is it worth the hassle of making early releases? The answer must be yes. Making frequent releases is an essential tool of any Agile team regardless of whether they are delivering an MVP or incrementally improving an existing product.

Packaging the unfinished feature is also good developer practice, after all, who knows when time or money will run out? (“Responding to change over following a plan”).

Utilising alpha/beta testers outside the team is also a good way to create visibility for stakeholders who have a natural incentive to see the end result first-hand. Also, delivering what you’ve done so far is always much better than wasting time giving estimates (“Working software is the primary measure of progress”).

Managing sub-tasks on a Jira Kanban board

The Kanban board is used to visualise the team’s work. This is usually a mix of Bugs, Tasks and Stories. Good stories should follow the INVEST criteria. If the team are using Jira, then it also allows them to create sub-tasks for Tasks and Stories. Sub-tasks are a useful way for the developers to create a “Todo” list for the implementation, e.g. “setup database”, “create service”, etc., without exposing the gory details to the rest of the team.

Whenever the team is looking at the flow of value across the board, these implementation details are usually not interesting, and that is why sub-tasks are usually not shown on the board. However, when a developer is discussing their current progress (e.g. during stand-ups), this information can be a useful recall aid. This is especially true if the team are creating vertical stories, which usually require multiple developers (front-end and back-end) to work on them, and therefore the story cannot (and should not) be assigned to any one person. Instead, it is the sub-tasks that provide context.

A Jira Kanban board can also be filtered per user; so if sub-tasks are shown on the board, then the team can apply the user filter to quickly see the sum of what any one developer is working on: sub-tasks, tasks, stories, etc.

Displaying sub-tasks on the board is easy to configure, but there are some other changes that the team might need to make as well. For instance, how to hide “Done” sub-tasks without hiding stories that are due for release. I will cover each of these in the following sections.

Displaying sub-tasks

Every Kanban board has a Filter Query that controls which issues are displayed. If only certain issue types are displayed, then the filter must be updated to also include sub-tasks. In that case, go to Board settings, General, and edit the Filter Query to include “All Sub-Task Issue Types”. For example:

project = "ACME" AND issuetype in (subTaskIssueTypes(), Story) ORDER BY Rank ASC

If the sub-tasks are using a different workflow, then it is presumably a simpler workflow than the Stories they are a part of. Just make sure that any unique sub-task workflow states are added to the board columns. This can be configured under Board settings, Columns.

Immediately, the team will be able to see all sub-tasks on the board and can filter them per user by clicking on the avatars at the top of the board. The next step is to create a toggle to hide/unhide sub-tasks.

Toggling sub-task display

Displaying sub-tasks inevitably leads to a lot of clutter on the board. It is also important that the team can maintain focus on the flow of Stories and not just sub-tasks. To facilitate this the team want to be able to hide sub-tasks at will.

Under Board Settings, Quick Filters create a new filter called “No Sub-Tasks” and set the query to be

issuetype not in subTaskIssueTypes()

This Quick Filter will appear at the top of the Kanban board and, when pressed, will temporarily hide all sub-tasks, making the board appear as it did before sub-tasks were added.

Definition of Done

Sub-tasks should have a simple lifecycle. The developer who performs the sub-task is responsible for its testing and integration into the feature branch. Only when all sub-tasks in the Story are completed can the acceptance criteria for the Story be tested. However, the sub-tasks will linger on in the Done column forever unless they are explicitly removed.

Jira Kanban boards provide a “Kanban board sub-filter” for hiding issues that are part of a release (by setting the “Fix version”). However, it is not desirable to make sub-tasks part of a release; other options exist. Here is a summary of all of the alternatives:

  1. Include the sub-tasks in the Release. This unfortunately pollutes the list of Stories included in the Release, and makes the Release notes unusable.
  2. Build an Automation to create a dummy release just for sub-tasks, that is scheduled to run, say, every week. This is a reasonable workaround, but pollutes the release history and (perhaps not so important) puts the stories and sub-tasks in different releases.
  3. Use the “Hide completed issues older than” option under Board Settings, General. This is a blunt instrument; the problem is that it makes no distinction between Stories and sub-tasks and could end up hiding Stories that are Done but delayed for release.
  4. Adjust the board Filter Query to exclude sub-tasks after a certain time has elapsed (e.g. 1 week). This is the least invasive way to effect what is essentially a visual change needed to control which issues are displayed on the board.

I recommend the fourth option; it is easy to set up and modify and does not impact any other aspects of the issue lifecycle, such as Fix versions. To do this, the Filter Query can be modified to hide sub-tasks that have been done for longer than a given period; in this example, 1 week:

project = "Acme" AND (issuetype in (subTaskIssueTypes(), Story) OR (issuetype in subTaskIssueTypes() AND (status != Done OR resolved >= -1w))) ORDER BY Rank ASC

Summary

Displaying sub-tasks on the team Kanban board allows the team to see in one place exactly all the issues the developers are working on. The new “No Sub-Tasks” Quick Filter allows the team to retain their existing overview of Stories, Tasks and Bugs while allowing them to toggle the display of sub-tasks to support different conversations.

Improved sub-filter for Jira Kanban boards

If you are using releases and Kanban boards in Jira, then you will most likely have a problem with issues not showing on the Kanban board. Specifically, issues are hidden if they have been released but their status is something other than “Done”. This can easily happen, as Jira does not check that all issues are done before executing the release. The problem is the Kanban board sub-filter:

fixVersion in unreleasedVersions() OR fixVersion is EMPTY

This means that if the release version of an issue is set, then the issue will be hidden as soon as the release is made in Jira, regardless of the issue’s status. From the team’s perspective this is probably perfectly fine: these issues were done in practice, it was just that their status was incorrect in Jira. In the worst case, though, real work is hidden. Another problem is that even if the work is done, leaving the issues open skews the data Jira relies on for the various graphs and metrics that it provides, e.g. team velocity.

To fix this, what we want is to only hide issues that are released and have status “Done”. To do this we update the filter to:

(fixVersion in releasedVersions() AND status != Done) OR
fixVersion in unreleasedVersions() OR fixVersion is EMPTY

Now all issues are displayed until the team actually sets the status to “Done”, which is the more intuitive behaviour. And if the release was made before the issue was finished, the issue will disappear from the Kanban board as soon as its status is set to “Done”.

Expect that old issues will reappear on the board when the filter is applied, but this doesn’t take long to clean up. Alternatively the filter above can be used to search for these old issues and close them first, but it is probably easier and better to let the developers do it themselves.

References

JIRA Software Kanban board does not show all Issues.

The goal of the Agile coach

Agile teams often use velocity as a metric to measure the team’s performance. This is a measure of the throughput of the team, how fast they are at delivering stuff. But this metric alone cannot be used to determine if the team are making customers happier or helping the company to make money.

To illustrate the problem, let’s suppose we have three delivery teams in a chain: a design team, a backend (BE) team and a frontend (FE) team. The first team pulls stories from the backlog and feeds them to the next team, and so on until the feature is delivered to the customer. Throughput is low, so the company hires a couple of Agile coaches to help the teams work more efficiently. Their goal is to:

Increase team velocity.

The coaches are good at their jobs, helping each team create INVEST-type stories, remove impediments, focus on delivering one thing at a time, adopt CI/CD, and so on. They soon maximise each team’s efficiency, realising its full potential; efficiency reaches 100%, hurrah! But wait, there is still a bottleneck: the capacity of the FE team is limiting the flow of features to the customer. The queue of stories is also problematic, as the all-important shared understanding between teams is quickly lost as the wait-time between work centres becomes longer. What did the coaches miss?

What the scenario shows is that the Agile coaches cannot focus solely on the velocity of stories in the individual teams. It is still a useful measurement for planning team capacities, but the goal cannot be to maximise this value. If velocity can’t be used to solve the queue problem, then the coaches need another measurement that does. Observe that the wait-time between work centres delays the feature getting to the customer. In other words, wait-time adds to the total time needed to design, build and deliver features, i.e. it increases lead time. The coaches are given a new goal:

Increase team velocity while minimising the lead time for new features.

When should the coaches start measuring lead time? Probably from the time the company commits to delivering the feature. How do the coaches measure lead-time? Well, if the product owners are using a Kanban board for example, then they can just write the date on the card when they committed to building and delivering the feature. Then, when the feature begins its journey across the board the coaches can measure how long it took to reach the customer.
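
As a sketch of the arithmetic (the dates are invented), lead time is simply the elapsed time from the commitment date written on the card to the date the feature reached the customer:

from datetime import date

committed = date(2024, 3, 1)   # date written on the card when the company committed
delivered = date(2024, 4, 12)  # date the feature reached the customer

lead_time = delivered - committed
print(f"Lead time: {lead_time.days} days")  # Lead time: 42 days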

The coaches now have two conflicting metrics. On the one hand the team want to maximise their story velocity (local efficiency), but on the other hand the company wants to reduce time-to-market (TTM), i.e. minimise lead time. (I am deliberately ignoring throughput, which is the rate at which features are delivered. Even if throughput is high, it could still take months to deliver any one particular feature if the lead time is long. Thus, adapting to change becomes hard.)

Conclusion: the team must subordinate itself to the company’s goal. This means that if there is a downstream bottleneck (the FE team), then the BE team cannot keep pushing more work into their queue. In practice this means that the BE team cannot start a new story until the queue is cleared. The best way to manage this is using pull instead of push. If the BE team is finished with a story, then it is still included in their WIP limit (Work-In-Process rather than Work-In-Progress) preventing the team from starting a new story. When the FE team is ready to work on a new story they pull it from the BE team.
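
A small simulation sketch (Python, with invented processing times of 2 days per story for BE and 3 days for FE) illustrates the difference: under push, the per-story lead time grows with the queue in front of the FE team, while under pull with a WIP limit the lead time stays flat at the pace of the bottleneck.

BE_TIME, FE_TIME = 2, 3   # assumed days per story for each team
N_STORIES = 10

def simulate(pull):
    """Return the lead time of each story under a push or pull policy."""
    lead_times = []
    be_done = 0.0        # time the BE team finishes its current story
    fe_done = 0.0        # time the FE team finishes its current story
    prev_fe_start = 0.0  # time the FE team pulled the previous story
    for i in range(N_STORIES):
        if pull and i > 0:
            # WIP limit: BE may only start a new story once FE has pulled
            # the previous one (the finished story counts as work in process).
            start = max(be_done, prev_fe_start)
        else:
            # Push: BE starts the next story as soon as it is free.
            start = be_done
        be_finish = start + BE_TIME
        fe_start = max(be_finish, fe_done)   # FE works on stories in order
        fe_done = fe_start + FE_TIME
        lead_times.append(fe_done - start)
        be_done, prev_fe_start = be_finish, fe_start
    return lead_times

for name, policy in (("push", False), ("pull", True)):
    times = simulate(policy)
    print(name, [int(t) for t in times], "average:", sum(times) / len(times))
# push [5, 6, 7, 8, 9, 10, 11, 12, 13, 14] average: 9.5
# pull [5, 6, 6, 6, 6, 6, 6, 6, 6, 6] average: 5.9

The numbers are invented, but the shape of the result is the point: with pull, the BE team is sometimes idle, yet every story reaches the customer sooner.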

Do the BE team go home and wait for the FE team to pull the story? Ideally yes, in reality, no; there is always work to be done. Software systems pay a perpetual rent, what can be called maintenance debt, that must be continuously paid off to prevent the slow glide into non-compliance and obsolescence. In practice, the team should maintain a technical backlog of work that they can do while waiting for the bottleneck to pull work.

A more realistic scenario would be that the Design team releases the story simultaneously to the Frontend and Backend teams to work on. This is what happens in loosely-coupled architectures; the two teams can agree on a contract and then work independently of each other to deliver their respective parts. This would improve lead time, but there is still the problem of one team running faster than the other. What happens is that the bottleneck has simply moved to the end of the delivery process, where the feature is delivered to the customer.

What more can our Agile coaches do to reduce lead times? Cross-functional teams are considered a good thing in Agile, can they help with reducing lead times? Let’s illustrate that in a completely new diagram.

What’s that? I just drew a box around the old diagram you say? OK, yes, I did. Instead of three separate delivery teams, we now have one product team. It’s still the same mix of competencies and the same system architecture, so why would we expect the delivery process to behave any differently?

So how is using a cross-functional team better? Well, the delivery teams now have a common goal that is set by the Product Owner. (The PO must express the company’s goal in terms that are meaningful to the team.) Also, the potential for collaboration and innovation, and the ability to “build the right thing” can be fully exploited. What about velocity and lead time? Maximising velocity now also means reducing lead time. Since there is one WIP limit for the whole team, there are no queues, further reducing lead time. Well done coaches!

OK, let’s take a step back. The coaches are now using two metrics to achieve the goal of reducing lead times efficiently. The new cross-functional team is pumping out features faster than ever. Are the customers happy? Is the company making more money? Eh, still no idea. How do we measure customer happiness or the return-on-investment for all the features the team is delivering?

The team must find some way to measure the effect a feature has on customer growth or customer retention, or increase in revenue, or whatever is important to the company. However, the team’s velocity enables it to fire off lots of features in rapid succession, making it impossible to know which features are actually the ones that are helping the team achieve its goal, and which are just adding to system complexity and maintenance debt. Let’s call this the feature success rate, i.e. what percentage of features released move the team towards their goal.
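
As a sketch (the feature names and outcomes are invented), the feature success rate is simply the share of released features whose measured effect moved the team’s chosen metric towards the goal:

# Each released feature paired with whether its measured effect (e.g. on
# retention or revenue) moved the team towards its goal.
released_features = {
    "one-click checkout": True,
    "animated splash screen": False,
    "saved searches": True,
    "social media sharing": False,
}

success_rate = sum(released_features.values()) / len(released_features)
print(f"Feature success rate: {success_rate:.0%}")  # Feature success rate: 50%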

Once again there is a conflict between two metrics: velocity and feature success rate. The customers must be given the time to evaluate each feature in turn in order for the team to know if it was successful or not. So now the customer has become a bottleneck with a WIP limit = 1. How do we increase the customer velocity?

One way is to divide the customers into groups, so-called A/B testing with each group evaluating different features or different versions of the same feature. But this is the most expensive way for the team to find out if they have built the right thing. Instead the team should try to figure out as early as possible and as cheaply as possible if a feature will move the team towards their goal: customer surveys, impact mapping, wireframes, etc.; whatever it takes to validate assumptions while building as little as possible. Also, when choosing between features the team should pick those that have the biggest impact. For our intrepid Agile coaches the goal is finally expressed as:

Increase team velocity and minimise the lead time for new features while increasing feature success rate.

Summary

One surprising result of this analysis is that it is neither possible nor desirable for developers to spend 100% of their productive time developing features. This is something every Product Owner of a cross-functional team must be aware of. It is due to the capacity constraints of the different competencies in the team, the variance in the work itself, WIP limits and the bottleneck (wherever that happens to be in the flow).

Developers must maintain a technical backlog to work on when they are blocked by WIP limits or starved of new work. Like automated testing, developers must also spend time devising methods to measure the impact of whatever features they create. This holistic approach will also help the team members better understand the team’s goal.

The purpose of this analysis was to identify the goal for an Agile coach and to find the minimum number of metrics that the Agile coach needs in order to know if they are moving towards their goal. My conclusion is that these metrics are:

  1. Team velocity to aid capacity planning and measure efficiency
  2. Lead time to shorten TTM and allow the team to adapt to change quickly
  3. Feature success rate to minimise the number of features used to meet the team’s goal

In short, the Agile coaches are moving towards their goal if the team’s velocity is increasing, lead time is reducing and feature success rate is increasing.

References