The CIO’s Guide to Breakthrough Portfolio Project Management by Michael Hannan, Wolfram Muller and Hilbert Robinson

Rating: 8/10

High-Level Thoughts

A fantastic productivity resource on how to get more done when you're managing multiple projects. Even if you're not a CIO (which I'm not), you can learn a lot from this book about how to better manage your many projects.

Summary Notes

When we talk about breakthrough portfolio-wide improvements, we mean selecting much higher-impact projects, at least doubling the number of them that your organization can complete, and being able to deliver over 90 percent of them within plan—all within existing resource constraints.

The three most important objectives for any project portfolio are:

  1. Selecting the right projects.
  2. Maximizing the portfolio’s throughput of project completions.
  3. Optimizing the portfolio’s reliability of project completions.

Two of the most pervasive standards in IT project management are the CMMI Institute's Capability Maturity Model Integration (CMMI) and PMI's Project Management Body of Knowledge (PMBOK), with its associated standards and certifications. CMMI focuses on improving the underlying processes required for successful IT project delivery, while PMI provides a set of foundational, "generally recognized good practices" that it reinforces via its certifications.

There is significant emphasis on what we would consider "input metrics"—such as repeatable processes and practices—without corresponding outcome metrics to assess whether this repeatability actually helps improve throughput or reliability.

This discipline comes from the Theory of Constraints (TOC), and its logic is straightforward: In any system, there is one function, resource, process area, or process step that constrains the entire system’s ability to deliver on its mission.

Once an organization has identified its system constraint, it knows that any improvement anywhere other than at the constraint will have little or no impact on overall organizational effectiveness. Putting this concept into practice helps provide much-needed clarity on where to focus improvement efforts.
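
A minimal sketch of that logic, with made-up stage capacities: system throughput equals the constraint's capacity, so improving any other step yields nothing.

```python
# Hypothetical stage capacities, in project tasks per week.
stages = {"intake": 50, "build": 12, "test": 30, "deploy": 40}

print(min(stages.values()))   # 12 -- "build" is the system constraint

stages["test"] = 60           # improve a non-constraint step...
print(min(stages.values()))   # ...still 12: no system-level gain

stages["build"] = 15          # elevate the constraint itself...
print(min(stages.values()))   # ...15: the whole system improves
```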

Throughput per Constraint Unit (T/CU). One way to think of T/CU is in terms of "effective throughput," as it represents what we actually expect to achieve, given what we know about how our system constraint limits throughput. I simply need to get defensible estimates of T/CU for each project candidate, and fund the highest-scoring ones that my budget and available CUs can support.

We now have a somewhat improved project-selection metric that we'll call "Effective ROI," as it calculates the actual ROI expected when taking into account the system constraint: Throughput per Constraint Unit, per Investment (T/CU/I).
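
As a quick illustration of how T/CU/I might drive selection, here is a sketch with invented project data: throughput (T) in dollars per year, constraint units (CU) consumed, and investment (I) required.

```python
# Invented project candidates: (name, T in $/year, CUs consumed, I in $).
candidates = [
    ("CRM upgrade",    900_000, 30, 400_000),
    ("Data warehouse", 600_000, 10, 300_000),
    ("Portal rewrite", 800_000, 25, 150_000),
]

# Effective ROI = T / CU / I, per the definition above.
for name, t, cu, i in sorted(candidates, key=lambda c: c[1] / c[2] / c[3], reverse=True):
    print(f"{name}: T/CU/I = {t / cu / i:.2e}")

# Fund from the top of the ranking until budget or available CUs run out.
```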

At some point, however, most or all of this hidden capacity will get used up, such that any further projects delivering new capabilities into operation will only serve to overload the constraint, degrading throughput. As a result, the only projects that make sense at that point are those that can actually expand capacity at the constraint.

Keep Susan focused. Protect her from ad-hoc tasking, while maximizing single-task focus that is aligned with organizational priorities. Make clear that her priority is no longer responding to fires, but staying focused on executing the assigned task at hand. Schedule her resource to take on project tasks according to a given number of available hours per day or per week, and operations tasks for the remainder of her time.

Subordinate all other resources to Susan. In other words, all resources other than Susan (non-CUs) must do whatever they can to help alleviate the pressure on Susan. Even minor assistance can have a big impact—we’ve even seen examples of organizations asking non-CUs to go bring Susan her lunch so she can maximize her available CU time. Even better is when non-CUs shadow Susan and document some of her more repeatable approaches, such as how best to troubleshoot a particular system; oftentimes, the non-CUs even find ways to automate or simplify these approaches, further freeing up Susan.

Generate more Susans. While this may well take more time and effort—and will likely require even more of Susan’s CUs initially—it simply must be done. For example, there must be deliberate efforts to have non-CUs pick up knowledge or skills that only Susan currently has, such as gaining expertise in a critical operational system that Susan knows really well.

A critically important final point on this: If you can find a way to get more projects done without adding resources, you will have a greater ability both to expand capacity at the constraint, and to use that additional capacity to drive up throughput.

When we say “portfolio throughput,” we are specifically referring to how many project completions an organization can achieve over a given period of time.

We can boost highway throughput in a number of ways—here are some common ones:

  1. Keep the road free of impediments and in good working order.
  2. Increase traffic density (e.g., carpooling).
  3. Meter the on-ramps whenever their inflow slows the main flow of traffic.
  4. Recruit underutilized resources elsewhere (such as lanes usually devoted to opposing traffic).
  5. Try to make the cars go faster.
  6. Build an additional lane or two.

In PPM, we tend to focus mostly on trying to make the cars go faster—even when the highway is all jammed up, sometimes causing accidents, and usually frustrating everyone on the highway who can’t get where they want to go. We also tend to jump right to trying to add a lane or two—which is rarely quick, easy, or inexpensive, if it’s a feasible option at all.

While speed is important, and adding capacity may well be in order, let’s start by getting traffic to flow.

So we need to meter the flow of on-ramp traffic in a way that keeps highway traffic flowing. In the PPM world, we call this project staggering.

the first person to enter the highway will go as fast as he wants, but if all we have is that single individual, our flow is obviously very low. If we add a second person, we double the flow instantly, as there is still plenty of room to go as fast as both people care to. We can keep adding more people, one at a time, and the flow will rise accordingly, until we hit a certain density, at which point our flow improvements start to taper. If we keep adding more travelers, we will hit a point at which our flow peaks, and then starts to degrade. If we continue to push more cars onto the highway, we will worsen the flow dramatically, until our density is so high that we have very little flow at all.

In PPM, how can I visualize my organization’s total project capacity?

But we must also have good insight into when each project resource will be needed, and understand where our biggest resource constraints actually lie. The only way to know these things for sure is to have all projects planned out, with tasks and task dependencies identified, and task resources assigned. In our experience, however, you can begin to get a pretty good idea of where the resource bottlenecks are likely to occur, as they tend to follow a pattern in IT projects.

With staggering, we see that all three projects finish four to eight weeks earlier, even though the third project isn’t even kicked off until Week 7.
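
The arithmetic behind a result like that can be sketched as follows; the six-week project size and the 40 percent switching penalty are illustrative assumptions, not the book's data.

```python
work = 6        # weeks of constraint time each project needs (assumed)
penalty = 1.4   # assumed multiplier for pervasive task switching

# All three projects started at once and multitasked round-robin:
# nothing finishes until everything finishes, inflated by the penalty.
concurrent = [3 * work * penalty] * 3            # [25.2, 25.2, 25.2]

# Staggered, single-task execution: projects finish one after another,
# and even the last one beats the concurrent finish date.
staggered = [work * (i + 1) for i in range(3)]   # [6, 12, 18]

print(concurrent, staggered)
```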

For CIOs, IT Project Portfolio Managers, and other senior executives looking for a more practical, hybrid approach for improving the throughput of project completions, project staggering is your first step.

The truth, however, is that task switching is slowing us down a lot—by a whopping 40 percent, according to many studies.

For instance, many of us put in an hour or two of work either early in the morning, or late at night, “because that’s when I can really get things done.” We know intuitively that we’re much more productive when we’re able to minimize interruptions and task switching.

Of course, just having your project’s tasks time-boxed into a sprint doesn’t, by itself, do anything to eliminate task switching.

Assuming that most or all of our projects suffer from pervasive task-switching, we would expect an average productivity benefit of 40 percent from focused, single-task execution.

if all we do is stagger our projects and execute them with single-task focus, we can more than double portfolio throughput.

By combining project staggering, single-task execution, and elimination of task/sprint-level commitments, we see that we can now more than triple portfolio throughput—and none of these techniques is complex or difficult to learn and apply.[7]
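
As a back-of-envelope check on the tripling claim, here is one way the gains could compound; the 1.4 comes from the 40 percent figure above, while the other two multipliers are assumptions chosen for illustration.

```python
staggering = 1.5            # assumed flow gain from metering project starts
single_tasking = 1.4        # the ~40% focus benefit cited above
no_task_commitments = 1.5   # assumed gain from removing per-task safety padding

print(staggering * single_tasking * no_task_commitments)   # 3.15 -> "more than triple"
```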

It turns out that this is higher than normal, but not by much—seasoned process engineers will tell you that the typical business process contains 70-90 percent bloat. The challenge is to devote time, energy, and the right talent to improving processes before software-enabling them.

In order to satisfy Lean's definition of a "value-added" step, that step must meet three conditions:

  1. Change the object being moved through the process.
  2. Deliver a result that's done right the first time.
  3. Deliver value—as defined by the customer of that process.

The book's accompanying figure lists a few of the more typical examples of non-value-added steps.
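
As a rough illustration, the three conditions translate directly into a checklist; the step names and judgments below are hypothetical, since in practice they come from a value-stream analysis.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    changes_the_object: bool   # condition 1
    right_first_time: bool     # condition 2
    customer_values_it: bool   # condition 3

    def value_added(self) -> bool:
        return (self.changes_the_object
                and self.right_first_time
                and self.customer_values_it)

# Hypothetical steps from a deployment process:
for step in [
    Step("write code",            True,  True,  True),
    Step("wait for CAB approval", False, True,  False),
    Step("rework failed deploy",  True,  False, False),
]:
    print(step.name, "->", "value-added" if step.value_added() else "waste")
```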

  1. Lean’s “pull system,” which drives flow more effectively than a “push system.” In Ultimate Scrum, software developers pull their tasks from the backlog, as opposed to managers assigning (pushing) tasks.
  2. Lean’s “single-piece flow” as the ideal unit of flow in a high-performing process. In Ultimate Scrum, single-piece flow manifests as a rule that developers may pull only one task at a time, and that no developer may start a new task until finishing the one in progress.
  3. TOC’s tenet that the throughput of a system is maximized only when governed by the pace of that system’s constraint. In Ultimate Scrum, the software developers are the constraint, and their velocity of task completion sets the pace for all supporting aspects of task flow.

An important side benefit of Agile’s sprints is that they reduce the batch sizes of tasks, compared to most traditional approaches. Ultimate Scrum pushes this concept to the limit by achieving a batch size of 1, effectively doing away with sprints altogether, in favor of a single-piece flow of tasks.
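
A minimal sketch of what the single-piece-flow rule could look like in code (the names and structure are mine, not the book's): each developer holds at most one task, and pulling a new one requires finishing the current one.

```python
from collections import deque

backlog = deque(["task-1", "task-2", "task-3", "task-4"])
in_progress: dict[str, str] = {}   # developer -> the one task they hold

def pull(dev: str) -> None:
    """A developer pulls exactly one task, and only when idle."""
    if dev in in_progress:
        raise RuntimeError(f"{dev} must finish {in_progress[dev]!r} first")
    if backlog:
        in_progress[dev] = backlog.popleft()

def finish(dev: str) -> None:
    """Finishing the task in progress is the only way to pull again."""
    in_progress.pop(dev, None)

pull("dev-a"); pull("dev-b")     # each developer holds one task
finish("dev-a"); pull("dev-a")   # dev-a may pull again only after finishing
```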

Finally, Ultimate Scrum employs TOC’s “Drum-Buffer-Rope” (DBR) method for maximizing the effective throughput of task completions as efficiently as possible. The drum refers to the developers, as they are the system constraint for software development, and thus set the pace or “drumbeat” of task execution. In order to make sure that the developers always have a ready supply of tasks, we make sure to buffer them with just enough open tasks in progress. And in order to know when we need to release a new open task to the buffer, we need to get a tug on the “rope,” which in this case happens every time a task is finished.

Simply put, the goal here is to make sure that the developers always have something to work on.

Everything is steered by the volume of planned tasks in the task buffer. We recommend you start with a buffer size of 2 for this. So if there are just two tasks left, then one of the developers takes the next story from the product backlog and breaks it into tasks.

If buffer holes persist, then keep increasing the buffer size by one, until no buffer holes occur any more. If your buffer rises to the point at which you have more than one task in the buffer for every two developers, the problem most likely lies in how long it takes you to break stories into tasks.
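
Here is a rough sketch of that buffer-steering logic; the thresholds follow the rules above, while the function names and the story-breakdown helper are hypothetical.

```python
task_buffer: list[str] = []   # open tasks ready to be pulled
buffer_size = 2               # the recommended starting size

def break_into_tasks(story: str) -> list[str]:
    """Hypothetical stand-in for breaking a story into tasks."""
    return [f"{story}/task-{n}" for n in (1, 2, 3)]

def on_task_finished(product_backlog: list[str]) -> None:
    """The 'rope': each completed task signals whether to replenish."""
    if len(task_buffer) <= buffer_size and product_backlog:
        task_buffer.extend(break_into_tasks(product_backlog.pop(0)))

def on_buffer_hole() -> None:
    """A developer found the buffer empty: grow it by one, per the rule."""
    global buffer_size
    buffer_size += 1
```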

The PMI survey cited in Chapter 1 reports that even organizations' highest-priority projects fail to deliver on their original goals and business intent 44 percent of the time; we can only presume that the failure rate for lower-priority projects is considerably higher than that.

We encourage you to aim high—for most portfolios, the optimal reliability target should be in the 90-95 percent range (or more than double the typical rate of about 40 percent).

In traditional project management, buffering has often been introduced as the “triple constraint rule,” which holds that we must have some flexibility in schedule, budget, and/or scope in order to deliver with any hope of reliability.

Not only is single-tasking roughly 40 percent faster on average, but the performance is also significantly more predictable, and therefore more reliable. It is also interesting to note that the worst single-tasking performers are about as good as the best task-switching performers.

Under the task-level buffering approach, I can be on track all the way through the project, but then if things go awry on the final task, I only have that task’s five-day buffer to protect the entire project from failing to complete within plan. Contrast this task-level approach with the project-level buffering approach, in which I am likely to have more than five days of buffer available when I reach the final task, because it’s relatively rare that the first three tasks would go so badly that they consume 25 days out of my 30-day buffer.
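
A small Monte Carlo sketch makes the contrast concrete; the four-task structure and the variability assumption (normally distributed durations with a four-day spread) are mine, while the five-day task buffers and 30-day project buffer follow the example above.

```python
import random

TRIALS = 100_000
task_ok = project_ok = 0

for _ in range(TRIALS):
    # Four tasks planned at 10 days each; actual durations vary.
    durations = [random.gauss(10, 4) for _ in range(4)]
    # Task-level buffering: every task must land within its own 5-day buffer.
    task_ok += all(d <= 15 for d in durations)
    # Project-level buffering: one pooled 30-day buffer protects the total.
    project_ok += sum(durations) <= 40 + 30

print(task_ok / TRIALS, project_ok / TRIALS)   # the pooled buffer wins decisively
```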

Ask all team members to block off, say, six hours every day on their calendars for focused task work, leaving the rest of the time to respond to messages and address ad-hoc issues.

  1. Effective ROI is essential for selecting high-impact projects.
  2. At the task level, Single Tasking and Elimination of Task-level Commitments are essential for maximizing improvements in speed and reliability.
  3. At the project level, Project Staggering is indispensable for maximizing the throughput of project completions, while monitoring time-based buffers is essential for boosting reliability.
  4. At the portfolio level, Buffer Balancing using the Buffer Protection Index (BPI) is required for optimizing reliability.

And finally, the book presents a combined picture of all nine techniques, integrated together.

Obstacle #3: Convincing PMs and Scrum Masters to abandon task-level deadlines

No runners are asked to commit to a specific time or speed—and if you did ask them, they would likely look at you like you’re nuts, because all athletes know that performance will vary from one race to the next. If we can set up our task-execution environments to more closely resemble a relay race, the behaviors will follow—as will the speed and reliability benefits.

While we do recommend organization-wide rollouts, we do not advise attempting to adopt all techniques all at once. Here is the logical progression that we recommend:

  1. Project Staggering
  2. Project Buffering
  3. Portfolio Buffer Balancing
  4. Project Selection Using Effective ROI
  5. Eliminating Task-level Commitments
  6. Single-tasking
  7. Ultimate Scrum
  8. Showing all Buffers as Time-Based
  9. Lean Process VSAs (Value Stream Analyses)
