Introducing a Simple Portfolio Planning Method

The SAFe Portfolio Level has been evolving rapidly. Today, it incorporates a Lean-Agile budgeting model, a Kanban system for business and enabler epics, guidance for coordinating multiple Value Streams, a connection to the enterprise business strategy, and more. But outside of the portfolio Kanban system, it hasn’t provided much actual guidance on planning the implementation of epics. We’d like to close that gap a bit with an incremental step that advances the toolset for the SAFe Portfolio. Please check out the new guidance article here.

The method is based on balancing two important concerns for Portfolio work: a) consistency of work across Value Streams and b) capacity bottlenecks in the Portfolio. It utilizes a simple view that incorporates the two perspectives, as the figure below suggests.

Figure 4. A number of Epics loaded into the plan

The method enhances visibility into these two factors and lets Portfolio planners reason about a potential portfolio roadmap. There is an important caution, though. One must remember that SAFe planning at the Portfolio Level can only produce input to the Value Streams. The tool cannot and should not be used to commit teams to a certain course of action; in SAFe, only the teams themselves can ultimately commit to a scope of work, and they do so via PI Planning. The method is aimed at pre-planning and forecasting portfolio work rather than setting concrete expectations about what will be delivered and when.
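For readers who want to see the mechanics, here is a minimal sketch in Python of the two checks the view supports: how each epic’s work is spread across Value Streams, and whether the planned load exceeds any Value Stream’s capacity in a given PI. It is not taken from the guidance article; the epic names, capacities, and normalized estimates are illustrative assumptions.

    # A rough sketch of the two portfolio-planning checks; all data below is made up.
    from collections import defaultdict

    # Assumed input: rough, normalized estimates per (epic, value stream, PI).
    planned_load = [
        ("Epic A", "VS1", "PI-1", 40),
        ("Epic A", "VS2", "PI-1", 30),
        ("Epic B", "VS1", "PI-1", 50),
        ("Epic B", "VS2", "PI-2", 60),
        ("Epic C", "VS3", "PI-2", 20),
    ]

    # Assumed per-PI capacity of each Value Stream, in the same normalized units.
    capacity = {"VS1": 80, "VS2": 70, "VS3": 60}

    # Perspective 1: consistency -- where each epic's work lands, and in which PI.
    by_epic = defaultdict(list)
    for epic, vs, pi, points in planned_load:
        by_epic[epic].append((vs, pi, points))

    # Perspective 2: capacity -- total planned load per Value Stream per PI.
    load = defaultdict(int)
    for _, vs, pi, points in planned_load:
        load[(vs, pi)] += points

    for epic, slices in sorted(by_epic.items()):
        spread = ", ".join(f"{vs} in {pi}: {pts}" for vs, pi, pts in slices)
        print(f"{epic} -> {spread}")

    for (vs, pi), total in sorted(load.items()):
        status = "POTENTIAL BOTTLENECK" if total > capacity[vs] else "ok"
        print(f"{vs} in {pi}: {total}/{capacity[vs]} ({status})")

With the numbers above, VS1 would be over capacity in PI-1 (90 planned against 80 available), which is exactly the kind of bottleneck the planning view is meant to surface before content reaches the trains for PI planning.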

Where does this tool fit into SAFe? Clearly, as part of the Portfolio Kanban system. There, Epics get decomposed into Capabilities, which are, in turn, distributed across the Portfolio’s Value Streams. This is where the tool turns out to be very useful, and we hope to integrate this content there at some point. But of course, the same model can be used for Value Stream planning as well.

Well, this post is getting to be as long as the article, so enough said. Please check out the article, and as always, provide feedback in the comments below.

-Alex and the rest of the Framework Team.

Author Info

Dean Leffingwell

Recognized as one of the world’s foremost authorities on Lean-Agile best practices, Dean Leffingwell is an author, entrepreneur, and software development methodologist.

Comments (2)

  1. AlexYakyma

    04 Aug 2016 - 5:18 pm

    Ram, thanks for the comment.

    So, normalized story points are used to literally normalize and operate in the same unit of measure, not to drown in overly detailed decomposition. The tool has no hidden logic underneath it, and no further breakdown of the items. So, in the example picked for the article, during pre-planning at the portfolio level all that happens is what you see in the picture: epics are split into features, and estimates are relatively rough. There’s no full-chain breakdown because it’s not needed. Those features will only be broken down into stories when the teams perform PI planning, not at this point.

    Again, every practice exists only because there’s a certain motivation for it, a goal. The goal for this one is very simple: to feed consistent content to the trains for their PI planning. That’s it. Now, if we just play backwards from the goal, the whole estimation topic can be restated as follows: what level of precision do we need in our estimates to simply create candidate input to the ARTs? That’s all it is. And it is absolutely okay if different portfolios answer this question differently, as long as we stick to the goal. All this method does is allow me to pre-validate capacity constraints in the portfolio while ensuring scope consistency. It’s just a magnifying glass…

  2. Ram Kompella

    03 Aug 2016 - 12:38 pm

    Hi Alex, how are you?

    Thank you for sharing the article. I like it.

    The major challenge here is estimating portfolio epics in order to balance the portfolio and Value Streams for forecasting purposes. That poses a couple of estimation challenges:
    1) Mostly, the Value Stream undertakings are large initiatives comparable to greenfield projects (at least the ones I am familiar with are), where little or no historical data exists.
    2) Story pointing requires elaborate decomposition. Decomposing portfolio epics into capabilities, capabilities into features, and features into stories is a time-consuming exercise and gets in the way of being agile.

    Your thoughts, please?
