I am amused by meteorologists who say they measure the probability of rain by taking the area expected to get rain as a percentage of the total forecast area and multiplying that by their confidence. Nobody really understands what 40% means, but that's math. We like percentages for rain and for risk because, over repeated forecasts, we have something to compare against whether it really rains that one time. But the weather folks spend far more time on this than project managers do, so we'll go with their best guess on risk measurement.
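As a quick illustration, here is that multiplication in a few lines of Python; the 80% and 50% figures are made-up example numbers chosen to land on the 40% from the paragraph above, not a real forecast:

```python
# Probability of Precipitation, as described above:
# forecaster confidence that rain develops somewhere in the area,
# times the fraction of the area expected to get rain.
confidence = 0.8  # 80% sure some rain develops (made-up figure)
coverage = 0.5    # over about half the forecast area (made-up figure)

pop = confidence * coverage
print(f"PoP = {pop:.0%}")  # PoP = 40%
```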
Probability of events
Probability, and therefore risk, starts with measuring outcomes of similar events. To measure anything as a business metric so you can compare results, you need a quantity of a standard unit. Math is funny: when you divide the number of useful past events by the total number of past events, the units cancel, so probability has no unit of measurement. Nothing assigns one IT project 4 RUs (risk units) and another 15 RUs, so probabilities of different kinds of events can't be compared. That's our first problem.

Risk has to be tied to a concrete event. Rain forecasts derive a probability of occurrence from a large number of events of the same type, and as a stand-in we use similar historical events. A 50% chance of heads means nothing for a single coin toss, because it's one event: the only two outcomes are heads and tails, and there is no 50%-tails result. But a history of 500 tosses showing heads 50.3% of the time gives us some confidence.
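A minimal simulation of that 500-toss history, counting useful events over total events; the exact percentage depends on the random seed, so don't expect precisely 50.3%:

```python
import random

random.seed(7)  # fixed seed so the demo is repeatable

# Estimate P(heads) the frequentist way: useful events / total events.
tosses = [random.choice(["heads", "tails"]) for _ in range(500)]
p_heads = tosses.count("heads") / len(tosses)
print(f"Estimated P(heads) after 500 tosses: {p_heads:.1%}")
```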
Confidence in chance
Confidence is what degrades when we don't have similar events. Projects never repeat the same events exactly. As with the thousands of influences on a storm system, the more of those influences that match past cases, the higher the confidence level of the forecast. Confidence is easier to establish when events are less tied to messy reality and can be easily counted; coin tosses dominate the probability world, yet even tossing a coin fairly has been shown to be harder than it looks.

Most business people don't have a backlog of 50 similar-sized projects of the same type, run by the same project manager, with the same budget. The more events sharing the same influences, the better the confidence value. So confidence erodes easily, making our probability estimate for an event less trustworthy and our calculations less helpful. That's our second problem. We can make an educated guess at how likely an event is based on past events of the same type, but the necessary confidence reality check will destroy the risk result.
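To make that erosion concrete, here is one conventional way to put a number on it (my addition here, not anything from project-management practice): the 95% margin of error on a frequency estimate, which tightens only as the count of comparable events grows.

```python
import math

# Rough 95% margin of error for an estimated proportion p from n
# comparable events (normal approximation): 1.96 * sqrt(p*(1-p)/n).
def margin_of_error(p: float, n: int) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (10, 50, 500):
    print(f"n={n:>3}: 50% +/- {margin_of_error(0.5, n):.1%}")
# n= 10: 50% +/- 31.0%
# n= 50: 50% +/- 13.9%
# n=500: 50% +/- 4.4%
```

Ten comparable projects leave you with an estimate so wide it says almost nothing; that is the second problem in numeric form.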
How priority affects risk
Priority is not risk. Priorities are best assigned to scope units, called work packages or use cases, that carry estimated value. I like ITIL's breakdown of priority into impact and urgency, and I adapt it from the incident or trouble ticket to the use case. ITIL frames value in terms of the number of people affected (impact) and the value perceived (urgency). For internal projects that might be expressed as a high, medium, or low number of employees and a high-to-low level of visibility to executives or high-level managers. For external projects it might be expressed as high-to-low market share and a high-to-low price point compared to competitive products.

In fact, priority does not influence risk at all. Risk can be multiplied by a priority ranking to show how value is reduced, but the result of one subjective number times another subjective number is useful only for comparing items against each other. Priority is good for ranking the order in which use cases get developed. It's also good for ranking test case execution effort and thoroughness, after factoring in development complexity and unfamiliarity of the unit under test.
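A sketch of that impact-times-urgency adaptation; the 1-3 scales, the multiplication, and the use case names are illustrative choices of mine, not ITIL's actual incident-priority matrix:

```python
# ITIL-style impact x urgency, adapted from trouble tickets to use cases.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(impact: str, urgency: str) -> int:
    return LEVELS[impact] * LEVELS[urgency]

use_cases = {
    "export payroll report": ("high", "medium"),  # hypothetical use case
    "change avatar colour":  ("low", "low"),      # hypothetical use case
}
for name, (impact, urgency) in use_cases.items():
    print(f"{name}: priority {priority(impact, urgency)}")
```

Note that the resulting numbers mean nothing on their own; they only order the use cases relative to each other, which is exactly the limitation described above.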
Project risk
There are two easily confused types of risk in a project. There is project risk, which occurs during the project. And there is organizational risk, which is taken as a bet when the project is chartered and realized when the value is gained.

PMI defines project risk as "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives." But by that definition, since all future events are uncertain, every project activity is a risk. Even if we narrow risk to the likelihood of a project failing to meet its objectives, we're still in the same boat, because any event could conspire to reduce full attainment of the objectives.
Project risk bets are taken mostly in design. The bet on the scope to be delivered starts when you spend money toward a design and ends when the product or code is delivered using that design. Strategy and analysis phases can influence design, but design commits you to a path. These are all known risks, estimated up front. There is always a component of unknown risk, like developers changing jobs or emergencies, but that's what project managers keep 'flex time' for.
I have seen the risk associated with failure to do adequate analysis called scope definition risk. That's the same baloney as calling bad requirements gathering, discovered down the line, scope creep. This isn't project risk; it's systemic and therefore organizational risk. Check your process maturity and start there with a project constraint. If you know you don't have good enough requirements management, you should expect some decay around the elicited scope. It's a given, not a risk.
Organizational risk
The estimated value or dollars associated with the project, which guide the project manager, break down into three critical success factors. The three things that make a project succeed are:
- finishing on time
- coming in on budget
- delivering total decided scope
The value measurement units that we can use for each of these are standardized as:
- number of days
- amount of dollars
- use cases
The first two reduce to the common unit of dollars: days are just a measure of the cost of overhead, labor, and other consumed resources. Scope is harder. If you are using good requirements estimation units as recommended in my blogs, choose a scope measurement unit that delivers those units of value: use cases and user stories in the analysis world, or work packages in the project management world.
Then you can use a standard conversion of like-sized use cases to PMI's approximation for work packages of up to ten days, depending on complexity. Input from the developers who gauge the difficulty of a use case is critical. Scope estimation is a similar process to test case effort estimation. It might break down as follows (a rollup sketch appears after the list):
- low - 2-4 days
- medium - 5-7 days
- high - 8-10 days
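Here is what rolling those bands up into a total might look like; the band midpoints and the sample backlog counts are hypothetical inputs for illustration, not a calibrated model:

```python
# Roll like-sized use cases up to PMI-style work-package days using
# the low/medium/high bands above (midpoints of 2-4, 5-7, 8-10).
DAYS_PER_USE_CASE = {"low": 3, "medium": 6, "high": 9}

backlog = {"low": 12, "medium": 7, "high": 3}  # hypothetical use-case counts

total_days = sum(DAYS_PER_USE_CASE[c] * n for c, n in backlog.items())
print(f"Rough scope estimate: {total_days} developer-days")  # 105
```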
The budget component should be constrained at the point where value can be assessed, before the approval decision is made. If the time component is also constrained at that point, there is no flexibility left in the scope component: only some unknown amount of value can be delivered, creating high anxiety for the project manager, who hopes the scope is minimal, the complexity is low, or the developers are wizards. Once the requirements have been analyzed, it may turn out that the solution is too big and the scope can't be delivered successfully, leading only to project cancellation and sanity preservation for the enterprise.
If the time component is flexible, there is still some angst about the outcome, but now with some ability to adjust. Using employees over time still accumulates project cost, so the amount of time necessary to deliver a solution may push expenditures past the project's value. So where is the risk if the value is estimated, the time is estimated, and the scope determines whether the project is green-lighted or not?
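A back-of-the-envelope version of those two scenarios, constrained time versus flexible time; every input number here is hypothetical:

```python
# Constrained vs. flexible time, as a quick feasibility check.
scope_days = 105           # developer-days of scope, e.g. from the rollup above
team_size = 3              # developers available
deadline_days = 30         # the fixed calendar window, if time is constrained
daily_cost = 800.0         # fully loaded cost per developer-day
project_value = 250_000.0  # estimated value of the delivered scope

# Constrained time: does the scope even fit in the window?
capacity = team_size * deadline_days
if scope_days > capacity:
    print(f"Fixed deadline: {scope_days - capacity} developer-days of scope won't fit.")

# Flexible time: does the labour cost stay under the project's value?
labour_cost = scope_days * daily_cost
print(f"Flexible deadline: cost ${labour_cost:,.0f} vs value ${project_value:,.0f}")
```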
The bet on known risks is taken at chartering and cashed in once in production. But unknown risk lies in the changing influences on the solution, which is a strategy responsibility. A Pokemon Go clone is only good if the market has not produced other competitors and people don't yet feel they have caught enough Bulbasaurs or Meowths.
Strategic constraints subject to outside influence now become sources of risk. Events expected to happen that would affect the planned time or budget must be listed. How do you judge the probability of occurrence of a one-time future event? You don't. You just put a number on it and rank the items by that number. It's all subjective. The next person puts a different rank on it, and then you compromise. Instead of risk, let's call it a proposed ranking of possible scenarios that affect value.
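One simple way to run that compromise step is to average everyone's subjective ranks; the scenario names, the rankers, and their votes below are all invented for the example:

```python
from statistics import mean

# Each person ranks the same scenarios subjectively (1 = most worrying);
# the group order is just the average rank across rankers.
ranks = {
    "competitor ships first": {"alice": 1, "bob": 2, "carol": 1},
    "key developer leaves":   {"alice": 2, "bob": 1, "carol": 3},
    "platform API changes":   {"alice": 3, "bob": 3, "carol": 2},
}
for scenario, votes in sorted(ranks.items(), key=lambda kv: mean(kv[1].values())):
    print(f"{mean(votes.values()):.2f}  {scenario}")
```

The output is an ordering, nothing more: a proposed ranking of scenarios, not a probability of anything.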
It's all a guess
But because the infinite number of influences can never be known, collecting historical information isn't going to be that useful, since the past may not accurately reflect the future. And it takes quite a bit of time to collect the data, enter it, write up the reports, and present them. Maybe it's better to just go with your gut when you need to rank risky events. Or check whether your arthritis is flaring up the way it does before the weather changes.