I’ve learned (and re-learned) a lot from tutoring middle-schoolers in math at Top Honors.1 For example, when a math problem is too hard to solve in your head, I counsel my students to draw a picture. That’s one of the best features of decision trees: they let you draw a roadmap for complicated, multi-stage decisions. Trees help you manage two key complications: uncertainty and path dependency.
Let’s start with uncertainty. We’ve already discussed how the outcomes of our decisions are uncertain because they depend on people and things over which we have little or no control. The outcome of your decision to buy home A rather than B or to invest in stocks rather than bonds will depend on the actions of your family and neighbors on the one hand and capital markets on the other. That’s one reason why you can’t tell the wisdom of a decision by the outcome. It’s the decision process that matters.
For any decision you make today, not only are the outcomes uncertain, but the ultimate results will also depend on future decisions you make. For example, let’s say home A is on a larger lot, which will give you options to put in a pool, a greenhouse, or (my favorite) an astronomical observatory. You’d make these improvements based on how your family’s interests evolve. Home B doesn’t offer these valuable options. If B didn’t have any other advantages over A, you’d be wise to choose A. But if B had more sunlight and was better situated for solar power or a green roof, then you have the kind of situation a decision tree can help with.
The chart below is a hypothetical decision tree I prepared for my class at City College of New York. Decision 1 is a choice between immediately creating an online financial wellness app or starting with a lower-cost offline prototype. Assume we take the upper branch and build the online version first. We invest $75,000 and have a 25% chance of success, worth $1 million, and a 75% chance of failure, worth zero. The weighted average of these values is $250,000. Subtract the $75,000 cost and the net value of this decision is $175,000.
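The arithmetic behind the upper branch can be sketched in a few lines of Python, using the figures from the example:

```python
# Expected value of the upper ("Online first") branch.
cost = 75_000                # upfront cost of building the online app
p_success = 0.25             # chance of success
value_success = 1_000_000    # payoff if it succeeds
value_failure = 0            # payoff if it fails

# Probability-weighted average of the outcomes, then subtract the cost.
expected_payoff = p_success * value_success + (1 - p_success) * value_failure
net_value = expected_payoff - cost

print(expected_payoff)  # 250000.0
print(net_value)        # 175000.0
```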
Let’s instead take the lower branch and start with the offline prototype. It costs much less ($10,000) but has a lower chance of success (50/50). If it succeeds, a second decision becomes available to me: I can turn the prototype into an online app, or I can try to sell the idea to an established company. This second-stage decision has its own chances of success and failure. To calculate the value of this lower branch, I proceed from right to left, calculating the net value of the second-stage decision and then the first-stage decision. My conclusion: creating the offline prototype first has the highest expected value and is the decision I should make (and did make).
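The right-to-left rollback can be sketched as a small calculation. The chart’s second-stage figures aren’t spelled out in the text, so the numbers below are stand-ins: the $70,000 second-stage build cost is implied by the $80,000 worst case cited later in the piece, while the 60% success odds and the $600,000 sale value are purely illustrative.

```python
# Roll back a decision tree from right to left: a chance node is worth
# the probability-weighted average of its outcomes minus its cost, and
# a decision node is worth the best of its options.

def chance(cost, p_success, value_success, value_failure=0):
    """Net expected value of a gamble: weighted payoff minus upfront cost."""
    return p_success * value_success + (1 - p_success) * value_failure - cost

# Second-stage options, available only if the prototype succeeds.
# All three figures here are hypothetical stand-ins for the chart's values.
build_online = chance(cost=70_000, p_success=0.6, value_success=1_000_000)
sell_idea    = chance(cost=0,      p_success=0.5, value_success=600_000)
second_stage = max(build_online, sell_idea)   # decision node: pick the best

# First stage: $10,000 prototype with a 50/50 chance of reaching stage two.
offline_first = 0.5 * second_stage + 0.5 * 0 - 10_000

# Upper branch for comparison: build online immediately.
online_first = chance(cost=75_000, p_success=0.25, value_success=1_000_000)

print(online_first, offline_first)  # 175000.0 255000.0
```

With these placeholder values the lower branch comes out ahead, matching the article’s conclusion; plug in the chart’s actual numbers to reproduce it exactly.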
The decision tree’s obvious appeal is how easy it makes it to understand a multi-stage decision under uncertainty like this one. But trees are deceptively challenging to create. It’s hard to boil a complex set of decisions down to their key paths, to put dollar (or other numerical) values on the outcomes, and to estimate probabilities. Many relevant outcomes can’t be valued at all: in this case, reputation, learning, and emotional stress among them.
And what do probabilities mean here anyway? They can be objective or subjective. If we understand them to be objective features of the world, they are helpful only insofar as you make the decision many times. Over hundreds of similar apps, I’d likely end up ahead if I always took the lower path. But I can develop an app only a few times in my career. For any single instance, I can make the right decision and still have an unhappy outcome.
To use a decision tree for one-time decisions, we have to think of probabilities as subjective, i.e., as measures of degree of belief or confidence. Here, 75% means “high confidence” and 25% means “low confidence.” This way, we’re honest about how fundamentally subjective the decision necessarily is.
Another challenge with decision trees is that they don’t give you a very good handle on risk. The worst-case scenario on the upper, “Online” branch is the $75,000 cost of failure. The lower branch’s worst case is $80,000, incurred if the Offline strategy succeeds but the subsequent Online decision fails. The decision analyst can report these numbers separately or apply some Monte Carlo analysis.
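A minimal Monte Carlo sketch of the lower branch shows the spread of outcomes rather than just the average. The $10,000 prototype cost and 50/50 odds come from the example; the $70,000 second-stage cost is implied by the $80,000 worst case above, and the 60% second-stage success rate is an illustrative assumption:

```python
import random

def simulate_offline_first(rng):
    """One run of the lower branch: build the prototype, and if it works,
    go online. Second-stage cost and odds are illustrative assumptions."""
    net = -10_000                      # prototype cost
    if rng.random() < 0.5:             # does the prototype succeed?
        net -= 70_000                  # then build the online app
        if rng.random() < 0.6:         # does the online app succeed?
            net += 1_000_000
    return net

rng = random.Random(42)                # fixed seed for reproducibility
runs = [simulate_offline_first(rng) for _ in range(100_000)]

print(min(runs))              # worst case: -80000
print(sum(runs) / len(runs))  # close to the branch's expected value
```

Beyond the worst case, the full list of simulated outcomes lets you report percentiles or the probability of any loss at all, which a single expected value hides.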
In sum, decision trees are appealingly visual ways to sketch the various paths our decision making can take and select an initial path based on our more-or-less objective estimates of the costs, benefits and probabilities of various outcomes. They can be helpful as a commitment device: while we are free to update the values as we get more information, we can use them to limit the influence of irrelevant information and biases by vowing to continue to apply the tree rigorously.
Have you got a multi-stage decision that you’d like to analyze with a decision tree? Let us know…maybe we can help.
1 Top Honors is looking for tutors at their new Brooklyn location. Find out more here.