How to Beat the Odds at Judging Risk

Fast, clear feedback is crucial to gauging probabilities; for lessons, consult weathermen and gamblers

By DYLAN EVANS

Most of us have to estimate probabilities every day. Whether as a trader betting on the price of a stock, a lawyer gauging a witness’s reliability or a doctor pondering the accuracy of a diagnosis, we spend much of our time—consciously or not—guessing about the future based on incomplete information. Unfortunately, decades of research indicate that humans are not very good at this. Most of us, for example, tend to vastly overestimate our chances of winning the lottery, while similarly underestimating the chances that we will get divorced.

Weather forecasters tend to focus on a few clear questions, and their accuracy gets tested the very next day

Psychologists have tended to assume that such biases are universal and virtually impossible to avoid. But certain groups of people—such as meteorologists and professional gamblers—have managed to overcome these biases and are thus able to estimate probabilities much more accurately than the rest of us. Are they doing something the rest of us can learn? Can we improve our risk intelligence?

Sarah Lichtenstein, an expert in the field of decision science, points to several characteristics of groups that exhibit high risk intelligence. First, they tend to be comfortable assigning numerical probabilities to possible outcomes. Since 1965, for instance, U.S. National Weather Service forecasters have been required to say not just whether it will rain the next day, but how likely they think rain is, in percentage terms. Sure enough, when researchers measured the risk intelligence of American forecasters a decade later, they found that it ranked among the highest ever recorded, according to a study in the Journal of the Royal Statistical Society.

It helps, too, if the group makes predictions only on a narrow range of topics. The question for weather forecasters, for example, is always roughly the same: Will it rain or not? Doctors, on the other hand, must consider all sorts of different questions: Is this rib broken? Is this growth malignant? Will this drug cocktail work? Studies have found that doctors score rather poorly on tests of risk intelligence.

Finally, groups with high risk intelligence tend to get prompt and well-defined feedback, which increases the chance that they will incorporate new information into their understanding. For weather forecasters, it either rains or it doesn’t. For battlefield commanders, targets are either disabled or not. For doctors, on the other hand, patients may not come back, or they may be referred elsewhere. Diagnoses may remain uncertain.

If Dr. Lichtenstein’s analysis is correct, we should be able to develop training programs that instill greater risk intelligence by making feedback faster and more precise. Royal Dutch Shell introduced just such a program in the 1970s. Senior executives had noticed that when newly hired geologists predicted oil strikes at four out of 10 new wells, only one or two actually produced. This overconfidence cost Royal Dutch Shell millions of dollars. In the training program, the company gave geologists details of previous explorations and asked them for numerical estimates of the chances of finding oil. The inexperienced geologists were then given feedback on the number of oil strikes that had actually been made. By the end of the program, their estimates roughly matched the actual number of oil strikes.

Intelligence agencies are also working to improve their approach to risk. In 2011, researchers began recruiting volunteers for a multiyear, Web-based study of people’s ability to predict world events. The Forecasting World Events Project, an experiment sponsored by the Director of National Intelligence, aims to discover whether some kinds of personalities are better than others at such exercises. Volunteers offer their best guesses about events and trends in realms such as international relations, economics, public health and technology.

Just by becoming aware of our tendency to be overconfident or underconfident in our estimates, we can go a long way toward correcting for our most common errors. Doctors, for instance, could provide numerical estimates of probability when making diagnoses and then get data about which ones turned out to be right. As for the rest of us, we could estimate the likelihood of various events in a given week, record our estimates in numerical terms, review them the next week and thus measure our risk intelligence in everyday life. A similar technique is used by many successful gamblers: They keep accurate and detailed records of their earnings and their losses and regularly review their strategies in order to learn from their mistakes.
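
For readers who want to try this, the bookkeeping is simple enough to automate. The short Python sketch below scores a week of recorded estimates with the Brier score, one common measure of calibration; the scoring rule and the sample forecasts are illustrative choices, not something prescribed in the article.

    # Score a week of recorded probability estimates against what happened.
    # The forecasts below are made-up examples; replace them with your own.
    forecasts = [
        # (event, estimated probability, did it happen?)
        ("It rains on Tuesday", 0.70, True),
        ("My train is late on Monday", 0.20, False),
        ("The package arrives by Friday", 0.90, True),
        ("I finish the report this week", 0.80, False),
    ]

    # Brier score: the mean squared gap between each estimate and the outcome.
    # 0.0 is perfect foresight; always guessing 50% would score 0.25.
    brier = sum((p - float(happened)) ** 2
                for _, p, happened in forecasts) / len(forecasts)

    print(f"Brier score for the week: {brier:.3f}")
    for event, p, happened in forecasts:
        print(f"  {p:.0%} on {event!r} -> "
              f"{'happened' if happened else 'did not happen'}")

Tracked week over week, a falling score is exactly the kind of prompt, well-defined feedback that weather forecasters and professional gamblers enjoy.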

No one can be great at estimating all types of probabilities in all situations. But given the right conditions and the right kind of self-reflection and practice, we can all make substantial improvements in our risk intelligence.

— From “Risk Intelligence” by Dylan Evans. Copyright © 2012 by Dylan Evans

A version of this article appeared May 12, 2012, on page C3 in the U.S. edition of The Wall Street Journal.
