Reducing Uncertainty—A Valuable Practice
By RJ Hixson

When I heard or read an outstanding book recommendation, I would typically buy the book and put it on my “to-read shelf,” which always offered a choice of great material. Maybe five or six years back, I purchased one book for that shelf but, on two different occasions, was unable to read much more than a chapter or two. Though highly recommended, the book read as dry and textbook-ish, so I just put it back on the shelf. Last year, I had a couple of audible.com credits to use quickly before my subscription expired, so I decided to give the audio version of the book a shot. Wow! What a difference listening made. For whatever reason, the narration enthralled me in a way that reading had not.

Douglas Hubbard became interested in the idea of measurement and its challenges as a young consultant and some years later wrote How to Measure Anything. He had regularly heard the protest, “We can’t measure that,” during project meetings. Rather than buy into that mindset, he challenged the whole idea that intangibles could not be measured and, in the process, developed a whole new field of knowledge about the value of information. Throughout the book, he gives numerous examples of measuring tangible items and intangibles—nearly all of which were previously believed to be unmeasurable.

Hubbard has seen the most egregious “We can’t measure it” protests coming from IT departments. Regardless of the size of your company or organization, you are probably aware of any number of large-scale IT projects that were blessed and funded but never brought the promised benefits. Or, maybe they failed horribly. The Van Tharp Institute had one of those a few years back, and we are not even a large company. Since he wrote the book, Hubbard seems to have specialized in measuring return on investment and other success metrics for IT projects and cyber security.

Hubbard’s case studies and methods to measure intangibles are fascinating. I was also intrigued by the ways that his methods apply to trading.

A Few Measurement Examples

Fish Population: To estimate the number of fish in a body of water, catch a bunch of fish, tag them, and release them. Later, capture a second batch of fish in exactly the same way as the first. The proportion of tagged to untagged fish in that second catch yields a reliable estimate of the total fish population.
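As a rough sketch, the tag-and-recapture logic (often called the Lincoln-Petersen method) can be written in a few lines of Python; the catch sizes below are made-up numbers, not figures from the book:

```python
def estimate_population(tagged_first, second_catch, tagged_in_second):
    """Lincoln-Petersen estimate: total ~ tagged * second catch / recaptured."""
    if tagged_in_second == 0:
        raise ValueError("second catch must contain at least one tagged fish")
    return tagged_first * second_catch / tagged_in_second

# Tag and release 100 fish; a later catch of 80 contains 8 tagged fish,
# so roughly 10% of the population is tagged -> about 1,000 fish total.
print(estimate_population(100, 80, 8))  # 1000.0
```

The estimate gets better as both catches grow, but even small catches reduce uncertainty about the population far below a pure guess.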

German Tank Production During World War II: Allied intelligence estimated how many tanks the Germans were producing but really, they had no idea. Presented with the problem, a group of statisticians developed a new sampling method using the only real information available—serial numbers from captured tanks. The statisticians’ estimates for tank production turned out to be very different from the intelligence service numbers, and they also turned out to be remarkably close to the actual production figures (confirmed after the war).
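The serial-number approach is known today as the German tank problem. A minimal sketch of the standard estimator (sample maximum plus the average gap between observed serials), using illustrative serial numbers rather than historical data:

```python
def tank_estimate(serials):
    """Minimum-variance unbiased estimator for the German tank problem:
    sample maximum plus the average gap: max + max/k - 1."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

# Four captured tanks with illustrative serial numbers:
print(tank_estimate([19, 40, 42, 60]))  # 74.0
```

The intuition: if serial numbers are assigned sequentially, the highest one observed understates the total by roughly one average gap, and the formula adds that gap back.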

Forecasting Fuel for US Marine Corps: Hubbard helped the US Marine Corps improve their battle planning process. Previously, the service had used a very safe (large) number for the fuel units needed and the related resources to provide that fuel. By measuring fuel consumption on trucks over various paved and off-road conditions (which had never been done), Hubbard’s consulting team came up with a fuel consumption model that allowed the Marine Corps to cut its fuel usage estimates dramatically yet safely, save money, reallocate resources, and in the end, save lives.

A Few Takeaways

Here are just a few of the takeaways I found reading the book. Some were brand new ideas while others confirmed things I had already learned.

When you have a lot of uncertainty, you don’t need tons of data to reduce that uncertainty. Even a small sample can be highly valuable if that’s all that’s available or affordable.

Labeling risk as high, medium, and low is ambiguous and probably harmful to decision-making processes. Providing very simple quantifications to those terms is actually a huge improvement.

Confidence intervals are close enough to probabilities that practitioners (traders) can use them interchangeably. For example, a figure given with a 95% confidence interval effectively says the true value lies in that range with a probability of 95%.

Monte Carlo modeling is a very useful and typically underutilized decision tool.
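For traders, a minimal Monte Carlo sketch might resample a system's historical R multiples to see the range of ending equity outcomes; the R-multiple sample and parameter values below are hypothetical, not from the book or any real system:

```python
import random

def simulate_equity(r_multiples, trades=100, risk_pct=0.01, start=100_000, seed=0):
    """Resample trades (with replacement) from a historical R-multiple sample
    and compound equity at a fixed percentage of equity risked per trade."""
    rng = random.Random(seed)
    equity = start
    for _ in range(trades):
        r = rng.choice(r_multiples)      # one simulated trade's R multiple
        equity += equity * risk_pct * r  # gain or loss = risk amount times R
    return equity

# Hypothetical R-multiple sample
sample = [2.0, -1.0, 3.0, -1.0, -0.5, 1.5, -1.0, 4.0]
endings = sorted(simulate_equity(sample, seed=s) for s in range(1000))
print(f"5th pct: {endings[50]:,.0f}  median: {endings[500]:,.0f}  95th pct: {endings[950]:,.0f}")
```

Rather than a single forecast, the simulation produces a distribution of outcomes, which is exactly the kind of uncertainty reduction Hubbard advocates.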

Perhaps the biggest distinction the book provided was that measurement does not mean counting. Here is Hubbard’s definition:

Measurement is a quantitatively expressed reduction of uncertainty based on one or more observations.

So, measurement is a process rather than a static nominalization, which expands the usefulness of measurement a great deal. As a process, measurement does not eliminate uncertainty but reduces it. Eliminating uncertainty (relatively speaking) with enough measurement (perfect information) may be possible, but for most decisions, there are costs associated with collecting incremental information. This new area – Information Economics – looks at the costs and benefits of gathering more information and whether or how additional measurements help decision-makers.

Application to Tharp Think

Actually, How to Measure Anything relates to many Tharp Think principles but Hubbard’s core concept of reducing uncertainty struck me as applying very well to this one: “I meet my objectives through position sizing strategies.”

This principle has several assumptions and Mr. Hubbard’s ideas fit in well with them.

You have “good” objectives. Good here would include written down, quantifiable, complete/robust objectives that fit you. Those are all about reducing uncertainty about what you are trying to accomplish and about why you are trading.

You know how your system performs. You can greatly reduce uncertainty about your system’s performance in two main ways: by following your rules and by measuring performance by market type. Obviously, gathering a large sample of R multiples in a particular market type is very useful. Calculating your system’s SQN score is also useful. These practices reduce uncertainty about what a trading system will generate when you trade it in the future.
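As an illustration, Van Tharp's SQN score is commonly given as the mean R multiple divided by the standard deviation of the R multiples, scaled by the square root of the number of trades (with that factor capped at 100 trades); this sketch assumes that formulation:

```python
import statistics

def sqn(r_multiples):
    """SQN = sqrt(N) * mean(R) / stdev(R), with N capped at 100
    (an assumption based on Van Tharp's usual convention)."""
    n = min(len(r_multiples), 100)
    mean_r = statistics.fmean(r_multiples)
    stdev_r = statistics.stdev(r_multiples)
    return n ** 0.5 * mean_r / stdev_r

# Small hypothetical R-multiple sample:
print(round(sqn([1.0, -1.0, 2.0, -1.0, 3.0]), 2))  # 1.0
```

Note how the score rewards consistency: a system with the same mean R but a tighter spread of outcomes scores higher, which is itself a statement about reduced uncertainty.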

You understand position sizing strategies. They answer a quantification question: “How much do I risk on this trade?” They are tailored to each system’s R-multiple distribution. One of their primary functions is to provide a solid level of confidence that you will not blow up your account.
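A minimal sketch of that quantification question: with a fixed-fraction approach (one common position sizing strategy, not the only one), position size follows directly from equity, the percent risked, and the stop distance:

```python
def position_size(equity, risk_pct, entry, stop):
    """Shares such that a move from entry to stop loses risk_pct of equity (1R)."""
    risk_per_share = abs(entry - stop)
    return int(equity * risk_pct / risk_per_share)

# Risk 1% of a $100,000 account on a $50 stock with a $48 stop:
print(position_size(100_000, 0.01, 50.0, 48.0))  # 500 shares ($1,000 at risk)
```

The dollar amount at risk defines 1R for the trade, which is what makes R-multiple bookkeeping consistent across positions of different sizes.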

Hubbard can also help us understand one or two logical conclusions of this Tharp Think principle:

You need well-crafted position sizing strategies to help you reach your objectives. Well-crafted position sizing strategies help you achieve the returns you seek and avoid the drawdown level you are unwilling to tolerate. They cannot guarantee a particular result (information is imperfect), but they can greatly reduce the uncertainty about the range of ending equity values you can expect.

You need processes to define good objectives and craft effective position sizing strategies. Again, these processes will involve some measurement and will greatly reduce the uncertainty around each item. They might include some modeling; for position sizing strategy development in particular, Monte Carlo simulations can be very useful.
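As one sketch of such a process, a Monte Carlo simulation can estimate how often a given risk level breaches a drawdown limit; the R-multiple sample and thresholds below are hypothetical:

```python
import random

def max_drawdown_prob(r_sample, risk_pct, dd_limit=0.10, trades=100, runs=2000, seed=7):
    """Fraction of simulated equity curves whose worst peak-to-trough
    drawdown reaches dd_limit, risking risk_pct of equity per trade."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        equity = peak = 1.0
        worst = 0.0
        for _ in range(trades):
            equity *= 1 + risk_pct * rng.choice(r_sample)
            peak = max(peak, equity)
            worst = max(worst, 1 - equity / peak)
        hits += worst >= dd_limit
    return hits / runs

r_sample = [2.0, -1.0, 3.0, -1.0, -0.5, 1.5, -1.0, 4.0]  # hypothetical sample
for risk in (0.005, 0.01, 0.02):
    print(f"risk {risk:.1%}: P(10% drawdown) = {max_drawdown_prob(r_sample, risk):.3f}")
```

Running this across several risk percentages turns the vague question "how much is too much?" into a measured trade-off between return potential and the chance of an intolerable drawdown.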

Recommendation

A few good resources (on sale now) that fit nicely with this premise are The Definitive Guide to Position Sizing Strategies and perhaps Bear Market Strategies to help with the uncertainty of volatile bear markets.
