Why predicting the future is more than just horseplay

The science of prediction lies at the heart of the modern world, but attempts to forecast even the most straightforward systems often confound scientists, while complex systems sometimes reveal themselves to be surprisingly predictable.



By Daniel B. Larremore, Santa Fe Institute & Aaron Clauset, University of Colorado Boulder and Santa Fe Institute

Originally appeared in The Christian Science Monitor on April 24, 2017, as part of a continuing series about complexity science by the Santa Fe Institute and The Christian Science Monitor, generously supported by Arizona State University.


In the 1950s, John Kelly Jr., a scientist at Bell Labs, worked out the mathematics of betting on horses when a gambler knows something the track does not. When the odds posted by the track differ from the odds determined using that insider information, Kelly’s formula explains how to take those differences and place the best bets possible, mathematically speaking. The formula is powerful in its simplicity. It tells us to put money on every horse for which we have an informational or statistical edge, and then calculates exactly what fraction of our bankroll to bet on each horse, depending on the strength of that edge.
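
For a single wager, the arithmetic is easy to see. Below is a minimal Python sketch of the Kelly fraction for one bet at fixed odds; it illustrates the idea rather than the full racetrack formulation, which spreads a bankroll across many mutually exclusive horses at once, and the function name and example numbers are ours.

```python
# A minimal sketch of the Kelly fraction for a single bet at fixed odds.
# This is the simple two-outcome case, not the full racetrack formulation.
# `p` is your private estimate that the bet wins; `b` is the net payout
# per dollar staked (so 4-to-1 odds means b = 4).

def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet that pays b-to-1 and wins
    with probability p. Zero means you have no edge and should not bet."""
    q = 1.0 - p                        # probability of losing
    return max(0.0, (b * p - q) / b)   # stake only when the edge is positive

# Example: you believe a horse wins 30 percent of the time, but the track's
# 4-to-1 odds imply only about a 20 percent chance. Kelly says stake 12.5
# percent of your bankroll on that horse.
print(kelly_fraction(p=0.30, b=4.0))   # 0.125
```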

While this basic idea had long been known (the larger the difference between the track odds and the real odds, the bigger the opportunity for the gambler), Kelly quietly revolutionized the practice of prediction by writing down the optimal exchange rate between knowing something that others do not and the benefits of that knowledge.

Today, racetracks are less popular, but the principles remain the same. Asymmetries in the power to statistically predict the future are the bread and butter of finance around the world, for example. But predictions underpin more than financial markets alone.

Prediction is the decoder ring of the modern world, touching everything from healthcare, car insurance, politics, and terrorism, to sports, scientific discovery, and even the ride-hailing apps that are disrupting the taxi industry. In the age of bigger data and better algorithms, however, researchers are discovering straightforward systems that appear to be fundamentally unpredictable, as well as complicated systems whose behavior is surprisingly predictable.

Then again, this sort of thing is the norm when studying what researchers call complex systems: systems with many interacting elements whose collective behavior defies expectations based on their component parts. Simple patterns can organize seemingly chaotic events, while rather simple systems can run up against complicated limits to prediction. Finding the organizing patterns and challenging the limits to prediction are at the core of complex systems research.

Systems that involve people can be particularly surprising, because human agency would seem to make accurate prediction impossible. After all, if an equation predicts that a stock trader or public official will take a particular course of action, that person can simply take a different course, rendering the prediction immediately wrong.

And while predicting what an individual might do is sometimes next to impossible, as we’ve seen throughout this series in The Christian Science Monitor, complex social systems can exhibit highly predictable behavior at large scales. For instance, no driver wants to get stuck in a traffic jam, but because of the choices each driver makes independently and the constraints of rush-hour travel, traffic jams emerge despite efforts to avoid them. Understanding the conditions under which they appear is fairly straightforward, even if no individual driver can predict which specific decisions will lead to a traffic jam.

Finding predictable patterns that emerge from the complicated interactions of many individual parts is the norm when studying complex systems. Detecting these organizing patterns and outlining the limits of their predictability lies at the core of complexity science.

Deep patterns in war and violence

When an apple falls from a tree, everyone knows what happens next. We know from the application of the scientific method — that is, from observation, then explanation, then prediction, and finally verification — that gravity causes the apple to move toward the ground at a specific and constant rate of acceleration. Gravity and falling are so predictable that NASA engineers can hurl a satellite one hundred million miles across the solar system at Mars and still predict with an accuracy of a dozen feet where it will enter the atmosphere of the Red Planet.

Human affairs are far messier. Take organized violence. Acts of terrorism can seem to occur at random places and times, and wars can erupt from causes ranging from internal political uprisings to territorial disputes. War and terrorism are archetypal chaos.

And yet, both wars and terrorism follow the same predictable mathematical pattern. In the early 20th century, the English polymath Lewis Fry Richardson began looking at statistical regularities in the sizes of wars, measured by the number of deaths they produce.

He discovered that wars follow nearly the same pattern as earthquakes. That is, just as the famous Gutenberg-Richter Law (on which the Richter Scale is based) allows us to predict how many earthquakes of magnitude 3 or 4 or 6 will occur in California this year, Richardson’s Law allows us to predict how many wars will occur over the next 30 years that kill 10,000 or 50,000 or any other number of people.
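
To see the flavor of such a forecast, here is a sketch in Python of the kind of calculation a power-law regularity permits. The exponent, size threshold, and yearly rate below are illustrative placeholders, not values fitted to Richardson’s data, and the function is ours.

```python
# A sketch of the kind of broad forecast a power-law regularity permits:
# if event sizes follow a power law, the expected number of events at least
# as large as x over some horizon scales like x ** (-alpha). The exponent,
# threshold, and yearly rate below are illustrative placeholders, not values
# fitted to Richardson's data.

def expected_events_at_least(x: float, rate_per_year: float,
                             x_min: float, alpha: float, years: float) -> float:
    """Expected number of events of size >= x over `years`, assuming events
    of size >= x_min arrive at `rate_per_year` and sizes follow a Pareto
    (power-law) tail with exponent alpha."""
    tail_probability = (x / x_min) ** (-alpha)   # P(size >= x | size >= x_min)
    return rate_per_year * years * tail_probability

# Illustrative question: if 5 conflicts per year exceed 1,000 deaths and the
# tail exponent were 1.5, how many conflicts killing 50,000 or more would we
# expect over the next 30 years?
print(expected_events_at_least(x=50_000, rate_per_year=5,
                               x_min=1_000, alpha=1.5, years=30))
```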

Richardson’s Law does not let us go beyond broad forecasts. It provides little help in predicting which countries will go to war, over what, and how large any particular war will be. Likewise, seismologists still struggle to predict precisely when or where any particular earthquake might occur or how large it might be.

In 1960, Dr. Richardson speculated that the statistical law that governs wars would hold for other types of violence, such as homicides or mass murders. Recent work suggests that he was not far off the mark. The same mathematical pattern as in the Gutenberg-Richter Law also appears in the sizes of terrorist attacks worldwide, and may even hold for the mass shootings that have become disturbingly common in recent years.

Although these statistical regularities have improved our ability to estimate the broad brushstrokes of events, we still can’t predict the precise details of the next mass murder.

Because we lack systematic data on the precise stresses and energy build-ups in different parts of the Earth’s crust, we cannot predict earthquakes. Similarly, the contingencies of human behavior make prediction that much harder within complex social systems, such as the ones that generate wars and terrorist attacks. We are far from having complete data. But even if we did have it, we would not know what any particular person would do. We can predict patterns only at the global scale.

Certainty and serendipity

Predicting the progress of science itself also runs aground on hard limits to accurate prediction. In 1964, Arno Penzias and Robert Woodrow Wilson discovered the cosmic microwave background, the noise left over from the early universe. They received a Nobel Prize in 1978 for their discovery that confirmed the Big Bang theory.

But Penzias and Wilson weren’t even looking for the cosmic microwave background when they stumbled upon it. They were trying to detect radio waves bounced off of high-altitude balloon satellites. They kept getting a strange noise from their receiver. They tried to remove the noise by reorienting their antenna, by experimenting both day and night, and by clearing away a family of pigeons nesting in the antenna. But the noise remained. Only after eliminating all of the alternative possibilities did they realize that, in fact, no radio source on Earth, or even within our galaxy, could explain their anomalous readings.

Drs. Penzias and Wilson had stumbled upon the echo of the Big Bang. Who could have predicted that? Alexander Fleming discovered penicillin in 1928 not through the deliberate and predictable processes of the conventional scientific method, but by accident. CRISPR-Cas9, the wonder protein that enables scientists to edit a gene inside a living organism, was discovered by scientists studying an obscure aspect of certain bacteria.

You might think we scientists would be better at predicting discoveries. After all, scientists help choose which scientific projects receive support from taxpayers, which young researchers get hired to run their own labs, and which scientific papers survive peer review. Each of these choices is a kind of prediction that the scientific community depends on.

But the biggest discoveries are often the hardest to predict. We don’t see them coming because they reorganize how we thought the world worked. Big discoveries are valuable precisely because they are fresh and new, whereas predictions are always based on historical patterns.

Predicting that the future will be like the past, that accomplished scientists will continue doing good science, or that a hot area of research will continue to produce new ideas, is easy and natural. But it is also boring and shortsighted, and therefore unlikely to hit upon truly unexpected ideas. Making those big leaps forward, the ones that change the way we understand the world around us, almost always requires a gamble with no guaranteed payout.

When predictability is boring

People like predictability. Predictability keeps us safe, ensuring that your car’s airbag will deploy 99 percent of the time. Predictability allows us to anticipate and plan, and so homes in California must be built to withstand earthquakes. And predictability helps us avoid natural disasters, which is why we invest millions of dollars every year to operate weather satellites.

It may come as a surprise then, that in some systems, we actually seek to make things less predictable.

Professional basketball, it turns out, is one of these systems. Although millions of fans may feel otherwise, physicists and mathematicians have shown that the ups and downs of lead sizes over the course of a game are highly unpredictable. So unpredictable, in fact, that the outcome of most games is only slightly more predictable than guessing whether there will be more heads than tails when flipping 100 coins.

Of course, you can improve your predictions about how a game will evolve if you know something about the teams, with separate calculations for offensive and defensive strengths, star players, coaching acumen, injuries, and the like. Even so, modeling how a game evolves by flipping coins will do a good job of predicting the outcomes of games over a whole season.
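
Here is a toy Python version of that coin-flip picture, with each scoring event treated as a fair coin flip worth one point; the number of scoring events per game is an assumption chosen for illustration, not a fitted parameter.

```python
# A toy version of the coin-flip model of scoring described above: each
# scoring event goes to one team or the other with equal probability, and
# the lead follows a random walk.
import random

def simulate_game(events_per_game: int = 100) -> int:
    """Final lead of one team when every scoring event is a 50/50 coin flip
    worth one point."""
    lead = 0
    for _ in range(events_per_game):
        lead += 1 if random.random() < 0.5 else -1
    return lead

# Over many simulated games, huge blowouts are rare and close finishes are
# common, echoing the near coin-flip unpredictability of real games.
leads = [simulate_game() for _ in range(10_000)]
close_share = sum(abs(lead) <= 10 for lead in leads) / len(leads)
print(f"share of games decided by 10 points or fewer: {close_share:.2f}")
```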

This dramatic unpredictability is puzzling. After all, professional athletes spend enormous energy honing their skills, and teams win or lose depending on how well their players play. To make this less puzzling, let’s consider the spectators.

The fans love exciting games. Huge blowouts are no fun. Team sports are best when their ups and downs, and ultimately their outcomes, are as random as possible. A great game is one in which the teams are evenly matched and battle valiantly. We love it when an underdog manages an upset victory in the nick of time. In other words, we crave limited predictability in our sports.

In the early 1950s, basketball had become a boring game to watch. When one team opened up a lead, the game turned into keep-away, allowing the leading team to effectively freeze the score. There was no randomness, no level playing field, no promise of an exciting finish. Once a team gained a good lead, spectators might as well have headed for the car.

That’s why Danny Biasone, owner of the Syracuse Nationals, advocated for and eventually won the introduction of the 24-second shot clock in the 1954-55 season. The shot clock, which requires a team to attempt a shot within 24 seconds of gaining possession of the ball, unfreezes the score and infuses a greater degree of randomness into every game from beginning to end. In other words, in sports, we crave unpredictability so much that when a game becomes too predictable, we happily change the rules to make it more uncertain.

Unpredictability is an essential ingredient in any form of entertainment. A horror movie isn’t fun without surprising frights, the best jokes always turn on an unexpected element, and love stories are appealing because it seems impossible that the two people will ever get together, yet they do so against all odds.

In team sports and entertainment, then, the limits to prediction are at the core of our interest. Unpredictability keeps us on the edges of our seats and can delight us or break our hearts. The same limits are the source of the gambler’s love of sports, and the dream that one person’s insider information can somehow be translated into a statistical edge at the bookies’ desk.

The future of prediction

Ben Mezrich’s book Bringing Down the House tells an exciting but fictionalized story of the MIT Blackjack Team. But in the 1990s, the real MIT Blackjack Team did go to Las Vegas. Armed with statistics, they turned their edge into cash and walked away with fortunes.

They weren’t the first. The threads of their ideas reach back through history. In the late 1970s, a team from Santa Cruz built miniaturized computers, which they hid in their shoes and used to predict the clattering fate of the roulette wheel. Like the MIT students who would come after them, their predictions weren’t perfect, but they knew that any statistical edge could be turned into winnings.

Earlier still, Ed Thorp’s victories over blackjack in the mid-1960s were the product of carefully exploiting the differences between good and bad prediction. He wrote the book on counting cards in blackjack, and later started a hedge fund. By 1988, Dr. Thorp’s personal investments had grown at an annualized rate of 20 percent for over 28 years.

Thorp’s story begins earlier, too. He built the world’s first wearable computer with Claude Shannon, the Bell Labs scientist whose information theory fathered the age of computers. In 1961, nearly two decades before the Santa Cruz students, Thorp and Shannon had built enough of a statistical edge to beat Nevada’s roulette wheels. Shannon, in turn, worked at Bell Labs with none other than John Kelly Jr.

Kelly never gambled himself, but his formulas taught each of the players who followed how to convert prediction into earnings. In 1953 he quantified the fundamental value of prediction by equating an information edge with earnings and used the horses at the racetrack to illustrate his points. Although most of us have never heard of Kelly, today we use his ideas when making predictions about every part of our complex society.

When the stakes of prediction are high and the unexpected occurs, it’s tempting to throw the baby out with the bathwater and fire the statisticians for the surprises they told us were unlikely. We would, of course, be unwise to do so.

In spite of its limits, the future of prediction has never looked brighter. Those who walk away from statistical predictions are leaving money on the table. In the long run, they will find themselves on the losing side against those willing to read John Kelly’s work.

Daniel B. Larremore is an Omidyar Postdoctoral Fellow at the Santa Fe Institute. His research focuses on developing methods from networks, dynamical systems, and statistical inference to solve problems in social and biological systems.

Aaron Clauset is an assistant professor in the Department of Computer Science and the BioFrontiers Institute at the University of Colorado Boulder, and an external professor and former Omidyar Postdoctoral Fellow at the Santa Fe Institute. He is an internationally recognized expert in network science and computational analyses of complex systems.
