**Author: Feng Zhu**

**Editors: Nayiri Kaissarian, Jimmy Brancho, and Noah Steinfeld**

The first part of this post explained what chaos is, how it was first discovered in studies of the solar system, and why chaotic systems can be difficult to understand. In this second part of the post, we will explore what we can do to get a grip on such systems.

**Dealing with chaos**

Many chaotic systems are deterministic, not random: the evolution of the system is completely specified by its current state. Borrowing Einstein’s words from a different context, God does not play dice with these systems.

Nevertheless, these systems can appear random if we are not careful, especially if we are solving our systems numerically rather than analytically, i.e. by computing approximate solutions rather than using algebra to obtain exact ones. Minute rounding errors introduced in intermediate steps can propagate into significant errors in the output. Unfortunately, avoiding the use of approximations is often not an option: many of our models do not have—as far as we know—exact, “closed-form” analytical solutions, and we can only extract concrete, specific predictions from these models using numerical methods.
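To see how quickly a rounding-sized error can blow up, here is a minimal Python sketch using the logistic map, a standard textbook chaotic system (not one discussed in this post), as a stand-in for a more realistic model:

```python
# A sketch of error propagation in a chaotic system, using the
# logistic map x -> 4x(1-x) as a stand-in for a "real" model.
# A perturbation of size 1e-12 -- comparable to double-precision
# rounding error -- is allowed to grow over 60 iterations.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x_exact = 0.2              # the "true" initial condition
x_perturbed = 0.2 + 1e-12  # the same value with a tiny rounding-style error

max_sep = 0.0
for step in range(60):
    x_exact = logistic(x_exact)
    x_perturbed = logistic(x_perturbed)
    max_sep = max(max_sep, abs(x_exact - x_perturbed))

print(f"largest separation over 60 steps: {max_sep:.3f}")
```

After roughly forty doublings the two trajectories bear no resemblance to each other, even though each step was computed by the same exact, deterministic rule.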

Chaos theory can warn us when we need to be careful with our input parameters by helping us measure the uncertainty in our predictions, hence telling us, for instance, how accurate the input weather data would need to be to guarantee a certain level of accuracy in our weather forecasts. It can also indicate how far in the future we can expect the error in our predictions to remain within a controlled range—a characteristic timescale that goes by the name of “Lyapunov time.”

The solar system, for example, has a Lyapunov time of 5-10 million years. Roughly speaking, this means that after 5-10 million years, the error in predictions made by numerical solutions to Newton’s laws for the solar system will be ten times larger than the uncertainty in our input parameters.
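To give a sense of where such numbers come from, the sketch below estimates the Lyapunov exponent of the logistic map by averaging the local stretching rate along a long trajectory (the solar-system figure comes from vastly larger simulations, but the idea is the same). The reciprocal of the exponent is the Lyapunov time, here in map iterations rather than years; note that this reciprocal measures growth by a factor of e, while the factor-of-10 convention used above rescales it by ln 10.

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> 4x(1-x) as
# the trajectory average of ln|f'(x)|, then convert it to a Lyapunov
# time (iterations needed for an error to grow by a factor of e).
# The exact exponent for this map is known to be ln 2 ~ 0.693.

def f(x):
    return 4.0 * x * (1.0 - x)

def f_prime(x):
    return 4.0 - 8.0 * x

x, n, total = 0.3, 100_000, 0.0
for _ in range(n):
    total += math.log(abs(f_prime(x)))
    x = f(x)

lyapunov_exponent = total / n
lyapunov_time = 1.0 / lyapunov_exponent
print(f"exponent ~ {lyapunov_exponent:.3f}, Lyapunov time ~ {lyapunov_time:.2f} iterations")
```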

**Quantifying chaos**

Modern chaos theory has ways to quantify the central notion of “sensitivity to initial conditions.” In a chaotic system, at each point we can find other arbitrarily close points with different future paths, or trajectories. An arbitrarily small disturbance to the current trajectory may lead to significantly different future behavior.

One way to more precisely quantify this sensitivity is to measure how quickly trajectories spread apart. This calculation produces a quantity called “entropy” (related to, but not exactly the same as, the similarly named concept in physics). Any system with positive entropy is considered chaotic; the higher the entropy of a system, the more quickly nearby trajectories will spread apart, the larger the range of possible outcomes will be, and the more “chaotic” the system is.
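A toy illustration of "measuring how quickly trajectories spread apart": start a tight cloud of initial conditions under the logistic map and watch its width grow. The average exponential growth rate of such separations is what entropy-like quantities make precise; this sketch is illustrative, not the formal definition.

```python
# Track the width of a tight cloud of initial conditions under the
# logistic map x -> 4x(1-x).  The width grows roughly exponentially
# (doubling per step, on average, for this map) until it saturates
# at the size of the whole interval.

def f(x):
    return 4.0 * x * (1.0 - x)

cloud = [0.3 + i * 1e-9 for i in range(100)]  # 100 points within ~1e-7 of each other
widths = []
for step in range(30):
    cloud = [f(x) for x in cloud]
    widths.append(max(cloud) - min(cloud))

print(f"initial width ~1e-7, width after 30 steps: {widths[-1]:.3f}")
```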

Entropy is closely related to the Lyapunov time mentioned above; there are also other measures of how chaotic a system is, such as rates of mixing.

**Harnessing chaos**

Although chaotic systems are, by and large, not actually random, they exhibit some of the qualitative or large-scale features of random systems. As chaotic systems evolve over long periods of time, statistical quantities such as means and variances begin to obey precise and well-understood statistical laws that also apply to random systems, such as the Law of Large Numbers and the Central Limit Theorem, giving us another way—a fundamentally more powerful one—of understanding chaotic systems.

We cannot know for sure the long-term fate of the solar system, but we can learn something about its fate in a statistical sense, by running simulations of the solar system with a family of slightly different values for the input parameters and following their evolution. It turns out that in about 99 percent of these systems, the orbits of all the planets remain stable until the death of the Sun. Thus, the answer to the question of the stability of the solar system becomes neither “yes” nor “no” but “yes, with 99 percent probability.”
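Here is a toy version of such an ensemble experiment, again using the logistic map rather than a planetary model: run many copies of the system from slightly perturbed initial conditions and report the fraction that end up in some region. The specific "event" and the parameters below are arbitrary choices for illustration, not facts about the solar system.

```python
import random

# An ensemble experiment in miniature: 1000 runs of the logistic map
# x -> 4x(1-x), each starting from the nominal value 0.3 plus a tiny
# random uncertainty.  The output is a probabilistic prediction --
# the fraction of ensemble members for which the "event" occurs.

def f(x):
    return 4.0 * x * (1.0 - x)

random.seed(0)
n_members = 1000
count = 0
for _ in range(n_members):
    x = 0.3 + random.uniform(-1e-6, 1e-6)  # uncertainty in the input
    for _ in range(50):
        x = f(x)
    if x > 0.9:                            # an arbitrary "event" to forecast
        count += 1

print(f"estimated probability of the event: {count / n_members:.2f}")
```

The individual runs are useless as point predictions after a few dozen steps, but the ensemble fraction is a stable, repeatable number.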

Both of these ideas—understanding the uncertainty present in a system, as well as harnessing the statistical regularities that can appear in chaotic systems—are present in and contribute to the power of ensemble weather forecasting. We may not know for sure whether there will be a massive snowstorm in a week, but we can say, for example, “there is a 70 percent chance of heavy snow” with reasonable confidence.

It turns out that systems that may be best described by chaotic models are everywhere around us: in everything from star systems and planets to the weather to epidemiology and population models. To understand these systems, and to use the chaotic models that describe them to obtain useful insights, we need to either pay careful attention to the errors propagating through our models as we go from input data to output predictions, or be willing to work with predictions of a probabilistic nature.

*About the author*

Feng Zhu just finished his fourth year in the Math PhD program at the University of Michigan. He studies geometry and topology, or more specifically the weird and wonderful things that groups of symmetries of negatively-curved spaces can do. He was born in Shanghai, China, grew up mostly in Singapore, went to college in Princeton, New Jersey, and finally came to Ann Arbor for graduate school (in short, he realized that the tropics aren’t all that cool and started migrating back towards the cold). When not doing math, he enjoys running, traveling, reading, attempting to learn languages, and playing the keyboard.

Read all posts by Feng here.