2 May 2009

Mandelbrot's Natural Wild Random

by Corry Shores
[Search Blog Here. Index-tags are found on the bottom of the left column.]

[Central Entry Directory]
[Mandelbrot Entry Directory]


Our brains are uncertain, probabilistic computer systems. This uncertainty assists our adaptation. Irregularities and interferences are sources of this essential neural randomness. Some researchers have found that a certain sort of noise predominates in our brain’s operations. It is called pink noise, or 1/f noise. This sort of noise exhibits a randomness that fractal mathematician Benoit Mandelbrot calls “wild random.” He articulates this sort of unpredictability in his writings on economics. Our aim is different. We want to characterize and describe the wild randomness inherent to our brain’s ‘computations.’ However, to uncover Mandelbrot’s description, we will need to learn what he has to say about financial markets. An added advantage is that it will help us understand the current financial crisis.

[Given how extraordinarily unqualified I am to explain mathematical matters, I will largely quote from Mandelbrot’s texts. If a mathematically-minded reader could offer corrections or additions, I would be deeply grateful.]

Mandelbrot’s basic premise is this:

For too long, market methods were based on an oversimplified “bell-curve” model for market dynamics. This presupposes that significant price-changes are extraordinarily rare. It also presumes that each day’s changes are like a flip of the coin. If you flip heads first, there is still a fifty-fifty chance you will flip heads the second time.

Mandelbrot argues that wild market-changes are unpredictable, but not rare. Moreover, seemingly small changes one day can be found to periodically “ripple” into the future. So we are dealing with two qualitatively different forms of randomness: the simple or “mild” randomness of the coin-flip, and the chaotic, complex “wild” randomness of the markets (and, as we will later see, of our consciousness).
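The "mild" coin-flip randomness can be checked directly: in a fair, memoryless sequence, a head is no more likely after a head than after a tail. Here is a minimal Python sketch (not from Mandelbrot's text; the simulation size and seed are arbitrary):

```python
import random

def heads_after_heads(n_flips: int, seed: int = 1) -> float:
    """Flip a fair coin n_flips times and return the fraction of
    flips immediately following a head that are also heads."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]
    # Keep only flips whose predecessor was a head.
    after_heads = [b for a, b in zip(flips, flips[1:]) if a]
    return sum(after_heads) / len(after_heads)

print(round(heads_after_heads(1_000_000), 3))  # close to 0.5: no memory
```

Whatever happened on the last flip, the next one remains fifty-fifty; that is the independence assumption built into the bell-curve model.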

Mandelbrot & Hudson offer illuminating illustrations:

In the summer of 1998, the improbable happened.

August 4, the Dow Jones Industrial Average fell 3.5 percent. Three weeks later, as news from Moscow worsened, stocks fell again, by 4.4 percent. And then again, on August 31, by 6.8 percent.

The hammer blows were shocking – and for many investors, inexplicable. It was a panic, irrational and unpredictable. (Mandelbrot & Hudson 3, emphasis mine)

The odds of getting three such declines in the same month were even more minute: about one in 500 billion. Surely, August had been supremely bad luck, a freak accident, an "act of God" no one could have predicted. In the language of statistics, it was an "outlier" far, far, far from the normal expectation of stock trading.

Or was it? The seemingly improbable happens all the time in financial markets. A year earlier, the Dow had fallen 7.7 percent in one day. (Probability: one in 50 million.) In July 2002, the index recorded three steep falls within seven trading days. (Probability: one in four trillion.) And on October 19, 1987, the worst day of trading in at least a century, the index fell 29.2 percent. The probability of that happening, based on the standard reckoning of financial theorists, was less than one in 10^50 – odds so small they have no meaning. It is a number outside the scale of nature. You could span the powers of ten from the smallest subatomic particle to the breadth of the measurable universe – and still never meet such a number. (4c)

Despite this wild behavior, economic theorists have devised ways to determine risks. So even though investors might not know how their investment will fare on a given day, they at least know there is a certain percentage chance that it will go up or down.

In 1900, one such theorist, French mathematician Louis Bachelier, used the “random walk” model to describe market changes.

It postulates prices will go up or down with equal probability, as a fair coin will turn heads or tails. If the coin tosses follow each other very quickly, all the hue and cry on a stock or commodity exchange is literally static – white noise of the sort you hear on a radio when tuned between stations. And how much prices vary is measurable. Most changes, 68 percent, are small moves up or down, within one "standard deviation" – a simple mathematical yardstick for measuring the scatter of data – of the mean: 95 percent should be within two standard deviations; 98 percent should be within three. Finally – this will shortly prove to be very important – extremely few of the changes are very large. If you line all these price movements up on graph paper, the histograms form a bell shape: The numerous small changes cluster in the center of the bell, the rare big changes at the edges. (9-10)
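The percentages in Bachelier's picture can be reproduced empirically: draw many "price changes" from a normal distribution and count how many land within one and two standard deviations. A small sketch (the sample size and seed are arbitrary choices, not from the book):

```python
import random

def sd_coverage(n: int = 200_000, seed: int = 2):
    """Draw n standard-normal 'price changes' and report what share
    falls within one and within two standard deviations of the mean."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    within1 = sum(abs(x) <= 1 for x in xs) / n
    within2 = sum(abs(x) <= 2 for x in xs) / n
    return within1, within2

w1, w2 = sd_coverage()
print(round(w1, 2), round(w2, 2))  # roughly 0.68 and 0.95
```

Histogram those same draws and the bell shape of the quotation emerges: a tall center of small moves, thin edges of rare big ones.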

[Image credits listed below. Click for an enlargement. Image 1]


The bell shape is, for mathematicians, terra cognita, so much so that it came to be called "normal" – implying that other shapes are "anomalous." It is the well-trodden field of probability distributions that came to be named after the great German mathematician Carl Friedrich Gauss. An analogy: The average height of the U.S. adult male population is about 70 inches, with a standard deviation around two inches. That means 68 percent of all American men are between 68 and 72 inches tall; 95 percent between 66 and 74 inches; 98 percent between 64 and 76 inches. The mathematicians of the bell curve do not entirely exclude the possibility of a 12-foot giant or even someone of negative height, if you can imagine such monsters. But the probability of either is so minute that you would never expect to see one in real life. The bell curve is the pattern ascribed to such seemingly disparate variables as the height of Army cadets, IQ test scores or – to return to Bachelier's simplest model – the returns from betting on a series of coin tosses. To be sure, at any particular time or place extraordinary patterns can result: One can have long streaks of tossing only "heads," or meet a squad of exceptionally tall or dim soldiers. But averaging over the long run, one expects to find the mean: average height, moderate intelligence, neither profit nor loss. (10c-d)
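The exact shares under the bell curve can be computed with the error function rather than simulated; the probability of lying within k standard deviations of the mean is erf(k/√2). A short check:

```python
import math

def within_k_sd(k: float) -> float:
    """Probability that a normal variable lies within k standard
    deviations of its mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

# Heights: mean 70 in, SD 2 in, so 68-72 in is +/-1 SD, 66-74 is +/-2 SD.
for k in (1, 2, 3):
    print(k, round(within_k_sd(k), 4))
```

These are the textbook 68 and 95 percent figures; the three-sigma share comes out near 99.7 percent.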


Bachelier's model was later adopted, modified, and termed "the Efficient Market Hypothesis."

This hypothesis holds that in an ideal market, all relevant information is already priced into a security today. One illustrative possibility is that yesterday's change does not influence today's, nor today's, tomorrow's; each price change is "independent" from the last. (11a)

This theory allowed fund managers to build "efficient" portfolios that "target a specific return, with a desired level of risk." (11b)

The old financial orthodoxy was founded on two critical assumptions in Bachelier's key model: Price changes are statistically independent, and they are normally distributed. The facts ... show otherwise. (11d)


Price changes are not independent of each other. ... Many financial price series have a "memory," of sorts. Today does, in fact, influence tomorrow. If prices take a big leap up or down now, there is a measurably greater likelihood that they will move just as violently the next day. It is not a well-behaved, predictable pattern of the kind economists prefer – not, say, the periodic up-and-down procession from boom to bust with which textbooks trace the standard business cycle. (12a)

What a company does today – a merger, a spin-off, a critical product launch – shapes what the company will look like a decade hence; in the same way, its stock-price movements today will influence movements tomorrow. (12c)

Mandelbrot & Hudson elaborate on this phenomenon.

How much does the past shape the future? A moral philosopher would phrase it this way: Is it fate that determines our course, or do we choose our paths afresh with each new decision? A mathematician trades in another terminology: Is one event dependent on another, or independent from it? If Event B is dependent on Event A, then A’s occurrence changes the odds of B happening. If a basketball player sinks two shots in a row, evidence suggests, odds are greater that his third shot will also score. By prowess or psychology, a player can have “hot” streaks; successive shots are, to some degree, dependent on one another. But how long will his scoring streak last? (181d)

With economic quantities – production, inflation, unemployment – some form of dependence is the rule, and economists crank the numbers through cookbook tests to measure how strong it is, and over how many time-periods it extends. If inflation jumps in April, how likely is it to rise in May? How about two periods later, in June? Three? For each time-lag, economists measure the strength of the correlation, and that strength can vary between an arbitrary value of 1 for events that move in perfect lockstep, and -1 for events that always zig when the other zags. Zero, in the middle, means no correlation at all; events bounce around with no regard to one another. There are an infinite number of intermediate values on that -1 to +1 correlation scale. Each one tells a different story of the sign and strength of the short-term dependence. Most often, the strongest correlations are the short-term ones between periods close together; the weakest are those between periods far apart. If you plot the correlations, from short-term to long-range, you get a rapidly falling curve. How fast it falls varies from one economic quantity to another. Inflation is “persistent”: Its curve falls rather slowly. Once inflation gets going, it is difficult to slow – as central bankers discovered in the late 1970’s. (182c)
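The falling correlogram the passage describes can be reproduced with a toy "persistent" series. Below, each value is 0.9 of the previous one plus fresh noise (an AR(1) process; the 0.9 persistence, series length, and seed are all my assumptions, chosen only to make the decay visible):

```python
import random

def lag_correlation(xs, lag):
    """Sample correlation between a series and itself shifted by lag."""
    a, b = xs[:-lag], xs[lag:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

# A 'persistent' series: each value is 0.9 of the last plus noise.
rng = random.Random(3)
xs = [0.0]
for _ in range(50_000):
    xs.append(0.9 * xs[-1] + rng.gauss(0.0, 1.0))

for lag in (1, 5, 10):
    print(lag, round(lag_correlation(xs, lag), 2))
```

Adjacent values are strongly correlated; as the lag grows, the correlation falls away, just as the quoted correlograms do.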

There are correlations that decrease, but so slowly that they seem never to vanish completely, no matter how far back in time you go. How is that possible?

Say a series of wet years fills the [dam] reservoir. Then, some years of mostly moderate weather follow – but the reservoir is full; the prior wet years are still having an effect. Then some dry years arrive. Now the reservoir is emptying. But it has more water than it otherwise would; still, the prior wet years are having an effect. You can get a glimpse of this in the chart below, of ring-widths in some of the world’s oldest trees, ancient bristlecones on Mount Campito, in the White Mountains of California.

The curve starts out as in most such charts, called correlograms, with high correlations for short time-periods: Adjacent tree-rings, the marks of growth only a year or two apart, are highly correlated. Beyond a few years, the correlations fall; the pattern from one decade or century to the next is more haphazard. But the correlations fall more slowly than expected. In fact, it is 150 years before they are so insignificant that the usual tests are powerless to distinguish them statistically from chance. (183b-d)

This is a long-term dependence. It is a subtle idea. A pure radioactive substance decays geometrically in time. After one half-life, only half is left; after two half-lives, only a quarter, then an eighth; and then it is practically gone. But consider a mixture of different radioactive substances, such that very short, medium, long, and very long values of the half-life are present. When the short half-life components are practically all gone, the others have barely begun to decay; their effect will endure. That is long-term dependence. (184-185)
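Mandelbrot's radioactive-mixture picture is easy to make concrete: a single substance decays geometrically and is soon practically gone, while an equal mixture of short, medium, long, and very long half-lives still has plenty left when the short components have vanished. A small sketch (the particular half-lives are illustrative choices, not from the text):

```python
import math

def mixture_remaining(t, half_lives):
    """Fraction remaining at time t of an equal mixture of substances
    with the given half-lives: average of 0.5 ** (t / h)."""
    return sum(0.5 ** (t / h) for h in half_lives) / len(half_lives)

half_lives = [1, 10, 100, 1000]        # short to very long components
single = lambda t: 0.5 ** (t / 1)      # a pure substance, half-life 1

for t in (1, 10, 100):
    print(t, round(single(t), 6), round(mixture_remaining(t, half_lives), 3))
```

By time 100 the pure substance is gone to one part in a nonillion, yet over a third of the mixture remains: the long half-life components are still barely decaying. That lingering effect is long-term dependence.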

No one is alone in this world. No act is without consequences for others. It is a tenet of chaos theory that, in dynamical systems, the outcome of any process is sensitive to its starting point – or, in the famous cliché, the flap of a butterfly’s wings in the Amazon can cause a tornado in Texas. I do not assert markets are chaotic.... But clearly, the global economy is an unfathomably complicated machine. To all the complexity of the physical world of weather, crops, ores, and factories, you add the psychological complexity of men acting on their fleeting expectations of what my or may not happen – sheer phantasms. Companies and stock prices, trade flows and currency rates, crop yields and commodity futures – are all inter-related to one degree or another, in ways we have barely begun to understand. In such a world, it is common sense that events in the distant past continue to echo in the present. (185c-d, emphasis mine)


price changes are very far from following the bell curve. If they did, you should be able to run any market's price records through a computer, analyze the changes, and watch them fall into the approximate "normality" assumed by Bachelier's random walk. They should cluster about the mean, or average, of no change. In fact, the bell curve fits reality very poorly. From 1916 to 2003, the daily index movements of the Dow Jones Industrial Average do not spread out on graph paper like a simple bell curve. The far edges flare too high: too many big changes. Theory suggests that over that time, there should be fifty-eight days when the Dow moved more than 3.4 percent; in fact, there were 1,001. Theory predicts six days of index swings beyond 4.5 percent; in fact, there were 366. And index swings of more than 7 percent should come once every 300,000 years; in fact, the twentieth century saw forty-eight such days. (13a)
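The Gaussian side of this comparison can be recomputed. Below I assume roughly 22,000 trading days between 1916 and 2003 and a daily standard deviation of about 1.1 percent; both figures are my assumptions (the book uses its own fitted values), so the counts differ in detail from the book's, but the order of magnitude makes the same point:

```python
import math

def expected_big_days(threshold_pct, sigma_pct, n_days):
    """Expected number of days with |move| > threshold under a
    Gaussian model with the given daily standard deviation."""
    z = threshold_pct / sigma_pct
    tail_prob = math.erfc(z / math.sqrt(2))   # two-sided tail probability
    return n_days * tail_prob

# ~22,000 trading days from 1916 to 2003; daily SD of 1.1% is assumed.
print(round(expected_big_days(3.4, 1.1, 22_000), 1))
print(round(expected_big_days(4.5, 1.1, 22_000), 2))
```

The bell curve budgets a few dozen 3.4-percent days and about one 4.5-percent day over the whole period; the Dow actually delivered 1,001 and 366.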

Below is the Dow, showing its different values from 1916 to 2000.

We might think that as it rose, its daily changes became less volatile. However, look below at the chart that traces the daily changes.

You see that even as the Dow climbed, it still saw many incredible drops. But then, that could just be because the market's value is so high. Thus a large drop at its height is only as significant as a smaller but proportional fall during its lower period. Yet consider the chart below. Here the values are configured so that a 1 percent change at its height in 2000 shows the same as a 1 percent change near its low points during the Great Depression.

Now let's compare the Dow with a simulation of a market that follows a bell-curved "Bachelier Brownian motion model." The top shows the values for different times. The bottom shows the daily changes.

We see that the bell-curved development is far more homogeneous, even though it is based on pure chance.

We can also consider how far the changes deviate from the mean value. We see that clearly the Dow does not fit within the standard deviation range of the bell-curved Brownian market.

Below displays together both the Dow's and the artificial Brownian market's deviations. We see just how much wilder the Dow's changes are.

The market's more 'wild' deviations are characteristic of the patterns in nature.

Examine price records more closely, and you typically find a different kind of distribution than the bell curve: The tails do not become imperceptible but follow a "power law." These are common in nature. The area of a square plot of land grows by the power of two with its side. If the side doubles, the area quadruples; if the side triples, the area rises nine-fold. (13bc, emphasis mine)

A power law also applies to positive or negative price movements of many financial instruments. It leaves room for many more big price swings than would the bell curve. (13d)
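The difference between a power-law tail and a bell-curve tail is easy to tabulate. Below I use a Pareto-style tail with exponent 3 purely as an illustrative choice (Mandelbrot's fitted exponents for markets differ), against the Gaussian tail computed with the complementary error function:

```python
import math

def gaussian_tail(k):
    """P(|X| > k sigma) for a normal variable."""
    return math.erfc(k / math.sqrt(2))

def power_law_tail(k, alpha=3.0):
    """P(X > k) for a power law with exponent alpha, tail starting at 1."""
    return k ** (-alpha)

for k in (2, 5, 10):
    print(k, gaussian_tail(k), power_law_tail(k))
```

The Gaussian tail collapses super-exponentially; by ten standard deviations it is around 10^-23, while the power law still allows one event in a thousand. That is the "room for many more big price swings."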

And trouble runs in streaks.

They know that a wild Tuesday may well be followed by a wilder Wednesday. And they also know that it is in those wildest moments – the rare but recurring crises of the financial world – the biggest fortunes of Wall Street are made and lost. (21a, emphasis mine)

Mandelbrot & Hudson describe two ways of viewing chance: the Garden of Eden and the Black Box.

1) Garden of Eden. Chance is what we do not know.

The first is cause-and-effect, or deterministic.

If only we had the vast knowledge of God, everything could be understood and predicted. Scientists thought this way.

The great French mathematician, the Marquis Pierre-Simon de Laplace, asserted that he could predict the future of the cosmos – if only he knew the present position and velocity of every particle in it. (27c)

But we cannot know everything. Instead we see the world as a

2) Black Box. We cannot understand overwhelming complexities, but we can measure the proportions of chance distributions.

We can see what goes into the box and what comes out of it, but not what happens inside; we can only draw inferences about the odds of input A producing output Z. Seeing nature through the lens of probability theory is what mathematicians call the stochastic view. The word comes from the Greek stochastes, a diviner, which in turn comes from stokhos, a pointed stake used as a target by archers. We cannot follow the path of every molecule in a gas; but we can work out its average energy and probable behavior. (28b, emphasis mine)

In Chaos & Complexity, Brian Kaye defines stochastic process this way:

The mathematician describes a process in which a series of events is determined by chance, independently of what has happened previously, as a stochastic process. This comes from a Greek word meaning "to guess." The idea is that if one is looking at a stochastic variable such as the sequence of heads and tails displayed by a flipped coin, then one can never predict the outcome of any one particular flip. One can only guess which number will show. In essence, a stochastic variable is one which we have to guess. A sequence of events in which a new event is independent of the previous event, but in which the chain of events generates a final physical quantity such as accumulated gambling gains is known as a Markovian chain. (Kaye 40-41, emphasis mine)
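Kaye's accumulated gambling gains can be sketched directly: each flip is independent, but the running total is a chain in which each new value depends only on the current one. A minimal Python illustration (the seed and game length are arbitrary):

```python
import random

def gambling_walk(n_flips: int, seed: int = 4):
    """Accumulated gains from a fair coin game: +1 on heads, -1 on
    tails. Each flip is independent, but the running total forms a
    Markovian chain: its next value depends only on its current one."""
    rng = random.Random(seed)
    total, path = 0, [0]
    for _ in range(n_flips):
        total += 1 if rng.random() < 0.5 else -1
        path.append(total)
    return path

path = gambling_walk(10)
print(path)
```

Each step moves the purse up or down by exactly one franc; the sequence of purses, not the individual flips, is the Markovian chain.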

Mandelbrot & Hudson go on to explain what we mean by chance and random.

But to say the record of their transactions, the price chart, can be described by random processes is not to say the chart is irrational or haphazard; rather, it is to say it is unpredictable.

The English phrase "at random" adapts a medieval French phrase, à randon. It denoted a horse moving headlong, with a wild motion that the rider could neither predict nor control. Another example: In Basque, "chance" is translated as zoria, a derivative of zhar, or bird. The flight of a bird, like the whims of a horse, cannot be predicted or controlled. (Mandelbrot & Hudson, The (mis)Behavior of Markets, 29-30, emphasis mine)

Kaye offers his explanation for the terms.

The dictionary tells us that the word random comes from an old French word randir meaning “to gallop”. The idea was that in the warfare of mediaeval times a knight on horseback would move hither and thither without any obvious plan of attack. A strategy probably aided by the fact that the knight had drunk a large quantity of ale or other alcoholic drink before he had the courage to face the enemy. (Kaye 38)

Chance is defined in a dictionary as “events which happen without assignable cause or an unexpected event”. It comes from the Latin word cadere, which means “to fall”. The meaning of unpredictable behavior in the word chance can probably be traced back to the fact that witch doctors and their relatives in the Roman empire used to pretend to predict what was going to happen in the future by letting several objects, such as chicken bones or sticks, fall onto the floor and interpreting the pattern created by the falling objects. (Kaye 40)

Simple vs. Complex Chance

Mandelbrot & Hudson go on to explain the difference between simple and complex chance. They quote Kolmogorov: "chance phenomena, considered collectively and on a grand scale, create a non-random regularity." To that Mandelbrot & Hudson add, "Sometimes the regularity can be direct and awesome, at other times strange and wild." (30d, emphasis mine)

To understand this regularity, we are to imagine a game.

Imagine Harry wins a Swiss franc on heads, and his brother Tom wins one on tails. ... Each toss is pure luck. But after these three centuries of playing the game, millions and millions of times, each brother has every reason to expect to have won half of the time. Such is the dictate of the law of large numbers, a common-sense notion also approved by mathematicians: If you repeat a random experiment often enough, the average of the outcomes will converge towards an expected value. With a coin, heads and tails have equal odds. ... This is what Kolmogorov meant. (31a-b)

If we denoted heads as 1 and tails as 2, we would obtain an enormous list like this. [4]
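The law of large numbers the quotation invokes is simple to watch in action: the fraction of heads wobbles for small samples and settles toward one half as the tosses accumulate. A quick sketch (sample sizes and seed are arbitrary):

```python
import random

def heads_fraction(n_tosses: int, seed: int = 5) -> float:
    """Fraction of heads in n_tosses of a fair coin."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_tosses)) / n_tosses

for n in (100, 10_000, 1_000_000):
    print(n, round(heads_fraction(n), 4))
```

This convergence of the average is Kolmogorov's "non-random regularity" emerging from chance on a grand scale.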

But other aspects of the game get more complicated. At any particular moment, one brother may have accumulated far more winnings than the other. (Mandelbrot & Hudson, 32c)
An erratic, but pronounced pattern appears: A few long, up-and-down cycles stand out, while many shorter cycles ride on top of them. The "zero-crossings" – the movements when the imaginary purses of Harry or Tom go back to the empty state at which they started – are not uniformly spread but cluster together. It is structure of an irregular kind. (32c)

Consider how Kaye shows such streaks between Fred and Frieda. Notice the points where the winnings switch over to the other player. [5]

If played enough times, these spaces when one person is winning can become extraordinarily long. Here is Kaye's table for streaks lasting up to 10,000,000 throws. [6]

This is Mandelbrot's & Hudson's chart for 10,000 throws.

The randomness that makes heads come up one time and tails another is qualitatively different from the sort that produces extraordinarily long streaks at some times and many rapid changes at others. In other words, there is a change to the change, or as Mandelbrot puts it, a volatility to the volatility. It is a sort of unpredictability in the way change changes, like Deleuze's intensive speed changes.

On the basis of this qualitative difference, Mandelbrot distinguishes three types of randomness: mild, slow, and wild.

A key point in my work: Randomness has more than one "state," or form, and each, if allowed to play out on a financial market, would have a radically different effect on the way prices behave. One is the most familiar and manageable form of chance, which I call "mild." It is the randomness of a coin toss, the static of a badly tuned radio. Its classic mathematical expression is the bell curve, or "normal" probability distribution – so-called because it was long viewed as the norm in nature. Temperature, pressure, or other features of nature under study are assumed to vary only so much, and not an iota more, from the average value. At the opposite extreme is what I call "wild" randomness. This is far more irregular, more unpredictable. It is the variation of the Cornish coastline – savage promontories, craggy rocks, and unexpected calm bays. The fluctuation from one value to the next is limitless and frightening. In between the two extremes is a third state, which I call "slow" randomness. (32c-d, emphasis mine)

Below are some images of Cornwall's coast. [7]


Think about the three – mild, slow, and wild – as if the realm of chance were a world in its own right, with its own peculiar laws of physics. Mild randomness, then, is like the solid phase of matter: low energies, stable structures, well-defined volume. It stays where you put it. Wild randomness is like the gaseous phase of matter: high energies, no structure, no volume. No telling what it can do, where it will go. Slow randomness is intermediate between the others, the liquid state. (33a, emphasis mine)

Mandelbrot & Hudson further characterize mild randomness.

Say Harry or Tom keeps a record – such as Feller's diagram reproduced earlier – of the deviations from the expected average of zero. Like in tennis, divide the game into "sets," each made of one million tosses, and record how much Harry won during the first set, the second, and so on. The size of the per-set purse will vary greatly, of course. It will often be about zero. But often, theory suggests, it will range in the favor of one brother or another – "typically," by 1,000 tosses. And on rare occasion, the "error," or deviation from the average they expect the coin to produce, will be far, far greater. If the brothers then graph the results in a "histogram" with a different-height bar for the number of times each score occurred, then the bars will start to form a familiar pattern. The numerous small winnings group around the expected average, zero – the tall center of the chart. The rare, fat purses go to the two extreme edges. Trace across the tops of all the bars, and you see the profile of the bell curve emerging. (36-37)

Some bells may be squatter, and some narrower. But each has the same mathematical formula to describe it, and requires just two numbers to differentiate it from any other: the mean, or average, error, and the variance or standard deviation, an arbitrary yardstick that expresses how widely the bell spreads. (36c)

To better understand wild randomness, we need to think of a different sort of game.

The Blindfolded Archer's Score

I think the theory is best imagined in terms of an archer standing before a target painted on an infinitely long wall. He is blindfolded and consequently shoots at random, in any direction. Most of the time, of course, he misses. In fact, half of the time he shoots away from the wall, but let us not even record those cases. Now, had his recorded misses followed the mild pattern of a bell curve, most would be fairly close to the mark, and very few would be very wide of it. Suppose he shot arrows long enough, in successive "sets." For each set, he could calculate an average error and standard deviation – even give himself a score for blindfolded archery. But our archer is not in the land of the bell curve; his misses are not mild. All too often, his aim is so bad that the arrow flies almost parallel to the wall and strikes hundreds of yards from the target, or even a mile, if his arm is strong enough. Now, after each shot, let him try to work out his average target score. In the Gaussian environment, even the wildest shots have a negligible contribution to the average. After a certain number of strikes, the running average score will have settled down to one stable value, and there is practically no chance the next shot will change that average perceptibly. (36-37)

This wilder distribution is of a Cauchy sort. It may vary infinitely from the norm.

But the Cauchy case is completely different. The largest shot will be nearly as large as the sum of all the others. One miss by a mile completely swamps 100 shots within a few yards of the target. His scores for blindfolded archery never settle down to a nice, predictable average and a consistent variation around that average. In the language of probability, his errors do not converge to a mean. They have infinite expectation, hence also infinite variance.

Cauchy's is a totally different way of thinking of the world than Gauss's. The errors are not distributed as near-uniform grains of sand; they are a composite of grains, pebbles, boulders, and mountains. (37-38c, emphasis mine)
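The blindfolded archer is exactly the Cauchy distribution: if the shooting angle is uniform, the arrow lands at the tangent of that angle. Comparing the running average of Gaussian misses with Cauchy misses shows the failure to converge (shot count and seed are arbitrary):

```python
import math
import random

def running_means(samples):
    """Running average after each successive sample."""
    total, means = 0.0, []
    for i, x in enumerate(samples, 1):
        total += x
        means.append(total / i)
    return means

rng = random.Random(8)
n = 100_000
# Gaussian archer: misses follow a bell curve.
gauss = [rng.gauss(0.0, 1.0) for _ in range(n)]
# Blindfolded archer: the arrow lands at tan(angle), angle uniform
# across the half-circle facing the wall -- the Cauchy distribution.
cauchy = [math.tan(rng.uniform(-math.pi / 2, math.pi / 2)) for _ in range(n)]

g_means = running_means(gauss)
c_means = running_means(cauchy)
print("Gaussian running mean after all shots:", round(g_means[-1], 3))
print("Cauchy running mean after all shots:", round(c_means[-1], 3))
```

The Gaussian running mean pins itself to zero; the Cauchy one never settles, because a single mile-wide miss can swamp everything that came before. That is infinite expectation and infinite variance in action.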

These two games illustrate mild and wild chance.

the difference between the extremes of Gauss and of Cauchy could not be greater. They amount to two different ways of seeing the world: one in which big changes are the result of many small ones, or another in which major events loom disproportionately large. "Mild" and "wild" chance, described earlier, are my generalizations from Gauss and Cauchy. (39c)

Mandelbrot & Hudson illustrate with more examples.

Under a microscope, the edge of a sharp razor blade looks a bit ragged. It has random pits and bumps, but they appear to be minor imperfections on an approximately straight edge. You can easily spot the dominant trend. This is mild variation. (39c) [9], [10]

By contrast, consider the rugged coastline of Brittany: Does it really have an "average" outline, like that of the razor blade? Only from the very great height of a satellite, where the familiar map shape can be imagined. (39c)


But from closer up, in an airplane or from a tower, the tortuous, random details of the promontories and bays, crags and hollows obscure the image. The coastline is wild. (39c, emphasis mine)


Yet a third example, this time in electronics. If you run a steady electrical current through a copper wire, you can "hear" it on a loudspeaker as a steady, white noise – the static of mild variation, due to the thermal excitation of the electrons. (39c, emphasis mine)


But if you try to run computer data down a very long wire, you will pick up irregular, intermittent "pops" and crackles on the line. (39d)


Engineers call this 1/f noise, and it is the bane of computer communications, causing transmission errors. It cannot be predicted or prevented; it can only be accommodated, with error-correcting software. That is wild variation. (39-41, emphasis mine)

Wild randomness is uncomfortable.

There is much in economics that is best described by this wilder, unpleasant form of randomness – perhaps because economics is about not just the physics of wheat, weather, and crop yields, but also the mercurial moods and unmeasurable anticipations of wheat farmers, traders, bakers, and consumers. (41b, emphasis mine)

Mandelbrot & Hudson offer this remarkable example.

A Citigroup study in 2002 found unpleasantly sharp price swings in several currencies – dollar, euro, yen, pound, peso, zloty, even the Brazilian real. On one day, the dollar vaulted over the yen by 3.78 percent. That is 5.1 standard deviations, or 5.1σ, from the average. If exchange rates were Gaussian that would be expected to happen once in a century. But the biggest fall was a heart-stopping 7.92 percent, or 10.7σ. The normal odds of that: Not if Citigroup had been trading dollars and yen every day since the Big Bang 15 billion years ago should it have happened, not once. (97a-b)
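The sigma-to-frequency arithmetic behind such claims is pure Gaussian tail calculation, and can be checked with the complementary error function (this is only the model's own arithmetic, not a claim about actual exchange rates):

```python
import math

def days_per_event(sigma: float) -> float:
    """Under a Gaussian model, the expected number of days between
    one-sided daily moves beyond `sigma` standard deviations."""
    tail = 0.5 * math.erfc(sigma / math.sqrt(2))
    return 1.0 / tail

print(f"{days_per_event(5.1):.3e}")    # days between 5.1-sigma moves
print(f"{days_per_event(10.7):.3e}")   # days between 10.7-sigma moves
```

A 10.7σ move is budgeted roughly once per 10^26 days under the bell curve, vastly longer than the age of the universe (about 5 × 10^12 days at 15 billion years); yet the market produced one.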

To explain the unpredictable roughness of natural edges, Mandelbrot uses the concept of fractals.

The natural sciences have largely ignored the property of roughness. They favored instead a Euclidean geometrical idealization of the world around them. But roughness is precisely what makes things natural (rather than abstractions).

How many natural objects around you really fit these old Greek patterns?

Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line. (124a-b)

In the past, scientists did their best to view the irregularities of nature as minor imperfections from an idealized shape – like the slight fuzz on an otherwise perfectly smooth peach skin, or the minor distension and dimpling of an otherwise spherical orange. The same assumption stood behind the reasoning of Gauss and Legendre two centuries ago, when they developed the least-squares method of estimating a planetoid's "true," elliptical orbit from a mess of imprecise telescope readings. Once tools like least-squares became available and familiar, other scientists found it easy to follow without much question. For instance, metallurgists used to measure the roughness of a surface or metal fracture by the very same least-squares method – even though they found, puzzlingly, different roughness estimates when measuring different portions of the same metal sample. The same occurs in finance: The "roughness" of a price chart is commonly measured by its volatility – yet that volatility, analysts find, is itself volatile. My contribution was, foremost, to recognize that in turbulence and much else in the real world, roughness is no mere imperfection from some ideal, not just a detail from a gross plan. It is of the very essence of many natural objects – and of economic ones. (124c-125, emphasis mine)

And natural roughness involves the mixture of order & chaos, predictability & unforeseen, chance & determination, rule & rebellion.

Consider first a fractal form that does not involve chance: the basic Koch Curve.

He then writes:

So far, all the fractals in this gallery have been regular and, once you knew the rule, the constructions were exactly repeatable and the results, predictable. But such constructions are nothing but appetizers. I like to call them cartoons. Adding an element of chance complicates the game, and starts to produce structures that look more like sports of Nature than of man. (139, emphasis mine)

The top diagram is the Koch curve again, with luck added. It starts with the same initiator and generator as shown earlier. But whereas the prototypical Koch curve plugs the ever-shrinking generators in exactly the same way at each step, here we toss a coin at each step to decide whether to place the "tent" right side up, or upside down. The result is more irregular and flows more naturally. In fact, it starts to look a bit like a coastline. The bottom diagram, using a more complicated fractal process driven by a computer, starts to look startlingly real – as if traced from a shipping chart. (139)

So to simulate natural fractals (including markets), we must incorporate wild random variation.
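Mandelbrot's coin-toss construction can be sketched in a few lines of Python (my own minimal illustration, not code from his texts): each refinement step replaces every segment with the four-segment Koch generator, and a coin toss decides whether each "tent" points right side up or upside down.

```python
import random

def koch_step(points, rng):
    """One refinement: replace each segment with the Koch 'tent' generator,
    tossing a coin to place the tent right side up or upside down."""
    new_points = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
        a = (x0 + dx, y0 + dy)            # one-third point
        b = (x0 + 2 * dx, y0 + 2 * dy)    # two-thirds point
        # the tent apex sits sqrt(3)/6 of the segment length above (or, on
        # a losing toss, below) the segment's midpoint
        sign = 1 if rng.random() < 0.5 else -1
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        h = 3 ** 0.5 / 6.0
        apex = (mx - sign * h * (y1 - y0), my + sign * h * (x1 - x0))
        new_points += [a, apex, b, (x1, y1)]
    return new_points

rng = random.Random(1)
curve = [(0.0, 0.0), (1.0, 0.0)]   # the initiator: a unit segment
for _ in range(5):
    curve = koch_step(curve, rng)

print(len(curve))   # 4^5 segments, hence 4^5 + 1 = 1025 points
```

Plotting the points in order yields an irregular, coastline-like curve; fixing `sign` at `1` recovers the ordinary, perfectly predictable Koch curve.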

Fractals in the physical world: clouds and clusters. With random processes added, we finally start to see the hand of nature. The top diagram is the work of a computer to illustrate the principle. It represents a completely artificial cloudy sky. The bottom diagram illustrates fractal growth starting from an irregular "seed" in the center. As a random fractal process adds particles to it step by step, tendrils and branching structures slowly appear to yield a structure called DLA: a diffusion-limited aggregate, one of the most fascinating, ubiquitous, and difficult objects of statistical physics. (142)
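The step-by-step growth the caption describes can be imitated with a toy simulation (my own sketch; the seed, particle count, and launch radius are arbitrary choices). A walker is released on a circle just outside the cluster and wanders at random until it touches the aggregate and sticks; walkers that stray too far are abandoned and relaunched.

```python
import math, random

rng = random.Random(3)
aggregate = {(0, 0)}   # the "seed" at the center (a single site here)
radius = 0.0           # current extent of the aggregate

def stuck(x, y):
    """Is this lattice site adjacent to the aggregate?"""
    return ((x + 1, y) in aggregate or (x - 1, y) in aggregate or
            (x, y + 1) in aggregate or (x, y - 1) in aggregate)

target = 120           # particles to attach, step by step
while len(aggregate) <= target:
    # release a walker on a circle just outside the current cluster
    r_launch = radius + 3.0
    a = rng.uniform(0.0, 2.0 * math.pi)
    x, y = round(r_launch * math.cos(a)), round(r_launch * math.sin(a))
    while True:
        if stuck(x, y):
            aggregate.add((x, y))          # the particle sticks
            radius = max(radius, math.hypot(x, y))
            break
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        if math.hypot(x, y) > 2.0 * r_launch + 10.0:
            break                          # strayed too far: relaunch

print(len(aggregate), round(radius, 1))
```

Drawn as a scatter of lattice sites, even this very small cluster shows the tendrils and branches characteristic of DLA.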

Fractals are natural. But to be so, they must be wild. John Matson of Scientific American writes:

Theories grounded in the physical sciences, Mandelbrot said, presume that the markets harbor elements of randomness, but in a form that he calls "mild randomness." Mild randomness is embodied by the roulette wheel at a casino—each spin is random but over time the distribution of winning numbers averages out. (And, of course, over time the casino wins out.) He contends that more realistic models of economics—including, naturally, models based on fractals—are driven by "wild randomness," wherein things don't average out and individual freak occurrences matter. This wildness, he said, "imitates real phenomena in a very strong way."
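The contrast Matson summarizes is easy to demonstrate numerically. In the sketch below (my own illustration), "mild" randomness is the Gaussian, whose running mean settles down, and "wild" randomness is stood in for by the Cauchy distribution, the stable law with α = 1, whose running mean never settles and in which a single freak draw can dominate everything else.

```python
import math, random

rng = random.Random(0)

def cauchy(rng):
    # a standard Cauchy draw (the alpha = 1 stable law), by inverting its CDF
    return math.tan(math.pi * (rng.random() - 0.5))

n = 200_000
g_sum = c_sum = c_max = 0.0
for _ in range(n):
    g_sum += rng.gauss(0.0, 1.0)
    c = cauchy(rng)
    c_sum += c
    c_max = max(c_max, abs(c))

print("Gaussian mean:      ", g_sum / n)  # settles near zero: mild
print("Cauchy mean:        ", c_sum / n)  # never settles: wild
print("largest Cauchy draw:", c_max)      # one freak draw dwarfs the rest
```

The roulette wheel behaves like the first line; Mandelbrot's markets, he argues, like the second and third.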

With these fractal ideas in mind, Mandelbrot & Hudson return to the idea of wild markets.

The jumps can be quite large, indeed. Days of minor fluctuations, of less than a percent, can be punctuated by great leaps upward or falls downward – 3 percent, 17 percent, even 40 percent in a day. That is wild variation: ungovernable and seemingly unpredictable spasms of movement. When you analyze it, you quickly see it does not fit the tidy pattern of the bell curve (known throughout the civilized universe as the epitome of mild, manageable variation). There are too many very big and very small changes, not enough medium-sized ones. And the changes appear to scale with time: The proportion of bigger to smaller price-moves follows a regular pattern as you look at monthly, weekly, or daily charts. In fact, if you consider only how much the charts wiggle, at different time-scales they all look roughly alike – and all very bumpy. (199b, emphasis mine)

Now you look at the irregular trends. The size of the price changes clearly cluster together. Big changes often come together in rapid succession, like a fusillade of cannon fire; then come long stretches of minor changes, like the pop of toy guns. There is scaling here, too: If you zoom in on an individual cluster of big changes, you find it is made up of smaller clusters. Zoom again, and you find even finer clusters. It is a fractal structure. Nor is it just the price changes of interest; at times, the price levels also exhibit some kind of irregular regularity. The charts sometimes rise or fall in long waves, or with small waves superimposed on bigger waves. But none of these phenomena – clusters of volatility, or irregular trends – resemble any of the cycles, waves, or other patterns that characterize those aspects of nature controlled through well-established science. There are no familiar sine or cosine waves, with regular periods, of the kind that undulate evenly across the green screen of an old oscilloscope. These peculiar patterns cannot be predicted; and so humans who bet on them often lose. Yet there clearly is a system to them. It is as if the charts have a memory of their past. If the price changes start to cluster, or the prices themselves start to rise, they have a slight tendency to keep doing so for a while – and then, without warning, they stop. They might even flip to the opposite trend. This is maddening. (199c-200, emphasis mine)

To analyze the wildness of natural markets, Mandelbrot distinguishes two types of wild: Noah Wild and Joseph Wild.

The flood came and went – catastrophic, but transient. Market crashes are like that. The 29.2 percent collapse of October 19, 1987, arrived without warning or convincing reason; and at the time, it seemed like the end of the financial world. Smaller squalls strike more often, with more localized effect. In fact, a hierarchy of turbulence, a pattern that scales up and down with time, governs this bad financial weather. At times, even a great bank or brokerage house can seem like a little boat in a big storm. (200)

The market's second wild trait – almost cycles – is prefigured in the story of Joseph. (200d, emphasis mine)

The Pharaoh dreams of things in groups of seven. Joseph prophesies seven years of prosperity followed by seven years of famine, and he advises the Pharaoh to stockpile grain. When the seven years of famine come, Egypt profits from its stores. Markets exhibit a similar pattern.

A big 3 percent change in IBM's stock one day might precede a 2 percent jump another day, then a 1.5 percent change, then a 3.5 percent move – as if the first big jumps were continuing to echo down the succeeding days' trading. Of course, this is not a regular or predictable pattern. But the appearance of one is strong. Behind it is the influence of long-range dependence in an otherwise random process – or, put another way, a long-term memory through which the past continues to influence the random fluctuations of the present.

I call these two distinct forms of wild behavior the Noah Effect and the Joseph Effect. They are two aspects of one reality. One, the other, and usually both can be read in many financial charts. They mix together like two primary colors. The red of one blends with the blue of the other, to produce an infinite palette of purples and violets. (201b-c, emphasis mine)
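The "long-term memory" of the Joseph effect can be imitated with a toy series in which each value is a power-law-weighted sum of past shocks, so that a shock today keeps echoing into later values. The construction and the parameters below are my own illustrative choices, not Mandelbrot's model.

```python
import random

rng = random.Random(42)
n, memory, d = 3000, 300, 0.4   # hypothetical parameters, for illustration

# i.i.d. Gaussian shocks, with a warm-up buffer of length `memory`
eps = [rng.gauss(0.0, 1.0) for _ in range(n + memory)]

# long-memory series: each value is a power-law weighted sum of past shocks,
# so a big shock keeps "echoing" down the succeeding values
w = [(k + 1) ** (d - 1) for k in range(memory)]
x = [sum(wk * eps[t + memory - k] for k, wk in enumerate(w)) for t in range(n)]

def autocorr(s, lag):
    """Sample autocorrelation of series s at the given lag."""
    m = sum(s) / len(s)
    num = sum((s[t] - m) * (s[t + lag] - m) for t in range(len(s) - lag))
    return num / sum((v - m) ** 2 for v in s)

ac_long = autocorr(x, 50)        # stays clearly positive: the past persists
ac_iid = autocorr(eps[:n], 50)   # plain noise: correlation near zero
print(ac_long, ac_iid)
```

The long-memory series stays positively correlated with itself even fifty steps apart, the "almost cycles" of the Joseph effect, while the independent shocks show essentially none.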

Mandelbrot elaborates on these sorts of wild randomness further in his book Fractals and Scaling in Finance. It is a bit technical, so we should first review some terms from Samorodnitsky and Taqqu's Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance.

Gaussian distributions and processes have long been well understood and their utility as both stochastic modeling constructs and analytical tools is well-accepted. However, they do not allow for large fluctuations and are thus often inadequate for modeling high variability. Non-Gaussian stable models, on the other hand, do not share such limitations. In general, the upper and lower tails of their marginal distributions decrease like a power function. The rate of decay depends on a number α, which takes a value between 0 and 2. The smaller the α, the slower the decay and the heavier the tails. These distributions always have infinite variance and, when α ≤ 1, they have an infinite mean as well. (xiii, emphasis mine)

Gaussian distributions, moreover, are always symmetric around their mean; the non-Gaussian stable ones can have an arbitrary degree of skewness. (xiii)

The stable distribution is Gaussian when α = 2. Stable distributions with α < 2 share many properties with the Gaussian distribution, but they also differ from the Gaussian in significant ways. When α < 2, for example, the tails of the distributions decay like a power function. This means that a stable random variable exhibits much more variability than a Gaussian one: it is much more likely to take values far away from the median. Mandelbrot (1982) referred to this as “Noah effect,” an allusion to the biblical figure who lived through a very severe flood. The high variability of the stable distributions is one of the reasons they play an important role in modeling. Stable distributions have been used to model such diverse phenomena as gravitational fields of stars, temperature distributions in nuclear reactors, stresses in crystalline lattices, stock market prices and annual rainfall. (1-2)
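The power-function tail decay is easy to check numerically. Below is a rough sketch (my own) using the standard Cauchy distribution, the stable law with α = 1, for which P(|X| > t) falls off like (2/π)/t: raising the threshold tenfold should cut the tail probability roughly tenfold.

```python
import math, random

rng = random.Random(7)

# one million standard Cauchy draws (alpha = 1 stable), via the inverse CDF
samples = [abs(math.tan(math.pi * (rng.random() - 0.5)))
           for _ in range(1_000_000)]

def tail(t):
    """Empirical P(|X| > t)."""
    return sum(1 for s in samples if s > t) / len(samples)

# theory: P(|X| > t) ~ (2 / pi) / t, so each row shrinks about tenfold
for t in (10, 100, 1000):
    print(t, tail(t))
```

For a Gaussian, by contrast, the tail probability collapses super-exponentially: a tenfold higher threshold wipes it out almost entirely rather than merely dividing it by ten.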

So consider this diagram showing the sorts of "tails" for different distributions. [Diagram by Yiding Han] [13]

The blue bell has tails that descend and converge to the baseline, and its values rarely stray far from the median: this is the Gaussian bell curve. For the other, wild distributions, however, the tails do not converge so quickly to the baseline. This means it is always possible for there to be very extreme instances, some so far off the chart that they disrupt the balance of the whole distribution. We saw this already with the Dow.

Here, by the way, is Mandelbrot's depiction of the wild tail distributions.

We turn now to Mandelbrot's Fractals and Scaling in Finance.

There is wild randomness exemplified by distributions with infinite variance. (120c, emphasis mine)

Non-averaging, non-Gaussian, and/or non-Fickian fluctuations were long resisted and viewed as “improper” or even “pathological.” But I realized that many aspects of nature are ruled by this so-called “pathology.” Those aspects are not “mental illnesses” that should or could be “healed.” To the contrary, they offer science a valuable new instrument. In addition, a few specific tools available in “pure mathematics” were almost ready to handle the new needs. The new developments in science that revealed the need for those tools implied that science was moving on to a qualitatively different stage of indeterminism.

(Mandelbrot Fractals and Scaling in Finance, 128c-d, emphasis mine)

“Noah wild” recursive functions are cartoons of discontinuous wildly random processes whose jumps are scaling with α < 2. “Joseph wild” recursive functions are cartoons of (continuous) Gaussian processes called fractional Brownian motions. The “sporadic wild” recursive functions are cartoons of wildly random processes that I call sporadic because they are constant almost everywhere and supported by Lévy dusts (random versions of the Cantor sets). (194a, emphasis mine)

Some research suggests that the randomness which aids our brain's learning and adaptation is of Mandelbrot's wild variety. Later we will explore the consequences of such wild unpredictability for the possibility of simulating human consciousness on a computer.


Kaye, Brian. Chaos & Complexity. Cambridge: VCH, 1993.

Mandelbrot, Benoit B. Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. Berlin: Springer, 1997.

Mandelbrot, Benoit B., & Richard L. Hudson. The (mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward. New York: Basic Books, 2004.

Matson, John. "Benoit Mandelbrot and the wildness of financial markets." ScientificAmerican.com, 13 March 2009. Available online at: http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=benoit-mandelbrot-and-the-wildness-2009-03-13

Samorodnitsky, Gennady, & Murad Taqqu. Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. CRC Press, 1994. More information and preview available at: http://books.google.be/books?id=wTTUfYwjksAC&dq=non-gaussian+random&hl=en&source=gbs_summary_s&cad=0

All images are from Mandelbrot and Hudson's The (mis)Behavior of Markets, except those numbered accordingly:

[4, 5, 6] Kaye, Brian. Chaos & Complexity. Cambridge: VCH, 1993.

Google Maps

White and Pink Noise sound files from:
1 comment:

  1. I have just read Taleb's "The Black Swan" which got me interested in Mandelbrot's work on randomness, and this post was really helpful in explaining some core elements of his work. Thank you so much, Mr. Shores!