Of Intelligence and Chaos

Conventional definitions of the word ‘intelligence’ bring to mind either the cold, steel-tinted specter of sentient machines or images of bespectacled, bald men hunched over desks and computer screens, holding pages of indecipherable equations that look more like magic than mathematics. However, some aspects of intelligence have taken a backseat to these more philosophical questions about its nature. Problem-solving, the exhibition of preferences, and reactions to stimuli are all identifiable aspects of intelligent systems.

Most important, however, is the fact that intelligence can also be construed as a physical force that changes the world upon its application. In particular, new problem-solving approaches build on each other to develop new methods of interaction between a system and the world; think about how the smartphone revolution, itself born of a desire for compact, convenient technology (and therefore a problem-solving strategy), has redefined the way we interact with the world at large.

Moreover, any system said to have ‘intelligence’ has to survive, and survival requires adapting to changing circumstances, particularly ones that are completely unpredictable. The intelligent way for a system to ensure its survival, therefore, is to position itself as follows: at worst, randomly occurring negative events leave the system unharmed, or at least capable of recovering; at best, random events actually benefit the system as a whole. In the words of scholar, mathematical trader and self-proclaimed ‘epistemologist of Randomness’ Nassim Nicholas Taleb, such a system would be antifragile, capable of profiting handsomely from the upside of randomness and unpredictability while suffering little downside from exposure to such volatility.

The oft-decorated Harvard fellow and MIT researcher Dr. Alex Wissner-Gross claims to have found an equation that captures these aspects of intelligence, and it summarizes with great beauty, precision and applicability both the concept of antifragility and the indelible mark intelligence leaves on the physical world. The equation is as follows:


F = T ∇Sτ

The equation describes intelligence as a force (F) that acts to maximize future freedom of action: it pushes, with some strength (T), toward the greatest diversity of possible accessible futures (S), up to a given future time horizon (the subscript τ).

This equation casts intelligence as a force that seeks to maximize the number of future states a system can occupy over time. Simply put, it considers intelligent any process or system that actively tries to keep its options open and avoids being boxed in by circumstances. By this definition, then, any system that strives to maximize entropy is an intelligent system, and Wissner-Gross goes on to demonstrate the intelligence underlying entropy maximization by revealing a program based on the equation: Entropica.
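To make this definition concrete, here is a minimal sketch in Python of a one-dimensional agent that, at each step, chooses whichever action keeps the largest number of future states reachable within a fixed horizon. The world, the action set and the horizon are illustrative assumptions of mine, not anything taken from Entropica itself.

```python
def step(state, action, lo=0, hi=9):
    """Apply an action, clamping to the boundaries of a 10-cell world."""
    return max(lo, min(hi, state + action))

def reachable_states(state, tau, actions=(-1, 0, 1)):
    """All distinct states reachable from `state` within `tau` steps."""
    frontier = {state}
    for _ in range(tau):
        frontier |= {step(s, a) for s in frontier for a in actions}
    return frontier

def entropic_action(state, tau=3, actions=(-1, 0, 1)):
    """Pick the action whose successor state keeps the most futures open."""
    return max(actions, key=lambda a: len(reachable_states(step(state, a), tau)))

print(entropic_action(0))  # 1: from the left wall, the agent moves right
```

From a boundary the agent moves toward open space, since hugging a wall forecloses future states; in the middle of the world, every action leaves its options equally open.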

Entropica is, in essence, a software engine with the overriding objective of maximizing the long-term entropy of any system it is placed in; that is, as per the ‘intelligence equation’, it maximizes the number of future states the system is capable of being in. Logically, then, it should pass certain tests of intelligence, and it does indeed perform actions that appear purely volitional. In one simulation, Entropica is exposed to three virtual objects: a laterally moving cart, a pole and a ball. Without being explicitly commanded to, it proceeds to balance the pole on the cart and the ball on the tip of the pole, thereby maximizing the number of possible future states available to the cart-pole-ball system. In another, more relatable simulation, Entropica apparently manages to grow assets under management exponentially, again without explicit instruction.

I’m going to sidestep the elephant-in-the-room question of whether Entropica as a program is truly intelligent; the underlying mechanisms of the program, and the extent (and nature) of the purported ‘intelligence equation’s application, are rather murky, and deciphering them takes a far greater mind than mine. Instead, let’s run with the idea of entropy maximization and its link to intelligence, as per Wissner-Gross’ definition.

An overriding directive to maximize all possible future states can be ascribed to almost every aspect of any biological system. Take, for example, bacteria, among the most abundant life forms on the planet. As individual bacteria divide, DNA duplication errors often occur, resulting in mutation. On one extreme, some mutations turn out to be beneficial once a new stressor is added to the environment (spontaneous antibiotic resistance, for example). On the other extreme, some mutations disastrously impact critical cell functions, rendering the bacterium either impotent or dead.

Importantly, however, the diversity generated through random mutation provides the colony as a whole with incredible resilience to a variety of stressors; via random mutation, individual bacterial cells gain the ability to resist certain environmental conditions in the event that these mutations are selected for by Darwinian pressures. While such mutations may lead to the premature death of many individual cells, the colony as a whole gains far more from each ‘successful’ mutation than it loses from the death of an individual cell.
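A toy simulation makes the point. The mutation rate and the all-or-nothing stressor below are illustrative assumptions of mine, not measured biology; the sketch only shows how imperfect replication lets a colony outlive a shock that perfect clones could not.

```python
import random

random.seed(42)

def divide(cell, rate):
    """One division: a susceptible ('S') daughter occasionally mutates to resistant ('R')."""
    if cell == 'S' and random.random() < rate:
        return 'R'
    return cell

def grow(colony, generations=10, rate=0.001):
    """Each generation, every cell divides; daughters may mutate."""
    for _ in range(generations):
        colony = colony + [divide(c, rate) for c in colony]
    return colony

def antibiotic(colony):
    """A stressor that kills every susceptible cell."""
    return [c for c in colony if c == 'R']

# With a nonzero mutation rate, some resistant cells almost always arise
# and survive to re-propagate; with rate=0.0, the colony of perfect
# clones is wiped out entirely.
survivors = antibiotic(grow(['S'] * 100))
print(len(survivors))
```

The design choice is deliberate: most mutant daughters are wasted (in a harsher model they would die outright), yet the colony still comes out ahead after the stressor, which is exactly the asymmetry the paragraph above describes.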

Furthermore, recent research has uncovered a class of bacterial cells known as persisters, which stand in direct contrast to their exponentially replicating brethren; in some cases, persisters are entirely dormant and do not reproduce at all. Now what, you might ask, would be the evolutionary edge granted to a colony by a non-reproducing bacterium? Especially in laboratory settings largely free of random stressors, persister cells could reasonably be seen as non-contributing members of the bacterial community at large, individuals that free-ride on whatever resources are available and lie dormant. But as it turns out, persisters are indispensable to the colony precisely because they do not reproduce. Most antimicrobial substances destroy bacteria by disrupting crucial cell functions during cell division, but since persisters do not divide, they are effectively immune to antibiotics. When a colony of planktonic bacteria (i.e. free-floating, individual cells) is exposed to an antibiotic, it is often the persisters, those previously ‘useless’ cells, that end up surviving and eventually re-propagating the colony. Crucially, persister cells are also formed entirely via stochastic processes: not through genetic mutation, but through random, transient switches into a dormant phenotype.
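The persister strategy can be sketched as a toy two-state model: cells stochastically switch between a growing and a dormant phenotype (no mutation involved), and an antibiotic that targets division kills only the growing cells. The switching probabilities here are illustrative assumptions of mine, not measured rates.

```python
import random

random.seed(1)

P_TO_DORMANT = 0.01  # chance a growing cell lapses into dormancy, per generation
P_WAKE = 0.1         # chance a dormant persister resumes growth, per generation

def generation(growing, dormant):
    """One round of growth plus stochastic phenotype switching."""
    woken = sum(1 for _ in range(dormant) if random.random() < P_WAKE)
    lapsed = sum(1 for _ in range(growing) if random.random() < P_TO_DORMANT)
    growing = (growing - lapsed + woken) * 2  # only active cells divide
    dormant = dormant - woken + lapsed
    return growing, dormant

def antibiotic(growing, dormant):
    """The drug disrupts cell division, so only dormant cells survive."""
    return 0, dormant

growing, dormant = 100, 0
for _ in range(10):                 # the colony grows, shedding a few persisters
    growing, dormant = generation(growing, dormant)

growing, dormant = antibiotic(growing, dormant)  # every active cell dies

for _ in range(10):                 # dormant survivors wake and re-propagate
    growing, dormant = generation(growing, dormant)
print(growing)
```

The colony survives the wipeout only because a few cells randomly opted out of growth beforehand; remove the switch into dormancy and the antibiotic ends the simulation.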

The underlying principle, then, appears to be a preference for variety over the perfection of individual cells. While seemingly a rather trivial point, it defeats the conventional notion of organisms evolving to perfection: by definition, perfect organisms would replicate perfectly, making them immune to any sort of genetic mutation. The population would therefore consist of perfect clones which, while perfectly suited to their current environment, would not have the genetic variation required to survive stressors they were not already capable of surviving. Kim Lewis summarizes this principle eloquently in his article ‘Persister cells, dormancy and infectious disease’, published in Nature Reviews Microbiology: “stochastic processes seem to contradict the evolutionary paradigm of evolving to perfection… In processes controlled by stochasticity, evolution to perfection would have been maladaptive. Settling for less than perfect level of control results in useful variety.”

Simply put, the random mutations undergone by individual bacterial cells, while possibly harmful to those cells, provide the larger system as a whole with incredible resistance to a wide variety of stressors. In fact, bacterial systems have almost no downside to volatility of any sort; even if an antimicrobial manages to wipe out most of the colony, the survivors are either persisters (which can re-propagate a colony with the same resistances as before) or resistant mutants (which spontaneously developed antibiotic resistance via random mutation). In the former case, the antimicrobial has no net effect: the bacteria that eventually propagate have the same resistances as those that came before. In the latter case, the colony that emerges benefits from the antibiotic resistance that was selected for.

This phenomenon is hardly restricted to bacteria; every life form on the planet that survives to this day has done so because random variations in its genetic code were selected for through natural events and interactions. The variations in behavior, physical appearance and preference that plants, animals and fungi (both terrestrial and aquatic) display have all proven to be evolutionary advantages in one way or another. Entropy maximization is therefore the very substance from which the basic principles of evolutionary biology, natural selection and the survival of the fittest, have emerged. The systems (or groups of organisms) that survive are the ones most capable of harnessing randomness to their benefit, or at the very least of staying robust through periods of volatility.

So the question then needs to be asked: why does humanity feel that all of the above does not apply to it?

Our society is constructed in a manner that reflects our desire to smooth out randomness and volatility wherever we see them, and our cultural biases reflect as much. As a collective, we maintain predictable lifestyles with regular hours and conditions, consume food at regular intervals in regular amounts, and even lift consistently heavy weights over extended periods of time. Our bodies have grown acclimatized to sedentary, regular lifestyles that are a far cry from the circumstances in which they evolved. We also tend to overmedicate, insisting on treating simple infections and sicknesses with medication rather than letting our immune systems do their jobs.

While stressor deprivation at the micro scale carries no significant threat to the collective, depriving the system as a whole of volatility leads to very bad results. Looking around, virtually every aspect of our economy is centered on predicting the future and on smoothing out randomness everywhere we see it: the stock market fluctuates on the aggregate of traders’ predictions, corporate performance is judged against predicted growth, and, most shocking of all, people seem to have ignored the fact that economic crashes and depressions have never been predicted or avoided by current economic models. In general, volatility of any sort at the macro level results in disaster. Society as a whole is ill-prepared for shocks and stressors, and in fact the emphasis in a variety of fields, from scientific research to governance, is shifting from bottom-up to top-down. This reduces the number of future states the system can occupy; in the event of a disturbance, the system we have created has no contingencies analogous to persister or resistant cells, but instead behaves like a system in which every cell is perfectly adapted to the absence of additional stressors.

Perhaps it should not be so surprising if Entropica’s, nay, Nature’s method of decision-making via entropy maximization proves wildly successful; perhaps it is we, steeped in intellectual arrogance and confident that we understand exactly how the world works, who are mistaken.
