
FYI, the discussion of Gaussian versus other stable statistical distributions was not born in a recent best-selling book.

It was brought up by Vilfredo Pareto a century ago and then again, in specific relation to finance, by Benoît Mandelbrot in the 1960s, when he suggested that markets do not follow a normal (Gaussian) distribution but rather a leptokurtic, heavy-tailed Pareto distribution.

This is what happens when the constraint of finite variance on the participating stochastic variables (e.g., securities prices, individual risks, et cetera) is relaxed: the Central Limit Theorem no longer yields a Gaussian but other stable distributions (although not necessarily a “power law”, as popular versions of the theory would have it).
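
To make the point concrete, here is a minimal numerical sketch (my own, not drawn from any of the authors cited): sums of independent variables with infinite variance do not settle into a Gaussian, whereas sums of finite-variance variables do. Python with NumPy; all parameter choices are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, n, trials = 1.5, 500, 5000               # alpha < 2: infinite variance

    # Normalized sums of finite-variance (uniform) variables: classical CLT.
    u = rng.uniform(size=(trials, n)).sum(axis=1)
    u = (u - u.mean()) / u.std()

    # Normalized sums of Pareto(alpha) variables: a heavy-tailed stable limit.
    p = (rng.pareto(alpha, size=(trials, n)) + 1.0).sum(axis=1)
    p = (p - np.median(p)) / (n ** (1.0 / alpha))   # stable-law scaling, not sqrt(n)

    # Fraction of sums lying beyond six "typical spreads" (median absolute deviation):
    # essentially zero in the Gaussian case, clearly positive in the heavy-tailed one.
    mad = lambda v: np.median(np.abs(v - np.median(v)))
    print("uniform sums:", np.mean(np.abs(u) > 6 * mad(u)))
    print("pareto  sums:", np.mean(np.abs(p) > 6 * mad(p)))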

If Mandelbrot is right, the current forecasting models (e.g. those for risk assessment, which failed in 2008) are wrong, and it is little surprise that they are not very good at anticipating even highly impactful events.


I am most grateful to Dr. Massimiliano Ignaccolo for referring me to an arXiv paper on the use and abuse of power-law distributions in empirical data.

It has become oddly customary to recognize power laws everywhere, from finance to biology, from politics to earthquakes. Now, Clauset (Univ. of New Mexico, Albuquerque), Shalizi (Carnegie Mellon) and Newman (Univ. of Michigan, Ann Arbor) clarify that researchers sometimes employ overly liberal criteria for qualifying a statistical distribution as a power law:

The common practice of identifying and quantifying power-law distributions by the approximately straight-line behavior of a histogram on a doubly logarithmic plot should not be trusted: such straight-line behavior is a necessary but by no means sufficient condition for true power-law behavior.
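
As a small illustration of why the straight-line test misleads (my own sketch, not taken from the paper): a lognormal sample, which contains no power law at all, can produce a log-log histogram that a least-squares fit finds almost perfectly straight over several decades. Python with NumPy; the parameters are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=0.0, sigma=2.5, size=200_000)    # no power law here

    # Logarithmically binned histogram of the upper tail, then a straight-line
    # fit in log-log coordinates, as the criticized practice prescribes.
    bins = np.logspace(0.0, np.log10(np.quantile(x, 0.9999)), 40)
    counts, edges = np.histogram(x, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    ok = counts > 0
    slope, intercept = np.polyfit(np.log10(centers[ok]), np.log10(counts[ok]), 1)
    fit = slope * np.log10(centers[ok]) + intercept
    resid = np.log10(counts[ok]) - fit
    r2 = 1.0 - resid.var() / np.log10(counts[ok]).var()
    print(f"apparent exponent ~ {-slope:.2f}, R^2 ~ {r2:.3f}")  # high R^2, yet no power law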

According to these authors, distributions wrongly purported to be power laws include:

  • the size of files transmitted over the internet
  • the intensities of California earthquakes, 1910-1992
  • the number of links to web sites
  • the distribution of human wealth
  • the degrees of metabolites in the metabolic network of Escherichia coli

A number of other presumed power-law distributions are found, in this study, to be fitted equally well by other statistical models, such as the log-normal or the stretched exponential. In other words, before classifying them as power-law distributed we should investigate the underlying mechanisms more deeply (examples: the severity of terrorist attacks; the populations of US cities; distinct sightings of bird species; the numbers of adherents to religious denominations; the citations of a scientific paper over a period of time).

Furthermore, there are distributions that, while seemingly following a power law, are merely heavy-tailed (i.e., p(x) goes like x to some negative power only for values of x greater than a minimum threshold). And in many such cases, other types of statistical distribution fit the data better than the power law does.
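
For completeness, this is roughly the kind of estimate the paper recommends instead of a straight-line regression: a maximum-likelihood fit of the tail exponent above a threshold xmin (here fixed by hand, whereas Clauset et al. choose it by minimizing a Kolmogorov-Smirnov distance and then compare the power law against alternatives with likelihood-ratio tests). A sketch of the estimator, not their full procedure.

    import numpy as np

    def fit_powerlaw_tail(data, xmin):
        """MLE of alpha for p(x) ~ x^(-alpha), x >= xmin (continuous case)."""
        tail = np.asarray(data, dtype=float)
        tail = tail[tail >= xmin]
        n = tail.size
        alpha = 1.0 + n / np.sum(np.log(tail / xmin))
        stderr = (alpha - 1.0) / np.sqrt(n)          # asymptotic standard error
        return alpha, stderr, n

    # Sanity check on synthetic Pareto data with known tail exponent 2.5.
    rng = np.random.default_rng(2)
    sample = rng.pareto(1.5, 50_000) + 1.0           # p(x) ~ x^(-2.5) for x >= 1
    print(fit_powerlaw_tail(sample, xmin=1.0))       # ~ (2.5, small stderr, 50000)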

It is a pop-complexity author's favorite sport to refer you to some underlying “power law” (especially since they read The Black Swan), as if this were necessarily an indication of underlying non-linearity (which it is not): next time they do it to you, just refer them to this paper.

Life can only be understood backwards, but it must be lived forward. [Søren Kierkegaard]

In 1841, Graham's Magazine serialized the publication of Edgar Allan Poe's The Murders in the Rue Morgue, arguably the first detective story ever written. Auguste Dupin, a Parisian who is not a professional investigator, reads newspaper reports of a mysterious and vicious crime and, using only induction and deduction, solves the mystery of two women butchered in a room locked from the inside. Analysis and logic thus made their entrance into the history of fiction, and the story was forever to be taken as a symbol of the triumph of analysis and reasoning, because it appeared in an age when trust in scientific progress and human speculative faculties was at its climax. The story resonated with widespread feelings.

The scientific revolution had made impetuous progress between the eighteenth and nineteenth centuries, even allowing for the development of a mechanistic view of the universe: knowing the basic laws of dynamics, and applying them pedantically to every elementary particle of matter, it could be deemed in principle possible to predict the future state of any system, whether it was a bunch of artillery shells, the molecules of a gas, or an entire human body. Abusing such logic, some extremist fringes of this scientism held it natural to subject the psychological and social spheres to the same mechanism, too: if the brain is chemistry and electricity, and if we can get to know all the laws that govern electric fields and molecules, then we can, in theory at least, know the dynamics and the future development of any person, group, or society. Everything is knowable, hence predetermined. Anecdotal is the response that the eminent scientist Pierre-Simon de Laplace gave to Bonaparte, who had asked why there was no mention of the Creator in his work on astronomy: «It is a hypothesis which I did not need».

These views, which are taken very seriously today by a burgeoning literature on complexity in organization and management science, are in reality little more than caricatures and often even apocryphal. For example, Laplace is usually cited as a champion of unbridled determinism: yet his quips, his popular writings and his lectures were nothing but the media forays of an intellectual who sought to profit from his academic success and credibility to build a political career, which in fact he achieved, helped by bold political opportunism. He knew that the imperial rhetoric would gladly espouse a scientistic doctrine devoted to power, control and vainglory. You will not win a dictator's heart and mind with talk of concern, nuance, doubt and vagueness! In fact, when Laplace in the end did act that way, because he had become invested with ministerial responsibilities and was dealing with day-to-day practical issues, he was fired. In the memoirs written at St. Helena, Napoleon would note: «A scientist of the first rank, Laplace soon showed himself a poor administrator; from his first acts we recognized our mistake. […] He would not consider any issue from the right angle, sought subtleties everywhere and conceived of nothing but problems».

End of extremist alibis

However, the foundations on which the extremists rested their mechanistic faith began to be demolished, at the beginning of the twentieth century, by science itself; a century we might call the Century of Complexity.

Quantum physics showed that even an infinitely accurate measuring instrument cannot determine the precise position of a particle (as Laplace assumed it could), whose motion is also, if in small part, subject to chance. Furthermore, the study of nonlinear dynamical systems, made possible by computers in the second half of the century, consolidated a knowledge that had emerged several decades earlier: two systems A and A', no matter how similar their initial conditions, may become increasingly different as time elapses [1]. Making predictions is therefore in principle impossible, because if I fix my attention on system A the possibility exists that in the future it will behave as A'. These two discoveries frustrated all deterministic ambitions.
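
A small sketch of the second point, using the very Lorenz (1963) system cited in reference [1]: two trajectories whose initial conditions differ by one part in a billion become macroscopically different within a few dozen time units. Python with NumPy and SciPy; the parameters are the classic ones.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0.0, 40.0, 4001)
    a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-10)
    b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval, rtol=1e-10, atol=1e-10)

    gap = np.linalg.norm(a.y - b.y, axis=0)          # distance between the two trajectories
    for t in (0, 10, 25, 40):
        print(f"t = {t:2d}   separation ~ {gap[t * 100]:.2e}")
    # The gap grows roughly exponentially until it saturates at the size of the attractor.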

At the same time, physicists closed the door on reductionist dreams, that is, the hope of understanding the world by studying microscopic physics alone: just as a crowd of sports fans or a flock of birds sometimes does things that are not explained by individual attitudes, so elementary particles, when observed not one by one but as ensembles, may exhibit behaviors that cannot be predicted from the physical laws governing the motion of the individual particle [2]. It is therefore necessary to find the laws that govern the aggregates, the systems, as opposed to just the “fundamental” ones.

Reality is complex

Not only is the world complicated (from complico: to bend, to twist), namely made of many features, often hidden: it is also inherently complex (from complector: to hold together, to combine, to entangle), in the sense that almost always those features are interrelated and influence each other.

Take political economy. If you increase taxes, you will have more resources to build production-enabling infrastructure; but at the same time you reduce consumers' capacity to spend, and consumption is production's ultimate goal. The risk is that warehouses stay full and companies start to lay off workers, which further weakens consumer demand and establishes a nasty vicious circle (feedback). If, to remedy this, you cut taxes, you must do it in a timely manner, because if you wait too long down the spiral, families will use their extra money for saving rather than for purchasing, since they lack confidence in the future. At that point the State has no more resources to push the economy, which itself struggles to recover because no one buys anything.

There are a great many organizational and management situations in which the feedback between a cause and its effect(s) gives rise to very complex situations, creating the clear feeling that simple cause-effect analyses are of limited usefulness and should be complemented by common sense (“heuristics”) as well as by the adoption of drastic simplifications. It happens every day with our children, our investment portfolios, or our health. If a patient is prone to heart failure, they will be advised not to drink and to take diuretics, even if they suffer from kidney stones. The doctor is choosing, with good sense, the lesser evil, though aware that anything done to one organ of the human body is reflected on at least half a dozen others, which in turn will impact others with cascading effects that could eventually reach the very organ the treatment was meant to cure.

If we look at the global economy, the potential for complexity is obvious: just think how many connections there are, how many cause-effect relationships might be subject to feedback. Financial markets, economies, networks (such as energy or transport) are interconnected. Consumers are too, and they influence one another's behavior through forms of communication such as social networking, mobile telephony and email. Feedback phenomena are countless, and there is no economic context, such as the ecosystem in which a company is immersed, that can be understood simply by breaking it down into its parts and considering them one by one: the analytical approach ought to be complemented by the holistic one, looking at the system and not just at the components, since the linear sum of their effects is not equal to the behavior of the whole.

Linearity as a useful approximation of reality

The “systems” and the “problems” encountered in nature are essentially of that kind, namely non-linear. However, in many situations one may resort to linearity (i.e.: A) cause → effect, and B) sum of causes → sum of effects) as a first-order approximation: as long as the effects of non-linearity can be considered negligible, we can build a mathematical model of the system as if it were linear; a problem, that is, which is the linear sum of its causes, whose mutual interactions give rise only to negligible second-order effects, as the diuretic does for heart failure.
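
A toy illustration of the two properties just listed (my own, purely didactic): a linear response satisfies additivity exactly, while a small quadratic feedback term breaks it only at second order, which is precisely the sense in which the approximation can be "negligible".

    def linear(u):
        """Idealized linear system: effect proportional to cause."""
        return 3.0 * u

    def weakly_nonlinear(u, eps=0.05):
        """Same system with a small quadratic (feedback-like) term."""
        return 3.0 * u + eps * u * u

    a, b = 1.0, 2.0
    for f in (linear, weakly_nonlinear):
        gap = f(a + b) - (f(a) + f(b))               # additivity of causes?
        print(f"{f.__name__:>17}: f(a+b) - [f(a) + f(b)] = {gap:+.3f}")
    # linear: gap is exactly 0; weakly_nonlinear: gap = 2*eps*a*b = 0.2,
    # negligible as long as eps*u stays small, i.e. within the linear regime.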

This simplifying approach is fruitful in many situations, from electronics to ecology, from computing to economics, from biology to celestial mechanics, and enormous scientific and technological advances have been made on the basis of linear approximations. Linear models are useful because, within their linear regime, many systems are alike and their behavior can be described by the same equations, even though the contexts are very different. Complex systems, by contrast, each have a different personality and a different mathematical formulation and, indeed, in most cases not even that: equations are replaced by computer simulations. That is why technology strives to remain in linear territory as much as possible. It is like when a diuretic is given to someone with kidney stones. Or like when the laws of economics are formulated.

At the root of the current economic doctrine there indeed stand some obvious simplifications of reality (the Efficient Market Hypothesis, the Rational Expectations Hypothesis), which are well known to economists. It is an almost self-evident truth that the price of a stock or an asset can affect the price of another (a primary source of non-linearity); it is a well-known fact that economic agents do not behave rationally (a Nobel Prize was awarded in 2002 to Daniel Kahneman for proving this in the seventies), that markets are perfectly efficient only in extremely rare circumstances (Nobel Prize to Joe Stiglitz in 2001), and that they can suddenly go crazy at times, moving far away from their “equilibrium”. These convictions notwithstanding, economic theory and practice continue to proceed on the basis of those linear simplifications. Only occasional adjustments to the models are made, because in essence we do not know better ones. (Not even Mandelbrot and Taleb, for example, despite the subtlety, importance and validity of their criticism, were able to propose usable mathematical models for the financial sector.)

It is in any case a fact that catastrophic or near-catastrophic crises appear to be increasingly frequent. These may be sudden and severe disruptions in financial markets, or shocks that propagate along the increasingly complex value “chains” of businesses, which really are complicated networks or lattices. Such shocks can affect in unpredictable ways productive sectors seemingly unrelated to those that suffered the crisis in the first place. For these reasons, one wonders, as physicist-financier Jean-Philippe Bouchaud did in 2008 [3]: are those assumptions of linearity and rational expectations still realistic and sustainable in 2011? Are the effects of their imperfection really negligible?

Think of Einstein's relativity. In the vast majority of everyday situations, including those involving sophisticated technologies, we do not worry about the effects of relativity, because the objects with which we deal do not move at speeds approaching that of light or travel intergalactic distances. The effects of relativity are negligible. Still, the GPS devices in our cars and smartphones would not work if their hardware and firmware did not take the effects of relativity into account. This is an example of a common situation in which the approximation «this theory has no practical relevance» was valid 25 years ago (when we did not use GPS) but is wrong today: we have had to enrich our mathematical models to account for its practical effects.
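
The GPS remark rests on simple arithmetic. Using the commonly quoted clock-drift figures (not derived here): special relativity slows the satellite clocks by roughly 7 microseconds per day, general relativity speeds them up by roughly 46; the textbook back-of-envelope conversion of the net drift into ranging error, multiplying by the speed of light, gives kilometers per day.

    c = 299_792_458.0            # speed of light, m/s
    sr_drift = -7.2e-6           # s/day: velocity time dilation (approximate, commonly quoted)
    gr_drift = +45.9e-6          # s/day: weaker gravity at orbit altitude (approximate)
    net = sr_drift + gr_drift    # ~ +38.7 microseconds per day

    print(f"net clock drift: {net * 1e6:.1f} microseconds/day")
    print(f"ranging error  : {c * net / 1000:.1f} km/day if left uncorrected (rough upper bound)")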

By the same token, we ought to be careful not to underestimate the occurrence, in the economic world, of facts that could render obsolete and wrong the linear approximations underlying the dominant economic paradigm. And in this sense, a new fact is the number of interconnections between economic agents, at both the macro and the micro level. (Substantial quantitative variations can lead to qualitative changes: one gram of paracetamol cures a headache or a fever, but a hundred grams are deadly!)

According to a minority but growing number of scholars, 21st-century global markets cannot be modeled as linear systems: the linearity assumption will grow more and more inadequate as interconnections increase, because they are the ultimate source of non-linearity, and their growth beyond a certain threshold is what makes the approximation no longer realistic. Radically new doctrinal approaches, if only embryonic for now, are being proposed in econophysics, a discipline aiming to encourage economic research to adopt methods that the natural sciences have developed to describe complex systems. Many physicists and a few economists are testing complex models or agent-based simulations that have proved successful in physical or biological settings and could perhaps be replicated in the financial/economic context [4].
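
A deliberately minimal sketch of why interconnection undermines the usual aggregation argument (my own toy, not a published econophysics model): when each agent imitates a shared signal with some probability, per-capita fluctuations of aggregate demand stop shrinking like 1/sqrt(N) and stay of the order of the coupling, no matter how many agents there are.

    import numpy as np

    rng = np.random.default_rng(3)

    def aggregate_demand(n_agents, coupling, periods=2000):
        """Each agent buys or sells (+1/-1); with probability `coupling` it copies
        a shared signal, otherwise it decides independently."""
        signal = rng.choice([-1.0, 1.0], size=(periods, 1))
        independent = rng.choice([-1.0, 1.0], size=(periods, n_agents))
        copies = rng.random((periods, n_agents)) < coupling
        actions = np.where(copies, signal, independent)
        return actions.mean(axis=1)                  # per-capita aggregate demand

    for c in (0.0, 0.1, 0.3):
        d = aggregate_demand(2000, c)
        print(f"coupling {c:.1f}: std of per-capita demand ~ {d.std():.3f}")
    # With no coupling the fluctuations are ~ 1/sqrt(N) ~ 0.02;
    # with coupling they remain of order c, however large N becomes.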

(Econophysics research needs to be strengthened by a broader and deeper participation of economists. Many economists of high rank, while recognizing the limitations and imperfections of the current paradigm, do not seem much worried about its sustainability and health. For example, according to some of them the financial crash of 2008 was nothing but the warning, provided by efficient markets, of an imminent violent economic downturn. In this vision, expressed effectively by Eugene Fama in an interview in The New Yorker in January 2010, finance was the victim and not the cause of the economic collapse. In addition, some econophysicists seem to ignore, or at least not to care about, the corrections that economists are gradually making to the simplified models.)

Complexity and business

Similarly, a growing minority of business/managerial economists consider Taylorist scientific management (which has made it to the present day through various mutations and enrichments) as still plagued by extreme mechanistic thinking and unaware of the lessons coming from complexity science (deterministic chaos, emergent behavior): they therefore believe it should be replaced with models inspired by non-linear dynamical systems, non-equilibrium thermodynamics, agent-based simulations and other modern “complex” tools. (Even though, as in econophysics, none has yet proved applicable to business economics.)

In a 2009 book (Difendersi dalla complessità. Un kit di sopravvivenza per manager, studenti e perplessi, Franco Angeli) I analyzed this phenomenon and showed that, while it brings (like econophysics) real problems to the table, it is still immature and based on a distorted understanding of the relevant scientific concepts and on profound misconceptions concerning their applicability. Business economists dealing with complexity already miss their target in the early steps of their analysis, as they move from the assumption that science should get rid of mechanistic approaches. Apart from the fact that Laplacian mechanism is, as we have seen, but a caricature of the nineteenth century's scientific mainstream (long surpassed, e.g., by quantum mechanics), the anti-mechanistic obsession of these scholars is grounded in a lack of understanding of the mentality and instruments of science: their publications systematically refer to myths such as «exact sciences», «the determinism of mathematics» [5], and «reductionism» in the sense of «breaking down a system into parts to analyze them one by one» (a confusion between reductionism and analysis) [6].

Another recurrent mistake, nearly systematic in this school of thought, is the confusion between epistème and tèkne. Almost all authors miss the distinction between matters of principle and practical, technological issues. For example, from the observation that in epistemology and science non-determinism rules, they conclude that forecasting is a scientistic obsession, when not an exercise in futility. However, the truth is that even in complex application domains now considered classics, such as meteorology, we continue to make forecasts, and they keep getting better. Macroeconomic projections, like GDP or the deficit at year's end, are routinely made because they are necessary for governance, and deviations from actual values are seldom dramatic. (It should be recalled that predictions in economics are usually expressed as three distinct scenarios depicted as scissors diagrams: the fact that the media only report and discuss the central curve attests not to the forecasts' unreliability but to the public's inability to digest them.) In high-energy physics, it is not uncommon to come across causes occurring after the relevant effects: this intrigues physicists, but engineers do not draw the immediate consequence that time travel is feasible.

We know that all models could be improved, but we use what we have until more precise ones emerge. These, in turn, will of course be pale approximations of reality: the vision of science as ultimate, granite-like truth pervades the literature we are discussing, in clear contradiction with the essence of the scientific approach, which builds on the recognition of uncertainty in Nature and consequently assumes the incessant dynamism of provable knowledge (episteme).

Some see in scientific management an approach similar to the megalomania of the mechanistic extremists we mentioned earlier: «give me some basic laws of economics and powerful-enough computers, and I will manage the company, its ecosystem, the entire world economy». This attitude would indeed be foolish and dangerous. However, as we come to appreciate the complexity and unpredictability inherent in virtually all everyday situations, we would not dream of concluding that, since everything is complex and unpredictable, we might as well give up the bit of “Taylorism” that can be useful. When using common sense to prescribe a diuretic, your doctor does not think «and to hell with physiology texts!». When starting a campaign for a new service or product, the CEO does not think «software cannot give me any useful indication anyway. I'm not going to use it». When issuing a tax reform decree, the Finance minister does not say «Ladies and gentlemen, this is black magic, a matter of luck. Forget financial skills, computers, and econometric models».

For sure, inaccurate and superficial versions of scientific thought can always be found: but choosing these as targets and invoking a «Copernican revolution», as many business economists dealing with complexity do, may serve to épater les bourgeois in boards of directors but will not bring scientific results of any sort. The academic management and organization science literature must strengthen its understanding of non-linear science, which it is infatuated with but does not yet master. (It is not surprising that, downstream, the popular literature and consulting practices can sometimes appear naïve.)

Starting to see light

The road will likely consist in abandoning the evocative but misleading epistemological debates and focusing instead on useful techniques to tackle the growing non-linear distortion of the business world, as happens in econophysics, leaving the reforms of the scientific paradigm to whomever should be concerned with them (tèkne tòn teknòn kai epistème tòn epistemòn…). On that road, I am particularly interested in two approaches which I came across in recent years.

One is Ontonix (a company in which I have no involvement, only admiration for the genius of its founder, Jacek Marczyk), which has developed holistic risk-monitoring software. What I like about Ontonix, despite a few humble methodological reservations of mine, is their pragmatism and quantitative orientation. Ontonix do not “talk” about complexity: they measure it, based on a conceptual framework in which it is seen as an intrinsic property of systems, like temperature or pressure. Physical quantities, says Ontonix, attain scientific dignity when they are measurable. Epistemological objections can be raised concerning the definition of complexity, as well as methodological ones related to the metrics. But who cares about such nitpicking, if we are offered an inexpensive tool that can provide, with a surprisingly small organizational effort and a simple user interface, an assessment of the systemic risk of our business? I do recognize a breakthrough in Ontonix: the first ever measure of a company's «stability rating».

An equally pleasant and enriching encounter was a 2009 paper by Sergio Barile, a professor at Rome's La Sapienza University: “Verso la qualificazione del concetto di complessità sistemica” (Towards the characterization of the concept of systemic complexity), which I believe was first published in Sinergie, Rivista di studi e ricerche (No. 79/09): one of my top three reads of all time in the field of micro-economic complexity. I was impressed by the lucidity of the analysis, the precious and rare ability to illustrate concepts with real-world situations, and the acumen with which the author places the observer at the center of the notion of complexity. Although Dr. Barile has more recently doped his work with some esoteric stuff that leaves me perplexed (such as “syntropy” and “anticipated potentials”), he is a creative and fascinating author and I would not be surprised if he came up with some interesting contributions.

From these signs I can tell that we are making progress, although I do not expect complexity theories and technologies to mature and penetrate the mainstream of business management meaningfully any earlier than 2025.

The complex twentieth century

In 1946, one hundred years after Edgar Poe, a Florentine literary magazine issued in installments Quer pasticciaccio brutto de via Merulana (That Awful Mess on Via Merulana), the detective novel that does not end because reality is too complex to be reduced to logic: life is chaos, a confusion of events and contributing factors of which police commissioner Ciccio Ingravallo knows he cannot possibly get on top. In fact, he

«claimed […] that unexpected disasters are never the consequence or anyway the effect of a singular cause: they are rather like a whirlpool, a cyclonic depression in the consciousness of the world, toward which a wide variety of convergent causes have conspired. He also talked of a knot or tangle or snarl, or gnommero, which in Roman dialect means a ball of thread. […] The view that we should “reform our sense of the category of cause”, as we drew it from philosophers such as Aristotle or Immanuel Kant, and replace it with a plurality of causes, was for him a central and persistent idea: almost a fixation».

It is the same vision as the author's, Carlo Emilio Gadda: the world as a system of systems, in which every single system affects the others and is affected by them; a world that the Milanese engineer-philosopher always tried to depict as a maze, a tangle, without mitigating its inextricable complexity and without concealing, as Italo Calvino pointed out, the plurality «of the heterogeneous elements that combine to determine each event. […] Gadda knew that “knowing is to insert something into the real world: it is, therefore, distorting reality”» [7]. Much as Poe's story reflects the positivist culture of its time, Gadda's pops up right in the middle of the century of complexity and even anticipates some of its developments.

With The Name of the Rose, in 1980, Umberto Eco wrote a relativistic detective story, a metaphor for the reader's interpretation of a text: a sign, a sentence, a plot have a meaning and a significance depending on the context in which they occur. What is true in one frame of reference may not be so in another. The clues and events that unfold before William of Baskerville's eyes have a meaning only within their respective contexts, and in order to unravel the mystery the monk must continually work out which context is relevant to interpret this or that sign. He is rigorous, analytical and Aristotelian, but he is also Galilean to the extent that he can use empirical experience and recognize the effects of a change of coordinates. In the end his deductions turn out to be partially incorrect, yet they still allow him to solve the plot and achieve some truth, despite the fact that truth only reveals itself «at times (alas, how mysterious) in the error of the world, so that we must spell out its faithful signs, even when they seem obscure and almost entirely woven of an evil will».

In Eco's view, the reader always plays an active role in creating the meaning of a literary work: William interprets the events that occur in the convent much as a reader interprets a text and, in doing so, changes it and makes it his own. Here too, then, to know is to put something into the real and thereby to distort the real, as Gadda said. This is always the case. It was for Niels Bohr. It is so in the epistemology of complex systems: according to some authors, talking about the complexity of a system only makes sense once an observer is brought in. Poincaré's three-body system, for example, while subject to perfectly deterministic laws, can become unstable: the set consisting of Sun, Moon and Earth can stage a chaotic ballet in phase space. Yet none of the three bodies, taken individually, and none of the three pairs, individually observed, ever becomes chaotic. Is the phase space of a system a feature of Nature or a construct of the researcher, existing only in the model?

In Poe's story, the truth is there waiting to be unveiled, provided that it be consistently and wisely pursued. For Gadda, no truth is possible, because the tangle of causes hides it in a vortex of chaos. For Eco, truth has many faces: it is relative to the observer and the context. As we know, they all are right.

PAOLO MAGRASSI, CREATIVE COMMONS NON-COMMERCIAL – NO DERIVATIVE WORKS

[1] Lorenz, E. (1963), “Deterministic Nonperiodic Flow”, Journal of the Atmospheric Sciences, Vol. 20, pp. 130-141

[2] Anderson, P. W. (1972), “More Is Different”, Science, New Series, Vol. 177, No. 4047, pp. 393-396

[3] Bouchaud, J. P. (2008), “Economics Needs a Scientific Revolution”, Nature, Vol. 455, p. 1181. As noted already, the limitations of rational expectations and efficient markets have been known to economists for decades: Stiglitz's works date back to 1975, and they are predated by those of, e.g., Herbert Scarf (1960) and Hugo Sonnenschein (1972).

[4] A short review of these can be found in Magrassi, P., “A Call to Arms and a Blessing for 21st Century Information Technology: the Complexity Challenge”, Proceedings of the 4th European Conference on Information Management, Lisbon, 9-10 September 2010, available on the web.

[5] As we know, mathematics is actually rich in approximations, estimates and guesses. And its core activity, namely theorem proving, is essentially an indeterminate exercise.

[6] In the referenced book, on pp. 95-99, I showed the flimsiness and lack of scientific soundness of the most-cited paper on complexity theory and management, which at the time had already received over 1100 citations in Google Scholar. As to the confusion between analysis and reductionism, it can be traced back to an old-fashioned vitalism according to which complexity was an exclusive feature of living organisms, whose structure might only be described by non-physical laws. Reductionism and vitalism are two extremes that were resolved in the Sixties by P. W. Anderson on one side and the advent of molecular biology on the other; yet their scars sometimes re-emerge.

[7] Italo Calvino, Lezioni americane, Garzanti 1988, pp. 101 and following.

It wasn't a happy line, that of Nikita Khrushchev, who said that the works of Jackson Pollock might have been «painted by the tail of a donkey». It was a stupid remark arising from the obscure and abstruse meanderings of Soviet orthodoxy, according to which abstract art was rubbish, a plaything of capitalism. So much so that the Muscovite Kandinsky, who had inaugurated it in the 1910s, had to escape to Paris, where he died a French citizen.

For sure, abstract painting raises issues of interpretation. If the matter is considered superficially, it seems logical that people should like portraits, landscapes and action scenes better than color patterns scribbled on a canvas with no bearing on natural reality.

Pollock's paintings, then, are even more enigmatic than Kandinsky's, where reassuring geometric or even para-figurative shapes prevail: Pollock's really do look randomly made, by the tail of the donkey! Yet they enjoy huge success, and I myself, while no art expert, go crazy for them.

The truth is that geometric shapes exert a charm on us, and maybe even random or almost-random ones do: just think of waves or clouds. Ever since Plato's time we have been wondering why we humans can feel good before a work of art. Over the past 50 years, with the advent of computers and of ever more refined tools of scientific investigation, such as optical scanners or magnetic resonance imaging showing the functional centers of pleasure at work in people's heads, research has taken the question quite seriously.

Already in high school I was struck by a collection of essays edited by Umberto Eco, Aesthetics and Information Theory (Estetica e teoria dell'informazione, Bompiani 1972), in which the tools of Shannon's theory and related work were used to measure and explain the meaning of artistic perception. There have also been furious fads, such as when people wanted to recognize the golden ratio everywhere, from the Egyptian pyramids to Raphael to sports cars: in his The Equation That Couldn't Be Solved (L'equazione impossibile, BUR 2005), Mario Livio dismantled several of those speculations, while also telling us about the importance of symmetry in our sensory perception.

Over the past twenty years, and growing exponentially, fractal aesthetics has established itself, and with it the search for fractality in abstract art.

Fractal aesthetics

Fractal geometry, namely the geometry of curves that are everywhere continuous but nowhere differentiable, seems more suitable for describing the real world than the ordinary geometry of idealized regular shapes. After all, we have never met a circle or a triangle: only rough approximations. And since fractal geometry has been invoked to analyze chaotic phenomena and explain the shape of the “phase space” of complex systems, it seems reasonable to apply it also to the tail of the donkey daubing a canvas.

According to University of Oregon physicist Richard Taylor, Jackson Pollock's painting is made of fractal patterns. This is indeed the first case ever discovered and studied of a fractal generated by a human being, that is, one neither found in nature nor generated by a computer (later on, there was talk of fractals concerning paintings by Leonardo, the Eiffel Tower and other works of art).

Taylor et al. measured the painter's entire production and found a fractal dimension increasing over time: from 1.3 in 1945 to 1.9 in 1950. (The dimensions of fractal curves in the plane lie between 1 and 2. Differentiable curves of ordinary geometry have dimension 1. The Peano curve, which can fill two-dimensional space, has dimension 2. A piece of paper crumpled up and thrown into the trash bin has D = 1.5.)
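
For readers curious about where such numbers come from, here is a minimal sketch of the box-counting estimate that this kind of study relies on (Taylor's actual pipeline, applied to scanned canvases across many scales, is far more elaborate). It is applied here to a synthetic random-walk trace rather than to a painting: a smooth line would come out near 1, a space-filling scribble near 2.

    import numpy as np

    rng = np.random.default_rng(4)

    # A binary "canvas": the trace of a long lattice random walk.
    size = 512
    steps = rng.integers(-1, 2, size=(200_000, 2))
    path = (np.cumsum(steps, axis=0) + size // 2) % size
    img = np.zeros((size, size), dtype=bool)
    img[path[:, 0], path[:, 1]] = True

    def box_count(image, box):
        """Number of box-by-box cells containing at least one marked pixel."""
        s = image.shape[0] // box * box
        view = image[:s, :s].reshape(s // box, box, s // box, box)
        return view.any(axis=(1, 3)).sum()

    boxes = np.array([2, 4, 8, 16, 32, 64])
    counts = np.array([box_count(img, b) for b in boxes])
    slope, _ = np.polyfit(np.log(1.0 / boxes), np.log(counts), 1)
    print(f"estimated box-counting dimension ~ {slope:.2f}")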

In parallel with the studies on Pollock, and even before them, a thriving line of research had emerged on the alleged aesthetics of fractals. How do humans perceive fractals? Which do we like most and which least? (Here the theme of symmetry and its charm returns.) In the mid-'90s, several measurements were made using human observers and software-generated shapes, but they failed to find any correlation between the fractal dimension D and the pleasantness of the sensations experienced by human recipients.

Stubborn Taylor did not give up, and in 2002-2003, enlisting a bevy of staff psychologists, he showed 200 human guinea pigs three types of fractals: those found in nature (clouds, trees, cauliflower, etc.), those generated by software… and the paintings of Pollock. He believes he has found a prevailing aesthetic preference for fractal dimensions between 1.3 and 1.5.

Fractals as a bridge between science and art

Following these (controversial) studies, fractal art movements have started to appear. These authors, unlike Pollock and other artists predating Mandelbrot's work of the 1970s, have the explicit purpose of painting fractally and/or are consciously influenced by fractal geometry, even though in reality they rarely understand it well. They often confuse, for example, random and fractal shapes.

Another confusion concerns the relationship between mathematics and art. Today it is common to hear that fractal geometry is the definitive trait d'union between the artistic and the scientific worlds.

Yet the link between mathematics and art is more complex, and far more ancient, than the discovery of fractals (a rationalization, due to Mandelbrot, of work by Weierstrass, Cantor, Koch, Poincaré, Klein, Julia and other nineteenth- and early twentieth-century scholars): just think of symmetry, Keats' «truth is beauty», or the eternal talk of the beauty of mathematics, all long predating Mandelbrot and Pollock.

Although today we find them especially intriguing, fractals are but one of the links between science and art, not the only one, let alone the main one. They are one of the arches of a complex bridge.

The role of the observer

I admire Taylor's work and similar studies, and I believe they are of great interest. It is important to understand how aesthetic perception is formed in humans.

But at times these studies make us forget that the perception of art is more complicated than a beautiful sequence of numbers and a chi-square test. Aesthetics is not merely a function of the work of art but also, and strongly, of the observer's cultural level.

Therefore, no matter how many tricks we devise to try to understand what a man or a chimp feels when looking at a landscape, a symmetrical shape or a fractal, we are still far from a holistic explanation of aesthetic perception. We should organize experiments that filter out the “noise” of culture. With monkeys, it can be done; with humans it is more difficult. Even artists who are both great and popular, whom everyone likes, such as Fellini, Caravaggio, van Gogh, Haring or Verdi, are perceived differently by me and by the historians of their respective arts.

It is a problem similar to the one that arises in the measurement of IQ. Many of the tests contain a cultural component, which interferes with the “innate” intelligence background. Subjects who master the language better and/or are more familiar with the cultural references made in the tests appear more intelligent even though perhaps they are not.

Paolo Magrassi Creative Commons Non-Commercial – No Derivative Works

We have already discussed, here and elsewhere, the essential a-scientificity of the “pop complexity” phenomenon.

Part of the pop complex literature is genuinely motivated by scientific curiosity. But a larger part is, at its core, a conscious or unconscious attempt to liquidate the scientific approach.

If nothing is predictable, everything is at the edge of chaos, and behaviors in Nature always emerge unexpectedly, it follows that no rigorous, methodical, controlled approach is possible. That is how the pop-complex enthusiast reasons (and often explicitly states, as in certain essays on emergent behavior and Darwinian evolution).

In her mind, all we are left with are animistic beliefs or, at most, some organized religious scheme. In the [few] more sophisticated cases, the pop-complex zealot accepts at most the notion of numerical simulation: scientific investigation is reduced to studying the behavior of complex adaptive systems.

[Nota bene: We are not talking about hard-science researchers here. We are referring to approaches in the popular literature about complexity, such as the one on complexity and management, complexity and human organizations, complexity and psychology, complexity and medicine, etc.].

Now, why is this happening? Because many people, including intellectually gifted people such as some of the pop-complex exponents, are afraid of mathematics. Math repelled many if not most of us in middle and high school. And those who did not venture into “hard” scientific studies afterwards have remained forever prey to that repulsion.

To these folks, mathematics is the hallucinatory and monstrous sequence of untameable formulae we remember from when we were fifteen: a deterministic mechanism (two of the most recurring, obsessive words in pop complexity) that must be followed pedantically in order to get to some predefined result.

The learned person knows that mathematics is quite another thing, and that our teenage recollections are nothing but the exercises we were given to make sure we were learning the concepts and acquiring an inclination for precision. (Similarly, Latin literature is about reading Virgil, Ovid and Horace, not declining nouns and conjugating verbs. Yet we need the latter in order to progress to reading.)

Like music, mathematics is about creativity, abstraction and beauty, not merely exactitude. Vast parts of mathematics and formal logic are not exact at all, as they involve estimating, approximating and guessing. Nor is math deterministic: one of its quintessential activities, proving theorems, is profoundly non-deterministic.

The fortunate Italian reader who questions our words is welcome to read Discorso sulla matematica by Gabriele Lolli (Bollati Boringhieri 2011), in which the author equates the fundamental mathematical methods with the literary ones discussed by Italo Calvino in his Lezioni americane (1985-1988), namely

Lightness

Quickness

Exactitude

Visibility

Multiplicity

Consistency

(A book for which we anxiously wish an English edition.)

If pop complexity authors really studied mathematics, as opposed to just some of its grammar, they would learn how to place non-linear phenomena and complex approaches in the light they deserve, instead of drawing caricatures.

Paolo Magrassi Creative Commons Non-Commercial – Share Alike

In November 2008, while writing a book in Italian on pop complexity, I had to undertake a (painful) review of the scientific literature on “complexity theory” for management.

The most cited paper at the time (no idea what might have changed ever since) was “The Art of Continuous Change: Linking Complexity Theory and Time-Paced Evolution in Relentlessly Shifting Organizations”, by Shona Brown and Kathleen Eisenhardt, which had been published in 1997 by Administrative Science Quarterly (Vol. 42, No. 1).

It had already been cited over 1100 times by other authors as a sort of management-science complexity Bible.

In actuality, rather than «extending thinking about complexity theory», as bombastically announced in the abstract and implicitly in the title itself, all the paper accomplishes is to offer a bunch of suggestive references to a sloppy popular literature. It also explicitly admits, in the very last paragraph, that it has not empirically proved anything about the relationship between complexity and organization: «If these inductive insights survive empirical test, then they will extend our theories […]».

As is typical of unsuccessful scientific accounts, the bibliography is very long and includes citations of works totally unrelated to the paper's content, as well as of others which the authors have obviously not understood, if read at all, such as the renowned popular work by physicist Murray Gell-Mann, The Quark and the Jaguar, an absolute must-cite for authors who dwell upon complexity but, lacking a scientific background, feel the need to put together a credible bibliography.

The 35-page Brown and Eisenhardt paper starts talking about complexity only on page 30 (beginning with «Perhaps closest to our research is work on complexity theory […]»). It does the job by merely quoting four books (no page numbers), and concludes with these words:

«Although speculative, our underlying argument is that change readily occurs because semistructures are sufficiently rigid so that change can be organized to happen, but not so rigid that it cannot occur. Too little structure makes it difficult to coordinate change. Too much structure makes it hard to move. Finally, sustaining this semistructured state is challenging because it is a dissipative equilibrium and so requires constant managerial vigilance to avoid slipping into pure chaos or pure structure. If future research validates these observations, the existence of semistructures could be an essential insight into frequently changing organizations».

The words «[it] is challenging because it is a dissipative equilibrium and so […]» are an annoying example of the unnecessary abuse of pseudoscientific language to state something that could have been said in a clear and simple fashion.

What the authors intended to say is that if an organizational structure is too rigid it will tend to oppose any change, while if it is not structured at all it tends toward chaos; the intermediate organizational condition is more flexible, but its equilibrium is unstable, since the state can become rigid or chaotic unless it is persistently controlled.

This may not sound like a tremendously innovative concept to you, yet if you go and read the paper you will concur that it could have been stated just as I did. The analogical and imprecise resort to “dissipative structures” serves the purpose of leading the reader to believe that the authors are referring to a scientific context which they know well and which presumably attests to the veracity of their statements, thereby adding credibility to them.

However, the reader with a minimal scientific culture is annoyed by the paucity of the content and by the unfounded allusions (the expression unstable equilibrium would have been clearer and more correct, with no need to call into question the entropy variations and environmental exchanges implied by the term “dissipative”, to which the remainder of the paper makes no reference whatsoever).

In the Conclusion section the paper states that

«At a more fundamental level, the paper suggests a paradigm that combines field insights with complexity theory and time-paced evolution […]. Continuously changing organizations are likely to be complex adaptive systems […]»

The «paradigm that combines» is what I illustrated previously, i.e. ten lines of rhetoric, and complex adaptive systems are an obligatory slogan that you must utter if you want to make believe that you know what you are talking about when daydreaming about complexity.

The truth is that this paper is pervaded by a fundamental confusion between complexity and dynamism (which is what it is really about), and that when the authors refer to complexity (that is, on pages 30 and 33 only) they reveal their incompetence in the field.

Enough said about the most successful (as of 2008, at least) scientific paper on complexity in organizational management. And if this is the scientific state of the art, you can imagine what follows down the food chain…

 

PS: There are good papers too! One example, again taken from my 2008 review: “Complexity Theory and Organization Science”, by Philip Anderson (Organization Science, May-June 1999, Vol. 10, No. 3). An excellent overview of the complexity concepts that may or may not turn out to be useful in management theory.

Last year two Italian scholars who live abroad posted a funny “Sociology Working Paper” on an Oxford University website.

The piece reported on the authors' misadventures with some Italian academic institutions (delayed meetings, reduced reimbursements of travel expenses, no-shows, etc.), and it proposed a psycho-sociological model that generalized those personal experiences into features of Italian society as a whole.

In other words: “Italians behave like this (unprofessionally, chaotically and unpredictably). It happened to us on these occasions. And if you go to Italy, it will happen to you as well: everywhere, not just in universities, and not just in the universities we visited. Here is a scientific explanation of how and why Italians are lunatic and unreliable.”

Its appearance under the “ox.ac.uk” domain, the capital letters, the words “working paper” and the signatures of two academics (a sociologist and a philosopher) gave it some scientific spin. However, there was nothing scientific in the article.

If you want to study certain traits of a population, you will need to A) develop an objective and repeatable method for measuring those traits and B) apply it to a representative sample of that population.

The Oxford University working paper had neither (A) nor (B). Just a few grotesque game-theory symbols as make-believe, the narrative of episodes of unprofessionalism of which the authors had been the victims, and a naïve, fully a-scientific attempt to generalize.

Gossip. Or, at most, a humorous and thought-provoking journalistic account. But since it was posted on a University website, some people took it seriously.

One Italian reader posted it on an Italian daily's blog devoted to emigrants, where it was received with some enthusiasm: snobbish expatriates always love it when someone defames their ungrateful home country (because it does not want them back).

The simple-minded acclaim went: “here are two illustrious scholars, who had to flee the country because their cleverness caused embarrassment in Italian university departments, and who have scientifically proved why Italians are so unprofessional and unreliable. Which explains many things, including why I, who am a phenomenal researcher / professional / manager, cannot get a top-notch job in a world-renowned think-tank here in Italy, a university tenure, or a Nobel Prize.”

But it was worse than that. A few rapid internet exchanges eerily let a much worse truth emerge before me: the authors themselves thought that their work was science! To the point that one of the two refused to publish a comment of mine on her blog, because I was demolishing their unscientific attitude.

Now, here’s the bottom line.

I have the privilege of personally knowing many expatriate Italian scientists of extremely high caliber (Italians produce the fourth-largest scientific output in the world when measured by the number of highly cited papers divided by dollars spent on R&D): high level not just by publication impact factor or academic ranking, but by the reputation they enjoy with all the experts in their respective fields. (These are the people Italy should want back, whatever that means.)

Not one of these folks would ever write such a cumbersome piece of prose and then think it was science.

Most, I suspect, would furthermore concur that the stories of unprofessionalism told by our two goliardic laureates can be experienced almost anywhere in the world with approximately the same frequency as in Italy (this is my recollection of 30 years on the road).