Posts Tagged ‘complexity’

As we already discussed when we mentioned the guy who claimed he could distinguish real Pollocks from fakes by looking at their fractality, here is another group of scholars who maintain they can tell wheat from chaff in the commercial art world (a proven method would lead to lucrative consulting fees): their paper seems to me less a breakthrough than an abracadabra, written in poor English, about a quantity, complexity, which the authors can neither measure nor define.

If I am wrong, please let me know…


Life can only be understood backwards, but it must be lived forward. [Søren Kierkegaard]

In 1841, Graham’s Magazine serialized Edgar Allan Poe’s The Murders in the Rue Morgue, arguably the first detective story ever written. Auguste Dupin, a Parisian who is not a professional investigator, reads newspaper reports of a mysterious and vicious crime and, using only induction and deduction, solves the mystery of two women butchered in a room locked from the inside. Analysis and logic thus made their entrance into the history of fiction, and the tale was forever after taken as a symbol of the triumph of analysis and reasoning, because it appeared in an age when trust in scientific progress and in human speculative faculties was at its peak. The story resonated with widespread feelings.

The scientific revolution had advanced impetuously between the eighteenth and nineteenth centuries, even allowing for the development of a mechanistic view of the universe: knowing the basic laws of dynamics, and applying them pedantically to every elementary particle of matter, it could be deemed possible in principle to predict the future state of any system, whether a bunch of artillery shells, the molecules of a gas, or an entire human body. Abusing such logic, some extremist fringes of this scientism held it natural to subject the psychological and the social spheres to the same mechanism, too: if the brain is chemistry and electricity, and if we can get to know all the laws that govern electric fields and molecules, then we can, in theory at least, know the dynamics and the future development of any person, group or society. Everything is knowable, hence predetermined. Famous is the reply that the eminent scientist Pierre-Simon de Laplace gave Bonaparte, who had asked why there was no mention of the Creator in his work on astronomy: «It is a hypothesis I did not need».

These views, which are taken very seriously today by a burgeoning literature on complexity in organization and management science, are in reality little more than caricatures, and often even apocryphal. For example, Laplace is usually cited as a champion of unbridled determinism; however, his quips, his popular writings and his lectures were little more than the media forays of an intellectual who sought to profit from his academic success and credibility to build a political career, which he in fact achieved, helped by bold transformism. He knew that the imperial rhetoric would gladly espouse a scientistic doctrine devoted to power, control and vainglory. You will not win a dictator’s heart and mind with talk of caution, nuance, doubt and vagueness! Indeed, when Laplace in the end did act that way, because he had been invested with ministerial responsibilities and was dealing with day-to-day practical issues, he was fired. In the memoirs written at St. Helena, Napoleon would note: «A scientist of the first rank, Laplace was soon to be a poor administrator; from his first acts we recognized our mistake. […] He would not consider any issue from the right angle, sought subtleties everywhere and conceived nothing but problems».

End of extremist alibis

However, at the beginning of the twentieth century, which we might call the Century of Complexity, science itself began to demolish the foundations on which extremists could base their mechanistic faith.

Quantum physics showed that even an infinitely accurate instrument cannot determine the precise position of a particle (as Laplace believed possible), whose motion is also, if in small part, subject to chance. Furthermore, the study of nonlinear dynamical systems, made possible by computers in the second half of the century, consolidated a knowledge that had emerged several decades earlier: two systems A and A′, no matter how similar their initial conditions, may become increasingly different as time elapses [1]. Making predictions is therefore in principle impossible, because if I fix my attention on system A, the possibility exists that in the future it will behave as A′. These two discoveries frustrated all deterministic ambitions.
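A minimal numerical sketch of this sensitivity (a toy example of mine, not drawn from the referenced literature): iterate the logistic map, a textbook nonlinear system, from two starting points that differ by one part in a million, and watch them part ways.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x), iterated at r = 4.0, where the map is chaotic.
# Two starting points one millionth apart soon bear no relation
# to each other.
r = 4.0
x, y = 0.400000, 0.400001
for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.6f}")
```

No measurement of the initial state, however accurate, keeps the two futures aligned for long: on average the gap roughly doubles at every iteration.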

At the same time, physicists closed the door on reductionist dreams, that is, the hope of understanding the world by studying microscopic physics alone: just as a crowd of sports fans or a flock of birds sometimes does things that cannot be explained by individual attitudes, so elementary particles, when observed not one by one but as aggregates, may exhibit behaviors that cannot be predicted from the physical laws governing the motion of the individual particle [2]. It is therefore necessary to find the laws that govern the aggregates, the systems, as opposed to just the “fundamental” ones.
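To make the idea concrete, here is a toy demonstration (my illustration, not anything found in Anderson’s paper): in Conway’s Game of Life the update rule speaks only of a cell and its eight neighbours, yet a “glider”, a shape that travels across the grid, emerges from it: a collective behaviour written nowhere in the individual-level law.

```python
# Emergence in Conway's Game of Life: the rule is strictly local,
# yet the five-cell "glider" below reappears intact every four
# generations, shifted one cell diagonally. The pattern "moves"
# although no individual cell does.
SIZE = 12
glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}

def step(alive):
    counts = {}
    for (r, c) in alive:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    nb = ((r + dr) % SIZE, (c + dc) % SIZE)
                    counts[nb] = counts.get(nb, 0) + 1
    # birth with exactly 3 neighbours; survival with 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

cells = glider
for _ in range(4):
    cells = step(cells)
print(sorted(cells))   # the same glider, one cell down and to the right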

Reality is complex

Not only is the world complicated (from complico: to bend, to twist), namely made of many features, often hidden: it is also inherently complex (complector: to hold together, to combine, to entangle), in the sense that almost always those features are interrelated and influence each other.

Take political economy. If you increase taxes, you will have more resources to build production-enabling infrastructure; but at the same time you reduce the capacity for consumer spending, which is production’s ultimate goal. The risk is that warehouses stay full and companies start laying off, which further weakens consumer demand and establishes a nasty vicious circle (feedback). If, to remedy this, you cut taxes, you must do it in a timely manner, because if you wait too long down the spiral, families will use their increased monetary resources for saving rather than for purchasing, since they lack confidence in the future. At that point the State has no more resources to push the economy, which itself is struggling to recover because no one buys anything.
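The point about timing can be caricatured in a few lines of code. This is a deliberately crude sketch of mine, with invented numbers and no pretense of economic realism: demand feeds back into output, pessimism compounds while you wait, and the very same tax cut buys less the later it comes.

```python
# A toy feedback economy (all parameters invented, for illustration only).
def simulate(tax_cut_at, periods=20):
    tax, confidence, output = 0.30, 1.0, 100.0
    for t in range(periods):
        if t == tax_cut_at:
            tax = 0.20                          # the stimulus: cut taxes
        disposable = output * (1 - tax)
        demand = 0.9 * confidence * disposable  # spending depends on mood
        if demand < output:                     # warehouses fill up...
            confidence *= 0.97                  # ...layoffs erode confidence
        output = 0.5 * output + 0.5 * demand    # production adjusts with a lag
    return output

print("tax cut in period 2 :", round(simulate(tax_cut_at=2), 1))
print("tax cut in period 15:", round(simulate(tax_cut_at=15), 1))
```

The same intervention produces different outcomes depending on where in the spiral it lands: that is feedback at work, and it is what linear cause-effect reasoning misses.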

There are a great many organizational and management situations in which the feedback between a cause and its effect(s) gives rise to very complex situations, creating the clear feeling that simple cause-effect analyses are of limited usefulness and should be complemented by common sense (“heuristics”) as well as by the adoption of drastic simplifications. It happens every day with our children, our investment portfolios, our health. If a patient is prone to heart failure, they will be advised not to drink alcohol and to take diuretics, even if they suffer from kidney stones. The doctor is choosing, with good sense, the lesser evil, though aware that anything done to one organ of the human body is reflected on half a dozen others at least, which in turn will impact still others, with cascading effects that could eventually reach the very organ that was to be cured.

If we look at the global economy, the potential for complexity is obvious: just think how many connections there are, how many cause-effect relationships that might be subject to feedback. Financial markets, economies and networks (such as energy or transport) are interconnected. Consumers are too, and influence one another’s behavior through forms of communication such as social networking, mobile telephony and email. Feedback phenomena are countless, and there is no economic context, such as the ecosystem in which a company is immersed, that can be understood simply by breaking it down into its parts and considering them one by one: the analytical approach ought to be complemented by the holistic one, looking at the system and not just at the components, since the linear sum of their effects is not equal to the behavior of the whole.

Linearity as a useful approximation of reality

The “systems” and the “problems” encountered in nature are essentially of that kind, namely non-linear. However, in many situations one may resort to linearity (i.e.: A) cause → effect, and B) sum of causes → sum of effects) as a first-order approximation: as long as the effects of non-linearity can be considered negligible, we can build a mathematical model of the system as if it were linear. We treat the problem, that is, as the linear sum of its causes, whose mutual interactions give rise only to negligible second-order effects, as the diuretic does for heart failure.
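In symbols, the two conditions above amount to superposition; for a system $f$ mapping causes to effects:

$$f(x + y) = f(x) + f(y), \qquad f(\alpha x) = \alpha f(x).$$

A smooth non-linear system can be treated this way near a working point $x_0$, since $f(x_0 + \delta) \approx f(x_0) + f'(x_0)\,\delta$: the terms being thrown away are of order $\delta^2$, precisely the “negligible second-order effects” just mentioned.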

This simplifying approach is fruitful in many situations, from electronics to ecology, from computing to economics, from biology to celestial mechanics, and enormous scientific and technological advances have been made on the basis of linear approximations. Linear models are useful because, within their linear regime, many systems are similar and their behavior can be described by the same equations, even though the contexts are very different. Complex systems, on the contrary, each have a different personality and mathematical formulation and, indeed, in most cases not even that: equations are replaced by computer simulations. That is why technology strives to remain in linear territory as much as possible. It is like when a diuretic is given to someone with kidney stones. Or like when the laws of economics are formulated.
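A classic instance of this universality (my illustration, not one drawn from the sources cited here): a damped mass on a spring and an RLC electrical circuit, each within its linear regime, obey the same second-order equation,

$$m\,\ddot{x} + c\,\dot{x} + k\,x = f(t), \qquad L\,\ddot{q} + R\,\dot{q} + \frac{q}{C} = v(t),$$

so whoever has solved the one has solved the other: only the names of the constants change. No such portability exists among complex systems.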

At the root of current economic doctrine there indeed stand some obvious simplifications of reality (the Efficient Market Hypothesis, the Rational Expectations Hypothesis), which are well known to economists. It is an almost self-evident truth that the price of one stock or asset can affect the price of another (a primary source of non-linearity); it is a well-known fact that economic agents do not behave rationally (a Nobel Prize was awarded in 2002 to Daniel Kahneman for proving this in the seventies), that markets are perfectly efficient only in extremely rare circumstances (Nobel Prize to Joseph Stiglitz in 2001), and that they can suddenly go crazy at times, moving far away from their “equilibrium”. These convictions notwithstanding, economic theory and practice continue to proceed on the basis of those linear simplifications. Only occasional adjustments to the models are made because, in essence, we do not know better ones. (Not even Mandelbrot and Taleb, for example, despite the subtlety, importance and validity of their criticism, were able to propose usable mathematical models for the financial sector.)

It is in any case a fact that catastrophic or near-catastrophic crises appear to be increasingly frequent. They may be sudden and severe disruptions in financial markets, or shocks that propagate along the increasingly complex value “chains” of businesses, which are really complicated networks or lattices. These shocks can affect, in unpredictable ways, productive sectors seemingly unrelated to those that suffered the crisis in the first place. For these reasons one wonders, as physicist-financier Jean-Philippe Bouchaud did in 2008 [3]: are those assumptions of linearity and rational expectations still realistic and sustainable in 2011? Are the effects of their imperfection really negligible?

Think of Einstein’s relativity. In the vast majority of everyday situations, including those involving sophisticated technologies, we do not worry about the effects of relativity, because the objects we deal with do not move at speeds approaching that of light or travel intergalactic distances. The effects of relativity are negligible. Still, the GPS devices in our cars and smartphones would not work if their hardware and firmware did not take the effects of relativity into account. Here is an example of a common situation in which the approximation «this theory has no practical relevance» was valid 25 years ago (when we did not use GPS) but is wrong today: we have had to enrich our mathematical models to account for its practical effects.
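The magnitude of the effect is easy to check on the back of an envelope. Here is a sketch of mine, using the textbook first-order formulas and approximate orbital figures (the numerical values are my assumptions, not data from any GPS specification):

```python
# Relativistic clock drift of a GPS satellite, to first order.
# Special relativity: the orbiting clock runs slow by v^2/(2c^2).
# General relativity: it runs fast by (GM/c^2)(1/R_earth - 1/r_orbit).
GM = 3.986004e14       # Earth's gravitational parameter, m^3 s^-2
c = 2.99792458e8       # speed of light, m/s
R_EARTH = 6.371e6      # mean Earth radius, m
R_ORBIT = 2.656e7      # approximate GPS orbital radius, m
DAY = 86400.0          # seconds per day

v_squared = GM / R_ORBIT                                   # circular orbit
special = -(v_squared / (2 * c**2)) * DAY                  # clock slows
general = (GM / c**2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY  # clock speeds up
net = special + general

print(f"special relativity: {special * 1e6:+.1f} microseconds/day")
print(f"general relativity: {general * 1e6:+.1f} microseconds/day")
print(f"net drift:          {net * 1e6:+.1f} microseconds/day")
print(f"uncorrected ranging error: ~{net * c / 1000:.0f} km per day")
```

Some 38 microseconds a day may sound negligible, but radio signals cover about eleven kilometres in that time: left uncorrected, the “irrelevant” theory would wreck satellite positioning within a single day of operation.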

By the same token, we ought to be careful not to underestimate the occurrence, in the economic world, of facts that could render obsolete and wrong the linear approximations underlying the dominant economic paradigm. And in this sense, a new fact is the number of interconnections between economic agents, at both the macro and the micro level. (Substantial quantitative variations can lead to qualitative changes: one gram of paracetamol cures a headache or fever, but a hundred grams are deadly!)

According to a minority but growing number of scholars, 21st-century global markets cannot be modeled as linear systems: the linearity assumption will grow more and more inadequate as interconnections increase, because they are the ultimate source of non-linearity, and their growth beyond a certain threshold is what makes the approximation no longer realistic. Radically new doctrinal approaches, if only embryonic for now, are being proposed in econophysics, a discipline that aims to encourage economic research to adopt methods the natural sciences have developed to describe complex systems. Many physicists and a few economists are testing complex models or agent-based simulations that have proved successful in physical or biological settings and could perhaps be replicated in the financial/economic context [4].
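To give a flavour of what “agent-based” means in practice, here is a deliberately minimal sketch, loosely inspired by Alan Kirman’s herding model (a real model from this literature, though the stripped-down version below is mine and proves nothing about markets):

```python
import random

# Kirman-style herding: each agent is a buyer (+1) or seller (-1).
# At every step a random agent either changes its mind spontaneously
# or imitates another, randomly met, agent. Imitation makes the net
# demand swing between long optimistic and pessimistic phases instead
# of averaging out, as independent coin-flipping agents would.
random.seed(1)
N, EPS, DELTA, STEPS = 200, 0.01, 0.40, 100_000

state = [random.choice((-1, 1)) for _ in range(N)]
imbalances = []
for _ in range(STEPS):
    i = random.randrange(N)
    if random.random() < EPS:
        state[i] = -state[i]                   # idiosyncratic change of mind
    elif random.random() < DELTA:
        state[i] = state[random.randrange(N)]  # herding: copy someone else
    imbalances.append(sum(state) / N)          # net demand, between -1 and 1

print(f"mean net demand: {sum(imbalances) / STEPS:+.3f}")
print(f"largest swing  : {max(abs(x) for x in imbalances):.3f}")
```

Nothing here is linear or in equilibrium, yet the model is perfectly tractable by simulation: that, in a nutshell, is the methodological wager of econophysics.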

(Econophysics research needs to be strengthened by a broader and deeper participation of economists. Many economists of high rank, while recognizing the limitations and imperfections of the current paradigm, do not seem much worried about its sustainability and health. For example, according to some of them the financial crash of 2008 was nothing but the warning, provided by efficient markets, of an imminent violent economic downturn. In this vision, expressed effectively by Eugene Fama in an interview in The New Yorker in January 2010, finance was the victim, not the cause, of the economic collapse. In addition, some econophysicists seem to ignore, or at least not to care about, the corrections that economists are gradually making to the simplified models.)

Complexity and business

Similarly, a growing minority of business/managerial economists consider Taylorist scientific management (which has made it to the present day through various mutations and enrichments) as still plagued by extreme mechanism and unaware of the lessons coming from complexity science (deterministic chaos, emergent behavior): they therefore believe it should be replaced with models inspired by non-linear dynamical systems, non-equilibrium thermodynamics, agent-based simulations and other modern “complex” tools. (Even though, as in econophysics, none has yet proved applicable to business economics.)

In a 2009 book (Difendersi dalla complessità. Un kit di sopravvivenza per manager, studenti e perplessi, Franco Angeli) I analyzed this phenomenon and showed that, while bringing (like econophysics) real problems to the table, it is still immature and based on a distorted understanding of the relevant scientific concepts and on profound misconceptions concerning their applicability. Business economists dealing with complexity miss their target in the very first steps of their analysis, since they start from the assumption that science should get rid of mechanistic approaches. Apart from the fact that Laplacian mechanism is, as we have seen, nothing but a caricature of the nineteenth century’s scientific mainstream (long since superseded by, e.g., quantum mechanics), the anti-mechanistic obsession of these scholars is grounded in a lack of understanding of the mentality and instruments of science: their publications systematically refer to myths such as «exact sciences», «the determinism of mathematics» [5], and «reductionism» in the sense of «breaking down a system into parts to analyze them one by one» (a confusion between reductionism and analysis) [6].

Another recurrent mistake, nearly systematic in this school of thought, is the confusion between epistème and tèkne. Almost all authors miss the distinction between matters of principle and practical, technological issues. For example, from the observation that non-determinism rules in epistemology and science, they conclude that forecasting is a scientistic obsession, if not an exercise in futility. The truth, however, is that even in complex application domains now considered classics, such as meteorology, we continue to make forecasts, and they keep getting better. Macroeconomic projections, such as GDP or deficit at year end, are routinely made because they are necessary for governance, and their deviations from actual values are seldom dramatic. (It should be recalled that predictions in economics are usually expressed as three distinct scenarios depicted as scissors diagrams: the fact that the media only report and discuss the central curve attests not to the forecasts’ unreliability but rather to public opinion’s inability to digest them.) In high-energy physics it is not uncommon to come across causes occurring after the relevant effects: this intrigues physicists, but engineers do not draw the immediate conclusion that time travel is feasible.

We know that all models could be improved, but we use what we have until more precise ones emerge. These, in turn, will of course be pale approximations of reality: the vision of science as ultimate, granitic truth pervades the literature we are discussing, in clear contradiction with the essence of the scientific approach, which builds on the recognition of uncertainty in Nature and consequently assumes the incessant dynamism of provable knowledge (epistème).

Some see in scientific management an approach similar to the megalomania of the mechanistic extremists we mentioned earlier: «give me some basic laws of economics and powerful-enough computers, and I will manage the company, its ecosystem, the entire world economy». That attitude would indeed be foolish and dangerous. However, as we realize the complexity and unpredictability inherent in virtually all everyday situations, we would not dream of concluding that, since everything is complex and unpredictable, we might as well give up that bit of “Taylorism” which can be useful. When using common sense to prescribe a diuretic, your doctor does not think «and to hell with physiology texts!». When starting a campaign for a new service or product, the CEO does not think «software cannot give me any useful indication anyway; I’m not going to use it». When issuing a tax reform decree, the Finance minister does not say «Ladies and gentlemen, this is black magic, a matter of luck. Forget financial skills, computers and econometric models».

For sure, inaccurate and superficial versions of scientific thought can always be found: but choosing these as targets and invoking a «Copernican revolution», as many business economists dealing with complexity do, may serve to épater les bourgeois in boards of directors but will not bring scientific results of any sort. The academic management and organization science literature must strengthen its understanding of non-linear science, with which it is infatuated but which it does not yet master. (It is not surprising that, downstream, popular literature and consulting practices can sometimes appear naïve.)

Starting to see the light

The road ahead will likely consist in abandoning the evocative but misleading epistemological debates and focusing instead on useful techniques to tackle the growing non-linear distortion of the business world, as happens in econophysics, leaving reforms of the scientific paradigm to whomever they should concern (tèkne tòn teknòn kai epistème tòn epistemòn…). On that road, I am particularly interested in two approaches that I came across in recent years.

One is Ontonix (a company in which I have no involvement, but admiration for the genius of its founder, Jacek Marczyk), which has developed holistic risk-monitoring software. What I like about Ontonix, despite a few humble methodological reservations of mine, is their pragmatism and quantitative orientation. Ontonix do not “talk” about complexity: they measure it, based on a conceptual framework in which it is seen as an intrinsic property of systems, like temperature or pressure. Physical quantities, says Ontonix, attain scientific dignity when they become measurable. Epistemological objections can be raised concerning the definition of complexity, as well as methodological ones related to the metrics. But who cares about such nitpicking, if we are offered an inexpensive tool that can provide, with a surprisingly small organizational effort and a simple user interface, an assessment of the systemic risk of our business? I do recognize a breakthrough in Ontonix: the first-ever measure of a company’s «stability rating».

An equally pleasant and enriching encounter was a 2009 paper by Sergio Barile, a professor at Rome’s La Sapienza University: “Verso la qualificazione del concetto di complessità sistemica” (Towards the characterization of the concept of systemic complexity), which I believe was first published in Sinergie, Rivista di studi e ricerche (No. 79/09): one of my top three reads of all time in the field of micro-economic complexity. I was impressed by the lucidity of the analysis, the precious and rare ability to illustrate concepts through real-world situations, and the acumen with which the author places the observer at the center of the notion of complexity. Although Dr. Barile has more recently doped his work with some esoteric stuff that leaves me perplexed (such as “syntropy” and “anticipated potentials”), he is a creative and fascinating author and I would not be surprised if he came up with further interesting contributions.

From these signs I can tell that we are making progress, although I do not expect complexity theories and technologies to mature and penetrate the mainstream of business management meaningfully any earlier than 2025.

The complex twentieth century

In 1946, one hundred years after Edgar Allan Poe, a Florentine literary magazine published in installments Quer pasticciaccio brutto de via Merulana (That Awful Mess on Via Merulana), the detective novel that does not end, because reality is too complex to be reduced to logic: life is chaos, a confusion of events and contributing factors of which police commissioner Ciccio Ingravallo knows he cannot possibly get on top. In fact, he

«claimed […] that unexpected disasters are never the consequence or, anyway, the effect of a single cause: rather, they are like a whirlpool, a cyclonic depression in the consciousness of the world, toward which a whole variety of convergent causes have conspired. He also talked of a knot, or tangle, or snarl, or gnommero, which in Roman dialect means a ball of thread. […] The view that we should “reform our sense of the category of cause”, inherited from philosophers such as Aristotle or Immanuel Kant, and replace it with the plural causes, was for him a central and persistent idea: almost a fixation».

It is the same vision as the author’s, Carlo Emilio Gadda: the world as a system of systems, in which every single system affects the others and is affected by them; a world that the Milanese engineer-philosopher always tried to depict as a maze, a ball of thread, without mitigating its inextricable complexity and without concealing, as Italo Calvino pointed out, the plurality «of the heterogeneous elements that combine to determine each event. […] Gadda knew that “to know is to insert something into the real world: it is, therefore, to distort reality”» [7]. Much as Poe’s story reflects the positivist culture of its time, Gadda’s pops up right in the middle of the century of complexity and even anticipates some of its developments.

With The Name of the Rose, in 1980, Umberto Eco wrote a relativistic detective story, a metaphor of the reader’s interpretation of a text: a sign, a sentence, a plot have meaning and significance depending on the context in which they occur. What is true in one frame of reference may not be true in another. The clues and events that occur before William of Baskerville’s eyes have meaning only within their respective contexts, and in order to unravel the mystery the monk must continually work out which context is relevant for interpreting this or that sign. He is rigorous, analytical and Aristotelian, but he is also Galilean to the extent that he can use empirical experience and recognize the effects of a change of coordinates. In the end his deductions turn out to be partially incorrect, yet they still allow him to solve the plot and attain some truth, despite the fact that truth reveals itself only «at times (alas, how mysterious) in the error of the world, so that we must spell out its faithful signs, even when they seem obscure and almost entirely woven of an evil will».

In Eco’s view, the reader always plays an active role in creating the meaning of a literary work: William interprets the events that occur in the convent much as a reader interprets a text and, in doing so, changes it and makes it his own. Here too, then, to know is to put something into the real and to distort the real, as Gadda said. This is always the case. It was for Niels Bohr. It is in the epistemology of complex systems: according to some authors, talking about the complexity of a system only makes sense if an observer is brought in. Poincaré’s three-body system, for example, while subject to tranquil deterministic laws, can become unstable: the set consisting of Sun, Moon and Earth can stage a chaotic ballet in phase space. Yet none of the three bodies, taken individually, and none of the three pairs, individually observed, ever becomes chaotic. Is the phase space of a system an institution of Nature or a construct of the researcher, existing only in the model?

In Poe’s story, the truth is there waiting to be unveiled, provided it is engaged consistently and wisely. For Gadda, no truth is possible, because the tangle of causes hides it in a vortex of chaos. For Eco, truth has many faces: it is relative to the observer and the context. As we know, they are all right.

PAOLO MAGRASSI, CREATIVE COMMONS NON-COMMERCIAL – NO DERIVATIVE WORKS

1 Lorenz, E. (1963), “Deterministic Nonperiodic Flow”, Journal of the Atmospheric Sciences, Vol. 20, pp. 130-141

2 Anderson, P. W. (1972), “More Is Different”, Science, New Series, Vol. 177, No. 4047, pp. 393-396

3 Bouchaud, J. P. (2008), “Economics Needs a Scientific Revolution”, Nature, Vol. 455, p. 1181. As I noted already, the limitations of rational expectations and efficient markets have been known to economists for decades. Stiglitz’s works date back to 1975, and they are predated by those of, e.g., Herbert Scarf (1960) and Hugo Sonnenschein (1972).

4 A short review of these can be found in Magrassi, P., “A Call to Arms and a Blessing for 21st Century Information Technology: the Complexity Challenge”, Proceedings of the 4th European Conference on Information Management, Lisbon, 9-10 September 2010, available on the web.

5 As we know, mathematics is actually rich with approximations, estimates and guesses. And its core activity, namely theorem proving, is essentially an indeterminate exercise.

6 In the referenced book, on pp. 95-99, I showed the flimsiness and lack of scientific soundness of the most-cited paper on complexity theory and management, which at the time had already received over 1,100 citations in Google Scholar. As to the confusion between analysis and reductionism, it can be traced back to an old-fashioned vitalism according to which complexity was an exclusive feature of living organisms, whose structure could only be described by non-physical laws. Reductionism and vitalism are two extremes that were resolved in the Sixties by P. W. Anderson on one side and the advent of molecular biology on the other; yet their scars sometimes re-emerge.

7 Italo Calvino, Lezioni americane, Garzanti 1988, pages 101 and following.

My struggle with the many views on complexity continues. I have now managed to synthesize five “megaviews”. The taxonomy is certainly not definitive, but if anything it is a step ahead of my 2008 book, as well as of Seth Lloyd’s old list of 32 different definitions…

Here are the five major meanings of “complexity” that you will find in any literature, whether scientific or fictional, and their respective overlaps. Even the colloquial use of the term is covered: whenever anybody utters the word, one or more of these meanings is implied.

Nonlinear (or dynamical) meaning: The effects caused by mutual interactions between the components of a [nonlinear] dynamical system. These may include feedback, emergent behaviour, self-organization, unpredictability and deterministic chaos.

Computational (or static) meaning: The computability of a function or algorithm; the time and/or effort it takes to solve a problem. The intersection with nonlinear complexity occurs, in my opinion, via the concept of Kolmogorov entropy (see the toy sketch after this list).

Epistemological meaning: Originally due to Edgar Morin (and in lesser part to Isabelle Stengers and Ilya Prigogine), but still being refined. See, e.g., Minati, Pessa, Collective Beings, Springer, 2007. Sample issues: What does “system” mean? What does “component” mean? What does it mean that a component may belong to both complex and non-complex systems at the same time? How does the observer modify the object of her investigation?

C. E. Gadda (1893-1973)

Literary meaning: See, e.g., Jorge Luis Borges or Carlo Emilio Gadda. Gadda, for example, published in 1946 a detective story (Quer pasticciaccio brutto de via Merulana) that does not come to an end: the mystery cannot be solved because reality is too complex. Detective Ciccio Ingravallo cannot reach the ultimate causes or motives of the murder, because causes not only produce effects but are also affected by them, and influence one another. Gadda’s literary view overlaps with both the nonlinear meaning and the computational one (reality as a “ball of thread”), although the latter intersection is not represented in my graphic (the nonlinear intersection is prevalent anyway).

Hyperbolical (or fideistic) meaning: Most books, papers and writings of this sort on complexity for management, organizational theory or psychology. This literature attempts to draw from both the epistemological meaning and the nonlinear one, but in most cases does not get much of either and usually degenerates into a banal anti-scientific approach. On a mission from God to liberate the world from determinism and reductionism, its authors believe that everything is chaotic, that no predictions whatsoever can be made, and that Nature escapes human insight entirely.
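As promised above, here is a toy way to touch the computational meaning (my sketch; compressed length is only a rough, computable stand-in for Kolmogorov complexity, which is itself uncomputable):

```python
import random
import zlib

# Compressed length as a crude proxy for Kolmogorov complexity:
# a regular string has a short description and compresses well;
# a random-looking string of the same length does not.
random.seed(0)
regular = "ab" * 500
noisy = "".join(random.choice("ab") for _ in range(1000))

for name, s in (("regular", regular), ("random ", noisy)):
    print(f"{name}: {len(s)} chars -> {len(zlib.compress(s.encode()))} bytes")
```

The regular string collapses to a handful of bytes; the random-looking one barely compresses at all: complexity, in this meaning, is the length of the shortest description.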

Paolo Magrassi Creative Commons Non-Commercial – Share Alike

They want to teach managers “complexity”.

How silly.  

For one thing, nobody knows what to do, pragmatically, about complexity.

Secondly, you can’t teach non-linearity to people who know nothing about math, systems theory, linearity, probability, calculus, etc.

Despite claims of “scientific management” and “Taylorism”, the scientific approach never made it into the hearts and minds of managers. Less than 5 percent of them have been trained scientifically. The vast majority merely go through a technical and professional education: accounting, marketing, vanilla statistics, “strategy”, etc.

Teach them scientific methods and approaches. Teach them how to be rigorous; how to be open-minded; how to be inquisitive; how to be curious; how to communicate unambiguously. (N.B.: the latter requirement calls for language mastery in addition to logic.)

And also teach them the grand metaphors that Art has been creating ever since the advent of Homo sapiens. Through art, literature and music we find out as much about humanity, and the real world, as we do with science itself, if not more.

(This does not mean running those horrifying seminars where CEOs are taught how to use Art in Management: it means that, when they are young, future managers should be taught literature, art, music, etc.)

If we have a flourishing of (mostly nonsensical) papers and books on “complexity theory” and management, why do we not yet have a Non-Euclidean Management Theory?

Geometry offers plenty of occasions for lucubrations of all kinds, and surely a rich management theory, post-Tayloristic as well as post-Prigoginian, could be developed out of it.

Take the simple shapes we study in secondary school, such as circles, triangles and squares. If you think about it for a while, you will admit that you have actually never encountered any of them in Nature.

The most technologically advanced and accurately crafted circle is far from perfect: inspect it with a sufficiently precise microscope and you will notice the imperfections. Today we could build a circle using a scanning force microscope (SFM), a nano-tool that can move individual atoms. But in the end what we would have is still a relatively rough contour: an optical microscope would show a perfect circle, but the SFM would reveal the trick!

Perfect figures exist only in geometry books. They do not belong to this world: they are idealizations of reality, Platonic ideas.

Equally, mathematical models, economic theories, information systems and organizational models of all sorts are nothing but idealizations of the real enterprise.

What to make of this simple fact of life?

Can a management theory based on things that really don’t exist bring us any useful practice? When we study or approach a business, are we looking at the real thing or merely at a useless idealized model? Is management theory (whether “complex” or not) a world of cartoons?

And mind you, this is only the starting point of the new Theory, the tip of the iceberg. Indeed, not only is the real enterprise hardly reducible to idealized models: even if you limit yourself to dealing with the models, they do not obey the ordinary rules we learned at school.

Circles, triangles and squares belong in a non-Euclidean world, where parallel straight lines cross, the shortest path is a curve, and what you see is not what you get.

We need a new Geometry of Management, a realistic one. A Non-Euclidean Management Theory.

Here is an argument that can be found in some “epistemology” publications: those books and papers, you know, where one is supposed to show he has read Kant and Popper, and where one pretends that the stuff he deals with is so difficult that it cannot possibly be rendered in plain English.

(Don’t go searching: to find the 1% of publications that are beautiful and worth reading, you’d have to sort through a load of futile stuff, serving the sole purpose of adding to the page count of their authors and publishers.)

The argument, usually served in obscure and bombastic wording, is actually quite simple and can be expressed as follows.

Things that exist in Nature are not complex or simple per se: they merely exist. It is only after a human observer looks at them that some such qualifier can be attached to them.

Take, as an example, Poincaré’s three-body problem: the fact, that is, that the system composed of Earth, Moon and Sun can become unstable and unpredictable. Well, the argument goes, none of Earth, Moon or Sun is unstable and unpredictable by itself: it is only the combination of the three, once we regard them as a system, that behaves that way.

The conceptual process is this: you consider the three bodies as a system, then you design a state space in which the behavior of the system over time can be represented, and then you discover that the curves in that state space are chaotic. You conclude that the system’s dynamics are chaotic.

However, none of the participants in the artificial system you have studied is chaotic. And it does not even know that it belongs to such a system! For example, Earth might believe that she belongs to the Solar System or the Milky Way, neither of which is the three-body system you are ascribing her to. The Sun, who is more pretentious, perhaps believes itself to be a system in its own right, with no need or wish to be mixed with inferior participants.

For that matter, if you study the Earth-Moon (or Sun-Earth, etc.) system, you are bound to discover that it never becomes chaotic!
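For readers who want to see the asymmetry with their own eyes, here is a rough numerical sketch of mine (dimensionless units, arbitrary initial conditions chosen only to avoid early close encounters; an illustration, not a careful experiment): integrate the same planar three-body system twice, with starting positions differing by one part in a billion, and measure how far apart the two runs drift.

```python
import math

def accelerations(pos):
    """Pairwise Newtonian gravity with G = 1 and unit masses."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            d3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / d3
            acc[i][1] += dy / d3
    return acc

def run(perturbation, steps=20000, dt=0.001):
    pos = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8 + perturbation]]
    vel = [[0.0, 0.4], [-0.3, -0.2], [0.3, -0.2]]
    acc = accelerations(pos)
    for _ in range(steps):                 # velocity-Verlet integration
        for i in range(3):
            for k in range(2):
                pos[i][k] += vel[i][k] * dt + 0.5 * acc[i][k] * dt * dt
        new_acc = accelerations(pos)
        for i in range(3):
            for k in range(2):
                vel[i][k] += 0.5 * (acc[i][k] + new_acc[i][k]) * dt
        acc = new_acc
    return pos

a = run(0.0)
b = run(1e-9)
print(f"drift of body 1 between the two runs: {math.dist(a[0], b[0]):.3e}")
```

Run the same experiment with only two bodies and the drift grows tamely, since Kepler orbits are integrable; with three, for generic initial conditions, a billionth of a perturbation is typically amplified by many orders of magnitude. The chaos belongs to the ensemble, not to any participant.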

So, what to make of the statement that there exist “systems” which are complex (i.e., deterministically chaotic or showing emergent behaviour)?

Do such systems really exist in Nature, or do they just belong in our imagination? Or again: should we talk not about “systems” but rather about “systems with their observers”?

Some argue that, because of this issue, no “simple” system can be conceived and that, in effect, a system should be defined as a set of parts which, acting as a whole, produce effects that the individual parts cannot. In this view there is no such thing as a simple system, and complexity is itself the definition of the “system” concept.

The issue is tricky. We shall leave it at that for the time being.

Those interested in the idealism vs. realism issue in science will find a good compendium of it in the first chapters of Roger Penrose’s The Road to Reality, Vintage Books, 2004.

Those who want to think more about the meaning of the systemic approach, especially in relation to complexity and emergent behavior, should read Minati, Pessa, Collective Beings, Springer, 2007.

 
