I love Nouriel Roubini wholeheartedly. His blog posts and interviews give me amusement and pleasure: once you are done with them, nothing worse can happen to you.

Dr. Doom has figured out exactly how to impress the media and the politicians.

After «predicting» the subprime affair, the 2008 financial disaster and the subsequent economic crisis, he’s long been «predicting» the collapse of the Eurozone: I predict that one day he’ll prove right, just as you would have in 10 B.C., had you «predicted» the collapse of the Roman Empire.

I have never, ever heard anything constructive from Dr. Doom. I wonder what consulting engagements Roubini Global Economics may get, other than ominous hoodoo speeches or funeral ceremonies… 🙂

PS: «To predict» an economic event means to specify its date and magnitude with sufficient accuracy so as to justify the cost of the response.

A short selection of poor chaps who use our original concepts and definitions without quoting the source:

http://www.mixura.it/news.php

http://www.talentilucani.it/index.php?option=com_content&view=article&id=480:leconomia-nel-caos&catid=92:italia-cultura&Itemid=309

https://docs.google.com/viewer?a=v&q=cache:3ImugYvKtHQJ:www.designformale.it/Lab%2520V%2520Anno/Materiali10-11/Complessita1.ppt+complesso+complector&hl=it&gl=it&pid=bl&srcid=ADGEEShf_M1vImb2NQBW0FE8Df1eE3YsDGxAhlmX4Gg6Axz_lsDlX0bXoBDVSOo_ohuvHwR_PLIKaT6GoQkiUSqDyShxlY0qP4oOj77CNw7X3GFZlgXs5xXr7LoS49sEcGs4zhc4KcWo&sig=AHIEtbS8ceyLXrakS7co9896I47SeVnX1Q

http://webcache.googleusercontent.com/search?q=cache:qtcFfTRN6y4J:www.biscardini.it/uploads/materiale/dispensa_epistemologia_decisione_complessit_.rtf+complesso+complector&cd=25&hl=it&ct=clnk&gl=it

http://www.farwebdesign.com/cerbero/?paged=40

http://daubau.it/enciclopedia/Complessit%C3%A0

In a nutshell

Posted: March 11, 2012 in Uncategorized

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy

I think that scientific complexity is no cure for the limitations of the current management / business disciplines, because:

  • Management is inherently unstructured. It is only (perhaps) 25% about scientific methods and related techniques (whether “linear” or not), while the other 75% is about persuasion, communication, intuition, empathy, leadership, unstructured knowledge, empiricism, luck. This soft fabric is hard to teach and the related talents are typically acquired via experience: mostly, they cannot be conveyed via software or formal models and languages.
  • The management community has an insufficient command of the 25% part which is “scientific”, i.e. controlled, verifiable, repeatable and quickly teachable. For example, fewer than 5% of managers working in manufacturing ecosystems master statistics or linear programming, the basic tools of the trade; the rest have to rely entirely on specialized consultants to operate the relevant software. As another example: bankers and CFOs do not understand the technicalities of the debate concerning the mathematical models of creative finance. And so on.

In business, I contend, the problem is not that the world is «unknowable» and/or «unpredictable»: the problem is that too many people ignore even the very small part of it which is controllable.

I do see the risk of «illusion of control», «scientism» and «mechanism» that the management literature talks about; however, I do not view these excesses as the fault of the approach, of the underlying “science”, but rather as the result of a limited comprehension of said approach.

I therefore do not see how getting involved in abstruse non-linear concepts and related machinery could help in any way; just as I do not believe that driving lessons should take place at Monza on Formula One cars…

Hence my view of “complexity” subjects as sexy diversions from the actual challenges.

In cathedra pestilentiae

Posted: March 9, 2012 in Uncategorized

An academic invited me as a contributor, with a dozen other authors, to a book which he will edit. 

As the work plan, which he claimed to have drafted specifically for the project, he sent us a two-page text made up, for about 70%, of my own words (with no credit or citation), taken from a book I had published several years earlier.

🙂

I could have reminded him of McLuhan’s aphorism: Only puny secrets need protection. Big discoveries are protected by public incredulity. However the risk was that he might take me seriously…

Abstract: Management and organisation research has been too alacritous in espousing non-linearity, a.k.a. complexity, and in trying to adapt it to its purposes. Complexity has yet to become fully mainstream even in the natural sciences and, at any rate, a prerequisite for business people becoming involved in discussions of emerging behaviour, statistical mechanics, chaos theories and the like is that they first become acquainted with decision science, systems theory, linearity, probability and statistics. In addition, business complexity and colloquial complexity are not always the same (in fact, they seldom are) as non-linear complexity; epistemological issues do not necessarily have practical consequences; and not all theoretical impossibilities translate into practical impotence.

1 Introduction

The 20th-century findings that in science exact solutions cannot be found (‘unknowability’) and exact predictions cannot be made (‘unpredictability’) have been interpreted in various ways outside the hard-science community.

In social sciences, some authors (Anderson and McDaniel, 2000; Mitleton-Kelly, 2003; Peroff, 1999; Stacey, 1995) see them as manifestations of the unfitness of the scientific method to tackle human organisations, in consonance with the old-fashioned vitalistic belief (Arecchi, 2004) that complexity is an exclusive feature of the living and may only be described using non-physical laws.

Other scholars have suggested (e.g., in the call for papers of this special issue of the journal) that non-quantitative paradigms of thought are necessary. More-moderate researchers maintain that business and organisation managers need to adopt a non-mechanistic mindset and absorb a new scientific paradigm in order to face the complex world (Anderson, 1999; Kiel, 1994; McMillan, 2008; Nunes Amaral and Uzzi, 2007).

All three schools of thought equate the complexity of the business world (globalisation, innovation, hyperconnectivity, collaboration, quasi-pulviscular enterprise ecosystems, increasing uncertainty) to the inherent complexity, a.k.a. non-linearity, of nature. And all assume that business managers should become involved in scientific discussions of linearity and non-linearity, emerging behaviour, statistical mechanics, deterministic and stochastic chaos, because the current tools of the trade of statistics, decision science, linear programming, optimisation, and actuarial finance are hopeless in the face of today’s complexity.

2 Complexity and non-linearity

Complexity is a rich and consequently ambiguous word. Of course we have complex numbers in mathematics, but that is not what is being referred to here. Another flavour of the term, indirectly connected to ours, is computational complexity, that is the effort it takes to solve a problem, sometimes also called complexity theory (Homer and Selman, 2011; Jones, 1997) or static complexity. However the most intriguing meaning is that of complexity related to non-linearity in dynamical systems/networks, because of its deep epistemological implications.

Non-linearity was first explored by scholars like Henri Poincaré (Poincaré, 1890), Aleksandr Lyapunov and Vito Volterra, and then by the early proponents of the systemic approach, such as Alexander Bogdanov, Norbert Wiener, Ludwig von Bertalanffy and Warren Weaver. But in the 1960s two fundamental findings, along with the advent of electronic computers, turned this field of study from ultra-specialised into almost mainstream.

Meteorologist and mathematician Edward N. Lorenz, running a weather-model simulation on a laboratory computer, found that a very tiny change in the initial conditions of the run could modify the end result substantially – a finding later made famous as the ‘butterfly effect’. This allowed him to prove (Lorenz, 1963) that infinitesimal changes in a dynamical system’s initial conditions may cause finite changes over time; or, which is the same, that two systems A and A’, no matter how similar their initial conditions, may become increasingly different as time elapses. This implies that long-term prediction is in principle impossible, because if we fix our attention on system A the possibility exists that in the future it will behave as A’.
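For readers who prefer to see the phenomenon rather than read about it, here is a minimal numerical sketch of my own (not Lorenz’s original programme): two copies of the Lorenz system whose initial conditions differ by one part in a billion end up in macroscopically different states after a few tens of time units. The parameter values are the classic ones (σ = 10, ρ = 28, β = 8/3); the crude fixed-step integrator is adequate only for illustration.

```python
# Two runs of the Lorenz system, initially differing by 1e-9, diverge to
# macroscopically different states: sensitivity to initial conditions.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude forward-Euler step of the Lorenz equations.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)           # system A
b = (1.0, 1.0, 1.0 + 1e-9)    # system A', "identical" for all practical purposes

for step in range(40001):
    if step % 10000 == 0:
        separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}  separation = {separation:.3e}")
    a = lorenz_step(a)
    b = lorenz_step(b)
# The separation grows from 1e-9 to the size of the attractor itself:
# beyond a finite horizon, the two "identical" systems tell us nothing
# about each other.
```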

In about the same years, physicist Philip W. Anderson closed the door on reductionist dreams, that is, the hope of understanding the world by studying only microscopic physics. He showed that, just as a colony of ants or a flock of birds sometimes does things which are not explained by individual attitudes, so elementary particles, when observed not one by one but as sets of interacting agents (‘systems’), may exhibit behaviours that are unpredictable from the physical laws governing the motion of the individual particle (Anderson, 1972). It is therefore necessary to find laws for the aggregates, the systems, rather than just the ‘elementary’ ones.

These two findings, along with the works of many other scholars, including Ilya Prigogine, Stephen Smale, Robert May and Benoît Mandelbrot, led to the consolidation of a new scientific view grounded on non-determinism and anti-reductionism. Some of the keywords in this context are deterministic chaos, emergence (emerging behaviour), systems far from thermodynamic equilibrium, self-organisation, organised disorder (Weaver, 1948). All ultimately are manifestations of non-linearity.

Systems are non-linear when they violate the superposition principle (Feynman et al., 1964), that is, when the system’s response to two or more stimuli is not merely the sum of the responses that each stimulus would have caused individually. (Contrary to a widespread cliché, this does not imply that the systemic response is ‘bigger’ than the sum of the individual responses: it can be either bigger or smaller, depending on whether positive or negative feedback takes place; in principle it could even be quantitatively equal, while still logically different.) In its most general definition, the superposition principle also subsumes homogeneity, meaning that if the input is multiplied or divided by some factor the output will increase or decrease by the same factor. Qualitatively, we could say that in a homogeneous system a modification of the components is proportionally reflected in a modification of the whole.
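As a concrete, if toy, illustration of the definition just given (a sketch of mine, not part of the original text), the following few lines check additivity and homogeneity numerically for an ideal amplifier and for a saturating one; only the latter violates superposition, and its combined response is smaller than the sum of the individual ones, as negative feedback would suggest.

```python
# A system is linear iff, for all inputs u, v and scalars k:
#   response(u + v) == response(u) + response(v)   (additivity)
#   response(k * u) == k * response(u)             (homogeneity)

def linear_system(u):        # an ideal amplifier with gain 3
    return 3.0 * u

def nonlinear_system(u):     # a saturating response, as in real devices
    return 3.0 * u - 0.5 * u ** 3

def violates_superposition(system, u=0.8, v=0.5, k=2.0, tol=1e-12):
    additive = abs(system(u + v) - (system(u) + system(v))) > tol
    homogeneous = abs(system(k * u) - k * system(u)) > tol
    return additive or homogeneous

print("linear system violates superposition:   ", violates_superposition(linear_system))
print("nonlinear system violates superposition:", violates_superposition(nonlinear_system))
```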

Computers are responsible for turning complexity from the curiosity it was in the first half of the 20th century into a major issue. Already in the early 1970s, computing allowed researchers to run numerical simulations of problems that admitted no exact solution because their mathematical models were too complex, or were lacking altogether – a challenge typical of non-linear systems. Further impetus came to the complexity challenge in the form of complex adaptive systems (CASs), first exemplified by John Horton Conway’s fascinating Game of Life (Gardner, 1970), then systematised by John Holland in a foundational book (Holland, 1975).
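To make the CAS idea concrete, here is a minimal sketch (my own illustration; the grid is represented simply as a set of live cells) of Conway’s Game of Life: three local rules applied to every cell, from which, with no further instruction, structured behaviour such as the travelling ‘glider’ emerges.

```python
from collections import Counter

def step(live_cells):
    """live_cells: set of (x, y) coordinates. Returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": five cells that, under the local rules alone, translate
# diagonally across the grid -- behaviour written nowhere in the rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(5):
    print(sorted(glider))
    glider = step(glider)
```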

3 The fortune of complexity in management theory

In the 1990s the complexity challenge leaked out of the life sciences and physics, as a handful of books were published that attempted to popularise it. Perhaps the most notable among the earliest such books were Mitchell Waldrop’s Complexity (1992) and Kevin Kelly’s Out of Control (1994). Many people, in several different languages, read or had access to those books, which in turn made reference to earlier works less palatable to the public at large because of their difficulty, like Stuart Kauffman’s The Origins of Order (1993).

Other works followed shortly thereafter from eminent scientists, such as Murray Gell-Mann’s provocative The Quark and the Jaguar (1994) or Ilya Prigogine’s The End of Certainty (1997). These scholarly writings, while actually read only by few, were repeatedly quoted in countless books, papers and essays of management and organisational science content. In Europe, the popularisation of the complexity challenge was further pushed by the elaborated elucubrations of Heinz von Foerster (von Foerster and Pörksen, 2002), Francisco Varela (Varela and Maturana, 1980) and Edgar Morin, a philosopher, sociologist, filmmaker and intellectuel engagé who has published, starting in 1977, a huge corpus on epistemology, transdisciplinarity and complexity (Morin, 1997–2004).

There followed a flourishing of organisation and business management publications, most of which were unaware of the deeper scientific aspects of non-linearity but aspired to import some of the related concepts into management theory, in search of a new management science paradigm (Kuhn, 1962) suitable to the hyperconnected, globalised business world.

One notable example is ‘The Art of Continuous Change: Linking Complexity Theory and Time-Paced Evolution in Relentlessly Shifting Organisations’ (Brown and Eisenhardt, 1997), published in Administrative Science Quarterly and the most referenced work ever on the subject in the management and organisation field, with 2,258 citations as of December 9, 2012. Its title notwithstanding, the 35-page paper does not begin talking about complexity until page 30; it is pervaded by a fundamental confusion between complexity and dynamism (which is what it is really about); and it makes ample use of imprecise, analogical references to subjects, like chaos or dissipative structures, that the authors clearly did not master.

This kind of approach – based on essential ignorance of the underlying scientific terms and/or too quick in importing into management theory methodologies that may not be applicable there and tools that are unproven even in more suitable contexts – has permeated the ‘complexity and management’ literature ever since, and still constitutes its mainstream. There are, of course, more credible approaches – see for example Anderson (1999) – but they remain a minority in this literature.

Irrespective of its actual necessity, the process of importing the complexity challenge (in the scientific sense, not the colloquial one) into management theory has not progressed substantially since its beginning and will need to mature. Some of the misunderstandings that still plague it are presented in Sections 4 through 8.

4 Theory and practice

If a system/problem is not linear, it cannot easily be disassembled into separate blocks, because their mutual interactions have an impact on the behaviour of the whole. Furthermore, a non-linear dynamical system often cannot be reduced to a mathematical model or, at a minimum, models cannot be reused across classes of phenomena, contrary to what happens when linearity is postulated. Moreover, as Lorenz showed, predicting the future state of a non-linear system is impossible, at least in principle and over long time intervals. Finally, it can be shown that non-linearity gives rise to recursive (sometimes imprecisely referred to as circular) cause-effect relationships, much harder to deal with than ‘B follows from A’ entailments.

These four features place the complexity challenge at the heart of the scientific method and make it epistemologically fascinating. But they have been represented by many in the social sciences as the decline of the scientific approach: since the domain of human organisations is non-linear, the argument goes, and since science is based on linearity, scientific management is a relic of the past and ‘new knowledge paradigms’ and ‘new languages’ are needed (Anderson, 1999; Burnes, 2004; Kiel, 1994; Grobman, 2005; Hughes, 2003; Maguire and McKelvey, 1999; McMillan, 2008; Nunes Amaral and Uzzi, 2007). This is a vision expressing much more than a reform or a rejection of Taylorism, and one that not infrequently inclines to dismiss summarily anything bearing the ‘scientific’ qualifier.

This view should be carefully weighed against the inescapable difference between theory and practice, epistēmē and technē. Technology is always grounded on approximations; it is never exact or infinitely precise, and it operates despite the awareness of possible theoretical limitations underneath. Real-world scientific applications all live within the limits of their tolerable precision, which is deemed sufficient until proof to the contrary: they are sufficiently precise, good enough. (This happens in theoretical science, too: we know with only limited precision the value of pi, the speed of light, the Planck constant, the electric charge of the electron or its mass, and many other physical quantities.)

Examples of situations where practice and applications/technology proceed irrespective of potential theoretical impediments include, but are not limited to, the following:

• Euclidean geometry. In principle, straight lines do not exist, parallels cross, and so on. This has had, and is having, a profound influence in mathematics as well as in physical models of nature; but it does not disturb the nights of most engineers, and 99% of the advanced technology we have, from recombinant DNA to supercomputers to exotic financial products, would be exactly the same even if we knew nothing about hyperbolic and elliptic geometries.

• Dimensions of the universe. Well into the 19th century it was believed that the ‘real world’ had three dimensions. This grew to four with the added time dimension of relativity, and is now up to 11 or so in string theory, where the discussion is very much alive, with the number changing as physicists hold their major congresses or post work in progress on arXiv.org. This is an extremely important fundamental scientific discussion, but it has zero impact on daily life and technology.

• Theory of relativity. In the vast majority of everyday situations, including those involving sophisticated technologies, we do not worry about the consequences of relativity, because the objects we deal with do not move at speeds approaching that of light or travel intergalactic distances. The effects of relativity, such as the contraction in length of travelling objects or the dilation of time experienced by a traveller, only become significant under those circumstances, and are negligible in most mesoscale situations. (There also exist, on the other hand, a few common domains where we do take relativity into account: for example, GPS devices.)

• Quantum mechanics. Heisenberg’s uncertainty principle implies that it is impossible to measure the present position of an object while simultaneously determining its momentum, and hence its future motion, with arbitrary precision. The object in question, however, has to be so small (an electron or a proton, for example) as to require especially accurate measuring apparatus. The same limitation still exists, but is totally negligible and uninteresting, when the action scales of the objects being observed are far larger than Planck’s constant, which is an extremely small quantity: a millionth of a billionth of a billionth of a billionth of a joule·second. (One joule is roughly the energy released by dropping a small apple from one metre onto the ground, as in Isaac Newton’s famous anecdote; a back-of-the-envelope comparison follows below.) With ordinary objects in the mesoscale, which is where humans belong, the uncertainty principle is irrelevant. Its importance is philosophically significant, because it means that nature is fundamentally non-deterministic: but technology-wise, in most cases it is but a curiosity.
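The comparison promised above, as a rough numerical sketch of my own (not part of the original argument): even if we demanded to know the falling apple’s position to within a micrometre, the quantum blur on its momentum would be roughly twenty-eight orders of magnitude smaller than the momentum itself.

```python
# Back-of-the-envelope arithmetic: Heisenberg gives delta_x * delta_p >= hbar / 2.

hbar = 1.054571817e-34          # reduced Planck constant, in J*s

# Newton's apple: roughly 0.1 kg dropped from 1 m.
m, g, h = 0.1, 9.81, 1.0
energy = m * g * h              # ~1 joule released on impact
speed = (2 * g * h) ** 0.5      # ~4.4 m/s just before impact
momentum = m * speed

delta_x = 1e-6                  # demand position known to one micrometre
delta_p = hbar / (2 * delta_x)  # minimum quantum uncertainty on momentum

print(f"energy released:       {energy:.2f} J")
print(f"apple momentum:        {momentum:.3e} kg*m/s")
print(f"quantum momentum blur: {delta_p:.3e} kg*m/s")
print(f"relative uncertainty:  {delta_p / momentum:.1e}")   # ~1e-28: utterly negligible
```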

4.1 Non-linear adjustments

As happened with relativity and GPS, there may well be cases where the intimate non-linearity of nature (linearity is but a human artefact, a first-order approximation by which we model natural phenomena) pops up with a perceptible practical impact, ceasing to be a mere epistemological issue. It could, for example, become relevant to professionals in areas such as economics, finance, healthcare or business management. In that case, rather than surrendering to a-scientific approaches, applications would need to be adjusted, while waiting for science to come up with a comprehensive and formal treatment of non-linearity.

Econophysics provides one well-documented example of attempted adjustment in response to new findings. Econophysics builds on the suspicion that some of the cornerstones of the dominant economic paradigm, such as the efficient-markets hypothesis and near-equilibrium assumptions, are unsuitable for modelling the macroeconomy, give rise to paradoxes and contradict real-world events. Hence, econophysicists have tried to build entirely new ways to model the economy, such as spin glasses (Anderson et al., 1988), heterogeneous mean-field approximation (Mézard and Montanari, 2009) or thermodynamics (De Laurentis, 2009); but they have also proposed workarounds in the shape of numerical simulations (Macal et al., 2004), or pursued a combination of the two approaches, as happens in CAS and agent-based modelling (Chakraborti et al., 2010; Stiglitz and Gallegati, 2011; Westerhoff, 2004).
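To give a flavour of what ‘agent-based’ means in practice, here is a deliberately minimal herding sketch of my own (not any of the models cited above): agents copy the buy/sell mood of randomly chosen peers, with occasional independent flips, and the price reacts to the aggregate mood. Even such crude local interactions produce swings and lulls that an independent-agents, random-walk model would not.

```python
import random

random.seed(42)
n_agents = 200
moods = [random.choice([-1, 1]) for _ in range(n_agents)]   # -1 sell, +1 buy
price = 100.0

for t in range(20):
    for i in range(n_agents):
        if random.random() < 0.9:
            # imitate a randomly chosen agent (herding)
            moods[i] = moods[random.randrange(n_agents)]
        else:
            # act independently
            moods[i] = random.choice([-1, 1])
    excess_demand = sum(moods) / n_agents
    price *= 1 + 0.05 * excess_demand             # price reacts to aggregate mood
    print(f"t={t:2d}  excess demand={excess_demand:+.2f}  price={price:7.2f}")
```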

One problem, in both econophysics and organisation or business management, is that the scientific approach is not at its best in these domains because they do not lend themselves well to controlled experimentation. Just as one cannot let a big bank fail, or raise a specific country’s inflation by a factor of five, to see what happens, so it is hard to arrange controlled experiments in business. Drawing conclusions from casual empirical observations is not what is meant by controlled experimentation in Galilean science, where

a) side conditions are carefully managed by researchers
b) experiments are run with the purpose of testing theories
c) they can be repeated and verified by others.

In organisation and management research, as in economics, few such controlled experiments are possible, and the consequence is that the discipline tends to be either axiomatic, as economics is (Bouchaud, 2008), or merely speculative, like management theory (McKelvey, 1999b). Not unlike ancient Greek philosophy, where scholars drew conclusions by observing natural events, since organised experimentation was considered ‘monstrous’.

In this writer’s opinion, this is a more severe limitation of the scientific approach to business, and one deserving far more attention and research, than those usually brought up by the mainstream of organisational and management scholars, who criticise the scientific approach for being too linear and/or too structured and/or obsessed with control and man-as-machine Tayloristic views.

5 Uncertainty and forecasting

The business atmosphere is very different from the one between the two world wars, when management science was born, and five-year business plans have long been dismissed in favour of a more dynamic view of strategic planning. Although it is plausible and intuitive that the increased dynamism is a consequence of the high connectivity of the business world, and that this may lead to emerging behaviour (Magrassi, 2010a), it is not a self-evident conclusion that planning of any sort should or could be replaced with chaos theory or unproven numerical simulations of business scenarios.

Rather, according to a consensus of business- and macro-economists, high dynamism and uncertainty require that strategic planning be accompanied by tactical agility in order to adjust to rapidly varying conditions (Davenport and Short, 1990; Hammer, 1990; Sull, 2009; Weill et al., 2002), that measures be taken to contain the interconnectivity driving complexity and risk (Battiston et al., 2011; Haubrich and Lo, 2011), and that tools like business intelligence/analytics and information/predictive markets (the ‘wisdom of crowds’) be used to scan the enterprise ecosystem.

The view, often found in the management literature (Plsek, 2001; Sanders, 1998), that quantitative forecasting should be abandoned because non-linear systems can suffer dramatic changes even when very small ones are applied to their initial conditions (‘unpredictability’) is a typical case of an epistemological fact taken as an urgent real-life issue. For example, although sensitivity to infinitesimal modifications of initial conditions was first proven in meteorology, weather forecasts have steadily improved ever since, and they keep improving.

Modelling the business environment as a complex adaptive system or adopting chaos-theory-derived mathematics are useful intellectual exercises and promising experiments, but still unproven as business planning or forecasting tools: hence, CEOs and entrepreneurs should rather be taught and trained in readily available and effective organisational, financial and software-based measures for facing uncertainty, which they do not know well.

6 Reductionism

Easy to find in the management and organisational literature concentrated on complexity are the abjuration of reductionism and the affirmation of the irreducibility of complex situations to simpler models (Stacey, 1995; Ruhl, 1995; Wood and Caldas, 2001; Donald, 2010; Menkes, 2011; Cravera, 2012). For example, valuation and performance-measurement metrics such as activity-based costing (ABC), balanced scorecard (BSC), economic value added (EVA), return on investment (ROI) or discounted cash flow (DCF) are dismissed as mechanistic and as refusals to face the complexity of the underlying scenarios.

In actuality, simple estimating or measurement tools like those can always be used in two ways. One is the naive, reductionistic belief that isolating one parameter from a complex scenario will always lead to meaningful results. Quite another is the consciousness that interactions between parameters can lead to surprising events. A thorough physician, when she prescribes a drug, is aware that it can have effects on parts of the body apparently unrelated to the one being cured, and is ready to administer another drug simultaneously just to prevent those secondary effects from becoming serious.

In principle, the human body is an organic whole, not reducible to a collection of separate and distinct parts (anatomy can be reductionist, while physiology may not). However, a theory never aims to ‘represent or explain the full complexity’ of a phenomenon (McKelvey, 1999a); rather, it ‘abstracts certain parameters and attempts to describe the phenomenon in terms of just these abstracted parameters’ (Suppe, 1977). Consistently with Plato’s recommendation that ‘before tackling big and difficult problems, the small and easy ones must be solved’ (Sophist 218d), scientific progress has always used imperfect and simplified models of reality, knowing that more accurate and complex ones were possible. This was the case, for example, with medicine, or with logic: for almost thirty centuries formal logic developed on the basis of two-valued propositions – true/false, empty/full, rich/poor and so on – with many-valued logic undeveloped until well into the 1900s; this has not prevented formidable scientific and technological progress, including in mathematics and computation.

Furthermore, the world of business and of human organisations in general is strongly influenced by forces like persuasion, communication, intuition, empathy, leadership, unstructured knowledge, empiricism and luck. Any attempt, reductionist or not, to describe such a heavily unstructured world formally and quantitatively is an over-simplification and is prone to error. ABC, BSC, EVA, ROI, DCF and the like are mere first-order approximations: they need to be complemented by more sophisticated corollaries like multifactor optimisation and/or business-analytics software tools and/or quantitative methods for assessing and managing non-financial (‘intangible’) assets (Cravera, 2012; Lev, 2001; Magrassi, 2005). But even then, descriptions, analyses and measures will be approximate.

6.1 Analysis is not reductionism

The antireductionistic critique is often accompanied by diffidence towards ‘divide et impera’ approaches. However, care should be taken – as it often is not – not to confuse reductionism with analysis: ‘Cartesianism’ is not to be trashed but rather complemented with holistic thinking.

When we analyse, we try to understand the roots of a problem, the components of a system: we always want to know what they are and the laws they are subject to. If we are reductionists, we will assume that knowing the laws of the smallest components (or ultimate causes, for a problem) is sufficient to understand the system; if we are not reductionists, we will assume that components/causes, via mutual interactions, may give rise to phenomena that, in order to be explained, need the formulation of additional laws, because those concerning the components are (not useless but) insufficient.

Emergent complex phenomena are not violations of the microscopic laws: they simply ‘do not appear as logically consequent’ on them (Anderson, 1995). If we wish to proceed holistically, we need both analysis and synthesis, bottom-up and top-down, as neither is sufficient on its own to describe a (complex) system. The knowledge, however partial, of a system’s inner structure is a powerful complement to the black-box approach. For example, if in Poincaré’s three-body problem (Poincaré, 1890) we had no idea of which forces are at work, we would know and understand much less than we actually do: three planets subject to gravitation are a different thing from three electrons subject to electric fields or three companies competing, and ‘a system with three components’ is a wholly insufficient description.

7 Hierarchy, heterarchy, self-organisation

The advent of powerful collaboration technologies has brought to the foreground the powers of distributed organisations, whether business or social. Nowadays it is clear that hierarchical organisations and traditional leadership roles should and can be augmented with elements of heterarchy and cooperation: software and the internet have presented us with suitable enabling technologies.

A good fraction of the complexity-oriented management literature, however, tends to overstate the role of self-organisation, betraying the belief that superior organisational models are waiting to be imported from thermodynamics or from engineering practices such as CASs: these models are referred to as if they had already been tested and were ready to be put into practice in business and management, where they could liberate us from bureaucracy and rigid hierarchy (Griffin, 2002; McElroy, 2003; Morrison, 2011; Nonaka, 1988). Unfortunately, this is not yet true.

A system self-organises if its internal structure varies with no intervention from external agents. For example, if we take the system consisting of N gas molecules and compress it in an autoclave, we are changing its configuration, but we are doing so by exercising the influence of an external agent: pressure. When instead, during turbulence or phase transitions (gas to liquid, liquid to solid, amorphous to crystal, etc.), systems decrease their entropy, that is to say they go from disordered to ordered, then we are facing spontaneous organisation phenomena. There are many physical and chemical instances of self-organisation, sometimes easy to recreate in a laboratory. Stars and galaxies are born as a result of such processes. And there even are applications of self-organisation, two familiar examples being lasers and liquid crystals.

When it comes to business organisations, the potential of cooperation, commons-based peer production (Benkler, 2006; von Hippel, 2005) and, in controlled and limited circumstances, self-organisation is, while often overstated (Magrassi, 2010b), hard to confute. Yet the biggest advances in any of these fields are unlikely to come from non-linear technology, at least for the next ten years and with few exceptions. Furthermore, no one has ever proved that there exists a social or business organisational model, of any kind, that is superior to all others.

8 Emergence (a.k.a. emerging behaviour)

Emergence is the result of interactions between the components of a system: these interactions render the essential non-linearity of all systems apparent. All ‘systems’ and ‘problems’ we encounter in nature are non-linear, and perhaps the best definition of the word system is indeed that of ‘a set of parts that, when acting as a whole, produces effects that the individual parts cannot’ (Minati and Pessa, 2006). Systems only become linear when we model them as such for application purposes, within specified performance or time limits (e.g., hi-fi amplifiers are linear only within the audible range of frequencies).

Examples of physical situations that can be ascribed to emergence, or at least can offer an intuitive grasp of the role of emergence outside of living and social systems, include the following:

• The particles that make up atoms do not have a colour. Protons or electrons are not green or yellow or red, because they do not absorb or emit visible light. Groups of atoms though, i.e., aggregates of those particles, do have colours.

• Many properties of condensed matter (ordinary matter), such as viscosity, friction or elasticity, are extraneous to the composing atoms and molecules. They emerge as properties of large aggregates of molecules.

• Aggregates of atoms, like the ordinary matter that we experience every day, do not seem to obey the laws of quantum mechanics. Quantum decoherence, that is the offsetting of phase angles among the elementary particles in a system, is the phenomenon that causes this, making classical physics emerge out of an underlying quantum world.

• The laws of elementary particles are indifferent to the direction of time. If one reverses the sign of the time variable t in Schrödinger’s equation (conjugating the wave function accordingly), nothing changes in the observable results. That is to say, at the microscopic level nature looks the same whether we go forward or backward in time. This is not what we observe at the mesoscale, where ‘an omelette never returned to being an egg’. The variable we call time can be defined as an effect, not a cause, of increasing entropy: the arrow of time is an emerging property of statistical mechanics.

The very fact that emerging behaviour is a physical feature attests to its importance at the epistemological level. On its grounds, a radical critique of reductionism can be developed, showing that the laws of particle physics are insufficient to explain the behaviour of aggregates of electrons or atoms, just as those of chemistry are not enough to explain the behaviour of molecular aggregates, and that at each geometrical level of nature (quark, neutron, nucleus, atom, molecule, virus, living cell, animal, group of people, etc.) new sets of laws may appear that, while compatible with the lower-level ones, introduce new knowledge.

8.1 Reductionism and emergence in the scientific mainstream

The notion of emergence in nature, however, was never communicated effectively outside the community of solid-state physics, despite P.W. Anderson gaining a Nobel Prize in 1977 for related work. The news that usually makes it out of the world of physics concerns the two extreme fields of elementary particles and astrophysics, because of the grandiose scale, the cost and the media impact of projects such as super-accelerators or spacecraft. News from other sub-domains of physical research rarely reaches the attention of social scientists or even life scientists.

The mesoscale is the geometrical level of matter where neither age is much relevant (as it is for galaxies) nor is it useful to regard structures as groups of elementary particles (as in an atom), because these are far too many and statistical means or higher-level laws become necessary. This sub-domain of physics has always been a hotbed for powerful applications, such as X-rays, transistors or lasers, but it was never regarded as a source of better explanations of nature in the way subatomic physics or astrophysics were.

There even exist significant reductionist ‘pockets of resistance’ within the particle physics community: i.e., there are scientists who do not rule out the possibility that a wide-ranging ‘elementary’ theory might one day explain all natural phenomena, including those which we now call ‘complex’.

It is therefore not surprising that the consciousness of emergence as a physical phenomenon has not yet made it to the mainstream of complexity studies outside of physics. This still creates the impression, across the social sciences, that emergence is a phenomenon limited to aggregates of living organisms and animal organisations and, sometimes, it resuscitates the vitalistic belief that the laws of the living contradict, rather than augment, those of physics.

9 Discussion and conclusions

Management and organisation research has been too quick in espousing the complexity challenge and in trying to adapt it to business and managerial domains. In doing so, it has overlooked a number of factors (we refer to the mainstream and bulk of the research; exceptions exist):

• Business complexity is not always the same as non-linear complexity. Globalisation, ever-accelerating innovation, hyperconnectivity of businesses and consumers, collaboration, quasi-pulviscular enterprise ecosystems, increasing uncertainty: each of these factors may be a driver of non-linearity but deserves, on its own merits, study and care. And in fact it is not infrequent to hear one of these factors labelled as ‘complexity’, as e.g. in IBM (2010): often, this has nothing to do with non-linearity, emerging behaviour, chaos, etc.

• Epistemological issues do not necessarily have practical, noticeable consequences, and not all theoretical impossibilities translate into practical impotence. Furthermore, while nature is inherently complex, and a wide-ranging effort is needed to accommodate non-linearity in our theoretical models (a ‘new scientific paradigm’), this does not mean that systems cannot, in many circumstances, be modelled as linear in a first approximation: good-enough applications in many technological fields have been derived along that line. Technology, and science itself, is always the result of some simplification.

• Planning and forecasting, whether statistical or deterministic, have not become impossible or meaningless activities. They give useful results in many if not most situations, provided of course that they are used professionally and with awareness of their limitations. These limitations are not just the consequences of reductionism or the effects of deterministic chaos: more often they stem from the shallowness of our practices, such as when we isolate and optimise individual factors knowing that a multiplicity of interacting ones exist, or when we pick poor statistical models among the many available just for the sake of quick results (witness the discussion of Gaussian vs. leptokurtic, heavy-tailed Pareto distributions).

• Despite its investiture as a serious field of study in the 1960s, the complexity challenge has yet to become fully mainstream even in the natural sciences.

• The effectiveness of technologies such as CASs is still largely unproven in the management domain. While the approach seems promising, practical and reusable results are missing.

Finally, the receptivity of the audience ought to be taken into account. It does not seem reasonable to instil notions of non-linear dynamical systems or non-equilibrium thermodynamics in professionals, managers and entrepreneurs who are not yet well versed in systems theory, linearity, probability, statistics and logic. Complexity reasoning and related experiments (based, e.g., on CASs) should be limited to selected areas of medicine, finance and sophisticated engineering fields where the linear simplification is threadbare.

On the other hand, it is not obvious that, in complex domains, cause-and-effect reasoning and linear modelling should be abandoned and replaced with mere simulations: it seems plausible that a synergistic integration of the two approaches should be sought. Specifically, while the scientific approach to the very unstructured world of business is certainly insufficient, it is not obvious that a model-based quantitative approach to management theory and practice is wrong, ineffective or outdated.

References

  • Anderson, P.W. (1972) ‘More is different’, Science, New Series, Vol. 177, No. 4047, pp.393–396.
  • Anderson, P.W., Arrow, K. and Pines, D. (1988) The Economy as an Evolving Complex System, Addison-Wesley, Redwood City, California.
  • Anderson, P.W. (1995) ‘Physics: the opening to complexity’, Colloquium Paper, Proceedings of the National Academy of Science, July, Vol. 92, p.6653, USA.
  • Anderson, P. (1999) ‘Applications of complexity theory to organisation science’, Organisation Science, Vol. 10, No. 3, pp.216–232.
  • Anderson, R.A. and McDaniel, R.R. (2000) ‘Managing health care organisations: where professionalism meets complexity science’, Health Care Management Review, Vol. 25, No. 1.
  • Arecchi, F.T. (2004) Caos e complessità nel vivente, Multimedia Cardano, pag. 67.
  • Battiston, S., Delli Gatti, D., Gallegati, M., Greenwald, B. and Stiglitz, J.E. (2011) ‘Default cascades: when does risk diversification increase stability?’, ETH Risk Center, Working paper Series, ETH-RC-11-006.
  • Benkler, Y. (2006) The Wealth of Networks, Yale University Press, New Haven, CT.
  • Bouchaud, J.P. (2008) ‘Economics needs a scientific revolution’, Nature, Vol. 455, p.1181.
  • Brown, S. and Eisenhardt, K. (1997) ‘The art of continuous change: linking complexity theory and time-paced evolution in relentlessly shifting organisations’, Administrative Science Quarterly, Vol. 42, No. 1.
  • Burnes, B. (2004) ‘Kurt Lewin and complexity theories: back to the future?’, Journal of Change Management, Vol. 4, No. 4.
  • Cravera, A. (2012) ‘The negentropic role of redundancy in the processes of value creation and extraction and in the development of competitiveness’, Emergence: Complexity and Organization, Vol. 14, No. 2.
  • Chakraborti, A., Muni Toke, I., Patriarca, M. and Abergel, F. (2010) ‘Econophysics review II. Agent-based models’, Quantitative Finance, Vol. 11, No. 7.
  • Davenport, T.H. and Short, J.E. (1990) ‘The new industrial engineering: information technology and business process redesign’, Sloan Management Review, Vol. 31 No. 4.
  • De Laurentis, G. (2009) ‘Ontologia applicata ai mercati finanziari’, [online] http://www.matematicamente.it (accessed December 2012).
  • Donald, N. (2010) ‘Systems thinking, complexity theory and transnational management’, Otago Management Graduate Review, Vol. 8
  • Griffin, D. (2002) The Emergence of Leadership: Linking Self-organisation and Ethics, Routledge, New York.
  • Feynman, R.P., Leighton, R. and Sands, M. (1964) The Feynman Lectures on Physics, Addison-Wesley, Vol. 1, pp.25–33.
  • Gardner, M. (1970) ‘The fantastic combinations of John Conway’s new solitaire game «life»’, Scientific American, No. 223, p.120.
  • Grobman, G.M. (2005) ‘Complexity theory: a new way to look at organisational change’, Public Administration Quarterly.
  • Hammer, M. (1990) ‘Reengineering work: don’t automate, obliterate’, Harvard Business Review, July.
  • Haubrich, J.G. and Lo, A. (2012) Quantifying Systemic Risk, US National Bureau of Economic Research, [online] http://www.nber.org/books/haub10-1 (accessed December 2012).
  • Holland, J. (1975) Adaptation in Natural and Artificial Systems, University of Michigan Press.
  • Homer, S. and Selman, A.L. (2011) Computability and Complexity Theory, Springer, ISBN: 978-1-4614-0682-2 (Online).
  • Hughes, O.E. (2003) Public Management and Administration, 3rd ed., Palgrave, London.
  • IBM (2010) Capitalizing on Complexity, Global CEO study.
  • Jones, N.D. (1997) Computability and Complexity, MIT Press, Cambridge, Mass.
  • Kiel, D. (1994) Managing Chaos and Complexity in Government, Jossey-Bass, p.4.
  • Kuhn, T. (1962) The Structure of Scientific Revolutions, Univ. of Chicago Press.
  • Lev, B. (2001) Intangibles: Management, Measurement, and Reporting, Brookings Institution Press.
  • Lorenz, E. (1963) ‘Deterministic non-periodic flow’, Journal of the Atmospheric Sciences, Vol. 20, pp.130–141.
  • Macal, C.M. et al. (2004) ‘Modeling the restructured Illinois electricity market as a complex adaptive system’, 24th Annual North American Conference of the USAEE/IAEE, Energy, Environment and Economics in a New Era, 8–10 July, Washington, DC.
  • Magrassi, P. (2005) ‘Assessing the business impact of IT in the knowledge economy’, in Proceedings of the 12th European Conference on IT Evaluation, Academic Publishing Ltd., p.307.
  • Magrassi, P. (2010a) ‘How non-linearity will transform information systems’, Proceedings of the 4th European Conference on Information Management, Lisbon, 9–10 September, also in arXiv:1208.5316, 23 August 2012.
  • Magrassi, P. (2010b) ‘Free and open source software is not an emerging property but rather the result of studied design’, Proceedings of the 7th International Conference on Intellectual Capital, Knowledge Management and Organizational Learning, Hong Kong Polytechnic, Hong-Kong, China, 11 November. Also on arXiv:1012.5625, 27 Dec 2010.
  • Maguire, S. and McKelvey, B. (1999) ‘Complexity and management: moving from fad to firm foundations’, Emergence, Vol. 1, No. 2.
  • McElroy, M.W. (2003) The New Knowledge Management, Butterworth Heineman, p.47.
  • McKelvey, B. (1999a) ‘Complexity theory in organisation science: seizing the promise or becoming a fad?’, Emergence, Vol. 1, No. 1, p.15.
  • McKelvey, B. (1999b) ‘Complexity theory in organisation science: seizing the promise or becoming a fad?’, Emergence, Vol. 1, No. 1, p.21.
  • McMillan, E. (2008) Complexity, Management and the Dynamics of Change: Challenges for Practice, Routledge, p.32.
  • Menkes, J. (2011) ‘Management thinking may be blinding leadership’, Harvard Business Review, Blog, 8 June.
  • Mézard, M. and Montanari, A. (2009) Information, Physics & Computation, Oxford University Press, Oxford, UK.
  • Minati, G. and Pessa, E. (2006) Collective Beings, p.23, Springer, New York.
  • Mitleton-Kelly, E. (2003) Complex Systems and Evolutionary Perspectives on Organisations: The Application of Complexity Theory to Organisations, Elsevier, ISBN 0-08-043957-8.
  • Morin, E. (1977–2004), La Méthode, Six-volume box by Seuil Opus, published 2008.
  • Morrison, K. (2011) ‘Leadership for self-organisation: complexity theory and communicative action’, International Journal of Complexity in Leadership and Management, Vol. 1, No. 2, p.145.
  • Nonaka, I. (1988) ‘Creating organisational order out of chaos: self-renewal in Japanese firms’, California Management Review, Spring.
  • Nunes Amaral, L.A. and Uzzi, B. (2007) ‘Complex systems – a new paradigm for the integrative study of management, physical, and technological systems’, Management Science, Vol. 53, No. 7.
  • Peroff, N.C. (1999) ‘Is management an art or a science: a clue in consilience’, Emergence – A Journal of Complexity Issues in organisations and Management, The New England Complex Systems Institute, Vol. 1, No. 1, p.100–102.
  • Plsek, P.E. (2001) ‘Redesigning health-care with insight from the science of complex adaptive systems’, in IOM Committee of Quality of Health-Care in America, Crossing the Quality Chasm: A New Health System for the 21st Century, National Academy Press, Washington.
  • Poincaré, H. (1890) ‘Sur le problème des trois corps et les équations de la dynamique’, Acta Mathematica, Vol. 13, pp.1–270.
  • Ruhl, J.B. (1995) ‘Complexity theory as a paradigm for the dynamical law-and-society system: a wake-up call for legal reductionism and the modern administrative state’, Duke Law Journal, Vol. 45, No. 45, p.851.
  • Sanders, T.I. (1998) Strategic Thinking and the New Science: Planning in the Midst of Chaos, Complexity and Change, Free Press, New York, London.
  • Stacey, R.D. (1995) ‘The science of complexity: an alternative perspective for strategic change processes’, Strategic Management Journal, Vol. 16.
  • Stiglitz, J.E. and Gallegati, M. (2011) ‘Heterogeneous interacting agent models for understanding monetary economies’, Eastern Economics Journal, No. 37.
  • Sull, D. (2009) ‘How to thrive in turbulent markets’, Harvard Business Review, February.
  • Suppe, F. (1977) The Structure of Scientific Theories, University of Chicago Press, p.223, quoted in McKelvey, B. (1999).
  • Varela, F. and Maturana, H. (1980) Autopoiesis and Cognition: The Realization of the Living, Reidel, Boston.
  • von Foerster, H. and Pörksen, B. (2002) Understanding Systems: Conversations on Epistemology and Ethics, Kluwer Academic/Plenum Publishers, translated by Karen Leube from Wahrheit ist die Erfindung eines Lügners: Gespräche für Skeptiker, Carl-Auer-Systeme Verlag, 1998.
  • von Hippel, E. (2005) Democratizing Innovation, MIT Press, Cambridge, MA; London.
  • Weaver, W. (1948) ‘Science and complexity’, American Scientist, Vol. 36, p.536.
  • Weill, P., Subramani, M. and Broadbent, M. (2002) ‘Building IT infrastructure for strategic agility’, Sloan Management Review, Fall.
  • Westerhoff, F. (2004) ‘The effectiveness of Keynes-Tobin transaction taxes when heterogeneous agents can trade in different markets: a behavioural finance approach’, Scientific Commons.
  • Wood, T. and Caldas M.P. (2001) ‘Reductionism and complex thinking during ERP implementations’, Business Process Management Journal, Vol. 7, No. 5, pp.387–393.

 

FYI, the discussion of Gaussian vs. other forms of stable statistical distributions was not born in a recent best-selling book.

It was brought up by Vilfredo Pareto a century ago and then again, in specific relation to finance, by Benoît Mandelbrot in the 1960s, when he suggested that markets do not follow a normal (Gaussian) distribution but rather a leptokurtic, heavy-tailed Pareto one.

This is what happens when the constraint of finite variance on the participating stochastic variables (e.g., securities prices, individual risks, et cetera) is relaxed: the Central Limit Theorem then no longer yields a Gaussian but other stable distributions (although not necessarily a “power law”, as popular versions of the theory would have it).
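A small numerical illustration of the point (a sketch of mine, not Mandelbrot’s own analysis): sample “returns” from a Gaussian and from a heavy-tailed Student-t distribution rescaled to the same variance, and count how often a “5-sigma” move occurs. The Student-t is chosen purely for convenience; it is not itself a Lévy-stable law, but it is leptokurtic in the way the distributions Mandelbrot had in mind are. Under Gaussian assumptions such moves are virtually impossible; under fat tails they keep happening.

```python
import random

random.seed(1)
n = 500_000

def standardised_t3():
    # Student-t with 3 degrees of freedom, rescaled to unit variance
    # (its raw standard deviation is sqrt(3)); its tails decay polynomially.
    z = random.gauss(0, 1)
    s = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return (z / (s / 3) ** 0.5) / 3 ** 0.5

gauss_extremes = sum(abs(random.gauss(0, 1)) > 5 for _ in range(n))
fat_extremes = sum(abs(standardised_t3()) > 5 for _ in range(n))

print(f"Gaussian     |x| > 5 std devs: {gauss_extremes:6d} out of {n:,}")
print(f"Student-t(3) |x| > 5 std devs: {fat_extremes:6d} out of {n:,}")
```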

If Mandelbrot is right, the current forecasting models (e.g. those for risk assessment, which failed in 2008) are wrong, and it is little surprise that they are not very good at anticipating even highly impactful events.