If I am wrong, please let me know…

Haptic experience more significant than sound.

See this.

Today, we could build a very smooth circle, such as a bearing for nanomachines, using a scanning force microscope, a device that can move individual atoms. We could patiently shape the outer surface of the bearing for hours with our high-tech tool. But in the end, what we would have is still a relatively rough contour: optical microscopes would show a perfect circle, but another scanning force microscope would reveal the trick.

The fact of the matter is that perfect figures, such as circles, triangles or squares, only exist in geometry books. They do not belong in this world: **they are idealizations of reality, archetypes, models**.

Does that mean that geometry is a foolish thing, or a useless pastime, like sudoku?

Uhm, not really. Until they learned basic geometry, such as the Pythagorean theorem and trigonometry, humans could not reliably compute distances, build calendars, divide up land, sail ships in unknown seas or develop firearms. The first prosperous, technology-based societies of Mesopotamia, India or China were born after geometrical and mathematical knowledge had been accumulated and formalized.

If you want to measure a field of rectangular shape, you will take its length, then its width, and multiply them. A center-pivot irrigation field has the shape of a circle: its area is equal to π times the square of the radius. If a grain silo is conical, its volume is computed as one-third the height times the base area. Neither fields nor silos are perfect geometric figures: no exact circles or right angles. **But the formulae still hold**. If you have a square and multiply the length of the side by itself, you get the area of the surface, irrespective of whether the square at hand is a wheat field or an idealized figure in a Euclidean schoolbook.

The only thing that really matters is the **precision** you require. If what you have is a real-world wheat field and you take more and more precise measurements of the side thanks to improving metering technology, then you will get more and more accurate measurements of area; but the process never ends unless you decide that a certain precision is enough (and crop fields are measured in acres, not square inches, anyway).
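The point can be made concrete with a short sketch. The numbers below are illustrative, not from the text: a 100 m side measured first with a coarse tape, then with a finer instrument, and the resulting bounds on the computed area.

```python
# Sketch: how the precision of a side measurement limits the precision
# of the computed area of a square field.

def square_area_bounds(side, tolerance):
    """Return (low, high) bounds on the area of a square whose side
    is known only to within +/- tolerance (same length unit)."""
    low = (side - tolerance) ** 2
    high = (side + tolerance) ** 2
    return low, high

# A 100 m side measured with a 1 m tape, then with a 1 cm laser meter.
for tol in (1.0, 0.01):
    low, high = square_area_bounds(100.0, tol)
    print(f"tolerance {tol} m -> area between {low:.2f} and {high:.2f} m^2")
```

Better metering narrows the bounds, but never to a point: the method (side times side) is the same at every precision.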

And, more importantly, the process, the method, the algorithm is the same: side times side equals area, or π times the diameter equals the circumference, irrespective of **whether the object is ideal or real**.

That is why geometry and mathematics have developed so much. They seem to deal with idealized, esoteric things and Platonic ideas, but the methods we learn and devise within them **are good in reality as well**: they do have real-life applications.

(In the past two centuries, mankind has developed a whole lot of mathematical concepts that, unlike idealized circles or triangles, seem not to bear any connection whatsoever with the physical world. But that is not a problem. To begin with, every now and then we realize that this or that abstruse piece of math has a useful application; secondly, even “useless” mathematics teaches scholars new tricks of the trade. And, last but not least, mathematics is beautiful!).

When applied in the real world, as we saw, the rules of mathematics may give rise to some errors. **Enter applied mathematicians and engineers**. These people play a slightly different role than pure mathematicians: they know the tricks of applying math rules to real objects. Engineers never deal with perfect circles, ideal straight lines or truly square angles.

Sometimes, imperfections do not bother them; sometimes they do. The former case corresponds to situations where the differences between ideal triangles or circles and their real-world counterparts do not matter, like when measuring fields in acres.

Or take quantum mechanics: we know that Nature is quantum. However, in many meso-scale situations (neither extremely small nor extremely large), we ignore quantum mechanics and continue designing applications in accordance with good old Newton–Laplace mechanics, even though the numerical results we get may be slightly imperfect.

Same story with Einstein’s Relativity. We know that Nature is relativistic. However, as long as things move at speeds much smaller than light’s and do not travel intergalactic distances, engineers ignore relativistic effects because they are negligible for most practical purposes. Formula One cars, aircraft and super-tall skyscrapers are built with this approach. GPS systems in our cars and smartphones, on the other hand, are equipped with hardware and software that take Relativity into account; otherwise they would come up with significantly wrong distances. Here, then, is one example of a situation where engineers are not happy with the classical, non-relativistic approximation: a case where, in a sense, the difference between a real and an ideal triangle is important.
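As a rough check on the GPS claim, here is a back-of-the-envelope estimate. The constants are standard textbook values; the simple two-term model (special-relativistic slowdown plus gravitational speedup) is an approximation, not a figure from this text:

```python
# Back-of-the-envelope: why GPS cannot ignore Relativity.
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.656e7     # approximate GPS orbital radius, m
day = 86400.0         # seconds in a day

v = math.sqrt(GM / r_orbit)                  # orbital speed, ~3.9 km/s
sr_loss = (v**2 / (2 * c**2)) * day          # moving clock runs slow
gr_gain = GM * (1/R_earth - 1/r_orbit) / c**2 * day  # weaker gravity: clock runs fast

net = gr_gain - sr_loss
print(f"SR slowdown: {sr_loss * 1e6:.1f} microseconds/day")
print(f"GR speedup : {gr_gain * 1e6:.1f} microseconds/day")
print(f"net drift  : {net * 1e6:.1f} microseconds/day")
```

The net drift comes out to roughly 38 microseconds per day; since light covers about 300 metres per microsecond, an uncorrected receiver would accumulate kilometres of ranging error daily.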

Geometry itself, the world of **idealized shapes**, is more complicated than we assumed above. Circles, triangles and squares only approximately obey the ordinary rules that we learned at school, because they actually belong in a non-Euclidean world, where straight parallels cross, the shortest path is not a straight line, and what you see is not what you get.

Whenever we say that area equals base times height, we are **approximating the truth** somewhat, assuming a right angle where there really is none and a straight line where there is a curve. The geometry that 99.9% of people know and use, and with which the Great Pyramid of Giza or the Burj Khalifa tower in Dubai have been built, **is slightly wrong, both in principle and in practice**!

The point I believe I have made is simple: technology is grounded on approximations. It is never exact, never infinitely precise. Real-world scientific applications all live within the limits of their tolerable precision, which is deemed sufficient until proven otherwise. They are sufficiently precise, good enough. (In fact, this occurs even in theoretical science, where being as precise as possible matters more: we do not know with infinite precision the value of π, or the electric charge of the electron, or its mass).

Examples of situations where theory and practice (i.e. applications, technology) diverge include:

- Euclidean Geometry, as discussed above. In principle, straight lines do not exist, parallels cross, and so on. This has had, and is having, a profound influence on mathematics as well as on physical models of Nature: but it does not disturb the nights of any engineer, and 99.999% of the advanced technology we have, from DNA to supercomputers to exotic financial products, would be exactly the same even if we knew nothing about hyperbolic and elliptic geometries;
- Dimensions of the Universe. Well into the 19th century it was believed that the “real world” had three dimensions. This grew to four with the added time dimension of Relativity. It is now up to eleven or so in string theory, and the discussion is very lively: the number changes as physicists hold their major congresses, and it jumps up and down weekly on arXiv.org. This is an extremely important fundamental scientific discussion: but it has zero impact on daily life and technology;
- Theory of Relativity. In the vast majority of everyday situations, including those involving sophisticated technologies, we do not worry about the consequences of Relativity, because the objects we deal with do not move at speeds approaching that of light or travel intergalactic distances. The effects of Relativity, such as the contraction of a traveling object’s length or the dilation of a traveler’s time, only become significant under those circumstances, and are therefore negligible in most mesoscale situations. On the other hand, as we discussed already, there also exist common situations where we had better take Relativity into account: for example, GPS devices;
- Quantum Mechanics. One of the basic principles of physics, the Heisenberg uncertainty principle, implies that it is impossible to measure the present position of an object while simultaneously determining its future motion: if we know *exactly* where it is, then we will not know where it is going, while if we know exactly how it is moving, then we cannot tell *exactly* where it is. The object in question, however, must be so small (an electron or a proton, say) as to require especially accurate measuring apparatus. The same problem **does still occur, but is totally negligible** and uninteresting, when the action of the objects being observed is far larger than Planck’s constant, a tiny quantity equal roughly to a millionth of a billionth of a billionth of a billionth of a joule·second. (One joule is the energy released by dropping a small apple from one metre onto the ground, as in Isaac Newton’s famous anecdote). With ordinary objects in the mesoscale, which is where humans belong, the uncertainty principle is irrelevant. Its philosophical importance is great, because it means that Nature is fundamentally uncertain and indeterministic: but technology-wise, in most cases it is but a curiosity.
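The negligibility claim is easy to quantify. The sketch below uses the standard lower bound Δx·Δp ≥ ħ/2 to compare an electron confined to atomic scale with a 0.1 kg apple localized to a micrometre (the objects and scales are illustrative choices, not from the text):

```python
# Comparing the Heisenberg bound for an electron and for an apple.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg, position_uncertainty_m):
    """Minimum velocity uncertainty implied by dx * dp >= hbar / 2."""
    return HBAR / (2 * mass_kg * position_uncertainty_m)

# Electron (9.1e-31 kg) localized to the size of an atom (~1e-10 m):
dv_electron = min_velocity_uncertainty(9.109e-31, 1e-10)
# A 0.1 kg apple localized to a micrometre:
dv_apple = min_velocity_uncertainty(0.1, 1e-6)

print(f"electron: dv >= {dv_electron:.3e} m/s")  # hundreds of km/s: dominant
print(f"apple   : dv >= {dv_apple:.3e} m/s")     # ~1e-27 m/s: utterly negligible
```

The electron’s minimum velocity uncertainty is hundreds of kilometres per second, while the apple’s is some twenty-seven orders of magnitude below anything measurable, which is exactly why mesoscale engineering can ignore the principle.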

As happened with Relativity and GPS, there may well be cases when the intimate non-linearity of Nature pops up and ceases to be merely a sophisticated epistemological matter. It could, for example, become relevant to the life of professionals and managers: witness the discussion in **econophysics**. However, this has not happened yet. Or, if it has, we do not know: no one has yet presented a credible account of how non-linearity is affecting businesses and professions.

Pop complexity pundits **confuse the epistemological issues with the practical consequences**.

They hear that someone has proved that a butterfly flapping its wings could cause a tornado ten thousand miles away? Well, they will assume that to be a dominant phenomenon in meteorology.

Equally, they assume that getting “surprises” from complexity (such as, e.g., weird emergent behavior) is the norm, and forget that *most* surprises come simply from plain ignorance.

**PAOLO MAGRASSI, 2011 CREATIVE COMMONS NON-COMMERCIAL – NO DERIVATIVE WORKS**

It goes like this.

The properties of a **linear** system are additive: the effect of a collection of elements is the sum of the effects when they are considered separately, and overall no new properties appear that are not already present in the individual elements. But if there are elements/parts that are combined and depend on one another (nonlinearity), then the whole is different from the sum of the parts, and new effects start to appear.

In other words: Since a complex system, by definition, does not obey the superposition principle, its behavior as a whole does not reflect that of the composing elements. The system’s response R to the simultaneous application of stimuli S1… Sn is different from the sum of the individual responses to each stimulus when applied in sequence, R1+…+Rn.

However, this in no way implies that the systemic response must be larger or smaller than the sum of the individual responses: it can be either, depending on whether positive or negative feedback takes place. Or **it could be numerically equal, while still remaining logically different**. (For that matter, the assumption that an *additive* property is implied in all cases is arbitrary).
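The superposition test described above can be written down in a few lines. The two toy systems below are hypothetical illustrations: one obeys superposition, the other has a quadratic (feedback-like) term that breaks it.

```python
# A system is linear iff its response to S1 + S2 equals the sum of
# its responses R1 + R2 to the stimuli applied separately.

def linear_system(s):
    return 3.0 * s            # pure scaling: obeys superposition

def nonlinear_system(s):
    return 3.0 * s + s ** 2   # quadratic term breaks superposition

def obeys_superposition(system, s1, s2):
    """Check R(S1 + S2) == R(S1) + R(S2) up to rounding error."""
    return abs(system(s1 + s2) - (system(s1) + system(s2))) < 1e-12

print(obeys_superposition(linear_system, 2.0, 5.0))     # True
print(obeys_superposition(nonlinear_system, 2.0, 5.0))  # False
```

Note that for particular stimuli the nonlinear response may coincide numerically with the sum of the parts, which is precisely the point above: numerical equality does not make the system logically additive.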

You are encouraged to use the synergy-metaphor test to tell the wheat from the chaff (Matthew 3:12) in complexity literature.


Technically a meaningless term by now, *smartphone* is used as a marketing buzzword to make consumers feel *smart* as they keep employing their financial resources and personal time to consume online.

The objects of this consumption are hardware gadgets, connection time, apps (mostly designed for those who cannot use the web), online entertainment and, especially, in-app purchasing, one of the killer marketing applications of the 2000s, first popularized by Apple.

In the process, people also consume most of their cognitive bandwidth, which, consistent with what Jonathan Zittrain anticipated, is directed at playing the games conceived by astute marketers, and almost never at expanding one’s competence.

As with most digital technologies, one to five percent of people leverage smartphones to gain power and/or expand knowledge, while the rest are but blind consumers. And the consumer is “a prey in the Supranet jungle”…

Mastery of technology, whether digital, financial, biotech or materials-based, is what generates the increasing income inequality observed worldwide. Take a look at the portion of people who can use technologies (instead of just being used through them), and you get a proxy for the portion of people who are getting richer and richer.

[Written on the day that “smartphone sales surpassed feature phones”, whatever that means]

Since economics is **not** a Galilean science, it all boils down to mere intellectual, axiomatic speculation.

Hence, I can only tolerate, and indeed admire, scholars (Nobel laureates included) who speak to me with that awareness, and discuss economics the way philosophy or mathematics can be discussed. And I particularly like and support those who work towards providing an **experimental** base for economic research.

All the others, however cloaked in formulas, and while often intelligent people, mostly appear to me as funny clowns.

The authors believe they have proven that their graphical approach (GA) leads to the isolation of a minimum set of sensors (i.e., a subset of the system’s internal variables) necessary and sufficient to describe the dynamics of a complex system.

For linear dynamical systems, the minimum sensor set derived from the GA would only be necessary and not sufficient. But for nonlinear dynamical systems the GA sensor set is also sufficient. According to the authors, this stems from the fact that, unlike linear systems, nonlinear systems contain zero or almost zero symmetries in the state variables.

Any symmetries in the state variables that leave the inputs, outputs, and all their derivatives invariant make the system unobservable (i.e., you can’t look at outputs and say something positive about the system’s state): a dynamical system with internal symmetries can have an infinite number of temporal trajectories that cannot be distinguished from each other by monitoring outputs.

A complex system, on the other hand, is more essential: it has a personality (no symmetries), and this is why its behavior can be captured by a subset of the internal variables, i.e., by monitoring only some outputs.

The paper does not offer a rigorous proof of the sufficiency of the GA-selected sensors. The authors have simply run about 1,000 numerical simulations in several complex domains (such as Michaelis–Menten, Lotka–Volterra and Hindmarsh–Rose) and found the GA-selected subset to be a sufficient descriptor.

The graphical approach **reduces observability (a *dynamical* problem) to a property of the static map of an inference diagram**: and such maps are available for an increasing number of complex problems, like the three mentioned above.

The graph is obtained as follows.

As in the life-sciences example offered in the paper, consider a number of chemical substances

A, B, C, D, …

some of which react with each other. Reactions will then be of the kind

A+B+C –> D+F+J

D <–> E

and so on. You may therefore write, using mass-action kinetics, balance equations representing all reactions: the equations will contain the substances’ concentrations as variables (xA, xB, xC, xD, …) and a number of rate constants k1, k2, …, as many as there are reactions.
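As an illustration of such balance equations (the reactions, rate constants and function name below are hypothetical, not taken from the paper), mass-action kinetics turns each reaction into rate terms in the concentrations’ derivatives:

```python
# Hypothetical mass-action balance equations for the toy reactions
#   A + B -> C   (rate constant k1)
#   C -> A       (rate constant k2)

def balance_equations(x, k1=0.5, k2=0.1):
    """Return (d[A]/dt, d[B]/dt, d[C]/dt) for concentrations x = (xA, xB, xC)."""
    xA, xB, xC = x
    r1 = k1 * xA * xB   # rate of A + B -> C
    r2 = k2 * xC        # rate of C -> A
    dxA = -r1 + r2      # A consumed by reaction 1, produced by reaction 2
    dxB = -r1           # B consumed by reaction 1
    dxC = r1 - r2       # C produced by reaction 1, consumed by reaction 2
    return dxA, dxB, dxC

print(balance_equations((1.0, 2.0, 0.5)))
```

Each equation’s right-hand side mentions the concentrations that influence that variable, and it is exactly these mentions that become the directed links of the inference diagram.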

From there, an inference diagram is built by drawing a directed link

xi –> xj

if *xj* appears in the right-hand side of *xi*’s balance equation.

Then, strongly connected components (SCCs) are identified: the largest subgraphs such that there is a directed path from each node to every other node in the subgraph. Among these, “root” SCCs are those that have no incoming edges. At least one node is chosen from each root SCC, to ensure observability of the whole system.
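The recipe above can be sketched in a few lines of Python. The toy inference diagram and node names below are hypothetical illustrations of the SCC/root-SCC selection step, not the authors’ actual code:

```python
# Find SCCs of an inference diagram, keep the "root" SCCs (no incoming
# edges from other SCCs), and pick one sensor node from each.

def tarjan_sccs(graph):
    """Tarjan's algorithm; graph maps node -> list of successor nodes."""
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop(); on_stack.discard(w); scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def root_sccs(graph):
    """SCCs that receive no edge from any other SCC."""
    sccs = tarjan_sccs(graph)
    comp_of = {v: i for i, scc in enumerate(sccs) for v in scc}
    has_incoming = set()
    for v, succs in graph.items():
        for w in succs:
            if comp_of[v] != comp_of[w]:
                has_incoming.add(comp_of[w])
    return [scc for i, scc in enumerate(sccs) if i not in has_incoming]

# Toy inference diagram: the cycle {xA, xB} drives xC, which drives xD.
g = {"xA": ["xB", "xC"], "xB": ["xA"], "xC": ["xD"], "xD": []}
sensors = [sorted(scc)[0] for scc in root_sccs(g)]
print(sensors)  # one sensor taken from the only root SCC, {xA, xB}
```

In the toy graph, monitoring a single node of the root SCC {xA, xB} suffices under this criterion, since every other variable lies downstream of it.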

These findings are likely to benefit various domains of public interest, such as medicine or economics and other social sciences.

There also are several lessons here for pop-complexity fans to learn: e.g., complexity can be managed, and it can be done using a scientific instead of a fideistic or animistic approach.

**Paolo Magrassi 2013 Creative Commons Attribution-Non-Commercial-Share Alike**

I am a bit surprised that, in the book presentation, Microsoft Research labels data-intensive science as «emerging» though.

Bioinformatics emerged 20 years ago, and its underrated logical foundation, agent-based modelling, came up in the early Seventies.

Even large chunks of mathematical research have been data-intensive and computer-driven for decades (e.g., the Great Internet Mersenne Prime Search project is 17 years old).

More reasons to read the book then…

Ontonix is leading this field. While I do not necessarily espouse all their views or agree 100% with their methodology, it is easy to recognize that they are miles ahead of the clumsy talk surrounding “complexity” in its various forms. At Ontonix, complexity is not blah-blah: it is a measurable feature of any system.

Ever since I first reviewed them years ago, their tools have progressed enormously and many are available online for free trial. Just read founder Jacek Marczyk’s commentary on the *Dreamliner* woes here and get a grasp.