Mary Tsingou circa 1955

“Every writer creates his own precursors,” said Borges. And it doesn’t just happen in art: it also happens in science.

In the field of nonlinearity, known in popular literature under the ambiguous term ‘complexity’, almost no one grasped the scope of Poincaré’s work on the three-body problem until after Lorenz in 1963. And before the advent of ‘chaos theory’ in 1977, the Fermi-Pasta-Ulam problem never left a small circle of specialists.

The extent of nonlinearity in the natural sciences could not be discovered without computers: Poincaré himself, at the end of the nineteenth century, was perplexed by his results and could not go further because of the inhuman calculation effort they implied. The encounter between computers and the natural sciences was provoked by Enrico Fermi in the early fifties.

To explore his visions, he had no trouble involving Stan Ulam, a brilliant Polish mathematician and polymath who, like Fermi, had emigrated to America because of racial laws, and John Pasta, a physicist in his thirties who was becoming one of the pioneers of computer science under the guidance of giants like Nick Metropolis, the man who at Fermi’s suggestion created the Monte Carlo method and who was the architect of the MANIAC computer, conceived with John von Neumann.

The Fermi-Pasta-Ulam paradox was being born, the result of a computer-simulated mechanical experiment poised to keep physicists busy for over half a century.

The simulation was made possible by a program written in machine language for the MANIAC by Mary Tsingou, a mathematics graduate student who, like many young women since the Manhattan Project in 1943-45, had been hired at Los Alamos as a computer (sic) herself.

In 1953, however, the computer had also become available in the form of an automaton, no longer only a girl, and a young woman willingly took on the task of instructing it to simulate a physics experiment.


This is the flow chart, i.e. the algorithm of the program. Most likely primarily Pasta’s creation, its development certainly had Tsingou deeply involved. So how come her name disappeared from all subsequent mentions of the crucial experiment?

The internal report of the Los Alamos Laboratory says that it was “written by Fermi, Pasta, and Ulam” on the basis of a work done by “Fermi, Pasta, Ulam, and Tsingou” (E. Fermi, J. Pasta, and S. Ulam, ‘Studies of Nonlinear Problems, I’, Los Alamos Report LA-1940, 1955).

That is, Mary Tsingou had not participated in the drafting of the report, and this caused her name to disappear from among the authors of the experiment in subsequent citations in the scientific literature.

Had the report not just been filed internally at the Lab but published in a scientific journal (something that never happened because Fermi died in November 1954), Tsingou would have been given the usual citation credits. Coding the first computer program ever to simulate a physics experiment is no triviality. It wouldn’t be even today.

The boys’ invasion

Tsingou’s name was revived in 2008 by French physicist Thierry Dauxois in ‘Fermi, Pasta, Ulam and a mysterious lady’ in Physics Today. She graduated in Ann Arbor and went on to work as a programmer for the government and the military, including assisting John von Neumann in a study and becoming one of the earliest FORTRAN virtuosos. She reportedly lives in Los Alamos today.

From the beginning of computing until well into the 1980s, women filled a higher proportion of programming jobs than today. Very early on, as in the post-war years at Los Alamos, they were often preferred for coding jobs because it was believed that a meticulous mentality was required, and this was supposed to be a female specialty.

In the United States, by 1960 more than one in four programmers were women, and by 1983, 37 percent of all students graduating with degrees in computer and information sciences were female. That was the historical peak, though.

Early forms of personal computing were emerging, and they were used almost exclusively by boys. The Commodore and TRS whiz-kids started flocking to Computer Science 101 classes, and within ten years the percentage of women among computing graduates was down to 28 percent. It has kept falling ever since, to about 18% today in the US.

This has turned computing and the IT industry in general into an almost all-male business, although it looks like this is due less to women’s disaffection than to the mighty influx of men.

It is believed that remedies to increase gender diversity in information technology include giving computer devices and game consoles to girls as well, instead of just boys, and placing as many female board members and executives as possible in IT enterprises.

I was first amused, then horrified and finally saddened by reading Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, where physicists Alan Sokal and Jean Bricmont review some of the complete nonsense written on scientific subjects by French intellectuals of great media and often academic success, roughly ascribed, in the authors’ words, to «the admittedly nebulous Zeitgeist that we have called “postmodernism”».

To my fault and dishonor, the book reached my attention only today, more than twenty years after its first publication in French in 1997.

From the late Sixties to the early Nineties, French authors Gilles Deleuze, Félix Guattari, Jacques Lacan, Julia Kristeva, Luce Irigaray, Bruno Latour, Jean Baudrillard, Paul Virilio and many of their sometimes religiously devoted disciples wrote totally meaningless lengthy pages trying to import mathematical and physical concepts into their disciplines, such as psychoanalysis, linguistics or philosophy.

In so doing, they showed superficial and often outdated knowledge of most of the terms employed, such as function, abscissa, chaos, infinity, velocity, topology, universal constant, transfinite cardinals, irrational numbers, imaginary numbers, Gödel’s theorems, Riemannian or Euclidean geometry and many more. Surprisingly, scientific naïveté did not prevent them from stuffing many of their writings, including the most important and celebrated, with loads of scientific terminology.

Perhaps they did that with the honest intention of showing the applicability of certain scientific concepts to their disciplines. In that case, however, they should have at a minimum explained (if not proved) such presumed applicability. But they never did.

Help yourself through this PDF of the book or, as I warmly suggest, get a fresh copy from the library or bookstore, and watch

>>> Lacan meander in “psychoanalytic topology” (pages 19-24); confuse irrational numbers with imaginary numbers (25); write ludicrous formulae (which make the authors suspect that he must be pulling his reader’s leg, page 26); build up an imaginary link between mathematical logic and linguistics (30); and essentially show off a very superficial erudition (36);

>>> Deleuze and Guattari spit out words like chaos, abscissa, function or particle accelerator in complete disregard of their respective scientific meanings and without any purpose other than perhaps a mysterious metaphorical one which they do not explain (pages 156-157); ignore the evolution of calculus in the previous two centuries (160-161); rave deliriously about biology (166-167);

>>> Kristeva confuse the set {0,1} of Boolean logic with the interval [0,1] containing infinitely many real numbers (pages 39-40); apply the axiom of choice, which she does not understand, to the study of poetic texts, offering no justification whatsoever, whether literal or metaphorical (43-44); and ultimately «attempt to impress the reader with fancy words that she obviously does not understand» (page 48);

>>> Irigaray think that Einstein was interested in “accelerations without electromagnetic reequilibrations” (a nonsense concept, page 107); confuse special relativity and general relativity (107); argue that E = mc2 is a “sexist equation” because “it favors the speed of light over other [undefined, Ed.] speeds that are vital for us” (109); wreak havoc on fluid mechanics (110-116); mess with the ABC of mathematical logic (117-120);

>>> Latour discuss Relativity without understanding the concept of frame of reference (125-128); and close a messed-up essay on the subject claiming to have “taught something” to Einstein (130).

Once caught with their pants down, the “postmodernist” stars replied that scientific terminology in their writings is used metaphorically and should not be taken literally. At times, they went as far as to claim that their texts are actually neither straight nor metaphorical and that “hard science” critics simply do not have the cultural instruments to understand.

Alas, for those of you who are at least sophomores studying math or physics or chemistry, there is an extremely simple way to sort out that polemic: just read the pages I pointed to above.

Eventually, the stars’ defenders stated that Sokal and Bricmont’s book was part of a wider American conservative attack on leftist French intellectuals. Um… What’s clear to me, and to any other reader who can tell a Taylor series from a Mercedes Benz, is that even if the book were handcrafted in Langley, VA, the original texts contained therein have never been disowned: that is, the CIA might have fabricated the commentary, yet the source French text is there for us to contemplate, unfortunately shall I say. Deleuze’s Logique du sens or Lacan’s séminaires have never been republished after removing their loads of pseudoscientific stupidities.

Authors Sokal and Bricmont are struck by the fact that those intellectuals did not bother to offer explanations of how the various scientific concepts they brought up (however awkwardly) could be applied to their disciplines: how those dei ex machina might turn out useful in psychoanalysis or linguistics, that is to say.

For my part, I was shocked that someone who deliberately abuses concepts she is clueless about, and stuffs them into texts merely aimed to épater les bourgeois, may not only become famous in the media but win tenure at major Parisian universities 😓. Or are these excesses over, after the fury of the Seventies?

And then, the million-dollar question: are those occasional slips? Accidents that can happen to anyone, including great people, even in writing, and that do not stain one’s complete works?

What the hell! We knew all along that people like Lacan or Deleuze were not interested in science, but who cares? It did not prevent them from being important and profound. I too suspect, with Polonius, that Truth may be a Liar; with Dylan, that Truth is a drunken speech; with Hamlet, that there are more things in heaven and earth than are dreamt of in science.

Yet: can one who is a studied scoundrel for 50 pages still be credible for the remaining 250?

And: if they did not backtrack a little after the book in question, does it mean (a) that they really believed the crap they had written, or (b) that they were so glorified and ostentatious as to consider themselves safe from radical criticism?

(And then Barthes, Derrida, Foucault, heaping praise on those nonsense-laden works… My Goodness! 😭)

Post scriptum –

Physics historian Mara Beller wrote (in Physics Today in 1999) that it was not entirely fair to blame contemporary postmodern philosophers for drawing nonsensical conclusions from quantum physics, since many such conclusions were drawn by some of the leading quantum physicists themselves, such as Bohr or Heisenberg, when they ventured into philosophy. True. But neither Bohr nor Heisenberg got their fame and academic positions for writing nonsense: they started bullshitting after becoming legends…

As we already discussed when we mentioned the guy who claimed he could distinguish real Pollocks from fakes by looking at their fractality, here goes another lot of scholars who maintain they can tell wheat from chaff in the commercial art world (a proven method would lead to lucrative consulting fees): their paper seems to me less a breakthrough work than an abracadabra written in poor English about a quantity, complexity, which the authors can neither measure nor define.

If I am wrong, please let me know…

You are unlikely to have ever met a perfect circle: one with absolutely no irregularities on the edge of its circumference, regardless of how closely you look at it. Even the most technologically advanced and accurately crafted circle is far from ideal: inspect it with a sufficiently precise optical tool, and you will notice the imperfections.

Today, we could build a very smooth circle, such as a bearing for nanomachines, using a scanning force microscope, a device that allows moving individual atoms. We could patiently shape the external surface of the bearing for long hours using our high-tech tool. But in the end, what we have is still a relatively rough contour: optical microscopes would show a perfect circle, but another scanning force microscope would reveal the trick.

The fact of the matter is that perfect figures, such as circles, triangles or squares, only exist in geometry books. They do not belong in this world: they are idealizations of reality, archetypes, models.

Does that mean that geometry is a foolish thing, or a useless pastime like a sudoku?

Uhm, not really. Until they learned basic geometry such as Pythagoras’ theorem or trigonometry, humans could not safely compute distances, build calendars, divide up land properties, sail a ship in unknown seas or develop firearms. The first prosperous, technology-based societies of Mesopotamia, India or China were born after geometrical and mathematical knowledge had been accumulated and formalized.

If you want to measure a field of rectangular shape, you will take its length, then its width, and multiply them. A center-pivot irrigation field has the shape of a circle: its area is equal to π times the square of the radius. If a grain silo is conical, its volume is to be computed as one-third the height times the base area. Neither fields nor silos are perfect geometric figures: no exact circles or right angles. But the formulae still hold. If you have a square and multiply the length of the side by itself, you get the area of the surface, irrespective of whether the square at hand is a wheat field or an idealized figure in a Euclidean school book.
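The three formulae (rectangle: length times width; circle: π times the radius squared; cone: one-third the height times the base area) can be written out as a few lines of Python. A minimal illustration with invented numbers; the point is that the same code serves ideal figures and real fields alike:

```python
import math

def rectangle_area(length: float, width: float) -> float:
    """Area of a (nearly) rectangular field: length times width."""
    return length * width

def circle_area(radius: float) -> float:
    """Area of a center-pivot irrigation field: pi times radius squared."""
    return math.pi * radius ** 2

def cone_volume(height: float, base_area: float) -> float:
    """Volume of a conical grain silo: one-third the height times the base area."""
    return height * base_area / 3.0

# The formulae hold for idealized figures and real-world objects alike:
field = rectangle_area(120.0, 80.0)         # 9600.0 square metres
pivot = circle_area(50.0)                   # ~7853.98 square metres
silo = cone_volume(10.0, circle_area(6.0))  # ~376.99 cubic metres
```

Only the precision of the inputs changes between the wheat field and the school-book square; the algorithm does not.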

The only thing that really matters is the precision you require. If what you have is a real-world wheat field and you get more and more precise measures of the side thanks to improving metering technology, then you will be getting more and more accurate measures of area, but the process is never-ending unless you decide that a certain precision is enough (and crop fields are measured in acres, not square inches, anyway).

And, more importantly, the process, the method, the algorithm is the same: side times side equals area, or π times diameter equals circumference, irrespective of whether the object is ideal or real.

That is why geometry and mathematics have developed so much. They seem to be dealing with idealized, esoteric things and platonic ideas, but the methods we learn and devise within them are good in reality as well: they do have real-life applications.

(In the past two centuries, mankind has developed a whole lot of mathematical concepts that, unlike idealized circles or triangles, seem not to bear any connection whatsoever with the physical world. But that is not a problem. To begin with, every now and then we realize that this or that abstruse piece of math has a useful application; secondly, even “useless” mathematics teaches scholars new tricks of the trade. And, last but not least, mathematics is beautiful!)

When applied in the real world, as we saw, the rules of mathematics may give rise to some errors. Enter applied mathematicians and engineers. These people play a slightly different role than pure mathematicians: they know the tricks of applying math rules to real objects. Engineers never deal with perfect circles, ideal straight lines or truly square angles.

Sometimes, imperfections do not bother them; sometimes they do. The former case corresponds to situations where the differences between ideal triangles or circles and their real-world counterparts do not matter, like when measuring fields in acres.

Or take quantum mechanics: we know that Nature is quantum. However, in many meso-scale situations (i.e., neither extremely small nor extremely large ones), we ignore quantum mechanics and continue designing applications in accordance with good old Newton-Laplace mechanics, even though the numeric results we get may be slightly imperfect.

Same story with Einstein’s Relativity. We know that Nature is relativistic. However, as long as things move at speeds much smaller than light’s and do not travel intergalactic distances, engineers ignore the relativistic effects because they are negligible for most practical purposes. Formula One cars, aircraft and super-tall skyscrapers are built with such an approach. GPS systems in our cars and smartphones, on the other hand, are equipped with hardware and software that take Relativity into account, otherwise they would come up with significantly wrong distances. Therefore, here is one example of a situation where engineers are not happy with the classical, non-relativistic approximation: a case where, in a sense, the difference between a real and an ideal triangle is important.

Geometry itself, the world of idealized shapes, is more complicated than we assumed above. Circles, triangles and squares only approximately obey the ordinary rules that we learned at school, because they actually belong in a non-Euclidean world, where straight parallels cross, the shortest path is not a straight line, and what you see is not what you get.

Whenever we say that area equals base times height, we are approximating the truth somewhat, assuming a right angle where one really is not and a straight line where there is a curve. The geometry that 99.9% of people know and use, and with which the Great Pyramid of Giza or the Burj Khalifa tower in Dubai have been built, is slightly wrong, both in principle and in practice!

The point I believe I have made is simple: technology is grounded on approximations. It is never exact, infinitely precise. Real-world scientific applications all live within the limits of their tolerable precision, which is deemed sufficient until proven otherwise. They are sufficiently precise, good enough. (In fact, this even occurs in theoretical science, where being as precise as possible is more important: we do not know with infinite precision the value of π, or the electric charge of the electron, or its mass.)

Examples of situations where theory and practice (i.e. applications, technology) diverge include:

  • Euclidean Geometry, as discussed above. In principle, straight lines do not exist, parallels cross, and so on. This has had, and is having, profound influence in mathematics as well as in physical models of Nature: but it does not disturb the nights of any engineer, and 99.999% of the advanced technology we have, from DNA to supercomputers to exotic financial products, would be exactly the same even if we knew nothing about hyperbolic and elliptic geometries;
  • Dimensions of the Universe. Well into the 19th century it was believed that the “real world” had three dimensions. This evolved to four with the added time dimension of Relativity. It is now up to eleven or so in string theory, and the discussion is very dynamic: the number jumps up and down from one physics congress to the next. This is an extremely important fundamental scientific discussion: but it has zero impact on daily life and technology;
  • Theory of Relativity. In the vast majority of everyday situations, including those involving sophisticated technologies, we do not worry about the consequences of relativity, because the objects we deal with do not move at speeds approaching that of light or travel intergalactic distances. The effects of Relativity, such as the contraction of the length of traveling objects or the dilation of the time experienced by a traveler, only become significant under those circumstances; therefore they are negligible in most mesoscale situations. On the other hand, as we discussed already, there also exist common situations where we had better take Relativity into account: for example, GPS devices;
  • Quantum Mechanics. One of the basic principles in physics, Heisenberg’s uncertainty principle, implies that it is impossible to measure the present position of an object while simultaneously also determining its future motion: if we know exactly where it is, then we will not know where it is going, while if we know where it is going then we cannot tell exactly where it is. The object in question, however, must be so small (like, for example, an electron or a proton) as to require especially accurate measuring apparatus. The same problem does still occur, but is totally negligible and uninteresting, when the action involved in observing the objects is far larger than Planck’s constant, which is a small quantity, equal roughly to a millionth of a billionth of a billionth of a billionth of a Joule*second. (One Joule is the energy released by dropping a small apple from one metre high onto the ground, as in Isaac Newton’s famous anecdote.) With ordinary objects in the mesoscale, which is where humans belong, the uncertainty principle is irrelevant. Its importance is significant philosophically, because it means that Nature is fundamentally uncertain and indeterministic: but technology-wise, in most cases it is but a curiosity.
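Why the uncertainty principle is irrelevant at the mesoscale can be seen with a back-of-the-envelope calculation of the Heisenberg bound Δv ≥ ħ/(2mΔx), the minimum velocity uncertainty once position is known to within Δx. The masses and position uncertainties below are illustrative, not taken from any specific experiment:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def min_velocity_uncertainty(mass_kg: float, position_uncertainty_m: float) -> float:
    """Heisenberg lower bound on velocity uncertainty: dv >= hbar / (2 * m * dx)."""
    return HBAR / (2.0 * mass_kg * position_uncertainty_m)

# An electron pinned down to atomic size: the bound is enormous.
electron = min_velocity_uncertainty(9.109e-31, 1e-10)  # ~5.8e5 m/s

# A 100-gram apple located to within a micron: the bound is ludicrously small.
apple = min_velocity_uncertainty(0.1, 1e-6)            # ~5.3e-28 m/s
```

A dozen orders of magnitude separate the two regimes: for the apple, the quantum fuzziness is far below anything measurable, which is exactly the point made above.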

As happened with Relativity and GPSs, there may well be cases when the intimate non-linearity of Nature pops up and ceases to be simply a sophisticated epistemological matter. It could, for example, become relevant to the life of professionals and managers: witness the discussion in econophysics. However, this has not happened yet. Or, if it has, we do not know: no one has as yet presented a credible account of how non-linearity is affecting businesses and professions.

Pop complexity pundits confuse the epistemological issues with the practical consequences.

Do they hear about someone having proved that a butterfly flapping its wings could cause a tornado ten thousand miles away? Well, they will assume that to be a dominant phenomenon in meteorology.

Equally, they assume that getting “surprises” from complexity (such as, e.g., weird emergent behavior) is standard, and forget that most surprises simply come from plain ignorance.


One of the recurring myths of pop complexity is that in a complex system «the whole is greater than the sum of its parts». Whenever you read this anywhere, you can rest assured that the author is confused about the meaning of complexity.

It goes like this.

The properties of a linear system are additive: the effect of a collection of elements is the sum of the effects of the elements considered separately, and overall no new properties appear that are not already present in the individual elements. But if the elements/parts are combined and depend on one another (nonlinearity), then the whole is different from the sum of the parts and new effects start to appear.

In other words: Since a complex system, by definition, does not obey the superposition principle, its behavior as a whole does not reflect that of the composing elements. The system’s response R to the simultaneous application of stimuli S1… Sn is different from the sum of the individual responses to each stimulus when applied in sequence, R1+…+Rn.

However, this in no way implies that the systemic response be larger or smaller than the sum of the individual responses: it can be either, depending on whether positive or negative feedback takes place. Or it could be numerically equal, although it would still remain logically different. (For that matter, the assumption that an additive property is implied in all cases is arbitrary.)
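The argument can be made concrete with a toy calculation (the response functions below are illustrative inventions, not models of any real system): a quadratic response violates superposition with a whole *greater* than the sum, while a saturating response violates it with a whole *smaller* than the sum.

```python
def linear_response(s: float) -> float:
    return 3.0 * s          # obeys superposition

def nonlinear_response(s: float) -> float:
    return s * s            # toy quadratic system (positive-feedback-like)

s1, s2 = 2.0, 5.0

# Linear: the response to combined stimuli equals the sum of individual responses.
assert linear_response(s1 + s2) == linear_response(s1) + linear_response(s2)

# Nonlinear: it does not; here the whole exceeds the sum of the parts.
combined = nonlinear_response(s1 + s2)                     # 49.0
summed = nonlinear_response(s1) + nonlinear_response(s2)   # 29.0

def saturating_response(s: float) -> float:
    return min(s, 4.0)      # toy saturating system (negative-feedback-like)

# And here the whole is SMALLER than the sum of the parts: 4.0 versus 6.0.
assert saturating_response(s1 + s2) < saturating_response(s1) + saturating_response(s2)
```

Neither «greater» nor «smaller» is implied by nonlinearity as such; only «different».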

You are encouraged to use the synergy metaphor to tell wheat from chaff (Matthew 3:12) in complexity literature.


I can hardly think of anyone dumber than the average smartphone enthusiast.

Technically a meaningless term by now, smartphone is used as a marketing buzzword to make the consumer feel smart as she keeps employing her financial resources and personal time to consume online.

The objects of consumption are hardware gadgets, connection time, apps, online entertainment, and especially in-app purchasing, one of the killer marketing applications of the 2000s, first popularized by Apple.

In the process, people also consume most of their cognitive bandwidth, which, consistently with what Jonathan Zittrain anticipated, is directed to playing the games conceived by astute marketers, and almost never aimed at expanding one’s competence.

As with most digital technologies, one to five percent of people are leveraging smartphones to gain power and/or expand knowledge, while the rest are but blind consumers. And the consumer is “a prey in the Supranet jungle”…

Mastery of technology, be it digital, financial, biotech or materials, is what generates the increasing income inequality observed worldwide. Take a look at the portion of people who can use technologies (instead of just being used through them), and you’ll get a proxy for the portion of people who are getting richer and richer.

[Written on the day that “smartphone sales surpassed feature phones”, whatever that means]

Don’t know about you folks, but personally I am fed up with economists coming up with breakthrough forecasting methods (and particularly Black Swans, Dragon Kings and all the sexy outfits good for gaining press coverage).

Since economics is not a Galilean science, it all boils down to mere intellectual, axiomatic speculation.

Hence, I can only tolerate, and indeed admire, scholars (Nobels included) who speak to me with that consciousness and discuss economics the way philosophy or math can be discussed. And I particularly like and support those who work towards providing an experimental base for economic research.

All others, however cloaked in formulas and while often intelligent people, mostly appear to me as funny clowns.

I am most grateful to physicist Ijaz Durrani for referring me to a paper recently published by Liu, Slotine and Barabasi in the Proceedings of the US National Academy of Sciences, addressing the observability of nonlinear dynamical systems.

The authors believe they have proven that their graphical approach (GA) leads to the isolation of a minimum set of sensors (i.e., a subset of the system’s internal variables) necessary and sufficient to describe the dynamics of a complex system.

For linear dynamical systems, the minimum sensor set derived from the GA would only be necessary and not sufficient. But for nonlinear dynamical systems the GA sensor set is also sufficient. According to the authors, this stems from the fact that, unlike linear systems, nonlinear systems contain zero or almost zero symmetries in the state variables.

Any symmetries in the state variables that leave the inputs, outputs, and all their derivatives invariant make the system unobservable (i.e., you can’t look at outputs and say something positive about the system’s state): a dynamical system with internal symmetries can have an infinite number of temporal trajectories that cannot be distinguished from each other by monitoring outputs.

A complex system, on the other hand, is more essential; it has a personality (no symmetries): and this is why its behavior can be captured by a subset of the internal variables, i.e., by monitoring only some outputs.

The paper does not offer rigorous proof of the sufficiency of the GA-selected sensors. The authors have simply run a total of circa 1,000 numerical simulations in several complex domains (such as Michaelis-Menten, Lotka-Volterra, and Hindmarsh-Rose) and found the GA-selected subset to be a sufficient descriptor.

The graphical approach reduces observability (a dynamical problem) to a property of the static map of an inference diagram: and such maps are available for an increasing number of complex problems, like the three mentioned above.

The graph is obtained as follows.

As in the life-sciences example offered in the paper, consider a number of chemical substances

A, B, C, D, …

some of which are reacting with each other. Reactions, that is, will be of the kind

A + B + C → D + F + J

D ↔ E


Liu, Slotine, Barabasi: “Observability of complex systems”, PNAS 2013

and so on. You may therefore write, using mass-action kinetics, balance equations representing all reactions: the equations will contain the substances’ concentrations as variables (xA, xB, xC, xD, …) and a number of rate constants k1, k2, …, as many as there are reactions.

From there, an inference diagram is built by drawing a directed link

xi → xj

if xj appears in the right-hand side of xi’s balance equation.

Then, strongly connected components (SCCs) are identified: the largest subgraphs such that there is a directed path from each node to every other node in the subgraph. Among these, “root” SCCs are those with no incoming edges. At least one node is chosen from each root SCC to ensure observability of the whole system.
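The node-selection step can be sketched in a few lines of Python. This is my own toy reconstruction, not the authors’ code: the inference diagram is an adjacency list, SCCs are found with Tarjan’s algorithm, and one node is kept from each root SCC (an SCC with no incoming edges from outside itself):

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm; `graph` maps each node to a list of successor nodes."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in list(graph):
        if v not in index:
            visit(v)
    return sccs

def root_scc_sensors(graph):
    """Pick one node from each root SCC of the inference diagram."""
    sccs = strongly_connected_components(graph)
    member = {v: i for i, scc in enumerate(sccs) for v in scc}
    has_incoming = set()
    for v, succs in graph.items():
        for w in succs:
            if member[v] != member[w]:  # edge crossing SCC boundaries
                has_incoming.add(member[w])
    return [sorted(scc)[0] for i, scc in enumerate(sccs) if i not in has_incoming]

# Toy inference diagram: xA and xB appear in each other's balance equations,
# and xA appears in xC's equation (edge xC -> xA).
diagram = {"xA": ["xB"], "xB": ["xA"], "xC": ["xA"]}
sensors = root_scc_sensors(diagram)  # ['xC']: {xC} is the only root SCC
```

The three-substance diagram is invented for illustration; for a real system the adjacency list would come from the mass-action balance equations, as described above.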

These findings are likely to benefit various domains of public interest, such as medicine or economics and other social sciences.

There also are several lessons here for pop-complexity fans to learn: e.g., complexity can be managed, and it can be done using a scientific instead of a fideistic or animistic approach.

Paolo Magrassi 2013 Creative Commons Attribution-Non-Commercial-Share Alike