Monday, February 6, 2012

Think Complexity: Part Three

My new book, Think Complexity, will be published by O'Reilly Media in March. For people who can't stand to wait that long, I am publishing excerpts here.  If you really can't wait, you can read the free version at thinkcomplex.com.

In Part One I outlined the topics in Think Complexity and contrasted a classical physical model of planetary orbits with an example from complexity science: Schelling's model of racial segregation.

In Part Two I outlined some of the ways complexity differs from classical science.  In this installment, I describe differences in the ways complex models are used, and their effects in engineering and (of all things) epistemology.

A new kind of model

Complex models are often appropriate for different purposes and interpretations:

Predictive → explanatory: Schelling's model of segregation might shed light on a complex social phenomenon, but it is not useful for prediction.  On the other hand, a simple model of celestial mechanics can predict solar eclipses, down to the second, years in the future.
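
To see why the model explains rather than predicts, here is a minimal sketch of a Schelling-style grid in Python.  It is not the code from the book; the grid size, the mix of agents, and the 30% similarity threshold are arbitrary choices of mine.

    import random

    def schelling_step(grid, size, threshold=0.3):
        """Move one randomly chosen unhappy agent to an empty cell.

        grid maps (row, col) to 'X', 'O', or None (empty); an agent is
        unhappy if fewer than `threshold` of its occupied neighbors
        share its type.
        """
        def neighbors(r, c):
            # the eight surrounding cells, wrapping around the edges
            return [grid[((r + dr) % size, (c + dc) % size)]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)]

        r, c = random.choice([loc for loc, kind in grid.items() if kind])
        kind = grid[(r, c)]
        hood = [n for n in neighbors(r, c) if n is not None]
        if hood and sum(n == kind for n in hood) / len(hood) < threshold:
            empty = [loc for loc, k in grid.items() if k is None]
            if empty:
                grid[random.choice(empty)], grid[(r, c)] = kind, None

    # start with a random grid: each cell is 'X', 'O', or empty
    size = 20
    grid = {(r, c): random.choice(['X', 'O', None])
            for r in range(size) for c in range(size)}
    for _ in range(10000):
        schelling_step(grid, size)

Even a mild 30% preference produces visible clustering over time, which is the qualitative point; nothing here could predict an actual city.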

Realism → instrumentalism: Classical models lend themselves to a realist interpretation; for example, most people accept that electrons are real things that exist.  Instrumentalism is the view that models can be useful even if the entities they postulate don't exist. George Box wrote what might be the motto of instrumentalism: "All models are wrong, but some are useful."

Reductionism → holism: Reductionism is the view that the behavior of a system can be explained by understanding its components. For example, the periodic table of the elements is a triumph of reductionism, because it explains the chemical behavior of elements with a simple model of the electrons in an atom.  Holism is the view that some phenomena that appear at the system level do not exist at the level of components, and cannot be explained in component-level terms.

A new kind of engineering

I have been talking about complex systems in the context of science, but complexity is also a cause, and effect, of changes in engineering and the organization of social systems:

Centralized → decentralized: Centralized systems are conceptually simple and easier to analyze, but decentralized systems can be more robust.  For example, on the World Wide Web clients send requests to centralized servers; if the servers are down, the service is unavailable.  In peer-to-peer networks, every node is both a client and a server.  To take down the service, you have to take down every node.
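
A toy experiment makes the contrast concrete.  This sketch uses the NetworkX package to knock out the best-connected node in two networks: a star (one server, many clients) and a random graph standing in for a peer-to-peer network.  The sizes and edge probability are arbitrary choices of mine.

    import networkx as nx

    def biggest_component_fraction(g):
        """Fraction of nodes in the largest connected component."""
        biggest = max(nx.connected_components(g), key=len)
        return len(biggest) / g.number_of_nodes()

    n = 100
    star = nx.star_graph(n - 1)           # one server, n-1 clients
    peer = nx.erdos_renyi_graph(n, 0.1)   # each pair connected with prob 0.1

    for name, g in [('centralized', star), ('decentralized', peer)]:
        hub, degree = max(g.degree, key=lambda pair: pair[1])
        g.remove_node(hub)                # take down the best-connected node
        print(name, biggest_component_fraction(g))

Removing the hub shatters the star into isolated clients, while the random network keeps almost all of its nodes connected.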

Isolation → interaction: In classical engineering, the complexity of large systems is managed by isolating components and minimizing interactions.  This is still an important engineering principle; nevertheless, the availability of cheap computation makes it increasingly feasible to design systems with complex interactions between components.

One-to-many → many-to-many: In many communication systems, broadcast services are being augmented, and sometimes replaced, by services that allow users to communicate with each other and create, share, and modify content.

Top-down → bottom-up: In social, political and economic systems, many activities that would normally be centrally organized now operate as grassroots movements.  Even armies, which are the canonical example of hierarchical structure, are moving toward devolved command and control.

Analysis → computation: In classical engineering, the space of feasible designs is limited by our capability for analysis.  For example, designing the Eiffel Tower was possible because Gustave Eiffel developed novel analytic techniques, in particular for dealing with wind load.  Now tools for computer-aided design and analysis make it possible to build almost anything that can be imagined.  Frank Gehry's Guggenheim Museum Bilbao is my favorite example.

Design → search: Engineering is sometimes described as a search for solutions in a landscape of possible designs.  Increasingly, the search process can be automated.  For example, genetic algorithms explore large design spaces and discover solutions human engineers would not imagine (or like).  The ultimate genetic algorithm, evolution, notoriously generates designs that violate the rules of human engineering.
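
To make the idea of automated search concrete, here is a minimal genetic algorithm; the bitstring encoding, truncation selection, and the count-the-ones fitness function are placeholder choices of mine, not anything from the book.

    import random

    def evolve(fitness, length=20, pop_size=50, generations=100,
               mutation_rate=0.02):
        """Minimal genetic algorithm over fixed-length bitstrings."""
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]            # keep the fitter half
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, length)      # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (random.random() < mutation_rate)
                         for bit in child]             # rare random flips
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    # example: maximize the number of ones in the string
    best = evolve(fitness=sum)
    print(best, sum(best))

Selection, crossover, and mutation are enough to climb toward the all-ones string without any explicit design step.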

A new kind of thinking

We are getting farther afield now, but the shifts I am postulating in the criteria of scientific modeling are related to 20th Century developments in logic and epistemology.

Aristotelian logic → many-valued logic: In traditional logic, any proposition is either true or false. This system lends itself to math-like proofs, but fails (in dramatic ways) for many real-world applications.  Alternatives include many-valued logic, fuzzy logic, and other systems designed to handle indeterminacy, vagueness, and uncertainty.  Bart Kosko discusses some of these systems in Fuzzy Thinking.
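
As a taste of what an alternative looks like, here is one common formulation, Zadeh's min/max operators, which replace the two truth values with degrees of truth between 0 and 1; the example propositions are my own.

    def fuzzy_and(a, b):
        return min(a, b)

    def fuzzy_or(a, b):
        return max(a, b)

    def fuzzy_not(a):
        return 1 - a

    # degrees of truth: "the coffee is hot" 0.7, "the coffee is fresh" 0.4
    hot, fresh = 0.7, 0.4
    print(fuzzy_and(hot, fresh))             # 0.4
    print(fuzzy_or(hot, fuzzy_not(fresh)))   # 0.7
    # Aristotelian logic is the special case where all values are 0 or 1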

Frequentist probability → Bayesianism: Bayesian probability has been around for centuries, but was not widely used until recently; its adoption was facilitated by the availability of cheap computation and the reluctant acceptance of subjectivity in probabilistic claims.  Sharon Bertsch McGrayne presents this history in The Theory That Would Not Die.
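
Cheap computation is easy to demonstrate here: a brute-force Bayesian update over a grid of hypotheses takes only a few lines.  The coin-flipping example and the uniform prior below are my own illustration, not an excerpt from the book.

    # hypotheses: the coin's probability of heads is one of these values
    hypos = [i / 100 for i in range(101)]
    prior = [1 / len(hypos)] * len(hypos)        # uniform prior

    def update(prior, data):
        """Posterior after observing data, a string of 'H' and 'T'."""
        post = list(prior)
        for i, p in enumerate(hypos):
            for outcome in data:
                post[i] *= p if outcome == 'H' else 1 - p
        total = sum(post)
        return [q / total for q in post]

    posterior = update(prior, 'HHTHHHTH')        # six heads, two tails
    best = max(range(len(hypos)), key=lambda i: posterior[i])
    print(hypos[best])                           # prints 0.75

With a uniform prior the posterior peaks at 0.75, the observed proportion of heads; a different prior would pull it elsewhere, which is exactly the subjectivity that frequentists resist.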

Objective → subjective: The Enlightenment, and philosophic modernism, are based on belief in objective truth; that is, truths that are independent of the people who hold them.  20th Century developments including quantum mechanics, Gödel's Incompleteness Theorem, and Kuhn's study of the history of science called attention to seemingly unavoidable subjectivity in even "hard sciences" and mathematics.  Rebecca Goldstein presents the historical context of Gödel's proof in Incompleteness.

Physical law → theory → model: Some people distinguish between laws, theories, and models, but I think they are all the same thing.  People who use "law" are likely to believe that it is objectively true and immutable; people who use "theory" concede that it is subject to revision; and people who use "model" concede that it is based on simplification and approximation.

Some concepts that are called "physical laws" are really definitions; others are, in effect, the assertion that a model predicts or explains the behavior of a system particularly well. I discuss the nature of physical models later in Think Complexity.

Determinism → indeterminism: Determinism is the view that all events are caused, inevitably, by prior events.  Forms of indeterminism include randomness, probabilistic causation, and fundamental uncertainty. We come back to this topic later in the book.

These trends are not universal or complete, but the center of opinion is shifting along these axes.  As evidence, consider the reaction to Thomas Kuhn's The Structure of Scientific Revolutions, which was reviled when it was published and is now considered almost uncontroversial.

These trends are both cause and effect of complexity science.  For example, highly abstracted models are more acceptable now because of the diminished expectation that there should be a unique, correct model for every system.  Conversely, developments in complex systems challenge determinism and the related concept of physical law.

The excerpts so far have been from Chapter 1 of Think Complexity.  Future excerpts will go into some of these topics in more depth.  In the meantime, you might be interested in the timeline of complexity science from Wikipedia.
