Reductionism and Complex Systems Engineering

Reductionism is the philosophical position that a complex system is nothing but the sum of its parts, and that an account of the system can be reduced to accounts of its individual constituents (Wikipedia).

This notion is the foundation for the engineering of large systems. Essentially: design the system so that it is composed of discrete parts which, because they are smaller, are easier to understand and construct; define interfaces that allow these parts to work together; build the parts; then integrate them to create the desired system.
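
As a toy illustration, here is a minimal sketch in Python (the MessageStore interface, the InMemoryStore and the Mailbox are all invented for the purpose of the example, not taken from any real system): two parts are understood and built separately, connected only through an explicitly defined interface, and integrated at the end.

```python
from abc import ABC, abstractmethod

# The interface: an agreed contract between two separately built parts.
class MessageStore(ABC):
    @abstractmethod
    def save(self, message: str) -> None: ...

    @abstractmethod
    def load_all(self) -> list[str]: ...

# Part 1: a small, self-contained component that is easy to
# understand and construct in isolation.
class InMemoryStore(MessageStore):
    def __init__(self) -> None:
        self._messages: list[str] = []

    def save(self, message: str) -> None:
        self._messages.append(message)

    def load_all(self) -> list[str]:
        return list(self._messages)

# Part 2: depends only on the interface, not on any implementation,
# so it can be designed and built independently of part 1.
class Mailbox:
    def __init__(self, store: MessageStore) -> None:
        self._store = store

    def receive(self, message: str) -> None:
        self._store.save(message)

# Integration: the parts are composed to create the desired system.
mailbox = Mailbox(InMemoryStore())
mailbox.receive("hello")
```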

Most researchers in software engineering have taken a reductionist perspective, and their work has focused on finding better ways to decompose problems or systems (e.g. work in software architecture), better ways to create the parts of the system (e.g. object-oriented techniques), or better ways to integrate these parts into a system (e.g. test-first development).
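
To make the last of these concrete, here is a minimal test-first sketch in Python (the apply_discount function and its tests are hypothetical, invented purely for illustration): the tests are written first, as a specification of the part's behaviour, and the implementation is then written to make them pass.

```python
import unittest

# Written first: the tests specify the behaviour before the code exists.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_off_large_orders(self):
        self.assertEqual(apply_discount(200.0), 180.0)

    def test_no_discount_on_small_orders(self):
        self.assertEqual(apply_discount(50.0), 50.0)

# Written second: just enough implementation to make the tests pass.
def apply_discount(total: float) -> float:
    return total * 0.9 if total > 100 else total

if __name__ == "__main__":
    unittest.main()
```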

At one level, this has been quite successful: the software systems that we can now build are certainly much more reliable than the systems of the 1970s and 1980s. From another perspective, the approach has been less effective: no general interface standards have emerged that would allow a software components industry to develop, except perhaps in very specific domains.

Reductionism as a basis for software engineering rests on three fundamental assumptions:

  1. That the creator of a system has control over all of its parts and can therefore decide whether or not to change a part or adapt it to work with another part.

  2. That the system is being developed in a rational world, where design decisions are made primarily on technical criteria.

  3. That the system is being developed to solve a definable problem and that system boundaries can be established.

Of course, we know that these assumptions are optimistic, and the reality is that they are hardly ever true in practice. Consequently, there are difficulties and problems in constructing large software systems: they are built from unknown components, decisions are driven by political agendas, and the problem being addressed either hasn’t been properly defined or can’t be properly defined.

The majority of software engineering research is based on this reductionist perspective: new techniques are developed to decompose systems into their parts, to construct and validate these parts, and then to assemble them into a system. A good example is model-driven architecture, which raises the level of abstraction at which we design a system and provides some automated support for implementing and assembling its parts.
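
Real model-driven architecture tools are far more sophisticated than anything I can show here, but a toy sketch conveys the idea (the model format and the generator below are entirely my own invention): the system is described declaratively, at a higher level of abstraction, and a tool mechanically transforms that description into implementation code and assembles it.

```python
# A hypothetical 'model': entities described as data, at a higher level
# of abstraction than the code that will realise them.
model = {
    "Customer": ["name", "email"],
    "Order": ["customer_id", "total"],
}

# The 'tool': mechanically transforms each model element into code.
def generate_class(name: str, fields: list[str]) -> str:
    params = ", ".join(f"{field}=None" for field in fields)
    body = "\n".join(f"        self.{field} = {field}" for field in fields)
    return f"class {name}:\n    def __init__(self, {params}):\n{body}\n"

# 'Assembly': generate the parts and bring them to life in one namespace.
source = "\n".join(generate_class(n, fs) for n, fs in model.items())
namespace: dict = {}
exec(source, namespace)

order = namespace["Order"](customer_id=1, total=99.5)
print(order.total)  # -> 99.5
```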

However, researchers are often disappointed at the slow uptake of new methods in software engineering. The problem here is that they subscribe to the reductionist assumption of rationality: because something is better according to a rational analysis, they can’t really understand why it is not adopted. Of course, the reality is that we don’t live in a rational world, and most decisions are made on the basis of prejudice and evangelism rather than rational analysis.

Nevertheless, reductionism has served us reasonably well. We can and do build large and complex software systems, although these often take longer and cost more than originally estimated. But the larger and more complex the system, the less valid the reductionist assumptions become, and so the less useful reductionist approaches are.

I now believe that we are facing a situation where the reductionist methods that have more or less worked for software have reached their limits, and that the type and complexity of the systems we are now building require us to think differently about systems engineering.