In 1981, Peter Checkland, a systems engineer, published a book that all software engineers should read. It’s called Systems Thinking, Systems Practice, and it documents the fundamental basis of Checkland’s Soft Systems Methodology. Although the book doesn’t mention software, it explains why software engineering is not like other engineering disciplines – which is why every software engineer should read it.
Essentially, Checkland introduces a number of system types, including natural systems (such as the climate system), manufactured systems (which he calls designed systems) and human activity systems (which we now usually call socio-technical systems). For the purposes of this discussion, the system types that matter are designed physical systems and abstract systems.
Designed physical systems are systems created by people that are subject to constraints imposed by physical laws. These constraints form the basis of theories about the system, so we can model with reasonable accuracy, for example, the stresses that wind pressure imposes on a structure.
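Checkland’s point about physical theory can be made concrete. Because the underlying physics is settled, a wind-pressure model reduces to a short, largely uncontroversial calculation. A minimal sketch, using the standard dynamic-pressure formula (the function name, parameter names and the default sea-level air density are my own illustrative choices, not from the book):

```python
def wind_force(speed_m_s: float, drag_coefficient: float,
               area_m2: float, air_density: float = 1.225) -> float:
    """Force in newtons exerted by wind on a structure.

    Uses the standard dynamic-pressure relation q = 0.5 * rho * v**2,
    then scales by the structure's drag coefficient and exposed area.
    """
    dynamic_pressure = 0.5 * air_density * speed_m_s ** 2  # pascals
    return dynamic_pressure * drag_coefficient * area_m2


# A 20 m/s wind on a 10 m^2 face with drag coefficient 1.2
force = wind_force(20.0, 1.2, 10.0)
```

The specification of such a program is easy to agree on precisely because physics, not stakeholder opinion, dictates the formula; disagreement is limited to engineering parameters like the drag coefficient.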
Checkland introduces another type of system called an abstract system. Abstract systems, such as mathematical models, are human inventions. They are not subject to physical laws, so no underlying theories can be devised to constrain their construction. We can design an abstract system that models a universe in which the speed of light is not an ultimate constraint and explore what this would mean.
This is fundamentally important because software is the embodiment of an abstract system. Essentially, we take some physical system (such as the climate) or human activity system (such as the payment of taxes), create an abstract model of this system and build it in software. But, critically, the model is not constrained by these systems – people decide how to interpret them for modelling.
Sometimes, there is a relatively unambiguous understanding of the system that is being modelled in software. It is therefore not difficult to reach agreement on the specification of, say, a software system that computes the wind pressure on a structure. Sometimes, the software specification is ‘imposed’ – a company creating software products or apps simply decides what these systems should do.
However, when we create models of complex human activity systems in software, it is much harder to reach agreement on a specification. Different stakeholders interpret the system in their own way, and the system is subject to changing demands from the external world. The specification is neither unambiguous nor stable. Unlike physical systems, which are constrained by physical laws, there are no universal laws of human activity that can limit the bounds of the software system. And unlike physical systems, where we can measure which design is the strongest, lightest or fastest, there is no objective way to establish unambiguous success criteria for a software system and measure the system against them.
For this reason, I think the search for a ‘general theory of software engineering’ is bound to fail. It’s an attractive idea but it stems fundamentally from a hard, reductionist engineering mindset that does not recognise that complex human activity systems cannot be formalised in the same way as physical systems.
This is something that, as software engineers, we have to live with. We need to find new ways to cope with complexity – the old ways will not do.