Three factors that inhibit the take-up of software engineering research (April 2022)

I wrote in a recent article about my views on why software engineering research has had very little direct impact on software engineering practice. I identified three shortcomings in the research community that I believe have contributed to this, namely short-term thinking, a mistaken belief in reductionism, and a culture of competition rather than cooperation.

To be fair to the research community, however, the problems of research impact are not all of its own making. A number of external factors continue to inhibit the industrial take-up of software engineering research. As well as the community addressing its own problems, these external factors also have to be mitigated in some way if real progress is to be made in getting social value from software engineering research.

Significant factors that contribute to the limited take-up of research are:

The development funding gap.

The research pipeline, as seen in other disciplines, starts with experiments around some particular problem. If these are successful, further development work is required to scale up these experiments and to demonstrate that the initial experimental results are valid on real problems. A high percentage of initially promising results are discarded at this stage simply because they do not scale. Finally, there is a transition to widespread use in practice, starting with early adopters and over time becoming more general.

The critical stage is the second one, where scaling-up is investigated. This is an expensive but also a risky stage. Because of the risks, industry is reluctant to pay for this work; because of both the costs and the fact that research funding agencies are tasked with funding research rather than development, it is very difficult to obtain funding for so-called development activities. I was one of a number of people in the UK who tried to convince the UK research funding agency to reconsider this position in the 1990s, but to no avail.

The consequence is that initially promising research results are abandoned as the funding agencies move on to the ‘next big thing’. Industry, quite rightly, ignores the research as there is no evidence that it scales to practice.

There’s money to be made from poor software engineering.

It has been accepted since the 1970s that the cost of operating and maintaining large software systems significantly exceeds the costs of developing these systems. This would suggest that a focused research effort in this area would be valuable. However, because of the business model that is used for custom software development, there is no real economic incentive for software development companies to adopt practices that reduce software maintenance costs. Essentially, development and maintenance are usually classed as separate projects and costed separately. There is no incentive to reduce maintenance costs if this involves an increase in development costs.

The only way round this that I can see is for software to be paid for as a service, with no distinction between development and maintenance. This would incentivise software development companies to improve their practice.

Non-existent or weak regulation.

Regulators exist to protect society from the risks of unconstrained capitalism. It is simply not acceptable for companies to behave without regard to the needs of society for safety, privacy and fairness. However, regulation of the software industry is very weak, to the extent that most software licences include clauses that say (I paraphrase) “We know this software has bugs and may fail catastrophically. We take no responsibility for this or any subsequent losses”.

We need regulation to change the current attitude to software development, so that developers take responsibility for their development failures. How to do this without a significant increase in bureaucracy, and without stifling innovation, is an interesting challenge.