“Prediction is very difficult, especially if it’s about the future.” (Niels Bohr)
Those of us making predictions should always bear this in mind.
Chris Parnin recently wrote an interesting and provocative blog post setting out a set of predictions about the future of software engineering over the next 50 years. I found these thoughtful and stimulating – some I agreed with, others may be true in some domains. Some others, like his comments on neural, embodied and augmented programming, I don’t know enough about to come to a judgment. I’ve commented on his blog on some of these predictions and I won’t discuss them further here.
Inspired by Chris’s post and the imminent end of 2013, I thought I would put together my own list of predictions for the future of software engineering. I don’t think that I have the imagination to look forward 50 years, so I have chosen a 20-year timeframe. We will see huge changes in software over the next two decades which, I think, will require us to rethink software engineering and face up to some of the really hard problems of complexity that will arise.
I also think that it is inevitable that a computer system will pass the Turing test and it won’t be as big a deal as it was once made out to be. Computer-based systems will be better than people in an increasing number of areas (not just chess and quiz shows). So-called ‘artificial intelligence’ will simply be mainstream engineering. I don’t know a great deal about AI but I’m fairly sure that this will mean we need new techniques to test and verify systems using this technology.
Many of the changes in software engineering over the next 20 years will be driven by the increasing complexity of the systems that we build. This does not just mean larger systems (although mega-systems will be commonplace); smaller systems will also be much more complex. Complexity is inevitable because connectivity will be universal and when we integrate bits of software that we don’t own and don’t understand, we inevitably get complexity.
So, my top 5 complexity-related predictions are:
1. Living with failure
Large systems are so complex that I think it will be impossible for all of their parts to be working all of the time. Future systems will cope with and automatically recover from failure, carrying on delivering essential services while the causes of the failure are being repaired.
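One well-known pattern along these lines is the circuit breaker: after repeated failures, stop calling the broken part and fall back to a degraded service until it has had time to recover. A minimal sketch (the names, thresholds and fallback here are purely illustrative, not a specific system’s API):

```python
import time

class CircuitBreaker:
    """Wraps an unreliable call: after repeated failures it 'opens'
    and serves a fallback, so the rest of the system keeps delivering
    (degraded) service while the fault is being repaired."""

    def __init__(self, call, fallback, max_failures=3, reset_after=30.0):
        self.call = call
        self.fallback = fallback
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def __call__(self, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return self.fallback(*args)   # open: don't even try
            self.opened_at = None             # half-open: retry once
            self.failures = 0
        try:
            result = self.call(*args)
            self.failures = 0                 # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return self.fallback(*args)
```

The point is that the failure is contained locally and the caller never sees an outage, which is exactly the “carry on delivering essential services” behaviour described above.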
2. Less than 5% of software will be written from scratch
What has really made a difference to software development over the past 30 years is software reuse. The vast majority of business software is not specially developed but involves tailoring and configuring sets of existing systems. Web applications are built using frameworks which include a huge amount of pre-built infrastructure support, and there are libraries for most of the widely used languages to do all sorts of things.
This trend will continue with the development of ‘plug and play’ infrastructures to connect apps and with product families for embedded systems. For the vast majority of systems, there will be no need to ‘write a program’ in a standard programming language.
3. Software health monitoring
In the 1970s, it was predicted that, because of the inadequacies of program testing, formal verification of programs would take over from testing as the principal approach to program V & V. Developments in test automation and testing methodologies, such as test-first development, meant that these predictions were wrong. But they were right about the inadequacies of testing and, as we create systems that are increasingly complex, with only partial specifications, testing simply won’t be good enough.
Rather, we will see a move from development time to run-time assurance with software health monitors becoming the norm. These will continuously analyse interactions between software elements according to the needs of user groups and will both predict problems and automatically invoke recovery actions when things start to go wrong.
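To make the idea concrete, here is a toy sketch of such a run-time monitor: it tracks recent outcomes per component and invokes a recovery action when the observed error rate crosses a threshold. Everything here (window size, threshold, the recovery callback) is an illustrative assumption, not a description of any existing tool:

```python
from collections import deque

class HealthMonitor:
    """Tracks recent success/failure outcomes per component; when a
    component's error rate crosses a threshold, it invokes a recovery
    action instead of waiting for a human to notice."""

    def __init__(self, recover, window=20, threshold=0.5):
        self.recover = recover      # callback: component name -> None
        self.window = window        # how many recent outcomes to keep
        self.threshold = threshold  # error rate that triggers recovery
        self.history = {}           # component -> deque of 0/1 outcomes

    def record(self, component, ok):
        h = self.history.setdefault(component, deque(maxlen=self.window))
        h.append(0 if ok else 1)
        error_rate = sum(h) / len(h)
        # Only act once we have a full window of evidence
        if len(h) == h.maxlen and error_rate >= self.threshold:
            self.recover(component)
            h.clear()               # start afresh after recovery
```

A real monitor would of course analyse much richer signals (latencies, interaction patterns, resource usage) and predict problems before they bite, but the shape – observe continuously, decide, recover automatically – is the one described above.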
4. Simulations will drive decision making
As both computing capabilities and data increase, more and more complex system simulations will be developed to aid decision making. Software engineering as a discipline hasn’t paid much attention to this type of software and I suspect that many current systems are very buggy and unreliable. Better engineering methods and particularly better V & V for simulations are a major SE challenge (BTW, test-first development doesn’t work for this kind of system).
5. Software certification and regulation
Sadly, I think it inevitable that in the near future, a software issue in a robot will lead to a child being killed or seriously injured. I also think it inevitable that there will be a serious software-related failure of our critical infrastructure (maybe started by a cyberattack) that will lead to large-scale casualties and hundreds of millions of dollars of damage.
This will finally prompt politicians to introduce much stricter regulation of software in critical systems and the major cost for these systems will be achieving certification rather than software development. This will also create huge economic opportunities as continuing to run old and insecure systems will simply not be acceptable.
But some things won’t change. Here are my top five predictions for 2034 for things that will still be the same:
1. There won’t be a fundamental scientific basis for software engineering
The lack of such a scientific basis is bemoaned by many commentators and is the ‘Holy Grail’ of the SEMAT project. I don’t anticipate that there will ever be such a scientific basis (or indeed that academics will stop searching for it).
The reasons for this are:
(a) Software is a human invention rather than something in the world and there are no underlying ‘laws’ that constrain its behaviour. This is in contrast to other types of engineering where methods and techniques are constrained by the laws of physics.
(b) Software is about automating what Checkland called ‘human activity systems’. This means that the dominant influence on software is human behaviour, be it the behaviour of developers, users, managers, politicians, etc. We can’t explain human behaviour scientifically so we can’t have a reliable scientific basis for software.
This lack is not something to worry about – we’ve done a remarkably good job in software creation over the last 50 years and there’s no reason to think that the lack of a scientific basis is a constraint.
2. Computers won’t be programming themselves
This prediction stems from the fact that software is about human activity systems rather than something governed by ‘scientific laws’. One of the things that I don’t think automated systems will be able to do in 20 years is understand people and their motivations and this is what you need to do if you are building software, be it simple apps or worldwide control systems.
3. Parallel programming will still be hard
We have struggled with parallel programming for 30 years. We can handle problems that are naturally parallel, such as search, but have made virtually no real progress in making it easier to parallelise other types of computation.
I think this is because our brains have evolved to be goal-oriented – look for food, shelter, etc. – and, from an evolutionary perspective, trying to focus on multiple goals leads to being eaten. This makes it really hard to ‘think parallel’ and maybe a better approach is to focus on how to reconceptualise problems as search problems (e.g. search-based software engineering) so that we can use known parallel techniques.
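The attraction of the search framing is that search is embarrassingly parallel: split the candidate space into chunks and examine them independently. A small sketch of that shape (the example problem and chunking scheme are just illustrations; for a CPU-bound predicate you would use processes rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_chunk(candidates, predicate):
    """Exhaustively test one chunk of the search space."""
    return [c for c in candidates if predicate(c)]

def parallel_search(space, predicate, workers=4):
    """Recast a problem as 'find every x in space with predicate(x)'
    and farm the chunks out to workers -- the naturally parallel
    shape our tools already handle well."""
    chunks = [space[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(solve_chunk, chunks, [predicate] * workers)
    return sorted(x for chunk in results for x in chunk)

# e.g. find all divisors of 360 by brute-force parallel search
divisors = parallel_search(range(1, 361), lambda x: 360 % x == 0)
```

Nothing about the underlying problem changed – only its formulation. Once it looks like search, decades of known parallel techniques apply directly.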
4. We won’t all be developers
Development technology will evolve to the extent that building apps will be possible with very little platform knowledge. Many people will try this out but few will stick to it. Automating their lives isn’t a high priority for normal people.
5. Governments will still get software wrong
Politicians will not understand software and many government software projects will continue to be disasters. Need I say more?