
In a recent post on this blog, I set out my thoughts on how our education system has to change to cope with a future where increasing automation will change and, in many cases, take over current jobs. In that piece, I made the point that we don’t know what we will need to know. Therefore, it is more important to teach people how to learn than to impart specific information.

I have spent my career in computing. As a discipline, it has gone through wave after wave of changes since I wrote my first program in 1970. Consequently, I have been and am still involved in a continuous learning process. I like to think that I have ‘learned to learn’ and I have been reflecting recently on what this means and how I now approach the issue of learning something new.

My first experience of learning to learn came about when I changed discipline from physics to computer science. I was fortunate after graduating to attend a project-based MSc conversion course in St Andrews rather than the more common topic/exam-based courses. The department that I worked in was small and MSc and PhD students and staff intermingled and had time to chat.

My initial impression there was that I had no idea what people were talking about. However, I now realise that by simply sitting listening to conversations about programming languages, compilers and operating systems I was becoming attuned to the area, learning vocabulary and working out who was an expert at what.

When you are learning about a new area, I think that this process of familiarisation, becoming comfortable with what the words mean and with what the area is and isn’t, is important. It isn’t about specific knowledge but about feeling comfortable enough to seek out that specific knowledge from other sources.

Nowadays, this familiarisation process is easier than it used to be and you don’t have to be lucky enough to work in a friendly environment as I did. What you need is information fragments that you can assimilate and, luckily, there are now web pages, videos and blogs that present enough information for you to get a general feel for what an area is about.

Motivation is another key issue in learning to learn. Our general approach to secondary and university education has been that ‘experts’ decide what is worth knowing and deliver this to ‘learners’. These learners often have no idea why they are learning something. The vast majority of information delivered this way is retained for as long as it seems useful (i.e. to pass the exam) and then almost instantly forgotten.

For me, I now only learn something when I see the point of doing so. I’ve been learning recently about the science underlying cooking because there are certain dishes that I think I will cook better if I have this understanding. This helps make the motivation concrete as I can test what I’m learning. This, again, seems to me to be important; I don’t want other people to test me using their parameters, I want to be able to test myself.

I find it quite hard to articulate what, for me, is the next essential element to learning to learn. The best description that I can think of is ‘consequence-free mucking around’. It’s not experimentation as that implies a more coherent and structured process. Young children do this all the time and they are, by far, the best learners. For adults, it means learning in an environment where time pressures do not predominate and where there are no adverse consequences to getting things wrong. If you are learning about technology, it’s about working in a transparent environment where you can see the results of what you are doing, even if these are not the ones that you want.

The final element that has been important to me in learning to learn is external support. We will always come across barriers – things we don’t understand or, especially in computing, technologies that don’t work as we expect. One way of getting this external support is by talking to an expert who can help you solve your problem. However, unless they are attuned to learning, they will probably tell you the answer without telling you where you are going wrong.

Discussing a problem with peers who have encountered the same problem is a more effective learning technique than simply asking an expert to fix things. We are now fortunate that the web and social media make this so much easier than it used to be and, in almost all areas, you can find people who have comparable problems and discuss how they addressed them.

However, I must admit there are some technology problems where rational thought and analysis are unlikely ever to lead to the solution and the best way is simply to ask an expert so that you can get on with something more interesting.

So, for me as a learner, the essential elements are:
1. Familiarisation
2. Motivation
3. Mucking around
4. Support

I certainly can’t claim that this is all there is to learning to learn but I think these are fairly general requirements. I’ll discuss how these might translate to a more formal educational environment in a future post.

Afterword
I read around the area of learning to learn before writing this post and I anticipated that I would simply be restating what is common knowledge in the field. Most articles emphasised the importance of reflection in learning but the authors of these articles never seemed to back this up with examples where they reflected on their own learning. So, there was a good deal less common knowledge than I expected (although I accept that it could be that my learning about learning to learn was deficient).

I also learned that you should not waste time reading articles or tutorials on learning to learn (such as this one from the Open University). These are not about learning to learn but about learning how to do their courses which is not at all the same thing.


An introduction to the Cloud for people who don’t care how it works but want to know how it might be useful.

There are basically two ways in which computers are used:

1. Personal computers which are used by one person (or sometimes a family) and which can store data, run applications and so on.

2. Shared computers, where lots of people share the computer and which usually provide quite specific services. You will often have an account and have to log on to shared computers, but this is not always necessary. Shared computers run software programs that are called ‘servers’. So, a web server is used to distribute web pages to anyone who connects to it (this is an example of a server where you don’t need an account); a print server may be used in an office to allow several people to share a printer; a phone company may offer a customer account server where you can log in and check the calls you have made, pay your bill and so on.

Until quite recently, when a company used servers, the computers that ran the server programs had to be in the company’s computer room. This is still the case for many servers – when you connect to your bank, you are connecting to a server that is in the bank’s IT centre.

Servers provide ‘services’, such as an ‘account service’, on demand to users. But sometimes there is very little demand and, when that’s the case, the computer on which the server runs is doing very little. Companies are paying for computers which, for a large part of the time, are actually doing nothing.

In the early 2000s, this was a major issue for Amazon. They had to have thousands of computers to cope with the high demand at peak times, such as Christmas. This cost a lot so they wondered how they could get some return on this investment.

So, they decided that they could provide ‘servers for rent’, where anyone who needed a server could rent it from Amazon for as long as they needed it and only pay for the time that they used. This was only possible because of the growth of relatively high-speed Internet connections: people who wanted to rent a server accessed it over the Internet.

Amazon devised some clever technology that supported the renting of servers. The computers that ran the servers enabled by this technology became known as the ‘Cloud’. Other companies jumped on this bandwagon and there are now a number of cloud service providers.

So, in a nutshell, the Cloud is a (huge) cluster of computers that runs software that makes it possible for people to rent rather than to buy these computers. 

This means that people who run servers, with services that you can use, have a much easier life. They don’t have to predict how many users they will have – they simply rent more computers to run their services as demand increases. Cloud software makes this possible in seconds, compared to the days or weeks needed to buy and install another computer.  Consequently, lots more companies are now offering services to the public.
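For readers who are curious what ‘renting a computer in seconds’ looks like in practice, here is a minimal sketch using Amazon’s boto3 Python library. It assumes an AWS account with credentials already configured, and the machine image id is a placeholder rather than a real one.

```python
# A minimal sketch of renting a server from a cloud provider (Amazon EC2).
# Assumes AWS credentials are configured locally; the ImageId is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Ask for one small virtual machine. The request normally completes in seconds;
# billing starts when the machine is running and stops when it is terminated.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t2.micro",          # a small, cheap server
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Rented server:", instance_id)

# When demand falls, hand the server back and stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The point is not the detail but the speed: acquiring and releasing computers becomes a program statement rather than a purchase order.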

Why use the cloud?

If you run a business, using the Cloud means that you can rent rather than buy the computers that run your servers, saving the up-front capital investment and the cost of paying for computers that are idle for a lot of the time. But individuals don’t run servers, so what does the cloud mean for them?

People don’t need servers but, to do the things they want to do, they sometimes need a ‘service’. An example is a ‘calendar service’, which maintains an electronic diary for you. Of course, this can run as a program on your PC, but what if you want to access your diary from your phone or share it with your family? If it is a cloud service, it runs on a server in the cloud and so can be accessed from anywhere with an Internet connection.

So, in another nutshell, the benefit of the Cloud for individuals is that it allows them to access and share services across mobile devices, laptops and desktop computers. It also allows you to access specialised services that you might only use occasionally, so that you don’t have to buy a program specially for them (e.g. a service that allows you to create a photo album).

If you use a web browser to read your mail or a calendar on your phone and PC, you are already using cloud services. The ‘rent’ that you are paying is that the companies running these services get information from you that they use to target advertising, etc. So, what people often mean when they say ‘what can the cloud do for me’ is ‘should I pay for some cloud services’.

For individuals, there are two main types of paid-for cloud services:
1. Storage and sync services such as Dropbox, iCloud, OneDrive and Google Drive. These vary in detail but they all let you store information on the cloud (actually on a storage server) and sync that information across registered connected devices. All offer small amounts of ‘free’ storage as a taster then have subscription plans that allow you to rent more space.
2. Software distribution services such as Adobe’s Creative Cloud that allow you to rent software such as Lightroom and Photoshop. You pay a monthly fee and the software is automatically updated with new versions. These services usually also offer some specialised storage and sync facilities, included in the monthly fee.

Whether these are useful for you depends on a number of things.

1. Do you care whether the information on your mobile devices and computers is consistent? Some people enforce their own separation (e.g. ‘I never edit photos on my phone’) so it really doesn’t matter to them.
2. Do you care about having the most recent versions of the apps that you use on your computer?
3. Do you have enough disk space on your computer? If not, you can move rarely used information to the cloud (though it may be cheaper to buy another disk). For example, if you make and edit videos, the clips take up lots of space but you rarely access them after you have completed the final version of the video. They are an example of something you might store on the cloud ‘just in case’ you ever need them again.
4. Are you disciplined in doing backups? Services such as Dropbox allow you to save the entire content of your hard disk on a cloud storage server so if you lose everything (or even lose a few files), you can restore these from the cloud. Restoring a whole disk would take a long time (I estimate 2 weeks at UK broadband speeds) but this might be better than losing everything.
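To put a rough number on that estimate, here is a back-of-the-envelope calculation. The disk size and connection speed are assumptions chosen for illustration, not measurements of any particular service or provider.

```python
# Back-of-the-envelope estimate of how long a full restore might take.
# The disk size and connection speed below are illustrative assumptions.
disk_bytes = 2 * 10**12          # a 2 TB disk
speed_bits_per_sec = 10 * 10**6  # a sustained 10 Mbit/s connection

seconds = (disk_bytes * 8) / speed_bits_per_sec
days = seconds / (60 * 60 * 24)
print(f"Full restore: about {days:.0f} days")  # roughly two and a half weeks
```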

Personally, I pay for Dropbox storage and syncing, which costs about $8/month for more storage than I will ever use. The syncing is the most critical for me and Dropbox, in my view, does this better than the others. Dropbox on its own is NOT a backup service – backup services exist but I have no experience of them. If you use Apple devices, iCloud is seamless but I don’t like the fact that you can’t access individual files from the Finder.

I also pay Adobe for a Creative Cloud photography service, which gets me the latest versions of Lightroom and Photoshop. This is about $10/month.

Ultimately, I believe that we will all only use cloud storage and that local disks will be obsolete. But this won’t happen until we have much faster broadband which won’t be for a while in the UK. I think that all ‘paid for’ apps will move to a cloud distribution model within the next few years.


I have written recently on how I think work will change in future. Production costs for goods and services will decline significantly as work is automated, but there will be an enormous dislocation in the jobs market, with millions of jobs being automated out of existence and other jobs changing as automated assistants are introduced.

This does not have to mean the ‘end of employment’. New opportunities in, as yet, unthought of areas will come into existence and we can, if we choose to do so, invest the proceeds of automation in job creation rather than the enrichment of the already wealthy. But, to do so, we will need to completely rethink what we mean by ‘education’.

Formal education, in most countries, is generally seen as something that happens to people in the first quarter of their life. After people reach their early twenties, support for formal education ends. There is little or no government support and, often, students older than 25 are simply not catered for by college and university courses.

This model is simply unworkable in a society where automation is developing and more and more ‘traditional’ jobs are automated. If people who are displaced by automation are to have a meaningful existence, there must be opportunities for them to learn new skills and to retrain in new areas.

The first key requirement, and perhaps the most challenging for the ‘education system’ to take on board, is the need for continuing, lifelong adult education. Without this, we are condemning a significant and increasing fraction of our fellow citizens to a life of unemployment.

The reality of lifelong education is that people will have to take responsibility for their own education. Yes, there must be government and employer support but, much more than now, self-learning will become essential.

Our current educational system, at least in the UK, has never been much good at helping people to ‘learn to learn’. If anything, this situation has got significantly worse as educational institutions are judged on how many of their students achieve high grades in national examinations. This government idiocy, which is a largely meaningless measure of the quality of a school or university, has meant that students are taught to pass exams, not to develop skills in learning.

The second key requirement, therefore, for a future education system is teaching students how to ‘learn to learn’. Without these skills, students will continue to be deluded by superficial educational packages offered by charlatan ‘education providers’.

Much of our educational system currently involves the imparting of ‘information’. While there is no doubt whatsoever that some information is vital, a great deal of educational time is spent telling students about things they could simply find out by other means. ‘Information-oriented’ jobs are candidates for automation, so we need to re-orient education away from information towards (a) core skills and (b) creativity.

So my third key requirement for education is to define and focus on core skills, with a particular emphasis on encouraging and developing creativity in students.

Core skills are an essential basis for learning and for making sense of the vast volumes of information that we are faced with; creativity is one thing that distinguishes people from machines. The ‘creative industries’ will be one area where there is scope for expanding employment opportunities and we need to provide lots more opportunities for students to develop their creative abilities.

What constitutes ‘core skills’ is a controversial issue and I won’t go into this in much detail in this post. I’ll mention only one here, which is sadly lacking even in many so-called educated people such as journalists: what might be called ‘critical numeracy’.

Critical numeracy is the ability to look at conclusions drawn from data and assess whether or not these conclusions are reliable. It means understanding that the value of an ‘average’ can be quite different depending on whether ‘average’ refers to mean or median; it means understanding that statistics is about populations and that because 3% of a population has condition X, this is NOT the same as saying that an individual has a 3% chance of developing condition X.
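A tiny example makes the mean/median point concrete. The salary figures are made up purely for illustration.

```python
# Made-up salaries: nine modest incomes and one very large one.
from statistics import mean, median

salaries = [20_000] * 9 + [1_000_000]

print(mean(salaries))    # 118,000: the 'average' a headline might quote
print(median(salaries))  # 20,000: what the typical person actually earns
```

Both numbers are ‘averages’, but they tell very different stories about the same data.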

I will revisit these requirements and discuss some of their implications in future posts.

The end of a year often encourages thinking about the future. My reflections here, however, are not purely objective as my daughter is expecting her first child and our first grandchild in Spring 2017.  Assuming that we are not wiped out by runaway climate change or nuclear war, how will the world of work look for that child when he or she becomes an adult?  

I believe it will be radically different and that we have to start planning for that future now. Sadly, I see no evidence whatsoever that today’s politicians, whatever their leanings, understand this and they remain stuck in a 20th century mindset. 

I came into computing in 1970, part way through the first ‘information revolution’. New technology always changes the world of work and the introduction of computers into industry displaced a large number of people from clerical jobs. However, 1970s computers needed a lot of support and new jobs were created. This, along with rapid economic growth, meant that we did not see a large rise in automation-created unemployment.

The second information revolution happened in the 1980s with the advent of microprocessors. Not only did this lead to personal computers, it also made manufacturing automation economically realistic for all sizes of business. This led to a loss of manufacturing jobs but PCs made the expansion of the service economy possible so many new jobs were created. However, those displaced from manufacturing generally had the wrong skills and background for these service jobs. A new workforce was created, which had a far higher proportion of women. In many traditional manufacturing communities, dependent on heavy industry, the women rather than the men became the main breadwinners. Some of these communities have never recovered from the disruptions that this caused.

The third revolution, which is still going on, was facilitated by the ubiquity of the Internet. This made globalised supply chains possible, with the displacement of the majority of the remaining manufacturing jobs from the West to Asia. More skilled jobs were lost but the Internet allowed a further expansion of service industries. Again, we saw widespread changes in employment but not a net loss of jobs.

So, successive automation revolutions have created as many jobs as they have displaced. Many commentators therefore predict that the next emerging information revolution, facilitated by artificial intelligence, will have comparable consequences. Yes, jobs will be lost but new jobs will be created, partly in new areas and partly because of the economic growth stimulated by new technology.

Whilst I’m all for optimism, I think this view of technological change is unrealistic. The fourth information revolution will affect the service sector and the remaining manufacturing industry and vast numbers of people will lose their jobs. Initially, this will largely affect the least skilled jobs – Amazon shelf pickers, supermarket checkout operators, call centre workers and taxi drivers. But by the time my grandchild reaches adulthood, the effects will be felt in skilled jobs that require deep information processing – lawyers, doctors, HR managers, accountants and so on.

Anyone who denies these job losses will happen is simply kidding themselves. The key question is not ‘will we lose jobs’ but ‘can we create new jobs to replace them’. Of course, new jobs will be created – just as the internet led to jobs in search engine optimisation and web design, the AI revolution will create jobs such as AI managers and regulators, robot teachers and so on. Hopefully, technology-facilitated economic growth will be used to increase the provision of socially valuable jobs in areas such as healthcare and education and will support the creative industries.

But there is really nothing on the horizon on the scale of the service industry. I find it very hard to believe that enough jobs will be created to replace those displaced. Remember that, while technology may not replace highly skilled jobs, it will definitely create assistants for them. Take education, for example. I think we’ll have the opportunity to improve education but this may not lead to more teaching jobs. Instead, automated systems will take over tasks such as grading assignments, leaving teachers to the arguably more important job of stimulating their students’ interest in a subject.

I don’t believe that my grandchild will inhabit a world of robot teachers and machines that care for the young and old. Whether automation will ultimately make these possible I don’t know but I believe that, as a social species, we will reject these technologies. But for sure, he or she will have to be highly educated and adaptable to find a niche in 2040s society. And, they will have to change their job regularly throughout their working life (as I have had to).

It is a truism of course that all predictions are difficult, especially those about the future. We can’t say for sure what the future will bring and what new technologies will emerge. But we know from our history of computing technology that the changes will be profound and they will take place over 10s not 100s of years. If our children and grandchildren are to have a future where they are happy, healthy and fulfilled, we have a responsibility to think about and promote the social changes that will be required to make this possible. 

This post has been prompted by a discussion about wind power that I had recently. I have made clear that I am totally opposed to the construction of wind farms on ‘wild’ land. However, I am not opposed to wind turbines in principle. I think there is a place for a (relatively small) proportion of our energy supply to be met by wind energy. After making this point, it was recommended that I look at a number of articles which, it was claimed, ‘proved’ that wind power could never be economic.

I looked at these articles and the thing that I found most striking was that the authors looked at wind power in isolation rather than as part of a wider power generation system. They were either ignorant of, or deliberately chose to ignore, critical factors that, in practice, mean their analyses are of no real value.

The energy supply system in a developed economy is a complex system. A complex system is one with a large and dynamically changing number of components and relationships between these components. Because of the dynamic nature of these systems, it is theoretically impossible to have complete knowledge of them, so we can never produce a completely accurate mathematical model of their behaviour or other characteristics, such as whole life-cycle costs.

Complex systems generate enormous volumes of data. This data is rarely consistent and is frequently contradictory. Because of this, commentators with an axe to grind can usually cherry pick this data to support their own views. Hence, there are pro/anti nuclear articles which appear to be based on objective data analysis. When the alternative perspective is pointed out to those with one set of views, their reaction is often to ignore it or to rubbish the alternatives.

To provide a national energy supply, we cannot simply make decisions based on the cost of a generation technology – we must also consider its availability (can it deliver power when needed?) and its political and environmental risks. Availability is a key issue as continuity of supply is essential for the functioning of our society.

For example, we need to take into account the possibility of a nuclear accident, the possibility that political factors will cut off imported gas supplies, the possibility that widespread still weather will mean that wind turbines don’t generate, and so on. There are lots of other issues and risks – some of which we DO NOT KNOW.

We can consider all of these other factors as RISKS – things that might happen which will lead to additional costs. A conventional economic approach tries to work out these costs and includes them in cost computations. However, if you try and do this in a large complex system such as energy supply, there are so many uncertainties that the resulting conclusions are completely unreliable.

In complex systems, there are things which are unknown and actually unknowable. Many classes of risk fall into this category. So, any assessment of the risk that, say, Russia will cut off gas supplies to Western Europe, causing a huge spike in gas prices, is no more than a guess. Therefore, because we cannot reliably assess risks or their consequences if they arise, we simply CANNOT accurately model the whole life-cycle costs of ANY power generation technology.

But we need to build a reliable power generation system so what do we do? We use two techniques that are fundamental to building any reliable system – redundancy and diversity. In power system terms, this means building more capacity than we need because we know we will have to cope with outages and using different technologies to generate electricity. This means that if there are problems that affect one kind of technology, then we don’t lose everything.
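A crude sketch shows why diversity helps. The outage probabilities below are invented and the assumption that outages are independent is a simplification (real technologies share common-mode risks), but the shape of the argument is the point.

```python
# Crude illustration of diversity in a generation mix.
# The outage probabilities are invented and independence is assumed,
# which understates real common-mode risks.
p_outage = {"wind": 0.20, "gas": 0.05, "nuclear": 0.02}

# Probability that every technology is unavailable at the same time.
p_all_down = 1.0
for p in p_outage.values():
    p_all_down *= p

print(f"Wind alone:           {p_outage['wind']:.1%} chance of no supply")
print(f"Diverse mix of three: {p_all_down:.3%} chance of no supply")
```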

This need for diversity means that the Scottish Government policy that a very large proportion of our energy needs should be met from wind energy is highly risky and means that Scotland sometimes has to rely on imported energy. It also means that France’s reliance on nuclear energy is unwise, although less so because so much work has been done on understanding nuclear risk. And it means that government energy subsidies for, say, nuclear energy may make sense even though there seem to be ‘cheaper’ generation alternatives.

This post is already long enough and I don’t want to go into my opinions of what technologies we should be using or to go into more detail of complex systems issues. But there are two takeaways from this:

  1. Most so-called economic analyses of power generation technologies are incomplete and only consider the price of these technologies. Price should be an input to the decision-making process but should never be the sole driver of that decision making.
  2. There is no such thing as an objective view of a complex system. We all can only have incomplete views that are biased by our existing knowledge and prejudices. We cannot ‘prove’ that people who argue for/against wind power, nuclear power, etc are right or wrong. They see the system in different ways and simply don’t listen to contrary arguments because these do not fit in with their world view.

It is perfectly valid to write articles, papers and blogs which discuss the costs of power generation technologies. But some people who write such articles then go on to draw more general conclusions about these technologies in a broader power generation system, e.g. that we should never build wind turbines. If these articles don’t acknowledge systemic uncertainties and risks and take these into account, then I advise you to treat their conclusions with the contempt that they deserve.

So what do I know?
I worked in the engineering of reliable, safety-critical systems (e.g. air traffic control systems) from 1990 to 2005. From about 2003 until I retired in 2014, I studied and analysed large scale, complex socio-technical systems. My father, as a consulting engineer, worked on every large and many small power stations built in Scotland between 1950 and 1975. We had lots of conversations about power stations as I was growing up and I think he hoped I would be a power engineer. To his disappointment, I followed the siren song of software engineering. Sadly, he didn’t live long enough to realise this was probably the right decision.


In an article entitled ‘From Software Development to Software Assembly’ (currently accessible w/o login) in the most recent issue of IEEE Software, Harry Sneed and Chris Verhoef argue that increasing software maintenance costs combined with skill shortages mean that organisations have to move from original software development to much more extensive use of pre-packaged, off-the-shelf software. Fewer people are required to maintain such systems and overall costs are reduced.

Now I have been reading articles like this which advocate universal software reuse for at least 30 years and it is so self-evidently sensible that the key question we have to ask is why hasn’t this happened?

Well, to a significant extent it has happened. Fears of a ‘millennium bug’ in 2000 prompted many organisations to ditch their existing software and replace this with packages from companies such as SAP and Oracle. Increased use of these ‘enterprise systems’ has continued. My experience is that in sectors, such as healthcare and education, where software provides an essential service but is not an integral part of business innovation, use of packaged software for new systems is now pretty well universal.

However, the authors are right that there is still a lot of software that COULD be standardised but which is being developed as ‘original programs’ rather than using off-the-shelf packages. Given the benefits of standardisation set out in the IEEE Software article, why does this practice continue?

I think that there are four main reasons for this:

  1. Packaged solutions include embedded assumptions about business processes and organisations have to adapt their processes to fit in with these assumptions. In some cases, this adds cost rather than reducing it. I worked with a project where £10 million was spent on a packaged solution that, after 4 years, delivered less functionality than the systems it replaced. The key problem here was the mismatch between the process assumptions in the software and the processes actually used. Changing those processes to fit the software was practically impossible as it would have required major regulatory change.
  2. The move towards agile development is not really consistent with ERP system procurement. The message that organisations have picked up from the agile community is that requirements and software to implement these requirements can be developed incrementally. So, there may never be a ‘complete’ requirements specification for the system or, if there is, it isn’t available until the software is finished. This precludes expensive packaged software as it is essential to understand almost everything that the software is required to do before committing to an ERP system. I don’t know how easy it is to use modern ERP systems for agile development but I suspect that agile approaches are not widespread amongst ERP system users.
  3. The short-term model of business, where shareholders expect very rapid ROI, means that managers, in many businesses, are not rewarded for savings that might take years to accrue. In fact, they resist risky packaged software because they know they’ll be blamed for cost overruns but will be doing some other job by the time any costs are saved. This has been the principal reason why there has never really been widespread investment in engineering for maintainability.
  4. Increasingly, businesses and other organisations rely on software for innovation. Gaining a competitive edge is harder when everyone is using the same package and it’s easy to replicate what competitors have done. Therefore, even if there are long-term costs, the ability to create new software that’s different may lead to significant short-term benefits.

There are also other social and political reasons why standardised software is rejected by both developers and managers but I don’t think these are as important as the reasons above.

Will the situation, driven by skills shortage, change as Sneed and Verhoef suggest? Maybe skills shortages will be the catalyst for change but I am not optimistic about this. Software-driven innovations will become increasingly important for all industries and this factor alone means that we’ll be developing and not simply assembling software for many years to come.


Yesterday (August 16th 2016), Ford set out their goal of producing a high-volume, fully autonomous vehicle for ride sharing by 2021. This led to a number of follow-up articles including this one, which suggested that Ford’s vision would be counter-productive and would lead to a major drop in the number of privately owned vehicles and hence major problems for large car makers.

This is consistent with Uber’s long term goal of replacing private transport with on-demand cars and, certainly, fully autonomous vehicles would significantly reduce Uber’s costs. But it seems to me that the notion that taxi services such as Uber will make a huge difference to the number of private cars is based on a technical rather than a socio-technical analysis.

Let’s start with the technical analysis. The utilisation of private cars is incredibly low – mostly cars are parked and according to the above analysis the capacity utilisation is about 3%. It makes no sense economically to have a car if services like Uber are readily available with the low costs that fully autonomous vehicles will allow. So, your car will take you to work but instead of parking will then go off and take someone else to work.

BUT, if you live in a city with reasonable public transport and taxi services, it already makes no sense economically to have a car. The costs of car ownership significantly exceed the costs of using public transport and taxis. Yet people still insist on having a car. Why?

  1. A significant number of people don’t live in cities but in suburbs or outside of the city. The problem with services like Uber is that in less populated areas, there isn’t enough demand for local vehicles so you have to wait a while for a car to arrive (typically, where we live on the outskirts of a city, we have to wait 10-20 minutes for a taxi). People who can afford it have a car because they don’t like the inconvenience of waiting. This will not change with driverless cars.
  2. People who live in a city often have leisure interests that take them outside of the city – they go walking in the hills, sailing, take their kids to the beach and so on. Taking public transport is a pain and, while renting a car is possible, it precludes last-minute decision making. Maybe if there was widespread coverage from companies like Uber outside the cities people would use them but I suspect they would still prefer the convenience of their own vehicle.
  3. A key benefit of car ownership is that it allows you to be reactive. You discover that you are lacking a vital ingredient for a recipe you are cooking, so you jump in your car to get it from the local shop; your son calls saying his football game has finished early and needs to be picked up earlier than planned; the sun is shining so you decide to drive to a local forest for a walk in the woods. The key point here is once you make a decision, you don’t want to wait and you want to be sure that transport is available – something that an external service will never be able to guarantee.

These reasons are all about convenience – we are willing to pay significantly for convenience and Uber-like services, autonomous or not, won’t change this.

Some time ago, I was involved in discussions about designing a computer-based system for so-called congestion pricing for road usage. The idea was to have a dynamically changing price for driving so that people changed their behaviour and staggered their journeys to work. We concluded it would not work for one simple reason: kids all start school at around the same time.

While, for some jobs, the hours of work could certainly be staggered, this is not really an option for kids going to school. Many parents drive their children to school on their way to work or leave for work after their children leave for school. They don’t particularly want staggered working hours because they have to fit their working time around school days. So, we reckoned that while congestion pricing would theoretically work, its effect in practice would be quite limited.

The same issue affects demand for Uber-like services. To meet the peak demand in the morning and evening, car services would have to hugely over-provision and have an immense amount of spare capacity sitting around most of the time waiting for a call. The economics of this don’t make any sense so there will never be enough taxis to meet the personal transport demands at peak times. So people will continue to have a car because they can’t take the risk of not getting an Uber – their kids would be late to school. And, if they have a car, they will use it if possible.
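A simple worked example shows the shape of the problem. The numbers are invented purely for illustration; the point is the ratio between peak and off-peak demand.

```python
# Invented numbers illustrating the peak-demand problem for an on-demand car service.
# A fleet sized for the school/work run sits mostly idle for the rest of the day.
peak_trips_per_hour = 10_000      # assumed 8-9am demand
offpeak_trips_per_hour = 1_000    # assumed mid-morning demand
trips_per_car_per_hour = 2        # assumed short urban trips

fleet_for_peak = peak_trips_per_hour // trips_per_car_per_hour
cars_busy_offpeak = offpeak_trips_per_hour // trips_per_car_per_hour

idle_fraction = 1 - cars_busy_offpeak / fleet_for_peak
print(f"Cars needed for the peak: {fleet_for_peak}")
print(f"Idle off-peak: {idle_fraction:.0%} of the fleet")  # 90% sitting around
```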

For a number of reasons, I think there will be falling demand for personal vehicles in the developed world. Cities, increasingly, are actively discouraging car ownership and better connectivity makes it possible for more people to work from home. Some people would not be significantly inconvenienced without a car and more and more of them will give up their vehicles. But this is not dependent on vehicle autonomy.

I reckon Ford have identified a market opportunity for driverless vehicles for ride sharing although I suspect that, while the technology may be available in 2021, the regulation simply won’t allow this. But, when it happens, I doubt if it will really make much of a dent in private car ownership.


As I have said in previous posts, I am convinced that the most pressing educational challenge for the 21st century is to develop effective and efficient ways of delivering continuous adult education. Without these, we will consign an increasing number of our citizens to the ‘digital scrapheap’ as they are replaced by ever-more powerful digital technologies.

To provide this type of education, I see no alternative to using digital learning technologies to provide the educational experience that people need. Face to face delivery simply does not scale to this challenge.

UK Universities have, disgracefully in my view, largely disassociated themselves from both the delivery of adult continuous education and research in this topic. There are, of course, obvious exceptions to this such as the OU, but in general the ‘market-driven’ approach to higher education has driven universities to focus almost exclusively on full-time education for people under 25.

There seems to be very little research in the general area of adult continuous education, so we have to turn to more general research on digital education to try to get some clues about how to tackle this problem. A research team that is active in this area is the digital education group at Edinburgh University. They have been researching digital education issues for a number of years and have recently updated their ‘manifesto for online teaching’, which was first proposed in 2011.

Manifestos, by their nature, have to be quite generic (OK, waffly) but I think this is (or could be) a useful document that provides a framework for people working in this area to reflect on what they are doing and plan to do.

However, as I suggest in the title of this post, what has been produced is a bit of a mixed bag. Some statements get it absolutely right, some are, in my experience, quite wrong and some are really just a bit daft.

I won’t go through this manifesto statement by statement but will pick out some examples of the good, the bad and the bonkers.

“There are many ways to get it right online. ‘Best practice’ neglects context.”

This is spot-on and really important. When we are dealing with heterogeneous groups of learners, I don’t think ‘best practice’ makes much sense. Even less so when this ‘best practice’ is derived from work with children and young adults and we try and apply it in an adult learning context. However, I would add ‘and culture’ to this statement.

“Aesthetics matter: interface design shapes learning.”

This is so important and so hard to do. Sadly, the designers of the digital learning systems that I have seen for higher education don’t seem to understand this. Unfortunately, good interface design is very expensive and time consuming and clashes somewhat with agile, iterative systems development that has become the norm for this type of software.

“Contact works in multiple ways. Face-time is over-valued.”

I have no idea what the first clause here means but I think that the second is absolute nonsense. Face time is certainly NOT over-valued although it might be argued that ‘face time’ spent delivering material is of dubious value. But when students have understanding problems, simply talking face to face and understanding their concerns beats all digital interactions. Of course, it doesn’t scale – but that’s not the same thing.

“A routine of plagiarism detection structures-in distrust.”

This is wrong and, in fact, using plagiarism detection promotes trust in the fairness of assessment (I taught courses with plagiarism detection for more than 10 years). The vast majority of students do not plagiarise and resent their peers who do and are rewarded for doing so. Plagiarism detection reassures this larger group that there is a level playing field for everyone. I don’t recall any student ever complaining about plagiarism detection, even those who it detected.

“Algorithms and analytics re-code education: pay attention!”

This strikes me as complete gibberish – I don’t know of any educational practitioner, teacher or university lecturer, who would have the faintest idea what this means.

“Automation need not impoverish education: we welcome our new robot colleagues.”

I don’t know if the author here has swallowed some of the more ludicrous statements about AI but, really, we are a very long way from having robot colleagues in the sense that I understand the word ‘colleagues’. Automated systems can make some data-driven decisions but they can’t understand the concepts of policy, culture, empathy, tolerance and politics that are an inherent part of both working with colleagues and involving them in learning processes.

In themselves, these odd statements are not that harmful. However, the danger with including such statements is that educational practitioners who read this manifesto will simply dismiss this as the eccentric ramblings of academics who are divorced from the real world of learning. This would be a pity as digital education can only succeed if we bring together (sceptical) practitioners and researchers at all levels.

I have been thinking for some years about how we can use digital learning effectively for adult learners, specifically on how to learn computer programming. This manifesto, in spite of its flaws, was helpful as it articulated some of the issues that I have struggled with. I don’t have any answers here but when I finally get round to creating something, I will turn to it as a checklist to help me assess how I’m getting on.


I’ve written this post from a Scottish perspective and specifically discuss the digital skills shortage in Scotland. However, I think that the problem is much wider, that government responses everywhere are equally unimaginative and that innovative approaches to continuous adult education are required across the world.

It is generally accepted that there is a worldwide shortage of people with ‘digital skills’. Specifically in Scotland, it has been suggested that, in a tech sector that employs about 84,000 people, there is a requirement for about 11,000 more people in the near future. Whether or not this number is accurate I don’t know (I suspect it’s a bit high) but for sure we need more people who are trained to work in the tech industries.

However, in Scotland and I think elsewhere in the world, both the government and industry response to this shortage has been profoundly unimaginative and based on ill-informed guesswork rather than an analysis of the real requirements of companies and a ‘joined up’ view of skills education.

One problem that I have with the current response is that the term ‘digital skills’ is bandied about without ever really saying what this means. It is generally taken  that ‘digital skills’ equates to programming. However, if we look at ‘digital skills’ as a whole, there’s a lot more to it than simply programming. What is the balance of requirements for people with conventional programming skills (e.g. Java), real-time systems engineering (which needs hardware as well as software understanding), web development, digital marketing, systems architecture, etc.? Maybe such information exists (I haven’t seen it) but if it does, those who simply equate digital skills and programming don’t seem to have paid much attention to it.

The most visible response of governments to the ‘skills shortage’ is to promote the teaching of computer science in schools. This has led to a new computer science curriculum and, most recently in Scotland, an announcement that a new fund is to be made available to widen access to extracurricular technology activities. As a software engineer, both of these seem to me to be absolutely good things – the more young people that understand computers and how to program the better. However, I do not believe for one minute that this will make much difference to the digital skills shortage.

I don’t know what it’s like in other countries but the Scottish government has NOT introduced any programmes to encourage computer science graduates into teaching; teachers’ salaries have been effectively frozen in recent years and fall far behind the salaries that CS graduates can attract in industry. Why should a graduate spend another year in training for a starting salary that is 25% or more less than they could earn immediately after graduation?

So – who will teach computer science in schools? Undoubtedly, existing teachers who will do their best but who will have, at best, some in-service CS training and nothing like the knowledge of a computer science graduate.  We could see a situation where poorly-trained teachers put off more students than they inspire.

In Scotland, Scottish university students do not pay fees but the number of local students funded by the government is capped. There has been no significant increase in the number of places in computer science degrees for Scottish students (in spite of a rise in applications) so, if the school initiatives enthuse students, where are they to go? It’s not enough to stimulate new applications; applicants need to believe that there will be a university place for them.

Across the world, there is a reluctance for young women to sign up for computer science degrees. I’ve written before about this problem and I believe that this is the most pressing issue that we should be trying to address.  The usual response to this problem seems to be simply promoting role models for women to encourage more women into the tech industry. This can’t do any harm but has so far had very little effect and I doubt if it really will make much difference. What could make a difference is making it easier for students who start in some other discipline to switch to computer science and to encourage a wider range of ‘joint degrees’ where students study CS alongside some other subject. But this needs dedicated funding that will encourage universities to provide such courses and encourage women to take them.

All of this exemplifies the lack of joined up thinking in governments who simply do not understand that education has to be looked at as a whole; tinkering with one bit without considering the big picture is unlikely to have the effects desired.

Furthermore, education is not just a stage in life between the ages of 4 and 22. If we are to adapt to a rapidly changing world, education HAS to be a lifelong process with opportunities for people to learn new skills throughout their career.

In this respect, I had some hope that the Scottish Government might just understand the problem when they announced £6.6 million of funding in 2014 to help tackle the digital skills shortage. This was an opportunity to develop an imaginative response to the skills shortage but, sadly, it was not to be. What did we get? Another coding bootcamp that offers intensive courses to train about 150 people per year to develop web applications in Ruby. Hardly an innovative or imaginative approach.

Maybe this will address the market need for Ruby programmers? But what about the systems engineering skills that manufacturing companies need, what about mainstream software development, what about maintaining the millions of lines of existing code, which is a nightmare for all companies? We have missed an opportunity to understand what skills (not just programming) are really required by industry and to develop a world-leading approach to meeting these requirements.

Effective adult education needs to recognise that education and work go together. A full-time process for adult education is neither practical nor effective. To make a real difference to the number of people with digital skills, we need a different approach. We need to innovate and to provide a vehicle that blends electronically-mediated and face-to-face learning to allow ‘continuous education’ – to bring people into the tech industry and to update the skills of people already there.

While I applaud the efforts to widen the teaching of computer science in schools, I think that industry and government are kidding themselves if they believe that this will solve the digital skills shortage. We need a committed programme of adult education to encourage and support people who want to develop new skills. We need to abandon the idiotic notion that only young people can program and provide opportunities for all ages to learn digital skills. Industry needs to develop approaches to recruitment based on testing the skills of applicants and not simply taking the easy way out of rejecting those without formal qualifications.

The digital skills shortage is a short-term problem but the solutions we need for this problem will have longer-term applicability.  The potential of AI to destroy jobs has, in my view, been over-hyped but, for sure, AI-driven automation will lead to major changes in the professional jobs market. We should be planning for re-education now not waiting for the problem to happen.

We need a new approach to digital skills education – the old ways neither work well nor scale to the number of people who we need and who will want tech education in the future.  I don’t have easy answers to the problem but I am convinced that the place to start looking is in the open-source movement.  Open source has revolutionised software and I think that there is the same potential for ‘open-source, continuous education’. This is the only practical way of addressing the short-term problems of a digital skills shortage and a longer term goal of reskilling people as traditional jobs are automated.

I’ll write more about this in future posts.


My blog post the other day about giving up on test-first development attracted a lot of attention, not least from ‘Uncle Bob’ Martin, an agile pioneer who wrote an entertaining riposte to my comments on his ‘Clean Code’ blog. He correctly made the point that my experience of TDD is limited and that some of the problems that I encountered were typical of those starting out in TDD.

1. I said that TDD encouraged a conservative approach because developers (at least those who think the same way as me) were reluctant to break a lot of the developed tests. Bob suggested that the problem here was that my tests were too tightly coupled with the code and that if tests are well-designed then this shouldn’t be too much of a problem. Looking again at my tests, I reckon that they are too tightly coupled to the code and they can be redesigned to be more robust.

So, I think that Bob’s right here – this is a problem with my way of thinking and inexperience rather than something that’s inherent in TDD.
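To make the coupling point concrete, here is a hypothetical example (not code from my project). The first test knows about the class’s internal data structure and breaks whenever that structure changes; the second only checks behaviour through the public interface, so it survives refactoring.

```python
import unittest

class ShoppingBasket:
    """A trivial class used only for illustration."""
    def __init__(self):
        self._items = {}          # internal detail: name -> price

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())

class TightlyCoupledTest(unittest.TestCase):
    # Breaks if the internal dict is renamed or replaced with a list.
    def test_add_updates_internal_dict(self):
        basket = ShoppingBasket()
        basket.add("tea", 2.50)
        self.assertEqual(basket._items, {"tea": 2.50})

class BehaviouralTest(unittest.TestCase):
    # Uses only the public interface, so refactoring doesn't break it.
    def test_total_reflects_added_items(self):
        basket = ShoppingBasket()
        basket.add("tea", 2.50)
        basket.add("milk", 1.20)
        self.assertAlmostEqual(basket.total(), 3.70)

if __name__ == "__main__":
    unittest.main()
```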

2. I made the point that TDD encouraged a focus on detail because the aim was to write code that passed the tests. In fact, one of the things I read when getting started with TDD was ‘Uncle Bob’s three rules of TDD’:

  • You are not allowed to write any production code unless it is to make a failing unit test pass.
  • You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
  • You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

If this isn’t advocating a focus on detail, then I don’t know what it’s saying.

Bob says that ‘Code is about detail; But this doesn’t mean you aren’t thinking about the problem as a whole’. But how do you think about the problem? Maybe Bob can keep it all in his head but I think about problems by developing abstractions and denoting these in some way. I don’t like notations like the UML so I do it as a program. So how do we think small AND think big when we aren’t allowed to write code that isn’t about passing tests? Or have you changed your mind since writing your three rules of TDD, Bob?
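To show why I read the rules as a focus on detail, here is the kind of step they prescribe. This is a hypothetical fragment written to illustrate my reading of the rules; it is not an example of Bob’s.

```python
# A hypothetical illustration of the step size that the three rules prescribe.

# Step 1: write just enough of a test to fail (is_leap_year doesn't exist yet).
def test_century_leap_year():
    assert is_leap_year(2000) is True

# Step 2: write only enough production code to make that one test pass.
def is_leap_year(year):
    return True

# Step 3: the next failing test forces the next sliver of detail, and so on.
# Nothing in the cycle asks you to step back and consider the design of the
# whole date-handling module.
def test_century_non_leap_year():
    assert is_leap_year(1900) is False
```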

3. I made the point that TDD encouraged you to choose testable designs rather than the best designs for a particular problem. Bob was pretty scathing about this and stated unequivocally:

“Something that is hard to test is badly designed”

But we know that systems made up of distributed communicating processes or systems that use learning algorithms are hard to test because they can be non-deterministic – the same input does not always lead to the same output.  So, according to Bob,  system designs with parallelism or systems that learn are badly designed systems.  Bob, I reckon you should take this up with the designers of AlphaGo!

4. I said that TDD didn’t help in dealing with unexpected inputs from messy real data. I don’t think I expressed myself very well in my blog here – obviously, as Bob says, TDD doesn’t defend against things you didn’t anticipate but my problem with it is that proponents of TDD seem to suggest that TDD is all you need. Actually, if you want to write reliable systems, you can’t just rely on testing.

Bob suggests that there’s nothing you can do about unanticipated events except try to anticipate them.  To use Bob’s own words, this is ‘the highest order of drivel’. We have been building critical systems for more than 30 years that cope with unexpected events and data every day and carry on working just fine.

It’s not cheap but we do it by defining a ‘safe operating envelope’ for the software and then analysing the code to ensure that it will always operate within that envelope, irrespective of what events occur. We use informal or formal arguments, supported by tools such as static analysers and model checkers, to provide convincing evidence that the system cannot be driven into an unsafe state whatever events occur.
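As a rough illustration of the style of defence (a sketch, not code from any real safety-critical system): every input is checked against a defined envelope before the control logic ever sees it, so malformed or out-of-range data cannot drive the system into an unsafe state. The ranges and names here are assumptions for the example.

```python
# A sketch of envelope checking: the controller only ever acts on values that
# lie inside its defined safe operating envelope. Ranges are illustrative.
SAFE_ALTITUDE_RANGE = (0.0, 15_000.0)   # metres, assumed envelope
SAFE_SPEED_RANGE = (0.0, 350.0)         # m/s, assumed envelope

def within(value, low, high):
    return isinstance(value, (int, float)) and low <= value <= high

def handle_reading(altitude, speed, last_good):
    """Return a reading guaranteed to be inside the envelope.

    Unexpected or malformed inputs never reach the control logic; we fall
    back to the last known good reading and flag a fault for monitoring.
    """
    if within(altitude, *SAFE_ALTITUDE_RANGE) and within(speed, *SAFE_SPEED_RANGE):
        return (altitude, speed), False
    return last_good, True

# A corrupted sensor message cannot push the system outside its envelope.
reading, fault = handle_reading(altitude=-9999.0, speed=120.0,
                                last_good=(1200.0, 118.0))
print(reading, "fault:", fault)         # (1200.0, 118.0) fault: True
```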

That’s how we can send systems to Mars that run for years longer than their design lifetime. Accidents still happen but they are really very very rare when we put our mind to building dependable systems.

Just a final word about the space accidents that Bob quotes. I don’t know about the Apollo 1 fire or the Apollo 13 explosion but the Challenger and Columbia disasters were not unanticipated events. Engineering analysis had revealed a significant risk of a catastrophic accident and engineers recommended against the launch of Challenger in low temperatures. But NASA management overruled them and took the view that test results and operating experience meant that the chances of an accident were minimal. These were good examples of Dijkstra’s maxim that:

Testing shows the presence but not the absence of bugs

I think that TDD has contributed a great deal to software engineering. Automated regression testing is unequivocally a good thing that you can use whether or not you write tests before the code. Writing tests before the code can help clarify a specification and I’ll continue to use the approach when it’s appropriate to do so (e.g. testing APIs). I don’t intend to spend a lot more time learning more about it or consulting a coach because, when it works for me, it works well enough to be useful. And, as a pragmatic engineer, when it doesn’t work for me, I’ll do things some other way.

Understandably, TDD experts promote the approach but they do  themselves a disservice by failing to acknowledge that TDD isn’t perfect and by failing to discuss the classes of systems where TDD is less effective.

We can only advance software engineering if we understand the scope and limitations as well as the benefits of the methods that are used.

