
In my recent blog post on TDD, several people suggested that some of the problems I was having might stem from a system with a tightly coupled UI, and from trying to test through that UI. They suggested that TDD is more effective when the UI is made as ‘thin’ as possible. The problems of testing GUIs using TDD are well known, so the general idea is to build systems where a simple UI interacts with the underlying system through an API. Most of the testing required is then API testing rather than interface testing.
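To make that concrete, here is a minimal sketch of what ‘testing through the API’ rather than through the UI can look like. All the names here (AccountAPI, deposit, balance) are hypothetical, invented for illustration rather than taken from any particular system:

```python
# Hypothetical application layer: a thin UI would delegate
# everything to a class like this, so tests can exercise the
# API directly without touching any widgets or pages.

class AccountAPI:
    """Minimal application-layer object (illustrative only)."""

    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance


# The test never mentions forms, buttons or event handlers.
api = AccountAPI()
api.deposit(100)
assert api.balance() == 100
```

The UI is then just a rendering of this object model; the bulk of the automated tests live at this level.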

I’d like to argue here that this notion is only applicable to one class of system and, even when it is possible to structure systems in this way, it means that we cannot use the affordances of the digital medium to provide a more effective user experience.

If we step back and look at what a user interface is, it is essentially a way of rendering and interacting with an underlying object model (data and operations). If you have a loosely coupled object model where interactions with one object have minimal effects on other objects, a small number of operations and a sequential interaction model, then you can develop a ‘thin’ interface. Object data is presented in forms, operations as buttons or menus, and the sequence of objects presented depends on the business process supported. Quite a lot of business processes can be represented in this way (they started out with paper forms, after all) so it’s not too hard to build web-based interfaces. These interfaces are rarely aesthetically satisfying and often pretty user-unfriendly, especially for occasional users, but they are sort-of tolerable and people put up with them because they have no alternative.

However, if we want to render object models, such as an image, and present users with richer, perhaps more natural, forms of interaction, then the idea of a ‘thin interface’ has to be discarded. Take a simple document editor as an example. You can render the document using characters in a single font, with markup used to show how it is displayed. You can use an editor like emacs to access this and memorise a range of keyboard commands to do so. On the other hand, you can render the document as an image where you show how it looks (fonts, colours, spacing, etc.) and allow the user to directly manipulate the document through the image – adding text, changing styles and so on. This needs an awful lot more UI code and you need to find a way of testing that code.

For some applications, such as a visualisation application, the base application object model may be pretty simple and easy to test (using TDD or any other approach). The majority of the application code is in the UI and if TDD is difficult to use in this situation, then it really isn’t surprising that TDD isn’t that effective for this type of application.

If we want to make our systems human-centric then we need more natural models of interaction with them, and it may make sense for interfaces to adapt automatically to user characteristics and actions. Our ‘thin’ interfaces are still mostly based on form-filling and menu selection because, to make them ‘thin’, there has to be very little between the user and the system object model. It’s a complete failure of imagination on the part of system designers to think that we can’t do better than this. Thin interfaces may make TDD easier, but it is really quite disappointing that we are still building interfaces that, frankly, haven’t really moved on from those presented on 1980s VT100 terminals.


My last post on top-down development attracted a lot of attention from the Twittersphere and lots of comments. The vast majority of these were constructive, whether they agreed with me or not.  I am delighted that the post sparked such a response because we can only improve approaches through challenge and discussion.  It’s well worth looking at Robert Martin’s Clean Code blog where he has taken the time to rebut the points I made (Thanks, Bob).  I think he has some things wrong here but I’ll address them in a separate post.

As I make clear in Chapter 8 of my book on software engineering, I think TDD is an important step forward in software engineering. There are some classes of system where it is clearly appropriate, one of which is web-based consumer-facing systems, and I believe that the use of TDD in such circumstances makes sense. I think that the key characteristics of ‘TDD-friendly’ systems are:

  1. A layered architecture. A point made by several commentators was that, even when GUIs are hard to test, a layered architecture overall simplifies the testing process. Absolutely right – when you can structure an architecture with the presentation layer, the application logic layer and the data management layer, these can be tested separately.
  2. Agreed success criteria. When the stakeholders in a system agree on what constitutes success, you can define a set of tests around these criteria. You don’t need a detailed specification but you do need enough information to construct that specification (and maybe represent that as a set of tests)  as you are building the system.
  3. A controllable operating environment. By this, I mean an environment where you don’t have to interact with other systems that you can’t control and which may, by accident or design, behave in ways which adversely affect the system you are developing. Otherwise, the problem is designing for resilience, and deep program analysis is much better for this than (any kind of) testing.
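A minimal sketch of the first point, with hypothetical names: when the layers are cleanly separated, the application-logic layer can be tested on its own by substituting a stub for the data-management layer, with no presentation layer involved at all.

```python
# Hypothetical layered-architecture test: the application logic
# (total_spend) is exercised against a stub data layer, so neither
# a real database nor any UI code is needed.

class StubOrderStore:
    """Stands in for the real data-management layer."""

    def __init__(self, orders):
        self._orders = orders

    def orders_for(self, customer):
        return self._orders.get(customer, [])


def total_spend(store, customer):
    """Application-logic layer: no UI, no database."""
    return sum(order["amount"] for order in store.orders_for(customer))


store = StubOrderStore({"alice": [{"amount": 30}, {"amount": 12}]})
assert total_spend(store, "alice") == 42
assert total_spend(store, "bob") == 0
```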

I started with TDD on a class of system that met these criteria and I liked it. It worked well for me. Then, I moved on to a system which was concerned with visualizing complex linked structures. Now, the thing about visualization is that (a) it’s often much more difficult to have clearly separate layers – the UI is the program and (b) it’s very hard to have pre-defined success criteria – you basically have to program by experiment and see what works and what doesn’t.  TDD didn’t work. Of course, this may be due to my inexperience but I think there is more to it than this. Essentially, I think that if a system does not have the above characteristics, then TDD is inherently problematic.

It is unrealistic to think that all systems can be organised as a layered model. For example, if you are building a system from a set of external services, these are unlikely to fit neatly into layers. Different services may have different interaction models and styles and your overall UI is inevitably complex because you have to try and reconcile these. If you have a system that involves rich user interaction (e.g. a VR system), then most of the work is in the UI code. I’ll discuss the myth of a ‘thin’ UI in a separate post.

It is equally unrealistic to think that we can always have agreed success criteria for a system, just as it’s unrealistic to expect complete program specifications. Sometimes, stakeholders who have significant influence choose not to engage in system development but don’t like what they see when they get it. Some problems, like visualisation, are problems where you work by trial and error rather than from a definitive idea of what the system should do. If you are not sure what you are trying to test, then TDD is challenging. In those circumstances, you build, planning to throw at least one away. And, maybe, if you finally get agreement, you can use TDD for the final version.

The problem of a controllable operating environment is one that isn’t often mentioned by software engineering commentators. When you put software into a complex system with people and other hardware devices, you will get lots of unexpected inputs from various sources. The classic way of handling this is to force a UI on system users that limits the range of their interaction, so the software doesn’t have to handle inputs it doesn’t understand. Bad data is eliminated by ignoring anything that doesn’t meet the data validation criteria defined by the system. So far, so good – except that you get frustrated users who can’t do what they want through the UI (think of the limitations of e-banking systems), and unexpected data from sensors just gets ignored.

This is all very well until you are faced with a situation where ignoring data means that your system breaks the law; where ignoring sensor data means that systems fail in catastrophic ways and kill or injure people; where stopping users interacting with the system means that they can’t respond to system failure and limit the damage caused by that failure.

So, sometimes, you simply have to deal with ‘unknown unknowns’. By definition, you can’t test for these and if you can’t test, how can you use TDD? Deep program analysis and review is the only way that you can produce convincing evidence that unexpected events won’t move the system into an unsafe state.

I don’t believe that there is such a thing as a universal software engineering method that works for all classes of system. TDD is an important development but we need to understand its limits. I may have missed it, but I have never read anything by experienced TDD practitioners that discusses the kinds of system where it’s most effective and those where it might not work that well.


Test-first or test-driven development (TDD) is an approach to software development where you write the tests before you write the program. You write a program to pass the test, extend the test or add further tests, and then extend the functionality of the program to pass these tests. You build up a set of tests over time that you can run automatically every time you make program changes. The aim is to ensure that, at all times, the program is operational and passes all the tests. You refactor your program periodically to improve its structure and make it easier to read and change.
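As an illustration (the function and its tests are invented for this sketch, not taken from a real project), one turn of the cycle might look like this:

```python
# Illustrative test-first cycle, using a hypothetical function.
# Step 1: write the test before the code exists. Running it now
# would fail with a NameError -- that's the point.

def test_leap_year():
    assert is_leap_year(2000)        # divisible by 400
    assert not is_leap_year(1900)    # divisible by 100 but not 400
    assert is_leap_year(2024)        # divisible by 4
    assert not is_leap_year(2023)


# Step 2: write just enough code to make the test pass,
# then refactor while keeping the test green.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


test_leap_year()  # re-run automatically after every change
```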

It is claimed that test-first development improves test coverage and makes it much easier to change a program without adversely affecting existing functionality. The tests serve as the program specification so you have a detailed but abstract description of what the program should do.

I deliberately decided to experiment with test-first development a few months ago. As a programming pensioner, I am writing my own personal projects rather than projects for a client with a specification, so what I have to say here may only apply in situations where there’s no hard and fast program specification.

At first, I found the approach to be helpful. But as I started implementing a GUI, the tests got harder to write and I didn’t think that the time spent on writing them was worthwhile. So I became less rigid (impure, perhaps) in my approach: I didn’t have automated tests for everything and sometimes implemented things before writing tests.

But, I’m not giving up on TDD because of the (well-known) difficulties in writing automated tests for GUIs. I’m giving up because I think it encourages conservatism in programming, it encourages you to make design decisions that can be tested rather than the right decisions for the program users, it makes you focus on detail rather than structure and it doesn’t work very well for a major class of program problems – those caused by unexpected data.

  • Because you want to ensure that you always pass the majority of tests, you tend to think about this when you change and extend the program. You are therefore more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests.
  • It is easier to test some program designs than others. Sometimes, the best design is one that’s hard to test, so you are more reluctant to take this approach because you know that you’ll spend a lot more time designing and writing tests (which I, for one, find quite boring).
  • The most serious problem for me is that it encourages a focus on sorting out detail to pass tests rather than looking at the program as a whole. I started programming at a time where computer time was limited and you had to spend time looking at and thinking about the program as a whole. I think this leads to more elegant and better structured programs. But, with TDD, you dive into the detail in different parts of the program and rarely step back and look at the big picture.
  • In my experience, lots of program failures arise because the data being processed is not what’s expected by the programmer. It’s really hard to write ‘bad data’ tests that accurately reflect the real bad data you will have to process because you have to be a domain expert to understand the data. The ‘purist’ approach here, of course, is that you design data validation checks so that you never have to process bad data. But the reality is that it’s often hard to specify what ‘correct data’ means and sometimes you have to simply process the data you’ve got rather than the data that you’d like to have.
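A small, hypothetical illustration of that last point: the parser below passes its obvious tests, but whether it handles the bad data that actually turns up in the field is exactly what those tests cannot tell you.

```python
# Hypothetical 'bad data' sketch: parse lines of 'sensor_id,value'
# and return None for anything we can't use.

def parse_reading(line):
    """Parse 'sensor_id,value'; return None for unusable data."""
    parts = line.strip().split(",")
    if len(parts) != 2:
        return None
    sensor, raw = parts
    try:
        return (sensor, float(raw))
    except ValueError:
        return None


# The easy tests, which any TDD session would produce:
assert parse_reading("t1,21.5") == ("t1", 21.5)
assert parse_reading("garbage") is None

# The hard part is knowing which *real* malformed inputs occur.
# Is 't1,21,5' a European decimal comma or a corrupted record?
# This parser silently drops it -- which may or may not be right.
assert parse_reading("t1,21,5") is None
```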

So, I’m going back to writing code first, then the tests. I’ll continue to test automatically wherever it makes sense to do so, but I won’t spend ridiculous amounts of time writing tests when I can read the code and clearly understand what it does. Think-first rather than test-first is the way to go.

PS. I’m sure that TDD purists would say that I’m not doing it right so I’m not getting the real benefits of TDD. Maybe they are right. But I have never found zealots to be very convincing.


Resigning from the BCS

I have been a Member and a Fellow of the British Computer Society for about 30 years. I joined because I thought that it was important to support professional bodies in computing and software engineering. The degrees of the university I worked for at the time (Lancaster) were accredited by the BCS and, rightly, they expected staff in these institutions to support them.

I never thought that the BCS accreditation process was particularly good or contributed much to the quality of education, nor, because of our geographical location, did I manage to attend many BCS meetings. Nevertheless, I continued my membership because of my general feeling that this is something that a professional should do.

However, I have now decided that enough is enough and that it’s time to leave the BCS. There are several reasons for this:

  1. Subscriptions seem to go up every year by more than the rate of inflation and I get very little for this subscription. The BCS’s magazine for members (IT Now) is lightweight and, in my view, not worth reading. Normally, it goes straight into the recycling, unread. Most group meetings, which sometimes sound interesting, are in London and the BCS seems to make no effort whatsoever to cater for members outside the south of England.
  2. As I have written previously, I don’t think that the BCS accreditation process is fit for purpose and I have no wish to support this. In my view, it is now damaging the quality of computer science education.
  3. The BCS Academy was supposed to be a ‘Learned Society’, with a focus on more academic computer science. It doesn’t seem to be particularly active, its ‘Research Symposium’ seems to have died a death and those events which take place are mostly in London or the south of England. I can’t really see anything in this for me.
  4. Most of all, I dislike the BCS’s refocusing on ‘Information Technology’. I am an engineer but the BCS insists on calling me a ‘Chartered Information Technology Professional’. I never asked for this and, like I suspect the vast majority of the population, I have no idea what it means. It seems to me to be a ‘dumbing down’ of the whole discipline – exactly the opposite of what’s needed in my view. To be respected, we need to focus on science and engineering, not wiffly-waffly ‘Information Technology’.

So, it’s time to leave. With some regret, as I do approve of the principle of professional institutions. The BCS lost its way some years ago when it rejected the opportunity to merge with the IEE (now the IET) to create a software engineering institution, and I wonder if it really has any future.


I wrote a series of posts in 2011 reflecting on the future of textbooks and how I thought they would evolve. I predicted that textbooks would not be supplanted by ‘free’ internet resources but that paper textbooks would disappear in favour of e-books. Almost 5 years on, how did I do? Well, textbooks have certainly survived and we hear less and less about how you can learn things simply by drawing on materials on the internet. But they certainly haven’t become e-books and paper versions of texts continue to dominate, even in areas like computer science.  This survey suggests that students prefer paper books to electronic books.

Why has this happened? Textbooks are the obvious e-book. You rarely read a text in sequence and you access material randomly. You often want to search and link to other resources.  As I said in this post, e-textbooks have huge potential to provide a richer learning experience than paper books.

So what has stopped the development of e-textbooks? I think there are three factors that are most significant:

  1. General issues with retention from screen reading compared to paper. This is not just a textbook issue but, for reasons that are unclear, we seem to be better at remembering things when we read on paper rather than on a screen. It’s certainly true for me and I now rarely read non-fiction books as e-books. This is obviously a big deal for students, who use textbooks to help them pass exams and so want to avoid having to work harder to learn.
  2. The Kindle standard. Amazon dominate e-book publishing and, although there are much richer e-book standards, if you’re not published on Kindle then you are not a serious e-book player. But the Kindle standard has been designed for cheap, portable devices used to read novels without illustrations, equations, tables, breakout boxes or other features that are standard in textbooks. Converting a textbook to work properly on a Kindle therefore takes a lot of work, as a simple conversion ends up with a complete mess if your book isn’t just sequential text. In the latest edition of my text, we decided it just wasn’t worth the effort and don’t publish on Kindle.
  3. User perceptions that digital resources should be ‘free’ or at least much cheaper than their paper equivalents. Internet giants such as Facebook and Google have defined the internet business model to be an advertising model rather than a paid-content model, so users are reluctant to pay – I am regularly asked for free digital copies of my book, as if somehow the only cost in producing a book is that of paper and distribution. However, selling adverts around a textbook isn’t ever going to be viable.

    When readers are willing to pay for an e-book, they want to pay less as, obviously, there are no printing or material costs involved. However, if you want to produce an e-book that’s a better learning resource than a paper book, you need to do a lot more work – to build in links, simulations, social networking, multi-media etc. I looked into this and reckoned that producing an e-book that would take advantage of the affordances of the Internet that I discussed in this post would take at least 3 times as much effort as simply producing sequential text. To cover the cost of an e-book that’s more than a PDF, the price would have to be considerably higher than a paper book. I don’t think that there’s a hope that people will pay for this so it simply won’t happen.

There are other reasons too. Publishers are increasingly squeezed financially and are unwilling to take risks, and textbooks are not recognised in the university funding model, so academic writers are discouraged from spending time experimenting with new approaches to publishing.

I’d like to see e-textbooks as I described in my 2011 post and I’d like to create this kind of learning resource. But if this is going to happen, we need a reward model for content creators that recompenses them for the work involved and for Amazon to develop a new technical standard for e-publishing.  The Kindle model will certainly evolve but I see no recognition of the fact that high-quality digital content (either created or curated) is expensive.  Until that happens, then I really don’t think we’ll see any real change.



In the first version of this, I got it wrong! It is possible to leave comments but the comment button did not show up on my browser. The post has been amended.

I’ve written here before, as a loyal reader for 30 years, about how I think IEEE Software magazine has lost its way. I made the point that it seemed to be impossible to leave comments on articles, so I was delighted to see a tweet announcing the arrival of the IEEE Software blog.

The aim, as stated, is to publicise research to practitioners:

“At the end of the day, we want practitioners to be able to easily access and apply the latest research advancements…”

This is all very well but, with 30-odd years as a researcher who has always worked with practitioners, the problem is not simply that practitioners don’t access research. There are two other issues that are at least as important:

  1. The best practice in industry is, in my view, far beyond what’s going on in the research labs, and practitioners have no idea about that.
  2. A very high proportion of researchers simply do not understand good industrial practice and this is one reason why a great deal of software engineering research has no practical impact.

We need a lot more articles about leading edge industry work as well as research. We need blog posts from practitioners as well as researchers (only one of the contributors was from industry and he was from an industrial research lab).

It’s also a pity that there isn’t a more even gender balance amongst the contributors. I know this is hard to do but unless leaders like IEEE Software set an example, things will not change.

So, as well as communicating academic research to industry, IEEE Software should also be about communicating best practice and informing researchers about the realities of good practice.

A blog for IEEE Software is something that’s definitely a step forward but its instantiation has a long way to go.

On seeing the blog, I naturally thought that leaving a comment there was the way to communicate with the blog editor. But there seemed to be no way to comment and, in the first version of this post, I criticised this. However, it seems this is a browser incompatibility, as the comment facility is actually there and showed up on other browsers. My advice would be to use WordPress rather than Blogger, as there are fewer of these incompatibilities.

Apologies for the misrepresentation. I have made the substantive points here as a comment.



I have been a subscriber to the IEEE’s Software magazine since it started in 1984. At one time, in my office, I had a 25-year unbroken run of issues on my shelf until, thank goodness, digital libraries meant that I could dispense with the paper copies. It was, and maybe still is, the most readable and useful of the magazines that try to bridge the divide between researchers and practitioners.

For the past few years though, I have found the magazine to be rather dull. There were occasional good articles and columns but most of the articles were pretty skippable. This is not to say they were badly written or researched – I just didn’t find them worth reading. I didn’t think much about this – if anything, I probably put it down to the cynicism of age.

Then, in the July-August 2015 issue, I read an article by Philippe Kruchten on lifelong learning (http://www.computer.org/csdl/mags/so/2015/04/mso2015040085.html)  which brought home to me why I was disenchanted with the magazine.  For me at least, hardly any of the articles were of any use for learning and professional development. I think that this is a significant change from a few years ago. In the past, there were lots of useful, general articles on software engineering that I learned from and recommended to my students. But now, most of the articles seem to be specialised or are short columns presenting opinions and one person’s experience.

For example, in the same issue as Kruchten’s column, there’s an article on an evaluation of common Python implementations (hardly a topic that’s going to engage the vast majority of software engineers), an article that says there’s no evidence that global software engineering saves money (I would guess few managers who could benefit from this are actually readers of a software magazine) and an article on why API documentation fails (which comes to the completely obvious conclusion that it fails when the content isn’t very good).

I have my opinion on why the magazine is publishing fewer articles that are of general interest to engineers, but I don’t think it is useful to go into these here. Rather, I want to make some constructive suggestions that would make IEEE Software a flagship publication that has some real value to the professional and academic software engineering community.

The key, I think, is to take on board what Kruchten is saying and focus the magazine on professional development. It should be the place that engineers go to learn about the latest advances in software engineering and related areas. Every issue should have two or three articles that you can recommend to practitioners or students and say ‘read this and learn something useful’.

What does this mean in practice? I think there are three key changes that are needed:

1.     Tutorial articles. Kruchten asks ‘what do I know about the MEAN stack?’. Well, I didn’t know the term (it means MongoDB, Express.js, Angular.js, Node.js) but I would like to read a well-written tutorial article about it. General tutorial articles on new areas used to be much more common in the magazine and it’s a real pity they have mostly disappeared.

2.     Review articles.  Review articles looking at available tools and technologies are really helpful for learners of these technologies. For example, as a recent learner of Python, I would really have appreciated a good review article on Python IDEs. Reviews of web material, akin to the ‘Surfing the Web’ column in SEN, would also be incredibly useful.

3.     Practice-focused research. At the moment, research articles are mostly useless to practitioners as they fail to relate the research to practice. They present non-conclusions (e.g. we have no evidence that global software engineering works) rather than useful lessons learned and solid information on how to do things better. IEEE Software is not an academic journal, but most of the ‘research’ articles are simply rather better edited versions of conference papers. Research articles should be written for practitioners who want to learn from the research – not for other researchers.

Of course, I know from my experience as a journal editor that, if you simply rely on submitted articles, you will not necessarily get the kind of articles that you want. Realistically, the changes to the magazine that I think are needed are not going to happen without proactive commissioning of articles and significantly more resources than I suspect are currently available.

If you want good articles, you have to pay for them. Otherwise, you will simply get academics writing to enhance their chances of tenure or promotion. I don’t think that you have to pay commercial consultancy rates but you have to recognise that writing well takes time and offer some reward for those willing to share their knowledge.  Paying $1000 for a well-written tutorial article would be money well spent.

I don’t believe this is unrealistic. My dues for the IEEE Computer Society and subscription to IEEE Software came to $172. According to this site (http://www.payscale.com/research/US/Job=Software_Engineer/Salary), the median salary of a software engineer in the US is $78,000 so, even for those whose employer doesn’t pay their dues, another $20 or $30 a year will hardly break the bank.

Will this happen? Probably not as the ‘volunteer ethos’ is deeply embedded in the IEEE Computer Society. But I’m afraid that unless there are changes, IEEE Software will become increasingly irrelevant to practising software engineers.

PS I do know how to link text to URLs. But, from a security perspective, I think it safer to leave URLs explicitly visible so that you know what you are clicking on.

A pensioner learns Python

When I was younger, I thought of myself as a pretty good programmer. I started with Algol 60 in 1970 and since then have learned (and forgotten) lots of programming languages. But as my career developed and interests evolved, I spent more time managing people who programmed rather than programming myself and focusing on software requirements and use rather than on development issues. So, I haven’t written any significant programs for many years, with Java the last programming language that I learned about 15 years ago. But, I have always enjoyed programming so now that I’m retired, I decided to see if I could revive my old programming skills.

I never liked Java – it always seemed to me to be a wordy and inelegant language (compared, for example, to Pascal) and I had no wish to program in that language.  I flirted with Ruby but finally decided on Python – it’s a popular language (the 4th most popular according to this survey) and used as the programming language for the Raspberry Pi – the small, ultra-cheap computer that I fancied playing with. Python is a dynamically typed, interpreted language – completely different from Java.

So I started learning Python about 10 days ago and so far have written a few hundred lines of code. My experience is that you can learn enough of a programming language in a week (if you understand the basic structures, it’s really just an issue of syntax and idiosyncrasies) to write useful programs, but it takes several months (and thousands of lines of code) to be completely ‘fluent’ and proficient in that language. Python was no different. I had written my first program (bubble sort) within a couple of hours of getting started and, although I still forget to include some of these damned colons, I’m reasonably confident in the basic functionality of the language (but haven’t yet tried some of its more esoteric features).
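For the curious, a ‘first program’ of the kind mentioned above might look like this in Python – a straightforward bubble sort, nothing clever:

```python
# Classic beginner's exercise: bubble sort. Repeatedly swap
# adjacent out-of-order elements; each pass bubbles the largest
# remaining element to the end of the unsorted region.

def bubble_sort(items):
    items = list(items)  # work on a copy, don't mutate the caller's list
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items


assert bubble_sort([3, 1, 2]) == [1, 2, 3]
```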

So, coming from a background in reliable systems, what are my impressions of the language:

1.    It’s a very productive language – you can do a lot with only a few lines of code. It does encourage a ‘program by experiment’ approach to development rather than thinking it through in advance but that’s no bad thing when you are learning a new language. It means you are less likely simply to work in the structures of the language that you already know.

2.     Part of the productivity comes from lots of built-in operations, e.g. on strings. But these seem rather arbitrary and are simply ‘things that might be useful’ rather than a coherent and orthogonal set of functionality. I found myself programming operations, then sometimes discovering there was a built-in function to do the same thing.

3.     The use of whitespace as a bracketing device is incredibly error-prone. This has been the major source of problems for me as I changed things and forgot to change the indentation. Maybe this could be solved with a better editor but essentially it was a stupid and unnecessary design decision.

4.       My experience with statically typed languages means that I’m suspicious of the dynamic typing and I certainly haven’t used it in a constructive way. I find myself writing str (X), list (X), etc. just to be completely sure that a variable is the type that I expect. Learning to be comfortable with dynamic typing will, I suspect, take some time.
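A sketch of the defensive habit described in that last point (the function is invented purely for illustration):

```python
# The static-typing refugee's instinct: check the type before
# using the value, rather than trusting dynamic typing.

def describe(x):
    if not isinstance(x, (int, float)):
        raise TypeError("expected a number")
    return "negative" if x < 0 else "non-negative"


assert describe(3) == "non-negative"
assert describe(-2.5) == "negative"

# The more 'Pythonic' alternative is usually to skip the check
# and let operations fail naturally (duck typing) -- exactly the
# habit that takes time to acquire.
```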

Overall, I’m impressed but with reservations and I’ll certainly be continuing to use the language. I think that programming is possibly a bit like swimming in that once you learn, you never really forget – you just get slow and rusty. I’m at that stage but over the next few months, I hope to get a lot better and more ‘Pythonic’. I’ll post again in a few months about my experiences.

One thing I am completely convinced of, however, is that we would be much better off teaching Python rather than Java as an initial programming language. All of this ‘public static void main’ crap is completely confusing for beginners. I think abstraction and program structuring are things that come later and it’s far better to design a programming curriculum around programming by experiment, so that students get immediate gratification and can see progress being made.


I have just completed a HEFCE survey about accreditation of degree courses by the British Computer Society (BCS), which aims to solicit opinion about whether accreditation can be used to enhance graduate employability. The details of the survey are unimportant, but what is certainly not addressed is whether or not accreditation is worthwhile at all. It is assumed that accreditation is a good thing and that tweaking the process will improve employability. Unfortunately, I believe that the current accreditation process, as operated by the BCS, stifles innovation and provides spurious respectability to degree courses that produce graduates who are simply not very good.

If we look at employability statistics, there is a close correlation between input qualifications and graduate employment after 6 months. This is not even slightly surprising – employers want to hire the most able students, so these graduates are the ones who have no problem getting jobs. What is covered in their degree course is largely irrelevant so long as there is a sound fundamental understanding of programming, abstraction and algorithms. With this as a basis, these graduates can easily learn new technologies and will do so several times in their working lives. Accreditation of the courses taken by these students is unimportant – I don’t like it because it tends to stifle innovation but, apart from the waste of academic time involved in the accreditation process, it is not particularly damaging.

Not all students entering computer science and related degree courses come with high input qualifications. In fact, probably the majority of CS students are accepted into degree schemes with relatively low entry requirements. It would be invidious to single out particular institutions, but the reality is that many universities are financially reliant on maintaining a large intake to their computer science courses so, to make up the numbers, they accept students without paying too much attention to whether their background and qualifications are appropriate.

Now, I do not believe that computer science should be an elitist profession or that these students should be denied opportunities to study the subject. What these students need is a course that is tailored to their background, where much more time is spent on fundamentals, simply because it takes these students longer to master the basics of the discipline. My experience is that in some universities it is possible to graduate in computer science with a good honours degree yet be unable to construct a non-trivial computer program. These students may well know about business skills or whatever the pet interests of the course lecturers are but, without mastery of the fundamentals, they simply cannot adapt to modern software engineering.

Unfortunately, a course that simply focused on fundamentals would find it difficult to be accredited under the current process. There is a set of requirements (hoops) that degree schemes must meet – topics that are expected to be covered – and universities have to jump through these hoops in order to be accredited. While the accreditation process is not quite ‘one size fits all’, my experience is that there is an expectation among people on accreditation panels that certain topics will be covered, and so universities simply don’t have the option of concentrating on the fundamentals of the discipline.

To be fair, the roots of this problem do not lie in the accreditation process but in the ‘marketing’ of courses by some institutions that believe that students are attracted to buzzwords rather than fundamentals. But accreditation exacerbates the problem because it only looks at courses, persists in the delusion that all universities are of a comparable standard and fails to look at the quality and skills of the graduating students. It focuses on process and the material covered rather than whether or not the graduates are actually any good.

The fundamental problem with BCS accreditation is that it has bought into the quality delusion that process is all. This stems from Deming’s process improvement approach in manufacturing, but we don’t manufacture graduates and graduate quality isn’t simply a matter of setting up the machines correctly. If accreditation is to have any real value, it has to look at the quality of the output from a course, and employability is one metric that can be used to assess that quality. Some, perhaps most, courses will fail. But accreditation will then be meaningful and have some relevance to employers.

There is a further problem with the current approach to accreditation which I believe is harmful. The most active and enthusiastic academics and practitioners are repelled by the bureaucracy of the process and refuse to become involved. Hence, those involved tend to be rather conservative; their knowledge is often out of date and they find it difficult to understand some of the material in courses. They tend to favour what they know, and universities that try to innovate and discard less relevant material are sometimes criticised (I speak from personal experience here). Like all process-focused activities, accreditation favours those who don’t step out of line rather than the innovators.

As it stands, I see no value in the current BCS accreditation process. Tweaking the process will not help – it needs root and branch redesign.

Of course, this won’t ever happen. The BCS is run by volunteers who have a vested interest in maintaining the existing system and who will be reluctant to antagonise the universities whose courses should not be accredited. Universities, especially those that offer sub-standard courses, will actively oppose change as it would reveal their own inadequacies. Most potential and graduating students see no value in BCS accreditation and couldn’t care less about whether or not a degree scheme is accredited. Employers are not fooled by accreditation and focus on making their own assessment of skills. They consider the BCS to be largely irrelevant (which is a pity) and they certainly won’t pressurise the BCS to change.

I have been a Fellow of the BCS for 25 years and a member for rather longer than that. I have worked in universities where the courses have been BCS accredited and in St Andrews, one of the few universities which has actively opted out of the accreditation process (a decision made before I joined the university). The computer science course at St Andrews comes top of the Guardian league tables and consistently appears in the top half dozen or so courses in other league tables.



In my previous post on this topic, I hypothesised that one reason why there is a gender imbalance in science and engineering is that teenage girls see science as ‘uncool’ and so choose non-science subjects to study at school. It is harder for a 15 year old girl to reject the claim that ‘Only nerdy people take science’ than it is for a 25 year old woman. I believe that one important contribution that we can make to addressing the gender imbalance is to make it easier for women to change their mind and career and switch to science later in life.

Fundamentally, we have to change our educational system to make it more flexible and to ensure that choices made at a vulnerable age do not constrain people’s careers for the rest of their lives. This will involve very long-term changes in the school and university entrance system so that students are not forced into a science or non-science stream at 15 or 16 years old. But such changes will take a generation to implement and will only partially address the problem – we need to do more and we need to do it sooner rather than later.

Our higher educational system is really quite ludicrous for the 21st century. We cram higher education into a short period between the late teens and early twenties, then make it difficult, both practically and financially, for people to re-educate themselves later in life. This makes things really tough for students who, for whatever reason, realise that they have taken the wrong degree. A tiny minority of students who have the time and means can take an additional, different degree but, even then, the lack of appropriate school qualifications limits their choice.

Of course there are routes for learning that are open to all. MOOCs are available to the highly motivated, although I’m not convinced that you could do a chemistry degree from MOOCs alone. The Open University in the UK is a fantastic institution that offers degree courses to anyone, but it has quite a limited range of science and engineering courses. Laboratory provision is difficult and peer learning is less effective because it has to be through electronic rather than face-to-face communication.

To provide a better, local learning experience we need universities to repurpose their courses to create part-time educational opportunities that allow mature students to study alongside work or looking after children. We need university funding bodies to encourage this change and to facilitate it with new money. We need to make it possible to decide to become a scientist or engineer after having taken some other degree course, and not to exclude a large segment of the population who were trapped by early over-specialisation. We need to think about the needs of these students and not simply treat them in the same way as full-time undergraduates. We need recognition from government that education is not a one-off event, and provision of financial support for part-time students. The possibilities for blended learning, combining internet-based courses with face-to-face teaching, are endless and exciting, and I am convinced that we could make this work.

Opening up opportunities in science and engineering to people in work and not just to school leavers is necessary but, of course, not the only thing that we need to do. As well as encouraging women into careers in science and engineering, we need to change the working culture so that they are encouraged to stay.

Obviously, there is a need to change the patronising culture typified by Hunt’s remarks on women in science, but I believe that this change is on the way. It won’t happen overnight but over the next few years we will see an attitudinal change. However, changing attitudes to women in science and engineering is not enough – we need to take practical steps that recognise the reality that women are the primary carers of children and that this will not change in the foreseeable future. We need to provide much better support for parents who take a career break, with opportunities to keep up to date and for returners to refresh their knowledge – another reason why we should have part-time courses. And we need to ensure that the ways in which we assess success in science and engineering are not biased against part-time workers – currently, I believe they are.

Interestingly, these changes won’t just benefit women who want to change their careers. They will also give opportunities to men who didn’t realise their potential at school because they were too busy being ‘cool’ or because their economic circumstances made it impossible to afford university.  Changing culture will provide a better working environment for everyone and maybe dads as well as mums will get a chance to see more of their kids. There are no losers if we change but no winners if we don’t.

Sadly, I see very little evidence that universities, industry and professional institutions even recognise the need for change. They all pay lip service to gender equality then do very little about it. In scientific research, there is a poisonous culture of long hours and of measuring success through publications that are often simply interim results of very little value. In engineering, project work often requires extensive travel and meetings away from home, without any recognition that many of these are unnecessary or could be replaced by better electronic communications. Professional institutions, such as the IET, seem to think that all they need to do is have pictures of young women engineers in their publications, and don’t think about how to use their influence in an imaginative way.

Can these changes happen? It is difficult to be optimistic. We need bold and imaginative government education policies that encourage change and that recognise the importance of part-time lifelong education. Sadly, there is no evidence that our current politicians understand this; universities have been turned into businesses and while they pay lip service to equality they don’t even tackle their own problems in this area. Businesses complain about the lack of qualified staff but expect someone else to solve the problems for them and continue to maintain unsupportive working environments for part-time workers and those with caring responsibilities.

The majority opinion is probably still that we should tackle the gender imbalance by encouraging more teenagers to study science and engineering. But this hasn’t worked and there is no evidence that continuing to focus on it will make the slightest difference. We needed to fail before we could move on, and now it is time for everyone who wants to help realise the potential of all members of our society to start making a noise about it.

