
Test-first or test-driven development (TDD) is an approach to software development where you write the tests before you write the program. You write a program to pass the tests, extend the tests or add further tests, and then extend the functionality of the program to pass these tests. You build up a set of tests over time that you can run automatically every time you make program changes. The aim is to ensure that, at all times, the program is operational and passes all the tests. You refactor your program periodically to improve its structure and make it easier to read and change.

It is claimed that test-first development improves test coverage and makes it much easier to change a program without adversely affecting existing functionality. The tests serve as the program specification so you have a detailed but abstract description of what the program should do.
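
Roughly, the cycle looks like this (a made-up example with a hypothetical leap_year function, not from any real project): write a failing test, then write just enough code to make it pass.

```python
# Test-first: the test exists before the function it exercises.
def test_leap_year():
    assert leap_year(2000) is True    # divisible by 400
    assert leap_year(1900) is False   # divisible by 100 but not 400
    assert leap_year(2024) is True    # divisible by 4
    assert leap_year(2023) is False

# The implementation is then the minimum needed to make the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()
print("all tests pass")
```

Each time the program changes, the accumulated tests are re-run automatically; the tests double as a (detailed but abstract) record of what the program is supposed to do.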

I deliberately decided to experiment with test-first development a few months ago. As a programming pensioner, the programs I am writing are my own personal projects rather than projects for a client with a specification, so what I have to say here may only apply in situations where there’s no hard and fast program specification.

At first, I found the approach to be helpful. But as I started implementing a GUI, the tests got harder to write and I didn’t think that the time spent on writing them was worthwhile. So, I became less rigid (impure, perhaps) in my approach: I didn’t have automated tests for everything and sometimes implemented things before writing tests.

But, I’m not giving up on TDD because of the (well-known) difficulties in writing automated tests for GUIs. I’m giving up because I think it encourages conservatism in programming, it encourages you to make design decisions that can be tested rather than the right decisions for the program users, it makes you focus on detail rather than structure and it doesn’t work very well for a major class of program problems – those caused by unexpected data.

  • Because you want to ensure that you always pass the majority of tests, you tend to think about this when you change and extend the program. You therefore are more reluctant to make large-scale changes that will lead to the failure of lots of tests. Psychologically, you become conservative to avoid breaking lots of tests.
  • It is easier to test some program designs than others. Sometimes, the best design is one that’s hard to test, so you are more reluctant to take this approach because you know that you’ll spend a lot more time designing and writing tests (which I, for one, find quite a boring thing to do).
  • The most serious problem for me is that it encourages a focus on sorting out detail to pass tests rather than looking at the program as a whole. I started programming at a time when computer time was limited and you had to spend time looking at and thinking about the program as a whole. I think this leads to more elegant and better structured programs. But, with TDD, you dive into the detail in different parts of the program and rarely step back and look at the big picture.
  • In my experience, lots of program failures arise because the data being processed is not what’s expected by the programmer. It’s really hard to write ‘bad data’ tests that accurately reflect the real bad data you will have to process because you have to be a domain expert to understand the data. The ‘purist’ approach here, of course, is that you design data validation checks so that you never have to process bad data. But the reality is that it’s often hard to specify what ‘correct data’ means and sometimes you have to simply process the data you’ve got rather than the data that you’d like to have.
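
To illustrate that last point with a contrived example: a ‘bad data’ test is only as good as your guess about what bad data looks like, and real data always contains surprises you never anticipated.

```python
def mean_age(records):
    """Mean of the 'age' field, skipping records we can't interpret."""
    ages = []
    for rec in records:
        try:
            age = int(rec["age"])
        except (KeyError, TypeError, ValueError):
            continue  # malformed record: skip rather than crash
        if 0 <= age <= 120:  # a guess at what 'valid' means
            ages.append(age)
    return sum(ages) / len(ages) if ages else None

# These tests cover the bad data I could *imagine* -- a domain
# expert would know about failure modes these never anticipate.
assert mean_age([{"age": "40"}, {"age": "n/a"}, {"name": "x"}]) == 40
assert mean_age([{"age": "-5"}, {"age": "200"}]) is None
```

The validation rules here (ages between 0 and 120, integers only) are themselves guesses about what ‘correct data’ means, which is exactly the difficulty.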

So, I’m going back to writing code first, then the tests. I’ll continue to test automatically wherever it makes sense to do so, but I won’t spend ridiculous amounts of time writing tests when I can read the code and clearly understand what it does. Think-first rather than test-first is the way to go.

PS. I’m sure that TDD purists would say that I’m not doing it right so I’m not getting the real benefits of TDD. Maybe they are right. But I have never found zealots to be very convincing.


Resigning from the BCS

I have been a Member and a Fellow of the British Computer Society for about 30 years. I joined because I thought that it was important to support professional bodies in computing and software engineering. The degrees of the university I worked for at the time (Lancaster) were accredited by the BCS and, rightly, they expected staff in these institutions to support them.

I never thought that the BCS accreditation process was particularly good or contributed much to the quality of education, nor, because of our geographical location, did I manage to attend many BCS meetings. Nevertheless, I continued my membership because of my general feeling that this is something that a professional should do.

However, I have now decided that enough is enough and that it’s time to leave the BCS. There are several reasons for this:

  1. Subscriptions seem to go up every year by more than the rate of inflation and I get very little for this subscription. The BCS’s magazine for members (IT Now) is lightweight and, in my view, not worth reading. Normally, it goes straight into the recycling, unread. Most group meetings, which sometimes sound interesting, are in London and the BCS seems to make no effort whatsoever to cater for members outside the south of England.
  2. As I have written previously, I don’t think that the BCS accreditation process is fit for purpose and I have no wish to support this. In my view, it is now damaging the quality of computer science education.
  3. The BCS Academy was supposed to be a ‘Learned Society’, with a focus on more academic computer science. It doesn’t seem to be particularly active, its ‘Research Symposium’ seems to have died a death and those events which take place are mostly in London or the south of England. I can’t really see anything in this for me.
  4. Most of all, I dislike the BCS’s refocusing on ‘Information Technology’. I am an engineer but the BCS insists on calling me a ‘Chartered Information Technology Professional’. I never asked for this and, like I suspect the vast majority of the population, I have no idea what it means. It seems to me to be a ‘dumbing down’ of the whole discipline – exactly the opposite of what’s needed in my view. To be respected, we need to focus on science and engineering, not wiffly-waffly ‘Information Technology’.

So, it’s time to leave. With some regret, as I do approve of the principle of professional institutions. The BCS lost its way some years ago when it rejected the opportunity to merge with the IEE (now the IET) to create a software engineering institution and I wonder if it really has any future?


I wrote a series of posts in 2011 reflecting on the future of textbooks and how I thought they would evolve. I predicted that textbooks would not be supplanted by ‘free’ internet resources but that paper textbooks would disappear in favour of e-books. Almost 5 years on, how did I do? Well, textbooks have certainly survived and we hear less and less about how you can learn things simply by drawing on materials on the internet. But they certainly haven’t become e-books and paper versions of texts continue to dominate, even in areas like computer science.  This survey suggests that students prefer paper books to electronic books.

Why has this happened? Textbooks are the obvious e-book. You rarely read a text in sequence and you access material randomly. You often want to search and link to other resources.  As I said in this post, e-textbooks have huge potential to provide a richer learning experience than paper books.

So what has stopped the development of e-textbooks? I think there are three factors that are most significant:

  1.  General issues with retention from screen reading compared to paper. This is not just a textbook issue: for reasons that are unclear, we seem to be better at remembering things when we read on paper rather than on a screen. It’s certainly true for me and I now rarely read non-fiction books as e-books. This is obviously a big deal for students: they use textbooks to help them pass exams, so they obviously want to avoid having to work harder to learn.
  2. The Kindle standard. Amazon dominate e-book publishing and, although there are much richer e-book standards, if you’re not published on Kindle then you are not a serious e-book player. But the Kindle standard has been designed for cheap, portable devices being used to read novels without illustrations, equations, tables, breakout boxes or other features that are standard in textbooks. Therefore, converting a textbook to work properly on a Kindle takes a lot of work, as a simple conversion ends up as a complete mess if your book isn’t just sequential text. In the latest edition of my text, we decided it just wasn’t worth the effort and don’t publish on Kindle.
  3. User perceptions that digital resources should be ‘free’ or at least much cheaper than their paper equivalents. Internet giants such as Facebook and Google have defined the internet business model to be an advertising model rather than a paid-content model, so users are reluctant to pay – I am regularly asked for free digital copies of my book as if somehow the only cost in producing a book is that of paper and distribution. However, selling adverts around a textbook isn’t ever going to be viable.

    When readers are willing to pay for an e-book, they want to pay less as, obviously, there are no printing or material costs involved. However, if you want to produce an e-book that’s a better learning resource than a paper book, you need to do a lot more work – to build in links, simulations, social networking, multi-media etc. I looked into this and reckoned that producing an e-book that would take advantage of the affordances of the Internet that I discussed in this post would take at least 3 times as much effort as simply producing sequential text. To cover the cost of an e-book that’s more than a PDF, the price would have to be considerably higher than a paper book. I don’t think that there’s a hope that people will pay for this so it simply won’t happen.

There are other reasons too. Publishers are increasingly squeezed financially and are unwilling to take risks, and textbooks are not recognised in the university funding model, so academic writers are discouraged from spending time experimenting with new approaches to publishing.

I’d like to see e-textbooks as I described in my 2011 post and I’d like to create this kind of learning resource. But if this is going to happen, we need a reward model for content creators that recompenses them for the work involved, and we need Amazon to develop a new technical standard for e-publishing. The Kindle model will certainly evolve but I see no recognition of the fact that high-quality digital content (either created or curated) is expensive. Until that happens, I really don’t think we’ll see any real change.



In the first version of this, I got it wrong! It is possible to leave comments but the comment button did not show up on my browser. The post has been amended.

I’ve written here before, as a loyal reader for 30 years, about how I think IEEE Software magazine has lost its way. I made the point that it seemed to be impossible to leave comments on articles, so I was delighted to see a tweet announcing the arrival of the IEEE Software blog.

The aim, as stated, is to publicise research to practitioners:

“At the end of the day, we want practitioners to be able to easily access and apply the latest research advancements…”

This is all very well but, with 30-odd years as a researcher who has always worked with practitioners, the problem is not simply that practitioners don’t access research. There are two other issues that are at least as important:

  1. The best practice in industry is, in my view, far beyond what’s going on in the research labs and practitioners have no idea about that.
  2. A very high proportion of researchers simply do not understand good industrial practice and this is one reason why a great deal of software engineering research has no practical impact.

We need a lot more articles about leading edge industry work as well as research. We need blog posts from practitioners as well as researchers (only one of the contributors was from industry and he was from an industrial research lab).

It’s also a pity that there isn’t a more even gender balance amongst the contributors. I know this is hard to do but unless leaders like IEEE Software set an example, things will not change.

So, as well as communicating academic research to industry, IEEE Software should also be about communicating best practice and informing researchers about the realities of good practice.

A blog for IEEE Software is definitely a step forward, but its instantiation has a long way to go.

On seeing the blog, I naturally thought that leaving a comment there was the way to communicate with the blog editor. But there seemed to be no way to comment and in the first version of this post, I criticised this. However, it seems this is a browser incompatibility, as the comment facility is actually there and showed up on other browsers. My advice would be to use WordPress rather than Blogger, as there are fewer of these incompatibilities.

Apologies for the misrepresentation. I have made the substantive points here as a comment.



I have been a subscriber to the IEEE’s Software magazine since it started in 1984. At one time, in my office, I had a 25 year unbroken run of issues on my shelf until, thank goodness, digital libraries meant that I could dispense with the paper copies. It was, and maybe still is, the most readable and useful of the magazines that try to bridge the divide between researchers and practitioners.

For the past few years though, I have found the magazine to be rather dull. There were occasional good articles and columns but most of the articles were pretty skippable. This is not to say they were badly written or researched – I just didn’t find them worth reading. I didn’t think much about this – if anything, I probably put it down to the cynicism of age.

Then, in the July-August 2015 issue, I read an article by Philippe Kruchten on lifelong learning (http://www.computer.org/csdl/mags/so/2015/04/mso2015040085.html)  which brought home to me why I was disenchanted with the magazine.  For me at least, hardly any of the articles were of any use for learning and professional development. I think that this is a significant change from a few years ago. In the past, there were lots of useful, general articles on software engineering that I learned from and recommended to my students. But now, most of the articles seem to be specialised or are short columns presenting opinions and one person’s experience.

For example, in the same issue as Kruchten’s column, there’s an article on an evaluation of common Python implementations (hardly a topic that’s going to engage the vast majority of software engineers), an article that says there’s no evidence that global software engineering saves money (I would guess few managers who could benefit from this are actually readers of a software magazine) and an article on why API documentation fails (which comes to a conclusion of the completely obvious that it fails when the content isn’t very good).

I have my opinion on why the magazine is publishing fewer articles that are of general interest to engineers, but I don’t think it is useful to go into these here. Rather, I want to make some constructive suggestions that would make IEEE Software a flagship publication that has some real value to the professional and academic software engineering community.

The key, I think, is to take on board what Kruchten is saying and focus the magazine on professional development.  It should be the place that engineers go to to learn about the latest advances in software engineering and related areas. Every issue should have two or three articles that you can recommend to a practitioner or students and say ‘read this and learn something useful’.

What does this mean in practice? I think there are three key changes that are needed:

1.     Tutorial articles. Kruchten asks ‘what do I know about the MEAN stack?’. Well, I didn’t know the term (it means MongoDB, Express.js, Angular.js, Node.js) but I would like to read a well-written tutorial article about it. General tutorial articles on new areas used to be much more common in the magazine and it’s a real pity they have mostly disappeared.

2.     Review articles.  Review articles looking at available tools and technologies are really helpful for learners of these technologies. For example, as a recent learner of Python, I would really have appreciated a good review article on Python IDEs. Reviews of web material, akin to the ‘Surfing the Web’ column in SEN, would also be incredibly useful.

3.     Practice focused research. At the moment, research articles are mostly useless to practitioners as they fail to relate the research to practice. They present non-conclusions (e.g. we have no evidence that global software engineering works) rather than useful lessons learned and solid information on how to do things better. IEEE Software is not an academic journal but most of the ‘research’ articles are simply rather better edited versions of conference papers. Research articles should be written for practitioners who want to learn from the research – not for other researchers.

Of course, I know from my experience as a journal editor that if you simply rely on submitted articles, you will not necessarily get the kind of articles that you want. Realistically, the changes to the magazine that I think are needed are not going to happen without proactive commissioning of articles and significantly more resources than I suspect are currently available.

If you want good articles, you have to pay for them. Otherwise, you will simply get academics writing to enhance their chances of tenure or promotion. I don’t think that you have to pay commercial consultancy rates but you have to recognise that writing well takes time and offer some reward for those willing to share their knowledge.  Paying $1000 for a well-written tutorial article would be money well spent.

I don’t believe this is unrealistic. My dues for the IEEE Computer Society and subscription to IEEE Software came to $172. According to this site (http://www.payscale.com/research/US/Job=Software_Engineer/Salary), the median salary of a software engineer in the US is $78,000 so, even for those whose employer doesn’t pay their dues, another $20 or $30 a year will hardly break the bank.

Will this happen? Probably not as the ‘volunteer ethos’ is deeply embedded in the IEEE Computer Society. But I’m afraid that unless there are changes, IEEE Software will become increasingly irrelevant to practising software engineers.

PS I do know how to link text to URLs. But, from a security perspective, I think it safer to leave URLs explicitly visible so that you know what you are clicking on.

A pensioner learns Python

When I was younger, I thought of myself as a pretty good programmer. I started with Algol 60 in 1970 and since then have learned (and forgotten) lots of programming languages. But as my career developed and interests evolved, I spent more time managing people who programmed rather than programming myself and focusing on software requirements and use rather than on development issues. So, I haven’t written any significant programs for many years, with Java the last programming language that I learned about 15 years ago. But, I have always enjoyed programming so now that I’m retired, I decided to see if I could revive my old programming skills.

I never liked Java – it always seemed to me to be a wordy and inelegant language (compared, for example, to Pascal) and I had no wish to program in that language.  I flirted with Ruby but finally decided on Python – it’s a popular language (the 4th most popular according to this survey) and used as the programming language for the Raspberry Pi – the small, ultra-cheap computer that I fancied playing with. Python is a dynamically typed, interpreted language – completely different from Java.

So I started learning Python about 10 days ago and so far have written a few hundred lines of code. My experience is that you can learn enough of a programming language in a week (if you understand the basic structures, it’s really just an issue of syntax and idiosyncrasies) to write useful programs, but it takes several months (and thousands of lines of code) to be completely ‘fluent’ and proficient in that language. Python was no different. I had written my first program (bubble sort) within a couple of hours of getting started and, although I still forget to include some of these damned colons, I’m reasonably confident in the basic functionality of the language (but haven’t yet tried some of its more esoteric features).
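
For what it’s worth, the bubble sort in question looked something like this (reconstructed from memory, not my original code) – and yes, the colons after `def`, `for` and `if` are the ones I kept forgetting:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps on this pass: list is already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```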

So, coming from a background in reliable systems, here are my impressions of the language:

1.    It’s a very productive language – you can do a lot with only a few lines of code. It does encourage a ‘program by experiment’ approach to development rather than thinking it through in advance but that’s no bad thing when you are learning a new language. It means you are less likely simply to work in the structures of the language that you already know.

2.     Part of the productivity comes from lots of built-in operations, e.g. on strings. But these seem rather arbitrary – ‘things that might be useful’ rather than a coherent and orthogonal set of functionality. I found myself programming operations, then sometimes discovering there was a built-in function to do the same thing.
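
A typical case of the pattern I mean (the hand-rolled loop is how I wrote it first; the built-in does the same job):

```python
# Hand-rolled: count occurrences of a character in a string.
def count_char(s, ch):
    n = 0
    for c in s:
        if c == ch:
            n += 1
    return n

text = "mississippi"
assert count_char(text, "s") == 4
# ...only to discover later that str.count() already exists:
assert text.count("s") == 4
```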

3.     The use of whitespace as a bracketing device is incredibly error-prone. This has been the major source of problems for me as I changed things and forgot to change the indentation. Maybe this could be solved with a better editor but essentially it was a stupid and unnecessary design decision.
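
A contrived example of the kind of mistake I mean: shift one statement’s indentation and the program is still syntactically valid, but it silently does something different.

```python
def total_positive(numbers):
    total = 0
    for n in numbers:
        if n > 0:
            total += n
    return total          # correct: returns after the loop finishes

def total_positive_buggy(numbers):
    total = 0
    for n in numbers:
        if n > 0:
            total += n
        return total      # one indent too many: returns on the first iteration

assert total_positive([3, -1, 4]) == 7
assert total_positive_buggy([3, -1, 4]) == 3  # silently wrong, no error raised
```

In a language with explicit brackets, the misplaced `return` would at least be visible as a structural change; here, the only clue is a wrong answer.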

4.     My experience with statically typed languages means that I’m suspicious of the dynamic typing and I certainly haven’t used it in a constructive way. I find myself writing str(X), list(X), etc. just to be completely sure that a variable is the type that I expect. Learning to be comfortable with dynamic typing will, I suspect, take some time.
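
The defensive style I found myself writing looks roughly like this (a made-up example; the isinstance check is the more idiomatic habit I’m slowly acquiring):

```python
def describe(x):
    # My statically-typed instinct: force the type I expect.
    s = str(x)            # always 'works', but can hide a type error
    return "value: " + s

# The more Pythonic habit: check (or simply trust) the type.
def describe_checked(x):
    if not isinstance(x, str):
        raise TypeError(f"expected str, got {type(x).__name__}")
    return "value: " + x

assert describe(42) == "value: 42"
assert describe_checked("42") == "value: 42"
```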

Overall, I’m impressed but with reservations and I’ll certainly be continuing to use the language. I think that programming is possibly a bit like swimming in that once you learn, you never really forget – you just get slow and rusty. I’m at that stage but over the next few months, I hope to get a lot better and more ‘Pythonic’. I’ll post again in a few months about my experiences.

One thing I am completely convinced of, however, is that we would be much better teaching Python rather than Java as an initial programming language. All of this ‘public static Class’ crap is completely confusing for beginners. I think abstraction and program structuring are things that come later and it’s far better to design a programming curriculum around programming by experiment so that students can get immediate gratification and see progress being made.


I have just completed a HEFCE survey about accreditation of degree courses by the British Computer Society (BCS) that has the aim of soliciting opinion about whether accreditation of courses can be used to enhance graduate employability.  The details of the survey are unimportant but what is certainly not addressed in the survey is whether or not accreditation is worthwhile. It is assumed that accreditation is a good thing and that tweaking the process will improve employability. Unfortunately, I believe that the current accreditation process as adopted by the BCS stifles innovation and provides spurious respectability to degree courses that produce graduates that are simply not very good.

If we look at employability statistics, there is a close correlation between the input qualifications and graduate employment after 6 months.  This is not even slightly surprising – employers want to hire the most able students and hence these graduates are the ones who have no problem in getting jobs. What is covered in their degree course is largely irrelevant so long as there is a sound fundamental understanding of programming, abstraction and algorithms.  With this as a basis, these graduates can easily learn new technologies and will do so several times in their working life. Accreditation of the courses taken by these students is unimportant – I don’t like it because it tends to stifle innovation but, apart from the waste of academic time in the involvement in the accreditation process, it is not particularly damaging.

Not all students entering computer science and related degree courses come with high input qualifications. In fact, probably the majority of CS students are accepted into degree schemes with relatively low entry requirements. It would be invidious to single out particular institutions but the reality is that many universities are financially reliant on maintaining a large intake to their computer science courses so, to make up numbers, accept students without paying too much attention to whether or not their background and qualifications level is appropriate.

Now, I do not believe that computer science should be an elitist profession and that these students should be denied opportunities to study the subject. What these students need is a course that is tailored to their background, where much more time is spent on fundamentals simply because it takes these students longer to master the basics of the discipline. My experience is that in some universities it is possible to graduate in computer science with a good honours degree but be unable to construct a non-trivial computer program. These students may well know about business skills or whatever the pet interests of course lecturers are but, without the mastery of fundamentals, they simply cannot adapt to modern software engineering.

Unfortunately, a course that simply focused on fundamentals would find it difficult to be accredited under the current process. There is a set of requirements (hoops) that degree schemes must meet – topics that are expected to be covered – and universities have to jump through these hoops in order to be accredited. While the accreditation process is not quite ‘one size fits all’, my experience is that there is an expectation from people on accreditation panels that certain topics will be covered, and so universities simply don’t have the option of delivering the fundamentals of the discipline.

To be fair, the roots of this problem do not lie in the accreditation process but in the ‘marketing’ of courses by some institutions that believe that students are attracted to buzzwords rather than fundamentals. But accreditation exacerbates the problem because it only looks at courses, persists in the delusion that all universities are of a comparable standard and fails to look at the quality and skills of the graduating students. It focuses on process and the material covered rather than whether or not the graduates are actually any good.

The fundamental problem with BCS accreditation is that it has bought into the quality delusion that process is all. This stems from Deming’s process improvement approach in manufacturing but we don’t manufacture graduates and it isn’t simply a problem of getting the process of setting up machines correct.  If accreditation is to have any real value, it has to look at the quality of the output from a course and employability is one metric that can be used to assess that quality.  Some, perhaps most, courses will fail. But accreditation will then be meaningful and have some relevance to employers.

There is a further problem with the current approach to accreditation which I believe is harmful. The most active and enthusiastic academics and practitioners are repelled by the bureaucracy of the process and refuse to become involved. Hence, those involved tend to be rather conservative, their knowledge is often out of date and they find it difficult to understand some of the material in courses. They tend to favour what they know, and universities that try to innovate and discard some less relevant material are sometimes criticised (I speak from personal experience here). Like all process-focused activities, those who don’t step out of line are favoured rather than innovators.

As it stands, I see no value in the current BCS accreditation process. Tweaking the process will not help – it needs root and branch redesign.

Of course, this won’t ever happen. The BCS is run by volunteers who have a vested interest in maintaining the existing system and who will be reluctant to antagonise the universities whose courses should not be accredited. Universities, especially those that offer sub-standard courses, will actively oppose change as it would reveal their own inadequacies. Most potential and graduating students see no value in BCS accreditation and couldn’t care less about whether or not a degree scheme is accredited. Employers are not fooled by accreditation and focus on making their own assessment of skills. They consider the BCS to be largely irrelevant (which is a pity) and they certainly won’t pressurise the BCS to change.

I have been a Fellow of the BCS for 25 years and a member for rather longer than that. I have worked in universities where the courses have been BCS accredited and in St Andrews, one of the few universities which has actively opted out of the accreditation process (a decision made before I joined the university). The computer science course at St Andrews comes top of the Guardian league tables and consistently appears in the top half dozen or so courses in other league tables.



In my previous post on this topic, I hypothesised that one reason why there is a gender imbalance in science and engineering is that teenage girls see science as ‘uncool’ and so choose non-science subjects to study at school. It is harder for a 15 year old girl to reject the claim that ‘Only nerdy people take science’ than it is for a 25 year old woman. I believe that one important contribution that we can make to addressing the gender imbalance is to make it easier for women to change their mind and career and switch to science later in life.

Fundamentally, we have to change our educational system to make it more flexible and to ensure that choices made at a vulnerable age do not constrain people’s careers for the rest of their lives. This will involve very long-term changes in the school and university entrance system so that students are not forced into a science or non-science stream at 15 or 16 years old. But such changes will take a generation to implement and will only partially address the problem – we need to do more and we need to do it sooner rather than later.

Our higher educational system is really quite ludicrous for the 21st century. We cram higher education into a short period between the late teens and early twenties then make it difficult both practically and financially for people to reeducate themselves later in life.  This makes things really tough for students who, for whatever reason, realise that they have taken the wrong degree. A tiny minority of students who have the time and means can take an additional, different degree but even then, the lack of appropriate school qualification limits their choice.

Of course there are routes for learning that are open to all. MOOCs are available to the highly motivated, although I’m not convinced that you could do a chemistry degree from MOOCs alone. The Open University in the UK is a fantastic institution that offers degree courses to anyone but offers quite a limited range of science and engineering courses. Laboratory provision is difficult and peer learning is less effective because it has to be through electronic rather than face-to-face communication.

To provide a better, local learning experience, we need universities to repurpose their courses to create part-time educational opportunities that allow mature students to study alongside work or looking after children. We need university funding bodies to encourage this change and to facilitate it with new money. We need to make it possible to decide to become a scientist or engineer after having taken some other degree course, and not to exclude a large segment of the population who were trapped by early over-specialisation. We need to think about the needs of these students and not simply treat them in the same way as full-time undergraduates. We need government to recognise that education is not a one-off event and to provide financial support for part-time students. The possibilities for blended learning, combining internet-based courses with face-to-face teaching, are endless and exciting, and I am convinced that we could make this work.

Opening up opportunities in science and engineering to people in work and not just to school leavers is necessary but, of course, not the only thing that we need to do. As well as encouraging women into careers in science and engineering, we need to change the working culture so that they are encouraged to stay.

Obviously, there is a need to change the patronising culture that’s typified by Hunt’s remarks on women in science, but I believe that this change is on the way. It won’t happen overnight, but over the next few years we will see an attitudinal change. However, changing attitudes to women in science and engineering is not enough – we need to take practical steps that recognise the reality that women are the primary carers of children and that this will not change in the foreseeable future. We need to provide much better support for parents who take a career break, with opportunities to keep up to date and for returners to refresh their knowledge – another reason why we should have part-time courses. We also need to ensure that the ways in which we assess success in science and engineering are not biased against part-time workers – currently, I believe they are.

Interestingly, these changes won’t just benefit women who want to change their careers. They will also give opportunities to men who didn’t realise their potential at school because they were too busy being ‘cool’ or because their economic circumstances made it impossible to afford university.  Changing culture will provide a better working environment for everyone and maybe dads as well as mums will get a chance to see more of their kids. There are no losers if we change but no winners if we don’t.

Sadly, I see very little evidence that universities, industry and professional institutions even recognise the need for change. They all pay lip service to gender equality then do very little about it. In scientific research, there is a poisonous culture of long hours and of measuring success through publications, which are often simply interim results of very little value. In engineering, project work often requires extensive travel and meetings away from home, without any recognition that many of these are unnecessary or could be replaced by better electronic communications. Professional institutions, such as the IET, seem to think that all they need to do is have pictures of young women engineers in their publications, and don’t think about how to use their influence in an imaginative way.

Can these changes happen? It is difficult to be optimistic. We need bold and imaginative government education policies that encourage change and that recognise the importance of part-time lifelong education. Sadly, there is no evidence that our current politicians understand this; universities have been turned into businesses and while they pay lip service to equality they don’t even tackle their own problems in this area. Businesses complain about the lack of qualified staff but expect someone else to solve the problems for them and continue to maintain unsupportive working environments for part-time workers and those with caring responsibilities.

The majority opinion is probably still that we should attempt to tackle the gender imbalance by encouraging more teenagers to study science and engineering. But this hasn’t worked and there is no evidence that continuing to focus on it will make the slightest difference. We needed to fail before we could move on, and now it is time for everyone who wants to help realise the potential of all members of our society to start making a noise about it.


I’ve been thinking about the problems of correcting the gender balance in science and engineering for a while but I’ve been inspired to write this post now because of the recent controversial comments by Tim Hunt, an eminent bioscientist, about women in science. Google these if you haven’t heard of them.

The only people who come out of the ‘Hunt affair’ with any credit are the women scientists who have satirised Hunt’s ridiculous comments about women in science with the Twitter hashtag #distractinglysexy. Hunt himself is clearly one of those very bright people who really aren’t very smart. I don’t know if he is simply unworldly and didn’t understand the impact of his offensive comments, or so arrogant that he thought his eminence allowed him to be offensive without criticism.

Hunt was forced to resign from his post at UCL, by the university, because of his comments. He has been rightly vilified, but sacking him was utterly disgraceful. Universities ought to be the foremost defenders of free speech and it is unbelievable that a university professor should be sacked because of his opinions, irrespective of how offensive these may be. Hunt did not use abusive language and has never been accused of inappropriate behaviour with his female colleagues, both of which might be grounds for dismissal. Sadly, the university seems to be more concerned with media opinion than with upholding free speech, and I am astonished by its behaviour. The lack of public protest from UCL academics is disappointing but perhaps less surprising, as they may well be concerned that by speaking out they would compromise their own position.

This, of course, is only an issue because of the gender imbalance in engineering and science. The percentages vary from discipline to discipline but overall, within science and engineering, women have somewhere around 20%-25% of the jobs.  This has been a concern for many years and there have been a range of initiatives that have attempted to attract women into science and engineering. By and large, these have been an abysmal failure and the situation now is, if anything, worse than it was 15 or 20 years ago.

Why has this situation occurred and why have initiatives to address the problem failed? I believe that one reason they have failed is that our education system forces teenagers to make decisions at age 15 about the subjects they will study. Essentially, a choice has to be made to focus on science or the humanities, and this choice constrains their later choice of university courses and future careers.

Most initiatives to attract women into science and engineering have focused on trying to convince teenage girls to choose science subjects at school and later at university. Yet they ignore the fundamental reality that, at age 14 or 15, all teenagers are profoundly affected by cultural attitudes and peer pressure. The reality is that, at that age, science just isn’t ‘cool’. Science attracts people who may be thought of as obsessives or nerds (I know, I was one of them) and most teenage girls, irrespective of their abilities, simply don’t want to be thought of as nerdy. It is perfectly natural and understandable that they choose not to expose themselves to the teasing and exclusion that can happen to science students.

The reality is that 15-year-olds will always be concerned about what people think of them. Earnest talks by adults on the value of science and engineering aren’t going to change this situation. It is utterly ridiculous that people at a difficult stage in their lives are forced to make decisions about their future careers that are incredibly difficult to change.

Unfortunately, most UK universities have colluded in this sad situation. They have been lazy in adopting admissions policies that focus simply on results in school exams and have demanded particular A-levels or Highers for admission to science courses. In reality, a great deal of school science is so over-simplified that it is utterly irrelevant at university level, and developing science courses for students without a science background would not be particularly difficult. This would help those students who regretted the choices they made at age 15. It obviously wouldn’t, on its own, solve the problem of gender imbalance in science, but it would at least be a mechanism for widening the choice of study subjects for both boys and girls.

I think there is much more that we can and should do to enhance educational opportunities for those who decide that their choices at age 15 were not the ones they wished to make for life. I’ll talk about these in my next post.


Grady Booch, who I admire immensely, has a long-term project entitled Computing: The Human Experience where, in his words, he is “engaging audiences of all ages in the story of the technology that has changed our civilization. The story of computing is the story of humanity.”

I think that this is a fantastic endeavour and, as a European, it is good to see that Grady, unlike so many American commentators, understands the contributions that have been made in Europe to the development of the discipline. Recently, he has created a list of several hundred candidates who “we consider the most important computing people”. He invites the readers of his site to vote on these using a somewhat curious pairwise voting system.

I say ‘somewhat curious’ because it makes a (presumably) random selection of two candidates and asks the voter to select which is the ‘most important’. Therefore, you might be asked to decide whether Alan Turing is more or less important than Bill Gates. This vote appears to be given exactly the same weight as one where a voter has to choose between Alston Householder and Jean Hoerni (if you have never heard of these guys, join the club). The pairwise voting system is such that you get bored quite quickly and give up after a few attempts, so that after two votes your view (as represented here) may be that Alston Householder is more important than Bill Gates. What!

The list of candidates is also odd – and I wonder if it has been created automatically by data mining sources such as Wikipedia. It does include obvious major intellectual contributors to the discipline such as Alan Turing, John Backus, Maurice Wilkes, Tim Berners-Lee, Vint Cerf and Dennis Ritchie, as well as commercial contributors such as Steve Jobs, Bill Gates, Mark Zuckerberg and Sergey Brin. It also includes a very much longer list of people who may have made some contribution (computer graphics seems heavily represented), but I immediately thought of people, such as Brian Randell and Cliff Jones, whose contributions are at least comparable but who are not included. I’m sure there are many more that I’m not aware of. There are also a number of bizarre candidates who, as far as I can see, have contributed nothing to computing, such as Julian Assange, Alan Sugar and Jimmy Wales.

The problem, of course, is that there is no objective means to judge importance. All sorts of factors come into play that depend on the world view of the judge. Is a commercial contribution more important than an intellectual contribution? Is research more important than practice? Is engineering more important than mathematics? How much background knowledge does a judge have? Are historical contributions (which have been assessed) more important than new developments (which have not)?

I think it is perfectly reasonable for Grady to pick his own list of who he considers important although I think that he should classify this to recognise (at least) the differences between intellectual and commercial contributions. He should also shorten it significantly – lists of any kind that are too long become meaningless. I appreciate that he is trying to emphasise that there is a wide diversity of contributors to the field not just the well-known names but I don’t think that this is the right way to do it.

But asking people to vote on ‘importance’ is a daft idea – it’s like asking people to vote on whether Coca-Cola or Facebook is more important. Unless we have a set of parameters to form a judgement, all we’ll get is a reflection of the knowledge (or lack of it) and prejudice of the voters.
