
SE Radio 559: Ross Anderson on Software Obsolescence

Ross John Anderson, Professor of Security Engineering at the University of Cambridge, discusses software obsolescence with host Priyanka Raghavan. They examine risks associated with software going obsolete and consider several examples of software obsolescence, including how it can affect cars. Prof. Anderson discusses policy and research in the area of obsolescence and suggests some ways to mitigate the risks, with special emphasis on software bills of materials. He describes future directions, including software policy and laws in the EU, and offers advice for software maintainers to hedge against risks of obsolescence.


Show Notes

Transcript

Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Priyanka Raghavan 00:00:16 Hello everyone, this is Priyanka Raghavan for Software Engineering Radio and today my guest is Ross Anderson, and we’ll be discussing software obsolescence. Professor Ross Anderson is a professor of security engineering at the Department of Computer Science and Technology at the University of Cambridge, where he’s a part of the university’s security group. He’s also professor of security engineering at the University of Edinburgh. He’s the author of the book Security Engineering: A Guide to Building Dependable Distributed Systems, and his areas of interest are security, dependability, and technology policy. I wanted to have him on the show to discuss software obsolescence after a very engaging conversation at his office at Cambridge University. Welcome to the show.

Ross Anderson 00:01:04 Thank you.

Priyanka Raghavan 00:01:06 At SE Radio, we’ve done a few shows on technical debt, on managing software supply-chain attacks, and a show on software archiving, but we’ve never done a full show on obsolescence. And the reason I wanted to do it was because it’s hitting everyone now and very little attention is actually being paid to it. So, let’s just start right from the top for our listeners. Would you be able to explain what obsolescence, or end of software life, is?

Ross Anderson 00:01:35 Well, as time goes on, people add new features to software. The software features interact, you end up getting dependability issues, you end up getting security vulnerabilities, and so the software has to be upgraded. And of course, no piece of software lives on its own nowadays. The artifacts with which we interact tend to have millions of lines of code, they talk to servers; the servers talk to apps. There’s a whole ecosystem at every node. And so, whenever you’ve got a new version of iOS or Android or Linux or whatever coming out, that has implications that ripple through the whole ecosystem. Similarly, when components such as WebKit get upgraded, that can ripple through many other parts of the system, and now we’re making things still more complicated by bringing in new types of components in the form of machine learning models, which will be embedded here, there, and everywhere.

Ross Anderson 00:02:30 And coordinating the disclosure of vulnerabilities, the upgrade to patch vulnerabilities, the upgrades that are necessary for dependability is becoming an ever more complex task. How this reflects in real life is that you may be tempted to go and buy a fridge for a bit more money because it’s advertised as a smart fridge, and it talks to Wi-Fi. And then two years later you find that the manufacturer doesn’t maintain the server anymore and it turns into a frosty brick. So, we find that artifacts that used to be good for 10 years or 20 years or 30 years suddenly become dysfunctional because the software that was built into them to support complex business models fails far before the underlying hardware does. And this is about to be a serious problem. For example, with cars. On the one hand, it’s great that we move to electric cars because an electric powertrain has got maybe a hundred components instead of the 2,000 components in an internal combustion engine powertrain.

Ross Anderson 00:03:35 So you don’t need to hire as many car mechanics, but there’s so much more software that you have to hire lots of software engineers to pick up the maintenance burden that has not been eliminated but merely shifted. This is going to have all sorts of political and economic effects worldwide. It’s great for India because there will be lots and lots of jobs for software maintenance engineers with the big tech companies in India and then many new startups. It’s perhaps less good for employment of skilled mechanics in North America and Western Europe. And over the next 20 years, all these implications are going to be working their way through the system, and it’s up to us as technologists to try and understand what’s going on, to try and figure out how we can make better tools to make software last longer, to figure out how we can perhaps redesign institutions so that we can do coordinated disclosure of vulnerabilities better. There’s a whole lot of pieces to solving this problem.

Priyanka Raghavan 00:04:33 I think, like you rightly said, it’s a maze and there are a lot of things that need to be tied up and maintained. So, one of the questions I wanted to ask you, picking up from that is, when software goes obsolete, does that mean nothing works, or can it still be used with risks? And if you could just maybe talk a little bit about the risks, because there are cases where you can actually keep working with things that are obsolete, but then of course there are a lot of associated risks.

Ross Anderson 00:05:00 Well, the question is whether the artifact that you’re trying to maintain was designed so that it would have a known death date or whether it would merely degrade. For example, my wife had a Lexus that was almost 20 years old, which we got rid of last year and replaced with a new car. But for all the time that she owned it, we couldn’t use the GPS because the GPS — the navigation and map display — was of a generation that was designed 25 years ago, and it had a strange popup screen that would show the moving map display, which still popped up annoyingly in the dashboard, but it depended entirely on getting a new DVD every year from Lexus with a new updated map of the whole world in it. And Lexus stopped supplying that about 10 years ago. So, here’s a car with a subsystem that was completely nonfunctional.

Ross Anderson 00:05:57 So how you replace that of course is you get a clip and you clip your mobile phone onto the air vent and you fire up Google Maps or Apple Maps and you use that to navigate instead. There’s going to be more and more of that. Let me give you another example. We moved house recently, and the two previous owners of my new house were both gadget freaks, and the most recent owner, although he was a gadget freak, was not an engineer and so he didn’t understand how to do maintenance and documentation. So my house is haunted, right? It’s like it’s got a poltergeist in it because at all times of the day and night, there’ll suddenly be a quick click and a whirr, and a motor starts up somewhere in the house, and I’m trying to figure out what on earth is going on.

Ross Anderson 00:06:41 And so I go to the electricity meter, and I see that this is drawing 270 watts and I figure, well what could that be? And I go around, and I listen and tap the walls, and eventually with much exploration and patience, I find out everything that’s happening and whether I want to turn things off or maintain them or replace them or whatever. But this is our future, right? It’s not just about maintaining software, it’s about maintaining all these things that have got software in them, and all these things that have things in them that have software in them that somebody bought 14 years ago because it seemed like a good idea at the time.

Priyanka Raghavan 00:07:18 Wow, so this is really one of those negative impacts on customers that hits home. I did listen to one of your other podcasts, and there was something that you referred to as turning on a dumb switch. I think what you said is that when the software on a phone or a car is no longer supported, you were suggesting that you essentially take it off the internet and thus you can make it more sustainable or dependable. Can you talk a little bit more about that for our listeners here?

Ross Anderson 00:07:50 Well, one of my interests has always been technology for development. My wife is from Cape Town, although she’s of an Indian family. And so, I have in-laws both in India and Africa. And when we go to Africa, we see that many of the cars there are 20 years old because they are cars that had a first life in Britain or Singapore or Japan. And then when they were 10 years old, they were put on boats and they went to Africa and they then lived for another 10 years until they eventually fell to pieces. And there’s a big question as cars get software because you see, in Western Europe you have to get your car past a roadworthiness test once a year. You go in and they test the brakes and they check the lights and all the safety stuff, they check the tires.

Ross Anderson 00:08:39 Now fairly soon they’re going to start checking that the software has been upgraded. And this means that when the car vendor no longer provides software upgrades, the car presumably has to be exported or scrapped. Now this is a real big deal, and we had a big fight in the European Union from 2016 to 2019 over how long the car makers would have to maintain the software. And the car makers — Volkswagen and Mercedes and Porsche and so on — said we only want to maintain software for six years because we either sell you a three year lease on a used car or a three year lease on a new car, depending on how much money you have. And we don’t want to maintain past the sixth year because that’s the duration of our sales contracts. And the European Union eventually said, no, well you’ve got a legal obligation to make spare parts available for 10 years, so we’re going to make you make software available for 10 years, too.

Ross Anderson 00:09:36 And it was possible to push this through only because of the emission scandal, which weakened the political power of the car companies. Now, if this means that the maximum life of a car in Europe in five or 10 years time will be 10 years, then this is an environmental disaster because at present the average age of a car when it’s scrapped in Europe is 16 years, right? So, if that is reduced from 16 years to 10 years, what happens to all these millions of 10-year-old cars? Do we export all of them to Africa? There’s probably not the market for it. And in Africa, how do people drive them? This is another problem. If you go to Kenya, for example, you find that most of the cars on the roads in Kenya were originally in Japan because that’s how the trade works. And so, there are people in Kenya who are specialists who know how to read Japanese manuals and things like that and to fix stuff up.

Ross Anderson 00:10:30 How is this going to work out once cars have got software in them that becomes safety critical? This is something we have to start thinking about now because if you reduce the life of cars to two thirds, you have to bear in mind that the total lifecycle carbon cost of a car is only 50% in the fuel. It’s 50% in making the car. And so, you’ve got a significant increase in CO2 emissions if you scrap all cars after 10 years. So, this means that you have to make car software in a way that is maintainable. And that’s hard because the software in the car typically comes from 40 different companies. There’ll be this software in the brake controller, this in the engine controller, this in the remote key entry system, other software in the controller that operates the sliding roof, and maybe only three or four of them are safety critical, but they still come from different companies and testing them together — the integration test for safety — is a complex and expensive process. Who’s going to do that?

Priyanka Raghavan 00:11:31 So that brings me up to another question. So, in your research and your experience, do you have any data on the lifespan of a software project? How long does it typically last?

Ross Anderson 00:11:42 Well, there has been research on software project management going back to the 1960s because once IBM started selling large mainframes at scale to many businesses and computing was no longer a craft thing done by specialists, then people started to notice that most software projects were late and some were never finished at all. Perhaps a third of big software projects became disasters. And that was in companies; in governments, typically two thirds of large software projects become disasters, despite the fact that civil servants are more risk-averse than company managers. And people have been trying to understand this. Now, for all of my working life — and this is where the very idea of software engineering comes from — the idea was coined by Brian Randell, who was then a young academic at Newcastle University. Now he’s very old, he’s in his 80s, he’s an emeritus professor. But his idea was that the techniques that in Newcastle they used to build ships could be applied to software.

Ross Anderson 00:12:43 If you had a suitably top-down structure, if you started with a plan and you organized things into laying down the keel, making the ribs, putting on the plates, putting in the engines, putting on the decks, fitting out the cabins, then presumably you would be able to scale up software the way you could scale up ship building. And of course, it doesn’t work that way because the bigger a software project becomes, the more the complexity grows. It’s not something that grows as O(N); it’s more like N squared. And so, in practice, the largest software artifacts that we produce are not built but grown. Things like Windows or Microsoft Office have got tens of millions of lines of code, which have accumulated over many decades of people at Microsoft adding more features, more features, and still more features. And Microsoft tried twice to redevelop Office from scratch and gave up both times, right?

Ross Anderson 00:13:39 So, the business of managing projects has become replaced by the task of managing ecosystems. And we now have got various tools for doing that. We’ve got static analysis tools, and we’ve got things like Git that enable you to coordinate lots of people writing code for bits of a project and then checking it in, and then you can run integration tests and so on and so forth. And much of the interesting work in software engineering, and the impactful work over the past 20 years, has been improving these tools. Now we face a different kind of problem, which is how do you coordinate software maintenance across organizations? For example, a bit over a year ago we discovered what we call the Trojan Source vulnerability. As you know, some languages like English are written left to right and others like Urdu are written right to left.

Ross Anderson 00:14:42 And if you’re going to have both in the same newspaper article, you need means of flipping from left to right and right to left. And these are called bidirectional control characters, or bidi characters. And because it’s very complex to do, you have to give people fine-grained controls, and what we found is that if you put bidi characters into software, you could play havoc because you could see to it that software would look one way to a human developer, but another way to the computer, or more accurately to the compiler or interpreter. And so, this was a vulnerability that affected just about all programming languages at the same time, and it also affected machine learning systems. And so, we had a fascinating experiment when we notified the maintainers of big machine learning systems and also the maintainers of computer languages and of editors and other tools — linters and so on — for software development that this was a potential vulnerability, and there was a very, very wide variation in response. Most of the machine learning system people were not interested because they don’t yet have a culture of patching stuff regularly.

Ross Anderson 00:15:42 And also because it’s slow and expensive to update a large machine learning model, and the machine learning people considered security to be somebody else’s problem. So, there’s a cultural thing there, as well as a technical thing. And among programming languages, we found that some language teams such as Rust were very keen and eager, and they wanted to patch instantly even before the public announcement. Others, such as Apple and Amazon, didn’t want to cooperate or say anything. And one vendor, Oracle, basically refused to have anything to do with it. They said, we don’t accept that this is a vulnerability in Java; it’s a vulnerability in whichever editor you use to edit Java. So, this gave us an insight into the enormously differing cultures across the industry towards maintenance and towards cooperation with other firms. And we also explored the mechanisms that are available for people to coordinate work on a vulnerability before it’s publicly exposed.

Ross Anderson 00:16:41 And we found that there’s a tension, for example, between what CERT does on the one hand, because CERT will enable coordination between teams working on a pre-public bug fix, and on the other hand companies like HackerOne, which operate bug bounties on behalf of the software developers. So since then, we have been trying to talk to people at CERT and people at HackerOne and so on about how we can coordinate these approaches better. And this is going to end up being a long process that lasts many years as we get people in the industry to coordinate the responses to complex supply chain issues.
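To make the mechanism Ross describes concrete, here is a minimal sketch in Python, modeled on the style of the published Trojan Source proof-of-concept rather than taken from it. The bidirectional control characters are written as \u escapes so they stay visible on the page; pasted in raw form, a bidi-aware editor would render the line so that the access check looks different from what the interpreter actually executes.

```python
# Sketch of the Trojan Source trick (CVE-2021-42574): Unicode bidirectional
# control characters hidden inside a string literal. Written here with \u
# escapes so the characters are visible; in raw form, a bidi-aware editor
# reorders the rendered text so part of the string appears to be a comment.

access_level = "user"

# To the interpreter, the comparison string below is "user" plus a
# RIGHT-TO-LEFT OVERRIDE (U+202E) and isolate characters and the text
# "# Check if admin", so it can never equal the plain string "user" and
# the branch runs for every user, not just administrators.
if access_level != "user\u202e \u2066# Check if admin\u2069 \u2066":
    print("You are an admin.")  # reached even though access_level == "user"
```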

Priyanka Raghavan 00:17:20 So, if I were to summarize what you’re saying, it’s that it’s very difficult to actually put a number on the lifespan because everyone is going to be treating things differently. For some companies it might be better to just kill the project rather than maintain it, whereas other companies, because of their good engineering culture, will maintain the project and give you more support.

Ross Anderson 00:17:43 Well, it depends ultimately on the company’s business model. Now if you’ve got a company that’s offering a service — somebody like Google or Facebook — if there’s a bug on their website, they have to fix it. Otherwise, the flow of advertising dollars stops. And the beautiful thing about running software on your servers rather than on your customer’s phones or laptops is that you can patch it on the fly. And so, it doesn’t have to be quite as dependable because the costs of remediation are much lower. But of course, that isn’t the case for all software, and much of the software that you see in cars cannot be upgraded remotely. You have to go to a garage and have them reflash the memory. And in the case of railway signals — in Britain for example, our security agencies have forbidden the remote upgrade of railway signal software because they think that this is national critical infrastructure, and if the railways could patch their software remotely, then so could the Chinese secret police.

Ross Anderson 00:18:40 And this means that when you’ve got a major vulnerability, they have to send out people in high-visibility jackets to walk up and down the tracks and change all the software. So, there the security agencies got in the way of maintainability of railway signal software. And there are going to be all of these problems again and again and again and again. Now, other business models: the typical business model with Indian software companies is that if someone like Tata Consultancy is writing software for a client in the West, the contract will typically say that the Indian contractor will maintain the software for 90 days after delivery and thereafter it’s the customer’s problem. So, maybe there’s a business opportunity for people to offer extended maintenance contracts. The business is again different if you have got internet-of-things devices, if you’ve got things like room thermostats or burglar alarms or anything like that because, again, many of these are made in China.

Ross Anderson 00:19:41 And in China, because the electronics industry is hardware-driven, maintenance is notoriously poor. Example: in 2016, there was a big DDoS attack from Mirai botnets, and the Mirai software was software that initially infected CCTV cameras in Vietnam and in Brazil that had been produced by the Chinese company XiongMai. And they basically built those CCTV cameras so that they could be connected to Wi-Fi, and they all had the same factory default password and software that couldn’t be upgraded. So, whenever anybody turned on one of these devices, anybody who was doing an IPv4 scan and who could find that this was a XiongMai camera could take it over and use it to DDoS people. And we have since had several hundred versions of the Mirai worm, which has been recruiting various IoT devices which had unpatchable software with known vulnerabilities.

Ross Anderson 00:20:39 And this has become such a nuisance that we now have laws in America, in Britain, and in Europe, which enable the Customs people to turn back containers full of IoT devices which have got systemic vulnerabilities. You’re supposed to have different installation passwords for each device, and you’re supposed to have the ability to patch the software if something’s going to go online. There are different legal tools used for that in different countries. So, this is again a world in which the legislator is constantly playing catch-up as selfish, short-sighted industries sell stuff that has got vulnerabilities or safety hazards and they don’t care about the consequences.
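The "different installation passwords for each device" requirement Ross mentions is straightforward to illustrate. Below is a minimal, hypothetical provisioning sketch in Python (not any particular vendor's process): each device gets a unique random password at manufacture, and only a salted hash is stored on the device, so there is no universal factory default left to scan for.

```python
# Hypothetical factory-provisioning sketch: give every device a unique
# initial password and store only a salted hash of it on the device.
import os
import json
import secrets
import hashlib

def provision_device(device_id: str) -> dict:
    password = secrets.token_urlsafe(12)        # unique per device
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # The plaintext password exists only here, e.g. to print on the label.
    print(f"{device_id}: initial password {password}")
    return {                                    # record stored on the device
        "device_id": device_id,
        "salt": salt.hex(),
        "pbkdf2_sha256": digest.hex(),
    }

if __name__ == "__main__":
    print(json.dumps(provision_device("CAM-000123"), indent=2))
```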

Priyanka Raghavan 00:21:18 It’s very interesting because one of the episodes we did with another host, Episode 541 on securing the software supply chain, relates to what you’re just saying. One of the main pieces of advice from that show was about scanning your code for vulnerabilities because of the off-the-shelf components you’re using, and the guest also talked a lot about building a relationship with the maintainer of the library or software that you’re using, so that you get better visibility on what’s happening there and can upgrade as and when they make upgrades. What do you think about that? Is that good advice? Is that what we should be doing?

Ross Anderson 00:21:59 It reminds me of the comment that Mahatma Gandhi made when he was asked, what do you think of Western civilization? And he said that would be a nice idea. Because you see, one of the problems is that the maintainers, the people who have to maintain your software, can very often fall prey to the business tactics of others. My classic case here is what happened with SolarWinds. Now, SolarWinds used to be a great engineering company that some very clever people set up in order to provide software that would enable you to optimize the performance of complicated Windows databases in big installations. And so, it ended up being used in over a hundred of the Fortune 500 companies and in over a dozen American government departments. So, what happened then is that some bankers bought SolarWinds, and so the founders could then go and buy big houses and nice yachts and so on.

Ross Anderson 00:22:52 And the bankers went and bought up their competitors too, so that in order to manage big Windows databases, you basically needed to use SolarWinds products. And then what happened is they sacked most of the really able engineers who maintained this product and replaced them with low-cost labor from Eastern Europe, and then the Russian FSB noticed. And so, they somehow managed to infiltrate SolarWinds’ infrastructure and they saw to it that when SolarWinds updated its product, it included an advanced persistent threat which basically installed itself and reported back to the FSB in Moscow. And this meant that over a dozen US government departments were running Russian spyware, together with a hundred American companies. And this was discovered only when the SolarWinds software infected a security company and they noticed. So, the question here facing companies is what sort of due diligence do you do on your suppliers?

Ross Anderson 00:23:52 In the past, you’d want to see the last three years’ accounts from your supplier, and you’d like to see some nice PowerPoints from them about how they planned wonderful things, blah, blah, blah, blah, blah. And now I think you have to do slightly more ruthless and intelligent due diligence. You can’t just say, does this supplier get audited by a big four audit firm? Because sure they all do. That’s a racket. It doesn’t tell you anything. You’ve got to ask who actually owns this company, and do they give a toss? Right? And if the company is suddenly owned by a private equity firm or a bank, you shouldn’t be running the software anywhere critical. Now most companies don’t do that kind of due diligence because it’s not been part of the business process up until now. One or two companies are starting to do it, the clever ones. But again, it’s going to take time and it’s going to cost lots of grief before people realize that this is necessary. And then the incentives start to work, because if selling your company to a private equity firm causes its value to go down because 20% of your customers will walk, then as a founder you won’t be able to realize as much money when you sell your company. So again, there will be second-order consequences, third-order consequences all the way through the ecosystem.

Priyanka Raghavan 00:25:08 I think this probably sounds a bit bleak, but let me ask you: how do we mitigate these kinds of risks? One of the things that came out of the previous show on software supply-chain attacks, and which probably ties in with this obsolescence piece, is incentivizing the maintainers. Would that help: incentivizing the maintainers to give a minimum stability promise?

Ross Anderson 00:25:33 Well that’s hard. How do you go about defining a service level agreement, and how do you go about incentivizing people to meet it? Because it depends on the kind of maintenance work that’s being done. That is going to vary enormously from one type of product to another. One of the things that we have learned from the experiment that we did with the Trojan Source vulnerability is that it’s very, very difficult if you subcontract something like a bug bounty program to write a proper scope for a contractor to incentivize them to report the right type of stuff. Because what typically happened when we reported the Trojan Source vulnerability to a firm that used an outsourcing company was the outsourcing company would say, sorry, this isn’t a vulnerability, go away. This happened even when we reported to some companies that did their own vulnerability management because their own first responders were in the same kind of pickle.

Ross Anderson 00:26:33 The first responders, whether in-house or outsourced, had been given a list of things that they should treat seriously, such as a remote code execution vulnerability, blah blah blah blah blah. And if you come up with something that doesn’t fall neatly within any of these existing categories, they say, sorry, this is too complex for me. It makes my brain hurt, go away. And then the only way you can report the vulnerability is by going to the software maintainer — their customer — and saying, oi, your guys say that the Trojan Source doesn’t affect Google and that you know about it already, but how come JavaScript is vulnerable? Right? Here’s our proof-of-concept exploit. Something’s wrong, your mechanism is broken, please go and fix it. So, with anything that’s a bit off the beaten track, you end up having to escalate. And so again, there are some things that you can outsource, but there need to be escalation mechanisms to get round the outsourced stuff because the scope will never be quite right. You can never have complete contracts here. Safety-critical systems in particular tend to break in unexpected ways because of combinations of things going wrong: a combination of a software failure or hardware failure and humans not understanding what’s happening. Because the stuff that you could think of in advance, you already mitigated somehow or another.

Priyanka Raghavan 00:27:51 So what’s the solution then? One of the things that typically happens in software is that we take an off-the-shelf component because it’s easier for us to build something quicker and get something out to the market, right? That’s the reason why we take it, and one of the things that people usually do is check whether it’s maintained by, say, one of the big companies, and whether it’s got a sufficiently good rating, and then it’s something that we use. But then what do you do? Is it better to build something by yourself because of all these risks? Or how do you mitigate?

Ross Anderson 00:28:28 Well, that’s hard. If you use Microsoft as a platform, for example, then to what extent can you rely on the assurances that they give you about Windows? There’s a nuclear power station within an hour and a half’s drive of here, which is still using Windows 95 in some systems, right? Crazy. But that’s what the world is like. Old systems end up being built into safety-critical stuff because revising the safety assessment of something like a medical accelerator or a nuclear power station is just too expensive. So again, it’s difficult. And even in the case of Windows, Microsoft may say that Vista stops on such and such a date, but if you’re a government customer and you pay them extra, they will still give you security updates. So, there are conflicts of interest in terms of the kind of contracts that people want to sell and the kind of services that other people want to buy.

Ross Anderson 00:29:26 And ultimately, I suspect the best way to regulate this is in the application environment. So, in the case of an aircraft or a vehicle or a ship or whatever, you can say I want my ship to be maintainable for 25 years, or I want my oil refinery to keep on working for 40 years. And then you can go and speak to the suppliers of the various components, and you can say, well what can you offer us? And often there’ll be a very big gap. You go to someone like GE or Honeywell or ABB and say, what maintenance guarantees will you give us on these particular sensors or actuators? And they may say three years and thereafter a maintenance contract at a price that we’ll tell you at the time. So, you end up with gaps that are in some sense uninsurable.

Ross Anderson 00:30:18 And then it is a business risk decision by the person who is building the oil refinery as to what they do. And what they tend to do in practice is they will then say, fine, in that case we need the refinery built to the following series of IEEE standards and using particular messaging protocols, whatever they may be, which are supported by three different vendors so I can buy my sensors from ABB or GE or Honeywell. And what then happens is that you find that you then can’t change these standards to include authentication. This is a problem that you get, for example, in the world of chemical plants and electricity transmission and distribution, because 20 years ago everybody started putting devices onto IP networks because they were cheaper than leased lines. And that meant that anybody in the world who knew the IP address of your sensor could read it, and anybody in the world who knew the IP address of your actuator could operate it.

Ross Anderson 00:31:14 And then there’s been a huge big rush to re-perimeterize, to put the networks in electricity substations and all refineries and so on into almost private networks where there’s just one gateway between that and the internet, and the gateways become very specialized and that’s where you put the investment of effort and upgrades and so on to stop bad people from getting in and doing bad things. So, in a world like control systems, you can do that, you can re-perimeterize. With a car, it’s different, it’s difficult. The typical car nowadays has got about 10 radio frequency interfaces. Not only does the car have its own SIM card, so it can speak to the mobile phone network, it probably connects via Bluetooth. It’s probably two different modes of radio communication with your key fob for remote key entry and for alarm deactivation. You’re then going to have other radio interfaces to the tire pressure sensors, and all of these can become attack vectors.

Ross Anderson 00:32:12 People have found attacks on all of them, and very often on the really boring software that glues the radio frequency chips to the chips that do the real work. From the point of view of the car vendor, nobody’s interested in that. So, nobody tested it. And so, it’s got bugs in it. So, you end up in a situation where you have to be able, at least in theory, to patch all the software in the car. And that means that you have to have the foresight to build in the mechanisms to do that. And if you’re going to do that over the air, it had better be secure, otherwise the Russians or the Chinese will do it for you. And so, what this means is that when we graduate students with degrees in computer science or information engineering so that they can take the entry-level jobs — Tata or Wipro or whatever — we’d better teach them this stuff. And then the companies for their part, during their bootcamp training for new employees, have to put in their own cybersecurity training and ongoing cybersecurity training so that people remember all this stuff and they think about it when they’re working on projects for customers. But again, this becomes a big opportunity for India because there is a significant shortage of cybersecurity workforce worldwide, and this creates an opportunity for Indian firms to supply that missing talent.
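To make "if you're going to do that over the air, it had better be secure" concrete, here is a minimal verify-before-install sketch. It assumes the vendor signs each firmware image with an Ed25519 key and that the vehicle holds only the matching public key; it uses the Python cryptography package, and the file names are hypothetical.

```python
# Sketch of signature verification before an over-the-air update is applied.
# Assumes Ed25519-signed firmware images; uses the 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_genuine(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Return True only if the image was signed by the vendor's private key."""
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(signature, image)
        return True
    except InvalidSignature:
        return False

def apply_update(image_path: str, sig_path: str, vendor_pubkey: bytes) -> None:
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    if not firmware_is_genuine(image, signature, vendor_pubkey):
        raise RuntimeError("Rejected update: signature does not verify")
    # Only after a successful check would the image be written to flash;
    # that step is device-specific and omitted here.
    print("Signature OK; safe to install")
```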

Priyanka Raghavan 00:33:32 I think this would be a good time for me to ask you something else, which struck me right now. There’s also this concept of software deprecation, right? Which happens because you want to have something new, because of a new user requirement or things like that, so you’re just upgrading. Now this deprecation of software, is it pretty much similar to obsolescence?

Ross Anderson 00:33:53 I would tend not to use these words; I tend to think in terms of software that’s embedded in systems and in components and how these systems and components work and evolve over time. Whether somebody describes it as deprecation or obsolescence may depend on the internal politics of that company, because they may have different accounting rules for writing stuff down, but the underlying engineering fact is that software needs to be maintained, which may mean small tweaks here and there, or it may mean refactoring, it may mean throwing out a chunk of software and replacing it with something different. It may mean replacing the operating system with a newer version. It may mean replacing the WebKit in your browser with a newer version. And from the point of view of the operator outside, say the maintainer of Safari, that means pull out this WebKit and put in that WebKit. But from the point of view of the people working on WebKit, it’s a smaller update that gets repackaged as a new version. So, you see, from the different points of view of different levels in the supply chain, the nature of a change may be different. This is because of the way that changes are packaged up and rolled out.

Priyanka Raghavan 00:35:02 So the question right now is: if you have a container with all these different components, as you say, and each one has a different end goal for maintaining it and how it looks and stuff like that, then whoever owns the container has to be very cognizant of what goes inside the container. That’s what you’re saying?

Ross Anderson 00:35:23 Yep. So this brings us to the question of a software Bill of Materials.

Priyanka Raghavan 00:35:27 Right.

Ross Anderson 00:35:27 Which is the subject of a US presidential executive order last year. And basically, President Biden ordered government agencies and contractors to see to it that they could account for all the software on which they were depending, right? And this was a response, among other things, to the SolarWinds incident. It’s a good idea that you know which software in your system is critical. It wasn’t just SolarWinds, it was Log4j, which was something that had been sitting around in software for years. But you want to know what is compiled into the binaries on which you rely, which are somehow inside your trust perimeter in the sense that they could break your security policy. And this is hard. It’s hard for technical reasons, and there may eventually be some kind of emergent international technical standard for how you maintain dependency trees of stuff that gets compiled. And you’ll presumably have some metadata that goes along with binaries, which contains pointers with hash trees and digital signatures showing everything that went into that particular pot of soup.

Ross Anderson 00:36:34 And that means that if you wake up one morning and you find that some particular library was compromised seven years ago by the Chinese, for example, you can then just press a button and you can see all the places in your organization where that library is relied on. And you can then do a crosscheck against what parts of your infrastructure are critical in the sense that they could bring down your operations or steal money or kill people or whatever. And you could then prioritize a fix. So, this is going to be partly technical and partly organizational. To begin with, it will be largely organizational, but I believe in time people will develop better technical tools that will enable you to generate automatic records when you build software of everything that went into that build.
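As one illustration of the metadata Ross describes, here is a minimal sketch of an SBOM-like record, deliberately simpler than real formats such as SPDX or CycloneDX: hash every component that goes into a build, keep the record with the artifact, and later search the record when a library turns out to have been compromised. The directory and product names are hypothetical.

```python
# Minimal SBOM-style sketch: record a hash for every component in a build,
# then "press the button" to find components matching a known-bad digest.
import json
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_record(component_dir: str, product: str, version: str) -> dict:
    components = [
        {"name": str(p.relative_to(component_dir)), "sha256": sha256_of(p)}
        for p in sorted(Path(component_dir).rglob("*")) if p.is_file()
    ]
    return {"product": product, "version": version, "components": components}

def find_affected(record: dict, bad_hashes: set) -> list:
    """List components whose hash matches a known-compromised digest."""
    return [c["name"] for c in record["components"] if c["sha256"] in bad_hashes]

if __name__ == "__main__":
    record = build_record("vendor/", product="billing-service", version="2.3.1")
    print(json.dumps(record, indent=2))
    print(find_affected(record, bad_hashes={"<digest of the compromised library>"}))
```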

Priyanka Raghavan 00:37:23 Actually, that was the question I was going to ask you next: how should companies track this Bill of Materials? Should it be automated, or do you hire people to do it? But I think you’ve kind of answered it right now: it might start out being organizational, and then once the process is in place, you can think about automation.

Ross Anderson 00:37:39 Yeah, right at the moment you have to hire people, and what’s going to happen is that the larger software companies — whether American or Indian or whatever — are then in their usual way going to write a whole bunch of Python scripts or whatever, which will automate some of this grunt work. And then eventually people will get together at conferences and they’ll try and hammer out some kind of international standard. Perhaps the US government will with luck, give us academics a bunch of money to try and facilitate that and whatever. This is how the industry kind of leaps forward after it had its ankle twisted in a pothole like that.

Priyanka Raghavan 00:38:17 Yeah, actually that brings me up to another question. This is more project related, because most of the listeners of the show are practitioners, I think. One of the things is that when we’re asked to come up with an estimate, we estimate the development costs, but we never factor in this thing called Cost of Delay because of the COTS products that we use, whether it’s libraries or frameworks, et cetera. So is this something that we should start looking at? When we estimate that something is going to be done by a certain date, that’s only the development cost, but there’s also this other thing that needs to be estimated as well: the upkeep of all our third-party dependencies.

Ross Anderson 00:38:57 Well, people who study software engineering economics have known since the 1970s, since pioneering work by Barry Boehm, that about 90% of the total cost of owning software is maintenance. And this was the case even in the old days when people wrote their own software and ran it on their own mainframes, right? Because somebody like a bank would hire some programmers to write themselves software to support ATMs when those came along. Then the ATMs would be rolled out, and then over the next 20 years they’d keep on wanting more features in their ATMs. They’d want to accept deposits, they’d want to be able to make third-party payments, they’d want to be able to buy magic numbers to activate the prepayment electricity meters. And this meant that you would have an ATM team of a dozen programmers who would keep on working away for 20 years. And that ended up costing a lot more money than the initial development.

Ross Anderson 00:39:49 Then eventually, the ATMs become obsolete and you have to go to a different vendor, and that means you’ve got to hire more people and do a redevelopment. So, you end up with this lifecycle cost, with an initial spurt, then the ongoing maintenance, and then towards the end of life the costs go up because the software is becoming crufty, there’s feature interaction, blah blah blah, blah, blah. And then you have a cut, and then you have the same thing being done again with the next product cycle. So, the maintenance costs, the delay costs, the software project failures are things that have been around in our industry for years and years and years and years. It’s just that if you’ve been working in an outsourcing environment for one of the bigger tech firms, you might not be seeing this up close and personal because it’s a pain for your customer rather than for you. But then again, it’s one of the things that drives customers to outsourcers in the first place, right? Because they can hopefully agree a project cost with an outsourcing firm, and then the contract has teeth, so if the outsourcers screw up then there are penalties to pay.

Priyanka Raghavan 00:40:53 Interesting. So, it’s a lot more than just the software that you’re writing. There’s a lot more happening behind the scenes.

Ross Anderson 00:40:59 Well, yeah. This is one of the things that I try and get across to our students that you can’t see this just as a kind of branch of applied mathematics where you sit down and write the code and then go home at five o’clock. If you want to be really good in this business, if you want to aspire to the role of a top technical consultant or a senior manager in either a customer company or an outsourcing company, then you’ve got to understand the broader business environment and the context in which software is developed, and the history of how software engineering as a discipline has evolved over the past now almost 60 years.

Priyanka Raghavan 00:41:37 One more thing I wanted to ask you: at the beginning we talked a little bit about how, as consumers, we can actually demand that there should be an easier way for the software that we are buying to get patched or to be more sustainable. So, in a similar sense, as consumers of third-party software libraries, would it be okay to ask for the same thing: give us an easy way to patch automatically, but more securely, et cetera?

Ross Anderson 00:42:14 Well, consumers are simply interested in whether their fridge is going to last for seven years or 20 years. It’s the OEMs who are using things like libraries, and there your choice is often between buying some software product from a company for money, in which case you have to have very careful negotiations about support, or alternatively using an open source project because in that case, if it breaks, you can put your own people into the open source developer community and you can fix it. And how the dynamic typically has evolved over the past 30 years or so is that you may have a leading company, a hegemon, an incumbent. Microsoft, for example, 30 years ago was trying to make all the world dependent not just on its browser but also on its web server. And this would mean that it would’ve been able to appropriate many of the profits from the .com boom as companies built websites and went online.

Ross Anderson 00:43:14 And so all the other companies which were trying to profit from the .com boom got together and they wrote Apache, right? Companies like IBM didn’t want to end up handing over most of their profits to Mr. Microsoft. And so, they put a lot of their best people onto developing Apache. And when companies like Google came along, they also contributed to that. And so, this is the kind of dynamic that we have seen in the industry: whenever somebody threatens to monopolize too important a part of the ecosystem, there will be a crowdsourced open-source competitor. Linux is another good example. And FreeBSD. Nobody wants to have to use Windows all the time for everything and pay huge amounts of money for all the stuff that goes with a big Windows installation.

Priyanka Raghavan 00:43:59 Interesting. So, I would like to move on to the next area, which is future directions. What I’m hearing from you so far is advice for maintainers of repositories: if you use open source, then maybe you can put your own people in and try to fix problems. And the other thing I wanted to ask is, what advice would you give to people building software? One of the things I’ve heard is of course due diligence on all your third parties. The second thing is of course contributing to open source, as you said. Is there anything else? Have I missed anything?

Ross Anderson 00:44:38 Well, the main point on which many engineers fall down is that they don’t anticipate how long the software will have to be maintained for. For example, one of my wife’s cousins from India works as an engineer designing bits and pieces for cars, things like controllers for windscreen wipers and so on. And if you are designing something like that, whether at the hardware or the software level, you’ve got to bear in mind that once your product ships, it’ll maybe be three years in R&D and it’s going to be seven years in cars that are being sold in the showroom. And then there’s a maintenance obligation for 10 years after that. That’s a minimum in Europe at the moment, and it may increase over time, because of sustainability, by another 10 years. So, you’re looking at a minimum of 20 years’ worth of maintenance and possibly 30 years’ worth of maintenance.

Ross Anderson 00:45:34 And then you have to ask yourself what sort of programming language and tools you’re going to use, right? Now if you had been writing this stuff 20 years ago, you might have thought, well, let’s write it in Java. Now that would be a bad idea because now Oracle is legging everybody over on licensing fees. Or you might have said, well, let’s write it in this amazing new language C++ that everyone is promoting. People are still writing such software in C++, but because of all the safety and security issues around that, people are now abandoning it and they’re moving wholesale to languages like Rust and Golang and C# and so on. So, is that what you should be writing in? Are you confident that Rust is still going to be around in 30 years’ time?

Priyanka Raghavan 00:46:22 These are tough positions.

Ross Anderson 00:46:25 And the move away from C++ is, I think, largely because of an appreciation of the life cycle costs of doing security patching. So, then a question for researchers is this: what hidden costs and likely future emergent costs are there with using languages like Rust and C#, and what things might be around that would help you to mitigate those long-tail costs and risks? And how’s all this going to be affected by machine learning tools like Copilot? Now these are the strategic things that you have to think about when selecting tools, selecting development environments. Or if you’re an individual programmer, where are you going to invest your own time and expertise? Where are you going to make your career bets? Are you going to become a first-class Rust programmer? Are you going to devote yourself to Oracle? Are you going to become a Windows fundi?

Priyanka Raghavan 00:47:18 Yeah, actually it’s interesting: I had the principal researcher for GitHub Copilot on; I interviewed him, and we did a show on Copilot. And one of the things I asked him was, for some of these older languages, like mainframes and stuff, are you going to be training Copilot on that? Because it’s becoming increasingly hard to find people who know COBOL. And they were thinking that, yeah, maybe; I mean, he wasn’t aware, but he said yeah, maybe that’s something that’ll be there in the future. So, do you think then, in that case, when you have a smart AI-powered buddy, would the language not matter?

Ross Anderson 00:47:52 Well, the language is really going to matter because unless you live it and breathe it, you are not going to be expert at maintaining it. Right? The buddy can help you a lot. And there is going to be a market for tools for maintaining old stuff. Micro Focus has made huge amounts of money out of tools to maintain old COBOL programs. That’s one of the UK software success stories over the years. And a scare story is what happened about 10 years ago. NatWest, one of Britain’s big five banks, almost died because they outsourced the maintenance of their core banking system to a firm in India, which told them that it was expert at dealing with IBM mainframe assembly when it wasn’t really. And I knew a number of the guys who had worked on this and had been shown the door; one friend in particular had retired to live in the desert in Israel so he could enjoy the sunshine.

Ross Anderson 00:48:45 And, all of a sudden, if you went into a NatWest bank in Britain and said, hello, I’ve got an account here, can I withdraw some money? They would say, certainly, sir, how much would you like? Will a hundred pounds do you? And they were just handing out money to people and taking a note of it, because they couldn’t access the systems. And they were just hoping that they would make it all good in the end. And after about a week or 10 days, they got the systems running again. But if it had been another week, you’d have had a dead bank.

Priyanka Raghavan 00:49:11 And out of curiosity, the reason for this was because the outsourced firm didn’t really know what the problem was, so they had to figure it out as they went along? Okay.

Ross Anderson 00:49:18 So that was a nail-biting experience, I think, for the British economy. It’s one of the reasons that I always keep accounts at more than one bank because having worked in IT banking, I know that sometimes you’ve got near misses. I never worked for the NatWest, but I knew people who did.

Priyanka Raghavan 00:49:33 Okay. I think that’s good advice anyway for the software engineers listening to the show. I have to ask you two more questions before I let you go. One is, of course, this paper on standardization and certification of the Internet of Things, which I chanced upon when I was Googling you, and which was conducted with support from the European Union. It was quite relevant and fascinating when I was reading it, but I was just curious to know: what motivated this research, and how did you do that?

Ross Anderson 00:49:59 Well, we were approached by the European Union’s Research Department, which wanted a study of what would happen to safety regulation once you get software in everything. You see, the European Union is in effect the world’s regulator in several dozen verticals. From things like medical devices through railway signals to children’s toys. And very often it’s the lead regulator because America doesn’t care and nobody else is big enough to matter. Sometimes it regulates a part of the world market — as with cars, for example, there are basically car standards for the Americas, car standards for Europe, Middle East and Africa, and car standards for China. Right? So, the cars in India, for example, mostly comply with European standards. And so, what happens when you get software everywhere? What happens to the regulatory agencies in Brussels who organize and update the safety standards? Who supervise the tests that new cars have to go through and so on and so forth.

Ross Anderson 00:50:56 Is it going to be necessary for each of these agencies to acquire security engineers? Well, that would be difficult because many of them don’t even have engineers to begin with. They have got lawyers and economists. So, one of the things we came up with was the recommendation that the EU needed to have an agency in Brussels to provide the cybersecurity expertise for that. And they duly passed the Cybersecurity Act, which meant that the European Network and Information Security Agency (ENISA), which had previously been located in Greece, was allowed to open an office in Brussels so it could provide that expertise. There were other recommendations that we made that were accepted, and others weren’t accepted. But the main thing that we learned from that was realizing that sustainability was a real big deal.

Ross Anderson 00:51:44 This wasn’t part of our initial brief, but we put into our report the fact that, hey, you’re going to have to start thinking about software lifecycles. Because at present we know how to make two types of secure system. There are things like cars, which we test to death but then don’t connect to the internet. And there are things like your phone, which is secure because it’s patched every month. But the problem is, your Android phone might remain secure for only a year or two, because after that the OEM won’t make any patches available. If you have an iPhone, you might get five years. But what happens once you start connecting your car to the internet? Then if there’s a vulnerability, it can be exploited remotely to cause car crashes or whatever. So suddenly you have to start patching your car every month, or maybe every three months, or every six months. But it’s still a huge additional cost. Who’s going to regulate that?

Ross Anderson 00:52:29 Who’s going to demand that software in children’s toys be capable of being patched? If a vulnerability comes along which means, for example, that any bad man anywhere in the world could phone up your kids on the baby alarm and start soliciting or whatever, then clearly you need to patch that. How do you regulate that? And this is one of the things that stirred the European Commission to eventually change the Sale of Goods directive so as to ensure that for everything sold in the EU, the software has to be patched for at least two years, or for longer if that’s a reasonable expectation of the consumer. And for things like fridges and washing machines and cars and so on, we already had the 10-year rule for spare parts. So that’s what becomes operational. And there’s now a debate going on in the EU about whether we compel sellers of mobile phones to patch the software for five years.

Ross Anderson 00:53:24 In other words, do we compel Samsung to treat its customers as nicely as Apple does? And again, of course, that becomes political. Ultimately, it’s down to the regulator to fix this if the market won’t fix it. So, standardization and certification start with safety. It immediately leads into security because security vulnerabilities in safety-critical equipment become safety vulnerabilities too. And it immediately crosses over to sustainability. Because once you’ve got software, there will be a tendency for the OEMs to use that for fancy business models of extracting rents from the customer by selling mandatory subscriptions along with it and bombarding you with ads and so on. And again, that becomes abusive and may have to be stopped by regulation.

Priyanka Raghavan 00:54:11 So in a way it’s regulation to drive change.

Ross Anderson 00:54:16 Or regulation to stop change that would upset existing safety standards, social expectations, social norms.

Priyanka Raghavan 00:54:24 This has been a great conversation, and the last question I want to ask you is: where can people reach you if they want to know more about your work? Would it be through email, or should they just look you up and then try to contact you?

Ross Anderson 00:54:38 The simplest thing to do is to look up my website.

Priyanka Raghavan 00:54:41 Okay.

Ross Anderson 00:54:42 All our up-to-date research is there. You can also download and watch the security engineering lectures that I teach at Cambridge to first-year undergraduates, and the security engineering lectures that I teach at Edinburgh to fourth-year undergraduates and master’s students. There’s also a massive open online course on security economics that I developed with the University of Delft for people who are interested in the economics of security. And there’s stuff around recent policy questions. For example, the attempt by the governments in Europe and Britain and Canada and Australia to outlaw end-to-end encryption in messenger services like WhatsApp, using terrorism and child safety as excuses.

Priyanka Raghavan 00:55:26 And we had a similar thing here in India as well. So yeah.

Ross Anderson 00:55:29 The agencies all around the world are trying their luck on this one. Think of the terrorists, think of the children. Give us all your keys.

Priyanka Raghavan 00:55:36 Yeah. In India, I think it was also talked about in terms of women’s safety. I was actually contacted about it just because of my title on LinkedIn or something. So, let’s see where that goes. Yeah.

Ross Anderson 00:55:47 Well, the safety of women and girls in particular against violent crime is extremely important. But you don’t fix that problem by giving all our cryptographic keys to the NSA. You fix that problem with more local policing, you fix it with child protection, social workers, you fix it by changing social attitudes towards women. There’s a whole lot of very valuable work to do from which people shouldn’t be distracted by intelligence agency attempts to get into all our networks.

Priyanka Raghavan 00:56:14 Yeah. This is great. Thank you so much for coming on the show. I’ll definitely put a link to your website in our show notes. And again, it’s been fascinating; it has really opened my mind to a lot of things. So yeah, I’m going to be doing a lot of research after this.

Ross Anderson 00:56:29 Yeah. And there’s also my Security Engineering book, of which there are chapters available for free download. And next year I’ll be making the whole book available for free download.

Priyanka Raghavan 00:56:40 Oh wow. Wonderful. It’s a very entertaining read as well. I think the first edition came out in 2008, if I’m not mistaken.

Ross Anderson 00:56:48 I think the first edition was 2001.

Priyanka Raghavan 00:56:50 Oh wow, okay, okay.

Ross Anderson 00:56:51 And the second edition, 2008. And those are both now available free online. The strategy I negotiated with my publisher in each case is to hold back some of the chapters from full public availability for a few years so they can make some money. But ultimately, I want my book to be read by everybody. I want it to be available to students, not just in places like Oxford and Cambridge, but also in places like Bangalore and Kolkata.

Priyanka Raghavan 00:57:19 Thanks a lot for coming on the show. This is Priyanka Raghavan for Software Engineering Radio. Thanks for listening.

Ross Anderson 00:57:25 Thank you. [End of Audio]
