The history of computers is full of spectacular and expensive bugs that have cost companies and government entities money, time, their reputations, and sometimes people's lives. It's important to learn from our past, and that's no different when it comes to software testing services. While calculating the full financial damage of software bugs can be difficult, these four bugs were some of history's most expensive and have a lot to teach us about the importance of testing in software development moving forward.
Mariner 1 Spacecraft — $18.5 million
What Happened: At $18.5 million, the Mariner 1 spacecraft is the least expensive bug on our historical list. In 1962, a programmer incorrectly transcribed a crucial formula when writing the spacecraft's guidance source code, omitting a single overbar. Without that bar, which indicated a smoothed (averaged) value, the guidance system reacted to raw, noisy data and overcompensated the rocket's trajectory as it launched, veering it dangerously off its intended flight path.
What We Can Learn: Always have someone double-check your code. Even a single typo can have unexpected side effects. While the typos in your code will hopefully not come with an $18.5 million price tag, developers can often be too close to their project to notice transcription or programming errors and other mistakes that produce incorrect results. It's always worth having another pair of eyes look over your software.
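To see how much a single overbar can matter, here is an illustrative sketch (all names and numbers hypothetical, not Mariner 1's actual equations): the bar reportedly meant "use the smoothed, time-averaged value" of a sensor reading, and dropping it means steering on raw, noisy samples instead.

```python
# Hypothetical guidance sketch: the difference between a smoothed
# velocity (what the overbar called for) and a raw noisy sample.

def smoothed(readings):
    """The intended input: a running average damps out sensor noise."""
    return sum(readings) / len(readings)

def correction(velocity, target=100.0, gain=0.5):
    """Hypothetical steering correction proportional to velocity error."""
    return gain * (velocity - target)

# Noisy radar samples scattered around a true velocity of 100.
readings = [100.0, 104.0, 96.0, 103.0, 97.0]

with_bar = correction(smoothed(readings))   # averages to 100 -> no correction
without_bar = correction(readings[-1])      # the "typo": reacts to raw noise
```

Here `with_bar` is 0.0 (no correction needed), while `without_bar` issues a spurious correction of -1.5 purely because of noise, the kind of overcompensation the missing bar produced.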
The Morris Worm — Between $250 thousand and $96 million
What Happened: The internet's first worm, released in 1988, wasn't intended to be harmful at all. Cornell University grad student Robert Morris programmed the worm as an experiment meant to gauge the size of the internet at the time, and accidentally crashed approximately six thousand computers in a single day! The program was meant to find connections between computers and pass the worm along, acting as a mapping tool, but the code failed to reliably detect when it was already present on a computer. When the code installed itself multiple times, the computer's processor became overwhelmed and crashed.
What We Can Learn: The Morris Worm teaches us two valuable lessons when it comes to software testing. First, always test for software vulnerabilities. The Morris Worm was able to exploit several known vulnerabilities in the ways Unix computers connected to each other to pass itself from one computer to the next. The second lesson is: know your maximum load! Even if your software is not meant to handle large amounts of traffic or simultaneous demands, that doesn't mean your users won't find that limit for you. Knowing your software's load limit ahead of time lets you put precautions in place to keep a crash from happening.
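The replication failure can be sketched as a simplified simulation (this is not the worm's actual code; historically it did ask existing copies whether they were running, but reportedly reinstalled itself anyway a fraction of the time to defeat fake replies):

```python
# Simplified simulation: what happens when replication code does, or
# does not, check whether it is already running on a host.

def spread(hosts, rounds, check_first):
    """Return copies-per-host after each running copy tries to install
    itself on every reachable host, once per round."""
    copies = {h: 0 for h in hosts}
    copies[hosts[0]] = 1               # the first infected machine
    for _ in range(rounds):
        active = sum(copies.values())  # copies currently running anywhere
        for h in hosts:
            if check_first and copies[h] > 0:
                continue               # already present on this host: skip
            copies[h] += active        # every running copy installs again
    return copies

careful = spread(["a", "b", "c"], rounds=2, check_first=True)
runaway = spread(["a", "b", "c"], rounds=2, check_first=False)
```

With the presence check, each host ends up with exactly one copy (3 total). Without it, the same three hosts accumulate 16 copies after only two rounds, and the count keeps compounding until processors saturate, which is the Morris Worm's failure mode.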
Intel Pentium Chip — $475 million
What Happened: When Intel launched the Pentium, the flagship follow-up to its i486 processor, in 1993, a bug in the chip's floating point unit caused it to incorrectly divide certain floating point numbers: in rare cases, division results went wrong around the fourth or fifth significant digit. The flaw traced back to missing entries in a lookup table used by the chip's hardware division algorithm, an error that slipped through because the floating point engine, previously separate from the main chip, had been integrated onto the main processor and was not tested thoroughly enough in that configuration. The customer backlash was huge, and replacing the chips ran up quite the cost for Intel.
What We Can Learn: Sometimes, combing code for typos or misused commands isn't enough to catch all the bugs in your software. When working with more complex software, you also have to test the ways that different segments of your code function together. It is important for your software testers to test your software on multiple levels and report back to software engineering.
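The widely circulated diagnostic for the FDIV bug makes the lesson concrete: on a correct floating point unit the expression below is essentially zero, while the flawed Pentium's missing table entries famously made it come out near 256.

```python
# The classic Pentium FDIV check, using the widely publicized operand
# pair. Dividing and multiplying back should recover x almost exactly.

x, y = 4195835.0, 3145727.0
residual = x - (x / y) * y
# residual: essentially zero on correct hardware;
# roughly 256 on a flawed Pentium.
```

A regression test as small as this, run across many operand pairs, is exactly the kind of multi-level check that catches an integration flaw a code review would never see.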
Y2K Bug — $500 billion
What Happened: The Y2K bug might be the most expensive bug since the development of computers, and yet many don't stop and think of its cost because the resulting large-scale computer disasters never happened. The Y2K, or Millennium Bug, affected multiple types of software that stored calendar years as only their last two digits to save space. In other words, a computer would read a '00 date as 1900, not 2000. Had this not been addressed, it would have caused major bugs in government, financial, and scientific software and more.
What We Can Learn: Sometimes it's not enough to just test for current errors. Quality software testing often requires looking forward and trying to predict how a design decision could affect the software's function in the future. A little foresight can save you money now, instead of later in the process when the problem becomes more urgent.
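A minimal sketch of the failure and a common fix: naive code simply prefixes "19" to a two-digit year, while a "windowed" parser picks the century from a pivot. (For reference, the POSIX convention used by Python's `time.strptime` with `%y` maps 69-99 to the 1900s and 00-68 to the 2000s; the pivot below follows that convention.)

```python
# Two-digit years: the Y2K bug versus the common "windowing" fix.

def naive_year(yy):
    """The Y2K bug: every two-digit year is assumed to be 19xx."""
    return 1900 + yy          # '00' becomes 1900, not 2000

def windowed_year(yy, pivot=69):
    """Windowing: years below the pivot are 20xx, the rest 19xx
    (pivot of 69 matches the POSIX %y convention)."""
    return 2000 + yy if yy < pivot else 1900 + yy
```

Windowing is itself only a deferral, of course: it buys decades, not centuries, which is exactly why forward-looking tests of date handling remain worthwhile.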