
History’s Most Expensive Software Bugs

Computer history is full of spectacular and expensive bugs that have cost companies and government agencies money, time, reputation, and sometimes people's lives. It's important to learn from our past, and software testing is no different. While calculating the full financial damage of software bugs can be difficult, these four bugs were among history's most expensive, and they have a lot to teach us about the importance of testing in software development.

Mariner 1 Spacecraft — $18.5 million

What Happened: At $18.5 million, the Mariner 1 spacecraft is the least expensive bug on our historical list. In 1962, a programmer incorrectly transcribed a crucial formula into the spacecraft's guidance source code. The missing overbar (a smoothing symbol in the handwritten equations) caused the guidance system to treat normal variations in velocity as serious errors, and the rocket overcorrected until it veered dangerously off its intended flight path, forcing range safety to destroy it shortly after launch.

What We Can Learn: Always have someone double-check your code. Even a single typo can have unexpected side effects. While the typos in your code will hopefully not come with an $18.5 million price tag, developers are often too close to their own projects to notice transcription errors, programming errors, and other mistakes that produce incorrect results. It's always worth having another pair of eyes look over your software.

The Morris Worm — Between $250 thousand and $96 million

What Happened: The internet’s first worm, released in 1988, wasn’t intended to be harmful at all. Cornell University grad student Robert Morris programmed the worm as an experiment meant to gauge the size of the internet at the time, and accidentally crashed approximately six thousand computers in a single day! The program was meant to find connections between computers and pass the worm along, acting as a mapping tool, but the check it used to avoid reinfecting a machine was unreliable, so it installed itself again and again on the same hosts. As the duplicate copies piled up, each computer’s processor became overwhelmed and the machine crashed.
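The failure mode can be sketched in a few lines of Python. This is a toy illustration, not the worm's actual logic (which was C code exploiting Unix services); the function and variable names here are invented. The point is that propagation must be idempotent: reaching the same host twice should not install a second copy.

```python
# Hypothetical sketch of the Morris worm's flaw: propagation without a
# reliable "am I already here?" check. Names are illustrative only.

def spread_buggy(host, copies):
    """Installs a new copy every time it reaches a host."""
    copies[host] = copies.get(host, 0) + 1  # no presence check at all

def spread_fixed(host, copies):
    """Installs at most one copy per host (idempotent)."""
    if host in copies:        # guard: bail out if already installed
        return
    copies[host] = 1

buggy, fixed = {}, {}
for _ in range(1000):         # the same host is reached over and over
    spread_buggy("host-a", buggy)
    spread_fixed("host-a", fixed)

print(buggy["host-a"])  # 1000 copies: this is what overwhelmed the CPU
print(fixed["host-a"])  # 1 copy
```

With the guard in place, a thousand infection attempts still leave only one running copy, which is why the unreliable check, not the propagation itself, is what turned a mapping experiment into an outage.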

What We Can Learn: The Morris Worm teaches us two valuable lessons when it comes to software testing. First, always test for software vulnerabilities. The Morris Worm was able to exploit several known vulnerabilities in the ways Unix computers connected to each other to pass itself from one computer to the next. The second lesson is: know your maximum load! Even if your software is not meant to handle large amounts of traffic or simultaneous demands, that doesn’t mean your users won’t find that limit for you. Knowing your software’s load limit ahead of time lets you put precautions in place to keep a crash from happening.

Intel Pentium Chip — $475 million

What Happened: When Intel launched the Pentium, the flagship follow-up to its i486 processor, in 1993, a flaw caused the chip to incorrectly divide certain floating-point numbers. For rare inputs, the result was wrong from roughly the fourth or fifth significant digit onward. The cause lay in the chip’s floating-point unit, which had been integrated onto the main processor: a handful of entries were missing from a lookup table programmed into its division hardware, so certain divisions came back wrong. The customer backlash was huge, and replacing the flawed chips cost Intel roughly $475 million.
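The flaw was famously demonstrated with a single division. For the values below, the identity x − (x / y) · y should come out to exactly zero; on a flawed Pentium the quotient was wrong around the fourth significant digit, and the expression reportedly evaluated to 256. A quick check in Python on correct IEEE 754 hardware shows the expected result:

```python
# A well-known FDIV test case. A correct FPU computes
# 4195835 / 3145727 = 1.33382..., so the residual below is exactly 0.
# Flawed Pentiums returned roughly 1.33374, making the residual 256.
x, y = 4195835.0, 3145727.0
residual = x - (x / y) * y
print(residual)  # 0.0 on a correct FPU
```

A single arithmetic spot-check like this would have caught the bug before shipping, which is exactly the kind of functional test the lesson below argues for.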

What We Can Learn: Sometimes crawling through code looking for typos or misused commands is not enough. Software bugs can be elusive. When working with more complex systems, you must also test individual components and how they behave in combination. It is important to test your software at multiple levels (unit, integration, and system) and feed the results back to engineering.

Y2K Bug — $500 billion

What Happened: The Y2K bug may be the most expensive bug in computing history. Yet many don’t stop to think about its cost, because the large-scale computer disasters it threatened never happened. The Y2K, or Millennium, Bug was a flaw across many types of software that stored calendar years as their last two digits to save space. A computer would therefore read a ‘00 date as 1900, not 2000. Left unaddressed, the issue could have caused major failures in government, financial, and scientific software, among others.

What We Can Learn: Sometimes it’s not enough to test for current errors. Quality software testing can also require looking to the future to predict how today’s design decisions could affect a system’s behavior later on. A little foresight can save you money now, instead of later in the process when the problem becomes urgent.