Rethinking Software Security


February 2004

Herbert is Director of Security Technology at Security Innovation Inc. and James is a professor of computer science at the Florida Institute of Technology. They are also coauthors of the book How to Break Software Security (Addison-Wesley, 2003). Herbert and James can be contacted at [email protected] and [email protected], respectively.


Security & the RDISK Utility


Editor's Note: No matter what language or platform, security is perhaps the most challenging problem programmers now face. Over the coming months in this multipart series, security experts Herbert Thompson and James Whittaker will examine the problems and solutions we face in developing secure software.

According to conventional computing wisdom, security is a network perimeter problem—keep the bad guys off your system and all will be well. Despite the onslaught of new products and marketing literature from network-security vendors, security is not a problem that can be solved completely with better firewalls and antivirus software.

Security is a software problem, one that needs to be wholeheartedly addressed by software developers and testers. But there are obstacles. The software-development community has seen a rash of new programming paradigms, methodologies, and development environments, yet the number of security flaws in software continues to climb. According to the CERT Coordination Center (CERT/CC), there were 4129 software security vulnerabilities reported in 2002, nearly double the number reported in 2001. The techniques that we're accustomed to using to build software faster, cheaper, and more bug-free haven't produced significantly more secure software.

Basic software-engineering tenets, such as rating a bug's severity by the number of users likely to encounter it, don't hold for security. You must find those obscure execution paths that aren't traversed by average users but open up security holes to the entire user base. Corporations, governments, and vendors have begun to realize that we need to look at software bugs, requirements, and development differently to adapt to new security-savvy consumers. In this article, we take a look at what the software-engineering community needs to consider as consumer focus turns to security.

The Security Business Case

Security now has a better-understood business case. Software vendors are witnessing the emergence of security-aware consumers who make purchasing decisions not just on price and utility, but who also demand proof that vendors have done a reasonable job of security testing their products. Corporations are starting to consider the "total cost of ownership" of applications, instead of just the cost to purchase and deploy. They are realizing that downtime, data theft, and cybervandalism made possible by security flaws in software are part of the cost incurred by deploying vulnerable software.

Gartner, the largest technology-analyst firm, is advising its clients to demand proof of security testing from their software vendors. What does this mean for development organizations? Certainly, there are no generally accepted certifications or standardized test suites that verify applications or solutions as secure. Business consumers, though, are likely to be the first to ask the tough questions: How was this product tested? What methods, processes, techniques, outsourcers, and people did you devote to making sure this product isn't riddled with security holes? In competitive software markets such as databases, how well a vendor is prepared to answer these questions may be the single biggest factor in who wins the contract.

Requirements Don't Tell the Whole Story

Requirements tell programmers what applications, components, or functions should do. They are usually pretty good at describing how component interfaces should work, the type of data (or inputs) that these components will receive, the manipulation to be performed on that data, and the eventual outputs of a module. Developers then write modules, and testers create tests that feed the application data and check for correct output. Tools and languages like UML have made the process of moving from requirements to implementation easier. Methodologies like Extreme Programming (XP) recognize the important role that testing plays in the process and are quick to turn requirements into test cases. But the problem with security defects is that all of these approaches focus on producing the correct result, without examining how the application produces it.

It's the "how" and the "what else did the application do" that are important to security. Consider a simple function that accepts a string of 10 characters and is expected to return a string that is also 10 characters long but with the characters in reverse order. Therefore, if you were to supply the string "abcdefghij" you would expect the output to be "jihgfedcba." These are simple requirements for a rather trivial function and you can easily imagine test cases for this function—a series of strings with varying characters, all with verifiable results. Astute functional testers would certainly try to vary the length of the input by applying strings of zero length up to hundreds or thousands of kilobytes, expecting to receive an error message if the string was not exactly 10 characters long. Different developers may choose to implement this function in different ways, possibly using arrays, structures, or temporary files. Each implementation of this simple function may be functionally correct and might pass the test cases discussed. Now imagine that there were some other, unspecified security concerns at play. What if this string were a password or encryption key? In this case, the requirements would undoubtedly be the same, but the implementation that stores the string in a temporary file would be grossly insecure. We see then that there can be a discrepancy between secure and correct; the accompanying text box entitled "Security & the RDISK Utility" makes this point with a real-world example.

To create software that is secure, requirements must evolve not only to identify correct behavior, but also to describe how that behavior must be constrained for security.

Threat Modeling Is Poorly Done

Application designers, developers, and testers rarely take the time to model the security threats to their applications. Sometimes we make poor assumptions about attackers and their motives, and this can lead to defenses that are ineffectual or that themselves create new attack vectors into a system. In 2000, Andre Dos Santos of the Georgia Institute of Technology published a paper titled "Security Testing of the Online Banking Service of a Large International Bank" (http://www.cc.gatech.edu/~andre/pub.html). In this case study, he described a bank in which users were required to enter their account number and personal identification number (PIN) to access their accounts online. The bank realized that it was open to the threat of an attacker taking a valid account number, then trying to brute-force the PIN. To counter this, the bank implemented a control that locks an account out for 24 hours after three consecutive login failures. This is a common protection mechanism used to thwart attackers. But what if an attacker's motives were different:

  • What if attackers wanted to get into anybody's account, not one account in particular? An attacker might choose a common PIN such as "1234", then iterate through a list of sequential account numbers (see the sketch after this list). Accounts that use a different PIN would then show only one failed login attempt, and the lockout would never be triggered.
  • What if attackers weren't motivated by money and just wanted to cause harm? An attacker's goal may be to deny legitimate account holders online access to their accounts. Attackers could do this by exploiting the security mechanism itself, writing automation that purposely makes three failed logins to every account on the system, effectively shutting out all legitimate users for 24 hours. They could repeat the attack daily, causing further harm to the bank's customers and adding expense for its IT department (not to mention loss of reputation for the bank itself).
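Here is a minimal sketch of the first scenario's "horizontal" brute force; try_login() is our hypothetical stand-in for the bank's login interface, which in a real attack would be a scripted HTTP request:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the bank's login check. For the
       demo, one account in the range happens to use the common PIN. */
    static int try_login(long account, const char *pin)
    {
        return account == 10000042L && strcmp(pin, "1234") == 0;
    }

    int main(void)
    {
        const char *common_pin = "1234";   /* guess ONE pin... */
        for (long account = 10000000L; account < 10001000L; account++) {
            /* ...across MANY accounts. Each account records only a
               single failed attempt, so the three-strike, 24-hour
               lockout never fires. */
            if (try_login(account, common_pin))
                printf("Cracked account %ld\n", account);
        }
        return 0;
    }

The denial-of-service variant in the second scenario is the same loop, run three times per account with a deliberately wrong PIN: the lockout mechanism itself becomes the attack vector.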

Both of these attack vectors are made possible by a poor understanding of the threat. Software vendors and corporations that deploy applications need to integrate threat modeling into their processes from the beginning. There are many resources that can help. Probably the best starting point is to read through bug and incident reports at places like SecurityFocus (http://www.securityfocus.com/) and CERT (http://www.cert.org/). Understanding how other applications have been broken will make you more attuned to the weak points in your own software. Michael Howard and David LeBlanc's Writing Secure Code, Second Edition (Microsoft Press, 2002; ISBN 0735617228) and our How to Break Software Security (Addison-Wesley, 2003; ISBN 0321194330) also offer ideas on how to model threats.

Attacker Techniques Aren't Understood

Until recently, graduates of computer science and software engineering programs were never taught how to attack and exploit software. It is rare that a software developer knows how to write an exploit for a buffer overflow, or that a web developer understands how to use SQL injection to gain control of a web server and its data. But motivated teenagers can learn these skills in a matter of days in the back alleys of the information superhighway. It isn't difficult, then, to understand why attackers are often so successful at breaking applications and breaking into networks. To build applications that are more resistant to attackers, software developers and testers need to understand attackers' techniques. There are positive steps in this direction: Many computer science programs now include classes on security engineering and security testing, and a new crop of books that focus on how attackers think and the tools and techniques they use is appearing. To build and deploy secure systems, you must know the tools and techniques of your adversaries.
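As a minimal illustration of the first flaw, the classic stack buffer overflow fits in a dozen lines; the program below is ours, contrived only to show the pattern developers and testers need to recognize:

    #include <stdio.h>
    #include <string.h>

    /* strcpy() copies until it finds a NUL terminator, paying no
       attention to the 16 bytes actually reserved. A long argument
       overruns the buffer and can overwrite the saved return address
       on the stack -- the raw material of an exploit. */
    static void greet(const char *name)
    {
        char buffer[16];
        strcpy(buffer, name);         /* no bounds check: the flaw */
        printf("Hello, %s\n", buffer);
    }

    int main(int argc, char *argv[])
    {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }

The repair is one line, for instance snprintf(buffer, sizeof buffer, "%s", name), but a developer who has never seen an overflow exploited is unlikely to flag the original as a security hole rather than a robustness nit.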

Industry Metrics Aren't Built for Security

A great deal has been invested in processes to help organize functional-testing efforts, producing bug-severity scales, coverage metrics, and other generalized benchmarks that are relevant to the functional testing of applications. Many of these processes, however, work directly counter to the needs of security testers. Consider standard bug-severity metrics, which view a bug as severe if there is a loss of functionality, corruption of data, or if the failure is encountered by large numbers of users. With such metrics, it is easy to imagine why the writing of a temporary file or the sending of extra network packets would go unnoticed.

Testers are traditionally rewarded for both the quantity and severity of the bugs they report. Since side-effect functionality does not equate to broken functionality, testers may not notice these behaviors; even if they do, such bugs are likely to receive a low severity rating and be dismissed by managers if the product is near release. We need to rethink how bug severity, coverage, and productivity are measured to recognize the security aspect of software quality.

Avoiding the Blame Game

The general thinking is that software such as Microsoft's operating systems is inherently less secure than alternatives like Linux. The facts, however, tell a different story. Linux had more vulnerabilities reported and security patches issued in 2002 than Windows, and exploits for common flaws like buffer overflows are easier to write on Linux than on Windows. Yet it is undeniable that organizations running Windows have been harder hit by viruses and worms than those running alternative platforms. To reconcile this disparity, you must understand how attackers think.

Attackers are likely to devote more time to uncovering and exploiting security vulnerabilities in software that is widely deployed. It is doubtful that a horde of intruders somewhere is working on the next killer VMS worm. Why would they? There is no allure in infecting a few hundred or a few thousand machines; that doesn't get picked up by CNN. So attackers go for the big fish: the widely deployed, homogeneous operating systems and the applications that run on them. Does the fact that you never hear about a VMS virus mean that VMS is more secure than Windows? Or does it mean that you just have more attackers focused on a more widely deployed target? It is always an interesting exercise for developers to try to figure out how intruders will view their application. If you develop software for a large market, then your application is a target too, no matter who you work for.

A New Era in Software Development

The new focus of consumers on security has forced vendors to commit to producing more secure software. Developers are starting to become more attuned to the security implications of their code. There is also a new breed of software tester focused exclusively on security. Many books, magazines, and articles are emerging on the issue of software security. We are finally beginning to see the signs of a revolution—one that changes the way that software is designed, developed, and tested. Over the coming months, we are going to take an in-depth look at software security, the issues involved, and how the nature of software development will and must change.

DDJ

