Risk Analysis: Attack Trees & Other Tricks

By John Viega and Gary McGraw, August 01, 2002



Most software houses consider security only once or twice during the development lifecycle, if at all. The main motivator is fear. A few companies become concerned during the design phase; however, most wait until potential customers start to ask hard questions.

At design time, software teams often believe that whatever they've done for security is probably fine, and if it isn't, they can go back and fix things later. Thus, while some cursory thinking is devoted to security, design focuses mainly on features. After all, you can show new functionality to investors. They can see progress and get a warm, fuzzy feeling. Security loses out—it doesn't manifest itself as interesting functionality.

When security is most commonly invoked, analysis tends to focus on the finished product and not on the design. Unfortunately, by the time you have a finished product, it's way too late to start thinking about security. At this stage of the game, design problems are usually deeply ingrained in the implementation, and are both costly and time-consuming to fix. Security audits that focus on code can find problems, but they don't tend to find the major flaws that only an architectural analysis can reveal.

Of course, plenty of implementation flaws can be coded right into a sound design. Some such flaws (like buffer overflows) are reasonably easy to fix when they're identified. This means that a code review is sometimes productive. But it's usually not worth anyone's time to hunt for implementation-level problems when an attacker can instead exploit significant design flaws that have been present since the software was designed.

We believe in beginning a risk management regimen early in the project, and continuing it throughout the entire lifecycle. This is the only way to ensure that a system's security reflects all changes ever made to it. One of the flaws of the "penetrate-and-patch" approach is that patches for known security holes often introduce new holes of their own. The same sort of thing can happen when software is being created. Systems that are being modified to address previously identified security risks sometimes end up replacing one risk with another. Only a continual reassessment of risks can help.

A good time to perform an initial security analysis of a system is after you've completed a preliminary iteration of a system design. At this point, you still expect to find problems in your design and fix them, yet you probably have a firm handle on the basic functionality you wish to implement, as well as on the most important requirements you're trying to address. Waiting until you're finished with design is probably not a good idea, because if you consider yourself "done" with design, you'll likely be less inclined to fix design problems uncovered during a security audit.

Only when you're confident about your basic design should you begin to worry about security bugs that may be added during implementation. Per-module security reviews can be worthwhile, because the all-at-once approach can sometimes overwhelm a team of security analysts.

Experience and Objectivity
Who should perform a security analysis on a project? A risk analysis tends to be only as good as the knowledge of the team performing the analysis. There are no checkbox solutions for auditing that are highly effective. Although a good set of auditing guidelines can help jog the memory of experts and can also be an effective training tool for novices, there is no match for experience. It takes expertise to understand security requirements, especially in the context of an entire system. It also takes expertise to ensure that security requirements are actually met, just as it takes expertise on the part of developers to translate requirements into a functional system that meets those requirements. This variety of expertise can only be approximated in guidelines, and poorly at that.

You may have a person on the project whose job is software security. In a strange twist of logic, this person is not the right one to do an architectural security audit. For that matter, neither is anyone else working on the project. People who have put a lot of effort into a project often can't see the forest for the trees when it comes to problems they may have accidentally created. It's much better to get a pair of objective eyes. If you can afford it, an outside expert is the best way to go.

When it comes to doing implementation analysis, the same general principles apply. If you have a security architect who designs systems but does not build them, this person may be a good candidate to review an implementation. However, an implementation analyst must have a solid understanding of programming. Sometimes, excellent security architects aren't the greatest programmers. In such a case, you may pair a highly skilled programmer with the analyst. The analyst can tell the programmer what sorts of things to look for in code, and the programmer can sniff them out.

Even if your security architect is well rounded, he shouldn't work alone; groups of people tend to work best for any kind of analysis. In particular, having a variety of diverse backgrounds always increases the effectiveness of a security audit. Different analysts tend to see and understand things differently. For example, systems engineers tend to think differently than computer scientists.

Similarly, when bringing in outside experts, it can be helpful to use multiple sets. Major financial companies often take this approach when assessing high-risk products. It's also a good technique for figuring out whether a particular analysis team is good. If an outside team isn't finding the more "obvious" problems that other teams have discovered, the quality of their work may be suspect.

Architectural Security Analysis
Use the techniques of your favorite software engineering methodology for performing a risk analysis on a product design. In our preferred strategy, there are three basic phases: information gathering, analysis and reporting.

During the information-gathering phase, the security analyst's goal isn't so much to break the system as it is to learn everything about it that may be important. First, the analyst strives to understand the requirements. Second, he or she reviews the proposed architecture and identifies the areas that seem to be most important in terms of security. Ultimately, the analyst will have a number of questions about the system and the environment in which it operates. When these are answered, it's time to move on to the analysis phase.

This phase frequently raises all sorts of new questions about the system, and there's no harm in this. The phases tend to overlap somewhat, but are distinct. In the information-gathering phase, we may break a system, but we're actually more worried about ensuring that we have a good overall understanding of it. Contrast this against a more ad hoc "red-teaming" (penetration testing) approach. During the analysis phase, we're more interested in exploring attacks that one could launch against a system, but will seek out more information if necessary to help us understand how likely or how costly it will be to launch an attack.

It's unrealistic to think that an analyst won't be trying to conceive of possible attacks on the system during the information-gathering phase—any good analyst will. In fact, such critical thinking is important, because it helps determine which areas of the system aren't understood deeply enough. Although the analyst should be taking notes on possible attacks, formal exploration is put off until the second phase.

Your Documents, Please
The second goal of the information-gathering phase is getting to know the system. A good way to go about this is to get a brief, high-level overview of the architecture from the design team (from the security engineer in particular, should one exist). At this time, the analyst should read all available and relevant documentation about a system, noting any questions or inconsistencies.

If a system isn't documented, or if it's poorly documented, a security analyst will have a hard time doing a solid job. Unfortunately, this often is the case when an analyst is called in to look at a design when the implementation is finished or is in progress. In these cases, the best way for an analyst to proceed is to get to know the system as deeply as possible up-front, via extensive, focused conversations with the development team. This should take a day or two.

This is a good idea even when the system is well documented, because what's on paper doesn't always correlate with the actual implementation, or even the current thinking of the development staff. When conflicting information is found, the analyst should try to find the correct answer and then document the findings. If no absolute answer is immediately forthcoming, document any available evidence so that the development staff may resolve the issue on its own time. Inconsistency is a large source of software security risk.

When the analyst has a good overall understanding of the system, it's time to create a battle plan. The analyst may research the methods or tools used extensively, but must prioritize issues based on probable risk, and budget available time and staff appropriately.

The next step is to research parts of the system in order of priority. Remember to include segments of the system that were not created in-house. For example, shrink-wrapped software used as a part of a system tends to introduce real risk. The analyst should strive to learn as much as possible about the risks of any shrink-wrapped software. He should scour the Internet for known bugs, pester the vendor for detailed information, check out Bugtraq archives, and so on.

When researching parts of the system, questions inevitably arise. During this part of the analysis, providing access to the product development staff may seem to be a good idea because it will produce mostly accurate information quickly. However, you may want to rethink offering this kind of full-bore access to the analyst, because the analyst can easily become a nuisance to developers. Instead, he should interact only with a single contact (preferably the security architect, if one exists), and should batch questions to be delivered every few days. The contact can then be made responsible for getting the questions answered, and can buffer the rest of the development team.

Attack Trees
The analysis phase begins when all the information is gathered. The main goal of the analysis phase is to take the information, methodically assess the risks, rank the risks in order of severity and identify countermeasures. In assessing risk, we like to identify not only what the risks are, but also the potential that a risk can actually be exploited, along with the cost of defending against the risk.

The most methodical way we know of achieving this goal is to build attack trees. Attack trees are a concept derived from "fault trees" in software safety (see Nancy G. Leveson's Safeware: System Safety and Computers [Addison-Wesley, 1995]). The idea is to build a graph to represent the decision-making process of well-informed attackers. The roots of the tree represent potential goals of an attacker. The leaves represent ways of achieving the goal. The nodes under the root node are high-level ways in which a goal may be achieved. The lower in the tree you go, the more specific the attacks become.

In our approach, a pruning node specifies what conditions must be true for its child nodes to be relevant. These nodes are used to prune the tree in specific circumstances, and are most useful for constructing generic attack trees against a protocol or a package that can be reused even in the face of changing assumptions. For example, some people may decide not to consider insider attacks. In our approach, you can have nodes in which the children are applicable only if insider attacks are to be considered.
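To make the structure concrete, here's a minimal sketch of how such a tree, pruning nodes included, might be represented in Python. The class, its fields and the sample figures are our own illustrations, not part of any published tool.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class AttackNode:
        """One node in an attack tree: a goal, or one way of achieving it."""
        description: str
        children: list[AttackNode] = field(default_factory=list)
        # Pruning-node support: children are relevant only when this
        # assumption holds (None means "always relevant").
        condition: str | None = None
        # Decorations assigned during analysis; the figures are illustrative.
        cost: float | None = None    # estimated cost to the attacker
        risk: float | None = None    # estimated risk to the attacker

    root = AttackNode("Goal: intercept a network connection", children=[
        AttackNode("Break the encryption", cost=1_000_000, risk=0.1),
        AttackNode("Obtain a key", children=[
            AttackNode("Social-engineer the pass phrase", cost=500, risk=0.6),
        ]),
        AttackNode("Bribe an insider", cost=10_000, risk=0.8,
                   condition="insider attacks in scope"),  # pruning node
    ])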

Now that we have an attack tree, we need to make it more useful by assigning some sort of value to each node for perceived risk. Here we must consider how feasible the attack is in terms of time (effort), cost and risk to the attacker.

The best thing about attack trees is that data gets organized in a way that is easy to analyze. In this way, it's easy to determine the cheapest attack. The same goes for the most likely attack. How do we organize the tree? We come up with the criteria we're interested in enforcing, and walk the tree, determining at each node whether something violates the criteria. If so, we prune away that node and keep going. This is a simple process as long as you know enough to be able to make valid judgments.
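Continuing the sketch above, one way to code that walk is to drop subtrees whose conditions fail our criteria, then take the minimum over each node's children (child nodes here are alternatives, that is, logical ORs; ANDs are discussed in the sidebar). The function names and criteria are our own:

    def prune(node, assumptions):
        """Drop any subtree whose pruning condition isn't in our assumptions."""
        if node.condition is not None and node.condition not in assumptions:
            return None
        node.children = [kept for kept in
                         (prune(child, assumptions) for child in node.children)
                         if kept is not None]
        return node

    def cheapest_attack(node):
        """Return (cost, path) of the least expensive attack below a node."""
        if not node.children:  # a leaf is a concrete attack step
            return (node.cost if node.cost is not None else float("inf"),
                    [node.description])
        cost, path = min(cheapest_attack(child) for child in node.children)
        return cost, [node.description] + path

    tree = prune(root, assumptions={"insider attacks in scope"})
    print(cheapest_attack(tree))   # the pass-phrase attack, at a cost of 500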

Making valid judgments requires a good understanding of potential attackers. It's important to know an attacker's motivations, what risks the system operator considers acceptable, how much money an attacker may be willing to spend to break the system and so on. If you're worried about governments attacking you, much more of the attack tree will be relevant than if you're simply worried about script kiddies.

Unfortunately, building and using attack trees isn't much of a science. It takes expertise to organize a tree, and a broad knowledge of attacks against software systems to come up with a tree that even begins to be complete.

Putting exact numbers on the nodes is error prone. Again, experience helps.

Building the Tree
How do you compose an outline of possible attacks—the attack tree? First, identify the data and resources of a system that may be targeted. These are the attack tree's goals. Next, identify all the modules, all the communication points between the modules and all the classes of the system's users. Together, these tend to encompass the most likely failure points. Include both in-house software and any shrink-wrapped components. Don't forget the computers on which the software runs, the networks they participate in and so on.

Now, gather the entire analysis team together in a room with a big whiteboard (assuming that they're all familiar with the system at this point). One person "owns" the whiteboard, while everyone starts brainstorming possible attacks.

All possible attacks should make it up onto the whiteboard, even if you don't think they're going to be interesting to anyone. For example, you may point out that someone from behind a firewall could easily intercept the unencrypted traffic between an application server and a database, even though the system requirements clearly state that this risk is acceptable. Why note this down? Because it's a good idea to be complete, given that risk assessment is an inexact science.

As the brainstorming session winds down, organize attacks into categories. A rough attack tree can be created on the spot from the board. At this point, divide up the attack tree between team members, and have the team go off and flesh out their branches independently. Also, have them "decorate" the branches with any information deemed important for this analysis (usually estimated cost, estimated risk and estimated attack effort).

Finally, when each branch is complete, have someone assemble the full document, and hold another meeting to review and possibly revise it.

Implementation Security Analysis
An architectural risk analysis should almost always precede an implementation analysis. The results of the former will guide and focus the latter.

Implementation analysis has two major foci: First, we must validate whether the implementation actually meets the design. The only reliable way to do this is by picking through the code by hand and trying to ensure that things are really implemented as designed. This task alone can be quite time-consuming because programs tend to be vast and complex. It's often reasonable to ask the developers specific questions about the implementation and to make judgments from there. This is a good time to perform a code review as part of the validation effort.

The second focus involves looking for implementation-specific vulnerabilities. In particular, we search for flaws that aren't present in the design. For example, errors like buffer overflows never show up in design (race conditions, on the other hand, may show up there, but only rarely).

In many respects, implementation analysis is more difficult than design analysis because code tends to be complex, and security problems in code can be subtle. The extensive expertise required for a design analysis pales in comparison with that necessary for an implementation analysis. Not only does the analyst need to be well versed in the kinds of problems that may crop up, she needs to be able to follow how the data flows through code.

Auditing Source Code
Analyzing an entire program is more work than most people are willing to undertake. Although a thorough review is possible, most settle for a "good-enough" audit that looks for common problems.

With this in mind, our source code auditing strategy is to first identify all points in the source code where the program may take input from a local or remote user. Similarly, look for any places where the program may take input from another program or any other potentially untrusted source. By "untrusted," we mean a source that an attacker may control. Most security problems in software require an attacker to pass specific input to a weak part of a program. Therefore, it's important that we know all the sources from which input can enter the program. We look for network reads, reads from a file and any input from GUIs.
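A first, crude pass at that inventory can be automated. The sketch below greps C sources for a small, illustrative subset of input calls; a real audit would extend the list (including, per the next paragraph, the team's own input helpers):

    import re
    import sys

    # An illustrative subset of C calls through which untrusted input
    # can enter a program: network reads, file reads, the environment.
    INPUT_CALLS = ["read", "recv", "recvfrom", "fread", "fgets",
                   "gets", "scanf", "getenv"]

    PATTERN = re.compile(r"\b(" + "|".join(INPUT_CALLS) + r")\s*\(")

    def find_input_points(path):
        """Print every line of a C source file that calls an input routine."""
        with open(path) as source:
            for lineno, line in enumerate(source, start=1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.rstrip()}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            find_input_points(path)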

Next, look at the internal API for getting input. Sometimes developers build up their own helper API for getting input. Make sure it's sound, and then treat the API as if it were a standard set of input calls.

Then, look for symptoms of problems. This is where experience comes into play. For example, in most languages, you can look for calls that are symptomatic of time-of-check/time-of-use race conditions. The names of these calls change from language to language, but such problems are universal. Much of what we look for consists of function calls to standard libraries that are frequently misused.
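To show the symptom itself, here is the classic check-then-use shape rendered in Python, where os.access() followed by open() is the usual giveaway; the path and payload are made up:

    import os

    path = "/tmp/report.txt"   # illustrative path

    # Time of check: the test passes...
    if os.access(path, os.W_OK):
        # ...time of use: an attacker who swaps the file between the
        # check and the open (say, for a symlink to a sensitive file)
        # wins the race; the earlier check guarantees nothing here.
        with open(path, "w") as handle:
            handle.write("data")

    # The safer idiom drops the separate check: open the file directly
    # and handle any failure, leaving no check-to-use gap to race.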

Once we identify places of interest in the code, we manually analyze things to determine whether there is a vulnerability—this can be a challenge. (Sometimes it's better to rewrite any code that shows symptoms of being vulnerable, regardless of whether it actually is, because it's rare to be able to determine with absolute certainty, just from looking at the source code, that a vulnerability exists; validation generally takes quite a lot of work.)

Occasionally, highly suspicious locations turn out not to be problems. The intricacies of code may end up preventing an attack, even if accidentally! This may sound weird, but we've seen it happen. In our own work, we're willing to state positively that we've found a vulnerability only if we can directly show that it exists. Usually, it's not worth going through the lengthy chore of actually building an exploit. Instead, we say that we've found a "probable" vulnerability and move on. The only time we're likely to build an exploit is if some skeptic refuses to change the code without absolute proof.

Implementation audits should be supplemented with thorough code reviews. Scrutinize the system to whatever degree you can afford.

Blunt Instruments, Sharp Eyes
Software security scanners still require expert human oversight. Although security tools encode a fair amount of vulnerability knowledge that no longer must be kept in the analyst's head, an expert still does a much better job than a novice of taking a potential vulnerability location and manually performing the static analysis necessary to determine whether an exploit is possible.

Also, even for experts, analysis is time-consuming. A security scanner cuts out only one quarter to one third of the time it takes to perform a source code analysis because the manual analysis is still required. However, when a tool prioritizes one instance of a function call over another, we tend to be more careful about analysis of the more severe problem.

Performing a security audit is an essential part of any software security solution. Simply put, you can't build secure software without thinking hard about security risks. An expertise-driven architectural analysis can be enhanced with an in-depth scan of the code—and as the software security field matures, we expect to see even better tools emerge.

A Partial Attack Tree for SSH, a Protocol for Encrypted Terminal Connections

This outline doesn't cover every attack against SSH, of course. Part of the trick to security analysis is getting the confidence that your analysis is even reasonably complete. Note that most child nodes represent logical ORs. Sometimes we may also need to use logical ANDs. For example, in the attack tree shown here, we can attack the system by obtaining an encrypted private key and the pass phrase used to encrypt it.
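In code terms, an AND node changes the arithmetic from "take the cheapest child" to "pay for every child." Continuing our earlier sketch, and assuming a kind field ("OR" or "AND") added to the node type, a cost evaluation might read:

    def attack_cost(node):
        """Cost to achieve a node: min over OR children, sum over ANDs."""
        if not node.children:
            return node.cost if node.cost is not None else float("inf")
        costs = [attack_cost(child) for child in node.children]
        if getattr(node, "kind", "OR") == "AND":
            return sum(costs)   # e.g., the encrypted key AND its pass phrase
        return min(costs)       # any one child suffices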

Man-in-the-middle attacks tend to be a very high risk for SSH users. The conditionals in that section of the tree are often true, the cost of launching such an attack is relatively low (tools like dsniff can automate the process very well), and there is very little risk to the attacker. This is an excellent area to concentrate on during an analysis.

Looking at the attack tree, we can see that our two biggest risks are probably man-in-the-middle attacks and attacks on a user's password or pass phrase (in each case, attacks can be automated). In fact, this attack tree suggests that most users of SSH could be attacked with relative ease, which may come as a surprise to many.

Goal 1: Intercept a network connection for a particular user.
1. Break the encryption.
   1.1 Break the public key encryption.
      1.1.1 Using RSA?
          1.1.1.1 Factor the modulus.
          1.1.1.2 Find a weakness in the implementation.
          1.1.1.3 Find a new attack on the cryptography system.
      1.1.2 Using El Gamal?
          1.1.2.1 Calculate the discrete log.
          1.1.2.2 Find a weakness in the implementation.
          1.1.2.3 Find a new attack on the cryptography system.
          1.1.2.4 Try to attack the key generation method.
              1.1.2.4.1 Attack the random number generator.
              1.1.2.4.2 Trick the user into installing known keys.
   1.2 Break the symmetric key encryption.
      1.2.1 [details elided]
   1.3 Break the use of cryptography in the protocol.
      1.3.1 [details elided]
2. Obtain a key.
   2.1 User uses public key authentication?
      2.1.1 Obtain private key of user.
          2.1.1.1 Obtain encrypted private key (AND).
             2.1.1.1.1 Break into the machine and read it off disk.
             2.1.1.1.2 Get physical access to the computer.
             2.1.1.1.3 Compel user to give it to you (social engineering).
          2.1.1.2 Obtain pass phrase.
             2.1.1.2.1 Break into machine and install a keyboard driver.
             2.1.1.2.2 Install a hardware keystroke recorder.
             2.1.1.2.3 Try passwords using a crack-like program.
             2.1.1.2.4 Read over someone's shoulder when he or she is typing.
             2.1.1.2.5 Capture the pass phrase with a camera.
             2.1.1.2.6 Capture less secure passwords from the same user and try them.
             2.1.1.2.7 Get the pass phrase from the user (for example, blackmail).
          2.1.1.3 Read the entire key when unencrypted.
             2.1.1.3.1 Break into the machine and read it out of memory (especially on Windows 9X boxes).
             2.1.1.3.2 Launch a "tempest" attack (capture emissions from the computer to spy on it).
   2.2 Obtain a server key.
      2.2.1 [details elided]
3. Obtain a password.
   3.1 [details elided … see 2.1.1.2]
4. Attempt a man-in-the-middle attack.
   4.1 Does the user blindly accept changes in the host key?
      4.1.1 Use dsniff to automate the attack, then intercept all future connections with the same (fake) host key.
   4.2 Does the user accept the host key the first time he or she connects?
      4.2.1 Intercept that first connection, and be sure to intercept all future connections with the same (fake) key!
5. Circumvent software.
   5.1 Compel administrator to run modified daemon.
   5.2 Break in and install modified code.
6. Find a software vulnerability in the client or daemon, such as a buffer overflow.
7. Modify the software distribution.
   7.1 Bribe developers to insert a backdoor.
   7.2 Break into the download sites and replace the software with a Trojan horse version.

Goal 2: Denial of service against a particular user or all users
1. Attack the server.
2. Intercept traffic from the client to the server without delivering it.

 

Source-Level Security Auditing Tools
While you can't automate design analysis, when it comes to combing through code, you're in luck.

Source-code scanners statically search source code for known bad function calls and constructs, such as instances of the strcpy function, which is susceptible to buffer overflows. Currently, four such tools are available:

• RATS (Rough Auditing Tool for Security, www.securesw.com/rats/) is an open-source tool that can locate potential vulnerabilities in C, C++, Python, PHP and Perl programs. The RATS database currently has about 200 items in it.

• Flawfinder (www.dwheeler.com/flawfinder/) is an open-source tool for C and C++ code, written in Python. At the time of this writing, the database has only 40 entries.

• ITS4 (It's The Software, Stupid! www.cigital.com/its4/) is the original security source-auditing tool for C and C++ programs. It currently has 145 items in its database.

• SourceScope (www.cigital.com/solutions/securereview/sr.html), sold by Cigital, is a commercial-grade source-auditing tool for C, C++ and Java. SourceScope improves on token-based scanners by using a parser, and also has an XML-defined set of security rules.

Source-level security auditing tools help focus the implementation analysis, providing a list of potential trouble spots. You can do something similar with grep, but you must remember what to look for every single time. RATS, on the other hand, encodes knowledge about more than 200 potential problems from multiple programming languages. Additionally, these tools perform some basic analysis to rule out false positives. For example, though sprintf() is a frequently misused function, if the format string is constant and contains no "%s", it probably isn't worth examining.
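A toy version of that kind of filtering (our sketch, not any tool's actual logic) might look like this:

    import re

    # Matches sprintf(dest, "literal format", ...). A toy approximation;
    # real tools parse the source rather than pattern-match it.
    SPRINTF = re.compile(r'sprintf\s*\(\s*[^,]+,\s*"([^"]*)"')

    def worth_flagging(line):
        """Flag a sprintf call unless its format is a constant without %s."""
        match = SPRINTF.search(line)
        if match is None:
            return "sprintf" in line    # non-literal format: can't rule it out
        return "%s" in match.group(1)   # constant format: risky only with %s

    print(worth_flagging('sprintf(buf, "%d bytes", n);'))     # False
    print(worth_flagging('sprintf(buf, "name: %s", name);'))  # True
    print(worth_flagging('sprintf(buf, fmt, name);'))         # True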

The scanners also suggest potential remedies and provide a relative assessment of the severity of each problem, to help the auditor prioritize. Such a feature is necessary, because these tools give a lot of output.

One problem, however, is that their databases largely comprise UNIX vulnerabilities. In the future, we expect to see more Windows vulnerabilities added. In addition, it would be nice if these tools were a lot smarter. Currently, they point you at a function (or some other language construct), and it's the responsibility of the auditor to determine whether that function is used properly or not. It would be nice to automate the real analysis of source that a programmer has to do. Such tools do exist, but only in the research lab.

—J. Viega and G. McGraw

This article is abridged from Chapter 6 of Building Secure Software: How to Avoid Security Problems the Right Way (Addison-Wesley, 2002). Reprinted with permission.

Please see Addison-Wesley's Web site for more information.

