Testing for Software Security


Dr. Dobb's Journal, November 2002

Herbert is Director of Security Technology for System Integrity LLC (http://www.sisecure.com). James is a professor of computer science at the Florida Institute of Technology. Herbert and James are coauthors of How to Break Software Security (Addison-Wesley). They can be contacted at [email protected] and [email protected], respectively.


Security bugs are different from other types of faults in software. Traditional nonsecurity bugs are usually specification violations; the software was supposed to do something that it didn't do. Security bugs, however, typically manifest themselves as additional behavior—something extra the software does that was not originally intended. This can make security-related vulnerabilities particularly hard to find because they are often masked by software doing what it was supposed to.

Traditional testing techniques, therefore, are not well equipped to find these kinds of bugs. Why? For one thing, testers are trained to look for missing or incorrect output; when they see correct behavior, they move on, neglecting to look for side-effect behaviors that may not be desirable.

For instance, the circle on the left in Figure 1 represents the specification (what the software is supposed to do). The circle on the right represents the true functionality of the application (what the software actually does). Developers and testers are painfully aware that these circles never completely overlap. The area covered only by the left circle represents either incorrect behavior (the software was supposed to do A but did B instead) or missing behavior (the software was supposed to do A and B but did only A). Traditional software testing is well equipped to detect these types of bugs. Security bugs, however, do not fit well into this model. They tend to manifest as side effects: The software was supposed to do A, and it did, but in the course of doing A, it did B as well. Imagine a media player that flawlessly plays any form of digital audio or video, but manages to do so by writing the files out to unencrypted temporary storage. This is a side effect that software pirates would be happy to exploit.

It is important that as you verify functionality, you also monitor for side effects and their impact on the security of your application. The problem is that these side effects can be subtle and hidden from view. They could manifest as file writes or registry entries, or even more obscurely as a few extra network packets that contain unencrypted, supposedly secure data.

Luckily, there are both commercially and freely available tools, such as Mutek's AppSight (http://www.identify.com/products/appsightsuite.html) and Holodeck Lite (http://se.fit.edu/holodeck/), respectively, that let you monitor these hidden actions. Another option is to write your own customized monitoring solution, such as one that injects a custom DLL into the running application's process space.

Creating a Plan of Attack

Software takes input from many different sources. Users, operating-system kernels, other applications, and filesystems all supply input to applications. You have control over these interfaces, and by carefully orchestrating attacks through them, you can uncover many vulnerabilities in the software. Figure 2 is a simple model of software and its interaction with the environment. This model gives you a way to conceptualize these interactions. The four principal classes of input in Figure 2 are:

  • Human interface (UI). Implemented as a set of APIs that get input from the keyboard, mouse, and other devices. Security concerns from this interface include unauthorized access, privilege escalation, and sabotage.
  • Filesystem. Provides data stored in either binary or text format. Often, the filesystem is trusted to store information such as passwords and sensitive data. You must be able to test the way in which this data is stored, retrieved, encrypted, and managed for security.

  • API. Operating systems, libraries, and other applications supply inputs and data in the return values of API calls. Most applications rely heavily on other software and operating-system resources to perform their required functions. Thus, your application is only as secure as the other software it uses and how well equipped it is to handle bad data arriving through these interfaces.

  • Operating-system kernel. Provides memory, file pointers, and services such as time and date functions. Any information an application uses must pass through memory at one time or another. Information that passes through memory in encrypted form is generally safe, but if it is decrypted and stored even momentarily in memory, it is at risk of being read by hackers. Encryption keys, CD keys, passwords, and other sensitive information must eventually be used in unencrypted form, and their exposure in memory must be minimized (see the memory-scanning sketch after this list). Another concern with respect to the operating system is stress testing for low memory and other faulty operating conditions that may cause an application to crash. An application's tolerance to environmental stress can prevent denial of service, as well as situations in which the application crashes before it completes some important task (such as encrypting passwords). Once an application crashes, it can no longer be responsible for the state of stored data. If that data is sensitive, then security may be compromised.
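
To make the memory-exposure concern concrete, here is a minimal sketch (ours, not from any particular toolkit) of how a tester might scan a running process's memory for a known secret, such as a password just typed into the application under test. The process ID and marker string supplied on the command line are assumptions about your test setup, not a prescribed tool.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char* argv[])
{
    if (argc < 3)
    {
        printf("Usage: memscan pid secret\n");
        return 0;
    }
    DWORD pid = atoi(argv[1]);
    const char* secret = argv[2];
    size_t len = strlen(secret);

    HANDLE hProc = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                               FALSE, pid);
    if (!hProc)
    {
        printf("OpenProcess failed (%lu)\n", GetLastError());
        return -1;
    }
    // Walk the target's address space one region at a time and search
    // each committed, readable region for the plaintext marker.
    MEMORY_BASIC_INFORMATION mbi;
    PBYTE addr = NULL;
    while (VirtualQueryEx(hProc, addr, &mbi, sizeof(mbi)) == sizeof(mbi))
    {
        if (mbi.State == MEM_COMMIT && !(mbi.Protect & PAGE_NOACCESS)
                                    && !(mbi.Protect & PAGE_GUARD))
        {
            PBYTE buf = (PBYTE) malloc(mbi.RegionSize);
            SIZE_T got = 0;
            if (buf && ReadProcessMemory(hProc, mbi.BaseAddress, buf,
                                         mbi.RegionSize, &got))
            {
                for (SIZE_T i = 0; got >= len && i <= got - len; i++)
                    if (memcmp(buf + i, secret, len) == 0)
                        printf("Found plaintext secret at %p\n",
                               (void*) ((PBYTE) mbi.BaseAddress + i));
            }
            free(buf);
        }
        addr = (PBYTE) mbi.BaseAddress + mbi.RegionSize;
    }
    CloseHandle(hProc);
    return 0;
}

If the marker turns up long after the application should have discarded or encrypted it, you have found exactly the kind of side effect this attack targets.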

At first glance, it seems as if you could organize a plan of attack by looking at each method of input delivery individually, and then bombard that interface with input. For security bugs, though, most revealing attacks require you to apply inputs through multiple interfaces. With this in mind, we scoured bug databases, incident reports, advisories, and the like, identifying two broad categories of attacks that can be used to expose vulnerabilities—dependency attacks and design-and-implementation attacks.

Attacking Dependencies

Applications rely heavily on their environment to work properly. They depend on the OS to provide resources such as memory and disk space, the filesystem to read and write data, the registry to store and retrieve information, and on and on. These resources all provide input to the software—not as overtly as human users do, but input nonetheless. Like any input, if the software receives a value outside of its expected range, it can fail.

When failures in the environment occur, error-handling code in the software (if it exists) gets called. Error handlers tend to be the weak point of an application in terms of security. One reason for this is that failures in the software's environment that exercise these code paths are difficult to produce in a test lab situation. Consequently, tests that involve disk errors, memory failures, and network problems are usually only superficially explored. It is during these periods that the software is at its most vulnerable and where carefully conceived security measures break down. If such situations are ignored and other tests pass, we are left with a dangerous illusion of security. Servers do run out of disk space, network connectivity is sometimes intermittent, and file permissions can be improperly set. Such conditions cannot be ignored as part of an overall testing strategy. What's needed is a way to integrate these failures into your tests so that you can evaluate their impact on the security of the product itself and its stored data.

Creating environmental failure scenarios can be difficult, usually requiring you to tamper with the application code to simulate specific failing responses from the operating system or some other resource. This approach isn't very feasible in the real world, however, because of the amount of time, effort, and expertise it takes to simulate just one failure in the environment. Even if you did decide to use this approach, the problem is determining where in the code the application uses these resources and how to make the appropriate changes to simulate a real failure in the environment.

One alternative approach is run-time fault injection: simulating errors for the application in a black-box fashion at run time. This approach is nonintrusive and lets you test production binaries, not contrived versions of your applications that have return values hard coded. There are several ways to do this; in the example presented here, we overwrite the first few bytes of the actual function to be called in the process space and insert a JMP statement to our fault-injection code in its place. Other methods can be used as well, such as modifying the import address tables, a technique for which we have found Jeffrey Richter's Programming Applications for Microsoft Windows, Fourth Edition (Microsoft Press, 1999) to be an excellent reference.

Using these techniques, you can redirect a particular system call to your own imposter function. One passive use for this is simply to log events. This can be informative for the security tester because it lets you watch the application for file, memory, and registry activity.
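
For example, a purely passive variant of the imposter in Listing One might do nothing but record each library the application loads before forwarding the call. This sketch assumes the same hooking scaffolding as Listing One (in particular, that real_LoadLibraryExW has already been set up to reach the original function); the log file name is arbitrary.

#include <stdio.h>

// Log every library the application loads, then forward the call
// unchanged through the saved header instructions of the real function.
HMODULE WINAPI logging_LoadLibraryExW(LPCWSTR lpFileName,
                                      HANDLE hFile, DWORD dwFlags)
{
    FILE* log = fopen("c:\\loadlib.log", "a");
    if (log)
    {
        fwprintf(log, L"LoadLibraryExW(%ls, flags=0x%lx)\n",
                 lpFileName, dwFlags);
        fclose(log);
    }
    return real_LoadLibraryExW(lpFileName, hFile, dwFlags);
}

A log like this quickly reveals which libraries the application depends on and when, which is exactly the kind of detective work the attacks below rely on.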

At this point, you are in control of the application and can either forward a system request to the actual OS function or deny the request by returning any error message you choose. This technique is illustrated in the first attack.

Block access to libraries. Applications rely on external software libraries to get work done. Operating-system libraries and third-party DLLs are critical for the application to function properly. As testers and developers, it is your responsibility to ensure that failures here do not compromise the security of your application. By preventing a library from loading, you deprive the application of functionality it expected to use. If the application does not react to this failure by displaying an error message, it may be a sign that appropriate checks are not in place and that the software is unaware this code did not load. If the library in question provides security services, then all bets are off.

You can prevent a library from loading in Windows by intercepting the LoadLibraryExW function. For instance, consider a publicized bug with Internet Explorer's Content Advisor feature (see "Exposing Software Security Using Runtime Fault Injection" in Proceedings of the ICSE Workshop on Software Quality, 2002). If you turn the feature on, all web sites that don't have a RASCi rating are blocked by default. (The Recreational Software Advisory Council, RASCi, rating is assigned to a web site based on its content. This rating system was replaced in 1999, however, by the Internet Content Rating Association, ICRA, rating system.) Listing One is the C++ source code of a DLL you can inject into the application to hook the function LoadLibraryExW on Windows XP. Our DLL overwrites the first few bytes of this function in the process space of the application under test. These bytes are replaced with a JMP statement to the memory address of our imposter function, imposter_LoadLibraryExW.

The problem with IE's Content Advisor is that if IE fails to load the library msrating.dll, users can surf the Web unrestricted. Our imposter function checks to see whether the library that the application is attempting to load is msrating.dll; if so, it blocks the library from being loaded by returning NULL (indicating failure) to the application.

You can uncover clues to library dependencies such as this by changing the code in the imposter function, either to alert you when a specific call is made or log all such calls and their parameters to a file. It then takes a little detective work to determine which services the library is providing to the application and when they are used. With a few modifications to the imposter function, you can then determine what would happen if that functionality were to be denied. Listing Two is the source of the executable used to inject our DLL into the target application's process space.

In addition to LoadLibraryExW, this code can easily be modified to intercept other system calls and monitor and/or selectively deny them at run time. We have developed a freeware tool called "Holodeck Lite" (available electronically at http://se.fit.edu/holodeck/ and from DDJ; see "Resource Center," page 5), using techniques similar to those in Listing One, to help you easily monitor and obstruct common system calls.

Manipulate registry values (Windows specific). The problem with the registry is trust. When developers read information from the registry, they trust that the values are accurate and haven't been tampered with maliciously. This is especially true if their code wrote those values to the registry in the first place. One of the most extreme vulnerabilities is when sensitive data, such as passwords, is stored unprotected in the registry.

More complex information can cause problems too. Take, for example, "try and buy" software, where users have limited functionality, a time limit in which to try the software, or both, and the application is unlocked once it is purchased or registered. In many cases, the check an application makes to see whether users have purchased it is simply to read a registry key at startup. We've found that in the best cases, this key is protected with weak encryption; in the worst, it's a simple text value: 1 for purchased, 0 for trial.
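
As a sketch of how little effort it takes to probe such a check (the key path and value name here are hypothetical stand-ins for whatever the application actually uses), a tester can read the flag and flip it directly:

#include <windows.h>
#include <stdio.h>

int main()
{
    HKEY hKey;
    DWORD type, val, size = sizeof(val);
    DWORD purchased = 1;

    // "Software\\SomeVendor\\SomeApp" and "Purchased" are hypothetical;
    // substitute the key and value the application under test reads.
    if (RegOpenKeyEx(HKEY_CURRENT_USER, "Software\\SomeVendor\\SomeApp",
                     0, KEY_READ | KEY_WRITE, &hKey) != ERROR_SUCCESS)
    {
        printf("Key not found\n");
        return -1;
    }
    if (RegQueryValueEx(hKey, "Purchased", NULL, &type,
                        (LPBYTE) &val, &size) == ERROR_SUCCESS)
        printf("Current flag: %lu\n", val);

    // If the application trusts this value blindly, writing a 1 here
    // unlocks it; a sound design would detect the tampering.
    RegSetValueEx(hKey, "Purchased", 0, REG_DWORD,
                  (const BYTE*) &purchased, sizeof(purchased));
    RegCloseKey(hKey);
    return 0;
}

If flipping the value unlocks the product, the trust the application places in the registry is misplaced.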

Force the application to use corrupt/protected files and file names. A large application may read from, and write to, hundreds of files in the process of carrying out its tasks. It's the tester's job to make sure that applications can handle bad data gracefully, without exposing sensitive information or allowing unsafe behavior.

This attack is carried out by taking a file that the application uses and changing it in some way the software may not have anticipated. For a file containing numerical data that the software reads, for instance, you might use a text editor to insert letters and special characters. If successful, this attack usually results in denial of service, either by crashing the application or by bringing down the entire system. More creative changes may force the application to expose data during a crash that users would not normally have access to.
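
Hand editing works for small files, but corruption is easy to automate. Here is a minimal sketch of a corrupter that overwrites every digit in a data file with a letter; the file name is whatever the application under test actually reads, and the substitution character is arbitrary.

#include <stdio.h>
#include <ctype.h>

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        printf("Usage: corrupt datafile\n");
        return 0;
    }
    FILE* f = fopen(argv[1], "r+b");
    if (!f)
    {
        printf("Cannot open %s\n", argv[1]);
        return -1;
    }
    int c;
    long pos = 0;
    while ((c = fgetc(f)) != EOF)
    {
        if (isdigit(c))
        {
            // Overwrite the digit in place; the fseek calls satisfy the
            // C requirement to reposition between reads and writes.
            fseek(f, pos, SEEK_SET);
            fputc('X', f);
            fseek(f, pos + 1, SEEK_SET);
        }
        pos++;
    }
    fclose(f);
    return 0;
}

Run it on a copy of the file first; the point is to observe how the application fails, not to destroy your only test data.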

Force the application to operate in low-memory/diskspace/network availability conditions. Depriving applications of these resources lets testers understand how robust their application is under stress. Which faults to try, and when, can only be determined on a case-by-case basis. A general rule of thumb, though, is to block a resource when an application seems most in need of it. For memory, this may be during some intense computation the application is doing. For disk errors, look for file writes/reads by the application, then start pounding it with faults. These faults can be simulated relatively easily by modifying the code in Listing One to intercept other system functions, such as CreateFile.
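
Following Listing One's pattern, a sketch of such an imposter might simulate a full disk on every attempted write; the scaffolding that saves real_CreateFileW is assumed to be the same hooking code shown in Listing One.

// Imposter for CreateFileW: simulate a full disk for writes, pass
// reads through untouched. real_CreateFileW is assumed to have been
// set up by the same hooking scaffolding as in Listing One.
typedef HANDLE (WINAPI *createfile_t)(LPCWSTR, DWORD, DWORD,
                LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE);
createfile_t real_CreateFileW;

HANDLE WINAPI imposter_CreateFileW(LPCWSTR lpFileName,
    DWORD dwDesiredAccess, DWORD dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD dwCreationDisposition, DWORD dwFlagsAndAttributes,
    HANDLE hTemplateFile)
{
    if (dwDesiredAccess & GENERIC_WRITE)
    {
        SetLastError(ERROR_DISK_FULL);
        return INVALID_HANDLE_VALUE;
    }
    return real_CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                            lpSecurityAttributes, dwCreationDisposition,
                            dwFlagsAndAttributes, hTemplateFile);
}

Watch what the application does with the failure: does it warn users, retry, or silently drop the data it was about to save?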

Attacking Design and Implementation

It's difficult to identify all the subtle security implications of choices made during the design phase. Looking at a 200-page specification and asking "Is it secure?" will be met with blank looks, even by the most experienced developers. Even if the design is secure, the choices made by the development team during implementation can have a major impact on the security of the product. Here we present some attacks that have been effective at exposing these types of bugs.

Force all error messages. This attack serves two purposes. The first is to see how robust the application is by trying values that should result in error messages and seeing how many are handled properly, improperly, or not at all. The second is to make sure that error messages do not reveal unintended information to a would-be intruder. For example, during authentication, one error message may appear when an incorrect user name is entered and a different one when the user name is valid but the password is wrong. An attacker who sees the second message knows the user name is correct, leaving only one string value to attack: the password.
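
The fix is for both failure causes to produce one indistinguishable message. A minimal sketch (the hardcoded table stands in for a real user store, which would of course hold password hashes rather than plaintext):

#include <stdio.h>
#include <string.h>

struct UserRecord { const char* name; const char* password; };
static const UserRecord users[] = { { "alice", "xyzzy" } };

bool Authenticate(const char* user, const char* password)
{
    for (size_t i = 0; i < sizeof(users) / sizeof(users[0]); i++)
        if (strcmp(users[i].name, user) == 0 &&
            strcmp(users[i].password, password) == 0)
            return true;
    // One message for every failure cause; never "unknown user name".
    printf("Invalid user name or password.\n");
    return false;
}

Part of this attack is simply reading every message the application can emit and asking what each one tells an outsider.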

Seek out unprotected test APIs. Complex, large-scale applications are often difficult to test effectively through the interfaces exposed to ordinary users alone. Sometimes there are multiple builds a day, each of which has to go through some suite of verification tests. To meet this demand, many applications include hooks that are used by custom test harnesses. These hooks and corresponding test APIs often bypass normal security checks done by the application for the sake of ease of use and efficiency. They are added for testers by developers with the intention of removing them before the software is released. The problem, though, is that these test APIs become so integrated into the code and the testing process that when the time comes for the software to be released, managers are reluctant to remove them for fear of destabilizing the code. It is critical to find these hooks and ensure that if they were to make it out into the field, they could not be used to open up vulnerabilities in the application.

Overflow input buffers. The first thing that comes to many peoples' minds when they hear the term "software security" is the dreaded buffer overflow. For this reason, it is important to test an application's ability to handle long strings in input fields. This attack is especially effective when long strings are entered into fields that have an assumed, but often not enforced, length such as ZIP codes and state names.

API calls are notorious for unconstrained inputs. Unlike a GUI, where you can filter inputs as they are entered, API parameters must be dealt with internally, and checks must be done to ensure that values are appropriate before they are used. The most vulnerable APIs tend to be those that are seldom used or that support legacy functionality.
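
A crude but effective probe is to hammer a suspect API with progressively longer strings and watch for a crash. In this sketch, ParseZipCode is a hypothetical stand-in for whatever function you are testing; the stub exists only so the sketch compiles on its own.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Stub standing in for the real function under test; replace this
// with the actual API you are probing.
static int ParseZipCode(const char* zip) { return (int) strlen(zip); }

int main()
{
    for (size_t len = 16; len <= 65536; len *= 2)
    {
        char* input = (char*) malloc(len + 1);
        if (!input)
            break;
        memset(input, 'A', len);
        input[len] = '\0';
        printf("Trying length %lu...\n", (unsigned long) len);
        // A crash or hang here points to a missing length check.
        ParseZipCode(input);
        free(input);
    }
    return 0;
}

Doubling the length each round keeps the run short while still bracketing the size of any fixed internal buffer.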

Connect to all ports. Sometimes applications open custom ports on machines to connect with remote servers. Reasons for this vary from maintenance channels to automatic updates, or the port may be a relic of test automation. There are many documented cases (see http://www.ntbugtraq.com/) where these ports are left open and unsecured. It is important that the same scrutiny that's been given to communications through the standard ports (Telnet, ftp, and so on) be given to these application-specific ports and the data that flows through them.
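
Dedicated scanners exist, but a minimal Winsock sketch shows how little it takes to enumerate listening ports on a test machine (link with ws2_32.lib; the blocking connect calls make this slow, so a real harness would use timeouts or nonblocking sockets):

#include <winsock2.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        printf("Usage: portscan ip-address\n");
        return 0;
    }
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return -1;

    // Attempt a TCP connection to every port and report acceptors.
    for (int port = 1; port <= 65535; port++)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            break;
        sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons((u_short) port);
        addr.sin_addr.s_addr = inet_addr(argv[1]);
        if (connect(s, (sockaddr*) &addr, sizeof(addr)) == 0)
            printf("Port %d is open\n", port);
        closesocket(s);
    }
    WSACleanup();
    return 0;
}

Any port this turns up that you cannot trace to a documented feature deserves the same scrutiny as the application's front door.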

Conclusion

Software security testing must go beyond traditional testing if we ever hope to release secure code with confidence. In this article, we have discussed a fault model that describes a paradigm shift from traditional bugs to security vulnerabilities, and outlined some of the attacks testers can use to better expose vulnerabilities before release. These attacks are only part of a complete security-testing methodology. Research into security vulnerabilities, their symptoms, and habits has only just begun.

Acknowledgments

Thanks to Rahul Chaturvedi for providing code excerpts from Holodeck and to Attila Ondi, Ibrahim El-Far, and Scott Chase for their input on this article.

DDJ

Listing One

#include "stdafx.h"
#include <windows.h>
typedef HMODULE (WINAPI *loadlibrary_t) (LPCWSTR, HANDLE, DWORD);
loadlibrary_t real_LoadLibraryExW;
DWORD dwAddr;
/* Our imposter function for the real LoadLibraryExW. All it does is check 
if the incoming filename is msrating.dll and either returns NULL and 
sets an appropriate error, or lets the call go through to our saved header 
instructions of the real function which then jump to the real function 
in the appropriate location. 
*/
HMODULE WINAPI imposter_LoadLibraryExW(LPCWSTR lpFileName, 
                                                HANDLE hFile, DWORD dwFlags)
{
    if (!_wcsicmp(lpFileName, L"msrating.dll"))
    {
        SetLastError(ERROR_FILE_NOT_FOUND);
        return NULL;
    }
    else
    {
        return real_LoadLibraryExW(lpFileName, hFile, dwFlags);
    }
}
BOOL APIENTRY DllMain( HANDLE hModule, DWORD  ul_reason_for_call, 
                                                     LPVOID lpReserved
                     )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
      // Allocate memory for copying the first few instructions of the target
      // function. Since the granularity of VirtualAlloc is a page, might as 
      // well allocate 4096 bytes
      real_LoadLibraryExW = (loadlibrary_t) VirtualAlloc(NULL, 4096, 
                                       MEM_COMMIT,PAGE_EXECUTE_READWRITE);
     // Copy first two instructions of LoadLibraryExW (which we know add up
     // to 7 bytes - we need 5 for our jump).
     memcpy((void *) real_LoadLibraryExW, (void *)LoadLibraryExW, 7);

     // Writes a jump instruction out right after the copied instructions. 
     // The jump is a relative near jump to the 8th byte of LoadLibraryExW.
     PBYTE pbCode = (PBYTE) real_LoadLibraryExW + 7;

     // Write opcode for jump near and move (write) pointer forward
     *(pbCode++) = 0xe9; 

     // Write out the address of where to jump to using a double-word
     // pointer; the compiler emits it in the x86's little-endian convention.
     PDWORD pvdwAddr = (PDWORD) pbCode;
        
     // Write out the relative address. The +3 is +7 (offset into the
     // function) minus 4 (the length of the rel32 operand itself).
     *pvdwAddr = (DWORD) LoadLibraryExW - (DWORD) pbCode + 3;

     // Move (write) pointer forward the length of the address.
     pbCode+=4; 
     DWORD dwOld, dwTemp;
        
     // Set the page with LoadLibraryExW to writeable
     VirtualProtect((LPVOID) LoadLibraryExW, 4096, 
                                           PAGE_EXECUTE_READWRITE, &dwOld);
     // Write out the jump
     pbCode = (PBYTE) LoadLibraryExW;
        
     // Write opcode for jump near at the beginning of LoadLibraryExW
     *((PBYTE) LoadLibraryExW) = 0xe9; 
        
     // Compiler gymnastics to move forward by *1* byte and not 4 to get
     // the exact address where to write the target address for the jump to.
     pvdwAddr = (PDWORD) (pbCode + 1); 
     dwAddr = (DWORD) pvdwAddr;        
        
     // Write the address
     *pvdwAddr = (DWORD) imposter_LoadLibraryExW - (DWORD) LoadLibraryExW - 5; 

     // Set the old protection back. This is very important for some Win32
     // functions. They refuse to work with writeable protection enabled.
     VirtualProtect((LPVOID) LoadLibraryExW, 4096, dwOld, &dwTemp);

     break;
   }
 return TRUE;
}


Listing Two

#include "stdafx.h"
#include <windows.h>

/* This program uses one of the simplest injection techniques out there. It
utilizes the fact that parameters and calling convention for LoadLibrary are 
the same as the thread function that is supplied to CreateThread/
CreateRemoteThread. It uses that API to call LoadLibrary in the target 
process and load the desired DLL.
*/
int main(int argc, char* argv[])
{
    DWORD dwTemp;
    LPVOID pvDllName;

    if (argc < 3)
    {
        printf("Usage: inject commandline dllname.dll\n");
        return 0;
    }

    // Setup the required structures and start the process
    PROCESS_INFORMATION pi = {0};
    STARTUPINFO si = {0}; si.cb = sizeof(si);
    if (!CreateProcess(NULL, argv[1], NULL, NULL, false, NULL, 
                                                   NULL, NULL, &si, &pi))
        goto error;

    // Allocate memory for the name of the DLL to be loaded, including
    // its terminating null so LoadLibraryA reads a complete string
    if (!(pvDllName = VirtualAllocEx(pi.hProcess, NULL, strlen(argv[2]) + 1, 
                                       MEM_COMMIT, PAGE_EXECUTE_READWRITE)))
        goto error;

    // Write out the name of the target DLL, including the null terminator
    if (!WriteProcessMemory(pi.hProcess, pvDllName, argv[2], 
                                        strlen(argv[2]) + 1, &dwTemp))
        goto error;

   // Technically this will execute LoadLibrary in the target process with 
   // the name of the DLL as the first parameter. This relies on the fact
   // that kernel32.dll will NOT be relocated. Assuming it isn't, the
   // address of LoadLibraryA in the target process is the same as ours
   if (!CreateRemoteThread(pi.hProcess, NULL, NULL, (LPTHREAD_START_ROUTINE) 
   LoadLibraryA, pvDllName, NULL, &dwTemp))
        goto error;
    return 0;
error:
    if (pi.hProcess)
        TerminateProcess(pi.hProcess, 0);
    printf("Error in injection!\n");
    return -1;
}


