Generating Code at Run Time With Reflection.Emit


August 2002

If you're familiar with the Regex class (in the System.Text.RegularExpressions namespace) you may have already noticed that it, too, has the ability to compile your favorite regular expressions into a .NET assembly. In fact, the .NET Common Language Runtime (CLR) contains a whole namespace full of classes to help us build assemblies, define types, and emit their implementations, all at run time. These classes, which comprise the System.Reflection.Emit namespace, are known collectively as "Reflection.Emit."

Java programmers have long enjoyed the benefits of reflection and full-fidelity type information, and .NET (finally) delivers that bit of nirvana to the Windows platform. But the classes in the Reflection.Emit namespace raise the bar even further, allowing us to generate new types and emit new code, dynamically.

And, as the Regex class demonstrates, Reflection.Emit is not just for building compilers (although it's certainly good for that — the JScript .NET compiler makes heavy use of Reflection.Emit). It is a very important bit of software technology in its own right because, when combined with the power of .NET's Intermediate Language (IL), it allows us to do something we've never really been able to do before: generate portable, low-level code at run time.

Hello, Reflection.Emit!

Before we start writing full-fledged compilers, let's take a look at a trivial example of Reflection.Emit in action (Listing 1) to get acquainted with the cast of characters — there are quite a few. Long before we begin emitting any actual code, we have to set up the context in which that code will live. An application domain, an assembly, a module, a type, and a method are all required, at the very least. This is all the same stuff required of any executable .NET code. Even if we never intend on saving this code to disk, the architecture of the .NET run-time environment still requires our code stream to belong to an assembly, a module, and so forth. Emitting the actual code is the very last thing we do (before saving it, or executing it). Figure 1 shows the cascading set of "Builder" classes employed by Reflection.Emit to model this architecture.

The goal of the program in Listing 1 is simple: generate a dynamic assembly that houses a single module and a single type, and exposes a single method (which simply writes a line of text to the console), then save this assembly to disk as an executable file. (For a more useful example of Reflection.Emit in action, the sample code available online includes an RPN arithmetic expression engine.)

Step one is to define a new, dynamic assembly in our current application domain. (Reflection.Emit does not allow us a way to add code to a preexisting assembly.) For our purposes, a weakly named assembly, i.e., one without a cryptographic signature, will suffice.

AssemblyName an = new AssemblyName();
an.Name = "HelloReflectionEmit";
AppDomain ad = AppDomain.CurrentDomain;
AssemblyBuilder ab = ad.DefineDynamicAssembly(an,
    AssemblyBuilderAccess.Save);

Next, we spawn a dynamic module from our assembly. Even though we intend to save the module and assembly as a single file, the two abstractions are distinct to Reflection.Emit — the module represents a physical store of code and resources, and the assembly contains the metadata for those modules. Most of the time, you'll simply want to use the same name for the assembly and module, which is what we do here.

Also, because we intend to save this code to disk, we must specify a filename for the module. We pass the same filename that we'll later pass to AssemblyBuilder.Save(), so that the assembly metadata and the module are merged into a single executable file.

ModuleBuilder mb = ab.DefineDynamicModule(an.Name, "Hello.exe");

Now we're getting somewhere. It's time to declare a type: public class Bar in namespace Foo. Note the "Namespace.Typename" syntax, which is the same syntax used elsewhere in the CLR's Reflection library. This syntax is well documented — a full tour can be had, starting with the MSDN documentation for the System.Type.FullName property.

TypeBuilder tb = mb.DefineType("Foo.Bar",
    TypeAttributes.Public|TypeAttributes.Class);

Reflection.Emit is just like C# (and most other object-oriented programming languages) in that if we neglect to build a default constructor for this type, one will be generated for us, which simply calls the default constructor of the base class. So, to make this new class useful, all we need to do is implement a method. Skipping ahead just a bit, we'll want this method to act as the EXE's entry point, so let's design it as a static method, accepting an array of strings, and returning an integer. The parameters and return type are optional (as is the name "Main"), but hey, if we're going to write a "Hello, World" program, let's do it properly.

MethodBuilder fb = tb.DefineMethod("Main",
    MethodAttributes.Public|
    MethodAttributes.Static,
    typeof(int), new Type[] { typeof(string[]) });

The actual code emitted by Listing 1 — a single call to System.Console.WriteLine() — is described by a short sequence of IL instructions. We'll get better acquainted with IL, the Intermediate Language that is the heart and soul of .NET, in the next section.

// Emit the ubiquitous "Hello, World!" method, in IL
ILGenerator ilg = fb.GetILGenerator();
ilg.Emit(...); // stay tuned 

The TypeBuilder.CreateType() method effectively "closes the door." No new code or data members will be allowed in afterward. So all that's left to do is declare the assembly's subsystem and entry point, then save our shiny new EXE file to disk. Newly created assemblies are born as DLLs by default, unless/until AssemblyBuilder.SetEntryPoint() is called, which effectively transforms the assembly into an EXE file.

// Seal the lid on this type
Type t = tb.CreateType();
// Set the entrypoint (thereby declaring it an EXE)
ab.SetEntryPoint(fb, PEFileKinds.ConsoleApplication);


// Save it
ab.Save("Hello.exe");

To recap what we've done so far: IL code can exist only as the body of a method or constructor. Methods and constructors can only exist within the context of a type (or a module, in the case of global functions). Each type must belong to a module and each module must be associated with an assembly. Even assemblies, when dynamically generated, do not exist in a vacuum — they are associated with an application domain, and thus are given a finite boundary for security, lifetime, and remoting purposes.

Now, let's learn a bit about IL, so we can finish the implementation of our Hello, World program.

IL: A Hitchhiker's Guide

IL is the native language of .NET — what machine language is to a CPU. Once loaded into an application domain, each IL method is translated into native machine code immediately before it's first executed. This process is known as "just-in-time compilation," or JIT compilation. The JIT compiler is a wonderful thing because it decouples the software we ship from any specific hardware platform, and it allows our code to be optimized for whatever hardware and operating system the user happens to be running, present or future. IL and JIT compilation are, at long last, a license for chip manufacturers to innovate.

Most of the Win32 programmers I work with today don't possess an intimate knowledge of x86 assembly language, but it's a funny thing: They're all sufficiently familiar with native x86 code to step through it in a debugger (well, most of them are). Unfortunately, stepping through IL in a debugger is a difficult thing to arrange, unless you've compiled it directly with ILASM.EXE. But IL is worth getting to know anyway, and Reflection.Emit is just one good reason why. At any rate, if you can comprehend x86 at all, even just stepping through the most trivial of functions in a debugger, you'll find IL a refreshing walk in the park.

And if not? Don't worry. With a little help from ILDASM.EXE, you can fake it. ILDASM.EXE is the disassembler tool included with the .NET Framework SDK. It is simply indispensable for learning IL. How do you calculate the cosine of a 64-bit IEEE value in IL? You could dig through a mountain of spec documents... Nah, just write a few lines of C# code, compile, then disassemble. Figure 2 shows ILDASM.EXE in action.
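For instance, here's a sketch of the kind of throwaway code you might feed it (the Trig class and Cosine method are made-up names, purely for illustration). Compile it, point ILDASM.EXE at the resulting assembly, and the disassembly of Cosine() gives the answer away — a load of the argument, a call to [mscorlib]System.Math::Cos(float64), and a ret.

using System;

public class Trig
{
    // Disassemble this to see how a 64-bit cosine is expressed in IL.
    public static double Cosine(double x)
    {
        return Math.Cos(x);
    }
}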

If you have experience with any low-level machine languages at all, the first thing you'll notice about IL is that it is entirely stack based — there are no registers. This makes sense if you consider that IL is designed to be portable to chip architectures other than good ol' x86. After all, who can say how many registers the next generation of chips from Intel and AMD will have? IL avoids making any assumptions along these lines by simply not employing the concept of registers at all. All parameters for every instruction (even simple arithmetic and comparison operations) are either specified explicitly as operands, or taken from the stack. The results of these operations are then pushed onto the stack. The mapping of an IL method's stack space onto the physical chip's register space is left completely to the JIT compiler and its optimization logic.
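To make that stack discipline concrete, here is a minimal sketch — assuming ilg is an ILGenerator for a static method that takes two int32 parameters and returns their sum — of what an addition looks like when emitted. No registers anywhere, just pushes and pops:

ilg.Emit(OpCodes.Ldarg_0); // push the first argument onto the stack
ilg.Emit(OpCodes.Ldarg_1); // push the second argument
ilg.Emit(OpCodes.Add);     // pop both operands, push their sum
ilg.Emit(OpCodes.Ret);     // return whatever is left on top of the stack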

Example 1 is a small sampling of some common IL instructions you may encounter, including the instructions emitted by Listing 1. Every IL instruction consists of an opcode followed by zero or more operands. In general, it's helpful to think of the opcodes beginning with "Ld" as pushing items onto the stack, and the opcodes beginning with "St" as popping items off the stack. For a complete reference, refer to the Common Intermediate Language (CIL) specification (see References). Or, just fire up ILDASM.EXE, and fake it. Now as promised, let's revisit those calls to ILGenerator.Emit() from Listing 1.

The first opcode, Ldstr, will take its operand (a string literal) and push a reference to it onto the stack. For this to work at run time, the string itself must obviously be contained within the module's data section, somewhere. This is where the power of a class library like Reflection.Emit becomes truly apparent — it hides these gruesome details behind such innocent little method calls. We needn't worry about laying out vast, cumbersome data sections to store our strings, and patching up our code with the appropriate references. Reflection.Emit will handle all those messy details for us.

ILGenerator ilg = fb.GetILGenerator();
ilg.Emit(OpCodes.Ldstr, "Hello, World!");

The next opcode, Call, will execute a subroutine (a method). The parameters for the method call will be taken from the stack, and its return value, if any, will be pushed on. The return type of Console.WriteLine() is void, so in this case, nothing will be left on the stack frame afterward.

ilg.Emit(OpCodes.Call,
        typeof(Console).GetMethod("WriteLine",
                new Type[] {typeof(string)} ));

If you plan on generating lots of calls to Console.WriteLine(), you should be aware that the ILGenerator class exposes a method for just that purpose: ILGenerator.EmitWriteLine() generates the exact same code as our example. (Could this be the first assembler ever devised that includes explicit support for creating "Hello, World" sample programs?)
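In other words, the ldstr and call instructions above could have been emitted with a single line:

ilg.EmitWriteLine("Hello, World!"); // shorthand for the same ldstr/call pair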

We finish up our method implementation with two more opcodes: Ldc_I4_0 followed by Ret. This easy sequence pushes the number 0 onto the stack (as a 4-byte integer) then exits the method, effectively returning 0. The Ldc_I4_0 opcode is a special "short form" opcode — it's functionally equivalent to the Ldc_I4 opcode followed by a 4-byte operand of 0x00000000. But loading the value 0 onto the stack is such a common operation, the designers of IL included a number of special shorthand opcodes like this, to keep life easy (and binaries tiny).

ilg.Emit(OpCodes.Ldc_I4_0);
ilg.Emit(OpCodes.Ret);

Similar short form opcodes exist for the constants -1 through 8 (Ldc_I4_M1, and Ldc_I4_1 through Ldc_I4_8), and a not-quite-as-short form (Ldc_I4_S) exists for signed, single-byte operands -128 through +127. However, one must be careful when emitting instructions of this latter type. Consider the following code:

ilg.Emit(OpCodes.Ldarg_S, 13);

Which will generate the corresponding IL:

ldarg.s 13
nop
nop
nop 

Whoa, where did all those nop instructions come from? ILGenerator.Emit() is a very heavily overloaded method. The ldarg.s instruction expects an 8-bit operand, but in our call to ILGenerator.Emit() we accidentally specified a 32-bit value (0x0000000D, aka 13). But why does this matter? To understand, we must consider IL's serialization format.

Each IL instruction is serialized as a simple sequence of bytes, one after the next, with no delimiter between instructions — the byte-size of each IL instruction is simply determined by its opcode. Multibyte operands are serialized in little-endian order (with the low-order bytes before the high-order bytes). When the run time encounters the ldarg.s opcode, for example, it knows that exactly one operand byte follows, and that the next instruction begins immediately after it.

Lucky for us, 0x00 happens to be the bytecode for nop (which does nothing, by design). And IL operands are always written in little-endian form, so the 0x0D will be emitted earliest in memory, and therefore be treated as the operand to our ldarg.s instruction. So, all the superfluous 0x00 bytes (nop instructions) will be tucked neatly behind the end of the ldarg.s 13 instruction.

One could argue that this is a bug in Reflection.Emit — in a perfect world, the ILGenerator.Emit() function would be smart enough to validate its operand-parameters against the opcode specified, and either convert the parameters or throw an exception, as appropriate. Until this is fixed, you'll want to take great care to cast your operands explicitly:

ilg.Emit(OpCodes.Ldarg_S, (sbyte)13);

In a way, ILGenerator.Emit() is so heavily overloaded that it suffers from the same fundamental problem as C's printf() function: The compiler can't be expected to catch errors based on the semantics of the function, and the implementation never quite catches everything you think it should because the matrix of possible input is so complex.

Scared yet? The single-byte short form operations can actually pose far greater dangers than emitting an occasional nop. Remember that the 8-bit operand to short-form instructions is considered a signed value (with range -128 to +127). This means that if you happen to specify an 8-bit value larger than 127, it risks being sign-extended as a negative number by the runtime, and thus misused in ways that are difficult to foresee:

// load an array of double[300] onto the stack
int max = 300;

ilg.Emit(OpCodes.Ldc_I4, max);
ilg.Emit(OpCodes.Newarr, typeof(double));

int idx = 200; // valid array index, but >127

ilg.Emit(OpCodes.Ldc_I4_S, (sbyte)idx);

ilg.Emit(OpCodes.Ldelem_R8); // kaboom: IndexOutOfRangeException

In this code, the value 200 is treated as -56 (or 0xC8) when cast as an sbyte. When the runtime gets around to using this value for something meaningful (like comparing it to the size of an array, which is done implicitly as part of the Ldelem instruction), it will be sign-extended to 32-bits: 0xFFFFFFC8, not 0x000000C8 as one might expect. Clearly, 0xFFFFFFC8 is outside the range of our little array, and this code will blow up at run time, but only when idx>127. Be careful out there.
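The fix here is simply to abandon the short form once an index might exceed 127, and emit the full four-byte load instead:

ilg.Emit(OpCodes.Ldc_I4, idx);   // full 4-byte operand; safe for any int32 index
ilg.Emit(OpCodes.Ldelem_R8);     // now the index really is 200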

The lesson here is clear: Always review a representative sample of the IL code generated by your Reflection.Emit call, for correctness, personally. Don't rely on unit-testing to uncover all the subtle signed/unsigned mismatch problems, let alone discover superfluous nop instructions. Still not convinced? We'll explore still-uglier problems in the next section.

More Fun With IL: Validity, Verifiability, and Security

Now that we're all IL experts (our generated "Hello, World" code in Listing 1 works brilliantly, after all), it's interesting to consider some of the things you can't do with IL (at least from within the confines of a safe/verifiable execution context).

IL certainly offers a "Turing Complete" set of instructions, and then some. However, it has some interesting constraints and limitations when compared to most native machine languages. For the most part, these limitations exist for one of two reasons: to simplify the implementation of JIT compilers (a stated goal of the Common Intermediate Language specification), or to help the run time verify the safety of your code.

The first issue you should be aware of is that there are two flavors of IL opcodes: verifiable and unverifiable. A most tempting example of an unverifiable opcode is cpblk, which is similar in purpose to the C language memcpy() function. If a method contains any unverifiable opcodes such as cpblk, it will not be allowed to execute in restricted, secure contexts (instead, a System.Security.VerificationException will be thrown). The typical example of a "restricted, secure context" is code downloaded from the Internet, but the real definition depends on your users' .NET security policies.

Another thing you might notice about IL is the conspicuous lack of a "peek" instruction, which is curious for a stack-based language. Unlike the system stack used by, say, a native x86 thread, you can't just access any arbitrary value on an IL stack frame. You can only pop values off the top, one at a time. In fact, you can't even peek at the top element, without explicitly popping it off and pushing it back on. One might think this limitation exists for the sake of simplifying the JIT compiler — certainly, limiting the complexity of IL in this way allows for a more efficient JIT compilation experience, enhancing the system's ability to map objects on the IL stack frame onto physical hardware resources (memory, chip registers, and the like) efficiently. But it also provides a measure of security, so that untrusted code can't snoop for interesting information further down the caller's stack frame. If a "peek" instruction did exist, it would almost certainly be unverifiable. Later in this section, we'll also see that it's illegal (invalid IL) to pop off more items than exist on our stack frame.
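If you genuinely need to examine the value on top of the stack without consuming it, the idiom — sketched here for an int32, assuming ilg is your ILGenerator — is exactly what the previous paragraph describes: pop it into a local, then push it right back.

LocalBuilder temp = ilg.DeclareLocal(typeof(int));
ilg.Emit(OpCodes.Stloc, temp); // pop the top of the stack into a local
// ...the value can now be reloaded and inspected as often as needed...
ilg.Emit(OpCodes.Ldloc, temp); // push it back, restoring the stack to its prior state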

While it's possible to execute unverifiable IL in an appropriately trusted context, it's never possible to execute invalid IL. That may seem like a painfully obvious statement, but unfortunately Reflection.Emit makes it very easy to generate invalid IL, so the topic bears further examination.

Remember our earlier example, where we accidentally emitted an ldarg.s instruction with a 4-byte operand, and so produced a few unexpected nop instructions? The converse situation (specifying a too-small operand) is actually far worse:

ilg.Emit(OpCodes.Ldarg, (sbyte)13); // ldarg expects a 16-bit operand

Here, the ldarg instruction is expecting a 16-bit operand, but we only give it 8 bits. This emitted code will fail spectacularly, but not because 8 bits are missing from the operand — the "missing" bits will be taken from the opcode of the following instruction (remember our earlier discussion of IL's serialization format), effectively transforming the remainder of the codestream into useless garbage.

When we think about correctness and validity in a stream of low-level code, we typically think about the aforementioned problems (invalid opcodes and such). But the correctness and validity of IL is a far more subtle matter.

For example, a method cannot be allowed to grow its stack frame to an indefinite size (or reduce it below zero). You may have already noticed the .maxstack directive output in all methods disassembled by ILDASM — this qualifier exists on all IL methods, to inform the system that the stack size for the given method will never exceed a clearly defined, finite depth.

The JIT compiler contains a code-safety verifier, which will walk the branches of each method's flow-control logic, making careful note of the number (and type) of items on the stack at each instruction point. (Note that this is possible only because the number and type of items consumed/produced on the stack is well-defined by each IL opcode.) If the depth of a method's stack frame at any instruction point ever exceeds .maxstack or drops below zero, or if two incompatible stack states are ever deemed possible at a single instruction point, the system will throw an InvalidProgramException.

To illustrate this rule, consider the following bit of IL, which is invalid:

.method private hidebysig static
    void Kaboom1(int32 x) cil managed
{
  // if (x == 0) goto S1;
  ldarg.0
  brfalse.s S1

  ldc.i4 13 // kaboom: InvalidProgramException

S1:
  // return;
  ret
}

The ldc.i4 13 instruction alters the state of the stack frame by pushing a 4-byte integer — but this instruction may or may not be skipped by the conditional branch instruction (brfalse.s S1) that precedes it. When the JIT compiler gets around to this method, it will be unable to determine the depth of the stack frame at S1, and it will complain, loudly. To fix the above code, simply remove the ldc.i4 13 instruction (or replace it with a nop instruction, or follow it by a pop instruction, or... you get the idea).

To summarize, more generally: Correct IL must never allow a branch instruction to result in two or more incompatible stack-states, at any single instruction point. By "incompatible stack-state" we refer not only to the depth of the stack, but also to the types of the items on it. (Note we should say "nature," rather than "type," because the CLI only considers a small subset of the basic types we know and love, for the purposes of IL validation: object references, addresses, and the basic numeric types. Refer to the CIL spec for complete details.)

By way of (bad) example, consider the following IL code. It is invalid because the nature of the item on the stack at S2 is indeterminate (maybe int64, maybe float64):

.method private hidebysig static
    double Kaboom2(int32 x) cil managed
{
  // if (x == 0) goto S1;
  ldarg.0
  brfalse.s S1

  ldc.i8 7 // push((int64)7);
  br.s S2 // goto S2;

S1:
  ldc.r8 13.0 // push((float64)13);
  // kaboom: InvalidProgramException
S2:
  // return (pop());
  ret
}

To fix this code, one might simply insert a conv.r8 instruction after the ldc.i8 7 instruction. This would keep the state of the stack frame at S2 healthy, happy, and deterministic.
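If you were generating Kaboom2 with Reflection.Emit rather than hand-written IL, the repaired method body would look roughly like the following sketch (kaboom2 is a hypothetical MethodBuilder for a static method taking an int32 and returning a double; DefineLabel() and MarkLabel() supply the branch targets):

ILGenerator gen = kaboom2.GetILGenerator();
Label s1 = gen.DefineLabel();
Label s2 = gen.DefineLabel();

gen.Emit(OpCodes.Ldarg_0);       // push x
gen.Emit(OpCodes.Brfalse_S, s1); // if (x == 0) goto S1

gen.Emit(OpCodes.Ldc_I8, 7L);    // push((int64)7)
gen.Emit(OpCodes.Conv_R8);       // the fix: convert to float64 before branching
gen.Emit(OpCodes.Br_S, s2);      // goto S2

gen.MarkLabel(s1);
gen.Emit(OpCodes.Ldc_R8, 13.0);  // push((float64)13)

gen.MarkLabel(s2);
gen.Emit(OpCodes.Ret);           // both paths now leave a float64 on the stack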

The topic of security and verifiability in IL is an interesting one, deserving an article all its own. As a bare minimum, it's important to understand how the CLR's verifier (and the JIT compiler) will be evaluating your code, before you undertake a nontrivial project with Reflection.Emit. Otherwise, you might labor all night to produce a brilliant work of IL art, only to get an InvalidProgramException for your efforts when you execute your generated code, the next day.

Would You Like Your Code for Here, or To Go?

When creating a dynamic assembly with Reflection.Emit, you must declare, ahead of time, what you plan on doing with it. Do you want to run it or save it? Or both? (Of course, if your answer is "neither," then you probably should have stopped reading this article long ago.)

You must make a special note of what you pass for the AssemblyBuilderAccess parameter to the AppDomain.DefineDynamicAssembly() method, because it affects how you must call some of the AssemblyBuilder methods, later (namely, AssemblyBuilder.DefineDynamicModule(), and of course, AssemblyBuilder.Save()).

This portion of the Reflection.Emit API is a bit schizophrenic. The reason for the schizophrenia is that there are really two different use-cases for generating dynamic assemblies: "transient" dynamic assemblies (created with the AssemblyBuilderAccess.Run flag), which are never intended to be written to disk, and "persistable" dynamic assemblies (created with AssemblyBuilderAccess.Save or AssemblyBuilderAccess.RunAndSave), which are.

public enum AssemblyBuilderAccess
{
    Run        = 1, // transient
    Save       = 2, // persistable
    RunAndSave = 3  // persistable
}
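For contrast with the persistable "Hello.exe" assembly built earlier, here's a sketch of the transient flavor (the names are hypothetical). Notice that no filename appears anywhere, and attempting to Save() such an assembly will simply fail:

// A transient dynamic assembly: run-only, never written to disk.
AssemblyName an2 = new AssemblyName();
an2.Name = "TransientStuff";
AssemblyBuilder ab2 = AppDomain.CurrentDomain.DefineDynamicAssembly(
    an2, AssemblyBuilderAccess.Run);
// No filename argument -- this overload creates a transient, in-memory module.
ModuleBuilder mb2 = ab2.DefineDynamicModule(an2.Name);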

One could argue that the DefineDynamicAssembly() method should have been refactored into two distinct methods, perhaps named something like DefinePersistableDynamicAssembly() and DefineTransientDynamicAssembly(), rather than switching semantics based on a flag specified at run time. But nobody ever consults me about these things, so we're stuck with using the AssemblyBuilderAccess enumeration to specify, at run time, what we would like to simply declare at design time.

The AssemblyBuilderAccess flags do have other implications, with respect to the lifetime of dynamically generated code. The .NET run time never garbage-collects code. This is just as true for code generated with Reflection.Emit as it is for conventionally loaded assemblies — the lifetime of all executable code is tied to the lifetime of its application domain. Put another way, the only way to unload code is to unload its application domain. However, if your dynamic assembly's code is generated via AssemblyBuilderAccess.Save, then the code is not immediately eligible for execution — it exists only as a stream of bytes in memory, and is therefore fair game for garbage collection.

Clearly, one should take care when designing systems that will have users generating lots of code into transient dynamic assemblies, because you'll be consuming resources that won't go away unless/until the hosting application domain goes away.

In fact, it's interesting to consider how the .NET regular expression classes are designed in this regard, because they don't really offer an easy way for callers to deal with this problem. The Regex class allows one to specify an option (RegexOptions.Compiled) which will cause the underlying implementation of the regex state machine to be generated, via Reflection.Emit, into a transient dynamic assembly.

[Flags]
public enum RegexOptions
{
   None                    = 0x00,
   IgnoreCase              = 0x01,
   Multiline               = 0x02,
   ExplicitCapture         = 0x04,
   Compiled                = 0x08, // Reflection.Emit!
   Singleline              = 0x10,
   IgnorePatternWhitespace  = 0x20,
   RightToLeft             = 0x40,
   ECMAScript              = 0x100
}
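Using the option couldn't be simpler — the sketch below (with a made-up date pattern) compiles the regex state machine into IL behind the scenes, into a transient dynamic assembly in the current application domain:

// The state machine for this pattern is emitted as IL, and that code
// will live as long as the current AppDomain does.
Regex datePattern = new Regex(@"\d{4}-\d{2}-\d{2}", RegexOptions.Compiled);
bool found = datePattern.IsMatch("2002-08-01");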

This feature greatly increases performance for frequently executed regex searches, at some expense of startup time (Reflection.Emit is not terribly slow, but it's not free). However, there's another caveat: Unless you go to great lengths to instantiate and compile your Regex objects within their own application domain, there'll be no way to release the resources they consume — once generated, the IL code that represents the regex state machine will not be released even when the corresponding Regex object is freed and garbage-collected. The only way to unload the compiled regexes' code is to unload the entire application domain.

This is all well and good if your app needs only a fixed set of regexes, which are well known at design time. But it's probably inappropriate if, for example, you're implementing a search engine where users can enter new and various regexes all day long.

There are two ways to work around this problem with RegexOptions.Compiled. One workaround involves a two-phase approach: use the static Regex.CompileToAssembly() method to save your compiled regexes to disk as a persistable dynamic assembly (say, "MyTempRegexes.dll"), then load that assembly into a new application domain via AppDomain.Load().
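A minimal sketch of that first workaround might look like this (the pattern and names are hypothetical; RegexCompilationInfo describes each regex to be compiled into the assembly):

using System.Reflection;
using System.Text.RegularExpressions;

// Phase one: compile the regex(es) into a persistable assembly on disk.
RegexCompilationInfo info = new RegexCompilationInfo(
    @"\d{4}-\d{2}-\d{2}",   // pattern
    RegexOptions.None,      // options
    "DateRegex",            // name of the generated Regex-derived type
    "MyTempRegexes",        // namespace for the generated type
    true);                  // make the generated type public

AssemblyName an2 = new AssemblyName();
an2.Name = "MyTempRegexes";
Regex.CompileToAssembly(new RegexCompilationInfo[] { info }, an2);

// Phase two: load the resulting MyTempRegexes.dll into a separate application
// domain, so that unloading that domain later releases the generated code.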

The other alternative is to instantiate the Regex within a remote application domain, directly, via AppDomain.CreateInstance(). This solution is a little cleaner, but requires a bit more typing: because the Regex class is [Serializable] and does not derive from System.MarshalByRefObject, the resulting object (and its IL code) will end up back in your own application domain unless we build a remotable wrapper class (call it, say, MarshalByRefRegex) to expose the underlying regex functionality across the app domain boundary.
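A bare-bones version of that wrapper might look like the following sketch (the class name comes from the suggestion above; which Regex methods you expose is up to you). Instantiate it in the remote application domain — e.g., via AppDomain.CreateInstance() followed by Unwrap() — and unload that domain when the compiled regexes are no longer needed.

using System;
using System.Text.RegularExpressions;

// Deriving from MarshalByRefObject keeps the object (and the IL that
// RegexOptions.Compiled generates) in whatever AppDomain created it;
// only a proxy crosses the app domain boundary.
public class MarshalByRefRegex : MarshalByRefObject
{
    private Regex regex;

    public MarshalByRefRegex(string pattern)
    {
        regex = new Regex(pattern, RegexOptions.Compiled);
    }

    public bool IsMatch(string input)
    {
        return regex.IsMatch(input);
    }
}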

This is an important reminder: Application domains are more than just code-lifetime boundaries — they are also marshaling boundaries. So if we employ either of these techniques to segregate our transient dynamic assemblies into their own app domains, we will incur a small performance penalty each time we access them, because the calls must be marshaled across an app domain boundary. The performance hit will be light because the calls are only being marshaled intraprocess, but it will still likely consume much of the performance gain achieved by compiling with Reflection.Emit in the first place — depending on your application, of course. Remember, this issue is only relevant in the (arguably rare) case that we care about unloading our emitted code. In all other cases, we are free to use Reflection.Emit within our own app domains, without worry.

Conclusion

In much the same way that XML has saved us from ever again needing to design low-level file formats for our apps, Reflection.Emit technology may save us from ever again having to devise our own state machines (like regular expression engines) that are so commonplace in advanced applications.

Close your eyes, and imagine the possibilities: parsers, interpreters, state machines, static table-lookup code... All of these things and more can now be implemented as fast, native code, optimized for each individual user's platform, thus offering a level of performance never before attainable. The Age of The Interpreter might finally be over.

The downloadable sample code accompanying this article includes a parser and an engine for evaluating RPN arithmetic expressions. The design follows very closely after the classes in the System.Text.RegularExpressions namespace to offer a familiar look and feel, but also to offer the same level of control over how the code is generated.

References

The CIL Instruction Set Specification http://msdn.microsoft.com/net/ecma/PartitionIIICILOct01.pdf/

Download code at www.wd-mag.com


Chris Sells is an independent consultant, specializing in distributed applications in .NET and COM, as well as an instructor for DevelopMentor. He's written several books, including ATL Internals, which is in the process of being updated for ATL7 as you read this. He's also working on Essential Windows Forms for Addison-Wesley and Mastering Visual Studio .NET for O'Reilly. In his free time, Chris hosts the Web Services DevCon (November, 2002) and directs the Genghis source-available project. More information about Chris, and his various projects, is available at http://www.sellsbrothers.com.

Shawn Van Ness is a software engineer and consultant, specializing in .NET, COM and XML technologies. Shawn has contributed numerous tools and technologies for software development to the public domain. Find out more at http://www.arithex.com.

