Dr. Dobb's Journal February 1997: Space Shuttles, Tomato Cans, and Teenage Daughters

Al is a DDJ contributing editor. He can be contacted at [email protected].


The theme of this month's issue is testing, so I'll devote the first part of this column to a reminiscence of my participation in a large test project.

During the Apollo program, I was a certified ACE operator. ACE, short for "Acceptance Checkout Equipment," was a configuration of consoles, computers, sensors, and transducers NASA used to test space capsules on the launch pad. After a one-week class, I was qualified to operate an ACE console. In movies about space exploration, when the astronauts talk to Mission Control, the guys in Houston always have crew cuts, sit at consoles, wear white shirts and ties, chain-smoke cigarettes, and look either cool or worried depending on what's happening. I was not one of those guys. Mission Control is in Houston. I worked at Kennedy Space Center; we tested the vehicles and lit the fuses, but Houston ran the show--a gift from Lyndon Johnson to his constituents.

Despite my certification, I never got to sit at an ACE console. They sent me to school to learn the system so that I could write programs to support it. Here's how it worked. Imagine a console like the ones that you see in the movies. Imagine me 30 years younger with a crew cut, sitting at the console. Smoking a cigarette. Looking cool. Because nothing ever goes wrong on my watch. Or when it does, I'm still cool.

The console has buttons and dials for input, and gauges and lights for output. The input and output devices are connected to a computer. The computer can sense when I press a button or change a dial's setting. The computer can turn the console's indicator lights on and off, and position the needles in the gauges.

Now imagine an array of cables that hang out the back of the computer. They are connected to D/A and A/D converters in the computer. The other ends are connected to sensors and transducers that can themselves be connected to something -- typically a space capsule or launch vehicle -- that is to be stimulated and tested. The computer reads the digital values converted from the sensors' analog data and writes digital values to be converted into analog values for the transducers to emit.

I am imagining this configuration along with you because they never let me actually see it. You had to look like Ed Harris or Gregory Peck to be allowed in the room with the computer and consoles. I was more the Rick Moranis, Wally Cox type. (Hunk/nerd actors selected from two generations so that readers from both generations can relate.)

In the space program, every launch is unique. Space exploration is an ongoing R&D activity, a never-ending management of crises, so the configuration of every vehicle is different in one way or another from those that precede it. Consequently, the procedures for testing a vehicle had to be custom-designed and custom-built for each particular launch. A complex table of parameters told the ACE computer which transducer values to change when the ACE operator pressed a console button or changed a dial and what console indicator to change based on the values read by the sensors. The sensors and transducers were connected to the vehicle at appropriate test points. A crew-cut person called the Test Conductor directed everything from a test procedure script while the crew-cut ACE operators punched the buttons, turned the dials, and read and reported the results. My role was to write programs on a different computer (which I was allowed to see) to generate the test procedure scripts for the Test Conductor and the test parameters for the ACE computer.

Pure oxygen and sparks don't mix. In January 1967, three astronauts perished in the infamous Apollo fire during such a test, and new occupations resulted in the space program. Spin doctors. Blame shifters. Fault deflectors. The darkest six-month period in my professional memory followed, and I left the space program not to return for 13 years.

History repeated itself in January 1986 with the Challenger accident. I wasn't involved with the vehicles anymore, but I watched every launch. Still do. It is interesting to observe one significant difference between those two moments in history. A cultural difference. After the Apollo fire, there were no jokes. But something happened to the American sense of humor in the interim. Somewhere between Gregory Peck and Rick Moranis, we turned to humor as a means of dealing with the emotionally unacceptable. Today, every national tragedy is followed almost immediately by a spate of jokes. Take it from me, you can't watch seven people get blown to Kingdom Come without being changed, and you can't see anything funny about it.

NASA still tests everything, of course, and they look for innovative, efficient ways to solve the unique problems of testing things that are headed for space. In the 1980s, the emphasis turned to Shuttle payloads. A Space Shuttle carries cargo -- its payloads -- into space. That's its purpose, and, typical of space exploration, every launch has a unique payload configuration. Ultimately, if budgets return, the Shuttle will be used to carry pieces of the Space Station to where astronauts can float around and assemble the Station like a huge orbital erector-set project. All those parts and pieces will be manufactured by different companies in different locations. Specific components will join one another in a Shuttle's cargo bay to be ferried into orbit. It costs a lot of money to put something into orbit. Anything that goes up needs to be tested thoroughly to ensure that we're not wasting fuel and rockets and time by taking junk into space. Every payload component needs to be tested at the factory, in the payload processing facilities, on the launch pad, and in orbit.

One of NASA's goals was to extend the old ACE concept so that, rather than use a file of cryptic parameters, a computer could run an interpreted language that expressed the test sequence in procedural program code. They called this concept the Space Station Operational Language (SSOL). The idea was that a common language could be hosted by different platforms to run the same program to test a component at different locations with the same results. It would have the control constructs of a structured programming language with the ability to directly address hardware registers to control the I/O devices. Sound familiar? Sounds like C. Furthermore, the language had to be intuitive so that engineers, installers, and astronauts who do not write programs all the time could still use it. Nope. Does not sound like C.

I got involved because of my C background and because, true or not, C was widely touted as the only portable programming language extant. C was a likely candidate, not to be SSOL, but to implement SSOL to run on various platforms. C++ was not well known outside of AT&T at the time, but based on what I had read about Smalltalk, I was pushing for an object-oriented approach for SSOL, an interpretable extension to a C-like syntax without the C gotchas that trip up even veteran programmers from time to time. Nobody really knew what I was talking about. Neither did I at the time. As it turns out, I was talking about Java.

The SSOL project eventually faded into obscurity as budget cuts took their toll on the Space Station program. (SSOL became, so to speak, S.O.L.) The results of the study were filed for later resurrection should the money ever start flowing again. When that happens, when we decide again to fund and follow our natural impulse to explore new worlds (after we balance the budget, reduce the deficit, reform campaign financing, eliminate war and world hunger, save Social Security and Medicare, sanction same-sex marriage, and forget about O.J. Simpson), the SSOL project, or something like it, will be resuscitated because the problems remain to be solved.

One of the obstacles in that project was our lack of, not vision, but forward-looking visibility. NASA planned the Space Station to be a 30-year program. We were charged with defining a programming language that would remain relevant for that duration. But when we considered how programming had changed in the prior 30 years, it seemed impossible that anyone could predict in 1987 what programming would be like in 2017. Now, in 1997, the state of programming has already changed in ways that were totally unforeseen then. Given the recent advances in visual programming, I'd love to get another crack at that language.

MIDI Potential

ACE was a huge process-control system applied to solve a test-bed problem. Process control conjures images of many things. Robotics. Real time. Embedded systems. I always imagine a conveyor belt moving tin cans through the factory. One part of the process dumps tomatoes into the cans. A second stage seals the cans. A third pastes labels on the cans, and so on.

That rendition suggests the traditional, pre-1980s fear that computers are replacing workers. Years ago, when someone asked what I did for a living, I would say that I was a computer programmer. That response was inevitably met with a litany of complaints about how computers were putting people out of work. Eventually, I learned to keep my mouth shut. Years later, when computers entered the mainstream and people became accustomed to having them around, I relaxed and once again began admitting my profession when asked. Now my response is met with a detailed description of the other person's latest PC acquisition and a request that I drop by one day next week for a drink and, oh by the way, maybe I could help install Windows 95. I've got to learn to keep my mouth shut.

A process-control computer senses and commands the components of an assembly line. A well-formed process-control system is as generic as the ACE system. It senses generic events and knows how to command generic control devices. A table of parameters -- a program -- tells the system what those devices are and how to enact the procedures of the particular process.

Recently, I watched such a process being demonstrated on a TV program about the technology of making movies. This particular segment was about sound effects. The sound effects technician (artist, really) sat at two keyboards -- an electronic 88-key piano keyboard and a PC keyboard. His 21-inch computer monitor displayed what was obviously a sequencer program, such as Cakewalk or WinJammer, running under Windows 95. A second video monitor displayed the action of the movie. The technician played notes -- pressed keys -- on the electronic keyboard and his sound system generated sound effects. Not musical notes, mind you, but atonal, eerie sounds. He used combinations of notes to generate combinations of sounds to create the spooky sound effects required by the movie he was supporting.

This piqued my interest. While developing MidiFitz (a real-time rhythm accompaniment program that I published in this column last year), I gained an interest in the Musical Instrument Digital Interface (MIDI), which is the underlying technology that drives electronic music. That TV documentary was about an application of MIDI not related to conventional music at all. (Unless you, like Dracula, think that sounds such as the howling of wolves are music.) A visit to the local newsstand revealed several magazines devoted to electronic music and musical productions. Notable among them is Keyboard magazine, another Miller Freeman publication. They feature articles mainly about the technology, but include a lot of photos and information about the performers as well. Like the crew cuts of 30 years ago, these guys all tend to look alike. They frown a lot, eschew shirts with sleeves, and sport earrings, body piercing, and tattoos. You might think they are cool now, but wait until one of them is at your door to take your teenage daughter to the prom.

What I learned from these magazines comes as no surprise to the cognoscenti, that MIDI is the process-control technology that drives not only the drum machine, but the lights, smoke, lasers, and anything else electro-mechanical associated with a rock concert extravaganza. My teenage daughter once played one of her albums for me and went on and on about the production effects at the concert she attended. In the finale, she said, the band went berserk and smashed all their instruments on the stage. I listened to the album and opined that they had it backwards; they should've smashed those instruments before they were allowed to play them. She responded with a "Faw-ther!" and left the room.

The sound-effects tech on the TV documentary had combined several MIDI and PC components to construct a sound-effects workbench. He used waveform tools to build the individual sounds. Then he loaded these sound files into a musical sampler, which is normally used to store the sampled recordings of real instrument notes. He assigned the samples as patches (voices) of the virtual MIDI instruments. Then, when he played notes on the keyboard, sound effects came out instead of music.

MIDI Fundamentals

MIDI is an electrical specification and a protocol. The most remarkable thing about MIDI is that in an unprecedented spirit of cooperation, an entire industry embraced the MIDI standard without being forced to by market pressure.

When you press a key on a MIDI keyboard, the keyboard sends a three-byte serial data packet to its MIDI Out port. The keyboard might also generate audio sounds to synthesize the notes of an organ or an acoustic piano, but that action is unrelated to the MIDI scenario. The first byte of the packet identifies the event and a four-bit channel number. The event is, in this case, the Note On message. The second byte identifies which note you pressed. A full-size keyboard has 88 keys, and its notes are assigned the values 21 to 108. The third byte of the MIDI-event message packet contains a value called the velocity, which numerically represents how hard you pressed the key when you played it.

When you release the key, the keyboard sends a corresponding Note Off message. The keyboard's controller devices, such as the sustain pedal and the pitch bend wheel, send controller messages when you use them.
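
To make the packet layout concrete, here is a minimal sketch, in the spirit of the listings at the end of this column, of how these messages pack into the DWORD that the Win32 midiOutShortMsg function expects. The status byte occupies the low byte with the two data bytes above it; the note and velocity values here are arbitrary examples of mine.

#include <windows.h>    // DWORD, BYTE

// Pack a 3-byte MIDI channel message for midiOutShortMsg:
// status in the low byte, then data byte 1, then data byte 2.
DWORD PackMidiEvent(BYTE status, BYTE data1, BYTE data2)
{
    return (DWORD)status | ((DWORD)data1 << 8) | ((DWORD)data2 << 16);
}

// Note On for middle C (note 60) on channel 0 at velocity 64...
DWORD noteOn  = PackMidiEvent(0x90 | 0, 60, 64);
// ...and the matching Note Off when the key comes back up.
DWORD noteOff = PackMidiEvent(0x80 | 0, 60, 0);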

MIDI includes other messages to signal the start and stop of a performance, clock synchronizing events, and so on.

You can connect a keyboard's MIDI Out port to any device that has a MIDI In port. Many keyboards have a MIDI In port. If you connect two keyboards, the notes you play on one are heard through the audio playback of the other.

Coordination of all these messages into a performance is the job of a sequencer device, which is usually a program running on a PC or Mac. All the instruments are connected to the sequencer in a daisy chain or in a star network. The sequencer operator (a person) assigns each instrument device to one of the 16 channels, and the sequencer program assigns one of the standard MIDI voices to each of the instrument channels. There are 128 standard voices called "patches," and they are organized into instrument groups with one set of voices for pianos, another for strings, and so on. The drum machine has its own set of patches.
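
The patch assignment is itself just another MIDI message. As a hedged sketch (the channel and patch numbers are arbitrary examples of mine, and hMidiOut stands for a handle already opened with midiOutOpen): the Program Change message is status 0xC0 plus the channel number, followed by a single data byte naming the patch, so assigning General MIDI patch 40 (a violin) to channel 3 looks like this.

// Program Change: status 0xC0 | channel, one data byte (patch 0-127)
DWORD programChange = 0xC0 | 3 | (40UL << 8);
midiOutShortMsg(hMidiOut, programChange);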

Sequencer programmers (people) use the sequencer program to build Standard MIDI File (SMF) files containing tracks of MIDI events, with each track assigned to a channel. The SMF file also records data related to the overall production, such as time signature and tempo. (Contemporary sequencer programs can also mix audio in with the MIDI events, but that technology is outside the MIDI standard and involves proprietary file formats.)
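
The SMF container itself is simple enough to outline from the published standard. This is a descriptive sketch, not code from this column's project; all multibyte fields are big-endian.

// Standard MIDI File layout:
//   "MThd"    4 bytes   header chunk ID
//   6         4 bytes   header chunk length, always 6
//   format    2 bytes   0 = single track, 1 = parallel tracks, 2 = sequences
//   ntrks     2 bytes   number of track chunks that follow
//   division  2 bytes   ticks per quarter note (or SMPTE timing)
// Each track chunk is "MTrk", a 4-byte length, and a stream of
// delta-time-prefixed MIDI events -- the tracks described above.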

During playback, the sequencer program reads the SMF file and sends the MIDI event messages to the MIDI devices via the sequencer's MIDI Out port. Each device reacts only to the events assigned to the device's channel, which is assigned by the system operator (one of the musicians, usually) when he or she sets up the system. The MIDI system plays the production, and the live musicians jump around the stage and play along, adding the human element to the performance and exciting your teenage daughter to dangerous levels. If it is done well, the audience can imagine that a 30-piece orchestra and several lighting and effects technicians are hidden away backstage.

So, the studio and the stage and the factory have a lot in common. It is told that a musician showed up for a recording date to find 30 other musicians setting up in the studio for the session. The newly arrived musician looked around and said, "I hope you people realize that you are putting three sequencer operators out of work."

MIDI versus Process Control

Back to the factory. Consider the cost of process-control peripherals and their interfaces in the computer. Might there be something already in place that could do the same job? The Mac has MIDI connectors built in. Every PC with a Sound Blaster or compatible sound card has MIDI In and Out hardware ports. All you need is an adapter cable that takes the signals from the game port adapter to corresponding MIDI In and Out five-pin DIN connectors. Turtle Beach includes such a cable with its sound cards. The Sound Blaster MIDI Interface Adapter is an extra-cost option. These adapters are not just cables, by the way. There are some built-in electronic components.

Windows 3.1/95/NT have built-in MIDI drivers and applets. Sequencer programs are readily available.

Suppose that you built your factory devices or your payload-testing devices or the auto-animatronic figures at your theme park with MIDI interface hardware. You could orchestrate an assembly line or a widget test or the Gettysburg Address with an electronic keyboard and a sequencer.

What are the advantages of using MIDI input/output hardware for process controllers instead of other standard interfaces, such as the serial port? The main advantage is software. The drivers are built into the operating system. Sequencers are plentiful and inexpensive. The SMF file format is standard. You can play your sequence through the Media Player applet that comes with Windows. You can simulate input devices with a standard MIDI keyboard.

What are the disadvantages? One might be bandwidth limitations. Another might be the absence of error detection and correction.

Electronic musicians have come to know and work with the limitations of the MIDI bandwidth. The speed of the sequencer computer is a factor. The timing of MIDI messages is a function of the tempo assigned to the performance, which essentially throttles the data rate. The bandwidth to support that data rate is a function of how many devices are in the network, how many notes typically need to be played on those devices simultaneously, and whether the network topology is a daisy chain or a star.
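
The arithmetic behind those limits is worth working out. MIDI's wire speed is fixed at 31,250 bits per second, and each byte travels as 10 bits -- a start bit, 8 data bits, and a stop bit. A 3-byte Note On message therefore occupies the cable for 30 bits, or roughly a millisecond, which caps a single MIDI cable at about 1,000 messages per second shared among every device downstream of it.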

MIDI is a real-time protocol, and, as such, it has no built-in error detection and correction. It's a lossy protocol in that it doesn't really matter if an occasional event gets lost. If you miss a cello note from somewhere deep inside a symphony, no one notices, or, if they do, the consequences are only aesthetic -- assuming that they are not employed as music critics. There are no ACKs and NAKs, no CRCs or checksums, and no retransmitted messages. You might stop and replay a passage when you are practicing, but on-stage you stay cool, pretend the mistake didn't happen, and just keep playing.

You can miss a wolf howl, cello note, or tomato can, or your auto-animatronic Abe Lincoln can skip a gesture, but the consequences of missing a critical voltage reading or hatch configuration could be dire. Therefore, MIDI's feasibility as a process-control or testing protocol depends on the real-time requirements of the events and the critical nature of each one.

MIDI Xchg

I wondered just how fast and how accurately a computer could send MIDI messages to another computer without data loss. To that end, I wrote MIDI Xchg, the program included with this column. Its purpose is to stress the MIDI data stream by sending event messages from one device to another as fast as possible. I used Visual C++ 4.2 and Windows 95 for the test platform.

To run MIDI Xchg, connect the MIDI In port of one PC to the MIDI Out port of the other and vice versa. Run the program in both PCs. Choose the MIDI/Devices menu command in both PCs and select MIDI devices on the Select MIDI Devices dialog box, shown in Figure 1, to correspond with the PC's physical cable ports rather than the sound card synthesizers. Choose the Receive command on the MIDI menu of one of the PCs. Then choose the MIDI/Send Data command on the other. The sending PC begins sending Note On messages to its MIDI Out port. The data stream consists of repeated loops of 128 notes in the sequence of 0 to 127 so that the receiving PC can determine if any notes are missing in the stream. Wait a few seconds and choose the MIDI/Stop Sending command to stop the data stream. The receiving computer reports the number of events received, the number of notes dropped, the number of events per second, and the approximate effective baud rate just to provide a familiar measure with which to compare the performance. Figure 2 shows the application window after a session.

The MIDI Xchg Implementation

MIDI Xchg is an unremarkable MFC application with no document/view architecture but with application and frame classes. There are ten source-code files and a resource file. The entire project is available electronically; see "Availability," page 3. The application and frame classes each have headers and .cpp files. I adapted the MidiIn (see Listings One and Two) and MidiOut classes (Listings Three and Four) from MidiFitz for this project. Those classes each have a header and a .cpp file, and those are the classes I'll discuss here. The two classes encapsulate enough of the Windows MIDI API to support this test. When the program instantiates the two objects, their constructors call the midiOutGetNumDevs and midiInGetNumDevs API functions to determine the number of MIDI devices available. Both classes have a DeviceList member function that loads a CListBox object, passed as a reference argument, with the names of the installed devices. MIDI Xchg's Select MIDI Devices dialog box uses these functions.

When you select MIDI devices, the program calls the ChangeDevice member function, which closes the current devices and opens the new ones. The midiInOpen API function has a parameter that is the address of a function for the system to call when a MIDI in message is received. The MidiIn class includes a RegisterMIDIFunction member function so that the program can provide the address of its MIDI-message input function. This function must be called ahead of ChangeDevice so that the MidiIn object has a function address to provide to midiInOpen from ChangeDevice.
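
In practice, the setup order looks something like this hedged sketch, where MidiInProc stands for whatever callback function the application supplies and deviceIndex is the selection from the device listbox:

MidiIn midiIn;
midiIn.RegisterMIDIFunction((void*)MidiInProc);  // must come first...
if (!midiIn.ChangeDevice(deviceIndex))           // ...so midiInOpen gets
    AfxMessageBox("Cannot open MIDI In device"); //    a valid callback address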

When the sending PC begins sending event messages, it first sends the MIDI start event message by calling MidiOut::StartMessage. When you choose the Stop Sending command, the program calls MidiOut::StopMessage to send the MIDI stop message. The receiving program uses these two messages to time the data flow.
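
Putting the pieces together, the sending side reduces to a tight loop over the 128 note values, bracketed by those start and stop messages. Here is a minimal sketch using the MidiOut class from the listings; the stop flag and the function name are my own simplification, not the exact project code.

// Send Note On messages as fast as possible, cycling through notes
// 0..127 so the receiver can detect any gap in the sequence.
void SendTestStream(MidiOut& midiOut, volatile BOOL& stopFlag)
{
    midiOut.StartMessage();           // 0xfa marks the start of timing
    BYTE note = 0;
    while (!stopFlag)   {
        // Note On, channel 0, fixed velocity 64
        midiOut.SendEvent(0x90 | ((DWORD)note << 8) | (64UL << 16));
        note = (note + 1) & 0x7f;     // wrap 127 back to 0
    }
    midiOut.StopMessage();            // 0xfc marks the end of timing
}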

Conclusion

While working on this program, I learned several things about the way that the Win32 API handles MIDI. First, it buffers MIDI input messages so that if the program does not get around to processing them right away, they are delivered later. I was using the callback window option rather than the callback function for MIDI input messages so that I could post their reception in the program's main frame client window. You can't call most system functions from a callback function. The Win32 overhead for receiving the messages and displaying them was high enough that the messages backed up in the receiver. When I stopped the transmission, there was a burst of backed-up messages displayed, and the last bunch of them had a lot of missing messages.

By changing to a callback function and using that procedure to count the messages received and the messages dropped, I eliminated the bottleneck, and the MIDI Xchg receiver was able to receive and process as many messages as the MIDI Xchg sender could send.
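
A callback built along those lines might look like the following sketch. The counter names and the timing are my own illustration of the technique, not the column's exact code; note the use of timeGetTime, one of the few system calls Windows permits inside a low-level multimedia callback.

// Counters updated only inside the callback; the UI reads them
// after the MIDI stop message arrives.
static DWORD received, dropped, startTick, elapsedMs;
static BYTE  expected;

void CALLBACK MidiInProc(HMIDIIN hIn, UINT wMsg, DWORD dwInstance,
                         DWORD dwParam1, DWORD dwParam2)
{
    if (wMsg != MIM_DATA)
        return;
    BYTE status = (BYTE)(dwParam1 & 0xff);
    if (status == 0xfa) {                // start: reset and begin timing
        received = dropped = 0;
        expected = 0;
        startTick = timeGetTime();
    }
    else if (status == 0xfc)             // stop: capture the elapsed time
        elapsedMs = timeGetTime() - startTick;
    else if ((status & 0xf0) == 0x90) {  // Note On: check the sequence
        BYTE note = (BYTE)((dwParam1 >> 8) & 0x7f);
        dropped += (BYTE)((note - expected) & 0x7f);
        expected = (note + 1) & 0x7f;
        ++received;
    }
}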

The sender sends the messages in a tight loop by calling the MidiOut::SendEvent function with the Note On messages already formatted. That function calls the API's midiOutShortMsg function. It became apparent from the results that midiOutShortMsg does not return until the message has been sent, or, at least, waits until the previous message has been sent and the output port is available. That strategy effectively enforces the MIDI data rate, which turns out to be something close to 28K baud, about 1040 events per second, which ought to be plenty fast enough to paste labels on tomato cans.

DDJ

Listing One

// ----- midiin.h
#ifndef MIDIIN_H
#define MIDIIN_H

#include <mmsystem.h>
class MidiIn    {
    void* m_pFunc;          // address of the application's MIDI-in callback
    HMIDIIN hMidiIn;        // handle to the open MIDI input device
    short int numDevices;   // number of installed MIDI input devices
    short int currDevice;   // index of the open device, -1 if none
public:
    MidiIn();
    ~MidiIn();
    void RegisterMIDIFunction(void* pFunc)
        { m_pFunc = pFunc; }
    void DeviceList(CListBox* dlist);
    BOOL ChangeDevice(short int device);
};
#endif



Listing Two

// ---- midiin.cpp

#include "stdafx.h"
#include "midiin.h"


MidiIn::MidiIn()
{
    m_pFunc = 0;
    hMidiIn = 0;
    currDevice = -1;
    numDevices = midiInGetNumDevs();
}
MidiIn::~MidiIn()
{
    if (hMidiIn != 0)   {
        midiInStop(hMidiIn);
        midiInClose(hMidiIn);
    }
}
void MidiIn::DeviceList(CListBox* dlist)
{
    ASSERT(dlist != 0);
    MIDIINCAPS icaps;
    for (short int i = 0; i < numDevices; i++)  {
        midiInGetDevCaps(i, &icaps, sizeof(icaps));
        dlist->AddString(icaps.szPname);
    }
}
BOOL MidiIn::ChangeDevice(short int device)
{
    ASSERT(device < numDevices);
    ASSERT(m_pFunc != 0);
    if (device != currDevice)   {
        if (hMidiIn)    {
            // --- close the current device
            midiInStop(hMidiIn);
            midiInClose(hMidiIn);
            hMidiIn = 0;
        }
        // --- open the new device and start reading MIDI input
        HMIDIIN hIn;
        if (midiInOpen(&hIn,device,(DWORD) m_pFunc,0,CALLBACK_FUNCTION) != 0) {
            currDevice = -1;
            return FALSE;
        }
        hMidiIn = hIn;
        midiInStart(hMidiIn);
        currDevice = device;
    }
    return TRUE;
}



Listing Three

// ----- midiout.h
#ifndef MIDIOUT_H
#define MIDIOUT_H

#include <mmsystem.h>
class MidiOut   {
    HMIDIOUT hMidiOut;      // handle to the open MIDI output device
    short int numDevices;   // number of installed MIDI output devices
    short int currDevice;   // index of the open device, -1 if none
public:
    MidiOut();
    ~MidiOut();
    void DeviceList(CListBox* dlist);
    void SendEvent(DWORD dwEvent);
    void StartMessage();
    void TimingMessage();
    void StopMessage();
    BOOL ChangeDevice(short int device);
    void CloseDevice();
};
inline void MidiOut::SendEvent(DWORD dwEvent)
{
    if (hMidiOut != 0)
        midiOutShortMsg(hMidiOut, dwEvent);
}
#endif



Listing Four

// ---- midiout.cpp

#include "stdafx.h"
#include "midiout.h"


MidiOut::MidiOut()
{
    hMidiOut = 0;
    // ---- count the installed MIDI output devices
    numDevices = midiOutGetNumDevs();
    currDevice = -1;
}
MidiOut::~MidiOut()
{
    CloseDevice();
}
void MidiOut::CloseDevice()
{
    if (hMidiOut != 0)  {
        // --- close the device
        midiOutClose(hMidiOut);
        hMidiOut = 0;
    }
}
BOOL MidiOut::ChangeDevice(short int device)
{
    ASSERT(device < numDevices + 1);
    if (device != currDevice)   {
        CloseDevice();
        // --- open the new device
        HMIDIOUT hOut;
        if (midiOutOpen(&hOut, device, 0, 0L, 0L) != 0) {
            currDevice = -1;
            return FALSE;
        }
        currDevice = device;
        hMidiOut = hOut;
    }
    return TRUE;
}
void MidiOut::StartMessage()
{
    if (hMidiOut != 0)  {
        DWORD mmsg  = 0xfa;
        midiOutShortMsg(hMidiOut, mmsg);
    }
}
void MidiOut::TimingMessage()
{
    if (hMidiOut != 0)  {
        DWORD mmsg  = 0xf8;
        midiOutShortMsg(hMidiOut, mmsg);
    }
}
void MidiOut::StopMessage()
{
    if (hMidiOut != 0)  {
        DWORD mmsg  = 0xfc;
        midiOutShortMsg(hMidiOut, mmsg);
    }
}
void MidiOut::DeviceList(CListBox* dlist)
{
    ASSERT(dlist != 0);
    MIDIOUTCAPS ocaps;
    for (short int i = 0; i < numDevices; i++)  {
        midiOutGetDevCaps(i, &ocaps, sizeof(ocaps));
        dlist->AddString(ocaps.szPname);
    }
}




Copyright © 1997, Dr. Dobb's Journal

