ESC and SDC 2003


Jan04: Embedded Space

Ed is an EE, PE, and author in Poughkeepsie, New York. You can contact him at [email protected].


If there was an unofficial theme to the Embedded Systems and Software Development Best Practices Conferences held in Boston last September (both sponsored by CMP Media, DDJ's parent company), it was that the tech industry has pretty much hit bottom with nowhere to go but up. Whether that's true won't be known for a few years, although the attendance figures make it seem likely.

ESC remains a shadow of its former self, neatly tucked into Hynes Convention Center Exhibition Hall C. Attendance was up, despite fewer companies exhibiting their wares, and the classes were well attended.

SDC took place in the Sheraton's conference center this year, rather than in the cavernous spaces of Hynes. Much to everyone's surprise, the venue was oversubscribed: too many people jammed into too-small rooms. While that's better than having too few people to overcome the refrigerated air, it did make for some sweaty meetings. If SDC moves back to larger quarters next year, we'll count it as an up-tick.

This month, I'll report on some general trends and events, then get into more detail later.

Perverted Computing

Dave Stewart led a Shop Talk session on "Pervasive Computing," which the group defined as stuffing computing power into nearly all everyday objects, then connecting all those soup cans, faucets, magazines, and light fixtures together with low-speed networking. Unlike ubiquitous computing, where your laptop sniffs out a network connection anywhere you unfurl it, pervasive computing drags everything into the net.

Slipping into my Cassandra persona, I suggested a somewhat different application for those pervasive computers. Unit-cost requirements will demand ruthlessly standardized, high-volume, low-price machinery running open-source software that's free of per-unit royalties. There pretty much won't be any security at the device level because there's no budget for per-unit configuration—they'll be identical except for a unique serial number.

During the course of the conferences, several speakers reminded us that crackers (and security testers) can and will take advantage of any error to gain control of a system. Imagine what a cracker could do with known hardware, known software, network access, and all the time in the world!

If you think DDoS attacks from a few thousand workstations pose a problem, imagine what might happen when a few million soup cans get in on the act. Think it's not possible?

Contemporary viruses know how to flatten software virus checkers and firewalls. Given that most folks never change the default password of the administration account on their cable/DSL firewalls, eventually someone will write a virus that flattens those firewalls, too. After that, crackers will have access to your pervasive computing network and can pervert it as they see fit. Or maybe they don't need to flatten the firewall in the first place.

Why not? The reactor monitoring network at FirstEnergy's Davis-Besse nuclear plant was attacked by the Slammer worm. As it turns out, there was no firewall between the reactor network and the corporate datacenter network, so that when an employee's laptop carried Slammer behind the corporate firewall, both networks collapsed. If security on pervasive networks follows the same model we've been using so far, it'll fail in the same manner. More on this from the Risks Forum at http://catless.ncl.ac.uk/Risks/22.90.html.

One saving grace may be that a large fraction of the pervasive hardware will be too stupid for malignant behavior. I suspect we'll eventually have screen-printed micros with a 1-bit datapath running on Tredennick's weak ambient light because, after all, sending a few bits per second, or even per minute, will suffice, particularly when a single communication squirt requires accumulating energy over the course of minutes. As with contemporary portable devices, the power budget limits what's achievable.
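
That power budget invites a back-of-the-envelope check. In this sketch, every number (the harvested microwatt, the radio draw, the burst length) is an illustrative assumption, not a measured value:

```python
# Back-of-the-envelope energy budget for an ambient-light-powered node.
# Every figure here is an illustrative assumption, not a measurement.

HARVEST_POWER_W = 1e-6    # assume ~1 microwatt scavenged from weak indoor light
TX_POWER_W = 10e-3        # assume ~10 mW radio draw while transmitting
BURST_SECONDS = 0.01      # assume a 10 ms communication squirt

burst_energy_j = TX_POWER_W * BURST_SECONDS        # energy one squirt costs
charge_seconds = burst_energy_j / HARVEST_POWER_W  # time to bank that energy

print(f"each burst costs {burst_energy_j * 1e6:.0f} microjoules")
print(f"harvesting time per burst: {charge_seconds:.0f} s")
```

At those assumed numbers, the node banks energy for well over a minute per squirt, which puts it squarely in the bits-per-minute regime described above.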

DARPA has been funding research on the low-speed, low-power networks that are the next-most-critical part of the pervasive problem. Search for DARPA along with "pervasive networks" to find a year's worth of reading or get a core dump at http://www.darpa.mil/body/pdf/FINAL2003FactFilerev1.pdf. Start at http://www.autoidcenter.org/ for a peek at the future of pervasive ID, with more at http://wow-robotics.usc.edu/~gaurav/Papers/IEEE-pervasive.pdf.

Cracking the Code

Testing seems to be regarded as a necessary evil: If we could just get it right the first time, there would be no need for testing to verify the results. Unfortunately, we seem unable to get it right the first time and, worse still, successive iterations often introduce new errors. What's left? Having users root out bugs the hard way, one by one, in released code? That's what we've come to accept, it seems.

James Whittaker, a computer-science professor at Florida Institute of Technology and director of its Center for Information Assurance, demonstrated how to bump testing up a notch in a pair of talks based on his books How to Break Software (Pearson-Addison-Wesley, 2002; ISBN 0201796198) and How to Break Software Security (Pearson-Addison-Wesley, 2003; ISBN 0321194330). If you think of "attacking" instead of "testing," you have the right idea; he explains that those terms rivet the attention of his students to the problem at hand.

He demonstrated the principles on a pair of laptops running Windows with a collection of popular software. It seems that Windows itself will crash after a succession of application attacks, which provided a clear example of how to break code through unexpected mechanisms. He switched between two laptops to ensure that the presentation could continue despite repeated reboots.

Whittaker emphasizes that while any fool can stumble over a bug, it takes discipline and organization to track down, isolate, and systematize each one. Each point of his lecture had an accompanying live-fire crash, some requiring nothing more than moving the cursor over a carefully formatted spreadsheet cell or pasting a rather long string into a dialog box.

Many security attacks use features (or, more exactly, "malfeatures") of the code that aren't in the specifications. While comprehensive prerelease testing can establish that the software correctly performs all the functions detailed in the specification, those tests generally do not verify that the software does not have additional capabilities. Buffer overflows, debugging APIs left in the final code, and the program's response to invalid inputs can provide entry for a determined attacker.
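
Whittaker's attack mindset translates directly into test code. Here's a minimal sketch, with a hypothetical parse_quantity() handler standing in for real input-handling code: probe it with oversized and malformed inputs and insist on a clean rejection, never a crash or a silent accept.

```python
# Attack-style tests for a hypothetical input handler. The point is to probe
# behavior the specification never mentions, not just the happy path.

def parse_quantity(text: str) -> int:
    """Hypothetical handler: accepts an order quantity of 1..999 as digits."""
    if len(text) > 3 or not text.isdigit():
        raise ValueError(f"bad quantity: {text[:20]!r}")
    value = int(text)
    if not 1 <= value <= 999:
        raise ValueError(f"out of range: {value}")
    return value

hostile_inputs = [
    "A" * 100_000,            # oversized string, the classic overflow probe
    "-3",                     # negative quantity
    "1e9",                    # notation the spec never mentions
    "",                       # empty input
    "12; DROP TABLE orders",  # injection-flavored garbage
]

for evil in hostile_inputs:
    try:
        parse_quantity(evil)
        print(f"ACCEPTED (bug!): {evil[:20]!r}")
    except ValueError:
        pass  # rejected cleanly, which is exactly what we want
```

Any input that escapes the except clause is a finding: the code did something beyond what the specification promised.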

One particularly chilling example showed how to make money on the Internet. What level of security should you expect when users can access your HTML code, modify it, then hand it back to your servers? Without going into details, suffice it to say that most online storefronts will blithely accept negative line-item quantities after you blitz their input range-checking code. You'll pay shipping, the store's back end might reject the transaction, and maybe it's a crime. But he showed how and why it works!
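
The trick depends on a back end that trusts whatever the browser hands back. A sketch of why it works, with hypothetical item names, prices, and function names:

```python
# Why the negative-quantity trick works: a naive checkout that trusts the
# line items the client submits. All names and figures are hypothetical.

def naive_total(line_items):
    """Sums price * quantity with no server-side range check."""
    return sum(price * qty for price, qty in line_items)

# A tampered form submission: one legitimate item plus a $500 item with
# quantity -3, sent after the client-side range-checking was stripped out.
tampered_order = [(19.99, 1), (500.00, -3)]
print(naive_total(tampered_order))  # a negative total: the store owes money

def checked_total(line_items):
    """Same sum, but re-validates on the server regardless of the client."""
    for price, qty in line_items:
        if qty < 1:
            raise ValueError(f"rejected quantity {qty}")
    return sum(price * qty for price, qty in line_items)
```

The moral is the standard one: client-side checks are a courtesy to honest users, and the server must re-validate everything.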

James Whittaker and Herbert Thompson coauthored "Testing for Software Security" (DDJ, November 2002), which should get you started, while Thompson and Scott Chase coauthored "Red-Team Application Security Testing" (DDJ, November 2003). Some of Whittaker's more pithy observations reside at http://www.se.fit.edu/people/James/misc.html. Find a Windows PC for the Software Aerobics link at the bottom of the page.

No Silver Bullet(s)

The last presentation on the last day of SDC's program was an eye opener. Jasper Kampermann of Reasoning Inc. (http://www.reasoning.com/) presented the results of the company's code inspections for several open-source and proprietary programs. Because he couldn't give specific numbers for proprietary code (that's why it's "proprietary," after all), he compared open-source numbers to both industry averages and Reasoning's own findings.

Reasoning's code inspections pick out common coding errors: null pointers, memory leaks, uninitialized variables, bad allocations, and so forth. While these bloopers form only a subset of all possible errors, they're hard to find and have devastating consequences. Reasoning's analysis uses synthetic execution to identify possible problems, followed by manual examination of the results to weed out false positives.
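
Reasoning's tools worked on languages such as C; as a language-neutral sketch, here are two of those bug classes, an uninitialized variable and an unchecked null, rendered in Python:

```python
# Two of the bug classes code inspections target, sketched in Python:
# a variable initialized on only one path, and a "null" used without a check.

def find_user_buggy(users, name):
    # Uninitialized variable: 'match' is bound only when the loop hits.
    for u in users:
        if u["name"] == name:
            match = u
    return match  # blows up (UnboundLocalError) on the no-match path

def find_user_fixed(users, name):
    match = None              # initialize on every path
    for u in users:
        if u["name"] == name:
            match = u
    if match is None:         # null-pointer analog: check before use
        raise KeyError(name)
    return match
```

A synthetic-execution tool earns its keep precisely on paths like the no-match case, which ordinary happy-path testing rarely exercises.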

Four different TCP/IP protocol stack implementations gave baseline numbers for mature routines with well-known functions and features. The Linux code had 0.1 error/KLOC, compared with 0.55 error/KLOC for the commercial versions. While one sample does not a trend make, Reasoning dryly concludes that, "Open Source is not inherently worse" than proprietary code.

An analysis of Apache's Tomcat server provoked some surprise, because Tomcat is programmed in Java—a language designed to reduce programming errors. Reasoning's results show a defect density of 0.24 error/KLOC, half that of the overall average and twice that of more mature C code. Reasoning observes that, "Java is not the silver bullet."

Code inspections can find programming errors early in the cycle, when they're easier and cheaper to fix than waiting until a customer stumbles over them in the field. Walkthroughs help, automated inspections help, more eyes help. If you write code, get help!

Reasoning has several white papers that present more details and information about their tools, techniques, and results at http://www.reasoning.com/downloads.html.

Testing by the Numbers

Maybe there's something about being last, because the final ESC presentation was also a delight. David Agans explored "Debugging When Luck Fails and Prayers Go Unanswered." Because embedded systems now fuse hardware and software into a monolithic brick, you may find that you cannot debug them without a battle plan.

Agans boiled down decades of experience into Nine Golden Rules of Debugging that can identify errors in any device, any system, any gizmo. He points out that these Rules don't help with certification, regulation, or prevention; they're for use when all else fails. In fact, the evidence I've seen suggests that errors in systems designed with high reliability in mind tend to be more baffling than simple failures in mundane gadgets.

Perhaps the most important rule is "Make It Fail" before you start fixing it. Convert that intermittent glitch into a hard failure so you can examine it carefully. Sometimes you can't do that (and he has suggestions for such situations), but if it doesn't fail all the time, you'll never know when it's fixed.
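
One common way to Make It Fail is brute repetition: wrap the flaky operation in a harness that hammers it until the glitch appears, with a fixed seed so even the "randomness" is repeatable. Here flaky_operation is a stand-in for your actual intermittent bug:

```python
# "Make It Fail": hammer an intermittent operation until the failure is
# reproducible on demand. flaky_operation is a stand-in for a real race
# or timing bug; the 2% failure rate is an arbitrary illustration.
import random

def flaky_operation(rng):
    if rng.random() < 0.02:
        raise RuntimeError("intermittent glitch")

def make_it_fail(operation, max_tries=10_000):
    """Repeat until the glitch shows up; report which attempt tripped it."""
    rng = random.Random(42)  # fixed seed: the whole run is itself repeatable
    for attempt in range(1, max_tries + 1):
        try:
            operation(rng)
        except RuntimeError:
            return attempt   # now you have a recipe: same seed, same count
    return None              # never failed: widen the conditions and retry

print(make_it_fail(flaky_operation))
```

Once the harness reports a deterministic recipe, you can attach instruments and study the failure at leisure instead of waiting for luck.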

How often have you found a simple, obvious cause after a protracted session spent examining all the wrong things? He reminds you to "Check the Plug" right at the start: Make sure your assumptions are correct.

Do you jump instantly from a single observation to its cause? Instead, "Quit Thinking and Look" to find all the symptoms before you start reasoning. Only after you demonstrate the error and show the evidence can you begin looking for the cause. If you jump to the wrong conclusion, you'll inevitably fix the wrong cause and, most likely, obscure the real problem.

I learned the truth of "Fix the Bugs You Know About" long ago. Because errors interact in strange ways, there's no such thing as an isolated problem. When you find an error that's unrelated to the symptom you're tracking down, fix it immediately. You'll probably either fix or accentuate the original problem: Two wrongs may not make a right, but they often obscure each other.

More info, including a great poster for those of you with large-format printers and a pointer to his book, resides at http://www.debuggingrules.com/. You should buy the book if only to pass it along to somebody who desperately needs it. As Agans points out, this stuff may seem obvious, but we tend to miss obvious evidence under stress.

Touch the Third Rail

Two words: "offshore outsourcing." Although they didn't crop up in every presentation, I overheard those words in nearly every after-hours and hallway discussion I passed. Suffice it to say, many attendees are, were, or had been working very close to the spot marked "X."

Software development remains more of a craft than an industrial process, in that we cannot predict performance, schedule, and budget to even one significant figure, but companies can evidently get much the same results from any group of programmers. There may be no magic in using American programmers rather than, say, European or Russian or Egyptian or Chinese programmers, except that American programmers cost far more per hour of delivered work.

Or the benefits of a common language and cultural background outweigh the cost savings. Or programming is destined to follow manufacturing to the lowest cost countries despite any legislative attempts to the contrary. Or we're best off concentrating on high-complexity, high-margin jobs, despite evidence suggesting that such concentration makes no long-term difference.

You can find support for nearly any vociferously held opinion by walking a few dozen feet in any direction in any conference hallway. And that's before you do the obligatory web search.

Is embedded systems design any different? Jack Ganssle pointed out in his "Managing Embedded Projects" class that, with the advent of ASIC and programmable logic chips, hardware has become just as soft as software. Hardware and software design are close to achieving the frictionless and weightless state that the Internet brings to other, more commercial, transactions. They can, in short, be done anywhere.

Several speakers pointed out that a key aspect of embedded systems is their need for reliability far beyond the desktop level. You simply cannot afford to reboot your car occasionally, as BMW discovered with its early iDrive systems. We're slowly learning that we must convert the software craft into an industrial process with predictable results, schedules, and budgets.

Assume, for a moment, that you are responsible for selecting a software company for your product's firmware development. If that product should subsequently misbehave in the field, wouldn't your PowerPoint slides look better in the courtroom if they showed that you picked a company with a documented, certified development process, as opposed to a bunch of code cowboys?

Certification at Level 5 ("Optimizing") of the Software Engineering Institute's Capability Maturity Model may mean nothing more than that all their paperwork is up to par, but there are 104 offshore companies certified at CMM Level 5 (15 percent of 701 surveyed) compared to 19 U.S. companies (3 percent of 641). As Ganssle puts it, U.S. software developers simply don't buy into CMM.

But if due diligence requires that you pick a CMM Level 5 company, 85 percent of them are offshore and half of those are in India. Think about that the next time you wonder why software is moving offshore.
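
That 85 percent falls straight out of the counts above:

```python
# Share of CMM Level 5 organizations that are offshore, from the SEI counts
# quoted above (104 offshore vs. 19 U.S. companies at Level 5).
offshore_l5, us_l5 = 104, 19
offshore_share = offshore_l5 / (offshore_l5 + us_l5)
print(f"{offshore_share:.0%} of Level 5 organizations are offshore")
```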

The SEI report on Software CMM is at http://www.sei.cmu.edu/sema/pdf/SW-CMM/2003sepSwCMM.pdf. Pay particular attention to slide 13, "USA and Offshore Organization Maturity Profiles."

Now, assume for a moment that you're responsible for a software company that must compete in the global market. Where does your organization appear on that chart and how do you sell your competence to your customers?

And we won't even open a can of H-1B and L-1 visa contention!

Reentry Checklist

Last year, many Hynes Convention Center restrooms featured Falcon Waterfree no-flush urinals (http://www.falconwaterfree.com/, an irritating Flash-based site) that must have seemed like a good idea to someone with no experience in an outhouse. This year, Hynes reinstalled standard urinals sporting Sloan IR-sensing automatic flush valves, which is a nice embedded-systems application. Unbelievably, each valve contains four AA alkaline batteries that last about two years. More at http://www.sloanvalve.com/g2/default.asp.

The line "all my paperwork is up to par" comes from LL Cool J's classic "Illegal Search." You'll find "Touch the third rail" in Eric B. & Rakim's "Let the Rhythm Hit 'Em." Pump the bass (but keep the volume down)!

DDJ

