Gene is a principal in Progressive Performance Software. He can be
contacted via the Web at
http://www.stgtech.com/
Today's organizations are placing an emphasis on diversity in
hiring, going to great lengths to hire people from many different
backgrounds. However, the opposite is occurring on the technological
front, where the trend is to standardize on one OS, one language, one
browser, one word processor, etc. I feel that this is a mistake, and
to illustrate why, I'll look at this issue from both an engineering
and a biological perspective.
An Engineering Perspective
Computers and software are simply tools to do a job. Rigid
standardization on an extremely small set of hardware and software
makes as much sense as a builder insisting that everyone on a job site
employ one standardized hammer, one standardized drill, one
standardized saw, and no other tools. All too often, these standard
tools will fit neither the worker nor the work.
Many of the either/or questions currently being asked in IT
departments and R&D firms would probably be better answered with a
"both, please." Organizations debate whether they should standardize
on UNIX or NT, Java or Perl, Java or C++, and so on. But in these
cases each tool has its own set of tasks for which it is better than
the other. UNIX is superior to NT at batch processing and automation
of routine jobs; NT is superior at delivering a familiar UI for
administrative tasks and at serving Windows clients. Perl is superior
to Java for text processing and report creation; Java is superior for
interactive applications and large, multitiered system building.
Java, compared to C++, offers better type-checking, purer
object-orientation, fewer chances of memory leaks, better portability,
and other oft-touted advantages. C++ is faster than Java at runtime,
offers lower-level machine access, more programmer control over
memory, more flexibility in the choice of idioms -- for instance, you
can use it as just a "better" version of C -- and is more accessible
to C programmers.
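To make the Perl comparison concrete, consider the sort of chore for
which it is well suited. The following is only a rough sketch -- the
file name and record format are invented for illustration -- but a
complete report generator of this kind fits comfortably in a dozen
lines of Perl:

    #!/usr/bin/perl -w
    # A small report generator: tally sales per region from
    # colon-delimited records such as "East:1250.00".
    my %total;
    open(SALES, "sales.txt") or die "can't open sales.txt: $!";
    while (<SALES>) {
        chomp;
        my ($region, $amount) = split /:/;
        $total{$region} += $amount;
    }
    close(SALES);
    foreach my $region (sort keys %total) {
        printf("%-10s %10.2f\n", $region, $total{$region});
    }

The equivalent Java program would require noticeably more scaffolding;
conversely, Perl offers far less help than Java when the job is a
large, interactive, multitiered system.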
A Biological Perspective
It has long been a tenet of evolutionary biology that
over-specialization is a major cause of species decline and
extinction. For example, an animal species perfectly adapted to
eating one particular plant will become extinct if that plant
disappears from its habitat. A plant species that has specialized in
a narrow set of climatic conditions may thrive as long as those
conditions persist, but will die off due to moderate changes in
temperature, humidity, rainfall, or sunlight. A more adaptable plant
species might not have done as well in the first climate, but it
would survive into the second.
The peppered moth of England was, a couple of centuries ago,
generally light-colored. Fortunately, the moth had not completely
"standardized" on light coloring, as increasing pollution in its
habitat seemingly made the lighter coloring less desirable. Within
half a century, dark-colored moths were predominant. With the more
recent alleviation of air pollution in their environment, the number
of light-colored moths is again on the upswing. (There is some
uncertainty as to what, exactly, has caused these population swings.
See "Second Thoughts about Peppered Moths.")
The technological situation is analogous. Although we often act as
though we have a good understanding of the technology environment in
which we operate, in reality, we rarely anticipate the next great
shift in our terra firma. The blindness of all of the "experts" of
the 1960s and 1970s to the coming personal computer revolution is
symbolic of this lack of vision. The same phenomenon was repeated this
decade, as we saw the publication of innumerable essays opining that
the Internet was just a fad, too hard for the average person to use,
filled with useless information, and impossible to make money from.
Just this month, a study revealed that the U.S. Internet economy was
approaching, in size, that of Switzerland. Since we don't, in fact,
know what the next great technology will be, our only defense against
"extinction" is to avoid over-specialization. A number of concrete
examples will illustrate this point.
In the 1980s, organizations that had Macintosh programmers and/or
users made the transition to GUIs more easily than shops where DOS
and/or a mainframe system were the sole choices. Economists Stan
Liebowitz and Stephen Margolis have pointed out, in their upcoming
book, Winners, Losers, and Microsoft: Competition and Antitrust in
High Technology, that the fate of any number of IBM PC software
packages was decided, in no small part, by whether their publishers
had also been developing for the Macintosh. The contests for market
share between
Microsoft Excel and Lotus 1-2-3, Quicken and Managing Your Money, and
Microsoft Word and WordPerfect all favored the second-named product of
each pair while DOS was predominant. With the coming of Windows, all
of the market shares reversed dramatically. In each case, the
developers of the first-named product had extensive GUI experience
from developing for the Macintosh, while the other developers did
not.
Similarly, in the 1990s, having some UNIX knowledge in-house
greatly eased the transition to the Internet. I remember being at a
UNIX shop that had access to the Internet in 1987 -- even before Al
Gore invented it! Such organizations already had familiarity with
HTTP, TCP/IP, telnet, ftp, SMTP, and other such technologies before
knowing them became a sine qua non of high-tech survival.
Other examples abound. OS/2 was the sure thing of the late 1980s,
as both IBM and Microsoft backed it. However, organizations that bet
everything on this OS have no doubt come to regret their decision. And
what happened to shops full of 6502 assembler programmers when the
Apple II stopped selling?
The Downside
There are, of course, disadvantages to deploying multiple operating
systems, languages, word processors, or spreadsheets in an
organization. These include increased infrastructure complexity,
higher support costs, and the effort involved in porting. However,
even these disadvantages can have compensating benefits. For
instance, Brian Kernighan and Rob Pike, in The Practice of
Programming, point out that a portable program is also less likely
to break when a new OS or compiler version is released, and that "the
effort invested to make a program portable also makes it better
designed, better constructed, and more thoroughly tested."
Conclusion
Of course, the opposite extreme, under-specialization, is dangerous
as well. No individual or organization can be good, or even
competent, at all aspects of information technology. Being equally
inept in 40 or 50 programming languages is not a path likely to lead
to success. However, this is not a situation I have ever seen in
practice, at least at the organizational level. The siren song of
over-specialization is much more alluring. While extreme
specialization in a particular technology can achieve good short-term
results and large immediate financial rewards, it is a dubious
long-term strategy to pursue, either for individuals or
organizations. If you're in it for the long haul, your best bet is
to diversify.
These op/eds do not necessarily reflect the opinions of the author's
employer or of Dr. Dobb's Journal.