The appliance metaphor evokes an instrument that performs a single function extremely well, often a function that is well understood, is used similarly by many people, and takes considerable expertise to get right. Computing appliances have established several important niches recently. For instance, NetApp storage servers hide tremendous complexity beneath simple interfaces, making it easy for customers to expand storage to immense sizes without becoming storage experts.
How might the appliance metaphor be useful for high-performance computing (HPC)? For many scientists and engineers, their computing is the essence of their science, and hence they change their models constantly for their custom work. This doesn't match the appliance approach very well. But some computing tasks naturally lend themselves to this metaphor, and their computing needs are growing rapidly. Two common examples are the initial processing of data coming out of DNA microarrays and the processing of images in biomedical research. Since this work is done nearly identically by many scientists or engineers, groups often step forward to implement common functionality for a community. Examples include BLAST (the "Basic Local Alignment Search Tool," used for aligning nucleotide and amino acid sequences) and HMMER (profile "hidden Markov models," used for protein sequence analysis) in the genomics world, and SPM ("statistical parametric mapping," for testing hypotheses) in functional imaging. Commercial companies also make appliances, such as MRI machines, that often combine a hardware component with significant software expertise.
Appliance developers want to provide excellent results to their users (who often include themselves!), which usually requires both adapting quickly to the latest scientific advances and running very fast on modern hardware. These needs are often in direct conflict: the software tools that support the fastest development, such as high-productivity desktop languages like MATLAB, Python, and R, are often not viewed as high performance. Further, modern hardware is multi-core -- and soon to be many-core -- so being able to decompose the work to exploit the multiple cores is essential for top performance. In addition, some algorithms, notably in imaging, map well to GPUs (graphics processing units), often with speed-ups on the order of 100X, which researchers who depend on imaging need to remain competitive in their own science. The desktop languages do not have strong support for parallelism or for hardware accelerators.
So how can software appliance developers practically respond to these conflicting demands for both faster adaptability and much higher performance? A new generation of tools is emerging that combines the high productivity of desktop languages such as MATLAB, Python, and R with access to parallelism and accelerators. These include the Parallel Computing Toolbox from The MathWorks, Dynamic Application Virtualization from IBM, and Star-P from Interactive Supercomputing (the company I work for). For example, Star-P bridges the productivity languages to the power of parallel clusters. Because the language differences between Star-P and the desktop language (using the M language of MATLAB for the examples below) are slight, algorithm developers can continue to develop their codes in a familiar environment, yet have access to massive acceleration where their algorithms demand it.
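To give a flavor of how slight those differences are, here is a minimal sketch in M using Star-P's `*p` annotation, which tags a dimension as distributed across the parallel server; the matrix sizes and operations are illustrative, not taken from any particular appliance code:

```matlab
% Ordinary M code: everything lives in desktop memory and runs serially
A = rand(10000);            % 10000-by-10000 random matrix
x = A \ ones(10000, 1);     % solve a linear system on the desktop

% Star-P version: appending *p to a dimension distributes the array
% across the cluster; the familiar \ operator then executes as a
% parallel solve on the server, with no other changes to the script
A = rand(10000*p);
x = A \ ones(10000*p, 1);
```

The key point is that the second version is the same algorithm in the same language; only the data-placement annotation changes, so the developer's working environment and code base stay intact.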