Parallel Paths to Parallelization
Bill was too optimistic about how smart compilers would get, but the satisfaction (smugness?) he referred to is probably not a big factor today. We don't run into too many software developers who are obsessing over being smarter than their tools. There is resistance to putting parallelization smarts into compilers, though, when the result is to make the programmer's work harder.
And that's why those looking seriously at the future of compilers see a multitiered approach to parallelization.
"What we won't see," Reinders says, "is automatic parallelism [realizing] the dream of some to have the compiler do it all." The Holy Grail, Reinders says, would be "to let me express my parallelism without coding the details."
Tackling parallelism will require multiple approaches, Tarditi thinks, and the compiler research efforts he leads at Microsoft are in fact pursuing multiple tracks. "Compiler extensions for parallelism, new programming languages for scalable parallelism, better ways to extract parallelism" from code are all part of the solution, he says. Microsoft's research efforts are focusing on two concepts: data parallelism and transactional memory.
For data parallelism, the idea is to provide a new runtime library whose operators, reminiscent of APL, work over entire arrays of data rather than individual variables. Tarditi sees this approach as well suited for programming GPUs, though not for all applications; still, he says, "my belief is we ought to be able to use it [productively] on multicore computers." While others (IBM, for example) are looking at new programming languages for parallelism, Microsoft's approach is more modest, Tarditi says. They'll be building support for data parallelism onto C#.
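The flavor of such whole-array operators can be sketched in a few lines of Python. This is an illustrative toy, not Microsoft's library; the names (`ParallelArray`, `_zip_with`) are invented for the example. The point is that the programmer writes a single array expression and the runtime is free to distribute the independent elementwise operations across cores:

```python
# A sketch of data-parallel whole-array operators: arithmetic applies
# to entire arrays at once, never to individual scalars in a loop.
from concurrent.futures import ThreadPoolExecutor
from operator import add, mul

class ParallelArray:
    """A whole-array value; arithmetic is elementwise over the array."""
    def __init__(self, data):
        self.data = list(data)

    def _zip_with(self, op, other):
        # Each element is independent, so a runtime library could map
        # this work onto multiple cores (or a GPU) behind the scenes.
        with ThreadPoolExecutor() as pool:
            return ParallelArray(pool.map(op, self.data, other.data))

    def __add__(self, other):
        return self._zip_with(add, other)

    def __mul__(self, other):
        return self._zip_with(mul, other)

a = ParallelArray([1, 2, 3, 4])
b = ParallelArray([10, 20, 30, 40])
c = a + b * b          # one whole-array expression, no explicit loops
print(c.data)          # [101, 402, 903, 1604]
```

Because the programmer never writes an element-level loop, the library (not the programmer) decides how, and whether, to parallelize.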
Transactional memory treats groups of memory operations like database transactions, guaranteeing that "either everything happens or nothing happens." Integrating the transactional model into a programming language and applying it to main memory gives you a parallel programming model that is familiar and easier to use. Again, Microsoft's target for implementing this parallelism is C#, but these research ideas are being tested in the Bartok compiler for Microsoft's Singularity research OS. Tarditi also works on Phoenix, Microsoft's next-generation compiler and programming tools infrastructure, "the basis for all future Microsoft compiler technologies."
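The all-or-nothing model can be sketched with a toy software transactional memory in Python. This is not Microsoft's Bartok implementation; `TVar`, `atomic`, and the validate-then-commit scheme are illustrative assumptions showing how buffered writes and version checks give transactions their database-like semantics:

```python
# A minimal software-transactional-memory sketch: reads are versioned,
# writes are buffered, and a transaction commits only if nothing it
# touched changed underneath it -- otherwise it retries from scratch.
import threading

class TVar:
    """A transactional memory cell with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

def atomic(transaction):
    """Run `transaction(read, write)` so its effects are all-or-nothing."""
    while True:
        read_versions = {}   # TVar -> version observed during this attempt
        writes = {}          # TVar -> new value (buffered, not yet visible)

        def read(tvar):
            if tvar in writes:
                return writes[tvar]
            read_versions.setdefault(tvar, tvar.version)
            return tvar.value

        def write(tvar, value):
            read_versions.setdefault(tvar, tvar.version)
            writes[tvar] = value

        transaction(read, write)

        with _commit_lock:
            # Validate: if every TVar we touched is unchanged, publish
            # all buffered writes at once; otherwise retry the loop.
            if all(t.version == v for t, v in read_versions.items()):
                for tvar, value in writes.items():
                    tvar.value = value
                    tvar.version += 1
                return

# Example: a transfer between accounts that can never be seen half-done.
checking, savings = TVar(100), TVar(50)

def transfer(read, write):
    amount = 30
    write(checking, read(checking) - amount)
    write(savings, read(savings) + amount)

atomic(transfer)
print(checking.value, savings.value)  # 70 80
```

The appeal for programmers is exactly what the article describes: the `transfer` body reads like ordinary sequential code, with no locks in sight, yet concurrent transactions cannot observe or corrupt its intermediate state.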
Intel is pursuing something similar, a fact that became clear to Tarditi when he and an Intel team both presented papers on transactional memory at last year's ACM SIGPLAN 2006 Conference on Programming Language Design and Implementation.
Reinders also sees multiple approaches to parallelism in the near future, and predicts that it will be exciting. "I think we'll see more experimentation than we have in a long time. Things like transactional memory, thread pools, domain-specific extensions, heterogeneous targets ... [but] exciting as it is, it won't represent a fundamental change in compiler technology, just an amazing number of sophisticated additions."