DDJ: Today we're talking with Michael McCool, chief scientist and founder of RapidMind. RapidMind provides a software development platform for both the Cell/B.E. multicore processor and Graphics Processing Units (GPUs) from AMD and NVIDIA.
Michael, what was the greatest challenge in building a tool that could be used for both of these processors?
MM: Our greatest challenge was to design and implement a system that could take an algorithm and map it to efficient implementations on radically different hardware architectures. The RapidMind platform needs to perform many complex code-optimization, load-balancing, and data-management tasks internally. This complexity is, however, hidden behind a simple and easy-to-use API. We also needed to define a programming model that would fully expose the computational power of both GPUs and the Cell/B.E. while remaining portable and easy to use. The SPMD stream programming model we use not only accomplishes this but applies to a wide range of application problems, while also mapping efficiently onto multicore CPUs from AMD and Intel.
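The SPMD stream model can be illustrated with a small sketch. This is not the RapidMind API, just a generic C++ illustration: one scalar "kernel" function is written once and applied to every element of a stream, and the runtime (shown serially here) is free to run those instances in parallel.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A "kernel": a single program, written as ordinary scalar code, that is
// applied to every element of a stream. In the SPMD model, many instances
// of this kernel run concurrently, one per data element.
float kernel(float x) {
    return 2.0f * x + 1.0f;
}

// The runtime's job (sketched serially here): map the kernel over the
// whole stream. On real hardware the iterations would be distributed
// across SPEs, shader units, or CPU cores.
std::vector<float> apply(float (*k)(float), const std::vector<float>& in) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = k(in[i]);
    return out;
}
```

Because the kernel contains no loop over the data and no reference to other elements, the same source expresses the computation whether it runs on one core or hundreds.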
DDJ: What about scaling applications to yet-to-be-released multicore processors that might have as many as, say, 100 cores? What will this mean for developers?
MM: Moore's Law still holds, but individual serial cores will no longer scale significantly in performance. Instead, processors will grow in performance by adding cores. In fact, the number of cores will grow exponentially over time. This means that scalability will be very important, and applications must be written to use any number of cores.
The RapidMind platform is based on data parallelism rather than task parallelism ("multi-threading"). One advantage of data parallelism is that programs can be written without regard for the number of cores in the hardware target. Instead, the available parallelism is proportional to the amount of data, and the platform maps this onto the number of cores available. As data sets get larger, the amount of available parallelism grows, allowing applications to efficiently take advantage of any number of cores without recoding.
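The core-count independence described above can be sketched in plain C++ with a hypothetical helper (not part of the RapidMind API): the caller specifies only the per-element operation, and the worker count is a runtime parameter that never appears in the algorithm itself.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical data-parallel map: doubles every element of `data`.
// The number of workers is a runtime detail chosen by the "platform";
// the result is identical for any worker count.
void parallel_double(std::vector<float>& data, unsigned workers) {
    if (workers == 0) workers = 1;
    std::vector<std::thread> pool;
    std::size_t chunk = (data.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(begin + chunk, data.size());
        if (begin >= end) break;  // more workers than data: skip idle ones
        pool.emplace_back([&data, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                data[i] *= 2.0f;
        });
    }
    for (auto& t : pool) t.join();
}
```

The same call works with one worker on a laptop or dozens on a server; as the data set grows, so does the exploitable parallelism, with no change to the calling code.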
DDJ: What if we want to port existing C++ applications to multicore platforms? Where would we start?
MM: The RapidMind API works with existing C++ compilers. You just link to it as if it were a library. Therefore, you can use RapidMind with existing build systems and IDEs. Porting an application to a multicore processor with RapidMind is an incremental process. You first identify, using profiling tools, a section of the application that consumes a large amount of execution time. The kernel of this "hotspot" can then be RapidMind-enabled by converting existing numerical and array types to their RapidMind equivalents, and using the RapidMind API to "capture" the computation. The RapidMind platform then maps the computation to a hardware target with any number of cores, offloading the main program. This process can be repeated as many times as necessary. Defining a multicore RapidMind computation is essentially as easy as defining a C++ function. Using RapidMind does not prevent using other programming techniques, tools, or libraries. To achieve results quickly, developers RapidMind-enable the more performance-critical parts of their application; the remainder of their code is unaffected.
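The incremental workflow can be sketched as follows. The types here are hypothetical stand-ins, not the real RapidMind API: the idea is that the hotspot's per-element computation is "captured" as a kernel object that a runtime could later dispatch to any number of cores, while the surrounding C++ program is untouched.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical sketch of "capturing" a computation. A Kernel wraps the
// per-element body of a former hotspot loop; the platform would compile
// and dispatch it to the hardware target, but here we execute it serially.
struct Kernel {
    std::function<float(float)> body;  // the captured per-element computation

    // Apply the captured computation to a whole array at once.
    std::vector<float> operator()(const std::vector<float>& in) const {
        std::vector<float> out;
        out.reserve(in.size());
        for (float x : in) out.push_back(body(x));
        return out;
    }
};
```

A caller replaces an inner loop with something like `Kernel scale{[](float x) { return 2.0f * x + 3.0f; }};` and invokes `scale(data)`; nothing else in the program changes, which is what makes the porting process incremental and repeatable.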
DDJ: Is there a web site where readers can get more information?
MM: Yes, readers can visit the RapidMind web site. The Developer Edition of the RapidMind platform is available as a free download from either RapidMind or the RapidMind Developer Portal.