Multi-core & Multi-threaded: A New Era in Computing
By Antonio Gonzalez, May 10, 2006
Helping advance a new era in computing through new methods of thread-level parallelization
Speculative Threading in Memory Dependences
One application of speculation is in memory dependences. Compilers are severely limited because they cannot do an exact analysis of which memory locations will be touched by every instruction, which makes it hard to determine whether two instructions have a dependence. When the compiler cannot prove that two instructions are independent, it conservatively assumes they are dependent. Using speculative threads, it could be assumed in many cases that dependences do not exist, and each speculative thread created would then be checked at runtime. If, at some point, a speculative thread is detected reading a memory location that is later written by a thread that comes earlier in the sequential order, that memory violation would be spotted and the speculative thread squashed. In most cases, though, the speculative threads would succeed and the program would be sped up.
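The runtime check described above can be modeled in a few lines. The sketch below (a simplification, not any processor's actual mechanism; the class and method names are invented for illustration) has a speculative thread log every address it reads and buffer its writes. When an earlier thread writes an address, the speculative thread's read set is checked: a hit means the speculative thread consumed a stale value and must be squashed; otherwise its buffered writes can be committed.

```python
# Minimal sketch of runtime memory-dependence checking for a speculative
# thread. All names here are illustrative, not a real hardware interface.

class SpeculativeThread:
    """Runs ahead of the main thread, logging every address it reads."""
    def __init__(self, memory):
        self.memory = memory
        self.read_set = set()      # addresses read speculatively
        self.write_buffer = {}     # speculative writes, held until commit
        self.squashed = False

    def read(self, addr):
        self.read_set.add(addr)
        return self.write_buffer.get(addr, self.memory.get(addr, 0))

    def write(self, addr, value):
        self.write_buffer[addr] = value   # invisible to memory until commit

    def check_violation(self, addr):
        # Called when an earlier (nonspeculative) thread writes `addr`.
        # If we already read that address, we used a stale value: squash.
        if addr in self.read_set:
            self.squashed = True
        return self.squashed

    def commit(self):
        if not self.squashed:
            self.memory.update(self.write_buffer)
        return not self.squashed

memory = {0x10: 1, 0x20: 2}

# Successful speculation: the earlier thread's write does not overlap
# with anything the speculative thread read.
spec = SpeculativeThread(memory)
_ = spec.read(0x10)
spec.write(0x30, 99)
memory[0x20] = 5
assert not spec.check_violation(0x20)
assert spec.commit()                  # buffered write becomes visible
assert memory[0x30] == 99

# Failed speculation: the earlier thread writes an address already read.
spec2 = SpeculativeThread(memory)
_ = spec2.read(0x20)
assert spec2.check_violation(0x20)    # violation detected -> squash
assert not spec2.commit()             # buffered writes are discarded
```

The key point mirrors the text: the common case pays only the bookkeeping cost of the read set, while the rare violation costs a squash and re-execution.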
Speculative Threading in Values
Another form of speculation with enormous possibilities is value speculation. In Figure 1(a), for instance, the rectangle represents a sequence of instructions.
Figure 1: (a) The creation of a spawning pair--a set of two points in the program (each at the beginning of a basic block). The former is called the "spawning point" (SP), marked by the spawn instruction, and it identifies when a new speculative thread is created. The latter is called the "control quasi-independent point" (CQIP) and represents where the speculative thread starts executing (after some initialization) and is identified as an operand of the spawn instruction.
In a conventional sequential processor, such instructions are executed one after another (or sometimes out of order within a small window and then reassembled in order). Suppose that on a multi-core platform we want to parallelize the code in red with the code in yellow so they execute simultaneously. Doing a dependence analysis, today's approaches would find a dependence through variable A. Because the yellow part reads variable A, a synchronization would be inserted so that the yellow part waits for the red part to produce this value. The same happens for variable R1. In other words, between these two sections there are two true dependences, through R1 and A. Consequently, a conventional compiler will insert a synchronization between each pair of potentially dependent instructions.
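The conservative approach amounts to something like the following sketch, where the yellow part simply blocks until the red part has produced A and R1. The variable names follow the article; the threading machinery and the specific values are illustrative assumptions, not the article's code.

```python
# Hypothetical sketch of the synchronization a conservative compiler would
# emit for the red/yellow sections: yellow stalls until A and R1 are ready.
import threading

shared = {}
a_ready = threading.Event()
r1_ready = threading.Event()

def red_part():
    shared["A"] = 7          # produce A ...
    a_ready.set()            # ... and signal the consumer
    shared["R1"] = 3         # produce R1
    r1_ready.set()

def yellow_part(result):
    a_ready.wait()           # synchronization: stall until A is available
    r1_ready.wait()          # likewise for R1
    result.append(shared["A"] + shared["R1"])

result = []
t1 = threading.Thread(target=red_part)
t2 = threading.Thread(target=yellow_part, args=(result,))
t2.start(); t1.start()
t1.join(); t2.join()
assert result == [10]
```

Note that even though two threads exist here, the waits serialize them: the yellow part makes no progress until the red part delivers both values, which is exactly the lost parallelism the article is pointing at.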
But what if there were a way to make a well-reasoned "guess" about which values these variables will take in the red part? Then you wouldn't need to wait for these values to be produced and could execute the yellow part in parallel with the red part. When the red part finishes, a check could verify whether the guesses were correct. If they were, everything is fine: the code has been parallelized. If the guesses were wrong, the speculative thread is squashed and the yellow part is executed after the red, so there is no gain (nor much of anything lost).
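The guess-then-verify idea above can be sketched as a plain function. This is a software model of the control flow only (in hardware the yellow part would genuinely run concurrently with the red part); the function names and the predicted values are illustrative assumptions.

```python
# Sketch of value speculation: run the yellow part on guessed inputs, then
# validate against what the red part actually produced; squash and
# re-execute on a misprediction.

def red_part():
    return {"A": 7, "R1": 3}          # actual values produced

def yellow_part(inputs):
    return inputs["A"] + inputs["R1"]

def run_with_value_speculation(predicted):
    # Modeled sequentially: in hardware, yellow runs while red executes,
    # and the comparison below happens when red finishes.
    speculative_result = yellow_part(predicted)
    actual = red_part()
    if predicted == actual:
        return speculative_result, True    # speculation succeeded
    # Misprediction: squash, then re-execute yellow with the real values.
    return yellow_part(actual), False

# Correct guess: the speculative result is kept.
result, ok = run_with_value_speculation({"A": 7, "R1": 3})
assert ok and result == 10

# Wrong guess: the result is still correct, only the overlap is lost.
result, ok = run_with_value_speculation({"A": 0, "R1": 0})
assert not ok and result == 10
```

Either way the program computes the same answer; the prediction only decides whether the two parts overlapped in time, which matches the article's "nor much of anything lost" observation.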