We can achieve nearly the same result with a technique called "memoization." We use the same recursive approach as in rcomplex-multiplications to compute the minimum, but instead of blindly recomputing Mi,j, we use an array to store already-computed values. To do this, we modify rcomplex-multiplications to create a temporary array (Listing Seven). Before any computation, the array is checked to see whether the result already exists. If so, that value is returned; if not, it is computed and stored in the array. Because a Lisp array can hold elements of any type, we use nil as the marker for an uncomputed value.
(defun rcomplex-multiplications-memo (p)
  (let* ((n (- (length p) 1))
         (m (make-array (list n n) :initial-element nil)))
    (labels ((rmin (start end)
               (cond ((= start end) 0)
                     ((not (null (aref m start end)))
                      (aref m start end))
                     ((= (+ start 1) end)
                      (setf (aref m start end)
                            (* (aref p start)
                               (aref p (1+ start))
                               (aref p (1+ end)))))
                     (t (let ((min most-positive-fixnum))
                          (loop for i from start to (1- end) do
                            (let ((v (+ (rmin start i)
                                        (rmin (1+ i) end)
                                        (* (aref p start)
                                           (aref p (1+ i))
                                           (aref p (1+ end))))))
                              (when (< v min)
                                (setq min v))))
                          (setf (aref m start end) min))))))
      (rmin 0 (- (length p) 2)))))
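For readers who don't use Lisp, here is a minimal Python sketch of the same memoized recursion. The function name `rcomplex_multiplications_memo` mirrors the Lisp listing; the use of `None` in place of nil, and the single recursive case covering the two-matrix base case, are our simplifications, not part of the original listing.

```python
def rcomplex_multiplications_memo(p):
    """Minimum scalar multiplications to compute the chain A0...A(n-1),
    where matrix Ai has dimensions p[i] x p[i+1]."""
    n = len(p) - 1                       # number of matrices in the chain
    m = [[None] * n for _ in range(n)]   # memo table; None marks "not yet computed"

    def rmin(start, end):
        if start == end:                 # a single matrix costs nothing
            return 0
        if m[start][end] is not None:    # already computed: return the stored value
            return m[start][end]
        best = min(                      # otherwise try every cut point k
            rmin(start, k) + rmin(k + 1, end) + p[start] * p[k + 1] * p[end + 1]
            for k in range(start, end)
        )
        m[start][end] = best             # store the result before returning it
        return best

    return rmin(0, n - 1)
```

As in the Lisp version, the memo check comes before any arithmetic, so each subproblem is computed at most once.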
When we compare this function to the dynamic-programming function that also computes the cut points, we see that the memoized recursive one is about 11% slower, so our dynamic-programming version is slightly better. The difference comes from the small extra overhead of recursion versus iteration and from the more complex array management, namely testing for nil.
What are the criteria for applying dynamic programming? First, solving the problem in a divide-and-conquer or recursive manner must involve solving the same subproblems over and over. Computing the factorial recursively solves each subproblem only once, while computing the optimum matrix-multiplication order solves the same subproblems many times.
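To see the overlap concretely, we can instrument the unmemoized recursion and count how often each subproblem (start, end) is solved. This is an illustrative sketch, assuming the same cost recurrence as the listings above; the function name is our invention.

```python
from collections import Counter

def count_subproblem_calls(p):
    """Count how many times the naive (unmemoized) recursion
    solves each subproblem (start, end)."""
    calls = Counter()

    def rmin(start, end):
        calls[(start, end)] += 1         # record every visit to this subproblem
        if start == end:
            return 0
        return min(
            rmin(start, k) + rmin(k + 1, end) + p[start] * p[k + 1] * p[end + 1]
            for k in range(start, end)
        )

    rmin(0, len(p) - 2)
    return calls

calls = count_subproblem_calls([30, 35, 15, 5, 10, 20, 25])
# Subproblems solved more than once -- far from empty for the naive recursion:
repeated = {sub: c for sub, c in calls.items() if c > 1}
```

The full chain (0, 5) is solved exactly once, but the shorter chains are revisited repeatedly; it is precisely these repeats that the memo table eliminates.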
Second, the structure of a solution must be derived from optimal solutions to subproblems. This criterion is not so obvious. The optimum parenthesization of the matrix product A0...An is based on a cut point k: we multiply the product of the prefix chain A0...Ak, computed with some parenthesization, by the product of the suffix chain Ak+1...An, computed with some other parenthesization. The question is whether the parenthesizations of the prefix and suffix that lead to the optimum full parenthesization are necessarily the optimal parenthesizations of the prefix and suffix. Suppose that the optimum full product does not use the optimal parenthesization of the prefix. If we replace the prefix's parenthesization with the optimal one, the full product will use fewer scalar multiplications, contradicting the assumption that it was already optimum. Therefore, the parenthesization of the prefix that gives rise to the optimum product must itself be optimal, and the same reasoning applies to the suffix. The structure of the full solution is thus derived from optimal solutions to the subproblems.
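This optimal-substructure argument is exactly what justifies a bottom-up dynamic-programming table: each entry for a chain is assembled only from already-optimal entries for strictly shorter chains. A minimal Python sketch of that idea follows; the name `complex_multiplications` is our stand-in for the article's dynamic-programming function, not its actual code.

```python
def complex_multiplications(p):
    """Bottom-up dynamic programming for the matrix-chain problem.
    m[i][j] holds the optimal cost for the chain Ai...Aj, where Ai
    has dimensions p[i] x p[i+1]."""
    n = len(p) - 1
    m = [[0] * n for _ in range(n)]      # chains of length 1 cost 0
    for length in range(2, n + 1):       # build from shortest chains upward
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = min(               # combine optimal subchain costs at cut k
                m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                for k in range(i, j)
            )
    return m[0][n - 1]
```

Because every m[i][k] and m[k+1][j] consulted here is already optimal, the minimum over cut points k is guaranteed to be the optimal cost for the longer chain.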
It's important to know techniques like this one, and to know what sorts of problems can be attacked by what means; knowing them will make you a better programmer.
(Adapted from Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest, McGraw-Hill, 1990. Courtesy AI Expert.)