Appendix A. Example Engine Diagram
The main game loop begins processing (see Figure 4, "Main Game Loop", for a graphical representation).
Appendix B. Engine and System Relationship Diagram
Appendix C. The Observer Design Pattern
The observer design pattern is documented in the book Design Patterns: Elements of Reusable Object-Oriented Software, written by Erich Gamma et al., and originally published by Addison-Wesley in 1995.
The basic premise of this pattern is that items interested in data or state changes in other items are not burdened with polling those items periodically to see whether anything has changed. The pattern defines a subject and an observer that are used for the change notification: an observer observes a subject for any changes, and the change controller acts as a mediator between the two. The following diagram illustrates the relationship:
The following is the flow of events:
- The observer registers itself with the subject that it wants to observe changes for via the change controller.
- The change controller is itself an observer. Instead of registering the observer directly with the subject, it registers itself with the subject and keeps its own list of which observers are registered with which subjects.
- The subject inserts the observer (actually the change controller) into its list of interested observers. Optionally, a change type can also be specified, identifying what kinds of changes the observer is interested in; this helps speed up the distribution of change notifications.
- When the subject changes its data or state, it notifies the observer via a callback mechanism, passing information about the types of changes that were made.
- The change controller queues up the change notifications and waits for the signal to distribute them.
- During distribution the change controller calls the actual observers.
- The observers query the subject for the changed data or state (or get the data from the message).
- When the observer is no longer interested in the subject or is being destroyed, it deregisters itself from the subject via the change controller.
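The flow above can be sketched roughly in C++. The class and method names used here (Subject, ChangeController, OnChanged, and so on) are illustrative assumptions, not taken from this document, and several details (such as per-subject detach bookkeeping) are simplified:

```cpp
#include <cassert>
#include <vector>

// Bitmask describing which kinds of change an observer cares about.
// The specific values are illustrative.
enum ChangeType : unsigned { kGeometry = 1u << 0, kPosition = 1u << 1 };

class IObserver {
public:
    virtual ~IObserver() = default;
    virtual void OnChanged(class Subject& subject, unsigned changes) = 0;
};

class Subject {
public:
    void Attach(IObserver* observer, unsigned interestMask) {
        observers_.push_back({observer, interestMask});
    }
    void Detach(IObserver* observer) {
        for (auto it = observers_.begin(); it != observers_.end(); ++it)
            if (it->observer == observer) { observers_.erase(it); break; }
    }
    // Called by the subject itself whenever its data or state changes.
    void NotifyChanged(unsigned changes) {
        for (auto& entry : observers_)
            if (entry.interestMask & changes)     // filter by change type
                entry.observer->OnChanged(*this, changes);
    }
private:
    struct Entry { IObserver* observer; unsigned interestMask; };
    std::vector<Entry> observers_;
};

// The change controller registers *itself* with subjects, queues the
// incoming notifications, and fans them out to the real observers
// when told to distribute.
class ChangeController : public IObserver {
public:
    void Register(Subject& subject, IObserver& observer, unsigned mask) {
        registrations_.push_back({&subject, &observer, mask});
        subject.Attach(this, mask);
    }
    void Unregister(Subject& subject, IObserver& observer) {
        for (auto it = registrations_.begin(); it != registrations_.end(); ++it)
            if (it->subject == &subject && it->observer == &observer) {
                registrations_.erase(it);
                break;
            }
        subject.Detach(this);  // simplified: assumes one observer per subject
    }
    // The subject calls back into the controller; the change is only queued.
    void OnChanged(Subject& subject, unsigned changes) override {
        queued_.push_back({&subject, changes});
    }
    // On the distribution signal, call the actual observers.
    void DistributeQueuedChanges() {
        for (auto& change : queued_)
            for (auto& reg : registrations_)
                if (reg.subject == change.subject && (reg.mask & change.changes))
                    reg.observer->OnChanged(*change.subject, change.changes);
        queued_.clear();
    }
private:
    struct Registration { Subject* subject; IObserver* observer; unsigned mask; };
    struct QueuedChange { Subject* subject; unsigned changes; };
    std::vector<Registration> registrations_;
    std::vector<QueuedChange> queued_;
};
```

Note that the observer sees no notification until the controller distributes, which is what lets the framework batch change propagation at a well-defined point in the frame.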
Appendix D. Tips on Implementing Tasks
While task distribution can be implemented in many different ways, it is best to keep the number of worker threads equal to the number of logical processors available on the platform. Avoid pinning tasks to specific threads: tasks from the different systems will not complete at the same time, so affinity can lead to a load imbalance among the worker threads, effectively reducing parallelization. It is also worth investigating a tasking library, such as Intel's Threading Building Blocks, which can simplify this process.
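A minimal sketch of this sizing rule, assuming a plain shared-queue thread pool rather than a full tasking library (the TaskManager name and its interface are illustrative, not from this document):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// One worker per logical processor; any free worker may pull the next
// task from the shared queue, so no task is pinned to a specific thread.
class TaskManager {
public:
    TaskManager() {
        unsigned n = std::thread::hardware_concurrency();  // logical processors
        if (n == 0) n = 1;  // the call may return 0 if the count is unknown
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { WorkerLoop(); });
    }
    ~TaskManager() {  // drains the queue, then joins all workers
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void Submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();  // wake any worker -- no affinity
    }
private:
    void WorkerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // run outside the lock
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};
```

A real engine would layer priorities and work stealing on top of this; a library such as Threading Building Blocks provides those pieces ready-made.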
There are some optimizations that can be made in the task manager to ensure CPU-friendly execution of the different tasks submitted. They are as follows:
- Reverse Issuing: if the order in which primary tasks are issued is fairly static, the tasks can be issued in alternating order from frame to frame. The last task to execute in the previous frame will more than likely still have its data in the cache, so issuing the tasks in reverse order for the next frame all but guarantees that the CPU caches will not have to be repopulated with the correct data.
- Cache Sharing: some multi-core processors split their shared cache into sections, so that two cores may share one cache while another two share a separate cache. Issuing sub-tasks from the same system onto cores that share a cache increases the likelihood that the data will already be in the shared cache.
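The reverse-issuing idea reduces to flipping the issue order on alternating frames. A tiny sketch, with an assumed helper name (IssueOrder) and task list that are purely illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Returns the order in which the primary tasks should be issued for the
// given frame: forward on even frames, reversed on odd frames, so the
// task whose data is freshest in cache runs first in the next frame.
std::vector<std::string> IssueOrder(const std::vector<std::string>& tasks,
                                    unsigned frameNumber) {
    std::vector<std::string> order = tasks;
    if (frameNumber % 2 == 1)
        std::reverse(order.begin(), order.end());
    return order;
}
```

This only pays off when the task set is fairly static from frame to frame, as the text notes; if tasks appear and disappear each frame, the cache-residency argument no longer holds.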