March 2002 C++ Experts Forum/Object Interconnections

Object Interconnections: Real-time CORBA, Part 3: Thread Pools and Synchronizers

by Douglas C. Schmidt and Steve Vinoski


Introduction

This is the third installment in our series of columns describing how CORBA has evolved to support DRE (distributed real-time and embedded) applications. Examples of DRE systems include telecommunication networking (e.g., central office switching), telemedicine (e.g., remote surgery), manufacturing process automation (e.g., hot rolling mills), and defense systems (e.g., avionics mission computing). One of the hallmarks of DRE applications is their need for strict control over the scheduling and execution of CPU and memory resources. The Real-time CORBA specification therefore enables client and server applications to:

  • Determine the priority at which client invocations will be processed at a server.
  • Allow servers to predefine thread pools to process client requests concurrently within a bounded amount of CPU and memory resources.
  • Ensure that intra-process thread synchronizers used by middleware and applications have consistent semantics.

Our previous column [1] focused on the Real-time CORBA policies and mechanisms that enable clients and servers to indicate the relative priorities of their requests so that ORB end systems can enforce these priorities end-to-end. This column describes Real-time CORBA's support for thread pools and synchronizers. A thread pool is a concurrency model that allocates a bounded group of threads to process requests simultaneously. A synchronizer is a locking mechanism that serializes access to a shared resource and coordinates the order in which threads access the shared resource.

Developers of DRE applications can use thread pools and synchronizers to bound CPU and memory resources to support DRE applications with real-time QoS (quality of service) requirements effectively. These applications are typically limited in the amount of processing time and memory they can use. It's therefore essential for real-time ORB end systems to ensure that these resources are conserved and managed carefully.

Thread pools and synchronizers are also used to bound priority inversion, which is a scheduling hazard that occurs when a low-priority thread or request blocks the execution of a higher priority thread or request. DRE applications often monitor and control entities in the physical world, such as regulating the temperature of coolant in a nuclear reactor. These entities must be serviced or actuated within strict real-time deadlines to avoid catastrophic failures. It's therefore essential for real-time ORB end systems to ensure that high-priority operations can predictably receive preferential treatment relative to lower priority operations.

Older ORBs that lacked support for thread pools and synchronizers typically used reactive concurrency models [2], where a server ORB read each request from the OS, processed it to completion, retrieved the next request, and so forth. If all requests require a fixed, relatively short amount of processing, a reactive model can be implemented with low overhead. Many DRE applications have complex object implementations, however, whose operations run for variable and/or long durations. To avoid unbounded priority inversion, these DRE applications may therefore need some form of preemptive multithreading, where the OS preempts low-priority threads when a higher priority thread is ready to run.
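
The reactive model can be sketched as a single loop that handles one request at a time. The following fragment is only a minimal illustration of that model (Request, read_next_request, and dispatch_to_servant are hypothetical helpers, not part of any ORB API):

for (;;) {
  // Block until the OS delivers the next request.
  Request req = read_next_request ();
  // Run the upcall to completion before retrieving another request.
  dispatch_to_servant (req);
}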

Real-time Application Review

This column continues to illustrate key Real-time CORBA capabilities using the planetary mapping system described in [1]. In this system, semi-autonomous drone vehicles are involved in a distributed computation to map a planet's surface. Human operators define high-level exploration goals for the drones via a Base_Station object, which provides set points for Controller objects. Controller objects control the drones remotely using Drone objects, which reside on the drone vehicles. Drone objects expose operations for monitoring and controlling individual drone behavior. Each drone sends information obtained from its sensors back to the Base_Station via its Controller object. Figure 1 illustrates the roles and relationships between these objects.

The remainder of this column illustrates how to apply the Real-time CORBA features outlined in Table 1 to address key threading and synchronization design challenges that arise in our planetary mapping application.

Supporting Thread Pools Effectively

The base station server in our planetary mapping system must be able to process requests at different levels of importance. For example, high-priority edge_alarm requests should be given preference over lower-priority battery_low requests to prevent drones from falling off the surface they are exploring. If a battery_low request is executing when an edge_alarm request arrives, the ORB end system should therefore quickly and predictably preempt the battery_low request and let the edge_alarm request run.

This example follows the time-honored tradition of DRE applications that use multithreading to:

  • distinguish between different types of service, such as high-priority vs. low-priority tasks
  • support thread preemption to prevent unbounded priority inversion

Prior to the Real-time CORBA specification [3], there was no standard API for programming multithreaded CORBA servers. It was therefore not possible to use CORBA to program multithreaded real-time systems without using proprietary ORB features.

To address this issue and enhance portability, the Real-time CORBA specification defines a standard thread pool model that allows server developers to:

  • preallocate a certain number of threads statically
  • bound the total amount of dynamic thread creation
  • partition threads into different groups of priority levels
  • allow high-priority groups of threads to "borrow" unused threads from groups that have lower priorities
  • buffer requests when threads aren't available to process them immediately.

Thread pools are useful for DRE applications that want to leverage the benefits of multithreading, while simultaneously bounding their consumption of system resources, such as stack space and CPU time.

Thread pools can be defined and associated with POAs in a Real-time CORBA server. Each POA must be associated with one thread pool, although a thread pool can be associated with multiple POAs. Figure 2 illustrates the association of a thread pool and a POA in a server. A subsequent example illustrates how to associate a thread pool with multiple POAs in a server.

The simplest Real-time CORBA thread-pool model allows developers to control the overall concurrency level within server ORBs and applications. A thread pool can be created with a fixed number of statically allocated threads that an ORB will use to process client messages. These preallocated threads will consume system resources even if they are not used, however. Real-time CORBA therefore defines the following RTCORBA::RTORB interface that allows server developers to preallocate an initial number of statically allocated threads, while allowing this pool to grow dynamically to handle bursts of client requests:

interface RTORB {
  typedef unsigned long ThreadpoolId;

  ThreadpoolId create_threadpool
    (in unsigned long stacksize,
     in unsigned long static_threads,
     in unsigned long dynamic_threads,
     in Priority default_priority,
     in boolean allow_request_buffering,
     in unsigned long max_buffered_requests,
     in unsigned long max_request_buffer_size);
  void destroy_threadpool
    (in ThreadpoolId threadpool) raises (InvalidThreadpool);
};

Server applications can use the create_threadpool operation above to specify (1) the number of static threads that are created initially, (2) the maximum number of threads that can be created dynamically, and (3) the default priority of all the threads in a pool. The priorities of threads within a pool can change dynamically in accordance with the CLIENT_PROPAGATED and SERVER_DECLARED priority policies described in [1]. If a request arrives and all existing threads are busy, a new thread may be created to handle the request. No additional thread will be created, however, if the maximum number of dynamic threads in the pool has already been spawned.

The create_threadpool operation returns a ThreadpoolId, which uniquely identifies a thread pool within a CORBA server. This ID can be passed to the destroy_threadpool operation in RTCORBA::RTORB to remove a thread pool when it's no longer needed. It can also be used to associate a thread pool with one or more POAs, as shown in the following example. We start by using the standard CORBA::ORB_init factory operation to obtain an object reference to the ORB:

CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);
CORBA::Object_var obj =
  orb->resolve_initial_references ("RTORB");

We then narrow obj to obtain an object reference to the Real-time CORBA ORB:

RTCORBA::RTORB_var rt_orb =  
  RTCORBA::RTORB::_narrow (obj);

Using the rt_orb object reference returned from _narrow, we can create a thread pool containing three threads preallocated statically:

RTCORBA::ThreadpoolId pool_id = rt_orb->create_threadpool
  (0,            // Default stack size
   3,            // # of static threads
   0,            // Allow no dynamic threads
   20,           // Default priority of 20
   false, 0, 0); // No request buffering

Now that we've created the thread pool, we can associate it with two different types of POAs, one of which uses the CLIENT_PROPAGATED priority policy and the other of which uses the SERVER_DECLARED policy. We first call a factory method on the rt_orb object reference to create a thread-pool policy object:

RTCORBA::ThreadpoolPolicy_var tp_policy = 
  rt_orb->create_threadpool_policy (pool_id); 

We then create and initialize the policy lists for the POAs, starting with the CLIENT_PROPAGATED policy:

CORBA::PolicyList RTPOA_policies_a (2);    
RTPOA_policies_a.length (2);   
RTPOA_policies_a[0] = tp_policy;   
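// DEFAULT_PRIORITY below is assumed to be an application-defined RTCORBA::Priority constant.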
RTPOA_policies_a[1] = rt_orb->create_priority_model_policy    
  (RTCORBA::CLIENT_PROPAGATED,    
   DEFAULT_PRIORITY); 

Next, we initialize the SERVER_DECLARED policy:

CORBA::PolicyList RTPOA_policies_b (2); 
RTPOA_policies_b.length (2);
RTPOA_policies_b[0] = tp_policy;
RTPOA_policies_b[1] = rt_orb->create_priority_model_policy 
  (RTCORBA::SERVER_DECLARED, 20);

Finally, we create the POAs, initializing them with the policies defined above:

PortableServer::POA_var rt_poa_a =                                              
  root_poa->create_POA ("POA A",                                               
                        PortableServer::POAManager::_nil (),                                             
                        RTPOA_policies_a);                                             
PortableServer::POA_var rt_poa_b =                                              
  root_poa->create_POA ("POA B",                                               
                        PortableServer::POAManager::_nil (),                                             
                        RTPOA_policies_b);  

Figure 3 illustrates the association between the thread pool and the two POAs that we've created.

When client requests arrive for servants in POA A, they will be processed at the priority propagated in the service context field of the GIOP request. In contrast, regardless of the priority at which clients invoke requests, servants in POA B will be processed at either (1) the default priority 20 or (2) whatever priority the object was activated to run at using the RTPortableServer::POA::activate_object_with_priority operation.
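
For instance, a servant registered with POA B could be activated at a specific priority as follows (Drone_i is a hypothetical servant class for our Drone interface, and the priority value of 42 is purely illustrative):

// Narrow POA B to its Real-time CORBA counterpart so we can
// activate objects at explicit priorities.
RTPortableServer::POA_var rt_poa =
  RTPortableServer::POA::_narrow (rt_poa_b.in ());

// Activate a servant so its requests run at priority 42 rather
// than the POA's SERVER_DECLARED default of 20.
Drone_i *servant = new Drone_i;
PortableServer::ObjectId_var oid =
  rt_poa->activate_object_with_priority (servant, 42);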

Buffering Client Requests

The base station server must handle requests from drones and human operators. Since it cannot control when these requests occur, it may need to buffer requests to handle "bursty" traffic from clients. To prevent denial of service attacks, many operating systems restrict the amount of data that they will buffer on behalf of an application. If this buffer space is too small, it may be necessary to buffer the data within the middleware instead of within the OS.

To support this use case, Real-time CORBA thread pools can optionally be preconfigured with a bounded request buffer. As shown in the create_threadpool operation in the previous section, Real-time CORBA thread-pool buffer capacities can be configured in terms of both the maximum number of buffered requests and the maximum number of bytes.

If a request arrives when all threads in the pool are busy and buffering is enabled, the request will be queued until a thread is available to process it. If no queue space is available, or if request buffering was not specified, the ORB should raise a TRANSIENT exception, which indicates a temporary resource shortage. When the client receives this exception, it can reissue the request at a later point.
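
A client-side sketch of this recovery strategy might look like the following (the controller object reference, the no-argument battery_low operation, and the retry bound are assumptions made for illustration):

// Retry a request a bounded number of times if the server's
// thread pool and request buffer are temporarily full.
const int MAX_RETRIES = 3;
for (int attempt = 0; attempt < MAX_RETRIES; ++attempt) {
  try {
    controller->battery_low ();
    break; // The request was accepted.
  }
  catch (const CORBA::TRANSIENT &) {
    // Temporary resource shortage at the server -- back off
    // (e.g., sleep for a short interval) before reissuing.
  }
}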

The following code illustrates how to modify the earlier example to buffer client requests (for variety, we explicitly set the size of the stack to be 10 Kbytes):

RTCORBA::ThreadpoolId pool_id = rt_orb->create_threadpool
  (1024 * 10,  // Stacksize   
   3, // Static threads   
   0, // Dynamic threads   
   20, // Default priority is 20   
   true, // Enable buffering   
   128,  // Maximum # of messages   
   64 * 1024); // Maximum # of bytes to buffer

Some Real-time CORBA ORBs don't use queues at the middleware layer in order to minimize priority inversions and excessive context switching, synchronization, and data movement overheads. Such ORBs can validly reject requests to create a thread pool with buffers (i.e., their maximum buffer capacity is always zero). In this case, queueing must be done within the I/O subsystem of the underlying OS.

Preventing Exhaustion of Threads by Low-Priority Requests

If drones send many low-priority requests to a base station server simultaneously, it may run out of threads in its thread pool, in which case no threads are available to process high-priority requests. For example, a number of drones could send battery_low requests to the base station at the same time. If all the threads in the base station's thread pool are processing these requests, a higher priority edge_alarm request may be delayed, causing damage to the drone. To prevent the exhaustion of threads by low-priority requests, Real-time CORBA allows servers to partition their thread pools into groups of threads, called lanes, where each lane has a different priority. The thread pool with lanes model enables developers to bound both the overall concurrency of a server and the amount of work performed at a given priority level.

The following IDL, defined in the RTCORBA::RTORB interface, is used to create a thread pool with lanes:

  // ...
  struct ThreadpoolLane {
    Priority lane_priority;
    unsigned long static_threads;
    unsigned long dynamic_threads;
  };
  typedef sequence<ThreadpoolLane> ThreadpoolLanes;

  ThreadpoolId create_threadpool_with_lanes
    (in unsigned long stacksize,
     in ThreadpoolLanes lanes,
     in boolean allow_borrowing,
     in boolean allow_request_buffering,
     in unsigned long max_buffered_requests,
     in unsigned long max_request_buffer_size);
  // ...

The create_threadpool_with_lanes operation provides a superset of the functionality of the create_threadpool operation. The difference is that the caller passes a sequence of ThreadpoolLane structures, one per lane, rather than a single set of thread counts and a default priority. For each lane in a thread pool with lanes, the server specifies the CORBA priority, static thread count, and dynamic thread count. Dynamic threads are assigned the lane priority.

Thread pools with lanes can be configured to allow lanes with higher priorities to borrow threads from lanes with lower priorities. If a thread is borrowed, its priority is temporarily raised to that of the lane that borrows it. When the invocation processing is complete, its priority reverts to its previous value and the thread returns to its original lane. Thread pools with lanes also can be configured to support request buffering if no threads are available to process incoming requests.

The following C++ code illustrates how to create a thread pool with lanes. We start by defining the following three lanes:

RTCORBA::ThreadpoolLane high_priority = {
  50, // Lane priority
   1, // Static threads
   0  // Dynamic threads
};

RTCORBA::ThreadpoolLane mid_priority = {
  35, // Lane priority
   3, // Static threads
   2  // Dynamic threads
};

RTCORBA::ThreadpoolLane low_priority = {
  10, // Lane priority
   2, // Static threads
  20  // Dynamic threads
};

The high_priority lane will contain one static thread running at a default priority of 50 and no dynamic threads. The mid_priority lane will contain three static threads with a default priority of 35 and up to 2 dynamic threads. The low_priority lane will contain two static threads with a default priority of 10 and up to 20 dynamic threads.

We next create a RTCORBA::ThreadpoolLanes object; initialize it with the high_priority, mid_priority, and low_priority lanes; and call the create_threadpool_with_lanes factory method:

RTCORBA::ThreadpoolLanes lanes (3);
lanes.length (3);
lanes[0] = high_priority;   
lanes[1] = mid_priority;   
lanes[2] = low_priority;  

RTCORBA::ThreadpoolId pool_id =   
  rt_orb->create_threadpool_with_lanes   
    (0, // Default stacksize  
     lanes, // Thread pool lanes  
     false, // No thread borrowing  
     false, 0, 0); // No request buffering

After this sequence of operations, we'll end up with the three-lane thread pool configuration shown in Figure 4, with groups of threads running at priorities 10, 35, and 50.
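
As noted earlier, lanes can also be configured with thread borrowing and request buffering enabled. A variant of the call above would look like this (the buffer limits are illustrative):

RTCORBA::ThreadpoolId borrowing_pool_id =
  rt_orb->create_threadpool_with_lanes
    (0,          // Default stacksize
     lanes,      // Thread pool lanes
     true,       // Allow thread borrowing
     true,       // Enable request buffering
     64,         // Maximum # of buffered requests
     32 * 1024); // Maximum # of bytes to buffer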

Many DRE applications statically associate global CORBA priorities with thread pools. For example, a telecommunications application may select three distinct priorities to represent low-latency, high-throughput, and best-effort request classes. Alternatively, for DRE applications with real-time periodic processing requirements, such as avionics mission computing, a convenient model is a fixed set of rate groups with corresponding global CORBA priorities. In these scenarios, it is desirable to partition the threads in a thread pool into different subsets, each with different priorities.

Synchronizing Operations Consistently

Before the Real-time CORBA specification was released in CORBA 2.4 [3], CORBA did not define a threading model. There was thus no standard, portable API that CORBA applications could use to ensure semantic consistency between their synchronization mechanisms and the internal synchronization mechanisms used by ORB middleware. Real-time applications, however, require this consistency to enforce priority inheritance and priority ceiling protocols [4]. Priority inheritance means that when a thread waits on a mutex owned by a lower priority thread, the priority of the owner is increased to that of the waiter. Priority ceiling means that while a thread owns the mutex it runs at a priority higher than any other thread that may acquire the mutex.

To ensure semantic consistency between applications and ORB middleware, the Real-time CORBA specification defines a standard set of locality-constrained mutex operations. Figure 5 illustrates the RTCORBA::Mutex interface defined by Real-time CORBA.

An instance of the RTCORBA::Mutex interface can be created by an application using the following factory method:

RTCORBA::Mutex_var mutex = rt_orb->create_mutex ();

After being created, the mutex can be acquired and released to serialize access to a critical section and prevent corruption from race conditions, as follows:

mutex->lock ();
// Critical section here...
mutex->unlock ();
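
Note that the code above won't release the mutex if the critical section throws an exception. A small application-level guard class (this is our own sketch, not part of the Real-time CORBA API) makes the lock/unlock pairing exception safe:

// Acquire the mutex in the constructor and release it in the
// destructor, so the lock is dropped even if an exception is thrown.
class Mutex_Guard {
public:
  Mutex_Guard (RTCORBA::Mutex_ptr mutex) : mutex_ (mutex)
  { mutex_->lock (); }
  ~Mutex_Guard () { mutex_->unlock (); }
private:
  RTCORBA::Mutex_ptr mutex_;
};

{
  Mutex_Guard guard (mutex.in ());
  // Critical section here...
} // <-- The mutex is released automatically.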

When the mutex is no longer required, it can be destroyed using the following operation:

rt_orb->destroy_mutex (mutex);

It is important to note that Real-time CORBA doesn't prescribe a particular protocol for avoiding priority inversion, such as priority inheritance or priority ceiling. It does require, however, that the protocol used by mutexes internal to the ORB implementation be the same as the one used by RTCORBA::Mutex. Moreover, Real-time CORBA doesn't define a standard API for selecting this protocol, which means that users must rely on ORB-specific APIs.
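
For example, on a platform whose ORB builds its mutexes on POSIX threads, the two protocols described above correspond to the following mutex attributes (this is an OS-level illustration, not a CORBA API):

#include <pthread.h>

pthread_mutexattr_t attr;
pthread_mutexattr_init (&attr);

// Priority inheritance: boost the owner to the priority of the
// highest-priority waiter.
pthread_mutexattr_setprotocol (&attr, PTHREAD_PRIO_INHERIT);

// ...or priority ceiling: run the owner at the ceiling priority
// while it holds the mutex.
// pthread_mutexattr_setprotocol (&attr, PTHREAD_PRIO_PROTECT);
// pthread_mutexattr_setprioceiling (&attr, ceiling_priority);

pthread_mutex_t lock;
pthread_mutex_init (&lock, &attr);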

Concluding Remarks

The Real-time CORBA specification adds QoS control capabilities to regular CORBA to improve application predictability by bounding priority inversions and managing system resources end-to-end.

The ability to control how thread pools are assigned to POAs and to customize memory buffers for those pools is critical to ensure that servants have the resources they need to carry out their requests in a timely and predictable manner. You can control predictability even more precisely by using thread pools with lanes to bound priority inversions. Synchronizing operations in a way that preserves priority inheritance requires cooperation between the ORB and the application. Our next column will show how to program Real-time CORBA features that control communication resources end-to-end.

A PowerPoint tutorial version of our columns on the Real-time CORBA specification is available at <www.cs.wustl.edu/~schmidt/RT-CORBA.ppt>. If you have comments, questions, or suggestions regarding Real-time CORBA or our column, please let us know at [email protected].

References

1. D. Schmidt and S. Vinoski. "Object Interconnections: Real-time CORBA, Part 2: Applications and Priorities," C/C++ Users Journal C++ Experts Forum, January 2002, <www.cuj.com/experts/2001/vinoski/vinoski.htm>.

2. D. Schmidt, M. Stal, H. Rohnert, and F. Buschmann. Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects (Wiley and Sons, 2000), <www.cs.wustl.edu/~schmidt/POSA>.

3. Object Management Group, "The Common Object Request Broker: Architecture and Specification Revision 2.4, OMG Technical Document formal/01-02-33", October 2000, <www.omg.org/cgi-bin/doc?formal/01-02-33>.

4. R. Rajkumar, L. Sha, and J. Lehoczky. "Real-Time Synchronization Protocols for Multiprocessors," Proceedings of the Real-Time Systems Symposium, Huntsville, Alabama, December 1988.

About the Authors

Steve Vinoski is vice president of Platform Technologies and chief architect for IONA Technologies and is also an IONA Fellow. A frequent speaker at technical conferences, he has been giving CORBA tutorials around the globe since 1993. Steve helped put together several important OMG specifications, including CORBA 1.2, 2.0, 2.2, and 2.3; the OMG IDL C++ Language Mapping; the ORB Portability Specification; and the Objects By Value Specification. In 1996, he was a charter member of the OMG Architecture Board. He is currently the chair of the OMG IDL C++ Mapping Revision Task Force. He and Michi Henning are the authors of Advanced CORBA Programming with C++, published in January 1999 by Addison Wesley Longman.

Doug Schmidt is an associate professor at the University of California, Irvine. His research focuses on patterns, optimization principles, and empirical analyses of object-oriented techniques that facilitate the development of high-performance, real-time distributed object computing middleware on parallel processing platforms running over high-speed networks and embedded system interconnects. He is the lead author of the books Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, published in 2000 by Wiley and Sons, and C++ Network Programming: Mastering Complexity with ACE and Patterns, published in 2002 by Addison-Wesley. He can be contacted at [email protected].

