Reactor pattern
The reactor software design pattern is an event handling strategy that can respond to many potential service requests concurrently. The pattern's key component is an event loop, running in a single thread or process, which demultiplexes incoming requests and dispatches them to the correct request handler.[1]
By relying on event-based mechanisms rather than blocking I/O or multi-threading, a reactor can handle many concurrent I/O-bound requests with minimal delay.[2] A reactor achieves this scalability in a relatively simple way that also allows specific request handler routines to be easily modified or expanded, though the pattern does have some drawbacks and limitations.[1]
With its balance of simplicity and scalability, the reactor has become a central architectural element in several server applications and software frameworks for networking. Derivations such as the multireactor and proactor also exist for special cases where even greater throughput, performance, or request complexity is necessary.[1][2][3][4]
Overview
Practical considerations for the client–server model in large networks, such as the C10k problem for web servers, were the original motivation for the reactor pattern.[5]
A naive approach to handle service requests from many potential endpoints, such as network sockets or file descriptors, is to listen for new requests from within an event loop, then immediately read the earliest request. Once the entire request is read, it can be processed and forwarded on by directly calling the appropriate handler. An entirely "iterative" server like this, which handles one request from start-to-finish per iteration of the event loop, is logically valid. However, it will fall behind once it receives multiple requests in quick succession. The iterative approach cannot scale because reading the request blocks the server's only thread until the full request is received, and I/O operations are typically much slower than other computations.[2]
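The iterative approach can be sketched as follows. This is an illustrative example, not taken from the cited sources; the echo-style response and port number are arbitrary. The key point is that `accept()` and `recv()` both block the server's only thread, so a second client must wait until the first request is fully handled.

```python
import socket

def iterative_server(host: str = "127.0.0.1", port: int = 8080) -> None:
    """Naive "iterative" server: one request handled start-to-finish
    per iteration of the loop."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        while True:
            conn, _addr = server.accept()   # blocks until a client connects
            with conn:
                # Blocks until data arrives; every other client must wait.
                request = conn.recv(4096)
                conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n" + request)
```

Because the blocking `recv()` sits inside the main loop, throughput is limited to one in-flight request at a time.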
One strategy to overcome this limitation is multi-threading: by immediately splitting off each new request into its own worker thread, the first request will no longer block the event loop, which can immediately iterate and handle another request. This "thread per connection" design scales better than a purely iterative one, but it still contains multiple inefficiencies and will struggle past a point. From a standpoint of underlying system resources, each new thread or process imposes overhead costs in memory and processing time (due to context switching). The fundamental inefficiency of each thread waiting for I/O to finish isn't resolved either.[1][2]
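A thread-per-connection version of the same illustrative server might look like this (again a sketch, with arbitrary handler logic and port). The accept loop is no longer blocked by slow clients, but each connection still ties up one thread that sits idle while waiting on I/O.

```python
import socket
import threading

def handle_connection(conn: socket.socket) -> None:
    """Worker thread: its blocking recv() no longer stalls the accept loop,
    but the thread itself still idles while waiting for I/O."""
    with conn:
        request = conn.recv(4096)
        conn.sendall(b"echo: " + request)

def threaded_server(host: str = "127.0.0.1", port: int = 8081) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        while True:
            conn, _addr = server.accept()
            # One thread per connection: each thread costs memory and
            # context-switch time, and still blocks on I/O internally.
            threading.Thread(target=handle_connection, args=(conn,),
                             daemon=True).start()
```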
From a design standpoint, both approaches tightly couple the general demultiplexer with specific request handlers in the event loop, making the server code brittle and tedious to modify whenever request logic changes. These considerations suggest a few major design decisions:
- Retain a single-threaded event handler; multi-threading introduces overhead and complexity without resolving the real issue of blocking I/O
- Use an event notification mechanism to demultiplex requests only after I/O is complete (so I/O is effectively non-blocking)
- Register request handlers as callbacks with the event handler for better separation of concerns
Combining these insights leads to the reactor pattern, which balances the advantages of single-threading with high throughput and scalability.[1][2]
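The three decisions above can be combined into a minimal reactor sketch. This example uses Python's standard `selectors` module as the synchronous event demultiplexer; the class and handler names are illustrative, not part of any canonical implementation. A single thread blocks in `select()`, then dispatches each ready resource to the callback registered for it.

```python
import selectors
import socket

class Reactor:
    """Minimal reactor: a single-threaded event loop that demultiplexes
    ready resources and dispatches them to registered callback handlers."""

    def __init__(self) -> None:
        self.selector = selectors.DefaultSelector()

    def register(self, sock: socket.socket, handler) -> None:
        sock.setblocking(False)
        self.selector.register(sock, selectors.EVENT_READ, handler)

    def unregister(self, sock: socket.socket) -> None:
        self.selector.unregister(sock)

    def run_once(self, timeout: float = 1.0) -> None:
        # Block until at least one registered resource is ready, then
        # dispatch each ready resource to its callback. Because readiness
        # was already reported, the handler's recv() will not block.
        for key, _mask in self.selector.select(timeout):
            key.data(self, key.fileobj)

def echo_handler(reactor: Reactor, conn: socket.socket) -> None:
    data = conn.recv(4096)          # guaranteed ready: won't block
    if data:
        conn.sendall(b"echo: " + data)
    else:
        reactor.unregister(conn)    # peer closed; stop watching it
        conn.close()
```

Handlers are registered as callbacks, so request logic can be changed or extended without touching the event loop itself.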
Usage
The reactor pattern's balance of scalability and simplicity makes it a good starting point for any application that requires concurrent demultiplexing. The pattern isn't restricted to network sockets either; it can apply to hardware I/O, file system or database access, inter-process communication, and even abstract message-passing systems.[citation needed]
However, the reactor pattern does have some limitations, a major one being its use of callbacks, which make program analysis and debugging more difficult, a problem common to designs with inverted control.[1] The simpler thread-per-connection and fully iterative approaches avoid this complication, and both can still be valid solutions when scalability or high throughput is known to be unnecessary.[a][citation needed]
Another drawback of the reactor pattern is that its single-threaded design is still sub-optimal for applications that require maximum throughput or heavy processing in the request handlers. Different multi-threaded designs can overcome these limitations, and in fact, some still include the reactor pattern as a sub-component for handling events and I/O.[1]
Applications
The reactor pattern (or a variant of it) has found a place in many web servers, application servers, and networking frameworks:
- Adaptive Communication Environment[1]
- EventMachine[citation needed]
- Netty[3]
- Nginx[4]
- Node.js[2]
- Perl Object Environment[citation needed]
- POCO C++ Libraries[citation needed]
- Spring Framework (version 5 and later)[citation needed]
- Twisted[citation needed]
- Vert.x[3]
Structure
- Resources
- Any resource that can provide input to or consume output from the system.
- Synchronous Event Demultiplexer
- Uses an event loop to block on all resources. The demultiplexer sends the resource to the dispatcher when it is possible to start a synchronous operation on a resource without blocking. (Example: a synchronous call to `read()` will block if there is no data to read. The demultiplexer uses `select()` on the resource, which blocks until the resource is available for reading. In this case, a synchronous call to `read()` won't block, and the demultiplexer can send the resource to the dispatcher.)
- Dispatcher
- Handles registering and unregistering of request handlers. Dispatches resources from the demultiplexer to the associated request handler.
- Request Handler
- An application-defined request handler and its associated resource.
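The demultiplexer's `select()`-before-`read()` behavior can be shown directly. This sketch is illustrative (the function name is invented) and uses Python's `select` module, where `recv()` plays the role of `read()`: blocking in `select()` first guarantees the subsequent read will not block.

```python
import select
import socket

def demultiplex_and_read(resources):
    """One demultiplexing step: block in select() until at least one
    resource is readable, then read only from the ready ones, so no
    individual recv() call (the analogue of read()) can block."""
    readable, _, _ = select.select(resources, [], [])
    results = {}
    for sock in readable:
        results[sock] = sock.recv(4096)  # safe: select() reported it ready
    return results
```

In a full reactor, each ready resource would be handed to the dispatcher rather than read in place.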
UML diagram
Variants
See also
Related patterns:
- Active object
- Observer pattern
- Proactor pattern, which allows mixing synchronous & asynchronous event handling
- Thread pool, a complementary technique to lower the start-up and tear-down overhead of threads
Notes
- ^ That said, a rule-of-thumb in software design is that if application demands can potentially increase past an assumed limit, one should expect that someday they will.
References
- ^ a b c d e f g h Schmidt, Douglas C. (1995). "Chapter 29: Reactor: An Object Behavioral Pattern for Demultiplexing and Dispatching Handles for Synchronous Events" (PDF). In Coplien, James O. (ed.). Pattern Languages of Program Design. Vol. 1 (1st ed.). Addison-Wesley. ISBN 9780201607345.
- ^ a b c d e f Devresse, Adrien (20 June 2014). "Efficient parallel I/O on multi-core architectures" (PDF). 2nd Thematic CERN School of Computing. CERN. Archived (PDF) from the original on 8 August 2022. Retrieved 14 September 2023.
- ^ a b c Escoffier, Clement; Finnegan, Ken (November 2021). "Chapter 4. Design Principles of Reactive Systems". Reactive Systems in Java. O'Reilly Media. ISBN 9781492091721.
- ^ a b Garrett, Owen (10 June 2015). "Inside NGINX: How We Designed for Performance & Scale". NGINX. F5, Inc. Archived from the original on 20 August 2023. Retrieved 10 September 2023.
- ^ Kegel, Dan (5 February 2014). "The C10k problem". Dan Kegel's Web Hostel. Archived from the original on 6 September 2023. Retrieved 10 September 2023.
External links
Specific applications:
- Alexeev, Andrew (30 March 2012). "Chapter 14: nginx". In Brown, Amy; Wilson, Greg (eds.). The Architecture of Open Source Applications. Vol. 2. ISBN 9781105571817.
Sample implementations: