Drift Plus Penalty
This article describes the drift-plus-penalty method for optimization of queueing networks and other stochastic systems.
Introduction to the Drift-Plus-Penalty Method
The drift-plus-penalty method refers to a technique for stabilizing a queueing network while also minimizing the time average of a network penalty function. It can be used to optimize performance objectives such as time average power, throughput, and throughput utility. In the special case when there is no penalty to be minimized, and when the goal is to design a stable routing policy in a multi-hop network, the method reduces to backpressure routing.[1][2] The drift-plus-penalty method can also be used to minimize the time average of a stochastic process subject to time average constraints on a collection of other stochastic processes.[3] This is done by defining an appropriate set of virtual queues. It can also be used to produce time averaged solutions to convex optimization problems.[4]
Methodology
The drift-plus-penalty method applies to queueing systems that operate in discrete time with time slots t in {0, 1, 2, ...}. First, a non-negative function L(t) is defined as a scalar measure of the state of all queues at time t. The function L(t) is typically defined as the sum of the squares of all queue sizes at time t, and is called a Lyapunov function. The Lyapunov drift is defined:

Δ(t) = L(t+1) − L(t)
Every slot t, the current queue state is observed and control actions are taken to greedily minimize a bound on the following drift-plus-penalty expression:

Δ(t) + V·p(t)
where p(t) is the penalty function and V is a non-negative weight. The V parameter can be chosen to ensure the time average of p(t) is arbitrarily close to optimal, with a corresponding tradeoff in average queue size. Like backpressure routing, this method typically does not require knowledge of the probability distributions for job arrivals and network mobility.[3]
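For concreteness, suppose the network has N queues with backlogs Q_1(t), ..., Q_N(t). The following is a typical instantiation of the quantities above rather than a requirement of the method (the factor 1/2 is a common normalization that only rescales constants):

L(t) = (1/2) Σ_{n=1..N} Q_n(t)^2
Δ(t) = L(t+1) − L(t)
drift-plus-penalty expression: Δ(t) + V·p(t)

Because Δ(t) depends on the queue state at time t+1, which is not yet known when the slot-t decision is made, the controller minimizes a bound on this expression that is computable from the current queue values and the current random event, rather than the expression itself.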
Origins and Applications
When V=0, the method reduces to greedily minimizing the Lyapunov drift. This results in the backpressure routing algorithm originally developed by Tassiulas and Ephremides (also called the max-weight algorithm).[1] The Vp(t) term was added to the drift expression by Neely[5] and by Neely, Modiano, and Li[6] for stabilizing a network while also maximizing a throughput utility function. For this, the penalty p(t) was defined as -1 times a reward earned on slot t. This drift-plus-penalty technique was later used to minimize average power and optimize other penalty and reward metrics.[2][3]
The theory was developed primarily for optimizing communication networks, including wireless networks, ad-hoc mobile networks, and other computer networks. However, the mathematical techniques can be applied to optimization and control for other stochastic systems, including renewable energy allocation in smart power grids[7][8] and inventory control for product assembly systems.[9]
How it works
This section shows how to use the drift-plus-penalty method to minimize the time average of a function p(t) subject to time average constraints on a collection of other functions. The analysis below is based on the material in [3].
The Stochastic Optimization Problem
Consider a discrete time system that evolves over normalized time slots t in {0, 1, 2, ...}. Define p(t) as a function whose time average should be minimized, called a penalty function. Suppose that minimization of the time average of p(t) must be done subject to time-average constraints on a collection of K other functions y_1(t), ..., y_K(t):

lim_{t→∞} (1/t) Σ_{τ=0..t-1} y_i(τ) ≤ 0   for all i in {1, ..., K}
Every slot t, the network controller observes a new random event ω(t). It then makes a control action α(t) based on knowledge of this event. The values of p(t) and y_i(t) are determined as functions of the random event and the control action on slot t:

p(t) = P(α(t), ω(t))
y_i(t) = Y_i(α(t), ω(t))   for i in {1, ..., K}
The lower-case notation p(t), y_i(t) and upper-case notation P(), Y_i() are used to distinguish the penalty values from the functions that determine these values based on the random event and control action for slot t. The random event ω(t) is assumed to take values in some abstract event space Ω. The control action α(t) is assumed to be chosen within some abstract action space A.
As an example in the context of communication networks, the random event can be a vector that contains the slot t arrival information for each node and the slot t channel state information for each link. The control action can be a vector that contains the routing and transmission decisions for each node. The functions P() and Y_i() can represent power expenditures or throughputs associated with the control action and channel condition for slot t.
For simplicity of exposition, assume the P() and Y_i() functions are bounded. Further assume the random event process ω(t) is independent and identically distributed (i.i.d.) over slots with some possibly unknown probability distribution. The goal is to design a policy for making control actions over time to solve the following problem:

Minimize:    lim_{t→∞} (1/t) Σ_{τ=0..t-1} E[p(τ)]
Subject to:  lim_{t→∞} (1/t) Σ_{τ=0..t-1} E[y_i(τ)] ≤ 0   for all i in {1, ..., K}
             α(t) ∈ A   for all slots t
It is assumed throughout that this problem is feasible.
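As an illustration of how a typical requirement fits this template (the symbols b(t) and b_target below are introduced only for this example), suppose a controller must keep the time average of a throughput process b(t) at or above a target value b_target. Defining

y(t) = b_target − b(t)

puts the requirement in the standard form above: the time average of y(t) is non-positive exactly when the time average of b(t) is at least b_target. Constraints in the opposite direction, and equality constraints, can be handled by negating or by using two such functions.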
The Drift-Plus-Penalty Expression
For each constraint i in {1, ..., K}, define a virtual queue with dynamics over slots t in {0, 1, 2, ...} as follows:

Q_i(t+1) = max[Q_i(t) + y_i(t), 0]   (Eq. 1)
The queues can be initialized to 0 for slot t=0. Intuitively, stabilizing these virtual queues ensures the time averages of the constraint functions are non-positive, so that the desired constraints are satisfied. To stabilize these queues, define the Lyapunov function L(t) as a measure of the total queue backlog on slot t:

L(t) = (1/2) Σ_{i=1..K} Q_i(t)^2
Squaring the queueing equation (Eq. 1) results in the following bound on the Lyapunov drift:

Δ(t) = L(t+1) − L(t) ≤ B + Σ_{i=1..K} Q_i(t)·y_i(t)   (Eq. 2)
where B is a positive constant that upper bounds the term (1/2) Σ_{i=1..K} y_i(t)^2 (such a constant exists because the y_i(t) values are bounded). Adding V·p(t) to both sides of the above inequality results in the following bound on the drift-plus-penalty expression:

Δ(t) + V·p(t) ≤ B + V·p(t) + Σ_{i=1..K} Q_i(t)·y_i(t)
The drift-plus-penalty algorithm (defined below) makes control actions every slot t that greedily minimize the right-hand-side of the above inequality. Intuitively, taking an action that minimizes the drift alone would be beneficial in terms of queue stability but would not minimize time average penalty. Taking an action that minimizes the penalty every slot would not necessarily stabilize the queues. Thus, taking an action to minimize the weighted sum incorporates both objectives of queue stability and penalty minimization. The weight V is chosen to place more or less emphasis on penalty minimization, which results in a performance tradeoff.
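The squaring step behind (Eq. 2) is standard and can be sketched as follows. For each i, the update (Eq. 1) gives

Q_i(t+1)^2 = (max[Q_i(t) + y_i(t), 0])^2 ≤ (Q_i(t) + y_i(t))^2 = Q_i(t)^2 + 2·Q_i(t)·y_i(t) + y_i(t)^2

so that (1/2)(Q_i(t+1)^2 − Q_i(t)^2) ≤ Q_i(t)·y_i(t) + (1/2)·y_i(t)^2. Summing over i in {1, ..., K} and using the definition of L(t) yields

Δ(t) ≤ Σ_{i=1..K} Q_i(t)·y_i(t) + (1/2) Σ_{i=1..K} y_i(t)^2 ≤ B + Σ_{i=1..K} Q_i(t)·y_i(t)

where the last step uses the boundedness of the y_i(t) values to define B.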
Drift-Plus-Penalty Algorithm
Let A be the abstract set of all possible control actions. Every slot t, observe the random event and the current queue values:

Observe: ω(t), Q_1(t), ..., Q_K(t)
Given these observations, greedily choose a control action α(t) in A to minimize the following expression (breaking ties arbitrarily):

V·P(α(t), ω(t)) + Σ_{i=1..K} Q_i(t)·Y_i(α(t), ω(t))
Then update the queues for each i in {1, ..., K} according to (Eq. 1).
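The following is a minimal sketch of this algorithm on a toy single-link example. The problem data (two i.i.d. channel states, three power levels, a throughput target) are illustrative assumptions and not part of the method: the penalty is transmit power, and the single constraint requires the time-average throughput to be at least a target, encoded as y(t) = target − throughput(t).

# Minimal sketch of the drift-plus-penalty algorithm on a toy problem.
# The channel model, power levels, and throughput target are illustrative
# assumptions, not part of the general method.

import random

V = 50.0                                     # penalty weight
T = 100000                                   # number of slots to simulate
power_levels = [0.0, 1.0, 2.0]               # abstract action set A
channel_gains = {"good": 2.0, "bad": 1.0}    # event space (i.i.d. channel states)
p_good = 0.5                                 # probability the channel is "good"
target_throughput = 1.5                      # desired time-average throughput

def P(a, w):
    """Penalty function: power spent when action a is taken under channel state w."""
    return a

def Y(a, w):
    """Constraint function: y(t) = target - throughput(t); its time average must be <= 0."""
    return target_throughput - a * channel_gains[w]

Q = 0.0                                      # virtual queue for the throughput constraint
total_penalty = 0.0
total_throughput = 0.0

for t in range(T):
    w = "good" if random.random() < p_good else "bad"   # observe the random event
    # Greedily minimize V*P(a, w) + Q*Y(a, w) over the action set.
    a = min(power_levels, key=lambda a: V * P(a, w) + Q * Y(a, w))
    # Update the virtual queue according to (Eq. 1).
    Q = max(Q + Y(a, w), 0.0)
    total_penalty += P(a, w)
    total_throughput += a * channel_gains[w]

print("time-average power     :", total_penalty / T)
print("time-average throughput:", total_throughput / T)
print("final virtual queue    :", Q)

In this sketch, larger V pushes the time-average power toward its minimum feasible value, while the virtual queue (and hence the constraint slack that must be worked off over time) grows on the order of V, mirroring the tradeoff analyzed below.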
Performance Analysis
This section shows that the algorithm results in a time average penalty that is within O(1/V) of optimality, with a corresponding O(V) tradeoff in average queue size.[3]
Average Penalty Analysis
Define an ω-only policy to be a stationary and randomized policy for choosing the control action based only on the observed ω(t). That is, an ω-only policy specifies, for each possible random event ω in Ω, a conditional probability distribution for selecting a control action in A. Such a policy makes decisions independent of current queue backlog. Assume there exists an ω-only policy α*(t) that satisfies the following:

E[P(α*(t), ω(t))] = p*                                   (Eq. 3)
E[Y_i(α*(t), ω(t))] ≤ 0   for all i in {1, ..., K}       (Eq. 4)

where p* denotes the optimal time average penalty for the problem of the previous section.
The expectations above are with respect to the random variable ω(t) for slot t, and the random control action α*(t) chosen on slot t after observing ω(t). Such a policy can be shown to exist whenever the desired control problem is feasible and the event space for ω(t) and action space for α(t) are finite, or when mild closure properties are satisfied.[3]
Let α(t) represent the action taken by the drift-plus-penalty algorithm of the previous section, and let α*(t) represent the ω-only decision satisfying (Eq. 3)-(Eq. 4).
By (Eq. 2), the drift-plus-penalty expression under the α(t) policy satisfies:

Δ(t) + V·p(t) ≤ B + V·P(α(t), ω(t)) + Σ_{i=1..K} Q_i(t)·Y_i(α(t), ω(t))
             ≤ B + V·P(α*(t), ω(t)) + Σ_{i=1..K} Q_i(t)·Y_i(α*(t), ω(t))
where the last inequality follows because α(t) is defined to minimize the second-to-last expression over all decisions in the action space A, including the (randomized) decision α*(t). Taking expectations of the above inequality gives:

E[Δ(t) + V·p(t)] ≤ B + V·E[P(α*(t), ω(t))] + Σ_{i=1..K} E[Q_i(t)·Y_i(α*(t), ω(t))]
                 = B + V·E[P(α*(t), ω(t))] + Σ_{i=1..K} E[Q_i(t)]·E[Y_i(α*(t), ω(t))]
                 ≤ B + V·p*
where the second-to-last equality follows because the queue backlogs Q_i(t) are independent of ω(t) (and hence of the ω-only decision α*(t)), and the last inequality follows by (Eq. 3)-(Eq. 4) together with the fact that Q_i(t) ≥ 0. Summing the above inequality over the first t>0 slots gives:

Σ_{τ=0..t-1} E[Δ(τ) + V·p(τ)] ≤ B·t + V·t·p*
Using the fact that Δ(τ) = L(τ+1) − L(τ) together with the law of telescoping sums gives:

E[L(t)] − E[L(0)] + V·Σ_{τ=0..t-1} E[p(τ)] ≤ B·t + V·t·p*
Using the fact that L(t) is non-negative and assuming L(0) is identically zero gives:

V·Σ_{τ=0..t-1} E[p(τ)] ≤ B·t + V·t·p*
Dividing the above by V·t and rearranging terms yields the following result, which holds for all slots t>0:

(1/t) Σ_{τ=0..t-1} E[p(τ)] ≤ p* + B/V
Thus, the time average expected penalty can be made arbitrarily close to the optimal value p* by choosing V suitably large.
Average Queue Size Analysis
Assume now there exists an ω-only policy α*(t), possibly different from the one that satisfies (Eq. 3)-(Eq. 4), that satisfies the following for some ε>0:

E[Y_i(α*(t), ω(t))] ≤ −ε   for all i in {1, ..., K}   (Eq. 5)
An argument similar to the one in the previous section shows:

Δ(t) + V·p(t) ≤ B + V·P(α*(t), ω(t)) + Σ_{i=1..K} Q_i(t)·Y_i(α*(t), ω(t))
Now assume there are upper and lower bounds on the penalty function P(), so that:

p_min ≤ P(α, ω) ≤ p_max   for all α in A and all ω in Ω
Then the above inequality reduces to:

Δ(t) ≤ B + V·(p_max − p_min) + Σ_{i=1..K} Q_i(t)·Y_i(α*(t), ω(t))
Taking expectations of the above and using (Eq. 5) gives:

E[Δ(t)] ≤ B + V·(p_max − p_min) − ε·Σ_{i=1..K} E[Q_i(t)]
A telescoping series argument similar to the one in the previous section can thus be used to show the following for all t>0:

(1/t) Σ_{τ=0..t-1} Σ_{i=1..K} E[Q_i(τ)] ≤ (B + V·(p_max − p_min)) / ε
This shows that average queue size is indeed O(V).
Treatment of Queueing Systems
The above analysis considers constrained optimization of time averages in a stochastic system that does not have any explicit queues. Each time average inequality constraint was mapped to a virtual queue according to (Eq. 1). In the case of optimizing a queueing network, the virtual queue equations in (Eq. 1) are replaced by the actual queueing equations.
Delay Tradeoffs and Related Work
The mathematical analysis in the previous section shows that the drift-plus-penalty method produces a time average penalty that is within O(1/V) of optimality, with a corresponding O(V) tradeoff in average queue size. This method, together with the O(1/V), O(V) tradeoff, was developed in Neely[5] and Neely, Modiano, and Li[6] in the context of maximizing network utility subject to stability.
A related algorithm for maximizing network utility was developed by Eryilmaz and Srikant.[10] Their work resulted in an algorithm very similar to the drift-plus-penalty algorithm, but used a different analytical technique based on Lagrange multipliers. A direct use of the Lagrange multiplier technique results in a worse tradeoff of O(1/V), O(V^2). However, the Lagrange multiplier analysis was later strengthened by Huang and Neely to recover the original O(1/V), O(V) tradeoffs, while showing that queue sizes are tightly clustered around the Lagrange multiplier of a corresponding deterministic optimization problem.[11] This clustering result can be used to modify the drift-plus-penalty algorithm to enable improved O(1/V), O(log^2(V)) tradeoffs. The modifications can use either place-holder backlog[11] or Last-in-First-Out (LIFO) scheduling.[12][13]
When implemented for non-stochastic functions, the drift-plus-penalty method is similar to the dual subgradient method of convex optimization theory, with the exception that its output is a time average of primal variables, rather than the primal variables themselves.[2][4] A related primal-dual technique for maximizing utility in a stochastic network was developed by Stolyar using a fluid model analysis.[14][15] The Stolyar analysis does not provide analytical results for a performance tradeoff between utility and queue size. A later analysis of the primal-dual method for stochastic networks provides a limited form of utility and queue size tradeoffs, and also shows local optimality results for minimizing non-convex functions of time averages, under an additional convergence assumption.[3]
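To illustrate this connection on a deterministic example, the same per-slot rule can be run with no random event at all: choose x(t) to minimize V·f(x) + Q(t)·g(x), update a virtual queue with g(x(t)), and report the time average of the chosen x(t). The specific program below (minimize x^2 subject to x ≥ 1 over 0 ≤ x ≤ 2), the grid discretization, and the parameter values are arbitrary choices made only for this sketch.

# Minimal sketch: drift-plus-penalty applied to a deterministic convex program,
# minimize f(x) = x**2 subject to g(x) = 1 - x <= 0, with 0 <= x <= 2.
# The reported answer is the time average of the per-slot choices.

def f(x):          # objective (penalty) function
    return x * x

def g(x):          # constraint function, required to have non-positive time average
    return 1.0 - x

V = 100.0          # penalty weight
T = 50000          # number of iterations (slots)
grid = [k / 50.0 for k in range(101)]   # discretized feasible set [0, 2]

Q = 0.0            # virtual queue for the constraint g(x) <= 0
x_sum = 0.0

for t in range(T):
    # Per-slot rule: minimize V*f(x) + Q*g(x) over the feasible set.
    x = min(grid, key=lambda x: V * f(x) + Q * g(x))
    Q = max(Q + g(x), 0.0)
    x_sum += x

x_avg = x_sum / T
print("time-averaged primal variable:", x_avg)   # approaches the optimum x* = 1
print("objective at the average     :", f(x_avg))
print("final virtual queue          :", Q)

In this sketch the virtual queue Q plays the role of a scaled dual variable, and it is the time average of the per-slot minimizers, not the final iterate, that approximates the solution, consistent with the distinction drawn above.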
Extensions to Non-I.I.D. Event Processes
The drift-plus-penalty algorithm is known to ensure similar performance guarantees for more general ergodic processes ω(t), so that the i.i.d. assumption is not crucial to the analysis. The algorithm is robust to non-ergodic changes in the probabilities for ω(t), and provides desirable analytical guarantees, called universal scheduling guarantees, for arbitrary ω(t) processes.[3]
Extensions to Variable Frame Length Systems
The drift-plus-penalty method can be extended to treat systems that operate over variable size frames.[3][16] In that case, the frames are labeled with indices r in {0, 1, 2, ...} and the frame durations are denoted {T[0], T[1], T[2], ...}, where T[r] is a non-negative real number for each frame r. Let Δ[r] denote the change in the Lyapunov function over frame r, and let p[r] denote the total penalty incurred over frame r. The extended algorithm takes a control action over each frame r to minimize a bound on the following ratio of conditional expectations:

E[ Δ[r] + V·p[r] | Q[r] ] / E[ T[r] | Q[r] ]
where Q[r] is the vector of queue backlogs at the beginning of frame r. In the special case when all frames are the same size and are normalized to 1 slot length, so that T[r]=1 for all r, the above minimization reduces to the standard drift-plus-penalty technique. This frame-based method can be used for constrained optimization of Markov decision problems (MDPs) and for other problems that experience renewals.[16]
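As a purely illustrative special case (the finite mode set M and the per-mode means below are assumptions of this example, not part of the cited works), suppose each frame's evolution is governed by a single frame-level decision m chosen from M, with known conditional means E[T_m], E[p_m], and E[y_{i,m}] for the frame length, total penalty, and total constraint increments under mode m. Minimizing the bound then amounts to selecting

m[r] ∈ argmin_{m in M}  ( B + V·E[p_m] + Σ_{i=1..K} Q_i[r]·E[y_{i,m}] ) / E[T_m]

where B is a constant playing the same role as in (Eq. 2), after which the virtual queues are updated with the realized constraint increments accumulated over the frame.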
References
1. L. Tassiulas and A. Ephremides, "Stability Properties of Constrained Queueing Systems and Scheduling Policies for Maximum Throughput in Multihop Radio Networks," IEEE Transactions on Automatic Control, vol. 37, no. 12, pp. 1936-1948, Dec. 1992.
2. L. Georgiadis, M. J. Neely, and L. Tassiulas, "Resource Allocation and Cross-Layer Control in Wireless Networks," Foundations and Trends in Networking, vol. 1, no. 1, pp. 1-149, 2006.
3. M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems, Morgan & Claypool, 2010.
4. M. J. Neely, "Distributed and Secure Computation of Convex Programs over a Network of Connected Processors," DCDIS Conf., Guelph, Ontario, July 2005.
5. M. J. Neely, Dynamic Power Allocation and Routing for Satellite and Wireless Networks with Time Varying Channels, Ph.D. Dissertation, Massachusetts Institute of Technology, LIDS, November 2003.
6. M. J. Neely, E. Modiano, and C. Li, "Fairness and Optimal Stochastic Control for Heterogeneous Networks," Proc. IEEE INFOCOM, March 2005.
7. R. Urgaonkar, B. Urgaonkar, M. J. Neely, and A. Sivasubramaniam, "Optimal Power Cost Management Using Stored Energy in Data Centers," Proc. SIGMETRICS, 2011.
8. M. J. Neely, A. S. Tehrani, and A. G. Dimakis, "Efficient Algorithms for Renewable Energy Allocation to Delay Tolerant Consumers," 1st IEEE International Conf. on Smart Grid Communications, 2010.
9. M. J. Neely and L. Huang, "Dynamic Product Assembly and Inventory Control for Maximum Profit," Proc. IEEE Conf. on Decision and Control, Atlanta, GA, Dec. 2010.
10. A. Eryilmaz and R. Srikant, "Fair Resource Allocation in Wireless Networks using Queue-Length-Based Scheduling and Congestion Control," Proc. IEEE INFOCOM, March 2005.
11. L. Huang and M. J. Neely, "Delay Reduction via Lagrange Multipliers in Stochastic Network Optimization," IEEE Transactions on Automatic Control, vol. 56, no. 4, pp. 842-857, April 2011.
12. S. Moeller, A. Sridharan, B. Krishnamachari, and O. Gnawali, "Routing without Routes: The Backpressure Collection Protocol," Proc. IPSN, 2010.
13. L. Huang, S. Moeller, M. J. Neely, and B. Krishnamachari, "LIFO-Backpressure Achieves Near Optimal Utility-Delay Tradeoff," IEEE/ACM Transactions on Networking, to appear.
14. A. Stolyar, "Maximizing Queueing Network Utility subject to Stability: Greedy Primal-Dual Algorithm," Queueing Systems, vol. 50, no. 4, pp. 401-457, 2005.
15. A. Stolyar, "Greedy Primal-Dual Algorithm for Dynamic Resource Allocation in Complex Networks," Queueing Systems, vol. 54, no. 3, pp. 203-220, 2006.
16. M. J. Neely, "Dynamic Optimization and Learning for Renewal Systems," IEEE Transactions on Automatic Control, vol. 58, no. 1, pp. 32-46, Jan. 2013.