
Uniform-machines scheduling

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Erel Segal (talk | contribs) at 15:07, 18 June 2021 (Algorithms). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Uniform machine scheduling (also called uniformly-related machine scheduling or related machine scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m different machines. The goal is to minimize the makespan, that is, the total time required to execute the schedule. The time that machine i needs in order to process job j is denoted by pi,j. In the general case, the times pi,j are unrelated, and any matrix of positive processing times is possible. In the specific variant called uniform machine scheduling, some machines are uniformly faster than others. This means that, for each machine i, there is a speed factor si, and the run-time of job j on machine i is pi,j = pj / si.
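As a concrete illustration of the definitions above, the processing-time rule pi,j = pj / si and the makespan of a given assignment can be sketched in Python (the job lengths and speeds below are hypothetical, not from any source):

```python
# Illustrative sketch: processing times on uniform machines, where a
# machine with speed s_i runs a job of length p_j in p_j / s_i time units.

def processing_time(p_j, s_i):
    """Time a machine with speed s_i needs for a job of length p_j."""
    return p_j / s_i

def makespan(assignment, p, s):
    """Makespan of a given job-to-machine assignment on uniform machines.

    assignment[j] is the machine index of job j; p lists job lengths;
    s lists machine speeds. Jobs on the same machine run sequentially.
    """
    loads = [0.0] * len(s)
    for j, i in enumerate(assignment):
        loads[i] += p[j] / s[i]
    return max(loads)

# Hypothetical instance: 4 jobs, 2 machines, the second twice as fast.
p = [3, 1, 4, 2]
s = [1, 2]
print(makespan([0, 0, 1, 1], p, s))  # jobs 0,1 on machine 0; jobs 2,3 on machine 1
```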

In the standard three-field notation for optimal job scheduling problems, the uniform-machine variant is denoted by Q in the first field. For example, the problem denoted by "Q||Cmax" is a uniform machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time.

A special case of uniform machine scheduling is identical machine scheduling, in which all machines have the same speed. This variant is denoted by P in the first field. For example, "P||Cmax" is an identical machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time. This problem is also a special case of the multiway number partitioning problem. In both problems, the goal is to partition numbers into subsets with nearly equal sums; identical machine scheduling is a special case in which the objective is to minimize the largest sum. Many algorithms for multiway number partitioning, both exact and approximate, can be used to attain this objective.

In some variants of the problem, instead of minimizing the maximum completion time, it is desired to minimize the average completion time (averaged over all n jobs). More generally, when some jobs are more important than others, it may be desired to minimize a weighted average of the completion time, where each job has a different weight.

Algorithms

The SPT algorithm (Shortest Processing Time First), which runs in time O(n log n), minimizes the average completion time on identical machines.[1]
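The SPT rule can be sketched as list scheduling in shortest-first order: each job goes to the machine that becomes free earliest. The instance below is hypothetical, for illustration only:

```python
# A minimal sketch of SPT on identical machines: schedule jobs
# shortest-first, always on the machine that frees up earliest.

def spt_average_completion(jobs, m):
    """Average completion time of the SPT schedule on m identical machines."""
    loads = [0.0] * m            # time at which each machine becomes free
    total = 0.0
    for p in sorted(jobs):       # shortest processing time first
        i = loads.index(min(loads))
        loads[i] += p            # the job completes at the machine's new load
        total += loads[i]
    return total / len(jobs)

print(spt_average_completion([2, 1, 4, 3], m=2))
```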

Bruno, Coffman and Sethi[2] present a polynomial-time algorithm for minimizing the average completion time on unrelated machines. They show that minimizing the weighted average completion time is NP-hard even on identical machines, by reduction from the knapsack problem.

Minimizing the maximum completion time is NP-hard even for identical machines, by reduction from the partition problem.

The simple algorithm called greedy number partitioning, or the LPT algorithm (Longest Processing Time First), sorts the jobs by their length, longest first, and then assigns them to the processor with the earliest end time so far. While originally developed for identical machines, it has good asymptotic convergence properties even for uniform machines.[3]
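The LPT rule above can be sketched as follows; on uniform machines, "earliest end time" accounts for each machine's speed. The job lengths and speeds are hypothetical:

```python
# A sketch of the LPT rule: sort jobs longest-first, then give each job
# to the machine on which it would finish earliest.

def lpt_makespan(jobs, speeds):
    loads = [0.0] * len(speeds)  # current finishing time of each machine
    for p in sorted(jobs, reverse=True):   # longest processing time first
        # pick the machine on which this job would finish earliest
        i = min(range(len(speeds)), key=lambda k: loads[k] + p / speeds[k])
        loads[i] += p / speeds[i]
    return max(loads)

print(lpt_makespan([7, 5, 4, 3, 1], speeds=[1, 1]))  # identical machines
print(lpt_makespan([7, 5, 4, 3, 1], speeds=[1, 2]))  # second machine twice as fast
```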

Horowitz and Sahni[4] present several algorithms for both uniform machines and unrelated machines, with various objective functions:

  • An exact algorithm for minimizing the maximum completion time on both uniform and unrelated machines, with exponential run-time (note that these problems are NP-hard).
  • An exact algorithm for minimizing the average completion time on uniform machines, with run time O(n log mn).
  • An exact algorithm for minimizing the weighted-average completion time on uniform machines, with exponential run-time (this problem is NP-hard).
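The exact exponential-time approach can be illustrated by brute force over all m^n job-to-machine assignments. This is only a sketch of the search space, not Horowitz and Sahni's actual algorithm, which enumerates far more cleverly; the instance is hypothetical:

```python
# Illustrative brute force for minimum makespan on unrelated machines:
# try every assignment of n jobs to m machines (m^n possibilities).
from itertools import product

def exact_min_makespan(p):
    """p[i][j] = time machine i needs to process job j (unrelated machines)."""
    m, n = len(p), len(p[0])
    best = float("inf")
    for assignment in product(range(m), repeat=n):  # one machine per job
        loads = [0.0] * m
        for j, i in enumerate(assignment):
            loads[i] += p[i][j]
        best = min(best, max(loads))
    return best

# Hypothetical 2-machine, 3-job instance.
print(exact_min_makespan([[2, 3, 4], [3, 2, 1]]))
```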

Hochbaum and Shmoys,[5] who developed a PTAS for identical processors, extended their PTAS to handle processors with different speeds.

Epstein and Sgall[6] generalized this PTAS to handle more general objective functions. Let Ci (for i between 1 and m) be the makespan of machine i in a given schedule (the symbol si is reserved here for machine speeds). Instead of minimizing the objective function max(Ci), one can minimize the objective function max(f(Ci)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(Ci)).
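These generalized objectives can be illustrated with the hypothetical choice f(x) = x*x (a sum of squares penalizes imbalance between machines more than the plain makespan does):

```python
# Evaluate the two generalized objectives, max(f(.)) and sum(f(.)),
# over the per-machine makespans of some schedule; f is a fixed function.

def objectives(makespans, f):
    return max(f(c) for c in makespans), sum(f(c) for c in makespans)

# Hypothetical schedule whose two machines finish at times 3 and 4.
mx, total = objectives([3.0, 4.0], f=lambda x: x * x)
print(mx, total)
```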

Extensions

Dependent jobs: In some cases, the jobs may be dependent. For example, consider reading user credentials from a console, then using them to authenticate, and then, if authentication succeeds, displaying some data on the console. Each task depends on the completion of its predecessor, so an ordering exists among the tasks. Such precedence constraints can be modelled by a partial order, so the set of tasks forms a directed acyclic precedence graph. This adds further complication to the multiprocessor scheduling problem.

Static versus Dynamic: Machine scheduling algorithms are static or dynamic. A scheduling algorithm is static if the scheduling decisions as to what computational tasks will be allocated to what processors are made before running the program. An algorithm is dynamic if these decisions are made at run time. For static scheduling algorithms, a typical approach is to rank the tasks according to their precedence relationships and use a list scheduling technique to schedule them onto the processors.[7]

Multi-stage jobs: In various settings, each job might consist of several operations, each of which must be processed on a different machine. Such settings are handled by open shop scheduling, flow shop scheduling and job shop scheduling.

References

  1. ^ Horowitz, Ellis; Sahni, Sartaj (1976-04-01). "Exact and Approximate Algorithms for Scheduling Nonidentical Processors". Journal of the ACM. 23 (2): 317–327. doi:10.1145/321941.321951. ISSN 0004-5411.
  2. ^ Bruno, J.; Coffman, E. G.; Sethi, R. (1974-07-01). "Scheduling independent tasks to reduce mean finishing time". Communications of the ACM. 17 (7): 382–387. doi:10.1145/361011.361064. ISSN 0001-0782.
  3. ^ Frenk, J. B. G.; Rinnooy Kan, A. H. G. (1987-05-01). "The Asymptotic Optimality of the LPT Rule". Mathematics of Operations Research. 12 (2): 241–254. doi:10.1287/moor.12.2.241. ISSN 0364-765X.
  4. ^ Horowitz, Ellis; Sahni, Sartaj (1976-04-01). "Exact and Approximate Algorithms for Scheduling Nonidentical Processors". Journal of the ACM. 23 (2): 317–327. doi:10.1145/321941.321951. ISSN 0004-5411.
  5. ^ Hochbaum, Dorit S.; Shmoys, David B. (1988-06-01). "A Polynomial Approximation Scheme for Scheduling on Uniform Processors: Using the Dual Approximation Approach". SIAM Journal on Computing. 17 (3): 539–551. doi:10.1137/0217033. ISSN 0097-5397.
  6. ^ Epstein, Leah; Sgall, Jiri (2004-05-01). "Approximation Schemes for Scheduling on Uniformly Related and Identical Parallel Machines". Algorithmica. 39 (1): 43–57. doi:10.1007/s00453-003-1077-7. ISSN 1432-0541.
  7. ^ Kwok, Yu-Kwong; Ahmad, Ishfaq (1999-12-01). "Static scheduling algorithms for allocating directed task graphs to multiprocessors". ACM Computing Surveys. 31 (4): 406–471. CiteSeerX 10.1.1.322.2295. doi:10.1145/344588.344618. ISSN 0360-0300.