
# Pinedo_problemas_deterministicos

## 5 Parallel Machine Models (Deterministic)

### 5.1 The Makespan without Preemptions


the same amount of processing to do. Now

$$C_{\max}(LPT) \le p_n + \frac{\sum_{j=1}^{n-1} p_j}{m} = p_n\Big(1-\frac{1}{m}\Big) + \frac{\sum_{j=1}^{n} p_j}{m}.$$

Since $C_{\max}(OPT) \ge \sum_{j=1}^{n} p_j / m$, the following series of inequalities holds for the counterexample:

$$\frac{4}{3} - \frac{1}{3m} < \frac{C_{\max}(LPT)}{C_{\max}(OPT)} \le \frac{p_n(1-1/m) + \sum_{j=1}^{n} p_j/m}{C_{\max}(OPT)} = \frac{p_n(1-1/m)}{C_{\max}(OPT)} + \frac{\sum_{j=1}^{n} p_j/m}{C_{\max}(OPT)} \le \frac{p_n(1-1/m)}{C_{\max}(OPT)} + 1.$$

Thus

$$\frac{4}{3} - \frac{1}{3m} < \frac{p_n(1-1/m)}{C_{\max}(OPT)} + 1,$$

and therefore $C_{\max}(OPT) < 3p_n$. Note that this last inequality is strict. Since $p_n$ is the smallest processing time, it implies that in the optimal schedule of the smallest counterexample each machine processes at most two jobs. It can be shown that if an optimal schedule has at most two jobs on each machine, then the LPT schedule is optimal and the ratio of the two makespans is equal to one (see Exercise 5.11.b). This contradiction completes the proof of the theorem. ⊔⊓

**Example 5.1.2 (A Worst Case Example of LPT)** Consider 4 parallel machines and 9 jobs, whose processing times are given in the table below:

| jobs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|------|---|---|---|---|---|---|---|---|---|
| $p_j$ | 7 | 7 | 6 | 6 | 5 | 5 | 4 | 4 | 4 |

Scheduling the jobs according to LPT results in a makespan of 15. It can be shown easily that for this set of jobs a schedule can be found with a makespan of 12 (see Figure 5.1). This particular instance is thus a worst case when there are 4 machines in parallel. ||

What would the worst case be if, instead of LPT, an arbitrary priority rule were used? Consider the case where at time $t = 0$ the jobs are put in an arbitrary list. Whenever a machine is freed, the job that ranks highest on the list among the remaining jobs is put on the machine. It can be shown that the worst case of this arbitrary list rule satisfies the inequality

$$\frac{C_{\max}(LIST)}{C_{\max}(OPT)} \le 2 - \frac{1}{m}.$$

(This result can be shown via arguments similar to those in the proof of Theorem 5.6.1 in the section on online scheduling.)
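The LPT rule itself is straightforward to implement: sort the jobs in decreasing order of processing time and always assign the next job to the machine that is freed first, i.e., the machine with the smallest current load. A minimal sketch (the function name is ours), run on the instance of Example 5.1.2:

```python
import heapq

def lpt_makespan(processing_times, m):
    """Makespan of the LPT (Longest Processing Time first) schedule
    on m identical machines in parallel."""
    loads = [0] * m          # current load of each machine, kept as a min-heap
    heapq.heapify(loads)
    for p in sorted(processing_times, reverse=True):
        # The machine freed first is the one with the smallest load.
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

# The instance of Example 5.1.2: 4 machines, 9 jobs.
pj = [7, 7, 6, 6, 5, 5, 4, 4, 4]
print(lpt_makespan(pj, 4))  # 15, while the optimal makespan is 12
```

Note that assigning each job to the least loaded machine is exactly the "first machine freed" dispatching described above, since all earlier jobs on a machine are processed back to back.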
However, there are also several other heuristics for the Pm || Cmax problem that are more sophisticated than LPT and that have tighter worst-case bounds. These heuristics are beyond the scope of this book.

*Fig. 5.1 Worst case example of LPT: Gantt charts with Cmax(LPT) = 15 and Cmax(OPT) = 12 (charts not reproduced).*

Consider now the same problem with the jobs subject to precedence constraints, i.e., Pm | prec | Cmax. From a complexity point of view this problem has to be at least as hard as the problem without precedence constraints. To obtain some insight into the effects of precedence constraints, a number of special cases have to be considered. The special case with a single machine is clearly trivial: it suffices to keep the machine continuously busy, and the makespan is then equal to the sum of the processing times.

Consider the special case where there is an unlimited number of machines in parallel, or where the number of machines is at least as large as the number of jobs, i.e., m ≥ n. This problem may be denoted by P∞ | prec | Cmax. It is a classical problem in the field of project planning, and its study has led to the development of the well-known Critical Path Method (CPM) and Project Evaluation and Review Technique (PERT). The optimal schedule and the minimum makespan are determined through a very simple algorithm.

**Algorithm 5.1.3 (Minimizing the Makespan of a Project)** Schedule the jobs one at a time, starting at time zero. Whenever a job has been completed, start all jobs whose predecessors have all been completed (that is, all schedulable jobs). ||

That this algorithm leads to an optimal schedule can be shown easily; the proof is left as an exercise. It turns out that in P∞ | prec | Cmax the start of the processing of some jobs can usually be postponed without increasing the makespan. These jobs are referred to as the slack jobs.
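Algorithm 5.1.3 amounts to a single forward pass over the precedence graph: the earliest completion time of a job is its processing time plus the latest earliest completion time among its predecessors. A minimal sketch (the instance and the `pred` encoding, mapping each job to its immediate predecessors, are hypothetical):

```python
def project_makespan(p, pred):
    """Minimum makespan of P-infinity | prec | Cmax: every job starts
    as soon as all of its predecessors have completed (Algorithm 5.1.3)."""
    C = {}  # earliest completion time of each job

    def completion(j):
        if j not in C:
            C[j] = p[j] + max((completion(i) for i in pred.get(j, ())), default=0)
        return C[j]

    return max(completion(j) for j in p)

# Hypothetical instance: chains 1 -> 2 -> 4 and 3 -> 4.
p = {1: 2, 2: 3, 3: 4, 4: 5}
pred = {2: [1], 4: [2, 3]}
print(project_makespan(p, pred))  # 10, the length of the chain 1 -> 2 -> 4
```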
The jobs that cannot be postponed are referred to as the critical jobs, and the set of critical jobs is referred to as the critical path(s). In order to determine the critical jobs, perform the procedure of Algorithm 5.1.3 backwards: start at the makespan, which is now known, and work towards time zero while adhering to the precedence relationships. Doing this, all jobs are completed at their latest possible completion times and therefore started at their latest possible starting times as well. Those jobs whose earliest possible starting times are equal to their latest possible starting times are the critical jobs.

*Fig. 5.2 Precedence constraints graph with critical path in Example 5.1.4 (graph not reproduced).*

**Example 5.1.4 (Minimizing the Makespan of a Project)** Consider nine jobs with the following processing times:

| jobs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|------|---|---|---|---|---|---|---|---|---|
| $p_j$ | 4 | 9 | 3 | 3 | 6 | 8 | 8 | 12 | 6 |

The precedence constraints are depicted in Figure 5.2. The earliest completion time $C'_j$ of job j can be computed easily:

| jobs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|------|---|---|---|---|---|---|---|---|---|
| $C'_j$ | 4 | 13 | 3 | 6 | 12 | 21 | 32 | 24 | 30 |

This implies that the makespan is 32. Assuming that the makespan is 32, the latest possible completion times $C''_j$ can be computed:

| jobs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|------|---|---|---|---|---|---|---|---|---|
| $C''_j$ | 7 | 16 | 3 | 6 | 12 | 24 | 32 | 24 | 32 |

Those jobs whose earliest possible completion times are equal to their latest possible completion times are said to be on the critical path. So the critical path is the chain 3 → 4 → 5 → 8 → 7, and in this case it happens to be unique. The jobs that are not on the critical path are said to be slack. The amount of slack time for job j is the difference between its latest possible completion time and its earliest possible completion time.

*Fig. 5.3 Intree and outtree (level diagrams, levels 1 through 5, not reproduced).*
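The forward and backward passes of Example 5.1.4 can be sketched as follows. Since Figure 5.2 is not reproduced here, the edge set below is our own reconstruction, chosen to be consistent with the two tables of the example; treat it as an assumption:

```python
def forward_backward(p, pred):
    """CPM forward and backward passes for P-infinity | prec | Cmax.

    p    : dict mapping job -> processing time
    pred : dict mapping job -> list of immediate predecessors
    Returns (earliest completions C', latest completions C'', critical jobs).
    """
    # Forward pass (Algorithm 5.1.3): earliest completion times C'.
    C1 = {}
    def earliest(j):
        if j not in C1:
            C1[j] = p[j] + max((earliest(i) for i in pred.get(j, ())), default=0)
        return C1[j]
    for j in p:
        earliest(j)
    cmax = max(C1.values())

    # Backward pass: latest completion times C'', working back from Cmax.
    succ = {j: [] for j in p}
    for j, ps in pred.items():
        for i in ps:
            succ[i].append(j)
    C2 = {}
    def latest(j):
        if j not in C2:
            C2[j] = min((latest(k) - p[k] for k in succ[j]), default=cmax)
        return C2[j]
    for j in p:
        latest(j)

    critical = sorted(j for j in p if C1[j] == C2[j])
    return C1, C2, critical

# Data of Example 5.1.4; the edges are inferred from the two tables.
p = {1: 4, 2: 9, 3: 3, 4: 3, 5: 6, 6: 8, 7: 8, 8: 12, 9: 6}
pred = {2: [1], 4: [3], 5: [4], 6: [2], 7: [6, 8], 8: [5], 9: [8]}
C1, C2, crit = forward_backward(p, pred)
print(max(C1.values()), crit)  # 32 [3, 4, 5, 7, 8]
```

The reported critical jobs {3, 4, 5, 7, 8} match the chain 3 → 4 → 5 → 8 → 7, and the slack of job 1, for instance, is C''1 − C'1 = 7 − 4 = 3.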
|| In contrast to 1 | prec | Cmax and P∞ | prec | Cmax, the problem Pm | prec | Cmax is strongly NP-hard when 2 ≤ m < n. Even the special case with all processing times equal to 1, i.e., Pm | pj = 1, prec | Cmax, is not easy. However, constraining the problem further and assuming that the precedence graph takes the form of a tree (either an intree or an outtree) results in a problem, Pm | pj = 1, tree | Cmax, that is easily solvable. This particular problem leads to a well-known scheduling rule, the Critical Path (CP) rule, which gives the highest priority to the job at the head of the longest string of jobs in the precedence graph (ties may be broken arbitrarily).

Before presenting the results concerning Pm | pj = 1, tree | Cmax it is necessary to introduce some notation. Consider an intree. The single job with no successors is called the root and is located at level 1. The jobs immediately preceding the root are at level 2; the jobs immediately preceding the jobs at level 2 are at level 3, and so on. In an outtree all jobs with no successors are located at level 1. Jobs that have only jobs at level 1 as their immediate successors are said to be at level 2; jobs that have only jobs at levels 1 and 2 as their immediate successors are at level 3, and so on (see Figure 5.3). From this definition it follows that the CP rule is equivalent to the Highest Level first rule. The number of jobs at level l is denoted by N(l). Jobs with no predecessors are referred to as starting jobs; the nodes in the graph corresponding to these jobs are often referred to in graph theory terminology as leaves. The highest level in the graph is denoted by $l_{\max}$. Let

$$H(l_{\max} + 1 - r) = \sum_{k=1}^{r} N(l_{\max} + 1 - k).$$

Clearly, $H(l_{\max} + 1 - r)$ denotes the total number of nodes at level $l_{\max} + 1 - r$ or higher, that is, at the highest r levels.

**Theorem 5.1.5.**
*The CP rule is optimal for Pm | pj = 1, intree | Cmax and for Pm | pj = 1, outtree | Cmax.*

**Proof.** The proof for intrees is slightly harder than the proof for outtrees. In what follows, only the proof for intrees is given (the proof for outtrees is left as an exercise). In the proof for intrees a distinction has to be made between two cases.

Case I.
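The Highest Level first dispatching that the CP rule prescribes is easy to simulate: in each unit-time slot, schedule up to m of the currently available jobs, preferring those at the highest level. A minimal sketch for unit-length jobs on an intree; the instance and the `parent` encoding (each job mapped to its unique immediate successor, `None` for the root) are our own, not from the text:

```python
def cp_schedule(parent, m):
    """Makespan of the CP (Highest Level first) schedule for
    Pm | pj = 1, intree | Cmax, with the intree given as
    parent[j] = unique immediate successor of job j (None for the root)."""
    jobs = set(parent)

    # Level of each job: the root is at level 1, its immediate
    # predecessors at level 2, and so on.
    level = {}
    def lvl(j):
        if j not in level:
            level[j] = 1 if parent[j] is None else lvl(parent[j]) + 1
        return level[j]
    for j in jobs:
        lvl(j)

    # Number of not-yet-completed immediate predecessors of each job.
    npred = {j: 0 for j in jobs}
    for j, s in parent.items():
        if s is not None:
            npred[s] += 1

    done, t = set(), 0
    while len(done) < len(jobs):
        ready = [j for j in jobs if j not in done and npred[j] == 0]
        # Highest level first; ties broken arbitrarily (here by job id).
        for j in sorted(ready, key=lambda j: (-level[j], j))[:m]:
            done.add(j)
            if parent[j] is not None:
                npred[parent[j]] -= 1
        t += 1  # one unit-time slot elapsed
    return t

# Hypothetical intree: leaves 4-7 at level 3, jobs 2-3 at level 2, root 1.
parent = {1: None, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}
print(cp_schedule(parent, 2))  # 4 unit-time slots on 2 machines
```

On this instance the CP schedule attains the trivial lower bound max(⌈7/2⌉, lmax) = 4, in line with the theorem.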