# Pinedo_problemas_deterministicos

Assume the tree satisfies the following condition:

$$\max_r \left( \frac{\sum_{k=1}^{r} N(l_{max} + 1 - k)}{r} \right) \le m.$$

In this case, in every time interval, all the jobs available for processing can be processed and at most $l_{max}$ time units are needed to complete all the jobs under the CP rule. But $l_{max}$ is clearly a lower bound for $C_{max}$. So the CP rule results in an optimal schedule.
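For a concrete tree this condition can be checked directly from the level counts $N(l)$. A minimal sketch (the function name and the dictionary input format are illustrative, not from the text):

```python
def case_one_holds(N, l_max, m):
    """Check max over r of (sum_{k=1}^{r} N(l_max + 1 - k)) / r <= m,
    where N[l] is the number of jobs at level l in the tree."""
    total, worst = 0, 0.0
    for r in range(1, l_max + 1):
        total += N[l_max + 1 - r]      # adds N(l_max + 1 - k) for k = r
        worst = max(worst, total / r)
    return worst <= m

# A tree with 1 job at the top level 3, 2 jobs at level 2, 4 at level 1:
N = {3: 1, 2: 2, 1: 4}
print(case_one_holds(N, l_max=3, m=3))  # True: the ratios are 1, 3/2, 7/3
print(case_one_holds(N, l_max=3, m=1))  # False
```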
Case II. Find for the tree the (smallest) integer $c \ge 1$ such that

$$\max_r \left( \frac{\sum_{k=1}^{r} N(l_{max} + 1 - k)}{r + c} \right) \le m < \max_r \left( \frac{\sum_{k=1}^{r} N(l_{max} + 1 - k)}{r + c - 1} \right).$$
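Since the left-hand maximum is nonincreasing in $c$, the smallest such $c$ can be found by a simple search over $c = 1, 2, \ldots$; a sketch under the same assumed input format as above:

```python
def smallest_c(N, l_max, m):
    """Smallest integer c >= 1 with
    max_r (sum_{k=1}^{r} N(l_max + 1 - k)) / (r + c) <= m."""
    def max_ratio(c):
        total, worst = 0, 0.0
        for r in range(1, l_max + 1):
            total += N[l_max + 1 - r]
            worst = max(worst, total / (r + c))
        return worst

    c = 1
    while max_ratio(c) > m:
        c += 1
    return c

# 8 jobs at the top level 3, then 1 and 1, on two machines:
N = {3: 8, 2: 1, 1: 1}
c = smallest_c(N, l_max=3, m=2)
print(c, "-> lower bound l_max + c =", 3 + c)  # 3 -> lower bound 6
```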
The value of $c$ basically represents the smallest amount of time beyond time $r$ needed to complete all jobs at the $r$ highest levels. Let

$$\max_r \left( \frac{\sum_{k=1}^{r} N(l_{max} + 1 - k)}{r + c - 1} \right) = \frac{\sum_{k=1}^{r^*} N(l_{max} + 1 - k)}{r^* + c - 1} > m.$$
The number of jobs completed at time $r^* + c - 1$ is at most $m(r^* + c - 1)$. The number of jobs at levels higher than or equal to $l_{max} + 1 - r^*$ is $\sum_{k=1}^{r^*} N(l_{max} + 1 - k)$. As

$$\sum_{k=1}^{r^*} N(l_{max} + 1 - k) > (r^* + c - 1)\,m,$$

there is at least one job at a level equal to or higher than $l_{max} + 1 - r^*$ that is not processed by time $r^* + c - 1$. Starting with this job, there are at least $l_{max} + 1 - r^*$ time units needed to complete all the jobs. A lower bound for the makespan under any type of scheduling rule is therefore

$$C_{max} \ge (r^* + c - 1) + (l_{max} + 1 - r^*) = l_{max} + c.$$
To complete the proof it suffices to show that the CP rule results in a makespan that is equal to this lower bound. This part of the proof is left as an exercise (see Exercise 5.14). □
5.1 The Makespan without Preemptions
Fig. 5.4 Worst case example of the CP rule for two machines (Example 5.1.6)
The question arises: how well does the CP rule perform for arbitrary precedence constraints when all jobs have equal processing times? It has been shown that for two machines in parallel

$$\frac{C_{max}(CP)}{C_{max}(OPT)} \le \frac{4}{3}.$$

When there are more than two machines in parallel, the worst case ratio is larger. That the worst case bound for two machines can be attained is shown in the following example.
Example 5.1.6 (A Worst-Case Example of CP)
Consider 6 jobs with unit processing times and two machines. The precedence
relationships are depicted in Figure 5.4. Jobs 4, 5 and 6 are at level 1, while
jobs 1, 2 and 3 are at level 2. As under the CP rule ties may be broken
arbitrarily, a CP schedule may prescribe at time zero to start with jobs 1
and 2. At their completion only job 3 can be started. At time 2 jobs 4 and 5
are started. Job 6 goes last and is completed by time 4. Of course, an optimal
schedule can be obtained by starting out at time zero with jobs 2 and 3. The
makespan then equals 3. ||
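The two tie-breaking choices in Example 5.1.6 can be replayed with a small list-scheduling simulation. The arcs used below are one precedence graph consistent with the text's description of Fig. 5.4, not necessarily the exact figure: 1 → 4, 2 → 5, and job 3 preceding jobs 4, 5 and 6.

```python
# Assumed reconstruction of the Example 5.1.6 precedence constraints.
preds = {1: set(), 2: set(), 3: set(),
         4: {1, 3}, 5: {2, 3}, 6: {3}}
level = {1: 2, 2: 2, 3: 2, 4: 1, 5: 1, 6: 1}

def list_schedule(priority, m=2):
    """Unit-time list scheduling on m identical machines: each time unit,
    start the m highest-priority available jobs. Returns the makespan."""
    done, t = set(), 0
    while len(done) < len(preds):
        avail = sorted((j for j in preds
                        if j not in done and preds[j] <= done), key=priority)
        done |= set(avail[:m])
        t += 1
    return t

# CP with ties broken toward the lowest index starts jobs 1 and 2: makespan 4.
print(list_schedule(lambda j: (-level[j], j)))    # 4
# Breaking the level-2 tie toward jobs 2 and 3 instead is optimal here.
print(list_schedule(lambda j: (-level[j], -j)))   # 3
```

This illustrates that the 4/3 bound is attained only under an unlucky tie-break; the rule itself does not distinguish the two choices.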
Example 5.1.6 shows that processing the jobs with the largest number of successors first may result in a better schedule than processing the jobs at the highest level first. A priority rule often used when jobs are subject to arbitrary precedence constraints is indeed the so-called Largest Number of Successors first (LNS) rule. Under this rule the job with the largest total number of successors (not just the immediate successors) in the precedence constraints graph has the highest priority. Note that in the case of intrees the CP rule and the LNS rule are equivalent; the LNS rule therefore results in an optimal schedule in the case of intrees. It can be shown fairly easily that the LNS rule is also optimal for Pm | pj = 1, outtree | Cmax. The following example shows that the LNS rule may not yield an optimal schedule with arbitrary precedence constraints.
5 Parallel Machine Models (Deterministic)
Fig. 5.5 The LNS rule is not necessarily optimal with arbitrary precedence constraints (Example 5.1.7)
Example 5.1.7 (Application of the LNS rule)
Consider 6 jobs with unit processing times and two machines. The precedence
constraints are depicted in Figure 5.5. The LNS rule may start at time 0 with
jobs 4 and 6. At time 1 jobs 1 and 5 start. Job 2 starts at time 2 and job
3 starts at time 3. The resulting makespan is 4. It is easy to see that the
optimal makespan is 3 and that the CP rule actually achieves the optimal
makespan. ||
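Example 5.1.7 can also be replayed in code. The arcs below are one precedence graph consistent with every statement in the text (the exact arcs of Fig. 5.5 are an assumption): 1 → 2 → 3, 4 → 2, 4 → 5, 6 → 3, 6 → 5.

```python
# Assumed reconstruction of the Example 5.1.7 precedence constraints.
preds = {1: set(), 2: {1, 4}, 3: {2, 6},
         4: set(), 5: {4, 6}, 6: set()}

def successors(j):
    """Total (transitive) successor count of job j."""
    succ = {k: {i for i in preds if k in preds[i]} for k in preds}
    seen, stack = set(), list(succ[j])
    while stack:
        k = stack.pop()
        if k not in seen:
            seen.add(k)
            stack.extend(succ[k])
    return len(seen)

def level(j):
    """Number of jobs on the longest chain starting at j (including j)."""
    nxt = [i for i in preds if j in preds[i]]
    return 1 + max((level(i) for i in nxt), default=0)

def list_schedule(priority, m=2):
    """Unit-time list scheduling: start the m best available jobs each step."""
    done, t = set(), 0
    while len(done) < len(preds):
        avail = sorted((j for j in preds
                        if j not in done and preds[j] <= done), key=priority)
        done |= set(avail[:m])
        t += 1
    return t

# LNS with ties broken toward the highest index starts jobs 4 and 6: makespan 4.
print(list_schedule(lambda j: (-successors(j), -j)))  # 4
# The CP rule starts jobs 1 and 4 (the only level-3 jobs): makespan 3.
print(list_schedule(lambda j: (-level(j), j)))        # 3
```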
Both the CP rule and the LNS rule have more generalized versions that can be applied to problems with arbitrary job processing times. Instead of counting the number of jobs (as in the case with unit processing times), these generalized versions prioritize based on the total amount of processing remaining to be done on the jobs in question. The CP rule then gives the highest priority to the job that heads the string of jobs with the largest total amount of processing (the processing time of the job itself being included in this total). The generalization of the LNS rule gives the highest priority to the job that precedes the largest total amount of processing; again, the processing time of the job itself is included in the total. The LNS name is clearly not appropriate for this generalization with arbitrary processing times, as it refers to a number of jobs rather than to a total amount of processing.
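Both generalized priorities can be computed by a traversal of the precedence graph; a sketch with assumed names and input format (the CP priority is $p_j$ plus the largest total processing on any chain starting at $j$, the generalized LNS priority is $p_j$ plus the total processing of all its successors):

```python
def cp_priority(j, succ, p, memo=None):
    """p_j plus the largest total processing on any chain starting at j."""
    if memo is None:
        memo = {}
    if j not in memo:
        memo[j] = p[j] + max((cp_priority(k, succ, p, memo)
                              for k in succ[j]), default=0)
    return memo[j]

def lns_priority(j, succ, p):
    """p_j plus the total processing of all successors, immediate or not."""
    seen, stack = set(), list(succ[j])
    while stack:
        k = stack.pop()
        if k not in seen:
            seen.add(k)
            stack.extend(succ[k])
    return p[j] + sum(p[k] for k in seen)

# Arcs 1 -> 2 -> 3 and 1 -> 4, processing times 2, 1, 3, 5:
succ = {1: {2, 4}, 2: {3}, 3: set(), 4: set()}
p = {1: 2, 2: 1, 3: 3, 4: 5}
print(cp_priority(1, succ, p))   # 7: the chain 1 -> 4 carries 2 + 5
print(lns_priority(1, succ, p))  # 11: 2 + 1 + 3 + 5
```

With unit processing times these reduce to the level of the job and to one plus its number of successors, recovering the original rules.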
Another generalization of the Pm || Cmax problem that is of practical interest arises when job j is only allowed to be processed on a subset Mj of the m parallel machines. Consider Pm | pj = 1, Mj | Cmax and assume that the sets Mj are nested, that is, one and only one of the following four conditions holds for jobs j and k:

(i) Mj is equal to Mk (Mj = Mk);
(ii) Mj is a subset of Mk (Mj ⊂ Mk);
(iii) Mk is a subset of Mj (Mk ⊂ Mj);
(iv) Mj and Mk do not overlap (Mj ∩ Mk = ∅).
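Nestedness is a pairwise property and can be verified directly; a small sketch (the input format, a dict mapping each job to its machine set, is an assumption):

```python
def is_nested(M):
    """True iff every pair of machine sets is equal, related by
    inclusion, or disjoint -- i.e. one of the four conditions above
    holds for every pair of jobs."""
    sets = [frozenset(s) for s in M.values()]
    return all(a <= b or b <= a or not (a & b)
               for a in sets for b in sets)

print(is_nested({1: {1}, 2: {1, 2}, 3: {3}, 4: {1, 2}}))  # True
print(is_nested({1: {1, 2}, 2: {2, 3}}))  # False: the sets overlap
```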
Under these conditions a well-known dispatching rule, the Least Flexible Job first (LFJ) rule, plays an important role. The LFJ rule selects, every time a machine is freed, among the available jobs the job that can be processed on the smallest number of machines, i.e., the least flexible job. Ties may be broken arbitrarily. This rule is rather crude, as it does not specify, for example, which machine should be considered first when several machines become available at the same time.
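For unit jobs without precedence constraints, the LFJ rule can be simulated one time unit at a time; a sketch (the choice of which eligible machine to use is arbitrary here, since, as noted above, the rule leaves it open):

```python
def lfj_makespan(M, m):
    """LFJ for P | p_j = 1, M_j | Cmax: each time unit, scan the
    remaining jobs in order of increasing |M_j| and start each one
    on a free eligible machine if there is one. Returns the makespan."""
    remaining = sorted(M, key=lambda j: len(M[j]))  # least flexible first
    t = 0
    while remaining:
        free, waiting = set(range(1, m + 1)), []
        for j in remaining:
            eligible = M[j] & free
            if eligible:
                free.remove(min(eligible))  # arbitrary machine choice
            else:
                waiting.append(j)
        remaining = waiting
        t += 1
    return t

# Nested sets: jobs 1 and 2 fit only machine 1, jobs 3 and 4 fit both.
print(lfj_makespan({1: {1}, 2: {1}, 3: {1, 2}, 4: {1, 2}}, m=2))  # 2
```

In the example, LFJ places jobs 1 and 3 in the first time unit and jobs 2 and 4 in the second, which is clearly optimal since jobs 1 and 2 both require machine 1.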
Theorem 5.1.8. The LFJ rule is optimal for Pm | pj = 1, Mj | Cmax when the Mj sets are nested.
Proof. By contradiction. Suppose the LFJ rule does not yield an optimal schedule. Consider a schedule that is optimal, and that differs from a schedule that could have been generated via the LFJ rule. Without loss of generality, the jobs on each machine can be ordered in increasing order of |Mj|. Now consider the earliest time slot that is filled by a job that could not have been placed there by the LFJ rule. Call this job job j. There exists some job, say job j*, that is scheduled to start later, and that could have been scheduled by LFJ in the position taken by job j. Note that job j* is less flexible than job j (since Mj ⊃ Mj*). Job j* cannot start before job j, since j is the earliest non-LFJ job. It cannot start at the same time as job j, for in that case job j is in a position where LFJ could have placed it. These two jobs, j and j*, can be exchanged without altering Cmax, since pj = 1 for all jobs. Note that the slot previously filled by j is now filled by j*, which is an LFJ position. By repeating this process of swapping the earliest non-LFJ job, all jobs can be moved into positions where they could have been placed using LFJ, without increasing Cmax. So it is possible to construct an LFJ schedule from an optimal schedule. This establishes