than m. There are
at least m + 1 jobs available for processing and each one of these jobs is at the
head of a subtree that includes a string of a given length.
The proof that applying CP from t1 is optimal is by contradiction. Suppose
that after time t1 another rule is optimal. This rule must, at least once, prescribe
an action that is not according to CP. Consider the last point in time, say t2, at which this rule prescribes an action not according to CP. So at t2 there are m jobs, which are not heading the m longest strings, assigned to the m machines; from t2 + 1 the CP rule is applied. Call the schedule from t2 onwards CP′. It suffices to show that applying CP from t2 onwards results in a schedule that is at least as good.
Consider under CP′ the longest string headed by a job that is not assigned at t2, say string 1, and the shortest string headed by a job that is assigned at t2, say string 2. The job at the head of string 1 has to start its processing under CP′ at time t2 + 1. Let C′1 and C′2 denote the completion times of the last jobs of strings 1 and 2, respectively, under CP′. Under CP′, C′1 ≥ C′2. It is clear that under CP′ all m machines have to be busy at least up to C′2 − 1. If C′1 ≥ C′2 + 1 and there are machines idle before C′1 − 1, the application of CP at t2 results in less idle time and a smaller total completion time. Under CP the last job of string 1 is completed one time unit earlier, yielding one more completed job at or before C′1 − 1. In other cases the total completion time under CP is equal to the total completion time under CP′. This implies that CP is optimal from t2 on. As there is then no last time for a deviation from CP, the CP rule is optimal. □
In contrast to the makespan objective the CP rule is, somewhat surprisingly,
not necessarily optimal for intrees. Counterexamples can be found easily (see
Exercise 5.24).
Consider the problem Pm | pj = 1, Mj | ∑ Cj. Again, if the Mj sets are nested, the Least Flexible Job first rule can be shown to be optimal.

Theorem 5.3.4. The LFJ rule is optimal for Pm | pj = 1, Mj | ∑ Cj when the Mj sets are nested.
Proof. The proof is similar to the proof of Theorem 5.1.8. □
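As an illustration, the following is a minimal sketch of the LFJ rule for unit-length jobs with machine-eligibility sets. The function name, the 0-based machine and job indices, and the small example are our own choices, not the book's notation.

    def lfj_unit_jobs(M_sets, m):
        """Apply the Least Flexible Job first rule to Pm | p_j = 1, M_j | sum C_j.

        M_sets[j] = set of machines (0, ..., m-1) allowed to process job j.
        Returns a dict mapping each job to its completion time under LFJ.
        (LFJ is guaranteed to be optimal only when the M_j sets are nested.)
        """
        unscheduled = set(range(len(M_sets)))
        completion = {}
        t = 0
        while unscheduled:
            t += 1                                   # next unit-length time slot
            for machine in range(m):
                eligible = [j for j in unscheduled if machine in M_sets[j]]
                if eligible:
                    # least flexible eligible job: the one with the smallest M_j
                    j = min(eligible, key=lambda job: len(M_sets[job]))
                    completion[j] = t
                    unscheduled.remove(j)
        return completion

    # Example with nested sets {0} ⊂ {0, 1} and {1} ⊂ {0, 1}:
    # lfj_unit_jobs([{0, 1}, {0}, {0, 1}, {1}], m=2)
    # gives total completion time 1 + 1 + 2 + 2 = 6.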
The previous model is a special case of Rm || ∑ Cj. As stated in Chapter 2,
the machines in the Rm environment are entirely unrelated. That is, machine
1 may be able to process job 1 in a short time and may need a long time for
job 2, while machine 2 may be able to process job 2 in a short time and may
need a long time for job 1. That the Qm environment is a special case is clear.
Identical machines in parallel with job j being restricted to machine set Mj is
also a special case; the processing time of job j on a machine that is not part
of Mj has to be considered very long, thereby making it impossible to process the job on such a machine.
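A small sketch of this reduction (the function name and the choice of the large constant are ours): a job restricted to the set Mj is simply given a prohibitively long processing time on every machine outside Mj.

    BIG = 10**6   # "very long" relative to all real processing times

    def restricted_to_unrelated(p, M_sets, m):
        """Encode identical machines with eligibility sets M_j as an Rm instance.

        p[j]      = processing time of job j (on any machine it is allowed on)
        M_sets[j] = set of machines (0, ..., m-1) allowed to process job j
        Returns the matrix p_ij of the equivalent unrelated-machines instance.
        """
        n = len(p)
        return [[p[j] if i in M_sets[j] else BIG for j in range(n)]
                for i in range(m)]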
The Rm || ∑ Cj problem can be formulated as an integer program with a special structure that makes it possible to solve the problem in polynomial time. Recall that if job j is processed on machine i and there are k − 1 jobs following job j on this machine i, then job j contributes k p_ij to the value of the objective function. Let x_ikj denote 0–1 integer variables, where x_ikj = 1
if job j is scheduled as the kth to last job on machine i and 0 otherwise. The
integer program is then formulated as follows:
minimize

    ∑_{i=1}^{m} ∑_{j=1}^{n} ∑_{k=1}^{n} k p_ij x_ikj

subject to

    ∑_{i=1}^{m} ∑_{k=1}^{n} x_ikj = 1,    j = 1, ..., n

    ∑_{j=1}^{n} x_ikj ≤ 1,    i = 1, ..., m,  k = 1, ..., n

    x_ikj ∈ {0, 1},    i = 1, ..., m,  k = 1, ..., n,  j = 1, ..., n
The constraints make sure that each job is scheduled exactly once and each
position on each machine is taken by at most one job. Note that the processing
times only appear in the objective function.
This is a so-called weighted bipartite matching problem with the n jobs on one side and nm positions on the other side (each machine can process at most n jobs). If job j is matched with (assigned to) position (i, k), there is a cost k p_ij. The objective is to determine the matching in this so-called bipartite graph with a minimum total cost. It is known from the theory of network flows that the integrality constraints on the x_ikj may be replaced by nonnegativity constraints without affecting the integrality of the optimal solution. This weighted bipartite matching problem then reduces to a regular linear program for which there exist polynomial time algorithms (see Appendix A).
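In practice the position-based formulation above can be handed directly to a weighted bipartite matching (assignment) solver rather than to a general linear programming code. The following is a minimal sketch, assuming NumPy and SciPy's linear_sum_assignment routine are available; the function name and the (machine, position) encoding are ours.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def rm_total_completion(p):
        """Solve Rm || sum Cj via the weighted bipartite matching formulation.

        p[i][j] = processing time of job j on machine i.
        Returns the minimum total completion time and, for every job,
        the position (i, k) it is matched with, i.e. the kth-to-last
        slot on machine i.
        """
        p = np.asarray(p, dtype=float)
        m, n = p.shape
        positions = [(i, k) for i in range(m) for k in range(1, n + 1)]
        # cost of putting job j in position (i, k) is k * p_ij
        cost = np.empty((n, len(positions)))
        for col, (i, k) in enumerate(positions):
            cost[:, col] = k * p[i, :]
        job_idx, pos_idx = linear_sum_assignment(cost)   # one position per job
        total = cost[job_idx, pos_idx].sum()
        assignment = {int(j): positions[c] for j, c in zip(job_idx, pos_idx)}
        return total, assignment

With positive processing times an optimal matching always fills the last positions of each machine first (a job matched to position k while position k − 1 is empty could be moved down at a lower cost), so the matching translates directly into a schedule.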
Note that the optimal schedule does not have to be a non-delay schedule.
Example 5.3.5 (Minimizing Total Completion Time with Unrelated
Machines)
Consider 2 machines and 3 jobs. The processing times of the three jobs on
the two machines are presented in the table below.
jobs    1   2   3
p_1j    4   5   3
p_2j    8   9   3
The bipartite graph associated with this problem is depicted in Figure 5.8.

[Fig. 5.8: Bipartite graph for Rm || ∑ Cj with three jobs; job nodes j on one side, position nodes (i, k) on the other, with an arc of cost k p_ij from job j to position (i, k).]

According to the optimal schedule, machine 1 processes job 1 first and job 2 second. Machine 2 processes job 3. This solution corresponds to
x_121 = x_112 = x_213 = 1 and all other x_ikj equal to zero. This optimal schedule is not non-delay (machine 2 is freed at time 3 and the waiting job is not put on the machine). ||
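As a check, the sketch given after the integer programming formulation reproduces Example 5.3.5 (machine and job indices are 0-based in the code):

    p = [[4, 5, 3],    # processing times on machine 1
         [8, 9, 3]]    # processing times on machine 2
    total, assignment = rm_total_completion(p)
    print(total)        # 16.0 = 4 + 9 + 3
    print(assignment)   # {0: (0, 2), 1: (0, 1), 2: (1, 1)}
    # i.e. job 1 second-to-last on machine 1, job 2 last on machine 1,
    # job 3 last on machine 2, matching x_121 = x_112 = x_213 = 1.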
5.4 The Total Completion Time with Preemptions
In Theorem 5.3.1 it is shown that the nonpreemptive SPT rule minimizes ∑ Cj
in a parallel machine environment. It turns out that the nonpreemptive SPT
rule is also optimal when preemptions are allowed. This result is a special case
of the more general result described below.
Consider m machines in parallel with different speeds, i.e., Qm | prmp | ∑ Cj. This problem leads to the so-called Shortest Remaining Processing Time
on the Fastest Machine (SRPT-FM) rule. According to this rule, at any point in time the job with the shortest remaining processing time is assigned to the fastest machine, the job with the second shortest remaining processing time to the second fastest machine, and so on. Clearly, this rule requires preemptions. Every time
the fastest machine completes a job, the job on the second fastest machine
moves to the fastest machine, the job on the third fastest machine moves to the
second fastest machine, and so on. So, at the first job completion there are m − 1 preemptions, at the second job completion there are m − 1 preemptions, and
so on until the number of remaining jobs is less than the number of machines.
From that point in time the number of preemptions is equal to the number of
remaining jobs.
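The completion times under SRPT-FM can also be computed by a simple event-by-event simulation. The following is a minimal sketch (our own; it uses the fact that under SRPT-FM the job on the fastest machine is always the next one to complete):

    def srpt_fm_total_completion(p, v):
        """Simulate the SRPT-FM rule and return the total completion time.

        p : processing requirements of the jobs
        v : machine speeds (a job with remaining work r on a machine of
            speed s needs r / s additional time units)
        """
        remaining = sorted(p)                  # shortest remaining work first
        speeds = sorted(v, reverse=True)       # fastest machine first
        # as in the text: make the number of machines equal to the number of
        # jobs, by dropping the slowest ones or padding with zero-speed machines
        speeds = (speeds + [0.0] * len(remaining))[:len(remaining)]

        t = 0.0
        total = 0.0
        while remaining:
            # the job with the least remaining work sits on the fastest machine
            # and is the next one to finish
            delta = remaining[0] / speeds[0]
            remaining = [r - s * delta for r, s in zip(remaining, speeds)]
            t += delta
            total += t                         # one job completes at time t
            remaining.pop(0)                   # the finished job leaves
            speeds.pop()                       # every other job moves up one machine
        return total

    # Example: srpt_fm_total_completion([4, 2], [2, 1]) returns 3.5
    # (the short job finishes at time 1 on the fast machine, the long job
    #  then moves to the fast machine and finishes at time 2.5).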
The following lemma is needed for the proof.
Lemma 5.4.1. There exists an optimal schedule under which Cj ≤ Ck when pj ≤ pk for all j and k.
Proof. The proof is left as an exercise. □
Without loss of generality it may be assumed that there are as many machines
as jobs. If the number of jobs is smaller than the number of machines then the
m − n slowest machines are disregarded. If the number of jobs is larger than the number of machines, then n − m machines are added with zero speeds.
Theorem 5.4.2. The SRPT-FM rule is optimal for Qm | prmp | ∑ Cj.
Proof. Under SRPT-FM, C_n ≤ C_{n-1} ≤ · · · ≤ C_1. It is clear that under SRPT-FM the following equations have to be satisfied:

    v_1 C_n = p_n
    v_2 C_n + v_1 (C_{n-1} − C_n) = p_{n-1}
    v_3 C_n + v_2 (C_{n-1} − C_n) + v_1 (C_{n-2} − C_{n-1}) = p_{n-2}
    ...
    v_n C_n + v_{n-1} (C_{n-1} − C_n) + · · · + v_1 (C_1 − C_2) = p_1

Adding, for each k = 1, ..., n, the first k of these equations yields the following set of equations:

    v_1 C_n = p_n
    v_2 C_n + v_1 C_{n-1} = p_n + p_{n-1}
    v_3 C_n + v_2 C_{n-1} + v_1 C_{n-2} = p_n + p_{n-1} + p_{n-2}
    ...
    v_n C_n + v_{n-1} C_{n-1} + · · · + v_1 C_1 = p_n + p_{n-1} + · · · + p_1
Suppose schedule S′ is optimal. From the previous lemma it follows that

    C′_n ≤ C′_{n-1} ≤ · · · ≤ C′_1.

The shortest job cannot be completed before p_n/v_1, i.e., C′_n ≥ p_n/v_1, or

    v_1 C′_n ≥ p_n.
Given that jobs n