# Job Scheduling Algorithm by Greedy Method

Posted on September 18, 2014

Scheduling

Two problems concerning the optimal way to schedule jobs on a single machine:

First Case:

• The problem is to minimize the average time that a job spends in the system.

Second Case:

• Jobs bring a profit if completed by their deadline.
• So the aim is to maximize the total profit.

In this section, greedy methods are used for both problems.

## Minimize time in the system (First Case)

• A single server with N customers to serve
• Customer i will take time ti, 1≤ i ≤ N to be served.
• Minimize average time that a customer spends in the system.

Since N is fixed, we may try to minimize the total time spent by all customers to reach our goal, that is

Minimize T = ∑ (time in system for customer i), the sum taken over i = 1, …, N,

where time in system for customer i = total waiting time + ti.

Greedy Approach: At each step, add to the end of the schedule the customer requiring the least service time among those who remain; that is, serve the least time-consuming customer first.

Example:

Assume that we have 4 jobs with t1 = 5, t2 = 3, t3 = 7, t4 = 2.

Greedy algorithm selects the sequence {4, 2, 1, 3}

This selection ends up with T = 2 + (2+3) + (2+3+5) + (2+3+5+7) = 34.

Can you find a better solution for this example? Theorem 1 below says you cannot.

Implementation of this method is easy: only sorting the customers into non-decreasing order of service time is required, which takes a time in O(n log n). So details are omitted.
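The rule above can be sketched in Python (a minimal illustration, not from the original article): sort by service time, then accumulate the time each customer spends in the system.

```python
def min_total_time(t):
    """Serve customers in non-decreasing order of service time and
    return (serving order, total time in system T)."""
    order = sorted(range(len(t)), key=lambda i: t[i])
    T = 0
    elapsed = 0
    for i in order:
        elapsed += t[i]   # customer i leaves the system at this moment
        T += elapsed      # time in system = waiting time + service time
    return [i + 1 for i in order], T  # report 1-based customer numbers

order, T = min_total_time([5, 3, 7, 2])
print(order, T)  # [4, 2, 1, 3] 34
```

This reproduces the example: the order {4, 2, 1, 3} with T = 34.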

Theorem 1: The greedy algorithm described above is optimal.

## Scheduling with deadlines (Second Case)

• N jobs to execute, each of which takes unit time.
• At any time t = 1, 2, 3… we can execute exactly one job.
• Job i earns us a profit gi > 0 if and only if it is executed no later than time di.

That is; we have N Jobs, Profits gi and Deadlines di for each job.

Feasibility of a set of jobs: A set of jobs is feasible if there exists at least one sequence that allows all the jobs to be executed no later than their deadlines.

Greedy Approach:

• Construct the schedule step by step.
• At each step, add the job with the highest profit gi among those not yet considered, provided that the chosen set of jobs remains feasible.

Example:

Assume that we have 4 jobs such that

| i  | 1  | 2  | 3  | 4  |
|----|----|----|----|----|
| gi | 50 | 10 | 15 | 30 |
| di | 2  | 1  | 2  | 1  |

• At step 1, our job set is {1}, since g1 > gi for all i ≠ 1.
• We may run job 1 before its deadline, so our set is feasible.
• At step 2, we select job 4 and try to schedule it; the candidate set is {1, 4}.
• By running the jobs in the order 4, 1 we keep the set feasible, so job 4 is added to our set.
• At step 3, we select job 3 and try to schedule it; the candidate set is {1, 3, 4}.
• But there is no way to keep the set {1, 3, 4} feasible, so job 3 is rejected.
• At step 4, we select job 2 and try to schedule it; the candidate set is {1, 2, 4}.
• But there is no way to keep the set {1, 2, 4} feasible, so job 2 is rejected.
• As a result, our solution set is {1, 4} and the execution order is 4, 1. □
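The walkthrough above can be sketched in Python (an illustrative version with names of my own choosing; it tests feasibility by sorting the candidate set by deadline, the criterion justified by Lemma 2 below):

```python
def feasible(jobs, d):
    """Unit-time jobs: a set is feasible iff, once sorted by deadline,
    the job in position t (t = 1, 2, ...) has deadline >= t."""
    for t, j in enumerate(sorted(jobs, key=lambda j: d[j]), start=1):
        if d[j] < t:
            return False
    return True

def greedy_schedule(g, d):
    """g, d: parallel lists of profits and deadlines (0-based internally).
    Returns the chosen jobs (1-based) in execution order."""
    chosen = []
    for j in sorted(range(len(g)), key=lambda j: -g[j]):  # decreasing profit
        if feasible(chosen + [j], d):                     # keep set feasible
            chosen.append(j)
    order = sorted(chosen, key=lambda j: d[j])            # run by deadline
    return [j + 1 for j in order]

print(greedy_schedule([50, 10, 15, 30], [2, 1, 2, 1]))  # [4, 1]
```

Re-sorting the candidate set at every step makes this quadratic; the rest of the section develops faster implementations.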

We need to show that the greedy algorithm always finds an optimal schedule, and we need to find an efficient way to implement it.

Lemma 2: Let J be a set of k jobs. Suppose, without loss of generality, that the jobs are numbered so that d1 ≤ d2 ≤ … ≤ dk. Then the set J is feasible if and only if the sequence 1, 2, 3, …, k is feasible.
Theorem 3: The greedy algorithm outlined earlier always finds an optimal schedule. (For the proof, have a look at page 208.)

For the first implementation, suppose without loss of generality that the jobs are numbered so that

g1 ≥ g2 ≥…≥ gn

n > 0 and di > 0, 1 ≤ i ≤ n

and additional storage is available at the front of the arrays d and j.

(These two cells are known as “sentinels”. By storing an appropriate value in them, we avoid range checks.)

```
function sequence(d[0..n]): k, array[1..k]
    array j[0..n]
    {Schedule is constructed step by step in the array j.
     The variable k keeps how many jobs are already in the schedule}
    d[0] = j[0] = 0                      {sentinels}
    k = j[1] = 1                         {job 1 is always chosen}
    {greedy loop}
    for i = 2 to n do                    {decreasing order of g}
        r = k
        while d[j[r]] > max(d[i], r) do r = r - 1
        if d[i] > r then
            for m = k step -1 to r + 1 do j[m+1] = j[m]
            j[r+1] = i
            k = k + 1
    return k, j[1..k]
```

The k jobs in the array j are kept in order of increasing deadline. When a job i is considered, the algorithm checks whether it can be inserted into j at the appropriate place without pushing some job already in j past its deadline. If so, i is accepted; otherwise it is rejected.
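Assuming the jobs are already renumbered in decreasing order of profit, the pseudocode above translates into Python roughly as follows (index 0 of both arrays serves as the sentinel):

```python
def sequence(d):
    """d[1..n]: deadlines of jobs sorted by decreasing profit; d[0] = 0
    is the sentinel. Returns the accepted jobs in order of execution."""
    n = len(d) - 1
    j = [0] * (n + 1)                  # j[0] = 0 is the sentinel
    k = 1
    j[1] = 1                           # job 1 is always chosen
    for i in range(2, n + 1):          # decreasing order of profit
        r = k
        while d[j[r]] > max(d[i], r):  # sentinel d[0] = 0 stops the scan
            r -= 1
        if d[i] > r:                   # a slot before its deadline exists
            for m in range(k, r, -1):  # shift later jobs one place right
                j[m + 1] = j[m]
            j[r + 1] = i
            k += 1
    return j[1:k + 1]

# Example jobs renumbered by decreasing profit:
# 1 (g=50, d=2), 2 (g=30, d=1), 3 (g=15, d=2), 4 (g=10, d=1)
print(sequence([0, 2, 1, 2, 1]))  # [2, 1], i.e. original jobs 4, 1
```

Note that the returned numbers refer to the jobs after renumbering by profit, so [2, 1] corresponds to running original job 4 first, then job 1.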

For this algorithm, sorting the jobs into order of decreasing profit takes a time in Θ(n log n). The worst case occurs when the procedure turns out also to sort the jobs by decreasing deadline, and when they can all fit into the schedule. Then, when job i is considered, the algorithm looks at each of the k = i − 1 jobs already in the schedule. That is, there are

∑ k trips round the while loop (summing over k = 1, …, n−1) and ∑ m trips round the inner for loop (summing over m = 1, …, n−1). So the algorithm takes a time in Ω(n²) in the worst case.

A more efficient algorithm is based on Lemma 6.6.4.

Lemma 6.6.4: A set of n jobs J is feasible if and only if we can construct a feasible sequence including all the jobs in J as follows: start with an empty schedule of length n. Then, for each job i ∈ J in turn, schedule i at time t, where t is the largest integer such that 1 ≤ t ≤ min(n, di) and the job to be executed at time t is not yet decided. (For the proof, have a look at page 211.)

That is;

• Consider each job in turn
• And add it to schedule being built as late as possible, but no later than its deadline.
• If a job cannot be scheduled by its deadline, then the set J is infeasible.
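These steps can be sketched directly with a linear scan for the latest free slot (a simple O(n²) illustration with names of my own choosing; the disjoint-set structure described next is what makes this idea fast):

```python
def schedule_latest(g, d):
    """g, d: parallel lists of profits and deadlines. Place each job,
    in decreasing order of profit, in the latest free slot t with
    1 <= t <= min(n, d[i]); reject it if no such slot exists."""
    n = len(g)
    slot = [0] * (n + 1)               # slot[t] = job run at time t, 0 = free
    for i in sorted(range(n), key=lambda i: -g[i]):
        t = min(n, d[i])
        while t > 0 and slot[t] != 0:  # scan left for a free slot
            t -= 1
        if t > 0:
            slot[t] = i + 1            # store 1-based job number
    return [job for job in slot[1:] if job != 0]

print(schedule_latest([50, 10, 15, 30], [2, 1, 2, 1]))  # [4, 1]
```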

The faster algorithm has the following steps:

1. Initialization:

Each position 0, 1, 2, …, n is a different set, and F({i}) = i, 0 ≤ i ≤ n.

2. Addition of a job with deadline d:

Find the set that contains min(n, d); let this be set K. If F(K) = 0, reject the job; otherwise:

• Assign the new job to position F(K).
• Find the set that contains F(K) − 1. Call this set L (it cannot be the same as K).
• Merge K and L. The value of F for this new set is the old value of F(L).

```
function sequence2(d[1..n]): k, array[1..k]
    array j, F[0..n]
    {initialization}
    for i = 0 to n do
        j[i] = 0
        F[i] = i
        initialize set {i}
    {greedy loop}
    for i = 1 to n do                  {decreasing order of g}
        k = find(min(n, d[i]))
        m = F[k]
        if m ≠ 0 then
            j[m] = i
            l = find(m - 1)
            F[k] = F[l]
            merge(k, l)                {the resulting set has label k or l}
    {it remains to compress the solution}
    k = 0
    for i = 1 to n do
        if j[i] > 0 then
            k = k + 1
            j[k] = j[i]
    return k, j[1..k]
```

If the jobs are already ordered by decreasing profit, an optimal sequence can be obtained just by calling the algorithm. Since there are at most 2n find operations and n merge operations to execute, the required time is in O(n α(2n, n)), where α is the slow-growing inverse of Ackermann's function. This is essentially linear. If, on the other hand, the jobs are given in arbitrary order, then we have to begin by sorting them, and obtaining the initial sequence takes a time in Θ(n log n).
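A Python sketch of sequence2, using a small union-find with path compression (helper names are my own; jobs are assumed pre-sorted by decreasing profit):

```python
def sequence2(d):
    """d: deadlines of jobs sorted by decreasing profit (1-based values).
    Disjoint sets over positions 0..n; F[root] is the latest free slot
    in that set. Returns the accepted jobs in execution order."""
    n = len(d)
    parent = list(range(n + 1))
    F = list(range(n + 1))
    j = [0] * (n + 1)

    def find(x):                       # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(1, n + 1):          # decreasing order of profit
        k = find(min(n, d[i - 1]))
        m = F[k]
        if m != 0:                     # a free slot exists before the deadline
            j[m] = i
            l = find(m - 1)
            parent[k] = l              # merge(k, l); F of l's root is kept
    return [job for job in j[1:] if job != 0]

# Example jobs renumbered by decreasing profit (deadlines 2, 1, 2, 1):
print(sequence2([2, 1, 2, 1]))  # [2, 1], i.e. original jobs 4, 1
```

As with the first implementation, the output refers to jobs renumbered by decreasing profit.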

#### Posted by Akash Kurup

Founder and C.E.O. of World4Engineers. Educationist and entrepreneur by passion; orator and blogger by hobby.

Website: http://world4engineers.com