The Curt Scheduling Algorithm


Abstract-

This paper presents a modified scheduling algorithm named Curt, based on the Linux 2.6.11 kernel, for real-time tasks. Researchers in the real-time systems community have designed and studied many advanced scheduling algorithms. However, most of these algorithms have never been implemented, because it is very difficult to support new scheduling algorithms on most operating systems. To address this problem, we enhance the scheduling mechanism in Linux to provide a flexible scheduling framework. We start from the 2.6.11 kernel because its O(1) scheduling algorithm is both efficient and fair. The main goal of the proposed architecture is to provide fast prototyping of scheduling algorithms while striking a balance between fairness and quick response. We retain the I/O waiting queue to reduce response time, remove the expired queue to improve the stability of real-time tasks, and use dynamic calculation methods to distribute timeslices and priorities.

Keywords:

real-time OS, scheduling, kernel, runqueue.

1. Introduction

Real-time computing is required in many application domains, such as avionics systems, traffic control systems, and automated factory systems. Each application has peculiar characteristics in terms of timing constraints and computational requirements (such as periodicity, criticality of the deadlines, response time, etc.). Some mission-critical real-time systems may suffer irreparable damage if a deadline is missed. It is the system builder's responsibility to choose an operating system that can support and schedule these jobs according to their timing specifications so that no deadline will be missed.

On the other hand, some soft real-time applications such as streaming audio/video and multiplayer games also have timing constraints and require performance guarantees from the underlying operating system. The output provided to users is optimized by meeting as many real-time constraints (e.g., deadlines) as possible. Unlike in hard real-time applications, however, occasional violations of these constraints do not render the execution useless or cause catastrophic consequences.

Advances in computer technology have also dramatically changed the design of many real-time controller devices that are used on a daily basis. Many traditional mechanical controllers have gradually been replaced by digital chips that are much cheaper and more powerful. In fact, we believe that the computing power of future embedded digital controllers will reach the level of today's large servers. As a result, future embedded devices must be able to handle complex application requirements, real-time or otherwise. How to design real-time operating systems (RTOSs) that support applications with mixed real-time and non-real-time performance requirements is therefore an important issue.

These three types of timing requirements (hard real-time, soft real-time, and non-real-time) are all important for many real-time systems. The goal of our research is to make Curt satisfy these different requirements.

2. Related Works

Many real-time scheduling algorithms have been proposed in the literature to deal with timing constraints, from the classical Rate Monotonic (RM) [1] and Earliest Deadline First (EDF) [1] algorithms to Least Slack First (LSF) and Highest Value First (HVF) [2]. These algorithms all derive task priorities from a single characteristic variable (deadline, idle time, or value), and they were evaluated under very restrictive assumptions (independent tasks, fixed execution times and periods, fully preemptive scheduling, and so on). However, basing priority on a single variable is far from sufficient [4-6]. For example, EDF gives the highest priority to the task with the earliest deadline, and LSF gives the highest priority to the task with the most slack. Although they perform well under normal conditions, when the system is overloaded it is hard to guarantee that all tasks finish before their deadlines; in this case both EDF and LSF degrade badly, and even the domino phenomenon [3] may occur. Some authors propose that modifying a conventional OS based on a monolithic kernel is a better choice. They [7] modify the kernel to schedule interrupt handlers, change the preemption model, and provide a limited form of device scheduling. In general, all of these works introduce a new scheduling algorithm into the kernel. However, since conventional kernels provide quantum-based resource allocation and aim to run tasks fairly, they are very difficult to modify for real-time systems, and only a few algorithms can be implemented on them easily. Proportional Share algorithms [8], being based on per-quantum CPU allocation, are expressly designed to be implemented on a conventional kernel. Another interesting technology, Resource Kernels (RK), has been growing recently. A Resource Kernel is a resource-centric kernel that complements the OS kernel by providing support for QoS and enabling the use of reservation techniques in traditional OSs.

In any case, scheduling flexibility is becoming a hot topic in OS research. With the development of computer technology, a real-time OS should also be suitable for ordinary tasks. There are good reasons to choose Linux 2.6.11 as the base for our modification: 1) Linux is open source, so it is easy to understand and rewrite; 2) it is supported by a wide range of hardware; 3) it is designed in a modular way, so it can easily be ported to an embedded system by trimming its modules; and 4) the 2.6.11 kernel uses the O(1) scheduling algorithm, which tries to reduce the response time of real-time tasks. In summary, Linux 2.6.11 matches our requirements well.

3. Analysis of the O(1) Algorithm

During the 2.5 kernel development series, the Linux kernel received a new scheduler, commonly called the O(1) scheduler because of its algorithmic behavior (O(1) is an example of big-O notation; in short, it means the scheduler can do its work in constant time, regardless of the size of the input). It solved the shortcomings of the previous Linux scheduler and introduced powerful new features and performance characteristics [9].

A common type of scheduling algorithm is priority-based scheduling. The idea is to rank processes based on their worth and need for processor time: processes with a higher priority run before those with a lower priority. Linux builds on this idea and provides dynamic priority-based scheduling. This concept begins with an initial base priority and then allows the scheduler to increase or decrease the priority dynamically to fulfill scheduling objectives. Linux 2.6.11 is preemptive: when a process enters the TASK_RUNNING state, the kernel checks whether its priority is higher than the priority of the currently executing process. If it is, the scheduler is invoked to preempt the currently executing process and run the newly runnable process, as sketched below. Additionally, when a process's timeslice reaches zero, it is preempted and the scheduler is again invoked to select a new process.
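As a rough illustration of this preemption check (a minimal sketch, not the actual 2.6.11 source; the helper name is ours, and rq->curr stands for the currently running task, a field present in the real kernel but omitted from the simplified runqueue shown in Section 4):

static void check_preempt(struct runqueue *rq, struct task_struct *p)
{
        /* in the kernel, a numerically lower prio value means a higher priority */
        if (p->prio < rq->curr->prio)
                set_tsk_need_resched(rq->curr); /* ask schedule() to run soon and pick p */
}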

Many operating systems (older versions of Linux included) loop over every task to recalculate each task's timeslice once they have all reached zero. The new Linux scheduler avoids the need for such a recalculation loop. Instead, it maintains two priority arrays for each processor: an active array and an expired array. The active array contains all the tasks in the associated runqueue that have timeslice left; the expired array contains all the tasks in the associated runqueue that have exhausted their timeslice. When a task's timeslice reaches zero, its timeslice is recalculated before it is moved to the expired array. Recalculating all the timeslices is then as simple as switching the active and expired arrays. Because the arrays are accessed only via pointers, switching them is as fast as swapping two pointers.

This is performed in schedule():

struct prio_array *array = rq->active;
if (!array->nr_active) {
        /* every task in the active array has expired: swap the two arrays */
        rq->active = rq->expired;       /* the old expired array becomes the active array */
        rq->expired = array;            /* the old active array becomes the expired array */
}

This switch is the key feature of the O(1) scheduling algorithm.

4. Curt Scheduling Algorithm

Linux has many advantages, but it is designed as a general-purpose time-sharing system, not as a real-time system. Although Linux supports the SCHED_FIFO policy, real-time signals, and memory locking, it still has several shortcomings as a real-time system: 1) Linux sacrifices the response time of real-time tasks for fairness among all tasks; 2) when all tasks in the active array exhaust their timeslices, the scheduler switches the active and expired arrays, which prevents real-time tasks from responding on time; and 3) random preemption makes it difficult to estimate a task's finish time. Fig. 1 describes Curt's framework. In the previous section we discussed the scheduling algorithm in the abstract; let us now look at how the Curt algorithm is realized.

(1) Improvement of queue management

The basic data structure in the scheduler is the runqueue. The runqueue is the list of runnable processes on a given processor; there is one runqueue per processor. Each runnable process is on exactly one runqueue. We define the runqueue as struct runqueue:

struct runqueue {
        spinlock_t lock;                /* spin lock that protects this runqueue */
        unsigned long nr_running;       /* number of runnable tasks */
        /* ... other fields omitted ... */
        struct mm_struct *prev_mm;      /* mm_struct of the last task that ran */
        struct prio_array *active;      /* pointer to the active priority array */
        struct prio_array arrays;       /* the actual priority array */
        atomic_t nr_iowait;             /* number of tasks waiting on I/O */
};

struct prio_array {
        int nr_active;                          /* number of tasks in the queues */
        unsigned long bitmap[BITMAP_SIZE];      /* priority bitmap */
        struct list_head queue[MAX_PRIO];       /* one queue per priority level */
};

We simplify the runqueue so that each one contains a single priority array (the active array). Priority arrays are defined as struct prio_array, and the priority array is the central data structure in Curt scheduling. Each priority array contains one queue of runnable processes per priority level; these queues hold the runnable processes at each priority level. The priority array also contains a priority bitmap (bitmap[BITMAP_SIZE]) used to efficiently discover the highest-priority runnable task in the system. By using a single priority array, we save the time spent on array swapping and the associated context switching. When a process's timeslice reaches zero, it is moved to the tail of the queue for its priority level, where it waits for its next timeslice. This method clearly improves the efficiency of real-time tasks.
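For illustration, inserting a runnable task into the single priority array could look like the following sketch (simplified for this description; curt_enqueue_task is a hypothetical helper, while list_add_tail(), __set_bit(), and the task_struct fields run_list, prio, and array are the standard 2.6 kernel ones):

static void curt_enqueue_task(struct task_struct *p, struct prio_array *array)
{
        /* append the task to the list for its priority level */
        list_add_tail(&p->run_list, &array->queue[p->prio]);
        /* mark this priority level as non-empty in the bitmap */
        __set_bit(p->prio, array->bitmap);
        array->nr_active++;
        p->array = array;
}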

The advantage of using a priority bitmap is that a fast find-first-set algorithm can be used to search the bitmap quickly. Each priority array contains a bitmap field with at least one bit for every priority in the system. Initially, all the bits are zero. When a task of a given priority becomes runnable, the corresponding bit in the bitmap is set to one. For example, if a task with priority seven is runnable, then bit seven is set. With this method, finding the highest-priority task is as trivial as finding the first set bit in the bitmap. Because the number of priorities is static, the time to complete this search is constant. Each priority array also contains an array of struct list_head queues named queue, one queue per priority. Each list corresponds to a given priority and contains all the runnable processes of that priority, so finding the next task to run is as simple as selecting the next element in the list. Within a given priority, new tasks join the tail of the priority's queue. The priority array also contains a counter, nr_active, which is the number of runnable tasks in the priority array. We use a round-robin (RR) algorithm to schedule the processes within each priority's queue. Only when the tasks in a higher-priority queue have finished does the next lower-priority queue start to run. There is no difference between real-time processes and normal processes here. With this strategy a task's completion time is easy to forecast, a characteristic that suits real-time requirements well.
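A sketch of how the next task could be selected with these structures (again simplified; curt_pick_next is a hypothetical name, while sched_find_first_bit() and list_entry() are standard 2.6 kernel helpers):

static struct task_struct *curt_pick_next(struct prio_array *array)
{
        int idx;
        struct list_head *queue;

        /* find the highest (numerically lowest) non-empty priority level */
        idx = sched_find_first_bit(array->bitmap);
        queue = &array->queue[idx];
        /* round-robin: run the task at the head of that priority's list */
        return list_entry(queue->next, struct task_struct, run_list);
}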

(2) Improvement of process analysis

The key feature of Curt in process analysis is that the algorithm distributes timeslices and priorities dynamically. Processes can be classified as either I/O-bound or processor-bound. An I/O-bound process spends much of its time waiting on I/O requests; consequently, such a process is runnable only for short durations before it blocks again. Conversely, processor-bound processes spend much of their time executing code. The scheduling policy in a system must attempt to satisfy two conflicting goals: fast process response time (low latency) and maximal system utilization (high throughput).

We aim to provide good interactive response and optimize for process response (low latency), thus favoring I/O-bound processes over processor-bound ones. To do so we maintain an I/O queue array: each entry contains one queue of I/O-bound processes, and each queue corresponds to one I/O response. When the I/O finishes, we can quickly find the waiting process in the I/O queue and put it back into the runqueue in a short time. While moving the process from the I/O waiting queue to the runqueue, we recalculate its priority: if the task has taken too much processor time, we lower its priority as a penalty; if not, we raise its priority.

static int recalculate_prio(struct task_struct *p)
{
        int bonus, prio;

        bonus = calculate_bonus(p);     /* bonus based on how much I/O the task consumed */
        prio = p->static_prio - bonus;  /* reset the priority using the bonus */
        return prio;
}
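Putting the pieces together, the I/O-completion path could look roughly like the following sketch (hypothetical helper names: io_queue_remove is assumed, and curt_enqueue_task is the enqueue sketch from the previous subsection):

static void curt_io_wakeup(struct runqueue *rq, struct task_struct *p)
{
        /* take the task off its I/O waiting queue (assumed helper) */
        io_queue_remove(p);
        /* reward or punish the task depending on its recent processor usage */
        p->prio = recalculate_prio(p);
        /* put it back on the runqueue at its new priority */
        curt_enqueue_task(p, rq->active);
}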

In the runqueue, when a task's timeslice reaches zero, it is preempted. We recalculate a new timeslice for it and move it to the tail of its priority queue:

static unsigned int recalculate_timeslice(struct task_struct *p)
{
        /* derive the new timeslice from the task's static priority */
        return prio_timeslice((100 * HZ / 1000) * 4, p->static_prio);
}
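For completeness, a sketch of how this could be driven from the timer tick (simplified; curt_scheduler_tick is a hypothetical name, while list_move_tail(), set_tsk_need_resched(), and the time_slice field are standard 2.6 kernel facilities):

static void curt_scheduler_tick(struct runqueue *rq, struct task_struct *p)
{
        if (!--p->time_slice) {
                /* timeslice exhausted: hand out a fresh one and requeue at the tail */
                p->time_slice = recalculate_timeslice(p);
                list_move_tail(&p->run_list, &rq->active->queue[p->prio]);
                set_tsk_need_resched(p);        /* let another task of this priority run */
        }
}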


