Talk:Fixed-priority pre-emptive scheduling
WikiProject Computing: rated Stub-class.
In my opinion it is wrong to claim the advantage of "making sure no task hogs the processor for any time longer than the time slice": with fixed priorities, only the highest-priority ready thread will run, regardless of any time slice.
Advantages of this model
There are many good things about this approach left unsaid, especially when the tasks (threads) not only have fixed priorities but are also fixed in number. If the priorities are fixed, the number of tasks is likely fixed as well, though the article does not say so; this case brings significant further advantages.
The implementation can be truly compact. I doubt any other approach can use less memory or less code.
When the number of tasks is bounded, they can be represented as members of a set by assigning them positions in a bit map, so only one bit is needed per task in the queue of tasks ready to run. If there are 7 or fewer tasks, the queue requires only one byte of storage. The sign bit of this queue should always be set and assigned to the idle task, which executes when no real task is ready to run. The idle task is typically a one-instruction loop (`here: jump here`) or else the console/control panel.
Task priority can be equated to the bit position in the map. The convention that the right-most (least significant) bit has the highest priority allows selecting the highest-priority task by a simple, parallel operation, in fixed time. And-ing a number with its two's complement isolates the right-most set bit in a register; under this convention, that bit represents the highest-priority ready task in the set.
The discussion misses the point that tasks may not need to run to completion but may be cyclical; that is, they may be allowed to run until they block on I/O or on the need for a result from some other task. This is perhaps a minor point, but the blocking operation can be managed by a semaphore. In a real-time system with dedicated tasks, no time slicing may be needed.
A semaphore must maintain a resource counter and a queue of tasks waiting for the resource. If the resource count is negative, that is if tasks are waiting for the resource controlled by the semaphore, then the count is bounded by the number of tasks. In this case the waiting count can be represented as a bit-map set containing bits for the waiting tasks instead of as a number, just like the ready queue. Therefore, to minimize storage, the same register that is used as the semaphore resource count can be used as the semaphore waiting task queue. The sign bit can be used to indicate the state of the register, if negative it is a queue, otherwise it is a number. For 7 or fewer tasks this only requires one byte per semaphore.
Transferring a task from one queue to another involves isolating the task bit, as described above, then xor-ing it into both queues.
The only other RAM the scheduler requires is a place to store a stack pointer for each suspended task. Of course the tasks themselves need stack space and other work space, but this does not count against the scheduler.
Without going into the details, it should be clear that such a minimal system can be implemented in a small amount of code; experience with several simple microprocessors shows this is typically 100-200 bytes. And the scheme requires only a few bytes of RAM, as outlined above. I doubt any other scheduling approach can use less memory or less code. Also, small code usually results in fast execution and few bugs.