Input Queue

In computer science, an input queue stores data that is waiting to be written into memory or transferred to another location. Computers need this holding area for their data because there is often more data than the central processing unit (CPU) can process at any given time. A queue is a collection in which elements are added at the rear and removed from the front.
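A minimal sketch of this behaviour in Python, using the standard library's deque as the queue (the item names are hypothetical):

    from collections import deque

    # Items are appended at the rear and removed from the front (FIFO).
    input_queue = deque()

    # Producer side: arriving data is appended at the rear.
    for item in ["A", "B", "C"]:
        input_queue.append(item)

    # Consumer side: the CPU removes items from the front, so they are
    # processed in exactly the order they arrived.
    while input_queue:
        print("processing", input_queue.popleft())   # A, then B, then C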

CPUs process many types of data with different priorities. Audio data, for example, is given a higher priority than most other types. If a CPU treated audio on par with other data, the resulting delay would make computers unusable for playing music. By separating data into different groups and arranging them in queues, the computer knows which data the CPU should process first for maximum efficiency.

Every company develops its own techniques for handling data and placing it into queues. The following are queuing mechanisms used by Cisco, a networking device company, in most of its products:

1) First in, first out queue (FIFO):

In this mode, data are taken out of the queue in the order in which they entered it. There are no special cases: all data are treated the same. If a large stream of data A arrives before a small stream of data B, B still has to wait until A has been completely served, even if B is higher-priority data for the operating system (OS). When a system treats every type of data the same, users can experience delays in the OS because the OS's own data has to wait its turn.
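The Python sketch below illustrates this; the packet sizes and priority labels are hypothetical, and the point is only that FIFO ignores priority entirely:

    from collections import deque

    # Hypothetical packets: (name, size_in_bytes, priority)
    fifo = deque()
    fifo.append(("A", 1_500_000, "low"))   # large, low-priority stream arrives first
    fifo.append(("B", 200, "high"))        # small, high-priority packet arrives second

    # FIFO ignores the priority field: B is forwarded only after
    # all of A has been transmitted.
    while fifo:
        name, size, priority = fifo.popleft()
        print(f"transmitting {name} ({size} bytes, priority={priority})")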

2) Weighted fair queue (WFQ):

Weighted fair queuing uses a max-min fair-share algorithm to distribute bandwidth. The "min" part means the OS guarantees every type of data an equal minimum share of the resource; the "max" part means the OS gives extra resource to data that needs to transmit more at a given moment, and takes it back afterwards. "Weighted" means the OS assigns a weight to each type of data and uses that weight to decide how the data are placed into the queue and served.
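The following Python sketch shows a weighted max-min fair-share allocation in the abstract; it is not Cisco's WFQ implementation, and the flow names, weights, and demands are hypothetical:

    def weighted_fair_share(capacity, flows):
        """flows: name -> (weight, demand). Returns name -> allocation."""
        alloc = {name: 0.0 for name in flows}
        active = set(flows)
        remaining = capacity
        while active and remaining > 1e-9:
            total_weight = sum(flows[n][0] for n in active)
            satisfied = set()
            # Water-filling step: offer each active flow a weighted share of the
            # remaining capacity, capped by what it still wants to send.
            for name in active:
                weight, demand = flows[name]
                share = remaining * weight / total_weight
                alloc[name] = min(alloc[name] + share, demand)
                if alloc[name] >= demand - 1e-9:
                    satisfied.add(name)
            remaining = capacity - sum(alloc.values())
            if not satisfied:
                break   # every flow took its full share; nothing left to redistribute
            active -= satisfied
        return alloc

    # Example: voice is weighted higher than web or bulk traffic.
    print(weighted_fair_share(100, {"voice": (4, 30), "web": (2, 80), "bulk": (1, 80)}))

Running the example gives roughly voice=30, web=46.7 and bulk=23.3: voice gets everything it asked for, and the leftover capacity is split between web and bulk in proportion to their weights.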

3) Priority queue (PQ):

Priority queuing divides traffic into four sub-queues of different priorities. Data in a queue are served only when all higher-priority queues are empty. If data arrive in a higher-priority queue while the OS is transmitting data from a lower-priority queue, the OS suspends the lower-priority queue and processes the higher-priority data first. The OS does not consider how long lower-priority queues have been waiting: it always drains queues from highest to lowest priority before moving on. Within each queue, data are forwarded on a first-in, first-out basis.
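A minimal Python sketch of strict priority servicing over four sub-queues (the queue indices and packet names are hypothetical):

    from collections import deque

    queues = {0: deque(), 1: deque(), 2: deque(), 3: deque()}   # 0 = highest priority

    def enqueue(priority, packet):
        queues[priority].append(packet)

    def dequeue():
        # Always scan from the highest-priority queue downwards: a lower
        # queue is served only when every higher-priority queue is empty.
        for priority in sorted(queues):
            if queues[priority]:
                return queues[priority].popleft()
        return None   # all queues are empty

    enqueue(3, "bulk-1")
    enqueue(0, "voice-1")
    enqueue(3, "bulk-2")
    print(dequeue())   # voice-1 is sent first even though bulk-1 arrived earlier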

4) Custom queue (CQ):

Custom queuing divides traffic into 17 sub-queues. The first, queue 0, is reserved for the OS to transmit system data; the other 16 queues hold user-defined data. Users can classify the data they consider important and assign each class to a queue. Each queue has a limited size and drops incoming data once that limit is reached. Queues are serviced in turn, based on how much data has been sent from each one: when the current queue has sent its allowed amount of data, or becomes empty, the OS holds it and services the next queue. Empty queues are simply skipped.
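A minimal Python sketch of this round-robin, per-queue-limit servicing; the packet sizes and limits are hypothetical, and queue 0 (system data) is omitted for brevity:

    from collections import deque

    # Packet sizes (in bytes) waiting in three user-defined sub-queues.
    queues = {1: deque([300, 300, 300]), 2: deque([1500]), 3: deque()}
    limits = {1: 600, 2: 1500, 3: 1000}   # data allowed per queue on each pass

    def one_round():
        for qid in sorted(queues):
            if not queues[qid]:
                continue                   # empty queues are skipped
            sent = 0
            # Send from this queue until it is empty or its limit is reached.
            while queues[qid] and sent < limits[qid]:
                sent += queues[qid].popleft()
            print(f"queue {qid}: sent {sent} bytes this round")

    one_round()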
