'''Generalized processor sharing''' (GPS) is a service [[policy]] for multiple classes of customers where service capacity is shared between customer classes according to some fixed weights.<ref name="parekh">{{cite doi|10.1109/90.234856}}</ref> Service is shared between all non-empty classes in the same ratio as the weight factors (positive values for each service class).<ref>{{cite doi|10.1145/1243401.1243409}}</ref>

In [[scheduling (computing)|processor scheduling]], generalized processor sharing is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness."<ref>{{cite doi|10.1145/1594835.1504188}}</ref> Generalized processor sharing assumes that traffic is fluid ([[infinitesimal]] packet sizes) and can be arbitrarily split. Several service disciplines track the performance of GPS quite closely, such as [[weighted fair queuing]] (WFQ),<ref name="demers">{{cite doi|10.1145/75247.75248}}</ref> also known as packet-by-packet generalized processor sharing (PGPS).
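
As an illustration of how a packet-by-packet scheduler can approximate GPS, the following sketch (in Python; the class name, flows and packet sizes are purely illustrative) stamps each packet with a virtual finishing time and transmits packets in increasing stamp order. For simplicity it uses a self-clocked virtual time rather than the exact GPS reference system that WFQ/PGPS tracks.
<syntaxhighlight lang="python">
import heapq

class PacketByPacketScheduler:
    """Simplified packet-by-packet approximation of GPS.

    Each arriving packet of a flow gets a finish tag
        F = max(virtual_time, last_finish[flow]) + length / weight[flow]
    and packets are sent in increasing order of F.  (Self-clocked
    simplification; true WFQ/PGPS derives the virtual time from the
    fluid GPS reference system.)
    """

    def __init__(self, weights):
        self.weights = weights                        # flow id -> positive weight
        self.last_finish = {f: 0.0 for f in weights}  # last finish tag per flow
        self.virtual_time = 0.0
        self.queue = []                               # heap of (finish, seq, flow, length)
        self.seq = 0                                  # tie-breaker for equal tags

    def enqueue(self, flow, length):
        start = max(self.virtual_time, self.last_finish[flow])
        finish = start + length / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, length = heapq.heappop(self.queue)
        self.virtual_time = finish   # self-clocking: advance the virtual time
        return flow, length

# Flow "a" has twice the weight of flow "b"; with equal packet sizes and both
# flows backlogged, "a" is served roughly twice as often, as under GPS.
sched = PacketByPacketScheduler({"a": 2.0, "b": 1.0})
for flow, length in [("a", 100)] * 4 + [("b", 100)] * 2:
    sched.enqueue(flow, length)
while sched.queue:
    print(sched.dequeue())   # order: a, a, b, a, a, b
</syntaxhighlight>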
== Details ==
In a network such as the internet, different application types require different levels of performance. For example, email is a genuinely [[store and forward]] application, whereas [[videoconferencing]] is not, since it requires [[low latency]]. When packets are queued up on one end of a congested link, the node usually has some freedom in deciding the order in which to send them. One example ordering is simply [[first-come, first-served]], which works fine while queues stay small, but causes problems when latency-sensitive packets are blocked behind packets from bursty, higher-bandwidth applications.

[[Weighted round robin]] over the application types assigns each application type <math>i</math> a weight <math>w_i</math> such that the weights of all the application types add up to 1. In every "round" of the round robin, the server serves each application type in proportion to its weight. Suppose that in a given round, <math>B</math> is the set of application types that currently have queued packets. Then the fraction of the server's capacity that type <math>i</math> receives is

:<math>\frac{w_i}{\sum_{j\in B} w_j}.</math>
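
For instance, a minimal sketch of this share computation in Python (the weights and the set of backlogged classes below are illustrative assumptions):
<syntaxhighlight lang="python">
def gps_share(weights, backlogged):
    """Fraction of the server each backlogged class receives under GPS:
    w_i divided by the sum of the weights of all backlogged classes."""
    total = sum(weights[j] for j in backlogged)
    return {i: weights[i] / total for i in backlogged}

# Three classes with weights 0.5, 0.3 and 0.2; class 2 is currently idle,
# so its share is redistributed to the others in proportion to their weights.
weights = {0: 0.5, 1: 0.3, 2: 0.2}
print(gps_share(weights, backlogged={0, 1}))   # {0: 0.625, 1: 0.375}
</syntaxhighlight>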
Generalized processor sharing assumes that the traffic is fluid, i.e., infinitely divisible, so that whenever an application type has packets in the queue, it receives exactly the fraction of the server given by the formula above. However, real traffic is not fluid: it consists of packets, possibly of variable sizes. Therefore, GPS is mostly a theoretical tool for benchmarking practical scheduling algorithms that approximate the GPS ideal. GPS and its approximations have been studied extensively.
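
To make the fluid idealization concrete, the sketch below (in Python; the link rate, time step and backlogs are illustrative assumptions) drains per-class backlogs at the GPS rates, reallocating capacity as classes empty:
<syntaxhighlight lang="python">
def simulate_fluid_gps(weights, backlog, rate, dt, steps):
    """Fluid GPS: in each interval of length dt, the link capacity `rate` is
    split among the currently backlogged classes in proportion to their
    weights; idle classes receive nothing and their share is redistributed.
    (Capacity freed when a class empties mid-step is only redistributed at
    the next step -- a discretization of the continuous fluid model.)"""
    history = []
    for _ in range(steps):
        active = [i for i, b in enumerate(backlog) if b > 0]
        if active:
            total_w = sum(weights[i] for i in active)
            for i in active:
                served = rate * dt * weights[i] / total_w
                backlog[i] = max(0.0, backlog[i] - served)
        history.append(list(backlog))
    return history

# Two classes with weights 3 and 1, equal initial backlogs of 12 units, a link
# rate of 8 units/s and 0.5 s steps: class 0 drains three times as fast until
# it empties, after which class 1 receives the full link rate.
for remaining in simulate_fluid_gps([3, 1], [12.0, 12.0], rate=8.0, dt=0.5, steps=6):
    print(remaining)
</syntaxhighlight>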
== See also ==
* [[Fair queuing]]
* [[Weighted fair queuing]]
* [[Deficit round robin]]
* [[Weighted round robin]]
* [[Statistical multiplexing]]
== References ==
<div class="references-small">
<references/>
</div>
[[Category:Scheduling algorithms]]