{"id":1011,"date":"2026-04-19T07:40:20","date_gmt":"2026-04-19T07:40:20","guid":{"rendered":"https:\/\/networkingnotebook.com\/?p=1011"},"modified":"2026-04-19T07:42:07","modified_gmt":"2026-04-19T07:42:07","slug":"qos-red-wred-tail-drop","status":"publish","type":"post","link":"https:\/\/networkingnotebook.com\/?p=1011","title":{"rendered":"QOS (RED\/WRED\/Tail Drop)"},"content":{"rendered":"\n<p>Today I am going to talk more about QOS and the congestion avoidance and management behaviors that take place. One of the major reasons why performance lacks in networks is because there is either not enough bandwidth or there are too many devices\/applications competing for bandwidth. They can be either, but both reasons lead to network congestion. Network congestion occurs when the incoming traffic arrives at a faster rate than the device or network link can transmit it. This means that data is coming in way faster than data exiting the device. The result of this causes network devices to place the data in a queue. A queue is a temporary storage place for packets that have not yet been transmitted. These packets will be stored in a queue until it is time for it to be transmitted.&nbsp;<\/p>\n\n\n\n<p><strong>FIFO- <\/strong>If no QOS policies are applied on network devices yet, the default queuing mechanism is FIFO which stands for \u201cFirst In First Out.\u201d This means that the packets are processed in the exact order that they arrived. So if a packet arrives in 10th place, it will be processed as the 10th packet. This resembles a grocery line, where customers are served on a first come first serve basis. 
Although this may seem fair, it is not sustainable in practice, because critical traffic such as voice and video streaming packets can end up queued behind bulk data file transfers.\u00a0 The file transfer is then served first while the voice communication becomes unusable for other users on the same network.\u00a0<\/p>\n\n\n\n<p><strong>Tail Drop-<\/strong> Another behavior that occurs during network congestion is called \u201ctail drop.\u201d This is the default congestion behavior in which a network device drops every newly arriving packet because the queue is 100 percent full. Once the queue has no memory left to hold more data, packets are dropped until space frees up, which makes tail drop a major source of packet loss.\u00a0<\/p>\n\n\n\n<p>Tail drop is a basic behavior that performs no classification or prioritization, so it treats all packets equally: voice packets and file transfer packets are dropped with no differentiation. Tail drop is also a major cause of \u201cTCP Global Synchronization,\u201d a phenomenon in which multiple TCP flows slow their transmission rates at the same time because of a shared packet loss event. When tail drop occurs, many TCP flows lose packets simultaneously and without differentiation. TCP uses packet loss as a signal to reduce its transmission rate, so when the loss is shared across many TCP connections, ALL of them slow down together. With every flow slowed down, the network&#8217;s total bandwidth is underutilized, and after a while the flows all gradually ramp their rates back up until congestion returns.
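A tail-drop sketch in Python (the queue limit here is purely illustrative, not a real device setting):

```python
# Tail drop: once the queue is full, every newly arriving packet is
# dropped, with no regard for what kind of traffic it carries.
QUEUE_LIMIT = 4  # illustrative capacity

queue, dropped = [], []
for packet in ["pkt-0", "pkt-1", "pkt-2", "pkt-3", "pkt-4", "pkt-5", "pkt-6"]:
    if len(queue) < QUEUE_LIMIT:
        queue.append(packet)    # space left: enqueue normally
    else:
        dropped.append(packet)  # queue full: tail-drop the arrival

print(queue)    # the first four packets made it in
print(dropped)  # the rest were dropped, regardless of importance
```

Every flow sharing this queue sees losses at the same moment, which is exactly the shared loss event that triggers TCP global synchronization.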
This produces a sawtooth throughput pattern that cycles endlessly between under-utilization and congestion.\u00a0<\/p>\n\n\n\n<p><strong>RED- <\/strong>To combat TCP global synchronization, RED (Random Early Detection) was created. Instead of waiting for the queue to fill up and only dropping packets once there is no more memory, RED actively monitors the average queue size (depth). Two values are manually configured: a minimum threshold and a maximum threshold. While the average queue size is below the minimum threshold, no packets are dropped and traffic is forwarded normally. Once the average queue size reaches the minimum threshold, RED begins dropping packets randomly to keep the queue from filling up and congestion from building. The drop probability is low near the minimum threshold and rises as the average approaches the maximum threshold; above the maximum threshold, every arriving packet is dropped. Because RED drops packets at random, it does not matter whether a packet is more or less important. This counters TCP global synchronization, since individual TCP flows now slow down at different times, but it is limiting because time-sensitive packets have the same chance of being dropped as a bulk file download.\u00a0<\/p>\n\n\n\n<p><strong>WRED- <\/strong>Because of RED&#8217;s lack of classification and prioritization, a more sophisticated mechanism called WRED (Weighted Random Early Detection) was established. WRED works like RED in that it monitors the average queue size (depth) and begins dropping packets once a configured minimum threshold is crossed. Where WRED differs from RED is that it drops packets based on their priority, assigning different minimum thresholds and drop probabilities per traffic class.
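The RED drop decision can be sketched as follows; the thresholds, the maximum drop probability, and the WRED per-class values are all made-up illustrative numbers, not platform defaults:

```python
import random

def red_drop_probability(avg_depth, min_th, max_th, max_prob=0.1):
    """RED: no drops below min_th, a guaranteed drop at or above max_th,
    and a linearly rising drop probability in between."""
    if avg_depth < min_th:
        return 0.0
    if avg_depth >= max_th:
        return 1.0
    return max_prob * (avg_depth - min_th) / (max_th - min_th)

def should_drop(avg_depth, min_th=20, max_th=40):
    # The actual drop is a random draw against the computed probability.
    return random.random() < red_drop_probability(avg_depth, min_th, max_th)

print(red_drop_probability(10, 20, 40))  # 0.0  -> below min threshold
print(red_drop_probability(30, 20, 40))  # 0.05 -> halfway up the ramp
print(red_drop_probability(45, 20, 40))  # 1.0  -> above max threshold

# WRED extends this by keeping a separate (min_th, max_th, max_prob)
# profile per traffic class (keyed here by DSCP name, values invented):
# high-priority EF starts dropping late and rarely, best-effort early and often.
wred_profiles = {"EF": (35, 40, 0.02), "AF11": (25, 40, 0.05), "BE": (15, 40, 0.10)}
```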
WRED drops lower-priority packets more aggressively while giving higher-priority packets a lower chance of being dropped. Low-priority traffic such as bulk file downloads gets a lower minimum threshold and a higher drop probability, while critical, higher-priority traffic gets a higher minimum threshold and a lower drop probability. Prioritization is based on the DSCP markings carried in the IPv4 or IPv6 header at Layer 3 of the OSI model.\u00a0\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today I am going to talk more about QOS and the congestion avoidance and management behaviors that take place. One of the major reasons why performance lacks in networks is because there is either not enough bandwidth or there are too many devices\/applications competing for bandwidth. They can be either, but both reasons lead to&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1011","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts\/1011","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1011"}],"version-histo
ry":[{"count":2,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts\/1011\/revisions"}],"predecessor-version":[{"id":1013,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts\/1011\/revisions\/1013"}],"wp:attachment":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1011"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1011"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1011"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}