{"id":1007,"date":"2026-04-19T07:38:21","date_gmt":"2026-04-19T07:38:21","guid":{"rendered":"https:\/\/networkingnotebook.com\/?p=1007"},"modified":"2026-04-19T07:39:37","modified_gmt":"2026-04-19T07:39:37","slug":"qosbandwidth-latency-jitter-packet-loss","status":"publish","type":"post","link":"https:\/\/networkingnotebook.com\/?p=1007","title":{"rendered":"QOS (Bandwidth\/Latency\/Jitter\/Packet Loss)"},"content":{"rendered":"\n<p>Today I am going to talk about QoS (Quality of Service) and the criteria that impact how well an application or service performs. QoS is a mechanism used to classify, mark, and prioritize certain kinds of traffic so that they continue to perform well under congestion. This allows critical applications and services to maintain performance even when the network is congested. When thinking about QoS, there are certain criteria that must be monitored to determine whether an application will function well: total bandwidth, delay (latency), jitter, and packet loss. If any of these suffer, the overall performance of the application may suffer as well.<br><\/p>\n\n\n\n<p><strong>Bandwidth- <\/strong>Bandwidth refers to the maximum rate at which data can be transmitted by a network link or device, usually measured in bits per second (for example, Mbps or Gbps). Bandwidth is not the actual speed of the connection but its capacity. For example, if the bandwidth of a port is 1 Gbps, the actual rate after accounting for protocol overhead and network congestion may be closer to 800 Mbps. That actual rate is called \u201cthroughput\u201d: the amount of data that is successfully transferred. In short, bandwidth is the theoretical maximum capacity, while throughput is the actual rate observed.
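As a back-of-the-envelope sketch (using only the 1 Gbps / 800 Mbps figures from the example above, not a measurement of any real link), the gap between bandwidth and throughput can be expressed as an efficiency ratio:

```python
# Sketch: bandwidth is capacity, throughput is what actually gets through.
# The 1 Gbps / 800 Mbps values are the illustrative figures from the text.
bandwidth_mbps = 1000   # link capacity: a 1 Gbps port
throughput_mbps = 800   # observed rate after overhead and congestion

efficiency = throughput_mbps / bandwidth_mbps
print(f'Link efficiency: {efficiency:.0%}')  # prints: Link efficiency: 80%
```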
The more bandwidth a connection or network has, the higher its performance and throughput tends to be, although this is not always the case if there is latency, jitter, or packet loss.<\/p>\n\n\n\n<p><strong>Latency-<\/strong> Another important statistic for measuring performance is delay, or latency. Delay (latency) is the amount of time it takes a packet to travel from source to destination, usually measured in milliseconds. It can be reported one way (source to destination) or as round-trip time (RTT), which is what tools such as ping report.<br><\/p>\n\n\n\n<p><strong>Jitter- <\/strong>Jitter is the variation in delay (latency) between packets as they travel across a network, which means packets do not arrive at consistent, evenly spaced intervals. While latency measures the total travel time, jitter measures how much that time varies from packet to packet. For example, the first packet might arrive after 150 ms, the second after 200 ms, the third after 230 ms, and the fourth after 170 ms. Because the packets arrive at unpredictable intervals, jitter is a problem for real-time or time-sensitive applications that rely on predictable packet arrival. A real-life example of jitter would be a video stream where the audio is choppy, voices sound robotic, playback stutters, and the video is distorted and pixelated.<\/p>\n\n\n\n<p><strong>Packet Loss- <\/strong>The last metric measured for performance is packet loss. Packet loss occurs when one or more packets fail to reach their intended destination; the most common cause is network congestion. Packet loss also increases processing overhead: in TCP connections a lost packet must be retransmitted, which adds delay and reduces throughput.
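The jitter example above can be sketched in a few lines. This is only an illustration using the arrival delays from the text (real tools such as ping or iperf measure jitter from live traffic, and RFC 3550 uses a smoothed estimator; the mean absolute difference below is a common simplification):

```python
# Minimal sketch: inter-arrival jitter from the per-packet delays in the text.
delays_ms = [150, 200, 230, 170]  # one-way delay of each packet, milliseconds

# Jitter here = average absolute change in delay between consecutive
# packets (a simplified definition, not the RFC 3550 smoothed estimator).
diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter_ms = sum(diffs) / len(diffs)
print(f'average jitter: {jitter_ms:.1f} ms')  # prints: average jitter: 46.7 ms
```

A perfectly steady stream (all delays equal) would give a jitter of 0 ms, which is why evenly spaced arrivals matter for real-time traffic.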
UDP packets, by contrast, are not retransmitted at all, so lost data simply never arrives, which can degrade the user experience.<\/p>\n\n\n\n<p>For a quality audio call, the commonly cited targets for good performance are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Delay (latency) = 150 ms or less<\/li>\n\n\n\n<li>Jitter = 30 ms or less<\/li>\n\n\n\n<li>Packet loss = 1% or less<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Today I am going to talk about QoS (Quality of Service) and the criteria that impact how well an application or service performs. QoS is a mechanism used to classify, mark, and prioritize certain kinds of traffic to ensure they perform well under congestion. This allows for critical applications and services to maintain performance despite&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1007","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts\/1007","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1007"}],"version-history":[{"count":3,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts\/1007\/revisions"}],"predecessor-version
":[{"id":1010,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=\/wp\/v2\/posts\/1007\/revisions\/1010"}],"wp:attachment":[{"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1007"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1007"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/networkingnotebook.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1007"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}