Problem & Solution

The Problem

 

Growing demand for applications such as Voice over IP (VoIP), interactive video/voice streaming, mission-critical applications, interactive games and interactive remote learning has refocused attention on the need for quality-of-service (QoS) communication and its deployment across multiple ISP and enterprise networks. The standard solutions, Differentiated Services (DiffServ) and Integrated Services (IntServ), are limited in functionality and scalability, respectively, which prevents them from being universal solutions deployable across multiple network domains.

 

The performance requirements of the applications above stress the need for a scalable and economical solution supporting QoS communication in enterprise networks and across multiple ISP domains. Our novel patent-pending protocol and network devices, described below, provide such a solution.

 

The Need for QoS

 

Different Applications - Different Requirements

Different classes of network applications may have very different performance requirements from the network. For instance, FTP and email have zero tolerance for packet loss but high tolerance for delay and jitter (from seconds to hours); their bandwidth requirement matters only through its impact on the overall delivery time. For VoIP applications, a random packet loss rate of 2.5% is annoying, but the same loss rate in a bursty fashion results in distortion. An upper bound of 250 ms on packet round-trip time (RTT) is required for a natural VoIP conversation.

 

Voice and video streaming applications have medium tolerance for delay and jitter (of a few seconds), since they can compensate for excess delay by buffering. Their tolerance for packet loss is low, and their bandwidth requirements depend on the required streaming quality. Interactive video/voice, X-window, telnet and interactive remote learning have some tolerance for packet loss but very low tolerance for delay and jitter; their bandwidth requirements can be enormous.

 

Mission-critical applications have zero tolerance for packet loss, while their tolerance for delay and jitter may vary, as may their bandwidth requirements. Web-based applications have low bandwidth requirements, zero tolerance for packet loss and application-dependent response-time requirements.

 

QoS can be specified by four parameters:

  • Packet loss rate;

  • Packet RTT;

  • Packet jitter;

  • Bandwidth requirement.

Overall, some applications can live in a best-effort service network (as the Internet is today), while others require strict, differing QoS parameter values that cannot be guaranteed in a best-effort network.
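The four parameters above can be captured in a small data structure. The sketch below is purely illustrative: only the VoIP loss-rate (2.5%) and RTT (250 ms) bounds come from the text, and all names and other values are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QoS:
    """The four QoS parameters named in the text."""
    loss_rate: float       # fraction of packets lost (bound or measured)
    rtt_ms: float          # packet round-trip time, milliseconds
    jitter_ms: float       # delay variation, milliseconds
    bandwidth_kbps: float  # bandwidth, kbit/s

# Illustrative VoIP requirement profile; only the loss (2.5%) and
# RTT (250 ms) bounds come from the text, the rest are assumptions.
VOIP = QoS(loss_rate=0.025, rtt_ms=250, jitter_ms=30, bandwidth_kbps=64)

def satisfies(measured: QoS, required: QoS) -> bool:
    """True when a measured path meets a requirement profile."""
    return (measured.loss_rate <= required.loss_rate
            and measured.rtt_ms <= required.rtt_ms
            and measured.jitter_ms <= required.jitter_ms
            and measured.bandwidth_kbps >= required.bandwidth_kbps)
```

A path measured at 1% loss and 200 ms RTT would satisfy the VoIP profile; one at 5% loss would not.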

 

The Misconception of Over-Provisioning

One approach claims that bandwidth over-provisioning can address all QoS requirements: network operators should simply allocate bandwidth that exceeds the peak rate. This approach, however, has several limitations, of which the economic drawback is the most obvious. There are technical drawbacks as well: whereas LAN capacities are simple to increase, deploying new long-haul lines with greater bandwidth in the core network is slow and not always feasible, due to either technology lag or switch-port limitations.

 

Interoperability is yet another limitation. A typical flow of packets in an enterprise network that does not own a private backbone traverses multiple ISP domains, as illustrated in Figure 1. This topology also illustrates a typical network traversed by Internet users. Since QoS communication is an end-to-end performance property, the operators of all ISP domains would need to agree on, coordinate and synchronize the bandwidth over-provisioning. In practice, this is extremely problematic.

 

 

Figure 1: An example of an enterprise network layout.  

 

Most importantly, even when all links have their maximum bandwidths (as determined by the respective switch ports) and are operating below capacity, traffic congestion may still occur due to a temporary influx mismatch in the network. Such a mismatch may occur, for example, when a 1G Ethernet is attached to a 100M Ethernet (Figure 2-a); when three 1G Ethernet LANs access the Internet/intranet through an OC-48 core link (Figure 2-b); when a link interface fails (Figure 2-c); or as a result of the well-known "flash crowd" phenomenon.
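A quick back-of-the-envelope check makes the influx mismatch concrete. The helper below is a sketch, taking OC-48 as roughly 2.488 Gbit/s:

```python
def oversubscription_ratio(ingress_gbps, egress_gbps):
    """Ratio of worst-case offered load to egress capacity;
    a ratio above 1 means a queue builds up during the mismatch
    and packets may eventually be dropped."""
    return sum(ingress_gbps) / egress_gbps

# Figure 2-a: a 1G Ethernet feeding a 100M Ethernet
print(oversubscription_ratio([1.0], 0.1))              # 10x oversubscribed
# Figure 2-b: three 1G LANs sharing one OC-48 (~2.488 Gbit/s) core link
print(oversubscription_ratio([1.0, 1.0, 1.0], 2.488))  # ~1.2x oversubscribed
```

Even the modest 1.2x mismatch of Figure 2-b is enough to congest the core link whenever all three LANs burst simultaneously.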

 

 

Figure 2: Influx mismatch occurrences.  

 

When network congestion does occur, applications such as VoIP, interactive video/voice streaming, interactive remote learning and mission-critical applications cannot function. Consequently, managing and controlling traffic flows across network domains in a manner that accounts for their QoS requirements is a must.

 

Current QoS-Enabled Networks 

 

The current IETF standards, IntServ and DiffServ, fail to provide a practical solution for enterprise networks that cross multiple network domains, and clearly fail in the public Internet. The major reasons are as follows:

  • IntServ is designed to support end-to-end QoS requirements and requires each router/switch along a flow's end-to-end path to maintain per-flow state. Consequently, the solution is not scalable, i.e., it is inefficient in large networks with many flows.

  • IntServ also reserves strict bandwidth resources for the entire lifetime of each flow. This results in a low degree of time multiplexing, i.e., inefficient network resource utilization.

  • DiffServ is designed to be scalable, which is achieved by sophisticated traffic shaping (Token Bucket) and packet scheduling (Weighted Fair Queueing, a.k.a. WFQ). The result is that packet performance can be guaranteed only per hop, not throughout the end-to-end path. This is known as "Per-Hop Behavior" (PHB).

  • DiffServ configuration is one of the most problematic tasks: each router port/interface/PVC along the end-to-end path needs to be configured. Furthermore, setting the configuration parameters is extremely complex and often requires a network provisioning tool.

Furthermore, current QoS-enabled networks pay no attention to fair bandwidth allocation within each QoS application class. Although not as primary an objective as performance, fair bandwidth allocation results in optimal network utilization without discriminating between users who have the same performance needs.
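For reference, the Token Bucket shaper that DiffServ relies on can be sketched in a few lines. This is a minimal illustrative model, not any vendor's implementation; parameter names are assumptions.

```python
class TokenBucket:
    """Minimal token-bucket traffic shaper: tokens accrue at `rate`
    bytes/s up to a ceiling of `burst` bytes, and a packet may be
    sent only if enough tokens are available to cover its size."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # timestamp of the previous check

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True       # packet conforms; send it
        return False          # packet exceeds the profile; delay or drop
```

With `rate=1000` bytes/s and `burst=1500` bytes, a 1500-byte packet is admitted at time 0, a second one at time 0.5 s is rejected (only 500 tokens have accrued), and one at time 1.5 s is admitted again.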

 

Ideal QoS-Enabled Networks

 

An ideal QoS-enabled network can be characterized as follows.

  • End-to-end QoS communication guarantee.

  • Scalable with respect to network size and number of flows.

  • Simple configuration at the edge routers, where only the end-to-end bandwidth, delay, jitter and packet-loss requirements are specified.

  • Adaptive bandwidth allocation based on temporal application demands.

  • Applicable to flows across multiple ISP domains.

  • Fair bandwidth allocation.    

 

A Solution

 

Fundamental Technology and Intellectual Property

We have developed a distributed protocol, referred to as the Resource Management Protocol (RMP), that addresses all the drawbacks of current QoS-enabled networks and provides the ideal solution specified above. The protocol has been proven mathematically to be stable and fast-converging under asynchronous operation. These results have also been substantiated by extensive and detailed simulations of very large ISP networks, using both the NS2 simulation tool and our own specially designed network simulator. Our protocol is USPTO patent pending.

 

Our Solution

We propose a solution based on our innovative patent-pending, distributed, adaptive and scalable protocol, RMP, which enables ideal QoS support as specified above.

 

RMP comprises two protocol components: a core component and an edge component. As illustrated in Figure 3, the edge component is executed at each edge router and the core component at each core router. The individual components exchange control information at the network layer, in an orchestrated manner, so as to achieve the current QoS requirements of the current flows in the network.

 


 

Figure 3: RMP network devices.

 

QoS objectives are achieved by controlling the rate of each QoS flow in the network so that all QoS requirements can be attained. The RMP edge component continuously computes, at the edge routers, the optimal fair rate of each QoS flow based on feedback information received from the RMP core components. Given the optimal flow rates, traffic is shaped at the edge routers using a Token Bucket shaper, and packets are scheduled for transmission using a priority scheduler. In addition, packets are classified according to their QoS level at their entry edge router, and if network resources are insufficient, admission control is applied.
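The priority scheduler mentioned above can be sketched as a strict-priority queue. This is an illustrative sketch only: the class numbering and packet representation are assumptions, not RMP's actual wire format.

```python
import heapq

class PriorityScheduler:
    """Strict-priority transmission queue: a lower class number is
    always served first, and service is FIFO within a class (the
    sequence counter breaks ties in arrival order)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # monotone counter for FIFO ordering within a class

    def enqueue(self, qos_class: int, packet) -> None:
        heapq.heappush(self._heap, (qos_class, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        """Return the next packet to transmit, or None if idle."""
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.enqueue(2, "best-effort pkt")
sched.enqueue(0, "VoIP pkt")        # hypothetical highest-priority class
sched.enqueue(1, "streaming pkt")
print(sched.dequeue())  # "VoIP pkt" is served first
```
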

 

Flow control is applied to aggregated packet flows that share the same path and QoS level. Traffic shaping is done at the IP-packet level, for the simple reason that almost all network applications generate IP packets. Although IP packets can be encapsulated in various transport-protocol messages, such as ATM, MPLS and Frame Relay, shaping traffic at the IP level is a generic solution that will endure.
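The aggregation step amounts to grouping individual flows by a (path, QoS level) key, so that each aggregate is shaped and rate-controlled as one unit. The sketch below is illustrative; the flow fields and router names are assumptions.

```python
from collections import defaultdict

def aggregate(flows):
    """Group individual flows into the aggregates that flow control
    acts on: all flows sharing the same edge-to-edge path and the
    same QoS level form one aggregate."""
    aggregates = defaultdict(list)
    for flow in flows:
        key = (flow["path"], flow["qos_level"])
        aggregates[key].append(flow["id"])
    return dict(aggregates)

# Hypothetical flows; E* are edge routers, C* are core routers.
flows = [
    {"id": "f1", "path": ("E1", "C3", "E7"), "qos_level": "voice"},
    {"id": "f2", "path": ("E1", "C3", "E7"), "qos_level": "voice"},
    {"id": "f3", "path": ("E1", "C3", "E7"), "qos_level": "video"},
]
print(aggregate(flows))
# f1 and f2 share a path and QoS level, so they are controlled as
# one aggregate; f3 forms a separate aggregate on the same path.
```
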

      

Configuration is done only at the edge routers, where the end-to-end QoS requirements are specified.

 

Implementation 

The most natural way to implement the edge and core RMP components is inside the edge and core routers, respectively. However, such an implementation requires a painful penetration into the routers of "dinosaurs" such as Cisco and Juniper.

 

A simpler solution, which is also faster to deploy (until the solution finds its way into the "dinosaurs'" routers), is to implement the RMP components in independent network devices, as depicted in Figure 3. As partially illustrated in Figure 4, each RMP core device is attached to the core links of core and edge routers, and each RMP edge device is attached to the access links of an edge router.

 

The protocol logic can be developed either in a Network Processor Unit (NPU)-based network chip or in an ASIC-based packet-processor chip.

 

 

Figure 4: RMP devices in the network.

 

 

 

 

©2006 Fair QoS Flows. All Rights Reserved.