
## Contents

I. INTRODUCTION

II. PROBLEMS AT LARGE WINDOWS

A. Packet and Flow Level Modeling

B. Equilibrium Problem

C. Dynamic Problems

III. Delay-Based Approach

A. Motivation

B. Implementation Strategy

IV. Architecture and Algorithms

A. Estimation

B. Window Control

C. Packet-Level Implementation

V. Equilibrium and Stability of the Window Control Algorithm

VI. PERFORMANCE

A. Testbed and Kernel Implementation

B. Case Study: Static Scenario

C. Case Study: Dynamic Scenario I

D. Case Study: Dynamic Scenario II

E. Overall Evaluation

F. Torrents: A Real-Time Application Presently Using TCP Download

G. Coding for FAST TCP in NS2

VII. Future Enhancement

VIII. Conclusion

REFERENCES

## I. INTRODUCTION

Congestion control is a distributed algorithm to share network resources among competing users. It is important in situations where the availability of resources and the set of competing users vary over time unpredictably, yet efficient sharing is desired. These constraints, unpredictable supply and demand and efficient operation, necessarily lead to feedback control as the preferred approach, where traffic sources dynamically adapt their rates to congestion in their paths. On the Internet, this is performed by the Transmission Control Protocol (TCP) in the source and destination computers involved in data transfers. The congestion control algorithm in the current TCP, which we refer to as Reno, was developed in 1988 and has gone through several enhancements since. It has performed remarkably well and is generally believed to have prevented severe congestion as the Internet scaled up by six orders of magnitude in size, speed, load, and connectivity. It is also well known, however, that as the bandwidth-delay product continues to grow, TCP Reno will eventually become a performance bottleneck itself. The following four difficulties contribute to the poor performance of TCP Reno in networks with large bandwidth-delay products:

1) At the packet level, linear increase by one packet per Round-Trip Time (RTT) is too slow, and multiplicative decrease per loss event is too drastic.

2) At the flow level, maintaining large average congestion windows *requires* an extremely *small* equilibrium loss probability.

3) At the packet level, oscillation is unavoidable because TCP uses a binary congestion signal (packet loss).

4) At the flow level, the dynamics are unstable, leading to severe oscillations that can only be reduced by accurate estimation of the packet loss probability and a stable design of the flow dynamics.

We explain these difficulties in detail in Section II. In our project, we motivate the delay-based approach. Delay-based congestion control has been proposed before; its advantage over the loss-based approach is small at low speed, but decisive at high speed, as we will argue below. As has been pointed out, delay can be a poor or untimely predictor of packet loss, and therefore using a delay-based algorithm to augment the basic AIMD (Additive Increase Multiplicative Decrease) algorithm of TCP Reno is the wrong approach to address the above difficulties at large windows. Instead, a new approach that fully exploits delay as a congestion measure, augmented with loss information, is needed. FAST TCP uses this approach. Using queueing delay as the congestion measure has two advantages.

First, queueing delay can be more accurately estimated than loss probability, both because packet losses in networks with large bandwidth-delay product are rare events (probability on the order of 10⁻⁸ or smaller), and because loss samples provide coarser information than queueing delay samples. Indeed, measurements of delay are noisy, just as those of loss probability. Each measurement of packet loss (whether a packet is lost) provides one bit of information for the filtering of noise, whereas each measurement of queueing delay provides multi-bit information. This makes it easier for an equation-based implementation to stabilize a network into a steady state with a target fairness and high utilization. Second, the dynamics of queueing delay seems to have the right scaling with respect to network capacity. This helps maintain stability as a network scales up in capacity. In Section III, we explain how we exploit these advantages to address the four difficulties of TCP Reno.

In Section IV, we lay out an architecture to implement our design. Even though the discussion is in the context of FAST TCP, the architecture can also serve as a general framework to guide the design of other congestion control mechanisms, not necessarily limited to TCP, for high-speed networks. The main components in the architecture can be designed separately and upgraded asynchronously. Unlike the conventional design, FAST TCP can use the same window and burstiness control algorithms regardless of whether a source is in the normal state or the loss recovery state. This leads to a clean separation of components in both functionality and code structure. We then present an overview of some of the algorithms implemented in our current prototype.

In Section V, we present a mathematical model of the window control algorithm. We prove that FAST TCP has the same equilibrium properties as TCP Vegas. In particular, it does not penalize flows with large propagation delays, and it achieves weighted proportional fairness. For the special case of single bottleneck link with heterogeneous flows, we prove that the window control algorithm of FAST is globally stable, in the absence of feedback delay. Moreover, starting from any initial state, a network converges exponentially to a unique equilibrium.

In Section VI, we present results to illustrate the throughput, fairness, stability, and responsiveness of FAST TCP, in the presence of delay and in heterogeneous and dynamic environments where flows of different delays join and depart asynchronously. We compare the performance of FAST TCP with Reno, HSTCP (HighSpeed TCP), and STCP (Scalable TCP), using their default parameters. In these experiments, FAST TCP achieved the best performance under each criterion, while HSTCP and STCP improved throughput and responsiveness over Reno at the cost of fairness and stability. We conclude in Section VIII.

## II. PROBLEMS AT LARGE WINDOWS

A congestion control algorithm can be designed at two levels. The flow-level (macroscopic) design aims to achieve high utilization, low queueing delay and loss, fairness, and stability. The packet-level design implements these flow-level goals within the constraints imposed by end-to-end control. Historically, for TCP Reno, the packet-level implementation was introduced first. The resulting flow-level properties, such as fairness, stability, and the relationship between equilibrium window and loss probability, were then understood as an afterthought. In contrast, the packet-level designs of HSTCP, STCP, and FAST TCP are explicitly guided by flow-level goals.

We elaborate in this section on the four difficulties of TCP Reno listed in Section I. It is important to distinguish between packet-level and flow-level difficulties because they must be addressed by different means.

### A. Packet and Flow Level Modeling

The congestion avoidance algorithm of TCP Reno and its variants has the form of AIMD. The pseudocode for window adjustment is:

Ack: w ← w + 1/w

Loss: w ← w − (1/2) w

This is a packet-level model, but it induces certain flow-level properties such as throughput, fairness, and stability. These properties can be understood with a flow-level model of the AIMD algorithm. The window **wi(t)** increases by 1 packet per RTT and decreases per unit time by

**xi(t) pi(t) · (1/2) · (4/3) wi(t)** packets,

where

**xi(t) := wi(t)/Ti(t)** packets per second,

**Ti(t)** is the round-trip time, and **pi(t)** is the (delayed) end-to-end loss probability in period **t**. Here **4wi(t)/3** is the peak window size that gives the "average" window of **wi(t)**. Hence, a flow-level model of AIMD is:

**ẇi(t) = 1/Ti(t) − (2/3) · (wi²(t)/Ti(t)) · pi(t)** (1)

Setting **ẇi(t) = 0** in (1) yields the well-known **1/√p** formula for TCP Reno, which relates loss probability to window size in equilibrium:

**wi = √(3/(2pi))** (2)

In summary, (1) and (2) describe the flow-level dynamics and the equilibrium, respectively, for TCP Reno. It turns out that different variants of TCP all have the same dynamic structure at the flow level.
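To make the flow-level relationship concrete, the sketch below numerically integrates the AIMD model (1) for a single flow and checks that the window settles at the equilibrium predicted by the 1/√p formula (2). The RTT and loss probability values are illustrative, not taken from the text.

```python
# Euler integration of the flow-level AIMD model (1):
#   dw/dt = 1/T - (2/3) * (w**2 / T) * p
# with a fixed RTT T and loss probability p. The trajectory should
# settle at the equilibrium of (2): w* = sqrt(3 / (2 p)).

import math

def simulate_reno_flow_level(p, T=0.1, w0=1.0, dt=0.01, steps=20_000):
    """Integrate dw/dt = 1/T - (2/3) * (w**2 / T) * p for steps*dt seconds."""
    w = w0
    for _ in range(steps):
        w += dt * (1.0 / T - (2.0 / 3.0) * (w * w / T) * p)
    return w

p = 1e-4                              # illustrative equilibrium loss probability
w_eq = math.sqrt(3.0 / (2.0 * p))     # equilibrium window from (2), ~122.5 packets
w_sim = simulate_reno_flow_level(p)
print(round(w_sim, 1), round(w_eq, 1))   # the two values agree closely
```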

By defining

**ki(wi, Ti) := 1/Ti** and **ui(wi, Ti) := 1.5/wi²**,

and noting that the right-hand side of (1) equals **ki · (1 − pi/ui)**, equation (1) can be rewritten as

**ẇi(t) = ki(t) · (1 − pi(t)/ui(t))** (3)

where we have used the shorthand **ki(t) := ki(wi(t), Ti(t))** and **ui(t) := ui(wi(t), Ti(t))**. Equation **(3)** can be used to describe all known TCP variants: different variants differ in their choices of the gain function **ki** and the marginal utility function **ui**, and in whether the congestion measure **pi** is loss probability or queueing delay.

Next, we illustrate the equilibrium and dynamics problems of TCP Reno, at both the packet and flow levels, as bandwidth-delay product increases.

### B. Equilibrium Problem

The equilibrium problem at the flow level is expressed in (2): the end-to-end loss probability must be exceedingly small to sustain a large window size, making the equilibrium difficult to maintain in practice as the bandwidth-delay product increases.

Even though equilibrium is a flow-level notion, this problem manifests itself at the packet level, where a source increments its window too slowly and decrements it too drastically. When the peak window is 80,000 packets (corresponding to an "average" window of 60,000 packets), which is necessary to sustain 7.2 Gbps using 1,500-byte packets with an RTT of 100 ms, it takes 40,000 RTTs, or almost 70 minutes, to recover from a single packet loss. This is illustrated in Figure 1(a), where the sizes of the window increment per RTT and decrement per loss, 1 and 0.5wi respectively, are plotted as functions of wi. The increment function for Reno (and for HSTCP) is almost indistinguishable from the x-axis. Moreover, the gap between the increment and decrement functions grows rapidly as wi increases. Since the average increment and decrement must be equal in equilibrium, the required loss probability can be exceedingly small at large wi. This picture is thus simply a visualization of (2). To address the difficulties of Reno at large window sizes, HSTCP and STCP increase more aggressively and decrease more gently.
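The figures quoted above can be checked with a few lines of arithmetic; the sketch below reproduces the 7.2 Gbps throughput and the roughly 70-minute recovery time from the stated window, packet size, and RTT.

```python
# Back-of-the-envelope check of the numbers in the text: an "average"
# window of 60,000 packets of 1,500 bytes over a 100 ms RTT, and the
# time Reno needs to climb back from 40,000 to 80,000 packets, at one
# packet per RTT, after a single loss halves the peak window.

packet_bits = 1500 * 8        # 1,500-byte packets
avg_window = 60_000           # packets in flight, on average
rtt = 0.1                     # 100 ms round-trip time

throughput_bps = avg_window * packet_bits / rtt
print(throughput_bps / 1e9)   # -> 7.2 (Gbps)

peak = 80_000
rtts_to_recover = peak - peak // 2   # +1 packet per RTT from 40,000 back to 80,000
minutes = rtts_to_recover * rtt / 60
print(rtts_to_recover, round(minutes, 1))   # -> 40000 66.7
```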

### C. Dynamic Problems

The causes of the oscillatory behavior of TCP Reno lie in its design at both the packet and flow levels. At the packet level, the choice of a binary congestion signal necessarily leads to oscillation, and the parameter setting in Reno worsens the situation as the bandwidth-delay product increases. At the flow level, the system dynamics given by (1) is unstable at large bandwidth-delay products. These must be addressed by different means, as we now elaborate.

[Figure not available in this excerpt]

**Fig. 1.** (a) Window increment per RTT and decrement per loss as functions of the current window, in the packet-level implementation. (b) Window update as a function of distance from equilibrium in FAST TCP.

[Figure not available in this excerpt]

**Fig. 2.** (a) Operating points of TCP algorithms: R-Reno, C-CARD, D-DUAL, F-Vegas, T-TFRC.

Figure 2(a) illustrates the operating points chosen by various TCP congestion control algorithms, using the single-link, single-flow scenario. It shows queueing delay as a function of window size. Queueing delay starts to build up after point *C*, where the window equals the bandwidth-propagation-delay product, until point *R*, where the queue overflows. Since Reno oscillates around point *R*, the peak window size goes beyond point *R*. The minimum window in steady state is half of the peak window. This is the basis for the rule of thumb that the bottleneck buffer should be at least one bandwidth-delay product: the minimum window will then be above point *C*, and the buffer will not empty in steady-state operation, yielding full utilization.

In the loss-based approach, full utilization, even if achievable, comes at the cost of severe oscillations and potentially large queueing delay. The DUAL scheme proposes to oscillate around point *D*, the midpoint between *C* and *R*, where the buffer is half-full. DUAL increases the congestion window linearly by one packet per RTT as long as queueing delay is less than half of the maximum value, and decreases it multiplicatively by a factor of 1/8 when queueing delay exceeds half of the maximum value. The scheme CARD (Congestion Avoidance using Round-trip Delay) proposes to oscillate around point *C* through AIMD with the same parameters **(1, 1/8)** as DUAL, based on the ratio of round-trip delay and delay gradient, to maximize power. In all these schemes, the congestion signal is used as a binary signal, and hence the congestion window *must* oscillate.

The congestion window can be stabilized only if multi-bit feedback is used. This is the approach taken by the equation-based algorithm, where the congestion window is adjusted based on the estimated loss probability in an attempt to stabilize around a target value given by (2). Its operating point is *T* in Figure 2(b), near the overflow point. This approach eliminates the oscillation due to packet-level AIMD, but two difficulties remain at the flow level.

*First*, equation-based control requires the explicit estimation of the end-to-end loss probability. This is difficult when the loss probability is small. *Second*, even if the loss probability can be perfectly estimated, Reno's flow dynamics, described by equation (1), leads to a feedback system that becomes unstable as feedback delay increases and, more strikingly, as network capacity increases. The instability at the flow level can lead to severe oscillations that can be reduced *only* by stabilizing the flow-level dynamics. We will return to both points in Section III.

## III. Delay-Based Approach

In this section, we motivate the delay-based approach to address the four difficulties at large window sizes.

### A. Motivation

Although improved loss-based protocols such as HSTCP and STCP have been proposed as replacements for TCP Reno, we showed in [5] that they do not address all four problems (Section I) of TCP Reno. To illustrate this, we plot the increment and decrement functions of HSTCP and STCP in Figure 1(a), alongside TCP Reno. Both protocols upper bound **TCP Reno**: each increases more aggressively and decreases less drastically, so that the gap between the increment and decrement functions is narrowed. This means that, in equilibrium, both HSTCP and STCP can tolerate larger loss probabilities than TCP Reno, thus achieving larger equilibrium windows. However, neither solves the dynamics problems at both the packet and the flow levels.
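To make the comparison concrete, the sketch below tabulates the per-RTT increment and per-loss decrement for Reno and STCP as the window grows. STCP's commonly cited parameters (an increase of 0.01w per RTT and a decrease of 0.125w per loss) are an assumption here, and HSTCP is omitted because its increase/decrease functions are table-driven.

```python
# Per-RTT increment and per-loss decrement as functions of the current
# window w, for Reno and Scalable TCP. STCP's commonly cited parameters
# (increase 0.01*w per RTT, decrease 0.125*w per loss) are assumed.
# Note how Reno's increment/decrement gap grows with w, while STCP's
# multiplicative rules keep the gap a constant factor.

def reno(w):
    return 1.0, 0.5 * w            # (increment per RTT, decrement per loss)

def stcp(w):
    return 0.01 * w, 0.125 * w     # multiplicative increase and decrease

for w in (1_000, 10_000, 100_000):
    inc_r, dec_r = reno(w)
    inc_s, dec_s = stcp(w)
    print(f"w={w}: Reno gap {dec_r / inc_r:.0f}x, STCP gap {dec_s / inc_s:.1f}x")
```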

In [5], we show that the congestion windows in Reno, HSTCP, and STCP all evolve according to:

**ẇi(t) = ki(t) · (1 − pi(t)/ui(t))** (4)

where **ki(t) := ki(wi(t), Ti(t))** and **ui(t) := ui(wi(t), Ti(t))**. Moreover, the dynamics of FAST TCP also takes the same form. They differ only in the choice of the gain function **ki(wi, Ti)**, the marginal utility function **ui(wi, Ti)**, and the end-to-end congestion measure **pi**. Hence, at the flow level, there are only three design decisions:

- **ki(wi, Ti)**: the choice of the gain function **ki** determines dynamic properties such as stability and responsiveness, but does not affect the equilibrium properties.

- **ui(wi, Ti)**: the choice of the marginal utility function **ui** mainly determines equilibrium properties such as the equilibrium rate allocation and its fairness.

- **pi**: in the absence of explicit feedback, the choice of the congestion measure **pi** is limited to loss probability or queueing delay. The dynamics of **pi(t)** is determined at the links.

The design choices in Reno, HSTCP, STCP, and FAST are shown in Table I:

**Table I**

[Table not available in this excerpt]

This common model (4) can be interpreted as follows: the goal at the flow level is to equalize the marginal utility **ui(t)** with the end-to-end measure of congestion **pi(t)**. This interpretation immediately suggests an equation-based packet-level implementation where *both* the direction and size of the window adjustment **ẇi(t)** are based on the difference between the ratio **pi(t)/ui(t)** and the target of 1. Unlike the approach taken by Reno, HSTCP, and STCP, this approach eliminates packet-level oscillations due to the binary nature of the congestion signal. It however requires the *explicit* estimation of the end-to-end congestion measure **pi(t)**.
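A minimal sketch of such an equation-based update, using Reno's flow-level choices of gain and marginal utility from (3) purely as an example:

```python
# A discretized form of the common model (4): each RTT the window moves
# by kappa * (1 - p/u(w)), so both the direction and the size of the
# step depend on how far the ratio p/u is from its target of 1.
# Reno's flow-level choices kappa = 1 (packet per RTT) and
# u(w) = 1.5 / w**2 from (3) are used only as an illustration.

def marginal_utility(w):
    return 1.5 / (w * w)

def step(w, p, kappa=1.0):
    """One per-RTT window update driven by the gap between p/u(w) and 1."""
    return w + kappa * (1.0 - p / marginal_utility(w))

p = 1.5e-4            # measured congestion; target window is sqrt(1.5/p) = 100
w = 50.0
for _ in range(400):
    w = step(w, p)
print(round(w))       # settles at 100, where p/u(w) = 1
```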

Without explicit feedback, **pi(t)** can only be loss probability, as used in TFRC [34], or queueing delay, as used in TCP Vegas [8] and FAST TCP. Queueing delay can be more accurately estimated than loss probability, both because packet losses in networks with large bandwidth-delay products are rare events (probability on the order of 10⁻⁸ or smaller), and because loss samples provide coarser information than queueing delay samples. Indeed, each measurement of packet loss (whether a packet is lost) provides one bit of information for the filtering of noise, whereas each measurement of queueing delay provides multi-bit information. This allows an equation-based implementation to stabilize a network into a steady state with a target fairness and high utilization.

At the flow level, the dynamics of the feedback system must be stable in the presence of delay, as the network capacity increases. Here, again, queueing delay has an advantage over loss probability as a congestion measure: the dynamics of queueing delay seems to have the right scaling with respect to network capacity. This helps maintain stability as network capacity grows.

### B. Implementation Strategy

The delay-based approach, with proper flow- and packet-level designs, can address the four difficulties of Reno at large windows. First, by explicitly estimating how far the current state **pi(t)/ui(t)** is from the equilibrium value of 1, our scheme can drive the system rapidly, yet in a fair and stable manner, toward the equilibrium. The window adjustment is small when the current state is close to equilibrium and large otherwise, independent of where the equilibrium is, as illustrated in Figure 1(b). This is in stark contrast to the approach taken by Reno, HSTCP, and STCP, where the window adjustment depends only on the current window size and is independent of where the current state is with respect to the target (compare Figures 1(a) and (b)). Like the equation-based scheme in [34], this approach avoids the problem of slow increase and drastic decrease in Reno as the network scales up.

Second, by choosing a multi-bit congestion measure, this approach eliminates the packet-level oscillation due to binary feedback, avoiding Reno's third problem.

Third, using queueing delay as the congestion measure **pi(t)** allows the network to stabilize in the region below the overflow point, around point **F** in Figure 2(b), when the buffer size is sufficiently large. Stabilization at this operating point eliminates large queueing delay and unnecessary packet loss. More importantly, it makes room for buffering "mice" traffic. To avoid the second problem in Reno, where the required equilibrium congestion measure (loss probability for Reno, and queueing delay here) is too small to estimate in practice, the algorithm must adapt its parameter **αi** with capacity to maintain a small but sufficient queueing delay.

Finally, to avoid the fourth problem of Reno, the window control algorithm must be stable, in addition to being fair and efficient, at the flow level. The use of queueing delay as a congestion measure facilitates the design, as queueing delay naturally scales with capacity [22], [23], [24].

The design of TCP congestion control algorithm can thus be conceptually divided into two levels:

**1)** At the flow level, the goal is to design a class of function pairs, **ui(wi, Ti)** and **ki(wi, Ti)**, so that the feedback system described by (4), together with the link dynamics in **pi(t)** and the interconnection, has an equilibrium that is fair and efficient, and that the equilibrium is stable in the presence of feedback delay.

**2)** At the packet level, the design must deal with issues that are ignored by the flow-level model or modeling assumptions that are violated in practice, in order to achieve these flow-level goals. Those issues include burstiness control, loss recovery, and parameter estimation.

The implementation then proceeds in three steps:

1) Determine various system components;

2) Translate the flow-level design into packet-level algorithms;

3) Implement the packet-level algorithms in a specific operating system.

The actual process iterates intimately between flow and packet level designs, between theory, implementation, and experiments, and among the three implementation steps.

The emerging theory of large-scale networks under end-to-end control forms the foundation of the flow-level design. The theory plays an important role by providing a framework to understand issues, clarify ideas, and suggest directions, leading to a robust and high performance implementation.

We lay out the architecture of FAST TCP next.

## IV. Architecture and Algorithms

We separate the congestion control mechanism of TCP into four components in Figure 3. These four components are functionally independent so that they can be designed separately and upgraded asynchronously. In this section, we focus on the two parts that we have implemented in the current prototype.

[Figure not available in this excerpt]

**Fig. 3.** Architecture of FAST TCP.

The data control component determines which packets to transmit, window control determines how many packets to transmit, and burstiness control determines when to transmit these packets. These decisions are made based on information provided by the estimation component. Window control regulates packet transmission at the RTT timescale, while burstiness control works at a smaller timescale. In the following subsections, we provide an overview of window control and algorithms implemented in our current prototype.
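Although the window control algorithm itself falls outside this excerpt, the periodic update rule reported for FAST TCP in the literature can be sketched as follows; the values of α and γ below are illustrative assumptions, as is the single-bottleneck queueing model used to close the loop.

```python
# A minimal sketch of FAST TCP's periodic window update, as reported in
# the FAST TCP literature (the rule itself is outside this excerpt):
#   w <- min(2w, (1-gamma)*w + gamma*(baseRTT/RTT * w + alpha))
# where baseRTT is the minimum observed RTT and alpha is the target
# number of the flow's own packets queued in the network. The alpha,
# gamma, and link values below are illustrative assumptions.

def fast_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One window-control step on the RTT timescale."""
    return min(2.0 * w, (1.0 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

# Close the loop with a single bottleneck of capacity c packets/sec:
# the flow's queueing delay is its own backlog divided by c.
c, base_rtt = 10_000.0, 0.1
w = 100.0
for _ in range(200):
    queue_delay = max(w - c * base_rtt, 0.0) / c
    w = fast_update(w, base_rtt, base_rtt + queue_delay)
print(round(w))   # converges to c*base_rtt + alpha = 1200 packets
```

At the fixed point the update reduces to w = (baseRTT/RTT)·w + α, so the flow keeps exactly α packets queued, consistent with the delay-based operating point F discussed in Section III.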

**[...]**

Christo Ananth (Author), 2017, *Fast Active Queue Management Stability Transmission Control Protocol (FAST TCP)*, Munich, GRIN Verlag, https://www.grin.com/document/376497
