QoS Aware TCP Congestion Control Variants for Processing Multimedia Data in Mobile Ad-hoc Networks

Thesis (M.A.), 2019

138 Pages





List of Figures

List of Tables

List of Acronyms

1 Introduction
1.1 Background
1.1.1 Mobile Ad-hoc Networks
1.2 Transport Layer for Ad-hoc Networks
1.2.1 Modified TCP
1.2.2 Cross Layer Solution in TCP
1.3 Transport Layer Protocol for Ad-hoc Networks
1.3.1 TCP over Ad-hoc Network
1.3.2 TCP overview
1.3.3 TCP split Approach
1.3.4 TCP F
1.3.5 TCP Bus
1.3.6 Ad-hoc Transmission Protocol (ATP)
1.4 Congestion control in TCP
1.4.1 Tahoe TCP
1.4.2 Reno TCP
1.4.3 New Reno TCP
1.5 Motivation for the research work
1.6 Scope of the Research Work
1.7 Definition of the Problem
1.8 Aim and Objectives of the Proposed Work
1.9 Organization of the Thesis
1.10 Chapter Summary

2 Literature Work
2.1 Background
2.2 Related Work on TCP Congestion Control Variants
2.2.1 High Speed TCP
2.2.2 TCP NewReno
2.3 Related Work on Multimedia Data Processing in MANETs
2.3.1 UDP Streaming
2.3.2 Multimedia Streaming-HTTP
2.3.3 Content Distribution Network (CDN)
2.3.4 Voice over IP
2.3.5 Multimedia Networking Protocols: RTP, RTP Protocol Components, SIP (Session Initiation Protocol), RTSP (Real Time Streaming Protocol), Multimedia Network Support, Dimensioning of Best Effort Networks
2.4 Chapter Summary

3 TCP Congestion Control Techniques
3.1 Background
3.2 TCP Congestion Control Variants
3.3 TCP Variants
3.3.1 New Re-Transmission Mechanism
3.3.2 Congestion Avoidance
3.3.3 Modified Slow- start
3.3.4 TCP- Cubic
3.3.5 TCP-Hybla
3.3.6 Highspeed-TCP
3.3.7 Scalable- TCP
3.3.8 TCP-Westwood
3.3.9 TCP-Veno
3.3.10 TCP-YeAH
3.3.11 TCP-Illinois
3.3.12 H-TCP
3.3.13 TCP-Low Priority (TCP-LP)
3.3.14 Compound-TCP
3.3.15 TCP-Westwood NR
3.4 Result and Discussion
3.4.1 TCP Performance Under Congestion
3.4.2 Packet Drop Rate
3.4.3 Latency
3.4.4 Throughput
3.4.5 Packet Drop Rate
3.4.6 Latency
3.5 Chapter Summary

4 Modified HSTCP
4.1 Background
4.2 Objectives
4.3 M-HSTCP Algorithm
4.4 Mathematical Model
4.5 Simulation Results
4.6 Findings and Interpretations
4.7 Chapter Summary

5 Switching TCP
5.1 Background
5.2 Switching TCP Algorithm
5.3 Simulation Results
5.4 Findings and Interpretations
5.5 Mathematical Model
5.5.1 System Model
5.6 Chapter Summary

6 Switching TCP in Multimedia Data
6.1 Background
6.2 Objectives
6.3 Processing of Multimedia Data using Switching TCP
6.4 Result Analysis
6.4.1 Frame Rate
6.4.2 Frame Loss Rate
6.4.3 Error Rate
6.5 Finding and Interpretations
6.6 Chapter Summary

7 Conclusion


Publications Details


The digital era of communications has reshaped the globe, moving from wired networks to wireless networks and from offline media to online media. The mobile ad-hoc network is one of the major categories of network on which users depend for accessing services. As the number of users accessing data in mobile ad-hoc networks increases, congestion becomes the major challenge. The data transmitted and received in mobile ad-hoc networks is largely multimedia, i.e. audio, video and animations, and multimedia data consumes more bandwidth in a mobile ad-hoc network than other data payloads. Network performance can be optimized by reducing congestion, yet it is highly difficult to deliver the performance users expect in mobile ad-hoc networks. In such networks, congestion control at the transport layer plays the greater role. Consequently, various compound designs are in use, and the system has to be adjusted by the service providers.

Quality of Service (QoS) is strongly demanded by users of mobile ad-hoc networks, and it is difficult to attain with a small number of resources. TCP, the standard transport-layer protocol, comes in different flavours; among these variants, NewReno and HSTCP are the most commonly used. However, the performance of TCP NewReno and HSTCP is very sensitive to sudden changes in traffic load. To achieve QoS in mobile ad-hoc networks, the problems related to congestion have to be addressed. The proposed work identifies a number of issues in improving QoS in mobile ad-hoc networks. Hence, the algorithms MHSTCP and SwTCP are designed and developed. MHSTCP and SwTCP attend to parameters such as bandwidth utilization, data rate, delay and energy. The proposed algorithms are simulated in Network Simulator-2 under varying impairments and the results are verified. Based on the simulation results, the proposed algorithms outperform the existing TCP congestion control variants in enhancing the QoS parameters in mobile ad-hoc networks.

List of Figures

[Illustration not included in this preview]

List of Tables

[Illustration not included in this preview]

List of Acronyms

[Illustration not included in this preview]

Chapter – 1 Introduction

Telecommunication is an electronic mode of networked communication that conveys information such as text, images, audio and video over wide distances. It connects people from different places so they can exchange information. From its inception until now, the growth of the telecommunication network has been quite astonishing. The evolution of technologies started from wired networks and moved to wireless networks, and after some decades switched quickly to mobile ad-hoc networks. These attract a great number of people because of device portability, handiness, location management, business features, fun and games, user friendliness, versatility and so on. Owing to these attributes, the mobile ad-hoc network has become more popular than conventional wireless networks and is reaching into all fields across the globe.

1.1 Background

A network exchanges messages between people by following generic protocols, which vary depending on user needs and the medium connecting them. Networks are growing across the globe at an unpredictable pace, and network research has been advancing steadily due to technology improvements, the types of media used, the number of participants availing the services, and much more. Together these create many research issues yet to be resolved, because a 'network' connects people over wide distances through chatting, information sharing, files, documents, greetings, multimedia sharing, file surfing and much more. Generally, networks fall into two categories: wired networks and wireless networks. Wired communication uses cables and wires to create the connection; wireless communication requires no such cables, with air and radio waves serving as the medium.

Nowadays people show much interest in wireless communication due to its lower cost and portable access. Some popular wireless networks are Bluetooth, Wi-Fi, ZigBee, wireless sensor networks, mobile networks, body area networks and so on. Wireless networks are further classified into two major types: infrastructure-based networks and infrastructure-less networks. An infrastructure-based network requires an access point to serve all the subscribers depending on the network, whereas infrastructure-less networks eliminate the access point for communication. Both types have advantages and disadvantages that change with user demand and the services availed. To obtain useful service, users have to follow certain protocols; those protocols determine the quality of the service given by the service provider. This applies only to infrastructure-based networks; in the other type, every user can act both as a host and as a provider of the information needed by the participants.

1.1.1 Mobile Ad hoc Networks

An infrastructure-less network sometimes works much better than other traditional networks in today's scenarios. The Mobile Ad hoc Network (MANET) is one of them; 'ad hoc' is derived from the Latin for 'for this purpose only'. It is highly dynamic and can be altered at any point of time. It is also known as a self-configuring network, easily adjusting when new nodes are added and shrinking when a node leaves the topology; this flexible adjustment attracts a huge number of users. It eliminates the access point and lets each device act as a router, forwarding and receiving messages over wireless links. A MANET creates an autonomous environment within communication range, and each node discovers its neighbouring nodes by sending hello messages. Neighbour nodes are treated as routers, and the route is extended until it reaches the destination. Every node is constrained by bandwidth, energy and other parameters when communicating. Nodes may also freely join other networks and leave again in certain situations.

A source node cannot communicate with a destination node directly if the destination resides out of range; in this case, the source must identify possible intermediate nodes that are inside its range and send the message through them. Each intermediate node must also determine whether the destination is near or far; if it is too far, it has to find another intermediate to forward the data received from the source. MANET nodes use three types of messages to establish a route between the communicators: Route Request (RREQ), Route Reply (RREP) and Route Error (RERR). The Route Request discovers a path for the data transmission; the Route Reply conveys that the destination or an intermediate node is ready to take part in the transmission; and if any error occurs while data is flowing, a Route Error message is sent between the intermediate nodes. There is a possibility of many duplicate RREQs circulating in the network, which leads to a control-overhead issue.
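The RREQ flooding described above can be sketched as a breadth-first search over the node topology. This is a minimal illustration, not the exact AODV procedure (sequence numbers, per-id duplicate suppression and RERR handling are omitted); the function name is illustrative.

```python
from collections import deque

def discover_route(adjacency, source, destination):
    """Simplified route discovery: the source floods RREQs hop by hop;
    the first RREQ to reach the destination fixes the path, along which
    an RREP would travel back to the source."""
    visited = {source: None}          # node -> previous hop
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:       # destination answers with an RREP
            path = []
            while node is not None:
                path.append(node)
                node = visited[node]
            return list(reversed(path))
        for neighbour in adjacency.get(node, []):
            if neighbour not in visited:   # duplicate RREQs are dropped
                visited[neighbour] = node
                queue.append(neighbour)
    return None                        # no route could be established

# Topology of Figure 1.1: A -- B -- C -- D
topology = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(discover_route(topology, 'A', 'D'))   # ['A', 'B', 'C', 'D']
```

Dropping already-visited neighbours is what keeps the duplicate-RREQ control overhead bounded in this sketch.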

[Illustration not included in this preview]

Figure 1.1- Sample MANET Scenario

Figure 1.1 shows a sample MANET scenario. Here, node 'A' does not reside in the coverage area of node 'D' but wishes to send a message to 'D'. It uses two intermediate nodes, 'B' and 'C'; the communication flow is A -> B -> C -> D. If node 'C' leaves the group, node 'B' takes over the responsibility and does the job of node 'C'; this is possible only if node 'B' is within the coverage area. Like normal networks, a MANET has three modes of communication: unicast, multicast and broadcast. Its individual characteristics are listed below.

- Wireless Communication Links: Radio waves act as the intermediate medium for conveying messages.
- Infrastructure-Less and Autonomous: No prior setup is needed for a MANET.
- Multi-Hop Routing: Many intermediate nodes may participate in exchanging messages.
- Dynamic Network Topology: Any node may join or leave the network.
- Heterogeneity in Devices: Devices have the capacity to accept any type of request; even multimedia transmission is possible.
- Energy Constrained: Energy consumption plays a major role in a MANET, and devices are much constrained in the energy spent during data transfer.
- Bandwidth Constrained: Bandwidth determines the capacity of the network to perform the communication task effectively.
- Limited Security: MANETs pose many security challenges; due to the dynamic topology, any node may turn malicious and steal user data.
- Frequent Routing Updates: Every change is recorded in the routing table to alert the intermediate nodes and reduce errors in data transfer.

The following section shows the advantages and disadvantages of MANET.

The Advantages are,

- MANET offers information about geographical position and its services.
- High robustness and self-configuration, which lets each node act as both a router and a host.
- Improved scalability: any number of nodes can be included during message transmission.
- A MANET can be set up at any time and at any place without predefined configurations.
- It is easy to deploy.

The Disadvantages are,

- There is no defined structure
- Lack of cooperation
- Battery power is limited
- Due to its poor authentication, any node may be an attacker
- No centralized server to handle requests at a single point

1.2 Transport Layer for Ad-hoc Networks

The transport layer takes responsibility for providing end-to-end reliable service in the layering model. Because a MANET has a dynamic topology, the Transmission Control Protocol (TCP) plays a vital role in the retransmission of packets; retransmission reduces the number of missing packets within a reasonable time. TCP should therefore be designed to achieve fewer errors during transmission.

TCP is one of the dominant protocols offering Internet service to users connected through network applications. It operates over the Internet Protocol (IP), and the combination is termed the TCP/IP suite. The suite includes both TCP and the User Datagram Protocol (UDP); UDP is mainly suitable for multimedia applications. Within the transport layer, TCP is a complex protocol because it handles sequencing, in-order delivery of packets, a connection-oriented structure, reliable service and end-to-end delivery. Beyond these tasks, it also performs congestion reduction and flow control of data, recovering lost packets when congestion occurs at the routers. Three types of approaches are available for observing transport-layer performance in a MANET.

1.2.1 Modified TCP

Small modifications can be made to TCP to increase network performance while still fitting the existing ad-hoc network and resuming normal data transfer. The Explicit Link Failure Notification (ELFN) protocol is an example of modified TCP. The primary objective of ELFN is to give the TCP sender information about link failures, node mobility, route failures and other events that reduce network performance.

Some salient features of TCP-ELFN are discussed below:

- Whenever a link fails, that link pings an ELFN message back to the source TCP connection; this helps trigger retransmission of packets and avoids 'host not reachable' errors. It happens at the MAC layer.
- When the source node receives the link-failure message, the route is re-computed for the ongoing connection and the sender enters a standby mode. A timer is assigned, and congestion is kept low by re-entering the slow-start phase.
- The RTO is also modified during probing.


The Advantages are:

- When the source node learns of a link failure, it helps find a neighbouring node quickly.
- It is independent of the routing protocols.

The Disadvantages are:

- Excessive use of bandwidth for probe-message transmission.
- Re-computation is needed every time to manage congestion.

1.2.2 Cross Layer Solution in TCP

The cross-layer approach strengthens TCP in ad-hoc networks by exposing, rather than hiding, information from the lower layers. Cross layers take responsibility for mitigating route-failure issues, predicting alternate routes, re-computing their cost and reducing delay factors.

Route Failure Prediction

Route prediction is triggered whenever a link failure occurs, and it seeks another possible path for transmitting the packets. This can be done by measuring a node's signal strength and mobility: the highest strength gives the least chance of dropping packets, while poorer strength frequently fails to carry them. It also depends on the movement of the neighbouring nodes and the adopted mobility model. At the same time, finding their routing information and history is a tedious task. The sequence of steps is: initially, the source node identifies the predicted route failure and initiates a new route request; a route error occurs when it fails to predict the node strength.
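A minimal sketch of the signal-strength test described above: a link whose strength is both falling and already weak is flagged so a new route request can be issued early. The -85 dBm threshold and the two-sample trend test are illustrative assumptions, not values from this thesis.

```python
def route_likely_to_fail(rssi_samples, threshold_dbm=-85.0):
    """Predict an imminent link failure from recent received-signal-
    strength readings (dBm, most recent last)."""
    if len(rssi_samples) < 2:
        return False                       # not enough history to judge
    latest, previous = rssi_samples[-1], rssi_samples[-2]
    weakening = latest < previous          # signal strength is falling
    below_floor = latest < threshold_dbm   # already close to unusable
    return weakening and below_floor

# A steadily weakening link triggers the search for an alternate route:
print(route_likely_to_fail([-70.0, -80.0, -88.0]))   # True
print(route_likely_to_fail([-90.0, -84.0]))          # False (recovering)
```

A real predictor would also weigh node mobility and the history mentioned above; this sketch only captures the strength criterion.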

1.3 Transport Layer Protocols for Ad-hoc Networks

A new rate-based characteristic has also been added to ad-hoc networks, which upgrades the performance of TCP. The rate-based method eliminates the window scheme and comprises three phases for dealing with under-utilization of resources.

1.3.1 TCP over Ad-hoc Networks

Both one-hop and multi-hop communication are possible in TCP over ad-hoc networks. Most operating topologies prefer multi-hop connections in order to provide reliable service across longer distances, which is done by checking the request and reply messages. Likewise, the characteristics of TCP over an ad-hoc network differ from those of TCP over a cellular network due to the architectural changes.

1.3.2 TCP Overview

TCP follows a window-based scheme for its transmission: the window size helps identify the number of unacknowledged packets during congestion. When a connection starts, the TCP sender adopts the slow-start phase and increases the window by one for each packet acknowledged, so the window doubles every round-trip time. It maintains a threshold value to decide whether congestion is being approached; once the window exceeds this threshold, it grows by only one segment per round trip. This phase is called congestion avoidance. The whole process depends on the network bandwidth. If a transmitted packet is lost, the window is collapsed, the lost packet is retransmitted, and the connection restarts from the slow-start phase.
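The window evolution described above can be traced per round trip. This is a simplified sketch assuming one update per RTT and a Tahoe-style collapse to one segment on loss; the function and parameter names are illustrative.

```python
def window_trace(ssthresh, rounds, loss_at=None):
    """Return the congestion window (in segments) at each RTT:
    exponential growth in slow start below ssthresh, linear growth in
    congestion avoidance above it, collapse to 1 segment on loss."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt == loss_at:
            ssthresh = max(cwnd // 2, 2)  # remember half the window
            cwnd = 1                      # restart from slow start
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: doubles per RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 per RTT
    return trace

print(window_trace(ssthresh=8, rounds=6))            # [1, 2, 4, 8, 9, 10]
print(window_trace(ssthresh=8, rounds=6, loss_at=3)) # [1, 2, 4, 8, 1, 2]
```

The second trace shows the transition the text describes: growth up to the loss event, then a restart of slow start with a lowered threshold.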

1.3.3 TCP Split Approach

TCP Split is an important approach for improving fairness in ad-hoc networks. Proxies partition the connection into separate zones and send notifications to the source node to perform retransmission of packets.


The Advantages are:

- Increased throughput
- Improved fairness

The Disadvantages are:

- It requires frequent updates
- More storage is needed
- Higher overhead

1.3.4 TCP F

Throughput can be increased by reducing the impact of link failures through the TCP F method, where 'F' denotes feedback. The feedback immediately reports a broken link, which is important for defusing the congestion issue, and reliability is also enhanced. A Route Failure Notification (RFN) is used to report link failures that happen at the intermediate nodes. The sender follows two states: an active state and a snooze state.


The Advantages are:

- A small message is used to handle the link-failure issue
- Congestion can be relieved using the buffering method

The Disadvantages are:

- The link failure must be detected in advance by intermediate nodes
- The transport layer overlaps with the network layer
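The two sender states described for TCP F can be sketched as a tiny state machine: on a Route Failure Notification the sender snoozes, freezing its window and timers, and a route re-establishment notice wakes it with the frozen values intact so no spurious slow start occurs. The class and method names here are illustrative, not from the thesis.

```python
class TcpFSender:
    """Two-state sketch of a TCP F sender (active / snooze)."""

    def __init__(self, cwnd=8):
        self.state, self.cwnd = 'active', cwnd

    def on_route_failure(self):      # RFN arrives from an intermediate node
        self.state = 'snooze'        # freeze cwnd and retransmit timers

    def on_route_restored(self):     # route re-establishment notice
        self.state = 'active'        # resume with the frozen values

    def can_send(self):
        return self.state == 'active'

s = TcpFSender()
s.on_route_failure()
print(s.can_send(), s.cwnd)   # False 8 -- the window survives the snooze
s.on_route_restored()
print(s.can_send(), s.cwnd)   # True 8
```

Freezing rather than collapsing the window is what distinguishes this feedback approach from plain TCP, which would misread the route failure as congestion.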

1.3.5 TCP Bus

TCP Bus follows the same principle as TCP F, using feedback as the metric to identify link breakage from the intermediate nodes. It uses a Localized Query (LQ) to repair corrupted paths and depends fully on the routing protocol.


The Advantages are:

- Packets are retransmitted quickly, improving network performance
- Selective acknowledgement

The Disadvantages are:

- More dependency on the routing protocol
- More loss due to poor buffer maintenance

1.3.6 Ad-hoc Transmission Protocol (ATP)

The ATP protocol is configured to resolve the congestion-oriented problems generated by TCP. The configuration is done by coordinating more than one layer: requests are initiated from the bottom layer to the upper layers, and information is collected for observing link failure. Selective Acknowledgements (SACK) are used to address congestion.


The Advantages are:

- Upgraded performance
- Reduced congestion
- Enhanced reliability

The Disadvantages are:

- Unable to achieve interoperability with TCP
- Low scalability

1.4 Congestion Control in TCP

Congestion occurs when packets arrive faster than the buffering capacity can absorb, causing a certain number of incoming packets to be dropped during transmission when the source fails to reduce its flow. TCP was initially designed to serve only wired networks, but was later also implemented in wireless networks. Because of the convenience of wireless access, many people use the network without any time limit; this creates a complex situation, and it is quite difficult to serve all users at once with a limited number of resources. The resulting congestion is handled in TCP by the slow-start and congestion-avoidance algorithms. Several congestion control schemes that serve TCP exist in MANETs.

1.4.1 Tahoe TCP

In the Tahoe method, the TCP sender always starts with the slow-start phase; when congestion is detected it either enters the congestion-avoidance state or returns to slow start. Whenever a new ACK arrives, it checks the threshold value against the current state before moving to the next state. Three kinds of events are handled while waiting for packets: a new ACK, duplicate ACKs and RTO timer expiration. If the timer expires, the unacknowledged packet is retransmitted quickly in order to reduce latency during data transfer.

1.4.2 Reno TCP

Reno TCP is an improved version of TCP Tahoe for recovering lost packets. It keeps refilling the packets dropped due to congestion; the communication path is commonly known as the 'pipe', and this view is very useful for fast recovery. Reno TCP enters the timeout phase when no acknowledgement is obtained.

1.4.3 New Reno TCP

NewReno TCP is a refinement of Reno TCP which provides much quicker retransmission when packets are dropped. Two kinds of ACK may be received during transmission: a full acknowledgement and a partial acknowledgement. A full acknowledgement covers everything outstanding up to the end of the current window, while a partial acknowledgement carries a sequence number higher than the lost packet but short of the window's end, confirming only part of the outstanding data. On a partial ACK, NewReno continues to retransmit the next lost packet without waiting for further consecutive duplicate ACKs or an RTO timer expiration.
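The differing loss reactions of the three variants above can be summarised in one function. This is a simplified sketch: fast-recovery window inflation and NewReno's partial-ACK bookkeeping are omitted, and the function name is illustrative.

```python
def react_to_loss(cwnd, variant, trigger):
    """Return the new (cwnd, ssthresh) after a loss signal.
    trigger is 'timeout' or 'dupacks' (three duplicate ACKs)."""
    ssthresh = max(cwnd // 2, 2)      # all variants halve the threshold
    if variant == 'tahoe' or trigger == 'timeout':
        return 1, ssthresh            # collapse and restart slow start
    # Reno and NewReno: fast retransmit + fast recovery -- halve the
    # window instead of collapsing it. NewReno differs from Reno only in
    # staying in recovery across partial ACKs (not modelled here).
    return ssthresh, ssthresh

print(react_to_loss(16, 'tahoe', 'dupacks'))   # (1, 8)
print(react_to_loss(16, 'newreno', 'dupacks')) # (8, 8)
print(react_to_loss(16, 'reno', 'timeout'))    # (1, 8)
```

The sketch makes the family resemblance explicit: on a timeout every variant behaves like Tahoe, and the loss-based refinements differ only in how gently they react to duplicate ACKs.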

1.5 Motivation for the Research Work

The purpose of congestion control is to transfer data packets across the network while reducing packet jamming. Transmitting multimedia data congestion-free in mobile ad-hoc networks is a challenging task. Some of the issues faced in providing congestion-free service in multimedia ad-hoc networks are listed below:

- In a mobile ad-hoc network, node mobility is a major issue: as the nodes move, the paths between them keep changing.
- The bandwidth required for normal data is small compared to multimedia data; the bandwidth bottleneck is a challenging problem.
- The buffer capacity behind the TCP congestion-window value is very small for multimedia data; the window should expand based on the type of data and the availability of resources.
- In tactical communication, transmission and reception should be free of data loss; losing even a single frame of multimedia data can render the communication ineffective. This vital issue must be addressed.

The problems above lead to increased data loss, error rate, delay and overhead, and decrease the frame rate of the network. It is therefore very important to design a TCP variant which outperforms the existing TCP variants and solves the issues present in existing multimedia networks.

A QoS-aware TCP variant faces many challenges in the design and implementation of congestion-controlled, error-free transmission. There is considerable scope and need to address the issues explained above in mobile networks.

1.6 Scope of the Research Work

Nowadays internet-based multimedia communication plays a crucial role in day-to-day life because it lets people connect and communicate live and easily. It works over both wired and wireless links, but for ad-hoc networks this type of communication is quite difficult to configure; in mobile ad-hoc networks it depends on the number of packets transferred correctly from the sender to the end receivers. The congestion control issues entailed by real-time applications in mobile networks form the scope of this research work. Providing QoS in wired and wireless networks is easier than in ad-hoc networking, where bandwidth resources must be allocated to users while keeping QoS at an optimum level.

The proposed research develops a TCP variant which controls congestion for real-time applications to enhance the QoS of mobile networks, measured as the number of frames transmitted (throughput). The proposed congestion control TCP variant attempts to increase throughput and thereby enhance QoS, attracting a greater number of users. The parameters considered within the scope of this research are delay, bandwidth and throughput; other qualitative QoS parameters (security, reliability, scalability and availability) are beyond the scope and may be considered in future work.

1.7 Definition of the Problem

In the modern era, multimedia applications are crucial because they receive the greatest attention from audiences. Such applications are used in every kind of network: wired and wireless, short distance and long distance, fixed setup and dynamic setup, and much more. People consider wireless networks better than wired ones, especially those operating without an existing infrastructure. It is therefore important to focus on multimedia applications over these infrastructure-less networks; the best example is the use of multimedia applications in mobile ad-hoc networks, where improving the Quality of Service (QoS) is also a tiresome job.

1.8 Aim and Objectives of the Proposed Work

The crucial aim of this research work is to propose congestion control algorithms for enhancing the QoS in mobile ad-hoc networks.

The objectives of the proposed work are:

- To propose congestion-reducing, delay-reducing, overhead-aware TCP variant algorithms for improving the QoS in mobile ad-hoc networks.
- To analyze and simulate the proposed algorithms for verifying the results.

1.9 Organization of the Thesis

This thesis contains seven chapters and is structured as follows. Chapter 1 presents the introduction, covering the background theory and structure of congestion control in mobile ad-hoc networks, together with the motivation, scope, aim and objectives of the proposed work. Chapter 2 lays out the related background studies and abstracts of previous research works which form the groundwork for this research. The TCP variants are demonstrated and compared in Chapter 3. The proposed congestion control algorithm with respect to the delay parameter is illustrated in Chapter 4. The main idea of this chapter is to describe the existing techniques for building a delay-reduced congestion control algorithm; it also discusses a new delay-reduced algorithm for increasing throughput and enhancing QoS, and presents the mathematical background necessary for the proposed delay-based congestion control algorithm for mobile networks.

The proposed modified HSTCP algorithm with respect to the congestion window is presented in Chapter 5. The main objective of this chapter is to bring resource consumption to a moderate level by eliminating imperfect users from the network and correctly assigning the resources to the perfect users.

The proposed Switching TCP algorithm with respect to bandwidth availability and the number of users is illustrated in Chapter 6. The main aim of this chapter is to reduce resource wastage when resource distribution is performed. It presents the new Switching TCP congestion control algorithm, which identifies the user demand before assigning resources to users; it further enhances QoS by reducing the resource-loss ratio and increasing throughput. A mathematical model is formulated for the proposed Switching TCP model.

The important characteristics and complexities of the newly developed congestion control algorithms, suggestions, further future directions and a generalized mathematical model appear in Chapter 7, placed before the references and appendices. Suitable images, tables, comparative results and examples are used throughout this thesis to help readers understand the research findings.

1.10 Chapter Summary

It is quite complex to enhance QoS in mobile networks compared to wired networks; in this mobile, moving environment, the allocation of resources becomes a tedious job. Multimedia data demands high-speed data access and other services, subject to several limitations such as low bandwidth availability, network selection and the number of users. Mobile ad-hoc networks also pose issues which affect QoS. By developing an efficient TCP congestion control scheme for mobile ad-hoc networks, QoS can be raised to the next level. This thesis focuses on solving the QoS issues in TCP congestion control that slow down QoS performance. The subsequent chapters review the literature and previous works which form the building blocks of the proposed TCP congestion control algorithms.

Chapter 2 Literature Work

This chapter details the TCP congestion control variants for QoS-aware processing in heterogeneous mobile networks. It also surveys multimedia processing algorithms and the requirements of various applications. For the past two to three decades the internet has ruled our world; we have changed our means of communication from wired to wireless networks, from offline to online shopping, and from audio data to multimedia data transfer. In every aspect, day-to-day life depends on multimedia communications.

2.1 Background

Heterogeneous networks are designed to provide continuous broadband service to a large number of users within the available bandwidth. Congestion control plays a vital role in satisfying users by providing resources based on time and bandwidth. TCP is the communication protocol of the transport layer, and there are, broadly, two types of TCP congestion control algorithms: loss-based and delay-based, each defining different features of the TCP protocol. Delay-based congestion control provides better efficiency, but the required waiting time is very high; in loss-based congestion control the number of packets lost to congestion is less controllable, but congestion in the network is reduced. Even so, the existing TCP variants have several drawbacks under high mobility, including large numbers of packet retransmissions, frequent feedback updates and so on. Apart from these, service-side factors such as delay problems, adaptive-playout jitter problems, signalling overheads and channel-modelling problems should be taken into consideration. Several QoS-based congestion control algorithms are available, of which some do not satisfy the required QoS levels.
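The loss-based versus delay-based distinction above comes down to which signal triggers a window reduction. The sketch below contrasts the two; the 1.2x delay factor is an illustrative assumption, not a value from any particular variant.

```python
def congestion_detected(mode, lost_packets, rtt_ms, base_rtt_ms,
                        delay_factor=1.2):
    """A loss-based sender (Tahoe/Reno/NewReno style) reacts only after
    packets are actually dropped; a delay-based sender (Vegas style)
    reacts as soon as the measured RTT rises well above the uncongested
    base RTT, i.e. as soon as queues start to build."""
    if mode == 'loss':
        return lost_packets > 0
    if mode == 'delay':
        return rtt_ms > delay_factor * base_rtt_ms
    raise ValueError(mode)

# A growing queue raises the RTT before any packet is dropped, so the
# delay-based sender backs off earlier:
print(congestion_detected('loss',  0, rtt_ms=130, base_rtt_ms=100))   # False
print(congestion_detected('delay', 0, rtt_ms=130, base_rtt_ms=100))   # True
```

This earlier reaction is the source of the trade-off described above: the delay-based sender avoids losses but spends more time waiting, while the loss-based sender fills the pipe until packets drop.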

With the tremendous growth of the internet, the use of mobile nodes and hence the size of mobile networks will also increase. It is a very challenging task to devise a novel, efficient TCP variant which overcomes the pitfalls of all the existing transport-layer TCP variants. A thorough study of all the TCP variants in various scenarios is required for a modular and analytical treatment of the TCP protocol.

The challenges for congestion control include available bandwidth, number of users, channel quality variation, buffering capacity, compression levels, spectral efficiency and high retransmission rates; together these parameters determine scalability and QoS performance. The main objective of this survey is to identify and analyse the issues in existing congestion control algorithms so as to increase the QoS delivered to users. This section therefore deals with the concept of congestion control, the parameters for designing a congestion controller, how to obtain better QoS, and the types of congestion control algorithms.

2.2 Related Work on TCP Congestion Control Variants

Mobile ad-hoc networks (MANETs) are self-configuring, infrastructure-less networks. Transmitting multimedia data over MANETs raises several challenges for achieving efficiency. Multimedia means more than one dynamic medium, such as text, graphics, audio, video and animation. A straightforward way to produce electronic video is to capture pictures with a digital camera; the frames are displayed fast enough to give the impression of motion [2][3]. To obtain a flicker-free display, frames are repeated at 50 frames per second. The challenging issues in MANETs are dynamic topology, bandwidth utilization, transmission errors, node failures and link failures; among these, avoiding congestion while achieving efficiency is an important factor. Users demand continuous, congestion-free network connectivity. TCP is a reliable transport-layer protocol with congestion control. Its many variants can be categorized into two flavours according to the underlying congestion control mechanism: loss-based variants (Tahoe, Reno, NewReno, etc.) and delay-based variants (Vegas, CUBIC, etc.). Because there is a plethora of TCP variants, choosing among them can be confusing [1][4].

Wang, Hai L., et al. explained how packet loss is signalled by two indicators in TCP Tahoe and TCP Reno: when a time-out occurs, and when the source receives duplicate acknowledgements [2].

Manoj, Hai L., et al. demonstrated that TCP NewReno distinguishes two kinds of acknowledgements, Partial ACKs (PA) and Full ACKs (FA), by modifying the fast recovery mechanism of TCP Reno after the sender receives duplicate acknowledgements [1]. Choi, Li, J. Wang, J. Wen, Y. Hen, et al. detailed TCP-BIC (Binary Increase Congestion control), whose window update is shown in equation (i).

[Equation (i) not included in this excerpt.]

TCP-BIC does not require multiple window-control phases like other TCP variants, which reduces complexity [3]. E. D. Souza, D. Agarwal, et al. detailed HSTCP, which is intended for large congestion windows. It is an algorithm that seeks to increase the aggressiveness of TCP on high-delay, high-bandwidth paths. HSTCP's modified response function takes effect only at large congestion windows, so it does not change TCP behaviour at low windows, which avoids congestion collapse [5].

T. Kelly et al. showed that Scalable TCP, like High Speed TCP, is designed for high-speed links; its window update is

Increase (on each acknowledgement): W = W + 0.01W

Decrease (on each loss event): W = W − 0.125W

Scalable TCP thus uses Multiplicative Increase, Multiplicative Decrease (MIMD) [8].
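Kelly's MIMD rule above can be expressed as a one-step update (a minimal sketch; the function name is ours, not from the literature):

```python
def scalable_tcp_update(cwnd: float, loss: bool) -> float:
    """Scalable TCP MIMD window update, using Kelly's constants above."""
    if loss:
        return cwnd - 0.125 * cwnd  # multiplicative decrease on a loss event
    return cwnd + 0.01 * cwnd       # multiplicative increase on each ACK
```

Because both branches are proportional to the current window, the growth and back-off are rate-independent, which is what makes the variant "scalable" on high-speed links.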

Parameter sensitivity in the TCP protocol has been highlighted mainly for two important existing variants, HSTCP and NewReno; the basic Tahoe mechanism remains unchanged.

Xiaojun Sun et al. proposed Hybrid High Speed TCP (HHSTCP), an extended mechanism of High Speed TCP (HSTCP). Protocols fall into two categories, delay-based and loss-based approaches; HHSTCP follows both and provides better efficiency than the existing HSTCP. In HSTCP the congestion window is increased whenever the channel is under-utilized. The window value is decided from the base RTT and the average RTT, as given in equation (ii).

[Equation (ii) not included in this excerpt.]

Ivan Martinez et al. showed that TCP Reno performs well relative to TCP Westwood, especially in wireless networks; choosing the right TCP variant remains a tedious job for ad-hoc networks. M. Polese et al. reported issues and conclusions on the various TCP congestion control variants implemented in the Linux kernel. CUBIC TCP, one of the high-speed variants, works efficiently in the kernel and suffers less from packet loss than TCP NewReno. Single-path TCP (SP-TCP) and Multipath TCP (MP-TCP) were also compared, with MP-TCP providing better efficiency than SP-TCP.

Shao Liu et al. proposed TCP Illinois for high-speed networks. Using packet-loss information, the congestion window is increased and decreased in a way that provides higher throughput. They also developed a new stochastic matrix model capturing standard TCP in its two flavours of synchronized and unsynchronized back-off behaviour.

K. R. Chowdhary et al. identified TCP congestion control for Cognitive Radio Ad-hoc Networks, called CRAHN. This technique overcomes a drawback of classical TCP NewReno, which drops packets whenever the number of users increases. CRAHN is a window-based, TCP-friendly protocol. A practical issue noted is that support from the lower layers is required when mobility and bandwidth vary. As CRAHN is a window-based algorithm, its congestion window value is given in equation (iii).

[Equation (iii) not included in this excerpt.]

X. Jiang experimented with Congestion Level TCP (CLTCP), a novel TCP congestion control algorithm that differs fundamentally from TCP-FIT. CLTCP does not use delay variation; instead, based on the congestion level, it controls the number of virtual flows in a TCP connection. CLTCP can be applied to both high-BDP (bandwidth-delay product) and lossy networks. The congestion window is updated as in equations (iv) and (v).

[Equations (iv) and (v) not included in this excerpt.]

M. Hanai et al. proposed CoDel (Controlled Delay), a dynamic buffer-management technique for TCP congestion. Evaluated on a practical wireless LAN, it improves TCP fairness compared with a bufferbloated queue. CoDel is a packet-scheduling algorithm with a single parameter, the target, and packet drops are determined by the queuing delay time.

M. Panda et al. developed a comprehensive analytical model of classical TCP NewReno applicable to both wired and wireless networks. Using a frame-level Markovian loss model and controllable parameters, TCP throughput is measured; TCP NewReno provides optimal throughput by reducing the number of retransmissions.

N. Patriciello proposed a correlated-loss recovery algorithm as a modification of TCP SACK. TCP variants such as TCP Hybla, High Speed TCP and TCP Vegas were combined with the correlated-loss recovery algorithm, which is already implemented in the operating-system kernel. He showed that these kernel TCP variants achieve better efficiency with the TCP SACK correlated-loss recovery algorithm.

T. Anh N. et al. analysed the performance of data centre networks (DCNs). As technology grows, the number of users and hence the required storage increase, giving rise to data centres. DCNs use several newer TCP variants, such as DCTCP, ICTCP, IA-TCP and D2TCP, which are costly to implement in practice. The existing variants NewReno, Vegas, HSTCP, Scalable TCP, Westwood+, BIC TCP, CUBIC TCP and YeAH were evaluated in this work under a fat-tree topology; TCP Vegas performed best among them.

B. Hesmans et al. examined multipath tracing of TCP connections in complex networks. The goodput is not simply the sum of the throughputs of the individual connections; instead, the various traces of the connections in the network are considered in the throughput calculation.

2.2.1 High Speed TCP

HSTCP is used for TCP connections with larger congestion windows and is a modification of TCP Reno's congestion control mechanisms. It is a loss-based algorithm that uses Additive Increase, Multiplicative Decrease (AIMD) to control the TCP congestion window. The algorithm seeks to increase the aggressiveness of TCP on high bandwidth-delay product (BDP) paths while remaining friendly on small-BDP paths; its rate of additive increase grows rapidly with the window [5][6][7][8]. This addresses the TCP-BIC problem of flows becoming more aggressive only at large windows, so new flows are expected to converge to fairness faster under HSTCP than under TCP-BIC. HSTCP's modified response function takes effect only at large congestion windows and does not change TCP behaviour at higher congestion levels, which avoids congestion collapse.

High Speed TCP (HSTCP) is an aggressive TCP variant; it is modelled by updating the congestion window as follows in equations (vi), (vii) and (viii).

[Equations (vi), (vii) and (viii) not included in this excerpt.]

where a(w) and b(w) are the additive-increase and multiplicative-decrease parameters and p is the packet drop rate per packet.

A novel response function proposed by Sally Floyd is fixed at the points (10^-3, 38) and (10^-7, 83000), as shown in equations (ix), (x) and (xi).

[Equations (ix), (x) and (xi) not included in this excerpt.]

The analysis considers n homogeneous HSTCP connections traversing a bottleneck link.
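Equations (vi)-(xi) are not reproduced in this excerpt. As an illustration only, the sketch below computes a(w) and b(w) in the style of RFC 3649 from the two response-function endpoints (10^-3, 38) and (10^-7, 83000) quoted above; the constants and interpolation formulas are assumptions taken from RFC 3649, not from the thesis:

```python
import math

# Assumed RFC 3649 constants: the response function is log-linear between
# the endpoints (p = 1e-3, w = 38) and (p = 1e-7, w = 83000).
LOW_W, HIGH_W = 38.0, 83000.0
LOW_P, HIGH_P = 1e-3, 1e-7
HIGH_DECREASE = 0.1        # decrease factor used at HIGH_W

def _frac(w: float) -> float:
    """Position of log(w) between log(LOW_W) and log(HIGH_W)."""
    return (math.log(w) - math.log(LOW_W)) / (math.log(HIGH_W) - math.log(LOW_W))

def hstcp_p(w: float) -> float:
    """Loss rate p(w) from the log-linear response function."""
    return math.exp(_frac(w) * (math.log(HIGH_P) - math.log(LOW_P)) + math.log(LOW_P))

def hstcp_b(w: float) -> float:
    """Multiplicative-decrease parameter b(w), interpolated in log(w)."""
    if w <= LOW_W:
        return 0.5             # standard TCP below Low_Window
    return (HIGH_DECREASE - 0.5) * _frac(w) + 0.5

def hstcp_a(w: float) -> float:
    """Additive-increase parameter a(w) derived from p(w) and b(w)."""
    if w <= LOW_W:
        return 1.0             # standard TCP additive increase
    b = hstcp_b(w)
    return w * w * hstcp_p(w) * 2.0 * b / (2.0 - b)
```

Below Low_Window the parameters collapse to standard TCP (a = 1, b = 0.5), which is exactly the "friendly on small-BDP paths" property discussed above.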

2.2.2 TCP NewReno

TCP NewReno is a modification of the Reno variant with better Fast Recovery (FR) than Reno. It also resolves the timeout problem that arises when multiple packets are lost from the same window, and it outperforms Reno when packet losses are heavier. When TCP NewReno receives multiple duplicate acknowledgements it enters the fast-retransmit phase [9][10].

NewReno does not exit fast recovery until all data outstanding within the window have been acknowledged. Its fast-recovery phase differs from Reno's in how a fresh acknowledgement is handled: 1) if the acknowledgement covers every segment that was outstanding when fast recovery was entered (a Full ACK, FA), NewReno exits fast recovery, sets the congestion window to ssthresh and continues with congestion avoidance, as in TCP Tahoe; 2) otherwise the acknowledgement is a Partial ACK (PA), covering only part of the outstanding data, and NewReno retransmits the next unacknowledged segment and remains in fast recovery, a modification of TCP Reno's mechanism. By sending new packets at the end of the congestion window during fast recovery, NewReno achieves high throughput when there are many holes to fill in the sequence space [11][12][13][14].
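The Partial-ACK/Full-ACK distinction above can be sketched as a single decision function (a simplified illustration with hypothetical state names, not the full RFC 6582 state machine):

```python
def newreno_on_ack(ack_seq: int, recover: int, cwnd: float, ssthresh: float):
    """NewReno's reaction to a fresh ACK while in fast recovery.

    'recover' is the highest sequence number that was outstanding when
    fast recovery was entered.  A Full ACK (ack_seq >= recover) ends fast
    recovery and deflates the window to ssthresh; a Partial ACK triggers
    retransmission of the next hole and keeps the sender in recovery.
    """
    if ack_seq >= recover:
        return ("exit_to_congestion_avoidance", ssthresh)  # Full ACK
    return ("retransmit_next_hole", cwnd)                  # Partial ACK
```

This is why NewReno survives multiple losses from one window: each Partial ACK repairs one hole per round trip without waiting for a timeout.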

2.3 Related Work on Multimedia Data Processing in MANETs

Multimedia data is crucial for present-day mobile networks. Various research works have analysed the multimedia protocols that carry such data to the upper layers and to the outside world. X. Shen et al. presented a study on the buffer size required for streaming video over TCP, relating burst segment losses to the needed playout buffer size and maximum delay. Data loss causes video degradation; TCP can prevent the loss, but extra delay may occur when a packet is dropped. Using information about the packets dropped in a burst, fast retransmission and congestion control can be triggered. Delay fluctuations are absorbed by a playout buffer maintained at the receiver.

In addition to the fast-retransmission mode, three different delay patterns of TCP burst segment loss in a constant-bitrate streaming video flow are identified:

1) Double fast retransmission mode (DR)
2) Double fast retransmission with timeout mode (DRTO)
3) Timeout mode (TO)

[Equations for the three delay patterns not included in this excerpt.]

where dmax is the maximum delay time of a segment, tp is the inter-segment time of TCP, tRTO is the retransmission timeout of TCP, S is the state variable for the delay mode, and I is the total number of times the RTO timer expires in TO mode. J. Vieron et al. focus on a rate-control algorithm that accounts for the delay constraints of real-time streaming as well as the behaviour of TCP's congestion control. A new protocol was designed to estimate the parameters of a bandwidth-prediction model; used together with RTP and RTCP, it accommodates multimedia characteristics. The current channel state is estimated from the encoder and decoder buffer states and the protocol, and the delay constraints of the real-time video source are translated into encoder rate constraints. The combination of an H.263+ loss-resilient video compression algorithm with the global control model was tried on different Internet links. These experiments clearly illustrate the advantages of the protocol used for determining the bandwidth-prediction model parameters: the approach significantly reduces source timeouts, thereby minimizing the expected distortion while making compatible use of the TCP-compatible predicted bandwidth.

Multimedia communication currently faces challenges with respect to congestion management, network friendliness and quality of service. Since real-time data management is essential, multimedia delivery adopts two protocols, the Real-time Transport Protocol (RTP) and the User Datagram Protocol (UDP). Neither RTP nor UDP guarantees any level of Quality of Service (QoS). QoS requires smooth rate variation; excessive rate jitter can affect the visual quality of the received signal, and the variation of end-to-end delay caused by network queues, which rises in times of congestion, affects a data stream more than it affects traditional computer applications.

Unicast congestion control mechanisms form a feedback loop between sender and receiver. Monitoring the network state and sending periodic feedback information to the sources plays a crucial role here. The TCP-compatibility property, which captures the characteristics of a well-behaved session, helps the network be shared fairly.

Estimation of the model parameters involves 1) the feedback frequency, 2) the RTT and retransmit timeout (To) estimation, and 3) loss-event estimation.

- The feedback interval δfeed is given in equation (v):

[Equation not included in this excerpt.]

where SRTT is the smoothed RTT, NR is the total number of receivers in the session, SB is the complete session bandwidth, and FR is the frame rate.

- The retransmit parameter To is given by,

[Equation not included in this excerpt.]

where DRTT is the smoothed mean deviation of the RTT.
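The retransmit-timeout expression itself is not reproduced in this excerpt. A plausible form, assumed here from standard practice (RFC 6298), is To = SRTT + 4·DRTT with EWMA updates of both estimates; the sketch below is an illustration under that assumption, not the thesis's exact equation:

```python
def update_rtt_estimates(srtt: float, drtt: float, sample: float,
                         alpha: float = 0.125, beta: float = 0.25):
    """EWMA update of the smoothed RTT and its mean deviation.

    Returns (SRTT, DRTT, To) where To = SRTT + 4 * DRTT.
    alpha and beta are the classic RFC 6298 gains (assumed values).
    """
    drtt = (1.0 - beta) * drtt + beta * abs(sample - srtt)  # deviation first
    srtt = (1.0 - alpha) * srtt + alpha * sample            # then smoothed RTT
    return srtt, drtt, srtt + 4.0 * drtt
```

The deviation is updated before the mean so that the new sample is compared against the previous smoothed estimate, as in the standard algorithm.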

The loss rates for the three rate-control techniques, the predicted bandwidth for the encoder rate constraint, and the timeout durations over consecutive time intervals are also reported. By introducing an improved frame-skipping strategy, a better trade-off can be achieved between frame rate and PSNR performance. With an FGS encoder the approach can be applied directly to any kind of video, although it was demonstrated with an H.263+ encoder.

H. H. Van et al. describe packet-level protocol parallelization, which uses parallel multi-threading so that packets within and among connections are processed in parallel on separate processors or threads. The approach is highly scalable, which makes it stand out from others: with proper scheduling it supports more protocol connections and more threads. Performance is measured against several parameters, such as the available network bandwidth, the number of TCP connections and various video sources, and the results show a significant improvement in playback. Owing to the rapidly growing demand for high-speed, high-volume remote access to electronic information, pressure on networks has increased and servers are more heavily loaded than before. Where high performance is required, expensive specialized hardware and software are often used; a more economical alternative is a low-cost symmetric multiprocessor (SMP), which suits a high-performance file server in a LAN. Parallel TCP connections can increase bandwidth usage to a great extent: the data are divided into pieces and sent over different TCP connections, and once they reach the destination they are reassembled into the original data structure.

Several parameters must be measured to evaluate the performance and the quality of the video:

- Effect of network bandwidth: with more TCP connections, the variance of the frame rate is large. A steady frame rate is preferable to a large variation in rate. When the peak GoP bit rate of the video is high relative to the accessible bandwidth, there are compromises over time: the quality cannot be good at all times.
- Effect of TCP parallelism: if the number of TCP connections is high, the performance can reach the required rate.

Frame-rate distribution: with more TCP connections, the time delay is sustainable only for shorter periods, and the magnitude of frame-rate reduction is very high in this case.

Optimal number of TCP connections: more TCP connections are known to increase performance significantly, yet depending on the mean frame rate the video frame rate can drop.

The behaviour is similar even when different numbers of parallel TCP connections are used. With a single TCP connection there is a reduction in frame rate over longer durations; this effect is due to the variable bit rate of the video data traffic, since a variable bit rate causes dynamic changes in the actual data size of each GoP.

Other performance metrics: while the mean frame rate provides an overall measure of performance, other parameters also play an important role, namely:

- Total delay time (Dall)
- Maximum delay time (Dmax)
- Average delay time etc.

This is a detailed quantitative study of parallel multi-threaded TCP, in which multiple connections are used to serve a request; the optimal number, however, is affected by the algorithms used and the available network bandwidth. Many research works on streaming video have stated that TCP is undesirable for streaming multimedia; here that dogma is revisited and challenged. Two general objections to TCP's basic mechanisms are identified, namely packet retransmission and congestion avoidance, which are at the root of the anti-TCP dogma.

Packet retransmission in TCP introduces a large amount of end-to-end delay, because in real-time video streaming retransmitted data reaches the receiver too late; retransmission of lost data is therefore held to be unsuitable. This can be mitigated by client-side buffer management, although that is not applicable when application-level delay requirements are tight. With TCP, the clash occurs only when the application's delay requirement is close to the path RTT. Duplicate ACKs received at the sender are used to detect lost packets in TCP, so the earliest a retransmission can arrive at the receiver is one complete round-trip time after the loss of the original data.

Congestion avoidance algorithms are designed to probe the available bandwidth through deliberate manipulation of the transmission rate. In steady state, TCP congestion control converges to an average transmission rate close to a fair share of the available bandwidth. Over short time scales the instantaneous transmission rate takes the familiar sawtooth shape, cycling between periods of additive increase separated by multiplicative decrease (AIMD). This short-term rate sawtooth is the reason TCP is said to be unsuitable for video applications.
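The sawtooth described above can be reproduced with a toy simulation; loss is modelled simply as the window exceeding a fixed capacity, which is an illustrative assumption, not the thesis's network model:

```python
def aimd_trace(rounds: int, capacity: float) -> list:
    """Simulate TCP's AIMD sawtooth over a number of RTT rounds.

    Each round adds one segment to the window (additive increase);
    whenever the window reaches the assumed capacity, a loss is
    declared and the window is halved (multiplicative decrease).
    """
    cwnd, trace = 1.0, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd / 2.0 if cwnd >= capacity else cwnd + 1.0
    return trace
```

Plotting the returned trace gives exactly the cycle between capacity and half-capacity that makes the short-term rate unsteady for video.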

For video streaming to benefit in proportion to the benefit traditional applications enjoy, interface-preserving improvements to TCP should be made, and realistic measures of video-quality impact must be used instead of network-level measurements alone. A video system supporting fine-grained adaptation of MPEG has been implemented, and the quality of performance of the streaming system is measured. Without any modification to TCP, consistent rate variation for quality adaptation should be achievable.

Multimedia technology allows humans to use computers capable of processing audio, video, text, still pictures and animations. At present, people use the Internet not only to watch movies but also to upload videos (YouTube) and to make Internet calls (Google Talk and Skype).

Multimedia network applications employ audio or video, and Internet multimedia applications depend on the type of multimedia data and its properties.

The main properties of audio and video data are as follows. Pulse code modulation (PCM) is a basic technique for converting analog audio to digital audio: speech encoding uses PCM with 8000 samples per second at 8 bits per sample, while audio compact discs use PCM with 44,100 samples per second at 16 bits per sample. PCM-encoded speech is not commonly used on the Internet; compression techniques are applied to lower the bit rates of audio streams, most commonly to 128 Kbps. Video has a higher bit rate, from about 100 Kbps for low-quality video conferencing to over 3 Mbps, and is often offered in multiple versions of different quality so that users can choose one. Although users are more sensitive to audio glitches than to video glitches, digital audio requires lower bandwidth than video.

Real-time conversational voice over the Internet is generally known as Voice/Video over IP (VoIP) or Internet telephony. Conversational video systems also allow users to create conferences with three or more participants. Conversational voice and video are widely used on the Internet today, for example in Google Talk, Skype and Facebook voice chat. The two important application service requirements for conversational voice/video over IP are:

- Tolerance of Data loss
- Timing Considerations

Conversational multimedia applications are loss-tolerant: occasional loss causes only occasional glitches in audio/video playback, and these losses can often be partially or fully concealed. Live streaming resembles television and traditional broadcast, except that transmission takes place over the Internet, for example live news or a sporting event. Delay is the other issue: although the timing constraints are less stringent than those for conversational voice, delays of up to about 10 seconds between when the user chooses to view a live transmission and when playout begins can be tolerated.

For streaming video applications, pre-recorded videos reside on servers, and users send requests to the servers to view them. A user may watch a video without interruption from start to end, stop it well before it ends, or interact with it by pausing or repositioning to a past or future scene. Streaming systems can be classified into three categories: UDP streaming, HTTP streaming and adaptive HTTP streaming. Audio streaming involves converting analog audio signals into digital data that can be transmitted through the Internet. Videos are commonly offered in multiple versions of different quality, so users can choose whichever version their available bandwidth can support. The challenges of HTTP streaming are:

- Playout constraint: once client playout starts, playback should match the original timing, but network delays are variable (jitter), so a client-side buffer is needed to meet the playout requirements.
- Interactivity: the client may pause, rewind, fast-forward or jump through the video.

2.3.1 UDP Streaming

In UDP streaming, the server sends video at a steady rate that matches the rate at which the client consumes it, clocking out the video chunks over UDP.

Drawbacks of UDP Streaming:

- Because the bandwidth available between client and server varies unpredictably, constant-rate UDP streaming can fail to provide continuous playout.
- It requires a separate media control server, similar to an RTSP server, to process client-to-server requests and track client state.
- Firewalls often block UDP traffic, preventing users from receiving the video.

2.3.2 Multimedia Streaming-HTTP

The multimedia files are retrieved using the HTTP GET method. The fill rate fluctuates due to TCP congestion control and retransmission. The major issue in multimedia streaming over HTTP is the larger playout delay needed to smooth the TCP delivery rate; on the other hand, content sent at the maximum possible rate under HTTP/TCP passes through firewalls and NATs more easily.

2.3.3 Content Distribution Network (CDN)

Selecting and streaming content (from millions of videos) to hundreds of thousands of distributed, concurrent users is a challenging issue. Anyone who wants to build a single large "mega-server" faces several problems:

- Single point of failure.
- Network Congestion
- Distant clients have lengthy paths
- Multiple copies of the same video are sent over the outgoing link

2.3.4 Voice over IP

Voice over IP translates analog audio signals into digital data delivered over the Internet; it is popularly called voice telephony over the Internet. For example, the sender generates bytes at a rate of 8000 bytes per second and, every 20 msec, assembles these bytes into a chunk. A UDP header then encapsulates the chunk, and a UDP segment is sent every 20 msec. If the end-to-end delay were constant, packets would arrive at the receiver periodically, exactly every 20 msec, and the receiver could play each chunk as it is received.
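The arithmetic of this example can be checked in a few lines (constant names are ours):

```python
BYTES_PER_SEC = 8000        # 64 kbps PCM speech, as in the example above
CHUNK_MS = 20               # packetization interval in milliseconds

# Payload carried in each UDP segment, and segments sent per second
chunk_bytes = BYTES_PER_SEC * CHUNK_MS // 1000   # 160 bytes per chunk
packets_per_sec = 1000 // CHUNK_MS               # 50 packets per second
```

So each 20 msec chunk carries 160 bytes of speech, and the sender emits 50 UDP segments per second.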

The major demerits of VoIP are

- Packet Loss

An IP datagram encapsulates each UDP segment. As these datagrams travel through the network, they pass through router buffers while waiting for transmission on outbound links. If a buffer along the path from sender to receiver is full, the arriving IP datagram is discarded. This loss could be avoided by using TCP instead of UDP, but retransmission increases the end-to-end delay. Packet loss rates between 1 and 20 percent can be tolerated, depending on the voice encoding and on how loss is concealed at the receiver.

- End-To-End Delay

Delay accumulates from queuing and processing in routers, end-system processing and propagation delays in the links; its impact depends on the real-time application. End-to-end delays below 150 msec are not perceived by a human listener; delays between 150 and 400 msec are acceptable but not ideal; delays above 400 msec are problematic for conversation. The receiver discards packets that are delayed beyond its playout deadline.

- Packet Jitter

Within the end-to-end delay, the queuing delay a packet encounters in network routers changes from packet to packet. As a result, the time between when a packet is sent and when it is received varies from one packet to another; this variation is packet jitter. For example, the spacing of two consecutive packets can become more or less than 20 msec. If the receiver ignores jitter and plays the chunks as they arrive, the audio becomes unintelligible. Jitter is removed by using sequence numbers, timestamps and a playout delay.

- Removing Jitter: Fixed Playout

The receiver attempts to play each chunk exactly q msec after it was generated. A chunk time-stamped t at the sender is played at time t + q if it arrives by then; packets arriving later than t + q are discarded. The trade-offs in choosing q are:

- Large q: minimal packet loss.
- Small q: better conversational experience.
- Keeping q well below 400 msec improves interactivity but causes more packet loss.
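The fixed-playout rule can be written as a small scheduling function (a minimal illustration; the function name is ours):

```python
def fixed_playout_time(t_sent: float, t_arrival: float, q: float):
    """Fixed playout: a chunk time-stamped t_sent is played at t_sent + q.

    Returns the playout time, or None when the chunk arrives after its
    deadline and must be discarded.
    """
    deadline = t_sent + q
    return deadline if t_arrival <= deadline else None
```

A larger q pushes the deadline later (fewer discarded chunks), while a smaller q keeps the conversation interactive, exactly the trade-off listed above.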
- Adaptive Playout Delay

With an early playout deadline, some packets miss their deadlines and are lost. For conversational services such as VoIP, long delays are intolerable and annoying, so the playout delay must be reduced while keeping the percentage of lost packets low. To address this problem, the network delay and its variance are estimated, and the playout schedule is adapted by elongating or compressing the sender's silent durations. The network delay is estimated as shown in equation (1).

[Equation (1) not included in this excerpt.]
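Since equation (1) is not reproduced in this excerpt, the sketch below shows one plausible form, assumed from standard adaptive-playout practice: exponentially weighted estimates of the network delay d and its variation v, with the playout delay for a talk spurt set to d + K·v (the gains u and K are assumptions):

```python
def adaptive_delay_estimate(d_prev: float, v_prev: float,
                            t_sent: float, t_recv: float,
                            u: float = 0.01, K: float = 4.0):
    """One EWMA update of the network-delay estimate per received packet.

    d tracks the average one-way delay, v its average deviation; the
    suggested playout delay for the next talk spurt is d + K * v.
    """
    sample = t_recv - t_sent                      # observed delay of this packet
    d = (1.0 - u) * d_prev + u * sample           # smoothed delay
    v = (1.0 - u) * v_prev + u * abs(sample - d)  # smoothed deviation
    return d, v, d + K * v
```

Adjusting the playout point only at talk-spurt boundaries is what lets the silent periods be stretched or compressed without the listener noticing.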

- Packet Loss Recovering

In real-time conversational applications such as VoIP, retransmission of lost packets is often impractical, since it is difficult to meet the timing needed to keep the conversation intelligible. Such applications instead anticipate loss, using forward error correction (FEC) and interleaving.


- Forward Error Correction (FEC)

To rebuild an exact or approximate version of a lost packet, redundant information is added to the original packet stream. A simple FEC mechanism: for every group of n chunks sent, a redundant chunk is added by exclusive-ORing the n chunks. The receiver can reconstruct a single lost packet from any n of the n + 1 packets in the group; if more than one packet is lost from a group, the lost packets cannot be reconstructed. A small group size (n + 1) increases the transmission rate by a factor of 1/n, and the playout delay increases because the receiver must wait for the entire group.
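The XOR parity scheme described above fits in a few lines (function names are ours):

```python
def xor_parity(chunks: list) -> bytes:
    """Redundant chunk = byte-wise XOR of the n equal-length data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

def recover_lost(received: list, parity: bytes) -> bytes:
    """Rebuild the single missing chunk: XOR the parity with the n-1
    chunks that did arrive (XOR is its own inverse)."""
    return xor_parity(received + [parity])
```

Because XOR is its own inverse, any one missing chunk of the group pops back out; two or more losses in the same group defeat the scheme, as noted above.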

- Interleaving

Before transmission, the sender resequences the audio units so that originally adjacent units are separated by a certain distance in the transmitted stream. This technique lowers the effect of packet loss, since reconstruction faces several small gaps instead of one large gap, and a lost unit can be replaced by a similar neighbouring unit. It works on the principle that audio signals are self-similar over the short term, and it suits small unit sizes. Loss can also be concealed by packet repetition, replacing a lost packet with a copy of the one that arrived just before the loss.
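A minimal sketch of the resequencing step (the depth parameter and helper name are illustrative assumptions):

```python
def interleave(units: list, depth: int) -> list:
    """Reorder audio units so that originally adjacent units travel in
    different packets: unit i goes to position based on i mod depth.
    A burst loss of one packet then leaves small, spread-out gaps."""
    return [units[i] for d in range(depth) for i in range(d, len(units), depth)]
```

For example, with depth 4 the units 0..7 are sent as 0,4,1,5,2,6,3,7, so losing one packet's worth of units removes every fourth unit rather than a contiguous run.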

2.3.5 Multimedia Networking Protocols

Several protocols support real-time traffic over the Internet: RTP (Real-time Transport Protocol), SIP (Session Initiation Protocol) and RTSP (Real Time Streaming Protocol).

RTP

RTP is suited to real-time transport of audio and video data. It defines a specific packet format for delivering audio and video over IP networks and is used for transporting standard audio and video formats.

RTP Protocol Components

The following information is required for streaming data:

sequence numbers (for detecting packet loss and reordering), timestamps (for synchronization), the payload type (indicating the encoded format of the data), source identification (identifying the originator of the frame) and frame indication (marking the beginning and end of a frame).

The key header fields are the sequence number, the RTP timestamp and the source identifier. The sequence number is 16 bits long and increments by one for each RTP packet sent. The timestamp field is 32 bits long and reflects the sampling instant of the first byte in the RTP data packet; the timestamp clock rate is 8000 Hz for audio and 90000 Hz for video. The Synchronization Source Identifier (SSRC) is 32 bits long and identifies the source of the RTP stream; each RTP stream has a distinct, unique SSRC.
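The fields just described occupy a fixed 12-byte header; the sketch below parses them according to the RFC 3550 layout (a plausible illustration, not code from the thesis):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550 layout).

    Byte 0 holds version/padding/extension/CSRC-count, byte 1 holds the
    marker bit and payload type; then come the 16-bit sequence number,
    32-bit timestamp and 32-bit SSRC, all in network byte order.
    """
    vpxcc, mpt, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": vpxcc >> 6,          # should be 2 for current RTP
        "payload_type": mpt & 0x7F,     # encoding of the payload
        "seq": seq,                     # 16-bit sequence number
        "timestamp": ts,                # 32-bit sampling instant
        "ssrc": ssrc,                   # 32-bit stream identifier
    }
```

The receiver uses `seq` to detect loss and reordering, `timestamp` for playout synchronization, and `ssrc` to demultiplex streams, matching the component list above.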

In many situations RTP cannot use TCP as the underlying protocol, for the following reasons:

- In case of packet loss, TCP forces the receiver to wait for retransmission, causing large delays.
- TCP does not support multicast.
- TCP headers are larger than UDP headers (40 bytes for TCP versus 8 bytes for UDP).
- TCP does not carry the coding information and timestamps required by the receiving application.


Quote paper
Dr. Gururaj H L (Author), 2019, QOS Aware TCP Congestion Control Variants for processing Multimedia Data in mobile adhoc Networks, Munich, GRIN Verlag, https://www.grin.com/document/979695

