Since its inception, the Internet, the network of networks, has grown at an immense rate, spreading across the globe. Originally intended to connect research laboratories, it has broadened to industry, government, corporations and beyond, with millions of users connected worldwide. Owing to the increasing flow of real-time streaming data, present-day bandwidth requirements have risen considerably, as most of today's Internet traffic is multimedia streaming. This type of traffic requires ample bandwidth with minimal delay and negligible network congestion. Bandwidth is the primary resource of a channel; multimedia streaming can tolerate packet loss to some extent, but timely, in-order delivery is critical in the transmission of audio/video streams. Services such as Voice over IP (VoIP), deployed over packet-switched networks, will drive the future of communication. However, the most common issue users face is insufficient bandwidth to support the channel, which causes persistent delays in the data network and badly degrades audio/video quality over the Internet.
Multimedia streams also face another issue over the Internet: packet reordering. Since packet-switched networks break information into packets before transmission, each packet may take a different route from source to destination, causing additional delay in reaching the receiver. When a late packet arrives after a packet later in the sequence has already been received, the receiver discards it, resulting in poor audio and video quality. In addition, to provide better Quality of Service to users, the network must keep error rates to a minimum.
Defined in RFC 791, the Internet Protocol (IP) is the routing-layer datagram service of the TCP/IP (Transmission Control Protocol/Internet Protocol) suite. TCP/IP is the basic protocol suite of the Internet, of which IP is one constituent. TCP handles the assembly and reordering of packets and provides a connection-oriented service on the Internet using the client-server model. The TCP/IP suite enables computers to communicate over the network, specifying how data should be packaged, addressed, shipped, routed and delivered to the right destination. The Internet has used Internet Protocol version 4 (IPv4) since the beginning, which employs 32-bit IP addressing. Apart from addressing, IP handles routing as well as error detection. It is a connectionless protocol based on the best-effort service model, with no guarantee of in-order delivery of packets.
Multi-Protocol Label Switching (MPLS) is a technology that plays a crucial role in efficient real-time data transmission. MPLS, standardized in 2001 by the Internet Engineering Task Force (IETF), provides several features such as improved Quality of Service (QoS), improved uptime, better scalability, reduced network congestion, Virtual Private Networks (VPNs), traffic engineering, etc. MPLS uses short fixed-length labels instead of layer-3 IP addresses for packet routing, with connection-oriented Label Switched Paths (LSPs) for traffic flow and Label Switched Routers (LSRs) as MPLS-enabled routers. MPLS is viewed by some as one of the most important network developments of the 1990s. MPLS allows routing with QoS constraints, using signalling protocols such as Constraint-Based Routing over the Label Distribution Protocol (CR-LDP) or the Resource Reservation Protocol (RSVP) to establish paths adapted to those constraints [12, 13, 14].
In the TCP/IP protocol stack, each layer encapsulates the payload received from the layer above and adds its own header information; thus each protocol layer contributes a header carrying layer-specific information. This is done for every packet, regardless of whether the packet is intended for the same destination. Data handed down by an application is taken as payload by the application layer, and headers such as the RTP header (in the case of real-time traffic) are added to it. The transport layer then adds its header (UDP or TCP), the network layer adds either an IPv4 or IPv6 header, and so on. This header addition is shown in Figure 1.1. The accumulated headers can be large, sometimes equal in size to the payload itself. Moreover, the header information is often redundant and need not be transmitted with every packet when transmission occurs between the same source and destination.
illustration not visible in this excerpt
Figure 1.1: Header addition at higher layers
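The per-layer header growth described above can be made concrete with a small sketch. The header sizes are the standard fixed lengths of the RTP, UDP and IPv4 headers; the payload size is an illustrative assumption (a voice codec frame), not a figure from the text:

```python
# Fixed header sizes (bytes) added at each layer for a typical
# IPv4/UDP/RTP packet; link-layer framing is ignored here.
RTP_HDR, UDP_HDR, IPV4_HDR = 12, 8, 20

def total_packet_size(payload: int) -> int:
    """Payload wrapped by RTP, UDP and IPv4 headers."""
    return payload + RTP_HDR + UDP_HDR + IPV4_HDR

payload = 40                      # assumed small voice frame, in bytes
size = total_packet_size(payload)
print(size, size - payload)       # 80 bytes on the wire, 40 of them header
```

For a 40-byte payload, the header is exactly half the packet, which is the "header equal to the size of payload" situation the text describes.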
Thus, considerable header overhead leads to poor performance in networks [16, 17]. However, several header compression mechanisms have been developed, such as Compressed Real-Time Protocol (CRTP), Internet Protocol Header Compression (IPHC), Van Jacobson Header Compression (VJHC) and Robust Header Compression (ROHC), to improve link efficiency in terms of reduced bandwidth consumption. These compression algorithms are used to compress UDP, IP, RTP and TCP headers. Many approaches to header compression have been proposed and implemented, but much remains to be done for header compression over MPLS.
The addition of the IP, UDP and RTP headers adds considerable overhead to multimedia data sent over the Internet. For ordinary applications such as HTTP the overhead is modest; for multimedia applications such as voice and video transmission, however, it is huge. In many cases the payload is almost the same size as the header, so a great deal of bandwidth is wasted on redundant header information. Bandwidth is one of the most important network resources, and its proper use is necessary for the reliability, speed and efficiency of the network. There is therefore a need for a header compression mechanism that improves response time and throughput and reduces packet loss by decreasing per-packet overhead, thereby saving network bandwidth. An IPv4 packet has a 20-byte header, the UDP header is 8 bytes, and the RTP header is 12 bytes, for a total of 40 bytes per packet. If header compression is deployed, it can reduce this 40-byte overhead to as little as 1 byte, a considerable advantage. By compressing the headers of IPv4/UDP/RTP multimedia streams, a compression gain of more than 90% can be achieved.
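The gain figures above follow directly from the header sizes. A minimal sketch of the arithmetic, using the 40-byte IPv4/UDP/RTP header chain and two compressed-header sizes (1 byte best case, 4 bytes as a more conservative assumption):

```python
# Header compression gain for an IPv4/UDP/RTP flow.
UNCOMPRESSED_HDR = 20 + 8 + 12        # IPv4 + UDP + RTP = 40 bytes

def header_gain(compressed_hdr: int) -> float:
    """Fraction of header bandwidth saved by compression."""
    return 1 - compressed_hdr / UNCOMPRESSED_HDR

print(round(header_gain(1) * 100, 1))   # 97.5 (% saved, best case)
print(round(header_gain(4) * 100, 1))   # 90.0 (% saved, 4-byte header)
```

Even the conservative case lands at the "more than 90%" gain cited in the text.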
For compression, the sender first communicates the complete, uncompressed header information to the receiver at the start of the transmission. Once the receiver has all the information needed to decompress the packets, the sender transmits compressed packets, and only the header fields that have changed are exchanged thereafter.
In an MPLS network, a label is used for making forwarding decisions rather than the IP destination address. A Label Switch Router (LSR) understands these labels and forwards labelled packets accordingly. MPLS flows are connection-oriented, and packets are routed along pre-configured LSPs, combining the label-swapping forwarding paradigm with network-layer routing. When a packet enters the MPLS domain, the ingress LER assigns it a label specifying the path the labelled packet must take within the domain. A different label is used at each hop, chosen by the LSR performing the forwarding operation. At the egress, the LSR receives the labelled packet, removes the label and forwards the packet based on its layer-3 address using normal IP routing [25, 26].
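The label-swapping step can be sketched as a simple table lookup. The router names, labels and topology below are hypothetical; real LSRs consult a Label Forwarding Information Base (LFIB) populated by signalling protocols such as LDP or RSVP:

```python
# Sketch of MPLS label-swapping forwarding. Each LSR maps an
# incoming label to an (outgoing label, next hop) pair.
LFIB = {
    "LSR1": {22: (15, "LSR2")},
    "LSR2": {15: (47, "LSR3")},
}

def forward(lsr: str, label: int):
    """Swap the incoming label; return the outgoing label and next hop."""
    out_label, next_hop = LFIB[lsr][label]
    return out_label, next_hop

# A packet entering with label 22 at LSR1 is swapped hop by hop:
label, hop = forward("LSR1", 22)    # (15, "LSR2")
label, hop = forward(hop, label)    # (47, "LSR3"); the egress then pops
```

Note that no IP header is inspected at any point: the swap is driven entirely by the 20-bit label, which is what frees intermediate nodes from per-packet header processing.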
Transmitting compressed packets over an MPLS network can increase both bandwidth efficiency and scalability. As an example, the uncompressed header traffic within a network carrying 300 million or more calls per day can consume 20 to 40 gigabits per second for headers alone, wasting a huge amount of bandwidth.
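A back-of-envelope check makes the 20-40 Gbps figure plausible. Every input below (call duration, packet rate, header size) is an illustrative assumption, not a figure from the text:

```python
# Rough estimate of header-only bandwidth for a large VoIP carrier.
calls_per_day = 300e6
avg_call_secs = 180        # assume 3-minute average call
pkts_per_sec = 50          # typical 20 ms VoIP packetisation
header_bytes = 40          # IPv4 + UDP + RTP per packet

concurrent_calls = calls_per_day * avg_call_secs / 86400
bps_one_way = concurrent_calls * pkts_per_sec * header_bytes * 8
total_gbps = 2 * bps_one_way / 1e9      # voice flows in both directions

print(round(total_gbps))   # 20 (Gbps of headers alone)
```

Under these assumptions the headers alone occupy about 20 Gbps, the lower end of the range quoted; longer calls or higher packet rates push it towards 40 Gbps.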
A number of header compression mechanisms are available, such as Van Jacobson Header Compression (VJHC), Internet Protocol Header Compression (IPHC), RTP Header Compression, Extended Compressed Real-Time Protocol (ECRTP) and Robust Header Compression (ROHC). However, all of these header compression mechanisms share the following shortcomings:
1. These header compression mechanisms work on a hop-by-hop basis: the packet is compressed by the compressor and decompressed by a directly connected decompressor, with no intermediate node in between, not even a layer-3 device such as a router. This is because packets in an IP network are routed from source to destination using layer-3 routing, which requires every node to inspect the packet header and forward the packet based on the destination address after consulting its routing table. If header compression were implemented and deployed at every node, the compression and decompression of each packet at each node would take enormous time and would degrade performance rather than improve it. It would also add load on the routers in terms of extra computation and per-flow state storage.
2. Header compression algorithms handle packet reordering poorly. In fact, the ROHC protocol is not designed to handle packet reordering between compressor and decompressor. RFC 3095 states that the channel between compressor and decompressor is required to maintain packet ordering, i.e., the decompressor must receive packets in the same order the compressor sent them. ROHC thus assumes that the decompressor receives packets in sequence.
3. There is a limit to the number of compressed flows a router can handle, such as 30-50 flows in the case of CRTP, creating scalability problems with respect to the number of concurrent flows.
ROHC has been used in many networks to date; however, no previous work exists on how ROHC can be implemented and deployed in MPLS networks, and in particular no performance analysis or simulation has been done. There is therefore a need to assess ROHC over MPLS in terms of scalability and robustness. Several questions need to be answered, such as:
How will the ROHC protocol perform over an MPLS network?
What will happen upon decompression failures in a ROHC MPLS network?
How will this protocol implementation perform with respect to different traffic types such as UDP, RTP, etc.?
The main objective of this thesis is to evaluate the performance of Robust Header Compression (ROHC) over MPLS. MPLS Label Edge Routers (LERs) can be used for header compression, as these routers have the processing resources needed to run header compression algorithms and thereby conserve network bandwidth. To implement header compression, node resources such as processor and memory must be balanced against the bandwidth savings. This will be done by creating a network model of ROHC over MPLS and simulating it using Riverbed Modeler. The main objectives of the research were as follows:
1. End-to-end compression/decompression: A complete end-to-end compression and decompression mechanism is needed from source router to destination router, so that intermediate nodes do not have to participate in compression/decompression cycles, since packet forwarding is based on MPLS labels rather than IP addresses. This will significantly reduce both the overhead and the delay incurred by compressing and decompressing at each router from source to destination in core networks. It will be achieved by running ROHC over MPLS, so that an MPLS Label Switched Path (LSP) carries the compressed packets from ingress router to egress router without compression/decompression cycles at the intermediate Label Switching Routers (LSRs) of the MPLS network. Standard signalling such as RSVP will be used. ROHC has many advantages over other header compression protocols, such as high robustness and improved efficiency, which is why this header compression scheme will be implemented over MPLS.
2. Tolerance to reordering: The MPLS network will help maintain packet ordering between compressor and decompressor, so the receiver receives packets in the order the sender transmitted them. The MPLS LSP will maintain packet ordering for each compressed flow, either by ensuring the compressor uses compressed headers robust enough to tolerate the expected reordering or by modifying the decompressor to tolerate reordered packets.
3. Scalability: Header compression must scale to a large number of flows. A label switching technique will be used that tags each stream with a flow identifier (CID) and can support numerous flows simultaneously. There might be 300-500 concurrent flows, and this method will make it possible to accommodate all of them.
Apart from these, the general goals of header compression over MPLS network that will be implemented are as follows:
4. To provide more efficient voice transport over MPLS networks.
5. Decrease packet delay, delay variation, or loss probability.
6. Leverage existing work through use of standard protocols as much as possible.
This research can considerably improve the utilization and efficiency of multimedia flows through wired MPLS networks by reducing overheads, which in turn decreases packet loss and delay.
ROHC over MPLS
ROHC was developed for header compression over links with high Bit Error Rates (BER) and has mechanisms for quick context resynchronization. ROHC compresses ESP/IP, UDP/IP and RTP/UDP/IP headers, with efficient encoding schemes for the fields that change dynamically. The compressor compresses RTP/UDP/IP packets into the appropriate compressed packets, sends them to the decompressor, and acts on feedback information from the decompressor. This is done while managing states, modes and CIDs practically and efficiently. An advantage of ROHC is that it identifies the packet type within the compression header by default, so no separate packet-type extension is needed. Handling compression and decompression requires some changes to the existing MPLS network, such as:
i. Expansion in MPLS signaling to discover the LSP from Header Compressor to Header Decompressor
ii. Negotiate the HC algorithm used and protocol parameters
iii. Negotiate the Session Context IDs (SCIDs) space between the ingress and egress routers on the MPLS LSP
iv. Signal HC over MPLS tunnels with the Label Distribution Protocol (LDP).
In order to accomplish these objectives, the following research methodology has been set:
1. Performing a literature study to establish the state of the art in ROHC and MPLS, and more specifically in ROHC over MPLS.
2. Creating a network model with a method by which ROHC is implemented over an MPLS network.
3. Developing the network model in the simulator and then testing as well as evaluating the performance of the network model.
4. Evaluating the performance of the ROHC over MPLS network by analysis of the result.
5. Interpreting the simulation results in terms of tables and graphs with explanation of each.
6. Concluding the thesis.
With the rapid growth of the Internet and the evolution of new technologies, there has been increasing demand for real-time multimedia services and better Quality of Service (QoS) over the existing infrastructure. QoS is the overall performance of a network. It is a set of technologies that enables network administrators to manage the effects of congestion on traffic flows by using network resources optimally rather than by simply adding extra capacity. It represents the set of techniques necessary to manage network bandwidth, delay, jitter and packet loss, the commonly used parameters. QoS is a major concern for ISPs in supporting multimedia applications. The two generally used QoS approaches are Integrated Services (IntServ) and Differentiated Services (DiffServ). The IntServ framework aims to provide per-flow QoS guarantees to individual application sessions, allowing end applications to request the QoS they require from the routers along their data path using the Resource Reservation Protocol (RSVP). However, IntServ suffers from scalability problems because of excessive overhead, whereas DiffServ is more scalable, manageable and easily deployable for service differentiation in IP networks.
Traditional IP forwarding is based on the layer-3 destination address, with lookups at every hop. The drawback of this routing is precisely that destination-address-based lookup at every hop. Moreover, IPv6 is the next-generation protocol for networks, with addresses enlarged to 128 bits from IPv4's 32 bits and a correspondingly larger header. The introduction of the flow label field was another major change relative to the IPv4 header, made for QoS purposes. In addition to the huge address space, IPv6 offers important enhancements with respect to built-in security, mobility, auto-configuration and enhanced multicast support.
MPLS (Multi-Protocol Label Switching) became popular early on because of its fast-forwarding advantage, which is no longer an advantage now that IP layer-3 routers are capable of equally fast forwarding. Today the main advantages of MPLS are a unified network architecture, a BGP-free core, QoS, traffic engineering, optimal traffic flow, etc. MPLS is called layer 2.5 because it operates between the L2 and L3 layers of the network. IPv6 over MPLS is considered an able combination of layer-2 and layer-3 protocols for packet routing. Various header compression and suppression technologies have been proposed to compress UDP, IP, RTP and TCP headers. The main purpose of compression technologies such as Compressed Real-Time Protocol (CRTP), Internet Protocol Header Compression (IPHC), Van Jacobson Header Compression (VJHC) and Robust Header Compression (ROHC) is to improve link efficiency in terms of reduced bandwidth consumption. Many approaches to header compression have been proposed and implemented. However, much remains to be done for header compression over MPLS.
This chapter gives a detailed survey of the header compression mechanisms that can be implemented over MPLS to improve Quality of Service, and thereby the overall performance of the network, as well as link efficiency. Because of the excessive overhead of the next-generation IP addressing protocol, header compression becomes necessary. The aim is to give researchers of QoS over MPLS an accessible understanding of the essence of header compression and of the various header compression technologies in place. Related work is surveyed and discussed so that researchers can easily grasp the state of the art and possible future work in this field. The next section gives the details of Multi-Protocol Label Switching (MPLS), its architecture and operation. This is followed by header compression techniques explained in detail, with their types, advantages and disadvantages discussed. The last section concludes the survey.
Multi-Protocol Label Switching (MPLS) has been introduced as an essential technology for next-generation packet networks. The main drivers of MPLS's evolution are high-speed packet switching and forwarding and large scalability, which help Internet Service Providers (ISPs) offer several services on a single network architecture. It was originally intended to improve the forwarding speed of routers; however, it now offers several important capabilities, such as traffic engineering, Virtual Private Networks (VPNs) and improved routing performance, at low cost and with minimal configuration overhead. In addition, MPLS can provide QoS guarantees with the ability to create one-to-many connections, solving the performance bottleneck caused by longest-prefix matching in IP networks. It also addresses the excessive network-management overhead of IP and the problems of overlay models such as IP over ATM. MPLS is viewed by some as one of the most important network developments of the 1990s, allowing routing with QoS constraints through signalling protocols such as Constraint-Based Routing over the Label Distribution Protocol (CR-LDP) or the Resource Reservation Protocol (RSVP) to establish paths adapted to those constraints.
The idea that MPLS is faster than IP is no longer valid, because routers now use Application-Specific Integrated Circuits (ASICs), making IP packet switching as fast as label switching. However, MPLS enables carrying protocols other than IP, known as Any Transport over MPLS (AToM). It also provides better IP-over-ATM integration, in addition to optimal traffic flow and traffic engineering.
To apply MPLS to an existing IP network, all routers in the network must be MPLS-enabled. MPLS is called a layer-2.5 technology because it functions between layer 2 (the data link layer) and layer 3 (the network layer): it is a packet-forwarding technology capable of mapping layer-3 routes to layer 2. The idea is to switch packets using 32-bit MPLS shim headers instead of longer IP addresses. Figure 2.1 depicts the syntax of an MPLS label.
illustration not visible in this excerpt
Figure 2.1: MPLS Label
The MPLS header, also known as the shim header, is inserted between the layer-2 and layer-3 headers, as shown in Figure 2.2. It is divided into four fields: Label (20 bits), EXP (3 bits), S or BoS (1 bit) and TTL (8 bits). The Label is used for lookup: it determines the next hop to which the packet is forwarded and the operation to be performed on the label stack. EXP contains experimental bits reserved for Quality of Service (QoS). S or BoS (Bottom of Stack) is 0 unless this is the bottom label in the stack. TTL is the Time-to-Live, used to avoid routing loops; it is decremented by 1 at each hop and indicates how far the header may travel along the route.
illustration not visible in this excerpt
Figure 2.2: MPLS Shim header position
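The four-field layout of the shim header can be illustrated by packing and unpacking the 32-bit word directly. The field widths and positions follow the description above; the example label, EXP, BoS and TTL values are arbitrary:

```python
# Packing and unpacking the 32-bit MPLS shim header:
# Label (20 bits) | EXP (3 bits) | BoS (1 bit) | TTL (8 bits).
def pack_shim(label: int, exp: int, bos: int, ttl: int) -> int:
    assert label < 2**20 and exp < 8 and bos < 2 and ttl < 256
    return (label << 12) | (exp << 9) | (bos << 8) | ttl

def unpack_shim(shim: int):
    return ((shim >> 12) & 0xFFFFF,   # Label
            (shim >> 9) & 0x7,        # EXP
            (shim >> 8) & 0x1,        # BoS
            shim & 0xFF)              # TTL

shim = pack_shim(label=22, exp=0, bos=1, ttl=64)
print(unpack_shim(shim))  # (22, 0, 1, 64)
```

The round trip demonstrates why the shim adds exactly 4 bytes per label: the entire forwarding state fits in one 32-bit word.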
A label is used for making forwarding decisions in the MPLS network instead of the IP destination address. A Label Switch Router (LSR) understands these labels and forwards labelled packets. There are three types of LSR: the ingress LSR, responsible for adding a label; the egress LSR, responsible for removing the label; and intermediate LSRs, responsible for correctly switching the packet. Ingress and egress LSRs are Label Edge Routers (LERs). A sequence of LSRs in an MPLS network forms a Label Switched Path (LSP).
MPLS flows are connection-oriented, and packets are routed along pre-configured LSPs, combining the label-swapping forwarding paradigm with network-layer routing. When a packet enters the MPLS domain, the ingress LER assigns it a label specifying the path the labelled packet must take within the domain. A different label is used at each hop, chosen by the LSR performing the forwarding operation. At the egress, the LSR receives the labelled packet, removes the label and forwards the packet based on its layer-3 address using normal IP routing. Figure 2.3 shows an example of forwarding IP packets using MPLS.
illustration not visible in this excerpt
Figure 2.3: IP and MPLS Network
The network in Figure 2.3 has three subnets: two IP-based networks and one MPLS network with four core routers called Label Switch Routers (LSRs) and two paths called Label Switched Paths (LSPs). LSPs are unidirectional for each pair of LSRs. The LSR that transmits with respect to the direction of data flow, shown as a line with an arrow pointing towards the egress router in the figure, is called upstream; the LSR that receives the MPLS packet is called downstream. The MPLS edge routers are called E-LSRs (Edge LSRs), with the first LSR denoted the ingress router and the last the egress router.
Sender A intends to send traffic to destinations B and C. A uses IP routing until the traffic reaches the MPLS network, after which the LER classifies each packet into a Forwarding Equivalence Class (FEC) and attaches a label. A FEC is a subset of packets that are all treated the same way by the router and mapped to a label. After the FEC is assigned, successive routers need not analyse the header further, which improves performance. To forward an unlabelled packet, MPLS first relates the FEC to an entry in its next-hop forwarding equivalence class table. This table contains the operations (pop, push, etc.), the next hop and, if needed, a new label. The resulting table is called the Label Forwarding Information Base (LFIB).
At the ingress router, the destination IP address is used to determine the next hop and the initial label for each packet, given as 22 and 15 in the figure above. Each LSR that receives the packet uses these labels to identify the LSP and determine the next hop and label. Once the packet reaches the egress router, the label is popped off and IP routing takes the packet to its destination.
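The ingress-push, core-swap, egress-pop sequence just described can be sketched end to end. The initial labels 22 and 15 follow the figure; the core swap label and the topology details are illustrative assumptions:

```python
# Walking a packet through an LSP: the ingress classifies into a FEC
# and pushes the initial label, core LSRs swap, the egress pops.
FEC_TABLE = {"B": 22, "C": 15}   # destination -> initial label (ingress)
SWAP = {22: 17, 17: None}        # label -> next label (None = pop at egress)

def traverse(dest: str):
    label = FEC_TABLE[dest]            # ingress LER: push initial label
    hops = [label]
    while SWAP.get(label) is not None:
        label = SWAP[label]            # core LSR: swap label
        hops.append(label)
    return hops                        # egress: pop; IP routing resumes

print(traverse("B"))  # [22, 17]
```

The destination IP address is consulted only once, at the ingress; every subsequent hop works from the label alone.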
Internet Protocol version 4 (IPv4) has been around since the 1980s and was intended to interconnect research universities and government facilities. Today, however, it has tremendous scalability problems owing to the phenomenal growth in Internet users, devices and applications. The solution to this problem is IPv6, which provides a larger address space along with several other features such as built-in security and better traffic routing. IPv4 uses 32-bit addresses, which can support about 4.3 billion devices; for this reason the IPv4 address space has been depleted. IPv6 uses 128-bit addresses, supporting 2^128, or approximately 3.4×10^38, addresses. The IPv6 base header is 40 bytes, divided into eight fields. Three tuples of the IPv6 header, namely the IP source address, IP destination address and flow label, represent the IPv6 flow signature. The Traffic Class field in the IPv6 header is used to identify different classes or priorities of IPv6 packets; based on this class, the network forwards the packet. It is an 8-bit field in which the first 6 bits are used for differentiated services, classifying the packet, and the last 2 bits for Explicit Congestion Notification (ECN), providing congestion control. It offers functionality similar to the IPv4 Type of Service (TOS) field. The first 6 bits can be used to create 64 distinct traffic classes, for QoS as well as MPLS label identification. IPv6 also has a 20-bit label field, the Flow Label, the same width as an MPLS label.
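The address-space figures quoted above can be computed directly:

```python
# IPv4 vs IPv6 address space, as cited in the text.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)             # 4294967296 (~4.3 billion)
print(f"{ipv6_addresses:.1e}")    # 3.4e+38
```

The jump from roughly 4.3 billion to 3.4×10^38 addresses is what eliminates IPv4-style address exhaustion.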
MPLS does not define a new QoS architecture but follows the DiffServ architecture applied in the MPLS environment. A flexible solution for supporting DiffServ over MPLS networks has been given by mapping between IP packets and FECs at the ingress router. MPLS is independent of both network-layer protocols and data-link-layer media, and an MPLS infrastructure requires minimal changes to the core to provide IPv6 services. IPv6 over MPLS is considered the best available and most efficient combination of layer-2 and layer-3 protocols for routing packets with protocol transparency. MPLS labels and IPv6 flow labels serve different network functions and are not interchangeable, because MPLS labels are used to create connection-oriented Label Switched Paths (LSPs) whereas IPv6 is a connectionless protocol.
MPLS labels are distributed by label distribution protocols and change at every hop, whereas Flow Labels identify end-user traffic and do not change. Moreover, various MPLS services use the shim header, and stacking it on the 40-byte IPv6 header would add mammoth overhead. However, as proposed in the literature, the IPv6 Flow Label can hold the MPLS label without increasing the complexity of the model, and all other shim header fields can be completely mapped into the IPv6 header by introducing IP Next Generation Label Switching (IPngLS). In addition to integrating MPLS and IPv6, IPngLS decreases complexity by eliminating extra headers and extra QoS mappings, since MPLS reserves only 3 bits to classify packets into QoS classes while IPv6 is fully compatible with Differentiated Services. It is suitable only for IPv6 networks, still needing MPLS to interoperate with IPv4 networks. The Label field of the MPLS shim header can be mapped on a 1-to-1 basis, since both fields are 20 bits long.
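The 1-to-1 mapping is possible precisely because both fields are 20 bits wide. A minimal IPngLS-style sketch (the word layout follows the IPv6 header: version, traffic class and flow label share the first 32-bit word; the helper names are hypothetical):

```python
# Carrying an MPLS label in the IPv6 Flow Label: both are 20 bits,
# so the label is copied verbatim, with no re-encoding.
FLOW_LABEL_BITS = 20

def set_flow_label(ipv6_word0: int, label: int) -> int:
    """Place a 20-bit MPLS label in the low 20 bits (Flow Label) of
    the first 32-bit word of an IPv6 header."""
    assert 0 <= label < 2 ** FLOW_LABEL_BITS, "label must fit in 20 bits"
    return (ipv6_word0 & ~0xFFFFF) | label

word0 = 0x60000000              # version 6, zero traffic class and flow label
print(hex(set_flow_label(word0, 0x12345)))  # 0x60012345
```

Since no bits are lost in either direction, the mapping is reversible, which is the property the 1-to-1 claim rests on.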
In many applications, the payload is almost equal in size to the header. For good bandwidth utilization, it is necessary to reduce unnecessary per-packet overhead. An IP packet is a combination of header and payload; header compression removes the redundant header information shared between consecutive packets and transmits the payload with a compressed header. Suppressing parts of the header yields a compressed header, which the receiver must restore at the receiving end.
Header compression relies on the many fields that are constant, or change only rarely, across consecutive packets of the same packet flow. If packets belong to the same flow moving to the same destination, fields such as next header, version, flow label, source address and destination address are identical and are thus unnecessary overhead in each packet. The information that does not change is sent at the start and updated at certain intervals or whenever a change occurs. Even though the header is an essential part of a packet for communication, it can still constitute excessive or redundant overhead, consuming bandwidth unnecessarily. Header compression or suppression makes it possible to save bandwidth, in addition to reducing packet loss and improving response time. Some header compression gains are given in Table 2.1:
illustration not visible in this excerpt
Table 2.1: Header Compression gains
illustration not visible in this excerpt
Figure 2.4: Header Format of IPv6
The IPv6 header consists of a base header and extension headers. The base header (40 bytes) contains the version, traffic class (priority), flow label, payload length, next header, hop limit, source address and destination address fields. Extension headers provide extra functionality, as shown in Figure 2.4.
illustration not visible in this excerpt
Figure 2.5: Header Compression in general
Figure 2.5 depicts the general concept of header compression: the packet, consisting of header plus payload, has its header compressed at the source before sending, and the compressed header is transmitted instead of the complete one. Compression at the source is performed by the compressor; at the destination, the decompressor restores the header.
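The compressor/decompressor pair of Figure 2.5 can be sketched with a shared context: static fields are sent once in full to establish context, and later packets carry only the fields that changed. The field names are illustrative, and this is a generic context-based scheme rather than any specific standard:

```python
# Minimal context-based header compression sketch.
class Compressor:
    def __init__(self):
        self.context = None
    def compress(self, header: dict):
        if self.context is None:                 # first packet: send in full
            self.context = dict(header)
            return ("FULL", dict(header))
        delta = {k: v for k, v in header.items() if self.context[k] != v}
        self.context.update(delta)
        return ("COMP", delta)                   # later packets: deltas only

class Decompressor:
    def __init__(self):
        self.context = None
    def decompress(self, kind, fields):
        if kind == "FULL":
            self.context = dict(fields)          # establish context
        else:
            self.context.update(fields)          # rebuild from shared context
        return dict(self.context)

c, d = Compressor(), Decompressor()
h1 = {"src": "A", "dst": "B", "seq": 1}
h2 = {"src": "A", "dst": "B", "seq": 2}
assert d.decompress(*c.compress(h1)) == h1
assert d.decompress(*c.compress(h2)) == h2       # only {"seq": 2} was sent
```

The sketch also shows the fragility discussed later: if a delta packet is lost, the two contexts diverge and every subsequent packet decompresses incorrectly until the context is refreshed.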
Packets involved in a header compression scheme are categorized as uncompressed packets, compressed packets and feedback packets. An uncompressed packet carries the complete header plus payload, whereas a compressed packet carries a compressed header plus payload. The information necessary to compress or decompress packets is stored in a context state database. Compressor and decompressor operate according to a well-defined protocol: the compressor compresses headers with respect to a reference state that it shares with the decompressor, so both must follow the same protocol. The compressor's job includes conveying context state updates to the decompressor whenever the flow changes and the context state database is updated. Context synchronization is maintained as long as the decompressor can successfully process these context state updates. The various header compression techniques are explained below.
Van Jacobson proposed the original transport header compression scheme, for the Transmission Control Protocol/Internet Protocol (TCP/IP), in RFC 1144; it is named Van Jacobson Header Compression (VJHC). In VJHC, the 40-byte TCP/IP packet header is reduced to fewer than 5 bytes in the average case. Van Jacobson (VJ) TCP header compression significantly reduces TCP protocol overhead in a noiseless environment, with smaller packets exhibiting better compression gains, and can achieve about a 50% compression ratio. TCP/IP VJHC is implemented with the Point-to-Point Protocol (PPP), compressing the TCP/IP header from 40 bytes down to 3-5 bytes. It was designed specifically to improve TCP/IP performance over low-speed serial links. It treats the physical link as two simplex links, one in each direction from compressor to decompressor, implying that there is no direct backward flow of information from the decompressor to the compressor. It was proposed to improve the interactive performance of TCP-based applications over low-speed links, with improvement in link utilization.
VJHC is performed on a per-hop basis at the link layer, maintaining connection state tables that contain, for each connection, the last uncompressed TCP and IP headers sent or received on that connection. The compressor allocates a unique Compression Identifier (CID) for the connection and, by saving the first TCP/IP headers sent, builds all successive headers by sending only the changes from the previous headers. The de-compressor at the destination reconstructs the header by applying the changes contained in the newly received compressed header to the saved header.
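The "send only the changes" idea can be sketched as a simple delta encoding over the fields VJHC tracks per connection. The field set and wire layout below are simplified assumptions, not RFC 1144's exact format, but they show how a 40-byte header collapses to a few bytes when only one or two fields change:

```python
# Hedged sketch of VJHC-style delta encoding for one TCP connection: a 1-byte
# change mask, followed by a 2-byte delta for each changed field. The field
# list and encoding are illustrative simplifications of RFC 1144.
import struct

FIELDS = ["seq", "ack", "win", "ip_id"]  # subset of fields tracked per connection

def compress(prev, cur):
    mask, body = 0, b""
    for i, f in enumerate(FIELDS):
        delta = cur[f] - prev[f]
        if delta:
            mask |= 1 << i                          # mark field as changed
            body += struct.pack("!H", delta & 0xFFFF)  # 2-byte delta
    return bytes([mask]) + body

def decompress(prev, blob):
    mask, off = blob[0], 1
    cur = dict(prev)
    for i, f in enumerate(FIELDS):
        if mask & (1 << i):
            (d,) = struct.unpack("!H", blob[off:off + 2])
            off += 2
            cur[f] = prev[f] + d                    # apply delta to saved header
    return cur
```

For a typical data packet where only the sequence number and IP identification advance, this encoding produces 5 bytes in place of the 40-byte uncompressed TCP/IP header.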
VJHC remains a very commonly used header compression method despite several other header compression mechanisms having been implemented. The compressor at the source is located between the network layer and the data link layer and relies on the framer for in-order packet delivery and error detection, without any feedback between compressor and de-compressor.
Figure 2.6 depicts the VJHC mechanism. In case of a lost or corrupted packet, an invalid uncompressed header will be created. All packets delivered after the lost or corrupted packet will be decompressed improperly, and thus will be discarded by the destination, requiring TCP re-transmission. The state is synchronized again only after the sender retransmits the original lost or corrupted packet; resynchronization is done through uncompressed retransmissions. There are hardly any experimental results supporting the impact of TCP/IP's VJHC over lossy communication channels, particularly for low bit-rate wireless and satellite links.
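Why a single lost compressed packet corrupts every subsequent header can be demonstrated with a toy model. Because the deltas are cumulative, skipping one leaves the de-compressor's state permanently offset until an uncompressed header arrives; the numbers below are arbitrary illustrative values:

```python
# Toy demonstration of context desynchronization: compressed packets carry
# cumulative deltas, so dropping one shifts every later reconstructed header
# until an uncompressed retransmission resynchronizes the de-compressor.

def decompress_stream(first_seq, deltas, lost_index):
    """Apply each delta in order, skipping the lost compressed packet."""
    state, out = first_seq, []
    for i, d in enumerate(deltas):
        if i == lost_index:
            continue  # compressed packet lost on the link
        state += d    # de-compressor applies the delta to its saved state
        out.append(state)
    return out

seqs = [5000 + 1000 * i for i in range(5)]        # true sequence numbers sent
deltas = [b - a for a, b in zip(seqs, seqs[1:])]  # what compressed packets carry

got = decompress_stream(seqs[0], deltas, lost_index=1)
# got[0] is correct, but every header after the loss is off by the missing
# delta, so those packets fail TCP's checksum and are discarded.
```

Only when the sender retransmits the packet with its full, uncompressed header does the de-compressor's saved state become valid again.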
Figure 2.6: Van Jacobson Header Compression (VJHC)
Space communications require protocols that are reliable and efficient; plain TCP performs poorly over space communication links. TCP was developed for terrestrial wired networks, whereas satellite communication involves long delays and a high Bit Error Rate (BER), producing unsatisfactory results. As the congestion control mechanism in TCP has the unnecessary overhead of rate control, it leads to low bandwidth utilization. As an example, the achieved throughput is only about 200 Kbps even though the satellite link capacity reaches 1.5 Mbps at a BER of 10-. Many TCP-enhanced protocols such as Scalable TCP (STCP), FAST AQM Scalable TCP (FAST TCP), eXplicit Control Protocol (XCP), Variable-structure congestion Control Protocol (VCP) and the Westwood protocol have been developed to improve its performance, and the most successful among these is SCPS. It contains four protocols: SCPS-FP, SCPS-TP, SCPS-NP, and SCPS-SP. In terms of the ISO network model, SCPS-FP is an application layer protocol, SCPS-TP is a transport layer protocol, SCPS-NP is a network layer protocol, and SCPS-SP sits between the transport layer and the network layer.
Compression techniques are available in the Space Communication Protocol Specification-Network Protocol (SCPS-NP) and the Space Communication Protocol Specification-Transport Protocol (SCPS-TP). The SCPS-NP header construction approach is based on the header compression concepts elaborated in RFC 1144 and uses a technique called 'capability driven header construction' as a means to control bit overhead, meaning that the packet carries only those header fields that are essential for that particular packet.
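Capability-driven header construction can be sketched as a presence bitmap plus only the fields a given packet actually needs. The field names and layout below are hypothetical, loosely modeled on the idea rather than on the SCPS-NP specification's actual field set:

```python
# Sketch of 'capability driven header construction': a mandatory field plus a
# presence bitmap selecting which optional fields this packet carries.
# The optional field set here is hypothetical, for illustration only.

OPTIONAL = ["timestamp", "qos", "expanded_addr"]  # hypothetical optional fields

def build_header(dst, **opts):
    bitmap = 0
    fields = [("dst", dst)]            # mandatory field, always present
    for i, name in enumerate(OPTIONAL):
        if name in opts:
            bitmap |= 1 << i           # advertise the field in the bitmap
            fields.append((name, opts[name]))
    return {"bitmap": bitmap, "fields": fields}

minimal = build_header(dst=7)                        # smallest possible header
full = build_header(dst=7, timestamp=12345, qos=2)   # only two options added
```

A packet that needs no options pays only for the bitmap and the mandatory field, which is how per-packet bit overhead is kept under control.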