Which two protocols work at the transport layer and ensure that data gets to the right applications?

The Fundamentals in Understanding Networking Middleware

Tammy Noergaard, in Demystifying Embedded Systems Middleware, 2010

4.5.7 Transport Layer Middleware

Transport layer protocols (see Figure 4.37) are typically responsible for point-to-point communication, meaning this code establishes, manages, and closes communication between two specific networked devices. Essentially, this layer is what allows the multiple networking applications residing above the transport layer to establish client–server, point-to-point communication links to another device via functionality such as:

Figure 4.37. Transport Middleware Layer Protocols

flow control that ensures packets are transmitted and received at a supportable rate

ensuring packets transmitted have been received and assembled in the correct order

providing acknowledgments to the transmitter upon reception of an error-free packet

requesting retransmission from the transmitter upon reception of a defective packet.

As shown in Figure 4.38, data received from the underlying network layer are generally stripped of the transport header, processed, and then delivered as messages to the upper layers. When the transport layer receives a message from an upper layer, the message is processed and a transport header is appended to it before it is passed down to the underlying layers for further processing and transmission.

Figure 4.38. Transport Layer Data-flow Diagram

The core communication mechanism used when establishing and managing communication between two devices at the transport layer is called a socket. Basically, any device that wants to establish a transport layer connection to another device must do so via a socket. So, there is a socket on either end of the point-to-point communication channel for the two devices to transmit and receive data. There are several different types of sockets, such as raw, datagram, stream, and sequenced-packet sockets, depending on the transport layer protocol.

Because one transport layer can manage multiple overlying applications, sockets are bound to ports with unique port numbers that are assigned to each application either by default via industry standard or by the developer. For example, an FTP client is assigned ports 20 and 21, an SMTP (email) client port 25, and an HTTP client port 80, to name a few. Each device has ports 0 through 65535 available for use, because ports are defined as 16-bit unsigned integers.
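As a concrete sketch of the binding described above, the fragment below uses Python's socket API; the port number 8080 is an arbitrary choice for illustration, not one of the standard assignments.

```python
import socket

# A socket must be bound to a port before the transport layer can
# deliver incoming segments to the application that owns it.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 8080))   # explicit port, chosen by the developer
print(sock.getsockname())        # ('127.0.0.1', 8080)
sock.close()

# Binding to port 0 instead asks the OS to pick an unused port:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
print(sock.getsockname()[1])     # some OS-assigned port number
sock.close()
```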

As shown in Figure 4.39, transport layer handshaking generally involves the server waiting for a client-side application to initiate a connection by 'listening' on the relevant transport layer socket. Incoming data to the server socket are processed, and the IP address and port number are used to determine whether the received packet is addressed to an overlying application on the server. Once a successful connection to a client is established for communication, the server continues 'listening' for other clients via a separate, independent socket.
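A minimal sketch of this listen/accept pattern, using Python's socket API (the port number and message below are invented for illustration; a thread stands in for the second device):

```python
import socket
import threading

def run_server(ready):
    # Server side: create a socket, bind it, and 'listen' for clients.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 50007))   # port chosen arbitrarily
    srv.listen(5)                    # queue up to 5 pending connections
    ready.set()
    # accept() returns a NEW socket dedicated to this client, while
    # srv remains available to keep listening for other clients.
    conn, addr = srv.accept()
    conn.sendall(b"hello")
    conn.close()
    srv.close()

ready = threading.Event()
t = threading.Thread(target=run_server, args=(ready,))
t.start()
ready.wait()

# Client side: connect() initiates the transport layer handshake.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))
print(cli.recv(1024))  # b'hello'
cli.close()
t.join()
```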

Figure 4.39. Transport Layer Client–Server Handshaking


URL: //www.sciencedirect.com/science/article/pii/B9780750684552000042

Guarding Against Network Intrusions

Thomas M. Chen, Patrick J. Walsh, in Network and System Security (Second Edition), 2014

Closing Ports

Transport layer protocols, namely, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), identify applications communicating with each other by means of port numbers. Port numbers 1 to 1023 are well known and assigned by the Internet Assigned Numbers Authority (IANA) to standardized services running with root privileges. For example, Web servers listen on TCP port 80 for client requests. Port numbers 1024 to 49151 are used by various applications with ordinary user privileges. Port numbers above 49151 are used dynamically by applications.
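These three ranges can be captured in a short helper function (a sketch; the function name is our own):

```python
def port_category(port: int) -> str:
    """Classify a port number into the IANA-defined ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit unsigned integers")
    if port == 0:
        return "reserved"
    if port <= 1023:
        return "well-known"   # assigned by IANA to standardized services
    if port <= 49151:
        return "registered"   # applications with ordinary user privileges
    return "dynamic"          # used dynamically by applications

print(port_category(80))     # well-known (HTTP)
print(port_category(27374))  # registered (the Sub7 default port)
print(port_category(60000))  # dynamic
```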

It is good practice to close ports that are unnecessary, because attackers can use open ports, particularly those in the higher range. For instance, the Sub7 Trojan horse is known to use port 27374 by default, and Netbus uses port 12345. Closing ports does not by itself guarantee the safety of a host, however. Some hosts need to keep TCP port 80 open for HyperText Transfer Protocol (HTTP), but attacks can still be carried out through that port.
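One simple way to check whether a TCP port is open on a host is to attempt a connection. The sketch below uses Python's `socket.connect_ex`, which returns 0 on success rather than raising; the helper name is our own, and a bare connect test like this is far less thorough than a real port scanner.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0   # 0 means connected

# Ports 27374 (Sub7) and 12345 (Netbus) should normally be closed:
for port in (27374, 12345):
    print(port, port_is_open("127.0.0.1", port))
```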


URL: //www.sciencedirect.com/science/article/pii/B9780124166899000034

Internet Protocols Over Wireless Networks

GEORGE C. POLYZOS, GEORGE XYLOMENOS, in Multimedia Communications, 2001

14.2.1 Internet Transport Layer Protocols

Transport layer protocols lie between user applications and the network. Although they offer user-oriented services, their design is based on assumptions about network characteristics. One choice offered by the Internet is the User Datagram Protocol (UDP), essentially a thin layer over IP. UDP offers a best-effort message delivery service, without any flow, congestion, or error control. Such facilities may be built on top of it, if needed, by higher-layer protocols or applications. Besides offering nearly direct access to IP, UDP is also useful for applications that communicate over Local Area Networks (LANs). Because wired LANs are typically extremely reliable and have plenty of bandwidth available, UDP's lack of error and congestion control is unimportant there.
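A minimal sketch of UDP's best-effort exchange in Python follows; on the loopback interface the datagram virtually always arrives, but UDP itself promises nothing of the sort.

```python
import socket

# Receiver: bind a datagram socket; no connection setup is needed.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # port 0: let the OS pick one
addr = rx.getsockname()

# Sender: sendto() just hands a datagram to IP. There is no handshake,
# no acknowledgment, and no retransmission if the datagram is lost.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"best-effort message", addr)

data, sender = rx.recvfrom(1024)
print(data)  # b'best-effort message'
tx.close()
rx.close()
```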

Even though wired long-haul links have been exhibiting decreasing error rates (due to widespread use of optical fiber), the statistical multiplexing of increasing traffic loads over wide-area links has replaced errors with congestion as the dominant loss factor on the Internet. Congestion is caused by temporary overloading of links with traffic that causes transmission queues at network routers to build up, resulting in increased delays and eventually packet loss. When such losses occur, indicating high levels of congestion, the best remedy is to reduce the offered load to empty the queues and restore traffic to its long-term average rate [8].

The Transmission Control Protocol (TCP) is the other common transport layer protocol choice offered on the Internet, and the most popular one, since it supports many additional facilities compared to UDP. It offers a connection-oriented byte stream service that appears to applications similar to writing (reading) to (from) a sequential file. TCP supports reliable operation, flow and congestion control, and segmentation and reassembly of user data. TCP data segments are acknowledged by the receiving side in order. When arriving segments have a gap in their sequence, duplicate acknowledgments are generated for the last segment received in sequence. Losses are detected by the sender either by timing out while waiting for an acknowledgment, or by a series of duplicate acknowledgments implying that the next segment in the sequence was lost in transit. Since IP provides an end-to-end datagram delivery service, TCP resembles a Go-Back-N link layer protocol transmitting datagrams instead of frames. On the other hand, IP can reorder datagrams, so TCP cannot assume that all gaps in the sequence numbers mean loss. This is why TCP waits for multiple duplicate acknowledgments before deciding to assume that a datagram was indeed lost.

During periods of low traffic or when acknowledgments are lost, TCP detects losses by the expiration of timers. Since Internet routing is dynamic, a timeout value for retransmissions is continuously estimated based on the averaged round-trip times of previous data/acknowledgment pairs. A good estimate is very important: Large timeout values delay recovery after losses, while small values may cause premature timeouts and thus retransmissions to occur when acknowledgments are delayed, even in the absence of loss. Recent versions of TCP make the key assumption that the vast majority of perceived losses are due to congestion [8], thus combining loss detection with congestion detection. As a result, losses cause, apart from retransmissions, the transmission rate of TCP to be reduced to a minimum and then gradually to increase so as to probe the network for the highest load that can be sustained without causing congestion. Since the link and network layers do not offer any indications as to the cause of a particular loss, this assumption is not always true, but it is sufficiently accurate for the low-error-rate wired links. A conservative reaction to congestion is critical in avoiding congestive collapse on the Internet with its ever-increasing traffic loads [8].
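The timeout estimation described here can be sketched with the classic smoothed-RTT algorithm (Jacobson/Karels, later standardized in RFC 6298). The RTT samples below are invented, and real implementations also clamp the timeout to a minimum value, which is omitted here.

```python
# Smoothed RTT / retransmission-timeout estimator used by TCP
# implementations. alpha = 1/8 and beta = 1/4 are the standard values.
ALPHA, BETA = 1 / 8, 1 / 4

def update_rto(srtt, rttvar, sample):
    """Fold one measured round-trip time into the running estimates."""
    if srtt is None:                 # first measurement
        srtt, rttvar = sample, sample / 2
    else:                            # subsequent measurements
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar          # timeout: mean plus four deviations
    return srtt, rttvar, rto

srtt = rttvar = None
for sample in [0.100, 0.120, 0.300, 0.110]:   # RTT samples in seconds
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample:.3f}  srtt={srtt:.3f}  rto={rto:.3f}")
```

Note how the deviation term makes the timeout grow quickly after a delayed acknowledgment (the 0.300 s sample) and shrink only gradually, which is exactly the conservatism the text describes: large enough to avoid premature timeouts, small enough not to delay recovery excessively.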


URL: //www.sciencedirect.com/science/article/pii/B9780122821608500153

Transport layer systems

Dimitrios Serpanos, Tilman Wolf, in Architecture of Network Systems, 2011

Transport layer protocols

Several different transport layer protocols exist to accommodate different application layer needs. By combining different features (e.g., the ones discussed previously), different types of transport layer protocols can be created. Despite the potential diversity in transport layer protocols, two protocols dominate in the Internet.

User Datagram Protocol (UDP): This connection-less protocol uses datagrams to send messages from one end system to another. Because UDP operates in a connection-less mode, no prior connection setup is necessary to transmit data. UDP does not provide any services beyond multiplexing and demultiplexing. Datagrams may be delayed, lost, and reordered. In addition, use of the checksum field present in the UDP header is optional. Therefore, UDP is considered a “bare bones” transport layer protocol. UDP is often used for applications that tolerate packet loss but need low delay (e.g., real-time voice or video). The full details of UDP are specified in RFC 768 [142].

Transmission Control Protocol: The transmission control protocol operates in connection-oriented mode. Data transmissions between end systems require a connection setup step. Once the connection is established, TCP provides a stream abstraction that provides reliable, in-order delivery of data. To implement this type of stream data transfer, TCP uses reliability, flow control, and congestion control. TCP is widely used in the Internet, as reliable data transfers are imperative for many applications. The initial ideas were published by Cerf and Kahn [25]. (The authors of this paper received the highest honor in computer science, the Turing Award, in 2004 in part for their work on TCP.) More details on TCP implementation are available in several RFCs, including RFC 793 [145].
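The stream abstraction can be illustrated with a short Python sketch: three separate sends arrive as one ordered byte stream, with the boundaries between writes not preserved. The addresses and messages here are invented, and a thread stands in for the second end system.

```python
import socket
import threading

def server(addr_ready, addr_box, result):
    # Server: accept one connection and read the whole byte stream.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))       # OS-assigned port
    srv.listen(1)
    addr_box.append(srv.getsockname())
    addr_ready.set()
    conn, _ = srv.accept()
    chunks = []
    while True:
        data = conn.recv(1024)
        if not data:                 # peer closed the connection
            break
        chunks.append(data)
    result.append(b"".join(chunks))
    conn.close()
    srv.close()

addr_ready, addr_box, result = threading.Event(), [], []
t = threading.Thread(target=server, args=(addr_ready, addr_box, result))
t.start()
addr_ready.wait()

# Client: three separate writes become one reliable, in-order stream;
# TCP preserves the order of bytes, not the send() call boundaries.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(addr_box[0])
for piece in (b"in-", b"order ", b"bytes"):
    cli.sendall(piece)
cli.close()
t.join()
print(result[0])  # b'in-order bytes'
```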

An example of another, less commonly used transport layer protocol follows.

Stream Control Transmission Protocol (SCTP): This transport layer protocol combines some aspects of UDP and TCP. SCTP provides reliability similar to TCP but maintains a separation between data transmissions (called “chunks”) similar to datagrams in UDP. Further, SCTP supports multiple parallel streams of chunks (for both data and control information). More information on SCTP can be found in RFC 4960 [170].

Other transport layer protocols are used for media streaming, transport layer multicast, improved TCP congestion control, high-bandwidth transmissions, etc. Many of these protocols employ some variations of the basic functionalities of the transport layer discussed earlier.


URL: //www.sciencedirect.com/science/article/pii/B9780123744944000086

The Communication View

Richard John Anthony, in Systems Programming, 2016

3.15.1 Questions

1.

Determine which transport layer protocol is most appropriate for each of the following applications; justify your answers.

(A)

Real-time streaming

(B)

File transfer that is only used in a local area network

(C)

File transfer that is used across the Internet

(D)

A clock synchronizing service for all computers in a local network

(E)

An eCommerce application

(F)

A local file-sharing application in which clients need to automatically locate the server and cannot rely on a name service being present

2.

Determine which of the following sequences of socket primitives are valid for achieving communication, and state whether the implied communication is based on UDP or TCP.

(A)

create socket (client side), sendto (client side), create socket (server side), recvfrom (server side), close (server side), close (client side)

(B)

create socket (client side), create socket (server side), bind (server side), listen (client side), connect (client side), accept (server side), send (client side), recv(server side), shutdown (server side), shutdown (client side), close (server side), close (client side)

(C)

create socket (client side), create socket (server side), bind (server side), sendto (client side), recvfrom (server side), close (server side), close (client side)

(D)

create socket (client side), create socket (server side), bind (server side), listen (server side), connect (client side), accept (server side), send (server side), recv(client side), shutdown (server side), shutdown (client side), close (server side), close (client side)

(E)

create socket (client side), create socket (server side), bind (server side), listen (server side), connect (server side), accept (client side), send (server side), recv(client side), shutdown (server side), shutdown (client side), close (server side), close (client side)

(F)

create socket (client side), create socket (server side), sendto (client side), bind (server side), recvfrom (server side), close (server side), close (client side)

3.

Explain the fundamental difference between RPC and RMI.

4.

Explain the main differences between constructed forms of communication such as RPC or RMI and lower-level communication based on the socket API over TCP or UDP.

5.

Identify a way in which communication deadlock can occur when using the socket API primitives to achieve process-to-process communication, and explain a simple way to avoid it.

6.

Identify one benefit and one drawback for each of the two socket IO modes (blocking and nonblocking).


URL: //www.sciencedirect.com/science/article/pii/B9780128007297000030

The SSH Server Basics

In Next Generation SSH2 Implementation, 2009

Connection Protocol

The connection protocol was designed to operate over the transport layer protocol and the user authentication protocol. It manages interactive login sessions, remote execution of commands, and forwarding of TCP/IP and X11 connections. Note that all of the above require the opening of a channel. Multiple channels can be multiplexed over a single connection. Each channel is identified by two numbers, one at each end of the channel. The request to open a channel includes the number the sender uses for that channel. Once the numbers for the communication have been established, the window size must be exchanged. When a new channel is opened, the operations carried out are the following:

Sending of a request together with the number locally chosen to manage the channel;

The receiver decides whether to accept the channel-opening request; if yes, it responds by giving its identification number for the open channel.

After the two ends are connected, the actual data transfer takes place. As anticipated, the window size is sent so that the party involved in the communication can send data without changing the size of this window. After reception of the first message, the window size can be changed by sending a new message containing the exact number of additional bytes that may be sent. The connection layer implementation must ensure that:

It must not advertise a window size larger than it is able to manage;

It must not generate packets bigger than those the transport layer protocol can manage.

When the communication must be ended, an SSH_MSG_CHANNEL_EOF message is sent to signal the end of data on the channel. No response is specified for this type of message, although applications typically send an EOF of their own. Note that data may still be sent after this message. To actually close the channel, an SSH_MSG_CHANNEL_CLOSE message must be sent. Once this message is exchanged, the channel is considered closed and the number associated with it can be reused for another communication. These messages do not occupy space in the available window.

Now let's return to the earlier examples of uses of the connection protocol. A session is the remote running of a program, such as a shell, an application, a system command, or any type of subsystem. The program in question may have a tty, or it may involve X11 forwarding. For the latter, it is recommended that the cookie sent for X11 authentication be fake and random, but checked and replaced with the real cookie once the connection has been established. X11 channels do not depend on the session, and closing the session does not imply closing the X11 channels. The server implementation must reject requests to open an X11 channel if no X11 forwarding request has been performed. Another example involves the remote execution of shells or commands.

To complete this analysis, there remains the topic of port forwarding over TCP/IP. When port forwarding is performed, it is not necessary to specify the end point (i.e., the service user), but within the SSH protocol a specific request must be made that also defines the end user. The address and port to which the bind is to be performed must thus be specified. Implementations should only enable port forwarding for users that have authenticated as privileged users.
When a connection arrives on a port for which a TCP/IP forwarding request has been made, a channel is opened. Implementations must reject channel-open messages for which no prior TCP/IP forwarding request was made giving that port number.
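The per-channel window bookkeeping the connection protocol performs can be sketched as a toy model (RFC 4254-style semantics; the class, field names, and numbers are illustrative, not from any real SSH implementation):

```python
class Channel:
    """Toy model of one SSH channel's flow-control state."""

    def __init__(self, local_id, peer_id, peer_window, peer_max_packet):
        self.local_id = local_id            # number we chose for the channel
        self.peer_id = peer_id              # number the peer chose
        self.peer_window = peer_window      # bytes we may still send
        self.peer_max_packet = peer_max_packet

    def send(self, payload: bytes) -> int:
        """Send at most one packet's worth of data within the window."""
        n = min(len(payload), self.peer_window, self.peer_max_packet)
        self.peer_window -= n               # data consumes window space
        return n                            # bytes actually sent

    def window_adjust(self, nbytes: int):
        """Peer granted us nbytes more window (a window-adjust message)."""
        self.peer_window += nbytes

ch = Channel(local_id=1, peer_id=7, peer_window=10, peer_max_packet=8)
print(ch.send(b"x" * 20))   # 8  (limited by the maximum packet size)
print(ch.send(b"x" * 20))   # 2  (limited by the remaining window)
print(ch.send(b"x" * 20))   # 0  (window exhausted; must wait)
ch.window_adjust(5)
print(ch.send(b"x" * 20))   # 5
```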


URL: //www.sciencedirect.com/science/article/pii/B9781597492836000076

Network Security

James Joshi, Prashant Krishnamurthy, in Information Assurance, 2008

TCP SYN Flood Attack

As mentioned earlier, TCP is the most common transport layer protocol. It is used by many application layer protocols such as the HyperText Transfer Protocol (HTTP) and FTP. TCP was designed to provide reliable service on top of the unreliable network layer provided by IP. Among other things, TCP is connection oriented, and it carefully maintains buffers, windows, and other resources to count segments and track lost segments. When host A wants to connect to host B, a "three-way" handshake occurs to set up the connection. First, host A sends a TCP segment with the SYN flag set (SYN is one of six flag bits in TCP used to convey control information). Host B acknowledges the SYN segment with its own TCP segment with both the SYN flag and the ACK flag (used to acknowledge receipt of the SYN segment) set. Host A completes the handshake with a TCP segment with the ACK flag set. Then data transfer begins.

Whenever a server receives a SYN segment from a client, it sets aside some resources (e.g., memory) anticipating a completed handshake and subsequent data transfer. Because resources at a server are limited, only a set number of connections can be accepted; other requests are dropped. Oscar can make use of this "feature" to deny service to legitimate hosts by sending a flood of crafted SYN segments to a server, possibly with spoofed source IP addresses. The server responds with SYN-ACK segments and waits for completion of the handshakes, which never happens. Meanwhile, legitimate requests for connection are dropped. Such an attack is called a SYN flood attack and has been the cause of denial of service to popular web servers in recent years. Note that Oscar primarily exploits a feature of a communications protocol to launch denial of service (DoS). The absence of authentication of the source IP address makes such attacks difficult to block, since it is hard to separate legitimate requests from malicious ones.
Similarly, Internet Control Message Protocol (ICMP) and other protocols can be used to launch floods that result in DoS. Distributed DoS (DDoS) attacks have made headlines in recent years by bringing down several popular web sites as well as launching attacks on root DNS servers. A taxonomy of DDoS attacks is available in Mirkovic and Reiher [2].
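The resource-exhaustion mechanism behind a SYN flood can be modeled with a toy sketch. No real packets are involved; the backlog size, names, and addresses are all illustrative.

```python
# Toy model: the server reserves one slot per half-open connection.
# A flood of handshakes that are never completed fills the table,
# so SYNs from legitimate clients are dropped.
BACKLOG = 128                      # fixed resources for half-open conns

half_open = set()

def on_syn(src):
    """Handle an incoming SYN: reserve state, or drop if the table is full."""
    if len(half_open) >= BACKLOG:
        return "dropped"
    half_open.add(src)             # server now waits for the final ACK
    return "syn-ack sent"

def on_ack(src):
    """Final ACK of the handshake frees the half-open slot."""
    half_open.discard(src)

# Attacker sends SYNs from spoofed addresses and never ACKs:
for i in range(BACKLOG):
    on_syn(("spoofed", i))

print(on_syn(("legitimate-client", 1234)))  # dropped
```

Real TCP stacks also time out half-open connections and may use SYN cookies, so the exhaustion is not as absolute as in this sketch; the point is only that the server commits state before the handshake completes.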


URL: //www.sciencedirect.com/science/article/pii/B9780123735669500045

Introduction

James Farmer, ... Weyl Wang, in FTTx Networks, 2017

FTTH Data Link and Network Protocols

As we move up above the strictly physical level, we find a few different transport layer protocols in use. Much of this book is devoted to describing these protocols and how to best use them. So here we present a short overview of the PON protocols in use today.

Fig. 1.3 is an attempt to summarize the FTTH systems in use now, along with one or two included mainly for historical reasons. When we consider what we usually think of as FTTH systems, meaning binary data down- and upstream on the same fiber, optionally combined with broadcast, there are two chains of standards to follow. The earliest was the ITU's APON (ATM over PON) system, specified in the 1990s. It established the general architectural elements of subsequent systems and saw some limited experimental deployment, but we know of no current systems in use. It was based on ATM (asynchronous transfer mode, a layer 2 protocol essentially competing with Ethernet), a protocol still used extensively in the telephone industry, among others. There was no provision for broadcast video, and at the time IPTV (video transmitted using Internet protocol) was not really feasible.

Figure 1.3. FTTH systems then and now.

APON was modified to free the 1550 nm wavelength for a broadcast video overlay, by putting the downstream data at 1490 nm. This produced the BPON standard, which offered a maximum downstream speed of 622 Mb/s and an upstream speed of 155 Mb/s, and which has been deployed rather widely in much of Verizon's FiOS network, along with other installations.

Operators perceived the need for higher speeds and Ethernet was becoming the apparent winner of the layer 2 race, so the ITU ratified the G.984 GPON standard in 2004. It started with BPON, increased the maximum downstream speed to 2.488 (rounded to 2.5) Gb/s and the upstream to 1.2 Gb/s, and added Ethernet and TDM (T1/E1) transport to the ATM transport already in the standard. Adding these additional layer 2 transport standards, though, made implementation of the standard extremely complex, and as a consequence, not much happened commercially for a year or two. Then people realized that they really did not need all of these transport standards. Ethernet, which began as an enterprise standard for corporate data networking, had improved in many respects, and costs were dropping precipitously. So the Ethernet portion of GPON was built into chip sets, effectively abandoning the other parts. This made a commercially viable product possible, and since then a number of operators, many with telephone backgrounds, have deployed GPON.

Meanwhile, the 802.3 subcommittee of the IEEE, which was responsible for the Ethernet standard, had been adding its own version of FTTH to the Ethernet standard. The task force charged with developing the original standard was formally known as 802.3ah and at times you will see the standard so referenced. The standard was ratified in 2004 (the same year as GPON), and the next time the 802.3 Ethernet standard was updated, the 802.3ah work was incorporated into it. Typical of IEEE Ethernet work, the EPON standard specified only the minimum items necessary to implement the PON standard. Things such as detailed management protocols and encryption, which were built into the GPON standard, were not incorporated into EPON. Rather, it was left to commercial interests to adopt existing specifications to fill in these gaps. This meant that the EPON standard was much easier to implement than was the GPON standard, so by the middle of the decade chip sets for EPON became available, and some manufacturers, who had previously produced similar proprietary PON systems, switched to EPON. EPON gained quite a toe-hold in Asia, which was hungry for improved telecommunications. It also gained adherents in the Americas and Europe, though many early adopters in those areas waited for GPON.

EPON has been known by several other terms, including GE-PON (Gigabit Ethernet PON) and EFM, Ethernet in the First Mile (the original 802.3ah group saw the network closest to the subscriber as the first mile, whereas many people, authors included, tend to think of this as the last mile—it depends on where you see the system starting). Besides the PON standard, 802.3ah defined P2P Ethernet to the home, either on fiber or twisted pair.

A while after 802.3ah was ratified, several other related activities sprung up. A working group was formed under the 802.3av name to consider increasing the speed to 10 Gb/s. This group has subsequently finished its work, and 10 Gb/s EPON (10G-EPON) has been incorporated into the 802.3 Ethernet standard. Another group, SIEPON, was formed to fill in some of the missing pieces of the EPON standard, in order to make it more robust for commercial applications. These missing pieces were built partially on the work of the Metro Ethernet Forum (MEF), which was formed from the old DSL Forum to promote Ethernet as more than an enterprise solution, by adding features to give Ethernet some ATM-like capabilities at a much lower cost.

The cable television industry in the US became interested in EPON as a way to compete with telephone companies installing GPON, and as the next architecture beyond the tried-and-true HFC. Some in the industry were concerned, though, about certain basic management philosophies and techniques that had gone into EPON, which were in conflict with the management of DOCSIS cable modem systems, which had captured the greatest part of the residential data business. Large cable operators had developed very complete management systems around the DOCSIS system, and they perceived that adapting EPON to use these management systems would make it easier to incrementally add EPON to their HFC systems. It is impossible, both physically and financially, to change out HFC systems for FTTH systems overnight. So the concept was to build new plant, possibly in greenfields and into business regions where cable was starting to penetrate, with FTTH, while continuing to operate existing HFC plant for a number of years. Accordingly, Cable Television Laboratories (CableLabs) initiated the DOCSIS provisioning of EPON (DPoE—an acronym of acronyms) work, which as of this writing has produced two revisions defining how to adapt EPON to be managed by existing DOCSIS management systems.

Yet more activities were underway. Another perception of how to move HFC to FTTH was to build physical networks according to the FTTH concepts, as shown earlier in this chapter, but to retain the existing DOCSIS infrastructure at the ends. The Society of Cable Telecommunications Engineers (SCTE) undertook this standardization effort. Rather than terminate the network with OLTs at the hub (or headend) and conventional ONTs at the home, the network would be terminated at the headend with equipment identical to that used in HFC systems, with the possible exception that HFC uses a lot of 1310 nm downstream transmission, and in all FTTH systems this wavelength is reserved for upstream transmission. Downstream transmission would be on 1550 nm (a particularly convenient wavelength because fiber loss is extremely low, and economical optical amplifiers are practical). Two options were specified for the upstream wavelength: 1310 nm, for operators who wanted the most economical equipment and did not expect to put PON data on the same fiber, and 1610 nm, which would let the fiber network also accept PON transmissions. The first standard was ratified through SCTE and then through ANSI, which is the top-level standards organization in the United States. This ratification was completed in 2010.

Not on the chart, there is yet another effort currently under consideration by the IEEE, which is intended to allow cable operators to adapt their existing HFC plant to PON gradually, by replacing nodes with a new device which would convert the PON optical signals to electrical format for transmission to the home on existing coax. This effort is known as EPON Protocol over Coax (EPoC). In turn, it is planning to use the physical layer of another CableLabs initiative, DOCSIS 3.1, an ambitious effort to provide much, much higher bandwidth over HFC networks by expanding the RF bandwidth used by DOCSIS by using more efficient modulation methods. This is one of the hazards of writing a book on such a contemporary topic; by the time you read it, there may have been some major shifts in direction of the industry that your authors were not smart enough to foresee.


URL: //www.sciencedirect.com/science/article/pii/B9780124201378000019

Communication Network Architecture

Vijay K. Garg, Yih-Chen Wang, in The Electrical Engineering Handbook, 2005

6.1.2 TCP/IP Network Architecture

The TCP/IP network architecture is also referred to as the Internet architecture. The Transmission Control Protocol (TCP) is a transport layer protocol, and the Internet Protocol (IP) is a network layer protocol. Both protocols evolved from an earlier packet-switching network called ARPANET, which was funded by the Department of Defense. The TCP/IP network has been the center of many networking technologies and applications. Many network protocols and applications run on top of the TCP/IP protocol suite; for example, Voice over IP (VoIP) and video conferencing applications using MBONE run over the TCP/IP network. The TCP/IP network has five layers corresponding to those of the OSI reference model. Figure 6.2 shows the layers of the TCP/IP network and some applications that might exist on TCP/IP networks. The standards organization for TCP/IP-related standards is the Internet Engineering Task Force (IETF), which issues Request for Comments (RFC) documents. Normally, the IETF requires that a prototype implementation be completed before an RFC can be submitted for comments.

FIGURE 6.2. TCP/IP Network Architecture and Applications


URL: //www.sciencedirect.com/science/article/pii/B9780121709600500748

Which two protocols are used in the transport layer?

The two main transport layer protocols are the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP).

What are the two main transport protocols used on the Internet?

The User Datagram Protocol (UDP) and TCP are the basic transport-level protocols for making connections between Internet hosts.

Why TCP and UDP are both required in transport layer?

Differences between TCP and UDP: both are transport layer protocols. TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol. This means that TCP requires a connection to be established prior to communication, while UDP does not.

Is TCP a transport layer protocol?

Transmission Control Protocol (TCP). In terms of the OSI model, TCP is a transport-layer protocol. It provides a reliable virtual-circuit connection between applications; that is, a connection is established before data transmission begins.
