Explain the elements of transport protocol.


Answer:

The elements of transport protocols are:

  1. Addressing.
  2. Connection Establishment.
  3. Connection Release.
  4. Error control and flow control.
  5. Multiplexing.
  6. Crash recovery.

1. Addressing

  • When an application (e.g., a user) process wishes to set up a connection to a remote application process, it must specify which one to connect to. (Connectionless transport has the same problem: to whom should each message be sent?)
  • The method normally used is to define transport addresses to which processes can listen for connection requests.
  • In the Internet, these endpoints are called ports.
  • We will use the generic term TSAP (Transport Service Access Point) to mean a specific endpoint in the transport layer.
  • The analogous endpoints in the network layer (i.e., network layer addresses) are, not surprisingly, called NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.
  • Fig.1 illustrates the relationship between the NSAPs, the TSAPs, and a transport connection.
  • Application processes, both clients and servers, can attach themselves to a local TSAP to establish a connection to a remote TSAP.
  • These connections run through NSAPs on each host, as shown.
  • The purpose of having TSAPs is that in some networks, each computer has a single NSAP, so some way is needed to distinguish multiple transport endpoints that share that NSAP.
Fig.1. TSAPs, NSAPs, and transport connections

A possible scenario for a transport connection is as follows:

  1. A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call.
    How a process attaches itself to a TSAP is outside the networking model and depends entirely
    on the local operating system. A call such as our LISTEN might be used, for example.
  2. An application process on host 1 wants to send an email message, so it attaches itself to TSAP
    1208 and issues a CONNECT request. The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as the destination. This action ultimately results in a transport connection being established between the application process and the server.
  3. The application process sends over the mail message.
  4. The mail server responds to say that it will deliver the message.
  5. The transport connection is released.
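
The five steps of this scenario can be sketched with Python's socket API. The sketch below is only an illustration: it runs both "hosts" on the local machine, reuses the TSAP numbers from the scenario as TCP port numbers, and makes up the message text and the reply.

```python
import socket
import threading

# "Host 2": attach to TSAP 1522 and wait for an incoming call (step 1).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 1522))          # TSAP 1522 on "host 2"
srv.listen(1)                          # the LISTEN call from the scenario

def mail_server():
    conn, _ = srv.accept()             # transport connection established
    message = conn.recv(1024)          # step 3: the mail message arrives
    conn.sendall(b"OK, message will be delivered")  # step 4: server responds
    conn.close()                       # step 5: connection released

threading.Thread(target=mail_server, daemon=True).start()

# "Host 1": attach to TSAP 1208 and issue a CONNECT request (step 2).
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
cli.bind(("127.0.0.1", 1208))          # source TSAP 1208 on "host 1"
cli.connect(("127.0.0.1", 1522))       # destination TSAP 1522 on "host 2"
cli.sendall(b"Hello, here is the mail message")   # step 3
print(cli.recv(1024).decode())         # step 4: the server's reply
cli.close()                            # step 5: connection released
```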

Many of the server processes that can exist on a machine will be used only rarely. It is wasteful to
have each of them active and listening to a stable TSAP address all day long. An alternative scheme is
shown in Fig. 2 in a simplified form. It is known as the initial connection protocol. Instead of every
conceivable server listening at a well-known TSAP, each machine that wishes to offer services to
remote users has a special process server that acts as a proxy for less heavily used servers. This
server is called inetd on UNIX systems. It listens to a set of ports at the same time, waiting for a
connection request. Potential users of a service begin by doing a CONNECT request, specifying the
TSAP address of the service they want. If no server is waiting for them, they get a connection to the
process server, as shown in Fig. 2.

Fig.2 How a user process in host 1 establishes a connection with a mail server in host 2 via a process server

After it gets the incoming request, the process server spawns the requested server, allowing it to
inherit the existing connection with the user. The new server does the requested work, while the
process server goes back to listening for new requests, as shown in Fig. 2(b). This method is only
applicable when servers can be created on demand.
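
A much simplified sketch of such a process server, in the spirit of inetd, might look as follows in Python. The two services and their port numbers are hypothetical; the point is that a single process listens on a set of ports and spawns the requested server only when a CONNECT request actually arrives, handing it the existing connection.

```python
import selectors
import socket
import threading
import time

# Hypothetical services that the process server can start on demand.
def echo_service(conn):
    conn.sendall(b"echo: " + conn.recv(1024))
    conn.close()

def time_service(conn):
    conn.sendall(time.ctime().encode())
    conn.close()

SERVICES = {7007: echo_service, 7013: time_service}   # TSAP (port) -> service handler

sel = selectors.DefaultSelector()
for port, handler in SERVICES.items():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    sel.register(s, selectors.EVENT_READ, handler)     # listen on a set of ports at once

while True:
    for key, _ in sel.select():                        # wait for any CONNECT request
        conn, _ = key.fileobj.accept()                 # the request lands at the process server
        # "Spawn" the requested server and let it inherit the existing connection.
        threading.Thread(target=key.data, args=(conn,), daemon=True).start()
```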

2. Connection Establishment.

With packet lifetimes bounded, it is possible to devise a foolproof way to establish connections
safely. Packet lifetime can be bounded to a known maximum using one of the following techniques:

  • Restricted subnet design
  • Putting a hop counter in each packet
  • Timestamping each packet

A connection can be established using a three-way handshake. This establishment protocol does not
require both sides to begin sending with the same sequence number.

  • The first technique includes any method that prevents packets from looping, combined with
    some way of bounding delay, including congestion, over the longest possible path. This is
    difficult, given that internets may range from a single city to international in scope.
  • The second method consists of having the hop count initialized to some appropriate value
    and decremented each time the packet is forwarded; the network protocol simply discards
    any packet whose hop counter becomes zero (a small sketch of this idea follows the list).
  • The third method requires each packet to bear the time it was created, with the routers
    agreeing to discard any packet older than some agreed-upon time.
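
The second technique can be illustrated with a small Python sketch. The packet format and hop limit below are made up for illustration; the real-world counterpart of this idea is the TTL/hop-limit field in IP.

```python
# Each router decrements the hop counter and discards the packet when it reaches
# zero, which bounds how long a packet can wander through the network.

def forward(packet):
    packet["hops"] -= 1          # decremented each time the packet is forwarded
    if packet["hops"] <= 0:
        return None              # the network protocol simply discards the packet
    return packet                # otherwise pass it on to the next router

pkt = {"data": b"hello", "hops": 3}   # hop counter initialized to some appropriate value
for router in range(5):
    pkt = forward(pkt)
    if pkt is None:
        print(f"packet discarded at router {router + 1}")
        break
```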

Fig. (a) shows the three-way handshake introduced by Tomlinson (1975).

  • This establishment protocol involves one peer checking with the other that the connection
    request is indeed current. Host 1 chooses a sequence number, x, and sends a CONNECTION
    REQUEST segment containing it to host 2. Host 2 replies with an ACK segment
    acknowledging x and announcing its own initial sequence number, y.
  • Finally, host 1 acknowledges host 2’s choice of an initial sequence number in the first data
    segment that it sends

In Fig. (b), the first segment is a delayed duplicate CONNECTION REQUEST from an old
connection.

  • This segment arrives at host 2 without host 1’s knowledge. Host 2 reacts to this segment by
    sending host 1 an ACK segment, in effect asking for verification that host 1 was indeed trying
    to set up a new connection.
  • When host 1 rejects host 2’s attempt to establish a connection, host 2 realizes that it was
    tricked by a delayed duplicate and abandons the connection. In this way, a delayed duplicate
    does no damage.
  • The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating
    around in the subnet.

In Fig. (c), as in the previous example, host 2 gets a delayed CONNECTION REQUEST and replies to it.

  • At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence
    number for host 2 to host 1 traffic, knowing full well that no segments containing sequence
    number y or acknowledgements to y are still in existence.
  • When the second delayed segment arrives at host 2, the fact that z has been acknowledged
    rather than y tells host 2 that this, too, is an old duplicate.
  • The important thing to realize here is that there is no combination of old segments that can
    cause the protocol to fail and have a connection set up by accident when no one wants it.
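
The logic of these cases can be shown with a toy simulation in Python. The sequence numbers and return strings are made up; the point is that host 2 only considers a connection genuine when its own freshly chosen initial sequence number y is acknowledged, which is how delayed duplicates are caught.

```python
def host2_handle_connection_request(x):
    """Host 2 acknowledges x and announces its own initial sequence number y."""
    y = 77                                    # freshly chosen; no old segments use it
    return {"type": "ACK", "ack": x, "seq": y}

def host2_handle_third_segment(y, segment):
    """Host 2 accepts the connection only if y itself has been acknowledged."""
    if segment.get("ack") == y:
        return "connection established"
    return "acknowledgement is for an old duplicate; connection abandoned"

# Fig. (a): the normal case.  Host 1 sends CR(seq=x), receives ACK(x) plus seq=y,
# and acknowledges y in its first data segment.
x = 42
reply = host2_handle_connection_request(x)
print(host2_handle_third_segment(reply["seq"], {"type": "DATA", "ack": reply["seq"]}))

# Fig. (c): the worst case.  A delayed duplicate CR arrives, host 2 proposes a new y,
# but the delayed ACK floating around acknowledges the old value z, not y.
reply = host2_handle_connection_request(x)    # old duplicate CONNECTION REQUEST
delayed_ack = {"type": "ACK", "ack": 13}      # acknowledges z from the old connection
print(host2_handle_third_segment(reply["seq"], delayed_ack))
```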

3. Connection Release.

A connection is released using either an asymmetric or a symmetric variant; the improved protocol for releasing a connection is a three-way handshake. There are two styles of terminating a connection:
1) Asymmetric release and
2) Symmetric release.
Asymmetric release is the way the telephone system works: when one party hangs up, the connection
is broken.
Symmetric release treats the connection as two separate unidirectional connections and requires
each one to be released separately.
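
Symmetric release maps naturally onto TCP's half-close, which Python exposes through socket.shutdown(). The sketch below is only an illustration (both endpoints run on the local machine, and the port number and messages are made up): each direction of the connection is released separately, and data can still flow in the direction that has not yet been released.

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 7522))           # hypothetical port
srv.listen(1)

def peer():
    conn, _ = srv.accept()
    while conn.recv(1024):              # read until the other side releases its direction
        pass
    conn.sendall(b"goodbye")            # this direction is still open and can send
    conn.shutdown(socket.SHUT_WR)       # now release the second direction as well
    conn.close()

threading.Thread(target=peer, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 7522))
cli.sendall(b"last data")
cli.shutdown(socket.SHUT_WR)            # release only the client-to-peer direction
print(cli.recv(1024).decode())          # the reverse direction still delivers data
cli.close()
```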

4. Error control and flow control

Error control is ensuring that the data is delivered with the desired level of reliability, usually that all of the data is delivered without any errors.

  • Similarity: In both layers, error control has to be performed.
  • Difference: The link layer checksum protects a frame while it crosses a single link. The transport layer checksum protects a segment while it crosses an entire network path. It is an end-to-end check, which is not the same as having a check on every link.
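
As a concrete illustration of the end-to-end idea, the sketch below computes a 16-bit ones' complement checksum (the style used by TCP and UDP) once over a whole segment rather than per link. The segment contents are made up for illustration.

```python
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add the next big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF                         # ones' complement of the sum

segment = b"end-to-end check"                      # even length keeps the sketch simple
checksum = internet_checksum(segment)
# Receiver side: summing the segment together with its checksum gives all ones,
# so the complemented result is 0 if nothing was corrupted along the whole path.
assert internet_checksum(segment + checksum.to_bytes(2, "big")) == 0
print(f"checksum = {checksum:#06x}, end-to-end verification passed")
```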

Flow control in data link layer and transport layer

  • Similarity: In both layers a sliding window or other scheme is needed on each connection to keep a fast transmitter from overrunning a slow receiver.
  • Difference: A router usually has relatively few lines, whereas a host may have numerous connections.
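
A toy sliding-window sketch (with a made-up window size of 4 and simulated, immediate acknowledgements) shows how the window keeps a fast transmitter from getting ahead of the receiver:

```python
WINDOW = 4                                    # hypothetical window size (credit)
segments = [f"segment {i}".encode() for i in range(10)]

base = 0            # oldest unacknowledged segment
next_to_send = 0    # next segment the sender may transmit

while base < len(segments):
    # Send only while the window permits: at most WINDOW segments outstanding.
    while next_to_send < len(segments) and next_to_send < base + WINDOW:
        print(f"send   {segments[next_to_send].decode()}")
        next_to_send += 1
    # The receiver acknowledges the oldest outstanding segment, sliding the window.
    print(f"ack    segment {base}")
    base += 1
```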

Buffering

  • The sender: The sender must buffer all TPDUs sent if the network service is unreliable, or if the receiver cannot guarantee that every incoming TPDU will be accepted (a small sketch of sender-side buffering follows this list).
  • The receiver: If the receiver has agreed to do the buffering, there still remains the question of the buffer size.
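
As a small illustration of the sender side, the sketch below keeps a copy of every TPDU in a retransmission buffer until it is acknowledged; the sequence numbers, payloads, and event order are made up.

```python
sent_unacked = {}                      # sequence number -> buffered copy of the TPDU

def send_tpdu(seq, payload):
    sent_unacked[seq] = payload        # keep a copy in case retransmission is needed
    print(f"sent TPDU {seq}")

def on_ack(seq):
    sent_unacked.pop(seq, None)        # buffer space can be released only now
    print(f"TPDU {seq} acknowledged, buffer holds {len(sent_unacked)} TPDUs")

def on_timeout(seq):
    print(f"retransmit TPDU {seq}: {sent_unacked[seq]!r}")

send_tpdu(0, b"first")
send_tpdu(1, b"second")
on_ack(0)
on_timeout(1)                          # TPDU 1 was never acknowledged, so it is still buffered
on_ack(1)
```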

Note: The explanation should be continued with respect to all the buffering strategies….

5. Multiplexing

  • Multiplexing, or sharing several conversations over connections, virtual circuits, and physical links, plays a role in several layers of the network architecture.
  • In the transport layer, the need for multiplexing can arise in a number of ways.
  • For example, if only one network address is available on a host, all transport connections on that machine have to use it.
  • When a segment comes in, some way is needed to tell which process to give it to.
  • This situation, called multiplexing, is shown in part (a) of the figure below. In this figure, four distinct transport connections all use the same network connection (e.g., IP address) to the remote host. A small demultiplexing sketch follows this list.
Figure: (a) Upward multiplexing. (b) Downward multiplexing
  • Multiplexing can also be useful in the transport layer for another reason.
  • Suppose, for example, that a host has multiple network paths that it can use.
  • If a user needs more bandwidth or more reliability than one of the network paths can provide, a way out is to have a connection that distributes the traffic among multiple network paths on a round-robin basis, as indicated in part (b) of the figure.
  • This modus operandi is called inverse multiplexing. With k network connections open, the effective bandwidth might be increased by a factor of k.
  • An example of inverse multiplexing is SCTP (Stream Control Transmission Protocol), which can run a connection using multiple network interfaces.
  • In contrast, TCP uses a single network endpoint.
  • Inverse multiplexing is also found at the link layer, when several low-rate links are used in parallel as one high-rate link.
  • Multiplexing and demultiplexing:
    – Multiplexing: Application layer → Transport layer → Network layer → Data link layer → Physical layer
    – Demultiplexing: Physical layer → Data link layer → Network layer → Transport layer → Application layer
  • Two kinds of multiplexing:
    – Upward multiplexing
    – Downward multiplexing
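
The upward-multiplexing case can be illustrated with a short demultiplexing sketch; the port numbers, process names, and segment contents are made up. All segments arrive over the same network connection (one IP address), and the destination port in each segment tells the transport entity which process to give it to.

```python
listeners = {1522: "mail server process", 80: "web server process"}  # port -> process

def demultiplex(segment):
    process = listeners.get(segment["dst_port"])
    if process is None:
        return f"no process listening on port {segment['dst_port']}: segment discarded"
    return f"segment delivered to {process}"

incoming = [
    {"dst_port": 80,   "data": b"GET / HTTP/1.1"},
    {"dst_port": 1522, "data": b"MAIL FROM:<a@b>"},
    {"dst_port": 9999, "data": b"???"},
]
for segment in incoming:
    print(demultiplex(segment))
```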
