Talk:Network protocol design principles

Questionable passages removed from earlier draft, and comments:

An interesting fact is that the poor design of the error-correction protocol stack of the Internet forces a requirement for error rates of 1×10⁻¹¹. This is often achieved by tunneling the Internet protocols through a more reliable protocol such as ATM (asynchronous transfer mode).

This is not true. TCP will work reasonably well up to about 1×10⁻⁷, with slight degradation (0.12% packet loss on 12,000-bit packets). In any case, it's fibre, not ATM, that is reliable. ATM has no EDC layer at all (unless you count cell header checksums).
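For the record, the 0.12% figure follows from treating bit errors as independent; a quick check in Python, using only the numbers quoted above:

  ber = 1e-7       # bit error rate quoted above
  bits = 12000     # packet size in bits
  loss = 1 - (1 - ber) ** bits   # probability that at least one bit is corrupted
  print(f"packet loss: {loss:.4%}")   # roughly 0.12%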

The packets each have a checksum, the sum of all the 8-bit bytes in the packet.

No, it's not: it's a 16-bit one's complement checksum of the contents and a pseudo-IP header; see http://www.networksorcery.com/enp/protocol/tcp.htm for details.
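For anyone who wants to verify that, here is a minimal Python sketch of the one's complement arithmetic over the pseudo-header plus segment; the field layout follows the page linked above, and the function names are just mine:

  import struct

  def ones_complement_checksum(data: bytes) -> int:
      """Sum 16-bit words with end-around carry, then complement."""
      if len(data) % 2:                  # pad odd-length data with a zero byte
          data += b"\x00"
      total = 0
      for (word,) in struct.iter_unpack("!H", data):
          total += word
          total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
      return ~total & 0xFFFF

  def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
      """Checksum over the IPv4 pseudo-header (source address, destination
      address, zero byte, protocol = 6, TCP length) followed by the TCP
      header and payload, with the checksum field itself zeroed beforehand."""
      pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
      return ones_complement_checksum(pseudo + segment)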

In the internet, ICMP "pings" are sent by routers every 30 seconds or so. In the internet, when a ping fails, the router updates its routing table.

No, they're not pings: they're routing update messages.


-- The Anome

Thanks for the corrections. I think I need to qualify the remarks about the packet error rate. If an error rate above 1×10⁻¹¹ is also coupled with a delay of three hundred milliseconds or more (i.e. a satellite link), I've heard reports that a packet storm of rebroadcast packets can occur, paralyzing the failing link until routers begin to avoid the congestion. A number of experimental and optionally deployed protocols use more selective packet retransmission to avoid this problem, which is a known defect of TCP's windowed packet retransmission policy. Ray Van De Walker


See http://www.psc.edu/networking/tcp_friendly.html#performance and specifically the paper http://citeseer.nj.nec.com/mathis97macroscopic.html for

  • a theoretical derivation of TCP performance vs. error + delay
  • mathematical modelling ditto
  • actual measurements to back up the above.

This seems to suggest that error-limited performance is very roughly

bandwidth = (sqrt(mss) * C) / (rtt * sqrt(ber))

where the units are:

mss = max segment size in bits
C = dimensionless constant, approx 0.9 (see paper for more details)
rtt = round trip time in seconds
ber = bit error rate (errors per bit)

Note that this is a small-ber approximation, assuming loss is dominated by full-length packets.

Needless to say, the bandwidth does not go to infinity as the ber goes to zero: packet drops will occur when the computed bandwidth tries to exceed the physical link bandwidth.
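To make the approximation concrete, here is a small Python sketch of the formula above, capped at the physical link bandwidth as just noted; the 45 Mbit/s link and the example numbers are only illustrative assumptions:

  from math import sqrt

  def tcp_bandwidth(mss_bits: float, rtt_s: float, ber: float,
                    link_bw: float, c: float = 0.9) -> float:
      """Error-limited TCP bandwidth in bits/s from the approximation above,
      capped at the physical link bandwidth so it stays finite as ber -> 0."""
      if ber <= 0:
          return link_bw
      return min(link_bw, sqrt(mss_bits) * c / (rtt_s * sqrt(ber)))

  # e.g. 12000-bit segments over a 300 ms satellite path at ber = 1e-7
  print(tcp_bandwidth(mss_bits=12000, rtt_s=0.3, ber=1e-7, link_bw=45e6))
  # roughly 1 Mbit/s, well below the assumed link rate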

This model appears to fit reality pretty well, according to the paper.

-- The Anome