
User:ConorMcD1/Recursive Inter-Network Architecture (RINA)

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Edugrasa (talk | contribs) at 10:40, 13 January 2015. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The Recursive InterNetwork Architecture (RINA) is a computer network architecture that unifies distributed computing and telecommunications. RINA's fundamental principle is that computer networking is just Inter-Process Communication, or IPC. RINA reconstructs the overall structure of the Internet, forming a model that comprises a single repeating layer, the DIF (Distributed IPC Facility), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, multi-homing and Quality of Service without the need for extra mechanisms, provides a secure and programmable environment, encourages a more competitive marketplace and allows for seamless adoption.

History and Motivation

The principles behind RINA were first presented by John Day in his book “Patterns in Network Architecture: A Return to Fundamentals” [1]. This work is a fresh start, taking into account lessons learned in the 35 years of TCP/IP's existence, as well as the lessons of OSI's failure and those of other network technologies of the past few decades, such as CYCLADES, DECnet or Xerox Network Systems.

From the early days of telephony to the present, the telecommunications and computing industries have evolved significantly. However, they have been following separate paths, without achieving full integration that can optimally support distributed computing; the paradigm shift from telephony to distributed applications is still not complete. Telecoms have been focusing on connecting devices, perpetuating the telephony model where devices and applications are the same. A look at the current Internet protocol suite shows many symptoms of this thinking [2]:

  • The network routes data between interfaces of computers, as the public switched telephone network switched calls between phone terminals. However, it is not the source and destination interfaces that wish to communicate, but the distributed applications.
  • Applications have no way of expressing their desired service characteristics to the network, other than choosing a reliable (Transmission Control Protocol) or unreliable (User Datagram Protocol) type of transport. The network assumes that applications are homogeneous by providing only a single quality of service.
  • The network has no notion of application names, and has to use a combination of the interface address and transport layer port number to identify different applications. In other words, the network uses information on “where” an application is located to identify “which” application this is. Every time the application changes its point of attachment, it seems different to the network, greatly complicating multi-homing, mobility, and security.
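The naming problem described in the last bullet can be illustrated with a small sketch. The dictionaries, addresses and application names below are hypothetical stand-ins for protocol state, not anything defined by TCP/IP or RINA:

```python
# Today's Internet identifies an application by where it is attached:
# the (interface address, port) pair is the key. If the host moves,
# the key changes and the application looks "different" to the network.
loc_based = {("192.0.2.7", 443): "web-server-A"}   # (addr, port) -> app

# If the network instead named applications directly, the binding to a
# point of attachment could change without the application changing:
name_based = {"web-server-A": ("192.0.2.7", 443)}  # app name -> attachment

# Host moves to a new interface address: same name, new location.
name_based["web-server-A"] = ("198.51.100.9", 443)
```

Under the name-based mapping, mobility is just an update of the directory entry; under the location-based one, the application's very identity changes with each move.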

Several attempts have been made to propose architectures that overcome the current Internet's limitations, under the umbrella of the Future Internet research efforts. However, most proposals argue that requirements have changed and that the Internet is therefore no longer capable of coping with them. While it is true that the environment in which the technologies that support the Internet live today is very different from when they were conceived in the late 1970s, changing requirements are not the only reason behind the Internet's problems with multi-homing, mobility, security or QoS, to name a few. The root of these problems may be attributed to the fact that the current Internet is based on a tradition focused on keeping the original ARPANET demo working and fundamentally unchanged, as illustrated by the following paragraphs.

1972. Multi-homing not supported by the ARPANET. In 1972 Tinker Air Force Base wanted connections to two different IMPs (Interface Message Processors, the predecessors of today's routers) for redundancy. The ARPANET designers realized that they could not support this feature, because host addresses were the addresses of the IMP ports the hosts were connected to (borrowing from telephony). To the ARPANET, two interfaces of the same host had different addresses; it therefore had no way of knowing that they belonged to the same host. The solution was obvious: as in operating systems, a logical address space naming the nodes (hosts and routers) was required on top of the physical interface address space. However, the implementation of this solution was left for future work, and it is still not done today: “IP addresses of all types are assigned to interfaces, not to nodes” [3]. As a consequence, routing tables are orders of magnitude bigger than they need to be, and multi-homing and mobility are complex to achieve, requiring both special protocols and point solutions.

1978. Transmission Control Protocol (TCP) split from the Internet Protocol (IP). Initial TCP versions performed the error and flow control (current TCP) and relaying and multiplexing (IP) functions in the same protocol. In 1978 TCP was split from IP, even though the two layers had the same scope. This would not be a problem if: i) the two layers were independent, and ii) the two layers did not contain repeated functions. However, neither condition holds: in order to operate effectively, IP needs to know what TCP is doing. IP fragmentation, and the path MTU discovery workaround that TCP performs to avoid it, is a clear example of this issue. In fact, as early as 1987 the networking community was well aware of the IP fragmentation problems, to the point of considering fragmentation harmful [4]. However, this was not understood as a symptom that TCP and IP were interdependent, and that splitting them into two layers of the same scope was therefore not a good decision.

1981. Watson's fundamental results ignored. In 1981 Richard Watson provided a fundamental theory of reliable transport [5], whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-t protocol [6], in which the state of a connection at the sender and receiver can be safely removed once the connection-state timers expire, without the need for explicit removal messages; likewise, new connections are established without an explicit handshaking phase. TCP, on the other hand, uses both explicit handshaking and more limited timer-based management of the connection's state. Had TCP incorporated Watson's results it would be more efficient, robust and secure, eliminating the use of SYNs and FINs and therefore all the associated complexity and vulnerability to attack (such as SYN flood).
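The timer-based idea can be sketched as follows. This is a minimal illustration of the principle, not the Delta-t specification: the timer values, the `Connection` class and its method names are assumptions made for the example.

```python
import time

# Assumed bounds, in seconds (illustrative values only):
MPL = 2.0   # Maximum Packet Lifetime
A = 1.0     # maximum time a receiver waits before acknowledging
R = 1.0     # maximum time a sender keeps retransmitting

# Watson's result: connection state only needs to be held for a period
# bounded by MPL + A + R after the last activity. After that, no packet
# from the old connection can still be in flight, so the state can be
# discarded implicitly - no FIN-style close exchange is required.
STATE_TIMEOUT = MPL + A + R

class Connection:
    def __init__(self):
        self.last_activity = time.monotonic()

    def on_packet(self):
        # Any send or receive refreshes the activity timer.
        self.last_activity = time.monotonic()

    def state_removable(self, now=None):
        # True once the bounded timer has expired: the state can be
        # removed safely without notifying the peer.
        now = time.monotonic() if now is None else now
        return now - self.last_activity > STATE_TIMEOUT
```

The same bound makes explicit connection setup unnecessary: a packet arriving for an unknown connection can safely create fresh state, because any stale duplicates from earlier connections are guaranteed to have died out.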

1983. Internetwork layer lost: the Internet ceases to be an internet. Early in 1972 the International Network Working Group (INWG) was created to bring together the nascent network research community. One of its early accomplishments was voting on an international network transport protocol, which was approved in 1976 [2]. A remarkable aspect is that the selected option, like all the other candidates, had an architecture composed of three layers of increasing scope: data link (to handle different types of physical media), network (to handle different types of networks) and internetwork (to handle a network of networks), each layer with its own addresses. In fact, when TCP/IP was introduced it ran at the internetwork layer, on top of the Network Control Program (NCP) and other network technologies. But when NCP was shut down, TCP/IP took the network role and the internetwork layer was lost [7]. As a result, the Internet ceased to be an internet and became a concatenation of IP networks with an end-to-end transport layer on top. Consequences of this decision include the complex routing system required today, with both intra-domain and inter-domain routing happening at the network layer [8], and the use of Network Address Translation (NAT) as a mechanism for allowing independent address spaces within a single network layer.

The Internet architecture as seen by the INWG

1983. First opportunity to fix addressing missed. The need for application names, and for distributed directories mapping application names to internetwork addresses, had been well understood since the mid-1970s. They were not there at the beginning, since introducing them was a major effort and there were very few applications, but they were expected to be introduced once the “host file” was automated (the host file was centrally maintained and mapped human-readable synonyms of addresses to their numeric values). However, application names were never introduced: DNS, the Domain Name System, was designed and deployed, and well-known ports continued to be used to identify applications. The advent of the web and HTTP created the need for application names, leading to the introduction of URLs. However, the URL format ties each application instance to a physical interface of a computer and a specific transport connection (since the URL contains the DNS name of an IP interface and a TCP port number), making multi-homing and mobility very hard to achieve.

1986. Congestion collapse takes the Internet by surprise. Despite the fact that the problem of congestion control in datagram networks had been known from the very beginning (indeed, there had been several publications during the 1970s and early 1980s [9], [10]), the congestion collapse of 1986 caught the Internet by surprise. Worse, a congestion avoidance scheme adapted from Ethernet networks was adopted with few modifications, and it was put in TCP. The effectiveness of a congestion control scheme is determined by the time-to-notify, i.e. the reaction time. Putting congestion avoidance in TCP maximizes both the congestion notification delay and its variance, making TCP the worst place it could be. Moreover, congestion detection is implicit, causing several problems: i) congestion avoidance mechanisms are predatory, since by definition they need to cause congestion in order to act; ii) congestion avoidance mechanisms may be triggered when the network is not congested, degrading performance.
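The time-to-notify argument can be made concrete with simple arithmetic. The hop delays and topology below are assumed values chosen purely for illustration:

```python
# One-way per-hop delays from sender to receiver, in milliseconds
# (assumed values), with congestion occurring at the second hop.
hop_delays_ms = [10, 40, 10, 30]
congested_hop = 1

# End-to-end, implicit detection (TCP-style): the sender only learns of
# a congestion event after the effect propagates forward and feedback
# (an ack gap or timeout) travels all the way back - roughly a full
# round-trip time, and the RTT's variance is inherited too.
e2e_notify_ms = 2 * sum(hop_delays_ms)

# A control loop scoped to the congested resource itself: notification
# only needs to cross that one hop and back.
local_notify_ms = 2 * hop_delays_ms[congested_hop]

print(e2e_notify_ms, local_notify_ms)
```

With these numbers the end-to-end loop reacts in 180 ms versus 80 ms for the hop-local loop; on longer paths the gap, and the variance of the end-to-end figure, only grows.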

1992. Second opportunity to fix addressing missed. In 1992 the Internet Architecture Board (IAB) produced a series of recommendations to resolve the scaling problems of the IPv4-based Internet: address space consumption and routing information explosion. Three types of solution were proposed: introduce CIDR (Classless Inter-Domain Routing) to mitigate the problem, design the next version of IP (IPv7) based on CLNP (ConnectionLess Network Protocol), and continue the research into naming, addressing and routing [11]. CLNP was an OSI-based protocol that addressed nodes instead of interfaces, solving the old multi-homing problem introduced by the ARPANET and allowing for better routing information aggregation. CIDR was introduced, but the IETF did not accept an IPv7 based on CLNP. The IAB reconsidered its decision and the IPng process started, culminating in IPv6. One of the rules for IPng was not to change the semantics of the IP address, which continues to name the interface, perpetuating the multi-homing problem [3].

There are still more wrong decisions that have resulted in long-term problems for the current Internet, such as:

  • In 1988 the IAB recommended using the Simple Network Management Protocol (SNMP) as the initial network management protocol for the Internet, with a later transition to the object-oriented approach of the Common Management Information Protocol (CMIP) [12]. SNMP was a step backwards in network management, justified as a temporary measure while the required, more sophisticated approaches were implemented, but the transition never happened.
  • Since IPv6 did not solve the multi-homing problem and naming the node was not accepted, the major theory pursued by the field is that the semantics of the IP address are overloaded with both identity and location information, and that the solution is therefore to separate the two, leading to the work on the Locator/Identifier Separation Protocol (LISP). However, all approaches based on LISP have scaling problems [13], because i) the approach is based on a false distinction (identity vs. location) and ii) it does not route packets to the end destination (LISP uses the locator for routing, which is an interface address; therefore the multi-homing problem is still there) [14].
  • The recent discovery of bufferbloat, caused by the use of large buffers in the network. Since the beginning of the 1980s it has been known that buffers should be just large enough to damp out transient traffic bursts, and no larger, since larger buffers increase the transit delay of packets within the network [15].
  • The inability to provide efficient solutions to security problems such as authentication, access control, integrity and confidentiality, since they were not part of the initial design. As stated in [16], “experience has shown that it is difficult to add security to a protocol suite unless it is built into the architecture from the beginning”.


A theory for a new internet architecture by John Day.

Networking is IPC and only IPC

It is IPC if and only if Maximum Packet Lifetime can be bounded.

If MPL can’t be bounded, it is remote storage.

Only two protocols are needed: one data transfer protocol, DTP/DTCP, covering both unreliable and reliable transfer; and one protocol for management.
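The "networking is IPC" principle implies that each layer (DIF) offers the layer above a plain IPC service: request a flow to an application by name, stating the desired QoS, then read and write on the returned handle. The sketch below is hypothetical; the class and method names (`allocate_flow`, `write`, `read`, `deallocate`) are illustrative, not taken from a RINA specification, and the "transfer" is a local stand-in.

```python
class DIF:
    """Toy model of the IPC API a Distributed IPC Facility exposes."""

    def __init__(self):
        self._next_port = 0
        self._flows = {}   # port-id -> (destination app name, qos, buffer)

    def allocate_flow(self, dest_app_name, qos=None):
        # IPC is requested to an application *name*, not to an
        # (address, port) pair, and the caller states its desired QoS.
        port_id = self._next_port
        self._next_port += 1
        self._flows[port_id] = (dest_app_name, qos, [])
        return port_id

    def write(self, port_id, sdu):
        # Hand an SDU to the flow; a real DIF would transfer it to the
        # peer. Here a local buffer stands in for that transfer.
        self._flows[port_id][2].append(sdu)

    def read(self, port_id):
        return self._flows[port_id][2].pop(0)

    def deallocate(self, port_id):
        del self._flows[port_id]
```

Because the same API repeats at every layer, a DIF can itself be a user of a lower DIF's `allocate_flow`, which is what makes the architecture a single recursive layer rather than a fixed stack.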



Terminology

IAP: IPC Access Protocol - protocol that carries application names and access control information.

EFCP: Error and Flow Control Protocol - maintains shared state (synchronization) about the communication between two processes, detects errors and provides flow control. Port-ids are used for identification.

Mux: Multiplexing of messages and scheduling for QoS. One Mux per physical interface.

Dir: Directory - used by IAP to determine which interface to use to reach an application.

Res Alloc: Resource Allocation.

AE: Application Entity - the part of the application concerned with communication, i.e. shared state with its peer. An application can have multiple AEs.

RIB: Resource Information Base

RMT: Relaying and Multiplexing Task

CAP: Common Application Process - ACSE, Authentication, CMIP

SDU: Service Data Unit

DIF: Distributed IPC Facility

DAF: Distributed Application Facility


References

  1. ^ Patterns in Network Architecture: A Return to Fundamentals, John Day (2008), Prentice Hall, ISBN-13: 978-0132252423
  2. ^ a b A. McKenzie, “INWG and the Conception of the Internet: An Eyewitness Account”; IEEE Annals of the History of Computing, vol. 33, no. 1, pp. 66-71, 2011
  3. ^ a b R. Hinden and S. Deering. IP Version 6 Addressing Architecture. RFC 4291 (Draft Standard), February 2006. Updated by RFCs 5952, 6052
  4. ^ C.A. Kent and J.C. Mogul. Fragmentation considered harmful. Proceedings of Frontiers in Computer Communications Technologies, ACM SIGCOMM, 1987
  5. ^ R. Watson. Timer-based mechanism in reliable transport protocol connection management. Computer Networks, 5:47–56, 1981
  6. ^ R. Watson. Delta-t protocol specification. Technical Report UCID-19293, Lawrence Livermore National Laboratory, December 1981
  7. ^ J. Day. How in the Heck Do You Lose a Layer!? 2nd IFIP International Conference of the Network of the Future, Paris, France, 2011
  8. ^ E.C. Rosen. Exterior Gateway Protocol (EGP). RFC 827, October 1982. Updated by RFC 904.
  9. ^ D. Davies. Methods, tools and observations on flow control in packet-switched data networks. IEEE Transactions on Communications, 20(3): 546–550, 1972
  10. ^ S. S. Lam and Y.C. Luke Lien. Congestion control of packet communication networks by input buffer limits - a simulation study. IEEE Transactions on Computers, 30(10), 1981.
  11. ^ Internet Architecture Board. IP Version 7 ** DRAFT 8 **. Draft IAB IPversion7, July 1992
  12. ^ Internet Architecture Board. IAB Recommendations for the Development of Internet Network Management Standards. RFC 1052, April 1988
  13. ^ D. Meyer and D. Lewis. Architectural implications of Locator/ID separation. Draft Meyer Loc Id implications, January 2009
  14. ^ J. Day. Why loc/id split isn’t the answer, 2008. Available online at http://rina.tssg.org/docs/LocIDSplit090309.pdf
  15. ^ L. Pouzin. Methods, tools and observations on flow control in packet-switched data networks. IEEE Transactions on Communications, 29(4): 413–426, 1981
  16. ^ D. Clark, L. Chapin, V. Cerf, R. Braden and R. Hobby. Towards the Future Internet Architecture. RFC 1287 (Informational), December 1991

Further reading and external links


Pouzin Society

RINA

Distributed IPC Facility Development

Recursive InterNetwork Architecture prototype

Eleni Trouva, Eduard Grasa, John Day, Ibrahim Matta, Lubomir T. Chitkushev, Steve Bunch, Miguel Ponce de Leon, Patrick Phelan, Xavier Hesselbach-Serra (2011). Transport over Heterogeneous Networks Using the RINA Architecture. WWIC, Vol. 6649, Springer, pp. 297-308

J. Touch, I. Baldine, R. Dutta, G. Finn, B. Ford, S. Jordan, D. Massey, A. Matta, C. Papadopoulos, P. Reiher, G. Rouskas (2011). A Dynamic Recursive Unified Internet Design (DRUID). Computer Networks, Volume 55, Issue 4, pp. 919-935

Richard Bennett (2011). Remaking the Internet: Taking Network Architecture to the Next Level. Information Technology and Innovation Foundation

Richard Bennett (2009). Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate. Information Technology and Innovation Foundation

DeforaOS wiki: Clean Slate Internet design



See also

CYCLADES

OpenFlow