Recursive InterNetwork Architecture (RINA)
The Recursive InterNetwork Architecture (RINA) is a computer network architecture that unifies distributed computing and telecommunications. RINA's fundamental principle is that computer networking is just Inter-Process Communication (IPC). RINA reconstructs the overall structure of the Internet, forming a model that comprises a single repeating layer, the DIF (Distributed IPC Facility), which is the minimal set of components required to allow distributed IPC between application processes. RINA inherently supports mobility, multi-homing and Quality of Service without the need for extra mechanisms, provides a secure and programmable environment, encourages a more competitive marketplace and allows for seamless adoption.
History and Motivation
The principles behind RINA were first presented by John Day in his book “Patterns in Network Architecture: A Return to Fundamentals” [1]. This work is a fresh start, taking into account the lessons learned in the 35 years of TCP/IP’s existence, as well as the lessons of OSI’s failure and of other network technologies of the past few decades, such as CYCLADES, DECnet and Xerox Network Systems.
From the early days of telephony to the present, the telecommunications and computing industries have evolved significantly. However, they have been following separate paths, without achieving full integration that can optimally support distributed computing; the paradigm shift from telephony to distributed applications is still not complete. Telecoms have been focusing on connecting devices, perpetuating the telephony model where devices and applications are the same. A look at the current Internet protocol suite shows many symptoms of this thinking [2]:
- The network routes data between interfaces of computers, as the public switched telephone network switched calls between phone terminals. However, it is not the source and destination interfaces that wish to communicate, but the distributed applications.
- Applications have no way of expressing their desired service characteristics to the network, other than choosing a reliable (Transmission Control Protocol) or unreliable (User Datagram Protocol) type of transport. The network assumes that applications are homogeneous by providing only a single quality of service.
- The network has no notion of application names, and has to use a combination of the interface address and the transport-layer port number to identify different applications. In other words, the network uses information about “where” an application is located to identify “which” application it is. Every time an application changes its point of attachment, it appears to the network as a different application, greatly complicating multi-homing, mobility and security; the sketch after this list illustrates the contrast.
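The contrast can be made concrete with a short sketch. The socket calls below are the real POSIX-style API as exposed by Python; the commented-out allocate_flow call is purely hypothetical, shown only to illustrate what a name-based request could look like:

```python
import socket

# Today an application is reachable only through an (interface address,
# port) pair: the network knows *where* the app is, never *which* app it is.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # the OS picks a free port
addr, port = srv.getsockname()
print(f"application is known to the network only as {addr}:{port}")
srv.close()

# A name-based alternative (hypothetical call, in the spirit of RINA's
# flow allocation): the caller names WHICH application it wants and the
# service characteristics it needs; the network resolves WHERE it is.
# flow = allocate_flow("weather-service", qos={"reliable": True})
```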
Several attempts have been made to propose architectures that overcome the current Internet's limitations, under the umbrella of the Future Internet research efforts. However, most proposals argue that requirements have changed and that the Internet is therefore no longer capable of coping with them. While it is true that the environment in which the technologies that support the Internet live today is very different from when they were conceived in the late 1970s, changing requirements are not the only reason behind the Internet's problems with multi-homing, mobility, security or QoS, to name a few. The root of these problems may be attributed to the fact that the current Internet is based on a tradition focused on keeping the original ARPANET demo working and fundamentally unchanged, as illustrated by the following paragraphs.
1972. Multi-homing not supported by the ARPANET. In 1972 the Tinker Air Force Base wanted connections to two different IMPs (Interface Message Processors, the predecessors of today's routers) for redundancy. The ARPANET designers realized that they couldn't support this feature because a host's address was the address of the IMP port to which the host was connected (borrowing from telephony). To the ARPANET, two interfaces of the same host had different addresses; it therefore had no way of knowing that they belonged to the same host. The solution was obvious: as in operating systems, a logical address space naming the nodes (hosts and routers) was required on top of the physical interface address space. However, the implementation of this solution was left for future work, and it is still not done today: “IP addresses of all types are assigned to interfaces, not to nodes” [3]. As a consequence, routing tables are orders of magnitude bigger than they would need to be, and multi-homing and mobility are complex to achieve, requiring both special protocols and point solutions.
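A minimal sketch of the addressing problem, using made-up identifiers rather than real ARPANET or IP data structures:

```python
# Interface-level addressing: each attachment point is a separate,
# unrelated destination, exactly as in the ARPANET (and in IP today).
routes = {"imp4-port2": "path-A", "imp7-port1": "path-B"}
# Nothing here records that both interfaces attach to the same host, so
# if path-A fails, traffic for "imp4-port2" cannot fall back to path-B.

# A logical node address space on top fixes this: routes target nodes,
# and a node maps to ALL of its interfaces.
node_to_interfaces = {"tinker-afb": ["imp4-port2", "imp7-port1"]}

def deliver(node: str) -> str:
    """Pick any still-reachable interface of the destination node."""
    for iface in node_to_interfaces[node]:
        if iface in routes:
            return routes[iface]
    raise ConnectionError(f"no path to {node}")

del routes["imp4-port2"]         # one attachment fails...
print(deliver("tinker-afb"))     # ...traffic uses the other: path-B
```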
1978. Transmission Control Protocol (TCP) split from the Internet Protocol (IP). Initial TCP versions performed the error and flow control (current TCP) and relaying and multiplexing (IP) functions in the same protocol. In 1978 TCP was split from IP, even though the two resulting layers had the same scope. This would not be a problem if i) the two layers were independent and ii) the two layers didn't contain repeated functions. However, neither condition holds: in order to operate effectively, IP needs to know what TCP is doing. IP fragmentation, and the path MTU discovery that TCP performs as a workaround to avoid it, is a clear example of this issue. In fact, as early as 1987 the networking community was well aware of the IP fragmentation problems, to the point of considering it harmful [4]. However, this was not understood as a symptom that TCP and IP were interdependent, and that splitting them into two layers of the same scope had therefore not been a good decision.
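The arithmetic below makes the interdependence concrete: TCP's maximum segment size is computed directly from IP-layer quantities (the standard 20-byte minimum IPv4 and TCP headers and the typical 1500-byte Ethernet MTU):

```python
# For TCP to avoid IP fragmentation, TCP must know IP-layer details.
PATH_MTU = 1500      # typical Ethernet MTU, learned via path MTU discovery
IPV4_HEADER = 20     # minimum IPv4 header, no options
TCP_HEADER = 20      # minimum TCP header, no options

# The segment size TCP may use is dictated by the layer below it:
mss = PATH_MTU - IPV4_HEADER - TCP_HEADER
print(f"TCP must keep segments <= {mss} bytes to prevent IP fragmentation")
# -> TCP must keep segments <= 1460 bytes to prevent IP fragmentation
```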
1981. Watson's fundamental results ignored. In 1981 Richard Watson provided a fundamental theory of reliable transport [5], whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-t protocol [6], in which the state of a connection at the sender and receiver can be safely removed once the connection-state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. TCP, by contrast, uses both explicit handshaking and more limited timer-based management of the connection's state. Had TCP incorporated Watson's results it would be more efficient, robust and secure, eliminating the use of SYNs and FINs and therefore all the associated complexities and vulnerabilities to attack (such as SYN floods).
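A minimal sketch of the timer-based idea, assuming simple placeholder bounds (the real Delta-t specification derives the exact timer values; this only illustrates that connection state appears and disappears without handshakes):

```python
import time

MPL = 2.0   # assumed bound on Maximum Packet Lifetime, in seconds
A = 1.0     # assumed bound on how long a receiver may delay an ack
R = 1.0     # assumed bound on how long a sender may keep retransmitting

# Watson showed connection state need only survive a small multiple of
# these bounds; a single combined lifetime stands in for his timers here.
STATE_LIFETIME = MPL + A + R

connections: dict[int, float] = {}   # port-id -> time of last activity

def on_packet(port_id: int) -> None:
    """Arriving data implicitly (re)creates state: no SYN handshake."""
    connections[port_id] = time.monotonic()

def reap_expired() -> None:
    """State quietly ages out: no FIN exchange, nothing to spoof."""
    now = time.monotonic()
    for port_id in [p for p, t in connections.items()
                    if now - t > STATE_LIFETIME]:
        del connections[port_id]

on_packet(42)
reap_expired()
print(connections)   # state for port 42 persists until its timer expires
```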
A Theory for a New Internet Architecture
John Day summarizes the theory behind RINA in a few core principles:
- Networking is IPC and only IPC.
- It is IPC if and only if the Maximum Packet Lifetime (MPL) can be bounded.
- If the MPL cannot be bounded, it is remote storage.
- Only two protocols are needed: DTP/DTCP for unreliable/reliable data transfer, and one protocol for management.
Terminology
IAP: IPC Access Protocol - protocol to carry application names and access control information.
EFCP: Error and Flow Control Protocol - maintains shared state (synchronization) about the communication between two processes, detects errors and provides flow control. Flows are identified by port-ids.
Mux: Multiplexing of messages and scheduling for QoS. There is one Mux per physical interface.
Dir: Directory - used by IAP to determine which interface to use to find an application.
Res Alloc: Resource Allocator
AE: Application Entity - the part of the application concerned with communication, i.e. the state shared with its peer. An application can have multiple AEs.
RIB: Resource Information Base
RMT: Relaying and Multiplexing Task
CAP: Common Application Process - ACSE, Authentication, CMIP
SDU: Service Data Unit - the unit of data handed to a layer by the layer above it
DIF: Distributed IPC Facility
DAF: Distributed Application Facility
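The sketch below shows schematically how some of these terms relate; the classes and method names are illustrative only, not the API of any actual RINA prototype:

```python
from dataclasses import dataclass, field

@dataclass
class EFCPInstance:
    """EFCP: maintains shared state for one flow, named by a port-id."""
    port_id: int
    peer_app: str

@dataclass
class DIF:
    """A Distributed IPC Facility: the single repeating layer."""
    name: str
    rib: dict = field(default_factory=dict)    # RIB: shared layer state
    flows: dict = field(default_factory=dict)  # port-id -> EFCPInstance
    next_port: int = 1

    def allocate_flow(self, dest_app: str) -> int:
        """IAP carries dest_app's name and access-control information to
        the peer; on success an EFCP instance is bound to a new port-id."""
        port_id = self.next_port
        self.next_port += 1
        self.flows[port_id] = EFCPInstance(port_id, dest_app)
        return port_id

    def write_sdu(self, port_id: int, sdu: bytes) -> None:
        """An SDU from the layer above is handed to the flow's EFCP
        instance, then relayed/multiplexed (RMT/Mux) toward the peer."""
        flow = self.flows[port_id]
        counts = self.rib.setdefault("sdu_counts", {})
        counts[flow.port_id] = counts.get(flow.port_id, 0) + len(sdu)

dif = DIF("example-dif")
pid = dif.allocate_flow("weather-service")
dif.write_sdu(pid, b"hello")
print(dif.rib)   # {'sdu_counts': {1: 5}}
```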
References
- ^ J. Day. Patterns in Network Architecture: A Return to Fundamentals. Prentice Hall, 2008. ISBN 978-0132252423
- ^ A. McKenzie. “INWG and the Conception of the Internet: An Eyewitness Account”. IEEE Annals of the History of Computing, vol. 33, no. 1, pp. 66-71, 2011
- ^ R. Hinden and S. Deering. IP Version 6 Addressing Architecture. RFC 4291 (Draft Standard), February 2006. Updated by RFCs 5952 and 6052
- ^ C. A. Kent and J. C. Mogul. “Fragmentation Considered Harmful”. Proceedings of Frontiers in Computer Communications Technologies, ACM SIGCOMM, 1987
- ^ R. Watson. “Timer-Based Mechanisms in Reliable Transport Protocol Connection Management”. Computer Networks, 5:47-56, 1981
- ^ R. Watson. “Delta-t Protocol Specification”. Technical Report UCID-19293, Lawrence Livermore National Laboratory, December 1981