Open Systems Interconnection (OSI) Model
The OSI Reference Model is a layered architecture (Figure 7.10) consisting of a set of international networking standards known collectively as X.200. Developed by the International Organization for Standardization (ISO), the basic process began in 1977 and was completed in 1983. The OSI model defines a set of common rules that computers of disparate origin can use to exchange information (communicate). As is the case with SNA and other such proprietary architectures, the model is layered to segment software responsibilities, with supporting software embedded in each node to provide an interface between layers. Specific levels of service can be negotiated between nodes.
Figure 7.10 OSI Reference Model.
At the transmitting device, data enters the stack at the top layer, where it is placed into a packet prepended by a header. The data and header, known collectively as a Protocol Data Unit (PDU), are then handled in turn by each successively lower layer, each of which prepends its own header, before the data crosses the network to the receiving node. At the receiving node, the data works its way up the layered model, with each successively higher layer stripping off the corresponding header information.
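This encapsulation and decapsulation can be sketched in a few lines of Python. The sketch is purely illustrative and assumes simple text headers, treating every layer uniformly; real protocol headers are binary structures defined by each layer's standard.

    # Illustrative sketch of OSI-style encapsulation and decapsulation.
    LAYERS = [
        "Application",   # layer 7 (top)
        "Presentation",  # layer 6
        "Session",       # layer 5
        "Transport",     # layer 4
        "Network",       # layer 3
        "Data Link",     # layer 2
        "Physical",      # layer 1 (bottom)
    ]

    def transmit(data: str) -> str:
        """Pass data down the stack; each lower layer prepends its own
        header, producing a new Protocol Data Unit (PDU) at each step."""
        pdu = data
        for layer in LAYERS:                # top layer first, bottom layer last
            pdu = f"[{layer} hdr]" + pdu    # header is prepended; data untouched
        return pdu                          # what actually crosses the network

    def receive(pdu: str) -> str:
        """Pass the received PDU up the stack; each higher layer strips the
        header added by its peer layer on the transmitting node."""
        for layer in reversed(LAYERS):      # bottom layer first, top layer last
            header = f"[{layer} hdr]"
            assert pdu.startswith(header), f"malformed PDU at {layer} layer"
            pdu = pdu[len(header):]
        return pdu                          # original data, all headers removed

    if __name__ == "__main__":
        wire_format = transmit("user data")
        print(wire_format)                  # headers nested, lowest layer outermost
        print(receive(wire_format))         # -> "user data"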
Security is an issue of prime importance (Figure 7.11) across all dimensions of communications and networks, and perhaps nowhere more so than in the world of data communications. In the traditional data world of mainframes in glass houses, security was very tightly controlled. In the contemporary world of distributed processing and networked computer resources, security is much more difficult to develop and control. Security encompasses a number of dimensions, including physical security, authentication, authorization, port security, transmission security, and encryption.
Figure 7.11 Security as a priority.
Physical security involves access control: control over the individuals who have access to the facilities in which the systems reside. Clearly, access must be restricted by security guards, locks and keys, electronic combination locks, and/or electronic card key systems that require additional input, such as a Personal Identification Number (PIN). The latter is preferable, as the system can maintain a record of each specific access.
Authentication provides a means by which network managers can verify the identity of those attempting to access computing resources and resident data. Common authentication mechanisms include password protection and intelligent tokens. Password protection should be imposed to restrict individuals at the site, host, application, screen, and field levels. Passwords should be reasonably long, alphanumeric, and changed periodically. There is a current trend toward the use of dedicated password servers for password management. Intelligent tokens are hardware devices that generate one-time passwords, which are verified by a secure server on the receiving side of the communication. They often work on a cumbersome challenge-response basis.
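The general idea of challenge-response verification with a one-time response can be sketched as follows, assuming the token and the authentication server share a secret key. Real intelligent tokens follow vendor-specific or standardized algorithms (for example, time- or counter-based schemes); this Python sketch only illustrates the concept.

    import hmac
    import hashlib
    import secrets

    SHARED_KEY = b"provisioned-into-token-and-server"   # hypothetical shared secret

    def issue_challenge() -> bytes:
        """Server side: generate an unpredictable, single-use challenge."""
        return secrets.token_bytes(16)

    def token_response(challenge: bytes) -> str:
        """Token side: compute a one-time response from the challenge and the
        shared secret. The response is useless for any other challenge."""
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()

    def verify(challenge: bytes, response: str) -> bool:
        """Server side: recompute the expected response and compare in
        constant time to avoid timing leaks."""
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

    if __name__ == "__main__":
        challenge = issue_challenge()               # server presents the challenge
        response = token_response(challenge)        # user's token computes the response
        print(verify(challenge, response))          # True: access granted
        print(verify(issue_challenge(), response))  # False: response is one-time only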
Authorization provides a means of controlling which legitimate users have access to which resources. It involves complex software that resides on every secured computer on the network; ideally, it provides single sign-on capability. Authorization systems include Kerberos, SESAME, and Access Manager.
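At its simplest, an authorization decision is a lookup that maps an already authenticated user to the resources that user is permitted to use. The Python sketch below assumes a plain per-resource access control list; products such as Kerberos instead issue cryptographic tickets rather than consulting a table, so this is only a conceptual illustration.

    from dataclasses import dataclass, field

    @dataclass
    class AccessPolicy:
        # resource -> set of users permitted to use it (hypothetical layout)
        acl: dict[str, set[str]] = field(default_factory=dict)

        def grant(self, user: str, resource: str) -> None:
            """Record that a user is permitted to use a resource."""
            self.acl.setdefault(resource, set()).add(user)

        def is_authorized(self, user: str, resource: str) -> bool:
            """Return True only if the (already authenticated) user appears
            in the access control list for the requested resource."""
            return user in self.acl.get(resource, set())

    if __name__ == "__main__":
        policy = AccessPolicy()
        policy.grant("alice", "payroll-db")
        print(policy.is_authorized("alice", "payroll-db"))  # True: listed
        print(policy.is_authorized("bob", "payroll-db"))    # False: no entry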