Managing Multivendor Networks



- 1 -
Introduction


Overview

Multivendor networks and a plate of liver and onions have a lot in common: they are both undeniably real; they both provide sustenance; and both of them are repulsive to a lot of people.

Unfortunately, multivendor networks get a bad rap from computer manufacturers who want to keep customers in their folds. The promotion of connections to other manufacturers' computers and networks is cloaked in a shadow of mystery and intrigue.

"Sure," says the sales rep, "you can connect their equipment to ours as long as you choose applications and interfaces that conform to the ISO's seven-layer standard. As you probably know, we are committed to providing solutions that conform to the OSI Reference Model."

The customer replies, "Can you be more specific? I want to implement one LAN to accommodate both types of systems."

"Well," the rep responds, "we support the IEEE 802.3 LAN using the IEEE 802.2 discipline. Of course, you might need to implement TCP to accommodate both systems until our upper-layer OSI products become available. And you might need to implement some type of LAN bridge if the other system doesn't accommodate the combined 802.2 and 802.3 frame formats." In defense of the manufacturers, however, the question of connecting two systems rarely can be answered with a simple yes or no. Yet, the issues involved in connecting networks are often made more complex than they need to be. So, rather than looking to manufacturers to provide multivendor solutions, many customers turn to independent standards organizations such as the International Standards Organization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), and the American National Standards Institute (ANSI). These organizations are discussed in Chapter 6, "Standards."

In theory, these organizations provide standards that can, and often should, be adopted by computer manufacturers to provide interoperability between systems. Unfortunately, standards organizations, which define standards on paper, are normally far ahead of the manufacturers, which must invent and produce the products. Thus, the ISO might recommend the perfect solution to a specific problem, but providing products that conform to that standard is a long-term goal or, even worse, not scheduled at all.

Nonetheless, adopting third-party standards remains a feasible approach for customers formulating long-range plans. This customer backing is extremely important to the standards organizations. After all, these organizations rarely have a stick big enough to beat any of the manufacturers into compliance; it is the pressure applied by customers that compels manufacturers to adopt standards.

Because of this chasm between standards and products, another set of solutions comes into play: those implemented by third-party companies or independent organizations to address specific or general data communications and networking needs. Third-party solutions include the following:

There are other categories of products that fall under this umbrella. What role, if any, will these products play when the manufacturers adopt more international standards? In many cases the answer is none, because these products are short-term solutions that fill an immediate need. In other cases, the products might be adopted by the standards organizations and thus become standards in their own right.

Multivendor Network Scenarios

Just why you might require a multivendor networking solution is no great mystery or surprise. Some of the more prevalent reasons include the following:

Network Tools and Services

To solve the problems or answer the needs described in the previous scenarios, you might use the following network tools:

Furthermore, from a broader perspective, two additional tools can be part of a networking solution:

Each of these tools and applications is covered in more detail in the following sections.

Common Terminal Access

Being able to access any application from any terminal can solve a great number of problems, but it is not a simple technical task. In most cases, this function is provided by enabling one type of terminal to emulate another type of terminal when accessing a particular system, as shown in Figure 1.1. For example, Digital Equipment Corp. (DEC) terminals would emulate International Business Machines (IBM) terminals when they access IBM systems, and IBM terminals emulate DEC terminals when they access DEC systems.

FIG. 1.1 Common Terminal Access

The beauty of this approach is that the application program is totally isolated and unaware that the terminal it is communicating with is a foreign device. Because the emulation is handling the translation of terminal functions, no changes are required to the application program(s). With common terminal access, adding support for foreign terminals is conceptually (and sometimes literally) no different from adding support for additional native terminals.

This emulation process is not without sacrifices and difficulties. To begin with, having one type of terminal emulate another uses processing overhead. Taking the data stream of one terminal and transforming it into the data stream of another terminal involves intensive central processing unit (CPU), character-by-character processing. If this processing is performed on the system that the terminal is physically attached to or is accessing, the emulation process will, by default, consume application resources. For this reason, emulation is often performed in a separate box or dedicated computer.

Another problem arises when more than two types of systems are involved. Though it is one thing to have two types of terminals that emulate each other, it is entirely another matter to have three types of terminals, each terminal emulating the other two. In this three-terminal scenario, six separate emulation products are employed (two for each terminal), and the chances of finding six such products are slim.
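The arithmetic behind this pairing problem is worth spelling out. The following sketch (illustrative Python, not from the original text) counts the emulation products needed when every terminal type must emulate every other type, compared with adopting a single common terminal standard:

```python
def pairwise_emulators(n):
    """Each of n terminal types must emulate the other n - 1 types,
    so the count grows as n * (n - 1)."""
    return n * (n - 1)

def common_standard_emulators(n):
    """With one agreed-upon terminal standard, each type needs only a
    single emulator that targets the common standard."""
    return n

for n in (2, 3, 4):
    print(n, "types:", pairwise_emulators(n), "pairwise vs.",
          common_standard_emulators(n), "with a common standard")
```

For three terminal types, the pairwise approach requires the six separate products mentioned above; a fourth type pushes the count to twelve, which is one reason a common terminal standard is so appealing.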

In terms of the emulation process itself, it can be broken into several technical tasks:

Because of these technical difficulties and considerations, it would be advantageous to introduce a common type of terminal to which all applications conform. Historically, this has not been done successfully on a large scale. In modern times, however, X Window System terminals (multisession graphics devices based on technology developed at the Massachusetts Institute of Technology) have come to play a significant role in defining universal standards.

Putting the technical issues aside, implementing universal terminal access can offer simple, straightforward solutions to many different problems. Some of these solutions include the following:

Before the availability of emulation products, many office workers had problems finding a place to set down their morning coffee because their desks had to hold both a dumb terminal and a PC. 5250 emulation has advanced significantly over the years and now includes mouse and hot spot support, as well as the capability to run multiple sessions. Some 5250 emulators, such as Walker Richer & Quinn's (Seattle, Washington) Reflection software, are programmable, so end users are able to add functionality. Reflection comes with its own implementation of Visual Basic, called Reflection Basic, and a separate API for controlling terminal sessions from applications.

IBM's own Client Access software offers an alternate method for connecting PCs to the AS/400. Client Access replaces the older PC Support product, which was widely panned as sluggish and as suffering from an awkward interface. Client Access, on the other hand, has an attractive, graphical interface, and as a native Windows product, offers significantly better performance.

Resource Sharing

In addition to terminals, other resources in a network are normally controlled by the system or manufacturer. By distributing these resources, you can often avoid duplication of expensive devices. The three resources that are primary candidates for sharing are printers, disk drives, and tape drives or other storage media (see Figure 1.2).

Moreover, although each of these resources can be shared in the context of a particular LAN implementation (for example, DEC's DECnet or Novell's NetWare), the same resources might not be shared among different implementations. For example, a LAN-attached printer might be used by any DEC system in a network but be unavailable to any Hewlett-Packard system or PC in the same LAN.

FIG. 1.2 An Example of Resource Sharing

Finally, each type of resource has its own considerations, which are explored in the following sections.

Printers

Sharing a printing device among multiple users is commonplace. One system handles all output to a given printer and queues (or spools) the output to the printer. Therefore, in multivendor environments, the issue is rarely interfacing directly with the printer but interfacing with the spooling process.

In many ways, printer handling is a variation of file transfer processing. However, in addition to performing standard character code translations--translating American Standard Code for Information Interchange (ASCII) to Extended Binary Coded Decimal Interchange Code (EBCDIC) or vice versa--the printer sharing process must also deal with printer-specific directives that might differ from system to system. For example, the directive to issue a form feed might be different for specific IBM and DEC printers. This level of conversion is required because the process creating the output thinks it is writing to a native printer, so it uses native printer codes.
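The character translation alone is concrete enough to demonstrate. This sketch uses Python's built-in cp037 codec, one common EBCDIC code page (the exact page varies by system), to show how a print gateway might translate spooled text:

```python
text = "PAGE 1 OF 2"

# ASCII-range text -> EBCDIC bytes, using the cp037 code page.
ebcdic = text.encode("cp037")

# The letter 'A' is 0x41 in ASCII but 0xC1 in EBCDIC cp037.
assert "A".encode("ascii") == b"\x41"
assert "A".encode("cp037") == b"\xc1"

# Even control codes diverge: an ASCII line feed (0x0A) becomes 0x25
# in cp037 -- exactly the kind of device-specific detail a print
# gateway must handle on top of plain character translation.
assert "\n".encode("cp037") == b"\x25"

# Translating back recovers the original text.
assert ebcdic.decode("cp037") == text
```

Printer-specific directives (form feeds, carriage control) sit on top of this table-driven translation, which is why the conversion must be tailored to the specific printers involved.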

In addition to printer-specific code conversions, the print-sharing process must read and write these specialized queue files. On most systems, these files are stored in special locations using cryptic names, so the task of finding a print file to reroute to another printer might not be trivial. After the source print file is found, it is then written into a specialized queue file on the system handling the printer (see Figure 1.3).

FIG. 1.3 Printer Sharing

When sophisticated multivendor printer sharing is implemented, it is normally invisible to the user. The users simply initiate their output without thinking about the process that gets the file to the printer.

Disk Drives

Given the nature of minicomputers and mainframes, sharing raw storage space in multivendor networks is a rarity. For one thing, each system has its own operating system (typically a proprietary implementation) that interacts directly with disk devices in an optimized and nonsharable manner.

For PC LANs and their interfaces with minicomputers and mainframes, however, specific products have been engineered that enable PCs to access a larger computer's disk space as if it were local disk space. A portion of the larger computer's disk is transformed into a virtual PC hard disk or diskette (see Figure 1.4). Through the magic of the LAN, the PCs can read and write files and programs on these virtual disks as if they were native network disks. Of course, the load must be balanced appropriately; files should be distributed closest to where they are needed most. Otherwise, the server will be overburdened with file requests.

Although this strategy does address PC access to minicomputer and mainframe disk space, it does not do much for the computer host. The sponsoring computer, in fact, might not get similar access to the PC resources, so this style of implementation might be somewhat one-sided in terms of benefits.

At a higher level, some products allow a program on one system to read and write records in a file that resides on another system--for example, IBM's Distributed Data Management (DDM) implementation. Although IBM's implementation is, of course, specific to IBM systems, other companies have developed similar techniques to enable this level of access between dissimilar systems. Of these implementations, Sun Microsystems' Network File System (NFS) is widely implemented and has, in fact, been adopted as a network file access methodology by many of the leading computer manufacturers, including IBM and DEC.

FIG. 1.4 Virtual Network Files

Storage Media

Although tape drives are rarely viewed as network-level devices, the expense of the drives--as opposed to the minimal cost of the media itself--makes sharing attractive (see Figure 1.5). In a multivendor network, a shared tape drive can be used in one of two ways: switched access and networkwide access.

FIG. 1.5 Tape Drive Sharing

In switched-access mode, the tape drive can be shared among multiple systems, but only one system can use it at a time. The advantage of this approach over transferring files from other systems to the system that controls the tape is that, because each system gets direct access to the drive, each can read and write the tape in its native format. The switch, in this case, might be hardware, software, or, more likely, a combination of the two.

If the tape drive is controlled by a single network function available networkwide, files distributed throughout the multivendor network can be merged on a single tape. This is similar to how tape servers function in PC LANs. Although this is an efficient way of providing networkwide backup, it does not necessarily provide portability from system to system.

An effective enterprise storage management system goes beyond network backup--it provides for the best use of resources, and makes sure that end users can access data when it is needed. An issue in storage management is establishing a way to access data stored on multiple devices and environments, and automating storage and retrieval of the data. Data classification establishes policies for different classes of data so that managers can decide on the best type of storage media for each type of data. For example, non-critical reports can be stored on less expensive media, while customer information might need to be stored so that it is immediately available. Hierarchical storage management (HSM) tools are available to automate the process of data classification, and subsequently migrate the data to the most appropriate type of storage.
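A data classification policy of this kind can be captured in a few rules. The sketch below is illustrative Python; the class names, tiers, and 90-day threshold are invented for the example, not drawn from any particular HSM product:

```python
from datetime import datetime, timedelta

# Hypothetical policy table: each data class maps to a preferred tier.
POLICIES = {
    "customer": "fast-disk",   # must be immediately available
    "report":   "tape",        # non-critical, cheap media is fine
}

def classify(kind, last_access, now=None):
    """Pick a storage tier from the data's class and how recently
    it was used; stale data migrates to cheaper media."""
    now = now or datetime.now()
    tier = POLICIES.get(kind, "fast-disk")
    # Age-based migration rule: anything untouched for 90 days
    # moves to tape regardless of class.
    if now - last_access > timedelta(days=90):
        tier = "tape"
    return tier
```

An HSM product automates exactly this decision, then performs the migration itself, so that the policy, not an operator, determines where each file lives.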

Managing storage and backup can be done from either a UNIX perspective, which offers an open architecture and less-expensive products, or with the older, highly reliable IBM DFSMS product family. Although many tape and disk storage products for storing critical data are available, mainframe-class storage devices still offer the best reliability and performance. The availability of high-speed tape-mounting robots can yield an impressive data transfer rate, nearly approaching that of DASD. Still, other solutions must be considered. Remote distance unlimited DASD, for example, is sometimes used to provide for the availability of urgent, critical data. Redundant Arrays of Inexpensive Disks (RAID) technology has also become standard in many large enterprises, while CD-ROM jukeboxes and other optical storage solutions are becoming more efficient and affordable.

File Transfer

Of all multivendor services, file transfer is probably the best understood and most sought after. Files were being moved from one type of system to another long before LANs became popular. In the first implementations, files were moved via such common storage media as magnetic tape, punched card, or paper tape. As data communications and networking developed, these media-based transports were replaced by communications-based methods that emulated such products as the IBM 2780 and 3780 Remote Job Entry (RJE) stations. One computer would emulate a card punch, for example, while the other would emulate a card reader.

As data processing grew in size and in scope, these approaches became too limited to satisfy the variety of needs and demands for moving data. For example, nontechnical (or semitechnical) users often want to control the "when" and "what" of file transfers. In many cases, they even want to initiate the transfer themselves. This level of involvement by nontechnical personnel is simply not possible when using magnetic tape or RJE transport--both approaches require too much hands-on knowledge of hardware and/or the operating system.

Typical file transfer solutions have a relatively simple user interface to accommodate all levels of personnel (see Figure 1.6). A file transfer product can perform functions such as enabling the accounting department to transfer a file from the administration department to verify payroll or facilitating the exchange of documents and spreadsheets between users on dissimilar systems (as long as the actual word processing and spreadsheet packages can understand each other's information). And if the file transfer product is simple enough to use, these transactions can occur under the management of the people responsible for the information--no big brother from data processing required.

However, an easy-to-use interface does not diminish the potential power for file transfer. The same product can also be used to address some complex, application-oriented problems. It can extract information from one database, transfer the information to another system, and then update another database with that information. Therefore, in a dual (or multiple) database environment, file transfer is often used to move subsets of information from one system to another.


CAUTION: File transfer is not a good solution for moving an entire database from one system to the other because the information must be specifically extracted from the database and put into another format for the transfer.

Many file transfer products accommodate time-fired transfers. These transfers enable one system to collect information and transfer it to another system at predefined times. For example, a bank could transmit the day's transactions at the close of business, or a retail operation could send its cash register data at the end of the day.

In addition to time firing, some transfer products provide event-firing mechanisms. These mechanisms perform such functions as transferring a file as soon as it becomes available or after two other files are transferred. By combining time firing and event firing, you can create extremely sophisticated transfer scenarios.
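A combined time-fired and event-fired dispatcher can be sketched in a few lines. This toy (illustrative Python; the file names are invented and the transfer itself is stubbed out) shows both firing styles working together:

```python
import sched
import time

transferred = []          # record of completed transfers

def transfer(name):
    """Stub for the actual file movement a real product would perform."""
    transferred.append(name)

def fire_after(scheduler, delay, name):
    """Time firing: run the transfer at a predefined time."""
    scheduler.enter(delay, 1, transfer, (name,))

def fire_when_done(prereqs, name):
    """Event firing: run the transfer once its prerequisites have moved."""
    if all(p in transferred for p in prereqs):
        transfer(name)

s = sched.scheduler(time.time, time.sleep)
fire_after(s, 0.01, "daily_transactions")   # e.g., close of business
fire_after(s, 0.02, "register_totals")      # e.g., end of day
s.run()   # blocks until both time-fired transfers complete

# The summary fires only after both earlier files have been sent.
fire_when_done(["daily_transactions", "register_totals"], "summary")
```

Chaining events onto timed transfers is how the "extremely sophisticated transfer scenarios" mentioned above are typically built up from simple pieces.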

FIG. 1.6 File Transfer Among Dissimilar Systems

Behind the end-user interface and application aids are significant technical issues regarding file transfer and implementation. Some of the technical issues that affect the movement of information from one system to another include:

Encapsulation, one of the primary attributes of object-oriented technology, affords a new approach to data transfer. An object is a self-enclosed body of data, functions, and services, in which the system invoking the object is shielded from its internal workings. Techniques such as OpenDoc, CORBA, and Microsoft's OLE, which permit formatted data from one application to be brought into another application unchanged, are based on this technology.

Another factor that greatly contributes to the effectiveness (or ineffectiveness) of file transfer is how the product is structured. For the purpose of simplification, this structure can be broken into two parts:

In both cases, there is a direct association between performance and price. The higher the performance...well, you know. Choosing the file transfer product that offers the best price/performance ratio is often as difficult as the transfer process itself.

Program-to-Program Communications

Whereas file transfer is the easiest multivendor networking tool to understand, program-to-program communications is the most difficult. For one thing, the user community usually can't see which programs manage what information. Without this knowledge, it is difficult to understand the reasons for implementing program-to-program communications.

Despite this difficulty, the flexibility of program-to-program communications enables it to address many situations for which common terminal access or file transfer products are inadequate. Examples of these situations include:

Program-to-program communications can be implemented in two ways. One implementation enables one program to appear to a remote system as though it were a terminal (see Figure 1.7). In this approach, the program logs on to the remote computer, accesses the program it wants, and interacts with it by simulating a user sitting at a terminal. Although this approach requires the overhead of emulating an end user, it requires custom programming on only one of the systems; the other program remains the same.

FIG. 1.7 Program-to-Program Communications

The second type of implementation enables two or more programs to communicate directly with each other. In most cases, both programs use a common set of access routines that let them establish a link with one another and transfer information. In large IBM networks, this is accomplished through the SNA LU 6.2 interface. For LANs, the implementation is usually unique to the networking service (for example, DECnet's task-to-task communications, HP's InterProcess Communications, and Sun's Remote Procedure Calls).
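The direct style can be illustrated with ordinary TCP sockets. In this sketch (illustrative Python), two cooperating programs exchange a request and a reply; the one-line protocol is invented for the example, whereas real implementations would use LU 6.2 verbs, RPC stubs, or a vendor's task-to-task library:

```python
import socket
import threading

def server(listener):
    """One program: accept a peer connection, read its request,
    and answer it directly."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"ACK:" + request)

# Set up a listening endpoint on an ephemeral local port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

# The other program: establish the link, send a request, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"BALANCE?")
    reply = client.recv(1024)

t.join()
listener.close()
```

Both sides here use a common set of access routines (the socket calls) to establish a link and move information, which is the essence of the second implementation style.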

Although the tools to implement program-to-program communication are well-defined, the applications for it are wide open. Like most programming tools, the uses of program-to-program communications are highly dependent on their environment, applications, and programmers.

Electronic Mail

Electronic mail, or e-mail, can enable a network of people to communicate interactively. The backbone of every mail system is its capability to send notes and replies between its users (see Figure 1.8). This facility is faster (theoretically) than issuing memos and more convenient than tracking someone down via telephone.

FIG. 1.8 Sample Electronic Mail System

Besides providing basic electronic communications, many e-mail packages also include, or are bundled with, software for automating common desktop and office functions, or facilitating the flow of documents throughout the enterprise. These functions include time management (appointment scheduling and to-do list maintenance) and information management. Many products provide special scripting languages so they can be customized for more specific functions (companywide bulletin boards, structured training, help guides, and so on).

Implementing an e-mail package in a multivendor network can be done in a centralized or distributed manner. When implementing a centralized solution, one system can be designated the e-mail host. In this case, each terminal, regardless of manufacturer, must have access to that common system. As previously discussed, common terminal access is an appropriate means of accommodating this need.

In a distributed implementation, two or more systems serve as hosts to an e-mail solution. Although there are obvious benefits to running the same e-mail software throughout the enterprise (simplified training, better licensing deals, and universal access to vendor-specific features), different vendors' e-mail systems can still interoperate. Most modern systems comply with the SMTP standard, which allows for at least a basic level of interoperability. In this case, you must only ensure that the physical and logical links between the systems are compatible with the product's requirements.
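The basic interoperability SMTP provides comes from every system agreeing on one message format. As a sketch (illustrative Python; the addresses are placeholders, not real systems), the standard library can build such a message; actually sending it would be a single call to `smtplib.SMTP(...).send_message(msg)` against a reachable mail host:

```python
from email.message import EmailMessage

# Compose an RFC 822-style note; any SMTP-compliant host on any
# vendor's system can relay and deliver this same text.
msg = EmailMessage()
msg["From"] = "clerk@accounting.example.com"
msg["To"] = "manager@admin.example.com"
msg["Subject"] = "Payroll verified"
msg.set_content("The payroll file matched the administration totals.")

wire_form = msg.as_string()   # the text an SMTP host actually transmits
```

Because the wire form is plain, structured text, a DEC host, an IBM host, and a PC mail client can all parse it, which is what makes mixed-vendor mail exchange routine.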

Electronic mail differs significantly from file transfer. Some of the unique attributes of e-mail exchange include:

Workflow Systems

Much has been made of the trend towards Business Process Reengineering (BPR). BPR is an extremely arduous procedure in which all business processes are examined, rethought, and reworked from the ground up. There is often a lot of resistance to BPR, especially on the part of end users, because it disrupts people's comfortable routines. However, if effectively carried out, BPR can significantly enhance productivity.

BPR is often associated with workflow documentation, and in fact, the first step of a BPR analysis is often to document workflow. Looking at the flow of work throughout a department, or indeed the entire enterprise, lends itself to redesigning the very operations being documented. Oftentimes, a certain task that was done for years might prove irrelevant after it is documented. BPR has led to new technologies, including collaborative computing, document management, and automated workflow systems. Besides merely automating the flow of work and documents throughout the business unit, these tools can also significantly transform and redesign the way the work is done.

Workflow software dispatches electronic documents or forms through a queue, routing each document to the next person based on preset business rules or a defined access list. It can be used to automate business processes, route projects throughout a business unit, and track the status of a project. Workflow software, when built on top of a client/server architecture, permits business tasks to be performed in rapid succession or even simultaneously by different workers. If documents are presented in a traditional paper format, only one person can use them at a time, and they must be physically transferred to the next person in the workflow line. The electronic presentation of documents greatly speeds up the process.

Until recently, there was no way to connect messaging systems and workflow engines. Microsoft addressed this situation by adding extensions to its Messaging Application Programming Interface (MAPI), which permits the linking of messaging and workflow systems. In addition, a workflow consortium, of which Microsoft is a member, is also planning to publish an API to define how front-end applications can access multiple workflow systems. Microsoft's MAPI Workflow Framework defines a set of extensions for routing work from desktop applications to workflow systems in the form of MAPI messages. Under the Microsoft framework, a MAPI-based e-mail system can now trigger a workflow procedure.

Similar to the workflow model is groupware, a type of solution that permits groups of individuals to work collaboratively on a project. This also is built on an e-mail framework, and provides many of the same capabilities as workflow products. One of the most prominent groupware products is Lotus Development's Notes, a fully programmable tool that can highly automate tasks, facilitate communications, and streamline access to data. To scale to the enterprise level, however, groupware products must be able to integrate with existing network management tools. To accommodate this need, Lotus released SNMP agents for OpenView, which permit event messages to be sent from a Notes server to an SNMP management console. IBM, Lotus' parent, also has plans to integrate Notes with network managers from Sun and IBM.

Network Management

Network management in a multivendor network is a technical task rarely seen by end users. However, end users do often notice the effectiveness of a network's management when they experience network changes and problems.

The problems involved with managing multivendor networks are numerous and complex. In some cases, networks are geographically separate and linked through bridging and gateway devices. In others, various manufacturers share the same physical network, but each runs its information over that network independently. Sometimes, information runs over the same physical and logical network.

Therefore, when a component in a network ceases to operate correctly, there are many potential causes for the failure. Furthermore, given the increasing use of WANs, the geographical size of the network can be huge. For example, a DEC LAN in California might connect to an IBM network in Texas that might connect to an HP network in New York. Even worse, the group responsible for managing the network might be located only in New York, thus increasing the difficulty of diagnosing the Texas and California networks, even though they are linked together.

The primary job of network management is to monitor and report on the status of the whole network. A network management solution tracks the status of every component in the network, regardless of who the manufacturer is or what type of network it is operating on (see Figure 1.9).

As already mentioned, network management products are often invisible to the end users. But the use of such a product in conjunction with the overall networking strategy is an important aspect of maintaining any large single-vendor or multivendor network.
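Status tracking of this kind is usually done with SNMP, but the core loop can be sketched with nothing more than a reachability probe. In this toy (illustrative Python), a local listener stands in for a managed device; everything here is a stand-in, not a real management protocol:

```python
import socket

def check_tcp(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds in time.
    A management console would run checks like this (or SNMP polls)
    against every component on its map."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: stand up a local "device," poll it, then fail it and poll again.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

up_before = check_tcp("127.0.0.1", port)   # device reachable
listener.close()                            # simulate a failure
up_after = check_tcp("127.0.0.1", port)    # device now unreachable
```

A real product layers maps, alarms, and vendor-specific agents on top of this monitor-and-report cycle, but the cycle itself is the primary job described above.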

FIG. 1.9 Network Management System

The Bottom Line

Examining networking and application needs to find the best solution is a complex task. It is important to understand networking issues and some underlying networking considerations. For example, to understand the difficulties in implementing a combined IBM and DEC network, it is important to understand how each network operates on its own. Similarly, to shop for multivendor solutions, you need to understand the application and range of available options.

The rest of this book is organized with these considerations in mind. Chapters 2 through 5 deal with the products and native networking architectures of Digital Equipment Corporation, Hewlett-Packard, IBM, and Sun Microsystems. Chapters 6 through 13 address multivendor networking issues, standards, product approaches, and network management. Finally, a glossary defines the terms and acronyms used in this book and throughout the data communications and networking industry.




Macmillan Computer Publishing USA

© Copyright, Macmillan Computer Publishing. All rights reserved.