Visual Basic Expert Solutions


Chapter 8

Modern Client/Server Computing


Client/server computing is a topic of great importance to everyone in the computing field. It is also the object of considerable hype by the trade press. Much of what you read about client/server computing is factually correct but misses the point. Sometimes client/server is treated as a kind of database management system akin to hierarchical and relational databases. In truth, the database vendors are early adopters of client/server, but the topic is bigger than database access alone. At other times, we see client/server treated as another category of computer science, like multimedia or artificial intelligence. This ignores the fundamental nature of client/server as a link in the evolution of computing, not a feature of applications.

There is a sea change taking place in the computing world as we shift away from single-vendor mainframe computing toward small, single-purpose computing. There are still more mainframe and midrange applications than client/server applications, but the preponderance of new development is taking place in the new client/server paradigm. This shift is the topic of this chapter. Its goal is to lead your thoughts through the next 10 years of computing changes in order to prepare you for the future. This chapter will cover the following topics:

At the conclusion of this chapter, you will be aware of the direction in which the computing industry is heading.

Understanding Computing History

Client/server is not a type of computer or a feature of an application; rather, it is simply the next generation of computing technology. To understand what this means, we have to look at the history of the information industry. The invention of the computer marks the beginning of the computer age, but not of the information age. Man has been processing data, mostly by hand, for the past 500 years. As companies began to form and grow, they identified, cataloged, processed, and summarized data about their activities in support of the decision-making process. Double-entry bookkeeping dates from the time of Columbus. An examination of the books of any large company in the nineteenth century reveals an extremely sophisticated business approach, complete with processes and procedures. These processes and data were scattered throughout the enterprise and maintained by people who understood their origin, use, and value.

In the early 1950s, businesses began learning about computers. These early machines could perform certain tasks far better than humans and could potentially reduce the cost of certain activities. Two attributes dominated the understanding of computers of this era: their complexity and their cost. A computer center could cost millions of dollars and required a whole staff of computer experts to keep it going.

The expense and complexity of those computers suggested that a centralized organization was required to feed these new wonders. Because of the relative expense of the computing hardware, it was not cost effective to take the computers to where the business data naturally resided, so the early adopters brought the "mountains to Mohammed." In effect, they brought their mountains of data to the programmer/analysts of the computer room. These people transformed this data into information by putting it into the computer and running programs against it. Over the ensuing decade, businesses found themselves investing tons of money and reaping tons of benefit from this arrangement.

The biggest problem of this day was the programmer/analysts' lack of understanding of the data that they were throwing about with such deftness. They knew something about what it meant, but this knowledge was far inferior to that of the functional workers in each part of the company. This was recognized by everyone, but the resulting savings were so great that everyone just lived with it. This era was dominated by the "big iron" vendors like Honeywell, Control Data, IBM, Burroughs, and Sperry. In the late 1970s, several changes began to take place:

This brought about a drastic change in the way people used the computer. Departments began to buy computers for themselves and to create programs that addressed their problems. This was especially true in design and manufacturing organizations that had a large number of engineers on staff. The mainframe companies were not nimble enough to react properly to this change, and new companies like Digital Equipment, Hewlett-Packard, and Data General emerged from obscurity to become household names.

Nothing lasts forever, and in the late 1980s, another set of changes began to fall into place:

Now, functional workers began to ask why they couldn't have a computer on their desk at work. This created a demand for the personal workstation that dominates the business computing landscape of the 1990s.

In reality, all that the computer professionals are doing is relinquishing the ownership of data to its rightful owners (the departments that created the data), who now bear responsibility for its completeness and correctness. For a business to remain competitive, data must flow freely between these departments to shed light on the decisions that must be made every day.

For example, the engineering department needs to know what the consequences of its design would be on the manufacturing department. What changes to the design could it make to lower the cost? How much would an extra wingding on the side of the widget cost? How much more does marketing project that customers will be willing to pay for the extra wingding? How much capital investment will be required to purchase the wingding mold and press? How much can we buy the wingding part for if we give the specification to a supplier?

The answers to all of these questions require data from multiple organizations. Engineering owns the part design model data. Manufacturing data is required to project the fabrication cost. Marketing data contains the price projections. Procurement has data about suppliers, their capabilities, and their prices. Finance knows how much capital will be required and what other projects are competing for this investment capital.

The solution to this problem of integrating these "islands of automation" is commonly referred to as "client/server" computing. The person or program that wants a service performed is the client, and the person or program that provides that service is the server. The "/" in the middle is the responsibility of the computer scientist. Your job, should you choose to accept it, is to set up the infrastructure so that these functional experts can obtain the services that they need to do their jobs, and thereby earn the money to create paychecks for everyone.

Clients

The generic concept of a client is fairly simple: it is someone or something that wants service. This basic client/server relationship is repeated billions of times each day, everywhere on the planet. The majority of these transactions still take place with heavy human involvement, although more and more of them are being transferred to computers.

Each of the transactions of your day that involves the participation of resources other than yourself is a client/server relationship. Some of these transactions, like getting a haircut, involve only people while others, like withdrawing money from the bank via an electronic teller, involve only computers. Most services, like eating at a restaurant, have a human component as well as a computing component.

The client/server computing evolution is not anything new in a logical sense. It is just the automation of relationships that already exist.

From the standpoint of the Visual Basic programmer, the client is the application that needs data or services from another program, often running on another computer. An example of this is a Visual Basic program that allows a user to browse through a library of presentation graphics stored on a centralized server. This front-end client makes requests to the presentation graphics server for specific slides. The server sends the requested slides back to the client, which then displays them for the user.
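In Visual Basic, that exchange can be sketched in just a few lines. The ProgID SlideLibrary.Server and its GetSlide method below are hypothetical stand-ins for whatever interface the graphics server actually exposes; only CreateObject, LoadPicture, and the PictureBox control are standard Visual Basic.

    ' Connect to the (hypothetical) presentation graphics server
    Dim slideServer As Object
    Set slideServer = CreateObject("SlideLibrary.Server")

    ' Ask the server for a slide; assume it returns the path of a local copy
    Dim slideFile As String
    slideFile = slideServer.GetSlide("Quarterly Results")

    ' Display the slide for the user in a PictureBox named picSlide
    picSlide.Picture = LoadPicture(slideFile)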

Servers

What types of servers will this brave new world need in order to function properly? The correct answer is probably not known at this time. Mankind usually requires a decade or more to realize the best use of a technology. A few categories have already emerged. We will look at these server types in this section.

Remember that a server is defined as a program that services multiple clients who are interested in the resources that the server owns. By this definition, a restaurant is a server, but the kitchen in your home is not. To use a restaurant, you make a functional request, such as asking for a plate of ribs. Your kitchen, on the other hand, just sits there when you give it orders. It has resources, but no process to use them, so it is not a server.

In Visual Basic, the server is usually a repository of information that could potentially be of interest to a number of different clients. The Jet Database Engine can act as this server, but often the server requires a more robust platform like Microsoft SQL Server. A database of current budget allocations might be kept by the accounting department on a server. Other departments can write client programs in Visual Basic that access this budget data over the network and display it in whatever format they need.
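A minimal sketch of such a client, written against the Data Access Objects that ship with Visual Basic, might look like the following. The server path, the BUDGET.MDB database, the Allocations table, and the field names are assumptions made purely for illustration.

    Dim db As Database
    Dim rs As Recordset

    ' Open the accounting department's budget database on the server
    Set db = OpenDatabase("\\ACCTSRV\DATA\BUDGET.MDB")

    ' Ask for this department's current allocations
    Set rs = db.OpenRecordset("SELECT Account, Allocated, Spent " & _
        "FROM Allocations WHERE Dept = 'ENGINEERING'")

    ' Display the rows in whatever format the department prefers
    Do Until rs.EOF
        Debug.Print rs("Account"), rs("Allocated"), rs("Spent")
        rs.MoveNext
    Loop

    rs.Close
    db.Close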

File Servers

The earliest example of one computer providing services to another is in the area of file sharing. One computer was typically purchased with a very large (by the standards of the day) hard disk drive. This drive would be made to appear as if it were local to the rest of the computers on the same local area network. An application's files could be loaded onto this hard drive, which was visible to every computer. Whenever another computer needed the program loaded into its own memory, the network would perform its magic and the program would be loaded on the local computer. Figure 8.1 illustrates this concept.

Fig. 8.1 File servers make data and programs available over the network.

Print Servers

In the mid-1980s, printers were “dumb” devices. They were nothing more than a print head, a simple processor to drive the print head, a power cord, and a communications line. Each type of printer responded to a set of commands that were sent to it over the serial or parallel line. These commands were processed sequentially as they were received. A program running on the host computer was responsible for generating the commands needed to print a page.

Some computers had special programs, called printer drivers, written for a specific printer. These printer drivers accepted files as input and translated them into a stream of characters that the printer understood. Every computer that needed to print to that printer had to install the printer driver or write its own printer control code.

Every computer had to have a printer attached to one of its hardware ports. For mainframe computing, this was not a problem because scores of people shared the computer, and hence the printer. For personal computers and workstations, however, this situation was intolerable because of the expense of providing everyone with a printer. Organizations began to designate one computer as the printing computer. Users would walk over to this computer with a diskette containing the file that they wanted to print and would use the designated printing machine to produce hard copy.

This lessened productivity, so companies next installed a commercially available network, like LAN Manager or NetWare, which allowed that one computer to be hooked to the network. The network software made the printer on the designated PC look as if it were attached to your local computer. Your local computer became a client of this print server. Instead of using your computer to send instructions directly to the printer, your computer was really asking another computer to provide this connection and actually send the commands down the wire to the printer.

The next step in the evolution of the printer was the merging of the controlling computer and the printer into a single network node. Network-attached printers contained all of the computing power needed to drive the print engine in the same box. These printers had addresses on the network and looked like ordinary nodes. However, because such a node was a special printing computer and not a general-purpose computing machine, it could use RISC processors and run software tuned for the purpose of processing printouts as quickly as possible. These print servers removed the need for every computer to drive the printer. If an application could create an output file in the PostScript or HP PCL language, the file could be translated at the printer and printed out.

These printers are important to client/server computing because they were truly service devices that offloaded significant work from the local computers. Because of the sophisticated processing that they could do, they caused the industry to think of other uses for servers. Figure 8.2 illustrates this architecture.

Fig. 8.2 Attaching a printer to a network removes the need for a computer to manage it.

Graphical User Interface Servers

Two different kinds of GUI servers appeared in the late 1980s: mainframe screen scrapers and X Windows terminals. They both provided graphics services to the user locally but displayed data that was being retrieved from another machine.

The mainframe scrapers attached to the big computer by making the workstation look like a dumb terminal to the application. Whenever the mainframe application sent what it thought were instructions on how to paint a terminal screen, the screen scraper would intercept them, transfer them to a personal computer, put a pretty face on them, and display them on the screen. Whenever the user entered data on this graphical user interface screen, the scraper would package it up to look like an ugly terminal screen and send it to the mainframe. The application was none the wiser. Figure 8.3 shows how this works.

Fig. 8.3 The computer does not know that the screen scraper is not an ordinary terminal.

X Windows terminals were designed to allow a graphical user interface to be displayed on a terminal. As workstations began to proliferate, a problem arose: organizations could not afford to let everyone have a $25,000 computer on the desktop. Whenever they did foot the bill for one of these prizes, the owner still spent much of his time in meetings, thereby wasting the resources on his desk.

X Windows terminals relieved this problem pretty well. They moved much of the GUI work to a terminal that provided a workstation look and feel for a quarter of the cost. They did this without burdening the main computer with a lot of screen handling work.

These special terminals were also popular for centralized database applications written with a GUI front end. These applications provided a beautiful user interface to each terminal without having to deal with the issues of distributed databases and without clogging the network with GUI screen painting commands.

Image Servers

The next server type, in this progression from the simple to the complex, is the image server. At the time of this writing, images consume considerable disk space and can be stored in more formats than you can count without taking off your shoes. Image servers can assist in dealing with both of these problems.

For example, suppose that a travel agency wants to really jazz up face-to-face meetings with its customers. It wants to display pictures of the great vacation destinations that it offers. Instead of loading these huge files onto everyone's local machine, the agency purchases an array of CD-ROM photographs of these places. These pictures come from a variety of sources and in a variety of formats. Whenever a session requires that a photo be displayed, the client program makes a request to the image server for that picture. The image server locates the photo, converts it to the format the client needs (.BMP, .TIF, .PCX, and so on), and sends it to the client for display.

Database Servers

Everyone immediately thinks of distributed databases when the words "client/server" are used. There are already mature products in this niche: Oracle, Microsoft SQL Server, Sybase, and Ingres, to name a few.

Earlier, when file sharing was being discussed, we mentioned how all users in a local area network could access files on the server's hard disk drive. Certainly, within five minutes of this network feature’s introduction, someone wrote some data to a file so that another user could access it. Immediately, the issues of security, concurrent access, logical schema independence, and every other mainframe database problem came to the desktop computer. The DBMS vendors set about solving these problems, as well as a host of others dealing with distributed data.

These database vendors benefited tremendously from the existence of a de facto standard database manipulation language: IBM's Structured Query Language (SQL). SQL was designed to provide a functional interface to database access. Fortunately, it was designed when distributed database was a hot research topic, if not yet a reality. As a result, the designers of SQL considered the importance of set-oriented processing and its effect on network performance in their design.

Set-oriented processing means that instead of asking for one row at a time that meets the search criteria, you ask for all qualifying rows at once. This provides four advantages:

The downside of SQL is that every vendor produced a slightly different flavor of the language from what the standard specifies. So, while the learning curve is shorter than it would otherwise be, applications are not completely portable across multiple vendors' offerings. Chapter 10, "Structured Query Language," discusses SQL in more detail.
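In Visual Basic terms, set-oriented access means the client sends one statement and receives every qualifying row in a single result set, rather than one network round trip per row. The sketch below assumes an ODBC data source named SALES on the database server and an Orders table with the columns shown; the dbSQLPassThrough option hands the statement to the server in the server's own SQL dialect.

    Dim db As Database
    Dim rs As Recordset

    ' Attach to the database server through ODBC (data source name is illustrative)
    Set db = OpenDatabase("", False, False, "ODBC;DSN=SALES;UID=clerk;PWD=secret")

    ' One set-oriented request returns all qualifying rows at once;
    ' dbSQLPassThrough sends the statement to the server unchanged
    Set rs = db.OpenRecordset( _
        "SELECT OrderID, Customer, Amount FROM Orders WHERE Amount > 1000", _
        dbOpenSnapshot, dbSQLPassThrough)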

The big advantage of client/server databases is that they fit the model of modern data ownership better than a mainframe database does. Typically, the business unit that creates the data understands best how to use it. It wants to be able to extend the database as it sees fit. It is also the best qualified to determine what constitutes invalid data and to set up the rules that prevent it from entering the database. Add to this the fact that functional units often have the political power required to keep the data in their organizations, and you'll understand why mainframe database types are changing sides and declaring victory in record numbers. The focus of wise database gurus has shifted to helping these data owners deal with the pure computer science problems of performance, concurrency, security, and so on. Figure 8.4 shows how these client/server databases are linked together.

Fig. 8.4 The client/server databases are owned by the departments that created them, but everyone with permission can access the data.

Wrappered Servers

An important enabling technology that makes the evolution of computing practical is the wrapper. A wrapper is a layer of code, hopefully thin, that allows an existing system, written in another epoch, to imitate a modern application. In order to accomplish this task, an old application is "wrapped" in a disguise that makes it look like something it isn’t.

For example, suppose you have a very important database on a mainframe that contains critical data. Surrounding this database is an army of people who carefully control the data that goes into it. If you rewrite this application and move it to another platform, you will have to redesign that control structure. Because everyone to whom you suggest this turns pale, you choose to avoid the issue by accessing the data in place. However, you cannot use the old development tools that normally accompany this ancient environment without permanently damaging your resume, not to mention your mental health.

The solution is to write a wrapper that makes the application look like an object with methods. Now, you can use modern tools to access this data.
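In Visual Basic 4.0, such a wrapper can be an ordinary class module. Everything in the sketch below is hypothetical: the class name, its single method, and the FetchFromMainframe helper that hides whatever terminal session, file transfer, or vendor API the old system actually requires.

    ' Class module clsLegacyBudget: wraps the mainframe budget system
    ' so that clients see a simple object with methods

    Public Function GetBalance(ByVal AccountCode As String) As Currency
        ' The machinery behind this call -- a terminal session, an export
        ' file, a vendor API -- is invisible to the client
        GetBalance = FetchFromMainframe(AccountCode)   ' hypothetical helper
    End Function

A client neither knows nor cares what lies behind the wrapper; it simply writes:

    Dim budget As New clsLegacyBudget
    Debug.Print budget.GetBalance("3100-OVERHEAD")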

Advantages of Client/Server Computing

Client/server computing saves money by reducing the need for human intervention in a large number of transactions. With a centralized database of budget allocations, each client can send a request to the server for the budget figures that it needs, and the results can be displayed on-screen for the user. This is much less expensive than retrieving the data by written request or telephone call.

A key strategy of every enterprise is to lower its costs, either to drive demand or to improve its margins. In the business world, therefore, lowering costs is a primary motive.

Labor is the largest cost component of almost every product and service on the planet. Even when the cost of raw materials is high, it is normally due to the amount of touch labor that producing them requires. Therefore, most cost reduction efforts have to focus on the reduction of labor costs. Client/server computing holds great promise in this labor-cost-reduction strategy.

Another advantage of client/server computing is the speed at which data can be accessed. Even if a company doesn’t mind paying people to call each other with questions, it probably minds waiting for the response when the person who knows the answer is unavailable. A server can be configured for 24-hour availability if necessary. This means that requests are fulfilled faster, and the entire project is normally completed more quickly. In many businesses, the company that calls the customer back first with the answer gets the order.

A third advantage of the client/server model is in the division of labor. While becoming a client/server expert requires considerable aptitude and training, becoming a Visual Basic front-end programmer requires much less. If all of the difficult issues like data integrity and multiuser access are handled at the server, the job of creating the client program is simplified and can be done either by a lower-salaried person or, most often, by a worker whose primary job is not programming.

The Obstacles to an Ideal System

Implementing the next generation of computing systems is another insurmountable opportunity for the quick and the brave. While it is conceptually simple to think about allowing all of the data to be maintained by the best-qualified people and then made available to whoever needs it to do his job, the details are hard.

One requirement of our ideal system is that the delivery mechanism needs to be transparent to the client. This transparency takes several forms:

Though this is a lot to ask, if you master the art of solving all these problems, you will be gainfully employed at least until the kids are out of college. You have probably already started estimating how much code will be required to satisfy these requirements. It will take a lot of programming to make this a reality, which is good news for those with skill at creating and implementing systems.

As you ponder that list, you can probably think of many more requirements such as:

For the next generation of systems to be usable, most of these obstacles must be overcome. In order for the system to fulfill the huge potential of truly distributed computing, it must overcome them all.

Enabling Technologies

For a client/server application to be successful, it must have a reliable connection to the server, involving both hardware and a software layer sometimes called middleware. For example, the physical connection between the client and the server can be as simple as a wire or as complex as a wide area network (WAN). The software may be as simple as a communications program, or it may involve military-style security.

Many of these technologies have matured to the point where they are already in everyday use, while others have a ways to go.

Hardware

The standalone personal computer of the 1980s had a huge gulf between it and client/server computing: the lack of a physical connection to the other computers in the world. The hardware and operating system software engineers of that day did a fine job of solving this problem. Now, the vast majority of serious business computers are linked to some form of network.

The simplest piece of equipment purchased in the hardware world was the wire. Normally, these wires are shielded metal, have a specific impedance, and have connectors on the ends. They had to be connected to something, so a device called the network interface card had to be installed in the computer. The problem of connecting computers together via wire was not hard. The main challenge of the late 1980s was the affordability of these devices for personal computers.

The next piece of the puzzle was in the area of network operating systems. For a time, Novell NetWare looked like it would be the dominant system in this market, but innovations by other vendors and a desire to interconnect the PC networks into larger networks have kept the jury out on which architectures will win. As of this writing, multiple protocol stacks support a coexistence of architectures, although not without some pain.

Soon after, every computer in the company was filling these new wires with happy sounds. Before long, everyone learned the meaning of the word bandwidth, and how little of it there seems to be when a whole company is sharing it. They also learned that their computers did not always speak the same language as the computers that they wanted to talk to. Computers attached to local networks wanted to have access to the other computers in the company when they needed it, but they also had to have good response time when communicating within their own departments. To address this problem, bridges, routers, and gateways were introduced.

Bridges are devices that interconnect LANs. They listen to the traffic going across the network and filter out traffic that is local to one side of the bridge, thereby lowering the traffic on the other side. Figure 8.5 illustrates how bridges work.

Fig. 8.5 Bridges interconnect networks and filter unwanted traffic.

Routers interconnect LANs using protocol-dependent information gleaned from the packets on the network. They lower the amount of traffic on a part of the network by filtering out protocols that are not supported on a LAN. Devices that both bridge and route, sometimes called brouters, are the most popular.

Gateways are devices that translate one protocol into another. The SNA gateway has been in use for years; it translates other network packets into SNA-compatible packets in order to talk to the IBM mainframe.

All of these devices introduce delays in the movement of data across the network. This lowers the user's satisfaction when the data is time-dependent, like video transmission.

This niche has produced a few millionaires and will certainly produce a few dozen more in the next decade. As a result, the rate of innovation is very fast, and new products appear every month. This progress, combined with more standardization between service providers, will continue to improve the connectivity between computers.

For the vision of global client/server computing to be a reality, every computer in the world needs to be able to talk to every other computer in the world (if it has permission), in much the same way that any telephone can talk to any other telephone. The large communications companies are spending a fortune on technologies like ATM, X.25, ISDN, and other acronyms that promise to make this interconnection easier and less expensive. Because of the high potential for revenue from these networks, innovations are a certainty. Figure 8.6 illustrates this kind of networking.

Fig. 8.6 A large mix of technologies is required to interconnect a large number of computers.

The challenges associated with creating a hardware infrastructure are formidable, but well on the way to solution. The solutions to the software interoperability challenges are nowhere near as mature.

Middleware

The hardware solutions of the previous section addressed a few of the obstacles to implementing a global client/server computing vision. The rest are left to a software layer often called middleware. Products find themselves classified as middleware when they are neither application nor operating system software.

The middleware field is interesting because it is so immature. Standards are plentiful and fluid, and new requirements are still being discovered. Five main categories of middleware are currently under development:

All of these middleware layers perform important functions, without which client/server computing would either be impossible or impractical. The evolution of the computing field is more dependent on the evolution of middleware than on any other factor.

Distributed Objects

Nearly everyone in the computing field agrees that objects are the wave of the future, even if they aren't quite sure what an object really is. Simply put, an object is a piece of code that has characteristics, or attributes, and provides services through functions called methods. Object technology has facilitated the development of large, complex software systems by hiding the complexity of software components behind simpler object interfaces. The next logical step is to extend the advantages of object technology across platforms and geographic locations.
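Visual Basic 4.0 class modules map directly onto this definition: public variables (or property procedures) supply the attributes, and public functions supply the methods. A minimal sketch, with invented names:

    ' Class module clsPart: attributes plus methods

    Public PartNumber As String     ' attribute
    Public UnitCost As Currency     ' attribute

    Public Function ExtendedCost(ByVal Quantity As Long) As Currency
        ' A method: a service the object performs for its clients
        ExtendedCost = UnitCost * Quantity
    End Function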

Distributed objects allow a client to run software (invoke methods) that resides on another computer. Logically, this activity is very similar to accessing data on another computer, except that the interaction between the objects in the two processes is more complex. Instead of sending an SQL statement to a server that sends back a set of rows, a distributed object client might send a geometric model of an aircraft part to a server that optimizes part models for weight and strength. These complexities must be addressed in order to make the whole scheme work properly. The main complexities are as follows:

Fig. 8.7 Data travels in packets across the network from the client to the server and from the server to the client.

When applications call functions, they are sending information via an application programming interface, or API. The function called is normally located inside the same address space as the calling routine and is found through a pointer to the start of its code in memory. What is needed is the equivalent of a pointer to the function on the other computer. The middleware and the network must turn what the application thinks is a local function pointer into a call to a function on a remote computer.

Armed with an interface definition, the client can arrange the data into a string to be sent to the server. The server must then react appropriately to process the request. Having done this, the server would like to pass the output back to the client. At this point, the server must call some function in its address space to pass the data back, so it also needs an interface to bridge this gap.

Finally, the clients in the scenario need to say goodbye to the servers and resume their own work. This is another application interface need. These problems are addressed by the definition of an object model.

The Object Model

The goal of distributed computing is to exchange data and services between computers, regardless of type and physical location. To accomplish this exchange, each computer must receive requests for action in a form that it can understand. In past and current implementations of distributed computing systems, one vendor of hardware or software would publish an interface, normally in the form of an application programming interface, and other programmers and vendors would write programs to this standard. This was a step in the right direction, but not the final solution.

When one computer requests data or services from another, it must use an interface. This interface must provide exact details of which method performs which function and what data types each method accepts as parameters. It must be exact enough to allow coding to take place, yet easy enough for a programmer to use to connect to other server objects. This specification is called the object model, because it describes the object in such a way that all software conforming to the model can communicate with all other conforming software.

Two main camps have published object models and proposed them as standards for everyone to use. One of them is the Common Object Request Broker Architecture (CORBA), proposed by a consortium called the Object Management Group (OMG). The other is called the Component Object Model (COM) and is favored by Microsoft.

Common Object Request Broker Architecture (CORBA)

In 1989, an industry consortium was formed to develop standards that would allow objects to interoperate in the heterogeneous global client/server world. This group was called the Object Management Group (OMG) and was joined by most of the vendors in the industry, including Microsoft.

The OMG has published and revised the Object Management Architecture Guide (OMA Guide). This guide describes four main components as shown in figure 8.8.

Fig. 8.8 There are four major components of the CORBA architecture.

The functions of these services are as follows:

The Object Request Broker (ORB) is a piece of middleware responsible for managing the interconnection of a client with a server, regardless of its hardware platform or where the object and services are located. To make this possible, an Interface Definition Language (IDL) is used to describe the interface to the ORB.

The operation of a client/server request is fairly simple. The ORB accepts a request for a service from a client and looks in its directories to find the object that provides that service. The ORB then completes the circuit between the client and the server and passes the request to the server. The server performs the requested action and passes the output parameters back to the ORB, which communicates them back to the original client.

Component Object Model (COM)

Microsoft designed COM so that applications could be built from components supplied by different vendors. Encouraged by the successful third-party industry that has sprung up to write VBXs, Microsoft wanted to generalize this effort to include all languages and all operating systems. The model supports higher-level services like those provided by OLE 2.0. Because this is a Visual Basic book, COM is of particular interest here.

COM offers a set of services to the application provider. Microsoft claims that the following features of an object standard should exist if it is to be taken seriously:

Briefly, COM deals with the problem of calling a function outside of its own address space by specifying that a small piece of code (called a proxy) be linked into the local application. In order to call a function in the interface of another object, the application calls that function in the local proxy. The job of the proxy is to capture the calling parameters, package them in a way to travel across the network intact, and send them to its counterpart on the other machine, the stub.

The stub resides in the same address space as the server application and can therefore access all of the server's functions via a normal C or Pascal style pointer. The stub unpacks the call that was sent by the proxy and makes the call to the local method or function. The method then passes the returning parameters to the stub, which packages them and transmits them back to the client's proxy. The proxy unpacks the results and hands them back to the client. Figure 8.9 shows what this looks like.

Fig. 8.9 The proxy and stub do the communications work for the client and server application.
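From the Visual Basic programmer's point of view, all of this marshaling is invisible. In the sketch below, PartOptimizer.Server and its Optimize method are hypothetical; if such a component were registered as an out-of-process or remote server, the proxy and stub would carry the call across the boundary, yet the code reads exactly like a local call.

    Dim optimizer As Object
    Dim newWeight As Double

    ' The proxy packages the arguments; the stub unpacks them in the
    ' server's address space and calls the real method
    Set optimizer = CreateObject("PartOptimizer.Server")   ' hypothetical ProgID
    newWeight = optimizer.Optimize("BRACKET-42", 0.95)     ' hypothetical method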

The proxy and stub work fine as messengers, but they do not concern themselves with the calls made to the methods of the server object. How is this coordinated between the client and server? COM specifies that any object worthy of being called a COM object must be able to answer the following question: "Do you speak the XYZ language?" (or, in computer terms, "Do you support the XYZ interface?"), where XYZ is any interface that the client speaks (has implemented).

The server must either pass back a pointer to the interface in question or NULL, which indicates that it cannot speak XYZ. The details of the XYZ interface are of no concern to COM; its job is simply to pass calls back and forth between client and server for the duration of the conversation.
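Visual Basic hides this question behind its own syntax. Roughly speaking, testing an object with TypeOf, or assigning it to a variable declared with a specific class, is the moment the "do you support this interface?" question gets asked; a negative answer surfaces as False or as a trappable type-mismatch error. A sketch, reusing the hypothetical clsPart class from earlier and an invented GetSomeObject function:

    Dim anything As Object
    Dim part As clsPart

    Set anything = GetSomeObject()          ' hypothetical function

    ' Roughly the Visual Basic form of "Do you support the clsPart interface?"
    If TypeOf anything Is clsPart Then
        Set part = anything                 ' safe: the object said yes
        Debug.Print part.ExtendedCost(10)
    Else
        Debug.Print "Object does not support the clsPart interface"
    End If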

The value of this strategy is based in three simple features:

These objects are compiled and loaded onto a computer. They can be communicated with at the binary level and therefore can be accessed by any language that can call a function via a pointer to a pointer.

Version management is unnecessary because the rule of COM is that once an interface has been defined, the interface specification cannot be changed. The implementation may be improved over and over, but the definition of how the client application calls its methods is fixed. If a change is needed, a new interface spec is written, and the interface receives a new name.

COM has the advantage of being used commercially in OLE 2.0. This means that every developer who writes the next generation of .VBX controls, called .OCXs, will be somewhat familiar with COM. This gives COM some early momentum in the race for universal adoption. Because Visual Basic provides such a large market for these .OCXs, the number of developers who learn to code to the COM model is likely to reach the thousands in the next several years.

Comparing COM and CORBA

By closely examining the details of both COM and CORBA, you can understand that the differences are not cosmetic or petty, but rather each represents a different set of goals. CORBA has a certain beauty and completeness about it. It attempts to solve the problem of interoperability in its general form, in a complete and elegant way.

COM, however, is for the pragmatic at heart. It is simple enough to be immediately usable without waiting on anyone or anything, at least on homogeneous systems. It has a "quick to market" orientation that emphasizes "getting it done" as opposed to "getting it right."

So who is right? Neither. Depending on what you are trying to accomplish, one approach is better than the other. In some ways, CORBA is reminiscent of Open Database Connectivity (ODBC), which attempted to solve the client/server database interconnection problem in its general form. ODBC, however, pays a performance penalty for its flexibility, so it is often bypassed in favor of the pragmatic alternative, pass-through mode. In time, this performance penalty may become less important as machines grow more powerful. Many old-timers can remember when languages like FORTRAN and COBOL wore the same "too slow to be practical" label and were bypassed by those who wrote assembly language.

The Future of Client/Server Computing

Plenty of consumers are telling their hardware and software vendors in plain English, Spanish, French, and every other known language that they want improvement. The vendors themselves have begun to talk about interoperability as a good thing, but some of their behavior still causes us to blink twice.

This need for interoperability has brought about an age of ad hoc solutions, which is the situation the computer business faces now. Because of the lack of agreement on what the right answer is, we are implementing technologies like ODBC and CORBA that mask differences, but at a price. Routers and gateways perform similar services on network packets, and wrappers fill this niche in the object-oriented world. While these solutions don't truly satisfy, they allow a certain level of interoperability to take place while the industry experts ponder the deep question of what we really need to solve the fundamental problem.

Predicting the future is easy, but getting it right is the hard part. Having said that, the following seem likely to occur:

The difference between client processes and server processes will blur beyond distinction and become time-dependent. Instead of saying that a program is a server, we will say that it is acting as a server at this moment.

From Here...

In this chapter, we defined client/server computing in a broad sense as any process that receives services from another process. This definition allows us to view the future of computing from a better perspective.

We examined the kinds of servers, from simple print servers to complex object servers, and looked at the future of computing with objects and object standards. Finally, we speculated about what the coming years might look like.

In order to increase your understanding of databases and client/server, you should examine the following chapters:


