Client/server (C/S) database computing is a relatively new technology that has only recently been adopted as a system architecture for the deployment of applications. C/S database computing is the wave of the 90s, and it is anticipated to continue gaining popularity. To understand the reasons behind its success, it helps to understand the other common types of database computing: mainframe and PC/file server.
Before the late 80s and early 90s, mainframe computing was about the only choice for organizations that required heavy-duty processing and support for a large number of users. Mainframes have been in existence for over 20 years, and their longevity has led to their reliability. The ability of mainframes to support a large number of concurrent users while maintaining fast database retrieval times contributed to corporate acceptance of mainframes.
Mainframe computing, also called host-based computing, refers to all processing carried out on the mainframe computer. The mainframe computer is responsible for running the Relational Database Management System (RDBMS), managing the application that is accessing the RDBMS, and handling communications between the mainframe computer and dumb terminals. A dumb terminal is about as intelligent as its name implies: it is limited to displaying text and accepting data from the user. The application does not run on the dumb terminal; instead, it runs on the mainframe and is echoed back to the user through the terminal (see Figure 1.1).
Figure 1.1.
Mainframe database computing.
The main drawback of mainframe computing is that it is very expensive. Operating a mainframe computer can run into the millions of dollars. Mainframes are expensive to operate because they require specialized operational facilities, demand extensive support, and do not use common computer components. Additionally, the idea of paying thousands of dollars to rent software that runs on the mainframe is almost inconceivable for PC users who have never used mainframe technology.
Rather than using common components, mainframes typically use hardware and software proprietary to the mainframe manufacturer. This proprietary approach can lock a customer into a limited selection of components from one vendor.
PC/file server-based computing became popular in the corporate environment during the mid to late 80s, when business users began to turn to the PC as an alternative to the mainframe. Users liked the ease with which they could develop their own applications through the use of fourth-generation languages (4GLs) such as dBASE III+. These 4GLs provided easy-to-use report writers and user-friendly programming languages.
In PC/file server computing, the PC runs both the application and the RDBMS. Users are typically connected to the file server through a LAN. The PC is responsible for RDBMS processing; the file server provides a centralized storage area for accessing shared data (see Figure 1.2).
Figure 1.2.
PC/file server database computing.
The drawback of PC/file server computing is that all RDBMS processing is done on the local PC. When a query is made to the file server, the file server does not process the query. Instead, it returns the data required to process the query. For example, when a user makes a request to view all customers in the state of Virginia, the file server might return all the records in the customer table to the local PC, and the local PC then has to extract the customers that live in Virginia. Because the RDBMS runs on the local PC and not on the server, the file server does not have the intelligence to process queries. This can result in decreased performance and increased network bottlenecks.
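The difference is easy to see in a few lines of Python. Both functions below are invented for illustration (no real product's API is shown); the point is where the filtering work happens and how much data crosses the network.

```python
# Hypothetical sketch: the table data and both functions are invented.
CUSTOMER_TABLE = [
    {"name": "Smith", "state": "VA"},
    {"name": "Jones", "state": "MD"},
    {"name": "Brown", "state": "VA"},
    # ...in practice, thousands of rows on the file server
]

def file_server_query(table, state):
    """File-server model: the entire table crosses the LAN to the PC."""
    rows_sent_over_network = list(table)        # every row is transferred
    return [r for r in rows_sent_over_network if r["state"] == state]

def client_server_query(table, state):
    """C/S model: this runs on the server; only matches travel back."""
    return [r for r in table if r["state"] == state]

print(file_server_query(CUSTOMER_TABLE, "VA"))    # the PC did the filtering
print(client_server_query(CUSTOMER_TABLE, "VA"))  # the server did the filtering
```

Both calls return the same two rows; what differs is that the first ships the whole table across the network before any filtering happens.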
PC/File Server Headaches
As a consultant, I am often called into projects that are running behind schedule and require additional resources. About two years ago, a mortgage banking corporation called me in to convert a mainframe application to the PC environment. The majority of the company's income was generated from the application I was converting. Not only was I converting their money maker; the system also had to be up and running within six weeks. The project manager decided that I should build the system using a popular PC/file server database product. The application design specified a maximum of three concurrent users, and based on the type of queries that were to be performed, I felt comfortable stating that the performance would be acceptable.
After rushing to meet my deadline, the system was implemented. Everything went smoothly until the company's business skyrocketed and more loans than anticipated had to be processed. Before I knew it, the number of users had increased to fifteen, and with fifteen users on the system, the network came to a standstill. The reason the application brought the network to a standstill is simple: in a PC/file server architecture, all database processing occurs on the local PC. When the users issued complicated queries to the server, the network jammed with data being sent back to the local workstations. Often, a single query required thousands of rows to be returned to a local PC.
In the PC/file server environment, this is the equivalent of calling a car dealership and asking how many blue pickup trucks they have in stock. To get the answer, the dealer drives every car to your house and you count the blue pickup trucks yourself. Obviously, this is not very efficient. In the C/S database computing environment, a different approach is taken: someone at the dealership counts the blue pickup trucks and passes the number back to the caller.
Eventually, the mortgage banking system was rewritten using a C/S database. Performance improved, network bottlenecks decreased, and users were happy.
C/S database computing evolved as an answer to the drawbacks of the mainframe and PC/file server computing environments. By combining the processing power of the mainframe and the flexibility and price of the PC, C/S database computing combines the best of both worlds (see Figure 1.3).
Figure 1.3.
Client/server database computing.
C/S database computing can be defined as the logical partitioning of the user interface, database management, and business logic between the client computer and the server computer. The network links each of these processes.
The client computer, also called a workstation, controls the user interface. The client is where text and images are displayed to the user and where the user inputs data. The user interface may be text based or graphical.
The server computer controls database management. The server is where data is stored, manipulated, and retrieved. In the C/S database environment, all database processing occurs on the server.
Business logic can be located on the server, on the client, or mixed between the two. This type of logic governs the processing of the application.
In the typical corporate environment, the server computer is connected to multiple client computers. The server computer is a high-powered computer dedicated to running the RDBMS. The client workstations are usually PC based. The client computer and database server communicate through a common network protocol that allows them to share information.
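This division of labor can be made concrete in a minimal Python sketch. The one-query wire protocol below is invented for illustration (a real RDBMS uses its own network protocol), and the built-in sqlite3 module merely stands in for the server's database engine; what matters is that the client sends only the query text and receives only the result rows.

```python
import socket
import sqlite3
import threading

# Hypothetical protocol: the client sends one SQL string, the server runs it
# and returns repr() of the result rows. Invented for illustration only.
HOST, PORT = "127.0.0.1", 5433
ready = threading.Event()

def server():
    db = sqlite3.connect(":memory:")            # the database lives on the server
    db.execute("CREATE TABLE customer (name TEXT, state TEXT)")
    db.executemany("INSERT INTO customer VALUES (?, ?)",
                   [("Smith", "VA"), ("Jones", "MD"), ("Brown", "VA")])
    with socket.create_server((HOST, PORT)) as srv:
        ready.set()                             # listening; let the client connect
        conn, _ = srv.accept()
        sql = conn.recv(4096).decode()          # the query text arrives
        rows = db.execute(sql).fetchall()       # all database processing happens here
        conn.sendall(repr(rows).encode())       # only the answer crosses the network
        conn.close()

def client():
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(b"SELECT name FROM customer WHERE state = 'VA'")
        print(s.recv(4096).decode())            # [('Smith',), ('Brown',)]

t = threading.Thread(target=server)
t.start()
ready.wait()
client()
t.join()
```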
Many corporations have turned to client/server database computing as their computing answer. Its popularity rests on the benefits described earlier: it combines the processing power of the mainframe with the flexibility and price of the PC, it processes queries at the server instead of shipping entire tables across the network, and it scales to support a large number of concurrent users.
Now you understand that with C/S database computing, the user interface runs on the client computer and the RDBMS runs on the server computer. A third consideration in the C/S database computing environment is the placement of business logic. As mentioned previously, business logic consists of the rules that govern the processing of the application. Business logic can be placed on the server, on the client, or mixed between the two.
A fat server locates business logic within the RDBMS on the server (see Figure 1.4). The client issues remote procedure calls to the server to execute the process. The advantage of the fat server is centralized control and decreased network traffic. Fat servers are best suited for structured and consistent business logic, such as online transaction processing (OLTP). Modern RDBMS products support fat servers through stored procedures, column rules, triggers, and other methods.
Figure 1.4.
A fat server.
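The following Python sketch illustrates the fat-server pattern on a small scale. The loan table and the business rule are invented for the example, and sqlite3 is an embedded engine rather than a true database server, but the principle carries over: the rule is defined as a trigger inside the RDBMS, so it executes centrally no matter which client issues the update.

```python
import sqlite3

# Fat-server sketch: the business rule is a trigger inside the database
# engine, so it runs centrally for every client. The loan table and the
# rule are invented; sqlite3 stands in for a real database server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE loan (loan_id INTEGER, balance REAL)")
db.execute("""
    CREATE TRIGGER no_negative_balance
    BEFORE UPDATE ON loan
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'balance may not go negative');
    END
""")
db.execute("INSERT INTO loan VALUES (1042, 350.00)")
try:
    db.execute("UPDATE loan SET balance = -10 WHERE loan_id = 1042")
except sqlite3.IntegrityError as exc:
    print("rejected inside the database:", exc)   # the rule ran server-side
```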
A fat client embeds business logic in the application at the client level (see Figure 1.5). Although a fat client is more flexible than a fat server, it increases network traffic. The fat client approach is used when business logic is loosely structured or when it is too complicated to implement at the RDBMS level. Additionally, fat client development tools, such as 4GLs, typically offer more robust programming features than do RDBMS programming tools. Decision support and ad-hoc systems are often fat client based.
Figure 1.5.
A fat client.
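For contrast, here is the fat-client version of the same invented loan example: the rows come back to the application, and the business rule (a hypothetical late-fee calculation) runs in client code rather than in the RDBMS.

```python
import sqlite3

# Fat-client counterpart of the invented loan example: raw rows travel back
# to the application, and the business rule is applied in client code.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE loan (loan_id INTEGER, balance REAL, late_fee REAL)")
db.execute("INSERT INTO loan VALUES (1042, 350.00, 0)")
db.execute("INSERT INTO loan VALUES (1043, 120.00, 0)")

rows = db.execute("SELECT loan_id, balance FROM loan").fetchall()
for loan_id, balance in rows:                  # rows cross to the client...
    if balance > 200:                          # ...where the rule is applied
        db.execute("UPDATE loan SET late_fee = ? WHERE loan_id = ?",
                   (round(balance * 0.02, 2), loan_id))
print(db.execute("SELECT loan_id, balance, late_fee FROM loan").fetchall())
```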
A mixed environment partitions business logic between the server and the client (see Figure 1.6). For practical reasons, an application may have to take this approach, and this balancing act is common in C/S database computing.
Figure 1.6.
A mixed environment.
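A small sketch of the mixed approach, reusing the invented loan example: the integrity rule lives in the database (server side) while the late-fee calculation runs in application code (client side).

```python
import sqlite3

# Mixed environment: the trigger enforces integrity inside the engine,
# while the fee calculation runs in the application. All names invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE loan (loan_id INTEGER, balance REAL, late_fee REAL)")
db.execute("""CREATE TRIGGER no_negative_balance
              BEFORE UPDATE ON loan WHEN NEW.balance < 0
              BEGIN SELECT RAISE(ABORT, 'balance may not go negative'); END""")
db.execute("INSERT INTO loan VALUES (1042, 350.00, 0)")

rows = db.execute("SELECT loan_id, balance FROM loan").fetchall()
for loan_id, balance in rows:
    db.execute("UPDATE loan SET late_fee = ? WHERE loan_id = ?",
               (round(balance * 0.02, 2), loan_id))   # client-side rule
```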
The RDBMS (Relational Database Management System) has become the standard for C/S database computing. Database software vendors and corporate IS departments have rapidly adopted the RDBMS architecture. It is based on the relational model that originated in papers published by Dr. E.F. Codd in 1969. In an RDBMS, data is organized in a row/column manner and is stored in a table. Records are called rows and fields are called columns (see Figure 1.7).
Figure 1.7.
Row and column layout in a relational model.
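The row/column layout can be demonstrated with Python's built-in sqlite3 module; the customer table and its contents below are invented for the example.

```python
import sqlite3

# Minimal illustration of the row/column layout; table and data invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (cust_id INTEGER, name TEXT, state TEXT)")
db.execute("INSERT INTO customer VALUES (1, 'Smith', 'VA')")   # one row
db.execute("INSERT INTO customer VALUES (2, 'Jones', 'MD')")   # another row
for row in db.execute("SELECT cust_id, name, state FROM customer"):
    print(row)   # each tuple is a row; each element is one column's value
```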
Data is structured using relationships among data items. A relationship is a link between tables (see Figure 1.8); relationships allow flexibility in the presentation and manipulation of data.
Figure 1.8.
Relationships among data items in a relational model.
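Continuing with sqlite3, the sketch below shows a relationship in action: cust_id is the shared key linking two invented tables, and the join follows that link to combine their data.

```python
import sqlite3

# Relationship sketch: cust_id links the two invented tables.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (cust_id INTEGER, name TEXT)")
db.execute("CREATE TABLE orders (order_id INTEGER, cust_id INTEGER, total REAL)")
db.execute("INSERT INTO customer VALUES (1, 'Smith')")
db.execute("INSERT INTO orders VALUES (500, 1, 99.95)")
rows = db.execute("""
    SELECT customer.name, orders.order_id, orders.total
    FROM customer JOIN orders ON customer.cust_id = orders.cust_id
""").fetchall()
print(rows)   # [('Smith', 500, 99.95)]
```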
The RDBMS has become the standard in client/server database computing, in large part because the relational model's row/column structure and table relationships give users a flexible, well-understood way to present and manipulate data.
The number of RDBMS vendors has increased over the years as C/S has grown in popularity. Although each vendor's database product stems from the relational model, vendors take different approaches to implementing it. These differences--combined with price, performance, operating systems supported, and a host of other items--make choosing the right RDBMS difficult. Following is a brief summary of popular RDBMS vendors:
Vendor: Microsoft
Product: SQL Server
The SQL Server product was originally developed by Sybase in the mid-1980s. Microsoft partnered with Sybase and, in 1988, released SQL Server for OS/2. In 1993, Microsoft shipped the NT version of SQL Server. In 1994, Microsoft and Sybase ended their partnership. Microsoft's SQL Server has grown to be a huge success in the RDBMS market. Microsoft has been successful in combining performance, support for multiple platforms, and ease of use. When SQL Server shipped in 1993, it set a new price/performance TPC benchmark, and it has continued to be a leader in price/performance benchmarks since. Support for multiple platforms is accomplished through Microsoft's NT operating system, which runs on Intel, RISC, and other chip sets. Ease of use is accomplished through SQL Server's graphical management tools.
Vendor: Computer Associates
Product: INGRES
The INGRES database software was one of the original RDBMS products to be offered. INGRES supports the OS/2, UNIX, and VAX/VMS platforms. Computer Associates was the first company to provide cost-based optimization, which has become an industry standard. Distributed processing support is available as an INGRES add-on product.
Vendor: IBM
Product: DB2
DB2 is IBM's mainframe relational database that offers impressive processing power. DB2's support for massive databases and a large number of concurrent users gained it corporate acceptance during the 1980s. IBM is the original developer of the relational model and SQL.
Vendor: Centura Technologies
Product: SQL Base
Centura introduced SQL Base for the PC/DOS platform in 1986. Since then, Centura has added support for the NT, OS/2, Novell NLM, and UNIX platforms. Price, fully scrollable cursors, and declarative referential integrity help differentiate SQL Base from its competitors.
Vendor: INFORMIX Software
Product: INFORMIX OnLine
INFORMIX Software was the first vendor to release a UNIX RDBMS. Although available on other operating systems, INFORMIX for UNIX is the company's most popular offering. INFORMIX offers high-performance transaction processing, advanced security, and distributed processing capabilities.
Vendor: Oracle
Product: Oracle
Oracle is one of the largest and most popular vendors in the RDBMS industry. They have the honor of being the first company to offer an RDBMS for commercial use. Oracle's portability to practically every major hardware and operating system platform is impressive. This means that Oracle code written on a VAX/VMS platform can easily be ported to run on a Macintosh platform. Currently, Oracle supports over 80 different hardware platforms.
Vendor: Sybase
Product: System 11
Sybase originally released SQL Server in the mid-1980s. Its latest product is known as System 11, which is designed to run on UNIX, Novell NLM, NT, and VMS platforms. Sybase has proven its innovativeness by being one of the first companies to offer features such as triggers and symmetrical multiprocessing support. UNIX is Sybase's predominant platform. Reliability, performance, and scalability have enabled Sybase to become one of the most respected RDBMS vendors in the industry.
Vendor: XDB Systems
Product: XDB-Server
XDB-Server's strength lies in its 100 percent DB2 compatibility. Developers can downsize to XDB-Server from IBM's DB2 or can upsize from XDB-Server to DB2. In addition to full DB2 support, the product also offers SQL/DS and ANSI level 2 compatibility. XDB-Server is available for DOS, OS/2, and Novell.
An enterprise network links multiple information servers so that they can be accessed and managed from a centralized source (see Figure 1.9). In the 1980s and early 1990s, distributed computing grew in popularity. Distributed computing physically moved computer systems closer to the source of the information. In doing so, distributed systems are more widely dispersed, geographically, than are their stay-at-home mainframe counterparts.
Figure 1.9.
An enterprise network.
With distributed computing comes decentralized control; with decentralized control comes increased difficulty in managing and accessing data among distributed systems. To solve this problem, tools such as SQL Server's Enterprise Manager have been developed to manage the enterprise network.
In this chapter, you learned about mainframe, PC/file server, and C/S database computing; the placement of business logic in fat servers, fat clients, and mixed environments; the relational model and the RDBMS; popular RDBMS vendors; and enterprise networks.
The next chapter discusses the role of the database administrator in RDBMS computing.