Listing 27.6 Continued

    (CONNECT_DATA=
      (SIDGRP=fruit_group)
      (SERVER=DEDICATED)
    )
    (FAILOVER_MODE=
      (TYPE=select)
      (METHOD=basic)
    )
  )
If you decide to take advantage of the failover options offered by OCI, you should be aware that they can also be used to implement a soft failover. If you need to bring down an instance, you can move the active connections to their failover instance transparently to the user. Use the command:
SHUTDOWN FAILOVER
to stop the current transactions and alert the application of the shutdown with an error message. As with an unplanned failover, this technique relies on the OCI program trapping the error and invoking the failover connection. You can also fail over a single connection using the command:
ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' POST_TRANSACTION;
Again, this command signals the application with an error that is trapped and managed by the OCI code, causing the user to reconnect to the failover instance when the current transaction completes.
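As a minimal sketch of this sequence, you can look up the target connection in V$SESSION and then disconnect it once its current transaction commits. The account name PAYROLL_USER and the sid and serial# values shown here are illustrative assumptions, not values from the example application:

REM Identify the session to be moved (PAYROLL_USER is a hypothetical account)
SELECT sid, serial#, username
  FROM v$session
 WHERE username = 'PAYROLL_USER';

REM Assuming the query returned sid 17 and serial# 4053, disconnect the
REM session after its current transaction completes
ALTER SYSTEM DISCONNECT SESSION '17,4053' POST_TRANSACTION;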
Other than setting up the components necessary to redirect connections to the alternate instance, you do not need to make any database design changes to support a simple failover option. With only one node active at a time, you do not have to plan for the sharing of blocks or for the PCM locks required to manage it. A single lock for the whole database would, in theory, be sufficient. Due to internal algorithms, however, you will never be able to reduce the number of locks quite this far. By setting the parameters as follows:
GC_DB_LOCKS = 1
GC_RELEASABLE_LOCKS = 0
and leaving GC_FILES_TO_LOCKS at its default value of a null string, you enable Oracle to assign the minimum number of locks possible.
Scalability refers to the ability of a system to grow as demands grow, whether that means supporting more users, handling more data, or performing more complex tasks.
The primary goal of the design phase in parallel server is to minimize contention for PCM locks between instances. This is easy to achieve for query-intensive databases because all the instances that need a lock can acquire it concurrently in shared mode. For transaction-intensive databases, particularly Online Transaction Processing (OLTP) systems, the challenge is greater. You have to partition the database so that each instance typically uses one subset of blocks while the other instances use their own distinct subsets. A secondary goal is to spread the workload across the instances as evenly as possible so that you are not overworking the resources of one server while underutilizing those of another.
You have a number of options for partitioning your database, including application, functional, departmental/line-of-business, physical table, and transaction partitioning. The following sections examine these options.
Application partitioning and functional partitioning are very similar approaches with the significant difference being the business level at which you begin the partitioning.
Application partitioning involves running different applications, such as Financials and Manufacturing, on different instances. Generally, applications use completely different sets of tables, so there is little need to combine them in the same database. If you do have some data in common between applications, then you might want to run both in the same database. To design this database for parallel server, you use the same approach as for functional partitioning.
Functional partitioning involves separating the distinct functions of an application. You could, for example, separate a Human Resource application into four functions: Recruitment, Employment History, Payroll, and Benefits. Such an application may need to be run on a Parallel Server if the company is large and offers its employees and recruits telephone and Web access to its database, resulting in a large database with high transaction rates.
For both application and functional partitioning, you need to consider the data implications rather than the functional definitions. This requires that you analyze how each table in the database is used by each application or function. To simplify this task, you can record your findings in tables. Using the Human Resource example, you can proceed by first creating a table similar to Table 27.4, which identifies the tables that are used by each function.
Table 27.4 Partitioning Tables: Table Use by Function

Recruitment | Employment History | Payroll | Benefits
Table 1     | Table 1            | Table 1 | Table 1
Table 2     | Table 2            | Table 3 | Table 2
Table 3     | Table 4            | Table 5 | Table 4
Table 6     |                    | Table 7 | Table 5
You next record the overlaps. To simplify the example, only the Payroll and Benefits sets of tables will be considered. The next iteration of the table is shown in Table 27.5.
Table 27.5 Partitioning Tables: Table Overlaps by Function

Payroll | Overlaps | Benefits
Table 3 | Table 1  | Table 2
Table 7 | Table 5  | Table 4
If you discover many overlaps in your application, you may want to reexamine the table definitions to determine if they can be redefined. Your objective should be to minimize the number of tables in the Overlaps column. If you are fortunate enough to have no table overlaps, you have a perfectly partitionable application. In this case, simply place the tables for the different functions into separate tablespaces and assign single hash locks to each of the files involved, or even a single hash lock to the set of files associated with each function's tablespace or tablespaces. Users of each function will connect to their appropriate instance.
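As a sketch of such a layout, suppose the Payroll tables are loaded into their own tablespace whose single datafile is file 5, and the Benefits tables into a tablespace whose datafiles are files 6 and 7; the tablespace names and file numbers here are assumptions for illustration only. You can confirm the actual file numbers from DBA_DATA_FILES:

REM Confirm which file numbers belong to each function's tablespace
SELECT tablespace_name, file_id, file_name
  FROM dba_data_files
 WHERE tablespace_name IN ('PAYROLL_DATA', 'BENEFITS_DATA');

With the file numbers confirmed, a single hash lock can cover each function's files in the initialization file:

# init.ora: one hash lock for the Payroll file, one covering both Benefits files
GC_FILES_TO_LOCKS = "5=1:6-7=1"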
For any tables that remain in the Overlaps column, your next step is to examine and record the types of activity that each instance performs on the table and the frequency of the accesses. Table 27.6 shows the values for the Human Resource example.
Table 27.6 Partitioning Tables: Overlap Table Activity

Overlap Table Name | Payroll Access Type | Volume     | Benefits Access Type | Volume
Table 1            | SELECT              | 100/second | SELECT               | 300/second
Table 5            | SELECT              | 200/second | SELECT               | 10/second
                   | INSERT              | 10/second  | INSERT               | 0
                   | UPDATE              | 20/second  | UPDATE               | 200/second
You should now be ready to design your tablespaces to reduce PCM lock contention and block pinging. The tables that do not appear in the Overlaps column can be placed in their own tablespaces and need only minimal PCM locks. The files in these tablespaces can be covered with hash locks rather than fine-grain locks because their contents are not shared between instances. The overlap tables that are read-only (that is, all instances perform only SELECTs on them) can also be placed in their own tablespace or tablespaces. You should alter these tablespaces to READ ONLY after loading the data because Oracle does not need any PCM locks for data that cannot be updated. Leave the datafiles of READ ONLY tablespaces out of the GC_FILES_TO_LOCKS parameter to avoid assigning locks unnecessarily.
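A minimal sketch of this step, assuming the shared read-only tables were loaded into a hypothetical tablespace named HR_LOOKUP:

REM Once the data is loaded, make the tablespace read-only so that
REM Oracle needs no PCM locks for its blocks
ALTER TABLESPACE hr_lookup READ ONLY;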
You now have to deal with the tables that are used by more than one function and modified in at least one of them. First, you should categorize these into low-, medium-, and high-activity tables.