

freelist(s) associated with the segment as a whole, not to the one(s) in the freelist group of the instance that caused the new extent to be added.

Any instance that runs out of space in its own extents can use blocks in such an extent. This can result in rows being INSERTed into the same block by different instances, causing pinging between these instances. To avoid this, you need to create the segment with an initial extent of minimal size and a maximum extent limit set to one plus the number of freelist groups. This prevents Oracle from assigning additional space automatically.

For the segment to grow, you must increase the maximum extent limit in the storage clause each time you need to allocate another extent manually. The statements in Listing 27.4 build a table with freelist groups and extents for two separate instances.

Listing 27.4 Creating a Table for Two Instances to Share
CREATE TABLE my_table (id NUMBER(10), . . .)
  TABLESPACE user_data
  STORAGE (INITIAL 10K MAXEXTENTS 3 FREELIST GROUPS 2);

ALTER TABLE my_table
  ALLOCATE EXTENT (SIZE 1M DATAFILE 'user_data_instance1_01.dbf' INSTANCE 1);

ALTER TABLE my_table
  ALLOCATE EXTENT (SIZE 1M DATAFILE 'user_data_instance2_01.dbf' INSTANCE 2);
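Because MAXEXTENTS was set to one plus the number of freelist groups, the table cannot grow until you raise that limit. The following statements are a sketch of the manual growth step described above; the new datafile name and sizes are illustrative only, not part of Listing 27.4:

```sql
-- Raise the extent limit by one before allocating the next extent manually;
-- otherwise Oracle would reject the allocation.
ALTER TABLE my_table STORAGE (MAXEXTENTS 4);

-- Place the new extent in a datafile used by instance 1 (hypothetical name)
-- so the blocks join that instance's freelist group.
ALTER TABLE my_table
  ALLOCATE EXTENT (SIZE 1M DATAFILE 'user_data_instance1_02.dbf' INSTANCE 1);
```

Repeating this pair of statements for each new extent keeps every block assigned to a specific instance's freelist group, preserving the benefit of the original design.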

When you perform manual extent allocation, you should include the DATAFILE option along with the INSTANCE option of the ALTER TABLE...ALLOCATE EXTENT command, as shown above. This not only associates the new blocks with the desired freelist group, but enables you to place each extent into a datafile specific for the instance using that freelist group. By doing this, you should be able to reduce the number of hash locks you need to cover the datafiles because the INSERTs, at least, should not require block sharing. If your applications and users also perform further work on stored rows, such as UPDATEs and DELETEs, using the same instance as for the original INSERTs, then the locks acquired for the INSERTs should still be in place for the additional DML.

If you rely on fine-grained locks, you can still benefit from fewer lock allocations by using freelist groups. The group option of the GC_FILES_TO_LOCKS parameter enables Oracle to acquire a single fine-grained lock for a group of blocks added to a freelist, just as it does for hash locks. The blocks are added to a freelist in the freelist group for the instance performing the INSERTs.
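As an illustration, the following init.ora fragment is a sketch of the group option; the file numbers, lock count, and blocking factor are invented for the example and are not recommendations:

```
# Files 5 and 6 (assumed to hold the table's extents) share 500 locks; the
# "!8" suffix makes each lock cover a group of 8 consecutive blocks, so the
# blocks placed on one instance's freelist group tend to fall under one lock.
GC_FILES_TO_LOCKS = "5-6=500!8"
```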

Determining When Parallel Server Can Solve a Business Need

To determine if Parallel Server is a good fit for your failover needs, you need to compare its costs and complexity with other options. If you need your database to be available 24 hours a day, 365 days a year, you may be a candidate for Parallel Server. As long as at least one instance remains active, your users can continue to access your database.

You should consider Parallel Server as a means to grow your applications, database size, or user community.

The following two sections highlight the work needed to build available and scalable applications on Parallel Server. By working through the steps described in either section, you should be able to decide whether your application is suited for Parallel Server. You will probably find some issues that fit well with the architecture and some that militate against it. To make the best decision, you should complete a similar analysis of the alternative strategies and compare the pros and cons of them all.

Designing a Parallel Database for Failover

The simplest failover setup consists of a two-node cluster with an Oracle instance running on each node against a database stored on a shared disk system. By default, all users connect to the same instance, with a secondary path defined to the alternate instance. In client-server configurations, SQL*Net can be configured to provide the alternate, failover address. In three- or n-tier architectures, a TP-style monitor can route users to the active instance.

Although this model can be extended to include multiple nodes and instances, there is a point at which a single failover node and instance may not offer sufficient protection against failures. If two instances were to fail at the same time, which becomes increasingly likely as the number of nodes grows, a single failover instance may not be able to handle the workload from both. Deciding which users to move to which failover node is a subset of the partitioning activity you would need to undertake for scaling your applications, as discussed in a later section.

Whether you have one or many failover instances, you still have to build your TNSNAMES.ORA file to identify the primary and failover instances, or else code the routing scheme in a middle-tier component such as a TP monitor. For the former option, you need to use an ADDRESS_LIST in the DESCRIPTION clause of TNSNAMES.ORA, with the instances sharing the same name. Listing 27.5 shows an example TNSNAMES.ORA entry.

Listing 27.5 Identifying Primary and Failover Nodes in a TNSNAMES.ORA Entry
PROD.world=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=
        (COMMUNITY=tcp.world)
        (PROTOCOL=TCP)
        (HOST=apple)
        (PORT=1521)
      )
      (ADDRESS=
        (COMMUNITY=tcp.world)
        (PROTOCOL=TCP)
        (HOST=pear)
        (PORT=1521)
      )
    )
    (CONNECT_DATA=
      (SID=PROD)
    )
  )

Oracle8 adds to the failover capability of Parallel Server provided in Version 7.3. First, you have the option of starting a parallel dummy connection on the failover instance for each user connection on the primary database. If the primary fails, reconnection to the standby is faster than it would be if the pre-spawned connection had not been created. You should consider this option if you don't intend to allow any other connections to the standby instance while the primary is running.

Second, the Oracle Call Interface (OCI) has been extended to allow a user disconnected from the primary instance by its failure to be connected to the failover instance automatically. In addition, if the user was in the process of executing a query, the query can be restarted automatically following such a reconnection. The TNSNAMES.ORA file contains a FAILOVER_MODE clause to identify what type of auto-reconnection is required. The new keywords are shown in Table 27.3, and a sample TNSNAMES.ORA entry is shown in Listing 27.6.

Table 27.3 Keywords for TNSNAMES.ORA to Provide Reconnection Options
Keyword Name   Function
BASIC          Do not use pre-created connections on the alternate instance.
PRECONNECT     Use an existing connection when switching to the alternate instance.
SELECT         Continue any interrupted query after reconnecting to the alternate instance.
SESSION        Reconnect to the alternate instance following instance failure.
NONE           Do not attempt an automatic reconnection following primary instance failure.
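To relate these keywords to the clause itself: the reconnection type (SESSION, SELECT, or NONE) and the connection method (BASIC or PRECONNECT) appear as separate attributes of FAILOVER_MODE. The fragment below is a sketch of how such a clause might look inside the CONNECT_DATA section of a TNSNAMES.ORA entry; the SID is carried over from the earlier example, and the exact nesting may vary between releases:

```
(CONNECT_DATA=
  (SID=PROD)
  (FAILOVER_MODE=
    (TYPE=SELECT)        # resume interrupted queries after reconnection
    (METHOD=PRECONNECT)  # use the pre-created connection on the standby
  )
)
```

Choosing TYPE=SELECT costs some overhead on the client, which must track each query's progress, so use SESSION where query restart is not needed.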


Listing 27.6 Defining Failover Nodes in TNSNAMES.ORA
(DESCRIPTION=
  (ADDRESS=
    (PROTOCOL=TCP)
    (HOST=apple)
    (PORT=1521)


                                                             continues
