The following are recommendations for configuring the checkpointing mechanism:
CAUTION
Very heavy OLTP environments will need many large redo logs to support this type of configuration. Otherwise, you may need to reduce your checkpointing activity, perhaps all the way down to only once per log switch event, occurring every 30-60 minutes.
CAUTION
If using ARCHIVELOG mode, ensure that your longest running transaction, the one generating the most redo information, will not consume the total of all your available redo logs. Otherwise, whenever redo log switching outstrips archiving, your database will halt!
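Both cautions come down to checkpoint and log switch frequency, which are governed by two init.ora parameters. The following is a minimal sketch only; the parameter names are real, but the values are illustrative starting points, not recommendations:

# Checkpoint only at log switch: set the interval (measured in operating
# system blocks) larger than any online redo log member, and disable
# time-based checkpoints. Values are illustrative.
log_checkpoint_interval = 999999999
log_checkpoint_timeout  = 0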
How do you monitor and tune redo wait events and redo latch contention issues? Examine the Statistic section of report.txt for `redo log space requests' and the System wide wait events section of report.txt for `log file space/switch'. These events can also be reviewed using the V$SYSSTAT and V$SYSTEM_EVENT views, respectively.
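As a sketch of the dynamic-view approach (assuming SELECT access to the V$ views; exact event names vary slightly by release, hence the LIKE patterns):

SELECT name, value
  FROM v$sysstat
 WHERE name = 'redo log space requests';

SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event LIKE 'log file switch%'
    OR event LIKE 'log file space%';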
NOTE |
Guideline: If either of these events is much greater than 0, this indicates that LOG_BUFFER is sized too small. Increase it by 5 percent or more, let your application run for some time, and query these measures again. Repeat this process until these events are at or very near zero.n |
Use the statistic `redo size' Total from the Statistic section of report.txt, or the VALUE from V$SYSSTAT to help you determine how to size your redo appropriately. This statistic is the total redo information generated for that sample interval.
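For example, sampling `redo size' at the start and end of a representative interval gives a redo generation rate, from which a log size for a target switch frequency can be estimated. A hedged sketch (the arithmetic in the comment uses illustrative numbers only):

SELECT name, value
  FROM v$sysstat
 WHERE name = 'redo size';

-- If the value grows by 180MB over a 60-minute sample, the generation rate
-- is roughly 3MB per minute; for a log switch every 30 minutes, each online
-- redo log would need to be roughly 90MB.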
TIP
Reduce the amount of (unnecessary) redo information generated by using SQL*Loader in Direct Path mode. If you're archiving, also specify UNRECOVERABLE. If you have PQO, specify UNRECOVERABLE for your parallel table and index creates.
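A hedged sketch of the PQO portion of this tip (object names and the degree of parallelism are hypothetical); Direct Path mode itself is requested on the SQL*Loader command line with direct=true:

CREATE TABLE sales_copy
  UNRECOVERABLE
  PARALLEL (DEGREE 4)
  AS SELECT * FROM sales;

CREATE INDEX sales_copy_dt_idx ON sales_copy (sale_date)
  UNRECOVERABLE
  PARALLEL (DEGREE 4);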
Redo copy latches, like all latch (or `spin lock') structures, are memory-based mechanisms that provide `no-queue' locking of the redo LOG_BUFFER. There is actually only one redo allocation latch; a process may hold it exclusively (described as a WAIT or WILLING_TO_WAIT request or status) while it writes and others wait, or it may release it (described as a NOWAIT or IMMEDIATE request or status) and hold a redo `copy' latch while it writes, which allows another process to own the one redo allocation latch. How can you measure the contention for this resource? Use the V$LATCH view and compute the MISSES/GETS ratio where NAME is like `redo%'. Similarly, compute the MISSES/GETS and NOWAIT_MISSES/NOWAIT_GETS ratios from the two latch sections of report.txt. Additionally, observe the HIT_RATIO and NOWAIT_HIT_RATIO figures, which are the complements of the misses-to-gets ratios just computed.
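A sketch of the V$LATCH calculation (in the view, the no-wait figures appear as IMMEDIATE_GETS and IMMEDIATE_MISSES; the DECODEs simply guard against dividing by zero):

SELECT name,
       ROUND(misses / DECODE(gets, 0, 1, gets), 4) miss_ratio,
       ROUND(immediate_misses /
             DECODE(immediate_gets, 0, 1, immediate_gets), 4) nowait_miss_ratio
  FROM v$latch
 WHERE name LIKE 'redo%';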
NOTE
Guideline: If any of the misses-to-gets ratios are much greater than 1 percent (.01) or either of the two hit ratios are much less than 99 percent (.99), there may be too much redo latch contention. To help remedy this, increase the init.ora parameter LOG_SIMULTANEOUS_COPIES, which is set by default to the number of CPUs, up to twice the number of CPUs. To reduce locking times and promote Shortest Job First (SJF) allocation, which is optimal, reduce the size of the init.ora parameter LOG_SMALL_ENTRY_MAX_SIZE. It represents a threshold. If a process's redo information is less than this (that is, it generates relatively "small" redo log entries), it is allowed to exclusively hold the one redo allocation latch (and quickly write its information and free up the latch faster than larger processes would, thereby increasing concurrency). Examine its value using SHOW PARAMETER in SVRMGRL. Reduce it by a percentage (5 percent or 10 percent) at a time. Repeat the process until the amount of redo contention is acceptable.
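A hedged init.ora sketch of these two parameters, assuming a 4-CPU host; the values shown are illustrative starting points only, since defaults vary by platform and release:

log_simultaneous_copies  = 8     # up to 2 x CPU count, per the guideline above
log_small_entry_max_size = 72    # reduce from the current value in 5-10 percent steps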
In addition to the new Oracle8 index features covered in Chapter 30, such as Index-Only Tables and Equi-Partitioned Objects, there are some other new Oracle8 features that offer opportunities for I/O performance gains. These include Partition-Extended Table Names and Direct Load INSERTs.
Table partitioning, as discussed with regard to Equi-Partitioned Objects (tables and indexes, equally partitioned along the same value sets), offers an alternative to partition views. Partition views using PQO must be manually created by horizontally fragmenting a single, long table into many smaller tables, creating a view to unite them all, creating constraints on the value sets within each smaller table, and setting the init.ora parameter PARTITION_VIEW_ENABLED to TRUE. Partitioned tables are created by partitioning on a chosen column. However, rather than having a view made up of a union of tables, you have a table made up of a union of partitions. In addition, you can reference the partitions directly, with certain restrictions:
SQL> SELECT * FROM T1 PARTITION (P1);
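A hedged sketch of the kind of range-partitioned table that this SELECT assumes (the column, partition bounds, and tablespace names are hypothetical):

CREATE TABLE t1
  ( sale_id   NUMBER,
    sale_date DATE )
  PARTITION BY RANGE (sale_date)
  ( PARTITION p1 VALUES LESS THAN (TO_DATE('01-JAN-1998','DD-MON-YYYY'))
      TABLESPACE ts_p1,
    PARTITION p2 VALUES LESS THAN (MAXVALUE)
      TABLESPACE ts_p2 );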
In addition, Oracle8 provides the capability of migrating previously created Oracle7 partition views to Oracle8 partitioned tables.
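One hedged migration path (object names are hypothetical) is to create an empty partitioned table whose partitions match the old partition view's value sets, then swap each underlying table into its matching partition; EXCHANGE PARTITION is a data-dictionary operation, so no rows are physically moved:

ALTER TABLE sales_part
  EXCHANGE PARTITION p_1997 WITH TABLE sales_1997
  INCLUDING INDEXES
  WITH VALIDATION;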
With Direct Load INSERTs, INSERT statements can take advantage of the same Direct Path capability as SQL*Loader, which bypasses the database buffer cache, generates no redo information, and writes directly to the datafiles. Furthermore, in Oracle8, you can put a table, an index, or a tablespace in No-logging mode. This will additionally enhance your Direct Load INSERTs. Direct Load INSERTs trade off space for performance because they insert rows above the HWM (high-water mark) of a segment. This wastes space, but it is not a major issue if a segment is often close to full, does not experience major grow-shrink patterns, and (especially) has a tablespace to itself. For hot tables, this capability might be used in conjunction with planned, periodic reorganization.
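A minimal sketch of the combination described above (table names are hypothetical); note that after a direct-load INSERT the session must COMMIT before it can query the table again:

ALTER TABLE sales_hist NOLOGGING;

INSERT /*+ APPEND */ INTO sales_hist
  SELECT * FROM sales_staging;

COMMIT;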