APPENDIX A
Oracle on UNIX

In this appendix

Solaris
A UNIX Primer for Oracle DBAs
The SA and DBA Configuration on UNIX

Oracle runs on many platforms, primarily UNIX (in most of its incarnations), VMS, NT, and Novell. Oracle has also been moving into the relational mainframe market, which has always been dominated by DB2/MVS. However, Oracle's presence has always been, and still remains, strongest on UNIX; hence, this UNIX-specific appendix.

Solaris

Solaris is currently Oracle's number one porting platform. Solaris, the modern Sun operating system, is the successor to SunOS. Since version 2.0, Solaris has been predominantly System V Release 4 (SVR4) based and compliant with the numerous UNIX standards that proliferated in the early '90s. At the time of this writing, Solaris is up to version 2.6. SunOS, on the other hand, a long-time favorite of the academic and scientific communities, was Berkeley Software Distribution (BSD) based. Most modern UNIX operating systems today are largely SVR4 based, except for some particularly proprietary High Performance Computing (HPC), Massively Parallel Processor (MPP) machines, BSD itself, and some shareware operating systems such as Linux. This means that, believe it or not, UNIX has become more standardized over the years. This is good news, because UNIX has long been divided along the major System V and BSD fronts, further divided down vendor lines, and further still divided by the actual shells (command interpreters). Aside from some rare exceptions, today the shell differences are what an Oracle DBA worries about most often. Because Solaris is currently the market-leading UNIX operating system and Oracle's number one port, this appendix uses mainly Solaris examples to illustrate the general UNIX issues.

A UNIX Primer for Oracle DBAs

Before launching too quickly into the more advanced Oracle on UNIX topics, it is useful to review some UNIX basics that are often used by Oracle DBAs working on UNIX platforms. Skip this section if you are a UNIX veteran; otherwise please read it before continuing.

Shells and Process Limits

As you will see in the SA and DBA setup, there are three main UNIX command interpreters, or shells: Bourne, Korn, and C. These shells are not only interactive command interpreters but are also used to write noninteractive programs called scripts. The Bourne shell was the first UNIX shell, and its backward compatibility makes it the shell of choice for portability. However, it lacks several important features, such as job control and command-line history. Next came the C shell, developed with BSD UNIX. It offered a simplified, C-like command language, job control, and other features. Last came the Korn shell, which is a superset of the Bourne shell; hence, Korn can run any Bourne or Korn shell script. Korn also incorporated many C shell features, such as job control. The C shell remains the predominant user choice for interactive work, although administrators tend to use Bourne more often. Korn, however, gained some popularity with the POSIX standard.

In any case, because Oracle runs so many UNIX processes, many Oracle environment configurations are shell specific. A good example is process limits. In the C shell, process resources are limited by using the limit/unlimit commands. In Korn or Bourne, they are limited by using the ulimit command. Remember, you use these commands when logged in as the oracle user. For example, in C shell, you might want to unlimit the number of file descriptors by doing the following:

% limit descriptors
descriptors     1024
% limit descriptors unlimited
% limit descriptors
descriptors     unlimited

This is similar in the Korn/Bourne shells, except you use a different command (ulimit) and different options. See the man pages for csh, ksh, or sh for further information.
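For instance, a roughly equivalent Korn shell session might look like this (the 4096 value is arbitrary and cannot exceed the hard limit reported by ulimit -Hn):

$ ulimit -n
1024
$ ulimit -n 4096
$ ulimit -n
4096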

Soft and Hard Links

A link in UNIX is a filename that contains nothing more than a pointer to the actual file. That is, a link is an alternate name for the same file. With a hard link, you see only the filenames and cannot tell if indeed two files are the same, unless you inspect the inodes:

% ls
file1
% ln file1 file2
% ls -l file2
-rwxr-xr-x   2 user1    group1     10000 file2
% ls -i
1234 file1   1234 file2

With a soft link, on the other hand, you can see the link you make:

% ls
file1 file2
% ln -s file1 file3
% ls -l file3
lrwxrwxrwx   1 user1    group1         5 file3 -> file1

In general, for this reason, it is better to use soft links: unless your system is well documented, you will likely forget which files are hard linked. It's not that you can't figure it out again, but the maintenance of doing so outweighs almost any benefit as your system grows. See the man page for ls for further information.
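If you do need to rediscover hard links, the inode number is the key. A short sketch (the inode number 1234 is illustrative):

% ls -i file1
1234 file1
% find . -inum 1234 -print
./file1
./file2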

Named Pipes and Compression

A pipe in UNIX connects the standard output (file descriptor 1, stdout) of one program to the standard input (file descriptor 0, stdin) of another. Hence, multiple commands can be "piped" together to form a pipeline. In the following example, the ls (list directory contents) command is run first, and its output is piped to lp (the print client) to print it:

% ls
file1 file2 file3
% ls | lp
request id is printerque1-42 (standard input)

Notice that to pipe programs together, you use a vertical bar (|). A named pipe in UNIX is a user-defined filename whose purpose is to accept input on stdin and produce output on stdout; it is a passthrough with a name. The named pipe is memory resident. For Oracle DBAs, the most useful example is

% mknod pipe1 p
% exp scott/tiger file=pipe1 &
% dd if=pipe1 | compress | dd of=/dev/rmt/0

This series of commands creates the named pipe pipe1; starts an export (exp, shown here with the demo scott/tiger account) writing to that pipe in the background; and then reads the pipe, compresses the stream, and sends it to tape. You could of course export directly to tape, or export to a file and then compress and copy it to tape, but this is the only way to get a compressed export to tape without first storing it on disk. With regard to Oracle, this is extremely useful for those who must export very large tables to tape and have little extra disk space to use as a staging area.
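The same technique works in reverse for imports. Here is a sketch (the tape device and the demo scott/tiger account are again illustrative):

% mknod pipe1 p
% dd if=/dev/rmt/0 | zcat > pipe1 &
% imp scott/tiger file=pipe1

This reads the tape, uncompresses the stream into the pipe, and lets imp consume it, again without a disk staging area.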

Temporary Directories

In UNIX, there are at least three kinds of temporary directories: system volatile, user-defined volatile, and system nonvolatile. The /tmp directory is system volatile, any tmpfs directory is user-defined volatile, and the /var/tmp directory is system nonvolatile. The commonly known one is /tmp. The /tmp directory is mounted to the UNIX virtual memory area known as swap; it is a special kind of temporary file system (tmpfs) known as swapfs, and its contents are cleaned out on reboot. An SA might create other, nonswap directories that are tmpfs. These are memory-based file systems and, as mentioned, volatile: their contents are lost at shutdown or power loss. However, these special file systems offer increased I/O performance, because reads and writes incur less per-file overhead than they would on a normal UNIX file system (ufs). Performance is further enhanced because, at any given time, you are reading from or writing to either main memory or virtual memory. Obviously, main memory is fastest, but even virtual memory is faster overall than ordinary disk, because it is cached in memory. The last type of temporary directory in UNIX is /var/tmp (Solaris), sometimes /usr/tmp. This directory is typically smaller than /tmp but is nonvolatile; that is, it does not lose its contents at shutdown or power loss. The /var/tmp directory is used for many things. For example, it is the default sorting location for the UNIX sort program, and it stores temporarily buffered editor (vi) files. The main thing is that it retains its contents, unless a program cleans up after itself.
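You can see the swap-backed nature of /tmp with df. A sample (the sizes shown are illustrative):

% df -k /tmp
Filesystem            kbytes    used   avail capacity  Mounted on
swap                  524288    1040  523248     1%    /tmp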

Oracle uses these directories for various purposes. For example, SQL*Net listener files are found in /var/tmp/oracle, and temporary installation files are sometimes stored in the /tmp directory.

The SA and DBA Configuration on UNIX

The DBA should actually know quite a bit about UNIX System Administration (SA) in order to effectively manage an Oracle RDBMS and instance/database on a UNIX system. In many companies, the SA and DBA were one and the same. For example, I was a UNIX SA, a Sybase DBA, and an Oracle DBA. Other companies, more often than not, separate the SA and DBA jobs. Although this provides more jobs and truly does enable technical people to specialize fully in one or the other, it also winds up being a largely segregated working situation, especially if the SA and DBA work for different bosses. In this type of situation, clear, concise, and timely communication is of the utmost importance. In any case, I encourage DBAs to learn as much as possible about their particular variant of the UNIX operating system, specifically basic SA tasks. This means having root access. You should almost be able to act as a backup for the SA! This extra effort will reward you by letting you take full advantage of your operating system with the Oracle RDBMS. Many of the following issues are covered in your Operating System-Specific Installation and Configuration Guide (ICG) or Server Administrator's Reference Guide (SARG). These are the purple-striped hard-copy books; you can also use the online documentation.

Setting Up the dba Group and OPS$ Logins

In UNIX, in order to Connect Internal (in svrmgrl), you must be a member of the UNIX dba group, specified in /etc/group. An example entry in /etc/group might look like this:

dba:*:500:oracle,jdoe,jsmith,bfong,ppage

In order to add this group, you must be the root user. Once done, to connect with DBA privileges without a password, you can do either

SVRMGRL> CONNECT INTERNAL;

or

SVRMGRL> CONNECT / AS SYSDBA;

The latter is preferred and the former obsolete, although both currently work. A related mechanism for any Oracle user account is OPS$. This enables the user to use the same login that he or she has at the UNIX level for the Oracle account. A recommendation is to set the init.ora parameter OS_AUTHENT_PREFIX="" (the null string) so that you can use names such as jdoe and not OPS$jdoe. Remember, on UNIX, everything is case sensitive, so use the same case everywhere. To create your Oracle user, do the following:

SQL> CREATE USER jdoe
    2> IDENTIFIED EXTERNALLY;

Of course, you would probably also specify tablespace defaults, quotas, and maybe a profile. If the user already exists, use an ALTER rather than a CREATE. There are some SQL*Net catches, however. OPS$ logins are not supported over SQL*Net by default. To enable them, set the init.ora parameter REMOTE_OS_AUTHENT=TRUE. Also, the daemon user must exist in /etc/passwd, and it must not be an OPS$ login.
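For example, a fuller version of the preceding CREATE USER (or an ALTER USER for an existing account) might look like this; the tablespace, quota, and profile names are illustrative:

SQL> CREATE USER jdoe
    2> IDENTIFIED EXTERNALLY
    3> DEFAULT TABLESPACE users
    4> TEMPORARY TABLESPACE temp
    5> QUOTA 20M ON users
    6> PROFILE app_user;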

Using the oratab File and dbstart/dbshut Scripts

When you finish running the installer, one of the most important post-installation steps is running the root.sh script (as root, of course). When it completes, you will have a /var/opt/oracle/oratab (or /etc/oratab, on some platforms) file, which is a list of Oracle SIDs and some other information. An installation with two (instance) entries might look like this:

SID1:/dir1/dir2/dir3/oracle:Y
SID2:/dir1/dir2/dir3/oracle:N

where the Y specifies that, Yes, this instance (SID) should be started when dbstart is run. The dbstart and dbshut programs are Oracle-supplied UNIX shell scripts. You must create your own wrapper script that runs them as the oracle user, at boot time for dbstart and at shutdown for dbshut. Create a Bourne shell script like the following:

#!/bin/sh
# ora.server
ORACLE_HOME=/dir1/dir2/dir3/oracle
case "$1" in
start)
        su - oracle -c "$ORACLE_HOME/bin/dbstart" &
        ;;
stop)
        su - oracle -c "$ORACLE_HOME/bin/dbshut" &
        ;;
esac

Save this file to the /etc/init.d directory. Then (soft) link this ora.server script like so:

# ln -s /etc/init.d/ora.server /etc/rc0.d/K99ora.server
# ln -s /etc/init.d/ora.server /etc/rc2.d/S99ora.server

At boot or reboot (# init 6) time, Solaris calls all the S* scripts in the /etc/rc?.d directories, including S99ora.server, passing start as the $1 parameter. Similarly, at shutdown (# init 0), all the K* scripts are called, including K99ora.server, passing stop as the $1 parameter.

Using the oraenv Scripts and Global Logins

Oracle supplies an oraenv shell script for Bourne and Korn shells and a coraenv shell script for C shell. When placed in the user's startup file or read interactively into the current shell, they will set the proper Oracle environment for that user, in particular, the SID value. Add the oraenv file to a user's startup file. The user's startup file depends on the shell used. If using the Bourne or Korn shells, add the following lines to the user's .profile file (or alternatively, the ENV file for Korn):

ORAENV_ASK=NO
. /opt/bin/oraenv
ORAENV_ASK=

For C shell, add the following lines to the user's .login (or .cshrc if preferred):

set ORAENV_ASK=NO
source /opt/bin/coraenv
unset ORAENV_ASK

Notice that the oraenv files are located in the /opt/bin directory. This is true for Solaris; on other machines, it might be /usr/local/bin. There are a few variations on this setup. Adding these lines to each user's local login file (and maintaining them) can be tedious. An alternative is to add them to the global login files. For Bourne and Korn shells, this is /etc/profile; for C shell, it is /etc/.login. When a user logs in, the global login file is read first and then the user's local login. Hence, local overrides global. By adding these lines to the global login files, you need only add them to, at most, two files, rather than to all users' files. In addition, you are guaranteed a common Oracle environment for all users, unless users override it. It's a policy question as to whether different users might need different environments. A last variation exists on using the oraenv files: If you have multiple instances and need to change the SID at login time, simply don't set the ORAENV_ASK variable before reading oraenv. Each user will then be asked which SID he or she wants, for example:

ORACLE_SID = [ SID1 ] ?

Then you may enter another SID to override the default (SID1 in this case). Further, after login, each user may rerun oraenv to change to another instance, by doing

. /opt/bin/oraenv (Bourne or Korn shells)
source /opt/bin/coraenv (C shell)
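For example, an interactive rerun in the Bourne or Korn shell might look like this (the SID names are illustrative):

$ . /opt/bin/oraenv
ORACLE_SID = [SID1] ? SID2
$ echo $ORACLE_SID
SID2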

Configuring Shared Memory and Semaphores

Shared memory is, of course, memory that is shared among many processes or threads. An Oracle instance's SGA resides in shared memory. If your total SGA size exceeds that which you configure for your UNIX OS, your instance cannot start. Shared memory is segmented and can be allocated according to three models: one segment, contiguous multi-segment, or noncontiguous multi-segment. Oracle will try to allocate the SGA memory requested in the order of these models as listed. If all three possible models fail to allocate enough shared memory for the requested Oracle SGA, Oracle will raise an ORA error, and the instance will fail to start. The values that control the SGA size, of course, are mostly DB_BLOCK_SIZE, DB_BLOCK_BUFFERS, SHARED_POOL_SIZE, and LOG_BUFFER. Sorting parameters affect SGA and non-SGA memory, too.

The Solaris (and most SVR4 systems) shared memory parameters that you would set are

SHMMAX Maximum size (bytes) of a single segment
SHMSEG Maximum number of segments for a single process
SHMMNI Maximum number of segments, system-wide

The most critical of these are SHMMAX and SHMSEG. Let's work through an example and set the init.ora and UNIX parameters. Your system supports one instance, and the machine has 1GB of main memory. There are no other applications really competing with Oracle, and especially none that would use shared memory. In other words, this is a "dedicated" database (hardware) server. You could start with 1/2 to 3/4 of main memory; let's start with 3/4, or 768MB. Your init.ora parameters might look like the following:

DB_BLOCK_SIZE=16384        # 16KB
DB_BLOCK_BUFFERS=45056     # x 16KB = 704MB
SHARED_POOL_SIZE=16777216  # 16MB
LOG_BUFFER=8388608         # 8MB = size of 1 redo log

The database buffer cache (704MB), plus the shared pool (16MB), plus the log_buffer (8MB) take up 728MB, 40MB shy of 768MB. Now, set your shared memory to exactly 768MB. Your Solaris shared memory parameters might be set as follows (in the file /etc/system):

set shmsys:shminfo_shmmax=805306368
set shmsys:shminfo_shmseg=10

As long as Oracle is starting up with no competition from other applications for the shared memory, it will allocate exactly one shared memory segment of something less than 768MB—and use most of it.

Semaphores are global (shared) memory locking mechanisms. They are "true" locks in that they are made up of a "gate" and a queue. Sometimes they are referred to as queued locks, as opposed to nonqueued ones, such as latches or spin locks. Oracle will use at least one semaphore per Oracle process. Set your maximum number of UNIX semaphores greater than your PROCESSES parameter in init.ora for all your instances combined, if you have multiple instances running concurrently. Suppose you expect to have no more than 100 concurrent users, and then add your Oracle background processes (these can vary considerably by the other init.ora parameters and RDBMS options that you run) to this number. Suppose you have 15 Oracle background processes. Then you might set

PROCESSES=120

in your init.ora to have a little room for error. In Solaris (and most SVR4 systems), you might then set the following in the /etc/system file:

set semsys:seminfo_semmsl=150

to be safely higher than 120. SEMMSL sets the maximum number of semaphores per set. As shared memory is allocated in segments, semaphores are allocated in sets. Two other parameters you could also set, but which are not as critical as the previous ones, are

SEMMNI Maximum number of sets, systemwide
SEMMNS Maximum number of semaphores, systemwide

There are other Solaris (SVR4) kernel parameters, which can also be set in the /etc/system kernel file, that affect shared memory, semaphores, and other memory-based structures. Please refer to the Solaris system configuration reference manuals for further details.
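Pulling the examples together, a minimal /etc/system for the hypothetical dedicated server above might read as follows; the SEMMNI and SEMMNS values are illustrative, sized comfortably above the single instance's needs:

set shmsys:shminfo_shmmax=805306368
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmsl=150
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=300

Remember that changes to /etc/system take effect only after a reboot.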

Understanding the OFA

The Optimal Flexible Architecture (OFA), offered by Oracle with the release of version 7, provides a set of installation recommendations and guidelines. The OFA was developed with UNIX in mind. The OFA white paper is free and downloadable from www.oracle.com. Remember, these aren't requirements. Further, suggested naming conventions are just that: suggestions, not requirements. Hence, the OFA is flexible in this respect, too. Primarily, though, the flexibility the OFA gives is the capability of coping with large installations, many instances, and many versions. Also, OFA helps when you have instances moving through many stages, or phases, such as development, integration, testing, and production.

Maintaining separate environments can be a challenge, but using an OFA-like setup can help ease the administration of your implementation. The OFA offers more than just directory and file organization and naming suggestions; it also deals with tablespace naming and separation, for example. In any case, an example of a two-version, two-instance (production and development) configuration might have the following environment variable settings and OFA structure:

$ORACLE_BASE=/u01/oracle

$ORACLE_HOME=$ORACLE_BASE/product/7.3.3 for the production SID

$ORACLE_HOME=$ORACLE_BASE/product/8.0.3 for the development SID

$TNS_ADMIN=$ORACLE_HOME/network/admin; note that, like $ORACLE_HOME, this can belong to only one instance at a time

oratab is located in /var/opt/oracle or /etc

oraenv files are located in /opt/bin or /usr/local/bin

datafiles, control files, redo log files, and rollback segment datafiles are located in /u0[1-n]/oracle/<sid> subdirectories, where n represents the number of /u0n mount points; soft links may be used as necessary

administrative files are stored in $ORACLE_BASE/admin/<sid> subdirectories, such as bdump, cdump, udump, pfile, and exp

Please refer to the OFA White Paper for further details on its guidelines. Even though OFA grew out of Oracle on UNIX, it can be retrofitted for Windows NT, Netware, VMS, and other platforms. Figure A.1 shows a graph of the sample OFA structure, using DEV as the name of the development SID and PROD for production.

FIG. A.1  An OFA directory structure for the two-version, two-instance configuration.
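As a text sketch, such a tree might look like the following (the /u02 mount point for database files is illustrative; the layout follows the OFA suggestions above):

/u01/oracle                        ($ORACLE_BASE)
/u01/oracle/product/7.3.3          ($ORACLE_HOME, PROD)
/u01/oracle/product/8.0.3          ($ORACLE_HOME, DEV)
/u01/oracle/admin/PROD             (bdump, cdump, udump, pfile, exp)
/u01/oracle/admin/DEV              (bdump, cdump, udump, pfile, exp)
/u02/oracle/PROD                   (datafiles, control files, redo logs)
/u02/oracle/DEV                    (datafiles, control files, redo logs)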

Comparing Raw Disk and UFS

This comparison is a classic debate between Oracle and other RDBMS DBAs that has no resolution. In other words, there is no blanket statement anyone can make that will hold true for all applications. A raw partition in UNIX is a disk partition on which a UNIX file system (ufs) is not created. Hence, there is no file system (or general UNIX I/O) buffering for that partition. In fact, aside from its device filename at the UNIX level, UNIX otherwise has nothing to do with a raw partition. Why use raw partitions? If near random I/O is the characteristic usage pattern of your application type (for example, OLTP), a raw partition can help by bypassing the UNIX buffer. If your application type involves heavy sequential I/O (for example, DSS and DW), buffering helps minimize the electromechanical disadvantage. In other words, the UNIX (read-ahead) buffer will outperform raw partitioned access when I/O is the bottleneck. However, other factors come into play, such as think time and computational time. Think time is the time a user needs between interactions in an interactive system such as OLTP. Computational time is the batch analog of think time. It is the time a program uses in computing between significant functional tasks. In either case, if think or computational time is high under heavy I/O, the benefit of the UNIX buffering might be negated, and hence, a raw partition could yield better results. Here are some of the major advantages and disadvantages of using raw partitions.

The advantages are as follows:

Bypassing the UNIX buffer cache shortens the I/O path, which benefits the near-random I/O typical of OLTP.
A write acknowledged to Oracle is physically on disk, which is what the redo log and database writer mechanisms assume.
Raw devices are required for shared-disk configurations such as Oracle Parallel Server.

The disadvantages are as follows:

You lose the ufs read-ahead buffering that benefits heavy sequential I/O (DSS and DW).
Partitions are fixed in size, so space planning is inflexible.
Administration is considerably more difficult than with ordinary file systems.

Let's briefly look at the last disadvantage, because there are many inherent administrative problems with using raw partitions, including the following three:

Ordinary file utilities (ls, cp, mv, ufsdump) no longer apply, so backups and restores must use low-level tools such as dd.
Resizing or relocating a database file means repartitioning a disk, not simply copying a file.
Because the data is invisible to everyday file commands, a raw partition is easily overwritten by mistake (by newfs or an errant dd, for example) unless the system is well documented.

Final notes: If opting to use raw disks, do complete disk checks (bad sector checks) on all disks that might be used before you commit to them. Make sure you have enough disk space, plus some. Make all your raw partitions the same size, or choose three or four fixed partition sizes, such as 256MB, 512MB, 768MB, and 1024MB (1GB); this will mitigate the inflexibility of managing them. The latter solution of using a few size classes is better for most installations: it increases the likelihood of having a properly sized partition at hand, at the expense of some internal fragmentation (wasted space). Even so, this is nearly unavoidable when using raw disks.

CAUTION
When formatting your raw disks, always skip cylinder 0 to leave room for the disk label. Otherwise, the Oracle DBMS will likely overwrite it at the worst possible time. (By comparison, the high-level ufs I/O access routines have the knowledge to skip it built in.)

Using Additional UNIX Performance Tuning Tips

Use asynchronous I/O if supported. Normally, a DBWR/LGWR process must write, wait, write, wait, and so forth. Although buffer transfers are involved, each write must complete before the next begins; that is, the I/O is synchronous (serial). Asynchronous (parallel) I/O (AIO) reduces the wait times between buffered read/write acknowledgments. AIO works for either ufs or raw disks. If supported, use KAIO, which is an enhancement of AIO residing within the kernel rather than above it; hence, it is even faster. Ensure that the following init.ora parameters, enabled by default, are set:

ASYNC_WRITE=TRUE
ASYNC_READ=TRUE

or ASYNC_IO=TRUE in place of the previous two for some platforms.

Use multiple database writers (DBWRs). If you are not using AIO, set the following init.ora parameter:

DB_WRITERS=<the number of distinct disks containing datafiles>

Increase this up to twice the number of distinct disks containing datafiles, as necessary. This parallelizes I/O without using asynchronous I/O, because each DBWR's I/O is synchronous by default. Prefer KAIO first, AIO second, and multiple DBWRs last, and do not use any of these together.

Use the readv() system call. For heavy sequential I/O, readv() reduces buffer-to-buffer transfer times. The readv() system call is disabled by default. Enable it by setting

USE_READV=TRUE

Use outer disk cylinders first. Because of the Zone Bit Recording (ZBR) technology used by Sun, the outer cylinders tend to outperform the inner ones, and the lower-numbered disk slices normally occupy the outer cylinders. These would be slices 0, 1, and 3, as opposed to 4, 5, 6, and 7.

CAUTION
Remember, don't use slice 2, the overlap slice, because it maps the entire disk. Some programs are baffled when this slice gets changed.

Use the Oracle post-wait driver if available on your platform. The post-wait driver is a faster substitute for semaphores that provides the same functionality as far as Oracle is concerned.

Use the UNIX interprocess communication status (ipcs) command to monitor shared memory and semaphores. To monitor shared memory, use ipcs -mb, or use the Oracle shared memory utility (tstshm) if you have it. To monitor semaphores, use ipcs -sb. These utilities will help you determine how much shared memory and how many semaphores are being used. They can also help you determine the fragmentation of your shared memory. See the man page on ipcs for further details. Here is one example run of each utility:

# ipcs -mb

IPC status from <running system> as of Tue Nov 18 11:59:34 1997
T    ID    KEY    MODE    OWNER    GROUP    SEGSZ
Shared Memory:
m    500    0x0898072d    --rw-r----    oracle    dba    41385984
m    301    0x0e19813f    --rw-r----    oracle    dba    38871040
m    2    0x0e3f81dc    --rw-r----    oracle    dba    4530176

hostname:oracle> tstshm

Number of segments gotten by shmget() = 50
Number of segments attached by shmat() = 10
Segments attach at lower addresses
Maximum size segments are not attached contiguously!
Segment separation = 4292345856 bytes
Default shared memory address = 0xeed80000
Lowest shared memory address = 0xfff00000
Highest shared memory address = 0xeed80000
Total shared memory range = 4010278912
Total shared memory attached = 20971520
Largest single segment size = 2097152
Segment boundaries (SHMLBA) = 8192 (0x2000)

# ipcs -sb

IPC status from <running system> as of Tue Nov 18 11:59:39 1997
T    ID    KEY        MODE        OWNER    GROUP    NSEMS
Semaphores:
s    327680    00000000    --ra-r----    oracle    dba    25
s    327681    00000000    --ra-r----    oracle    dba    25
s    196610    00000000    --ra-r----    oracle    dba    20
s    4    00000000    --ra-r----    oracle    dba    25
s    5    00000000    --ra-r----    oracle    dba    25

Use direct I/O if supported. Some UNIX systems support direct I/O with ufs, effectively bypassing the UNIX buffer caches. This can negate the need for raw partitions. Enable this facility if your platform supports it, and use it in place of raw partitions.

Don't use processor binding or change the scheduling classes for processes. Because SMP machines hold most of the market share of Oracle database servers, I make this recommendation. Programs written to run on SMP machines are most efficient when their processes are processor independent; hence the S for Symmetric in SMP. Facilities exist (pbind, nice/renice, and priocntl) that enable changing processor affinity and process priority. In general, on SMP machines, it is a bad idea to mess with these settings for Oracle system (background) processes.

Don't modify the general UNIX buffer cache or the file system buffer unless you have a really good idea of what you're doing. In general, modifying these UNIX kernel and ufs parameters has little effect on Oracle performance. For example, the kernel parameter bufhwm is the maximum number of kilobytes that can be used by the general UNIX I/O buffers; by default, the UNIX buffer grows to up to 2 percent of available memory. The program tunefs controls the ufs logical block (or buffer) size, which is 8KB by default. Be careful in changing these, and have a good reason for doing so. For example, if you have heavy sequential I/O and are using ufs, you could increase bufhwm in /etc/system.
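For example, to let the buffer pool grow to 16MB, you might add the following line to /etc/system (the value is in kilobytes and purely illustrative):

set bufhwm=16384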

Keep your ufs file systems below 90 percent full. Unless you have changed it, minfree for each of your ufs file systems is 10 percent. At less than 90 percent capacity, a ufs is optimized for speed; above 90 percent, its optimization strategy switches to space. Hence, attempt to keep the ufs file systems housing Oracle datafiles and other Oracle components at 89 percent or less.
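A quick way to watch this is df -k; keep the capacity column at 89 percent or below (sample output, with illustrative numbers):

% df -k /u02
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t1d0s6    2097152 1800000  297152    86%    /u02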
