In Oracle, "tuning memory" usually means tuning the SGA. This includes monitoring and tuning the shared pool (the data dictionary and library caches) and the database buffer cache. Tuning memory is closely intertwined with tuning the application and tuning I/O, because one of the primary goals in tuning (reducing or eliminating contention) must involve all aspects of the tuning process. For example, this book covers tuning rollback segments and redo logs under tuning I/O, yet both have memory components that must be tuned, such as the rollback buffer area and the redo log buffer. However, because the major emphasis with each is I/O contention, they are covered under tuning I/O. You must also tune other memory-based structures such as locks and latches, some of which are covered in this chapter. Sorting is another memory issue. When you sort anything, whether as the result of a CREATE INDEX, an ORDER BY, a join, and so forth, the ideal situation is to do as much as possible in memory and resort to disk only as necessary. Finally, as a DBA, you must be an active participant in the integration with and tuning of the operating system (OS).
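As a concrete starting point, you can inspect the gross sizes of the SGA components just described from SQL*Plus. The queries below are a minimal illustrative sketch, not a tuning prescription; the views and statistic layouts shown are the standard V$ views, though the exact rows returned vary by Oracle release:

```sql
REM Illustrative only: show the gross SGA component sizes.
REM V$SGA breaks the SGA into fixed size, variable size (which
REM includes the shared pool), database buffers, and redo buffers.
SELECT name, value
  FROM V$SGA;

REM A finer breakdown of the variable area, including library cache
REM and dictionary cache memory, is available in V$SGASTAT.
SELECT name, bytes
  FROM V$SGASTAT
 ORDER BY bytes DESC;
```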
Unless it comprises part of a specialized database machine, any RDBMS, Oracle included, must request memory from the OS. The OS handles most of the lower-level memory management, such as shared memory, for the RDBMS. Depending on the RDBMS, the OS may also handle some higher-level functions, such as locking. As examples, consider Sybase and Oracle with regard to memory management. Sybase opts for using a large shared memory segment and managing it itself through low-level OS system calls. Oracle chooses this same approach. Sybase handles its own locking and interprocess communications natively, within its multithreaded process. Oracle, on the other hand, manages some of its locking and interprocess communications internally, yet also relies on the OS to do some of the work. These methods of handling memory are a direct result of the architecture of the RDBMS. In either case, the RDBMS, in and of itself a micro-operating system, must integrate closely with the OS to have its resource requests handled efficiently. This holds true not only for memory, but for all resources, including I/O. There are few exceptions; raw disk in UNIX is one notable exception (see Appendix A, "Oracle on UNIX"). For now, let's turn our attention to actually collecting data and diagnosing memory problems.
As you learned in Chapter 29, "Performance Tuning Fundamentals," the UTLBSTAT and UTLESTAT scripts lay the foundation for all your diagnostic work. You often supplement them with manual queries or customized scripts against the V$ dynamic performance views, such as V$SYSSTAT, but this is precisely where (along with the X$ tables) these two scripts gather their information in the first place. By using these scripts from one job to the next, you ensure a common denominator for communicating with other Oracle DBAs and consultants about the performance of a particular system.
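For example, a quick supplemental query against V$SYSSTAT might look like the following. This sketch checks the sort statistics, which tie back to the in-memory versus on-disk sorting discussed earlier; the statistic names are as they appear in V$SYSSTAT:

```sql
REM Supplemental check against V$SYSSTAT: compare in-memory sorts
REM with sorts that spilled to disk. Ideally, 'sorts (disk)' stays
REM near zero relative to 'sorts (memory)'.
SELECT name, value
  FROM V$SYSSTAT
 WHERE name IN ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');
```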
As you've seen, these scripts are usually found in the $ORACLE_HOME/rdbms/admin subdirectory. Before you actually use them, set TIMED_STATISTICS=TRUE in your init.ora parameter file, or issue ALTER SESSION SET TIMED_STATISTICS=TRUE at the session level. Then, follow these steps:
1. Run utlbstat.sql. This creates your beginning collection tables and views (in the SYS schema); these objects have names containing BEGIN. The beginning statistics are then selected and stored there.

2. Run your application, if it is not already running. Your goal as a DBA is to capture your system statistics during its peak activity.

3. When the peak has tapered, or after some reasonable period of time, run utlestat.sql. The sampling interval need not be overly long, but it should be sufficient. For example, if your peak load activity lasts 2 hours (120 minutes) each day, sample at least 20 percent of it, or about 24 minutes, preferably toward the middle of the interval. (Also, remember that the database must remain up at all times during the sample run. This should not be a problem, however, since you are trying to sample a live, peak-loaded production database, and one hopes the database stays up anyway!) This step creates your ending and differences collection tables and views. The ending objects have names containing END. The ending statistics are selected and stored there, and the differences between the beginning and ending statistics (the deltas) are stored in the differences tables. Finally, utlestat.sql selects from the differences objects, formats the data, and stores the information in the file report.txt.

Then the hard work begins: interpretation.
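Put together, a typical BSTAT/ESTAT session from SQL*Plus looks roughly like the following sketch. It assumes you are connected as a suitably privileged user (such as SYS) and that the scripts are in their default location; in SQL*Plus, the `?` shorthand expands to $ORACLE_HOME:

```sql
REM Sketch of a BSTAT/ESTAT run from SQL*Plus.
ALTER SESSION SET TIMED_STATISTICS = TRUE;

REM Step 1: capture the beginning statistics.
@?/rdbms/admin/utlbstat.sql

REM Step 2: let the application run through its peak period
REM (for a 2-hour peak, sample about 24 minutes of it).

REM Step 3: capture the ending statistics and compute the deltas;
REM the formatted report is written to report.txt in the current
REM working directory.
@?/rdbms/admin/utlestat.sql
```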
No two consultants will agree on every last detail when it comes to tuning Oracle, but they should agree on most things. It has often been said that performance tuning is part science, part art. I generally agree with this assessment, but try not to turn this type of work into a priesthood. In other words, yes, interpretation is the most artful part, because it requires human beings to make value judgments about the relative strengths and weaknesses of a system.
Interpretation can be, and has been, semi- or fully automated, using sophisticated parsers and programs. The undeniable reality, however, is that no matter how thoroughly this process is programmed, it remains partly subjective. Even when it is programmed, will any two tuners or programmers use the same rules or guidelines for what level of performance is "good" or "bad"? Most assuredly not. Furthermore, this considers only general-purpose tuning, if there is such a beast. When you bring to bear the problem of application specifics, these so-called fully automatic performance-tuning software programs quickly lose their usefulness.