Page 733
You have a classic case here. When selecting most or all of the rows from a table, the optimizer must choose a full table scan over any indexes as its access path. This in itself cannot be avoided if it is what the user desires and this type of access is part of the application requirements. What can you do to speed things up? Suppose your DB_BLOCK_SIZE is initially set to 4KB on your UNIX system; you can increase it to 16KB or 32KB. The larger the block, the more cost-effective your reads are: you get more rows of data per block with each read. Hence, you can make large data retrievals more efficient by increasing the block size.
You can also consider increasing DB_FILE_MULTIBLOCK_READ_COUNT and DB_BLOCK_BUFFERS as necessary. Reading more blocks at once and increasing block buffering should also help in DSS cases, but we'll discuss these later in Chapter 32, "Tuning I/O."
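As a sketch, the relevant init.ora settings might look like the following. The values are illustrative, not recommendations; note that DB_BLOCK_SIZE can be set only at database creation time, so it cannot simply be edited on an existing database.

```
# Illustrative init.ora fragment -- tune values to your workload and platform
DB_BLOCK_SIZE = 16384               # 16KB blocks; fixed at database creation
DB_FILE_MULTIBLOCK_READ_COUNT = 32  # blocks fetched per I/O during full table scans
DB_BLOCK_BUFFERS = 20000            # buffer cache size, in blocks
```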
The principle is to segment the application appropriately: assign the various pieces of the application to the appropriate places (the client, the middle tier, or the server).
Follow along with this scenario: You have a client/server database system that does order entry and control. However, the order entry screens themselves tend to run slowly on the PCs. Response time is poor, and the application seems to burden the client because a single entry might take seconds (which is slow, in terms of response time).
This could be anything. You first need to examine how the application is partitioned. It turns out that most of the data validation is done in the front-end code, and this might be the culprit because it accounts for a large majority of the lines of code. Most of the front-end code algorithms go like this:
1. Start a transaction.
2. Read the order data from the server.
3. Accept the clerk's input.
4. Validate the input against related tables on the server.
5. Write the changes back and commit.
The problem with this is that the clerk's transaction is active the entire time, and it interactively validates data among tables from the front end. This validation process (step 4) should take place on the server, non-interactively. It belongs there because it deals directly with the data: processes should normally be located as close as possible to the data on which they operate, unless there is some compelling reason to do otherwise (for example, an underpowered server). A standard way to segment the preceding application is to divide it into three major steps: read the input data within a transaction, change it locally (interactively) without a transaction, and write the changes back within a transaction. This application segmenting is also known as transaction chopping. Done correctly, it should yield a substantial reduction in response time.
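A minimal sketch of this read/change/write segmentation, using Python with SQLite standing in for the database (the `orders` table and `qty` column are hypothetical, chosen only for illustration):

```python
import sqlite3

def read_order(conn, order_id):
    # Transaction 1: read the input data, then end the transaction
    # so no locks are held while the clerk edits.
    qty = conn.execute("SELECT qty FROM orders WHERE id = ?",
                       (order_id,)).fetchone()[0]
    conn.commit()
    return qty

def change_locally(qty):
    # Interactive, local edit -- no database transaction is open here.
    return qty + 5

def write_order(conn, order_id, qty):
    # Transaction 2: write the output data back in one short transaction;
    # server-side constraints validate close to the data.
    conn.execute("UPDATE orders SET qty = ? WHERE id = ?", (qty, order_id))
    conn.commit()
```

The key design point is that no transaction spans the interactive middle step, so locks are held only for the two brief read and write phases.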
There are different ways of determining the goals of a performance tuning effort. A DBA should consider them all. Consider your application type, which we discussed in Chapter 3 and will discuss again later in this chapter (see the section "Revisiting Application Types"). Database systems can be sampled on various quantitative measures, which we also discussed in Chapter 3 and will discuss in "Understanding Benchmarks" later in this chapter. The most important of these are
Throughput. Work per unit time, as measured by transactions per second (tps); higher is better.
Response time. The time it takes for an application to respond, as measured in milliseconds or seconds; lower is better.
Wall time. The elapsed time a program takes to run; lower is better.
In any system, throughput and response time usually run counter to one another as tuning goals. If response time is high (poor), throughput might be high (good). If throughput is low (bad), response time might be low (good).
Common sense helps when sorting out these two conflicting measures. The more users that are concurrently using a system within a certain amount of time, the more likely it is that each user will experience longer delays than normal, but the number of transactions going through the system will be greater. On the other hand, if you decrease the number of concurrent users accessing the system within a certain time window, each user will enjoy faster response time at the expense of fewer overall transactions being completed in that duration.
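This trade-off is not quantified in the text, but Little's law (a standard queueing result: concurrency = throughput × response time) puts rough numbers on it:

```python
def throughput_tps(concurrent_users, avg_response_time_s):
    # Little's law: N = X * R, so X = N / R (transactions per second).
    return concurrent_users / avg_response_time_s

# A heavily loaded system (100 users, each waiting 2.0 s) completes more
# work per second than a lightly loaded one (20 users at 1.0 s), even
# though every individual user waits longer.
assert throughput_tps(100, 2.0) == 50.0  # high throughput, poor response time
assert throughput_tps(20, 1.0) == 20.0   # good response time, lower throughput
```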
Typically, OLTP systems want low response time or high throughput (in transactions per second), depending on the application needs. A DSS wants low response time, but it might also want high throughput in terms of blocks read or written per unit time; this type of throughput is not necessarily counterproductive to high concurrency and low response times. A batch (production) system typically wants low wall times. For example, everyone likes the payroll application to complete on time!
Always consider the two central tuning goals:
Maximize your return on investment. Invest your time and effort wisely by working the problems most likely to yield the most improvement.
Minimize contention. Bottlenecks are characterized by delays and waits; eliminate or reduce these whenever possible.
Finally, consider the following general-purpose database tuning goals:
Minimize the number of blocks that need to be accessed; review and rewrite code as necessary.
Use caching, buffering, and queuing whenever possible to compensate for the electromechanical disadvantage (memory is faster than disk); prefetch.
Minimize the data transfer rates (the time it takes to read or write data); fast disks, RAID, and parallel operations help do this.
Schedule programs to run as noncompetitively as possible; they might run concurrently and yet still be noncompetitive for the most part.
The return on investment (ROI) strategy is a top-down, cost-effective way of viewing the performance tuning process. It helps you find the best way to approach the tuning of your particular application. We have seen ROI turn up before as one of the major performance tuning principles (triage) and as one of the two major performance tuning goals. Now we take it to heart and use it as a high-level, step-by-step methodology for performance tuning.
You want to do performance tuning in the order that gives you the greatest gain for your time and effort. In the ROI strategy that follows, notice that steps 1 through 3 amount to logical and physical design, which were covered in Chapter 1, "Databases, DBMS Principles, and the Relational Model," and Chapter 2, "Logical Database Design and Normalization." Logical design is independent of the DBMS vendor; hence, there are no Oracle specifics when dealing with logical design (not counting logical design tools). We'll revisit application types in the next section and apply some Oracle-specific physical design and performance tuning recommendations for these applications. In later chapters, we'll revisit these performance tuning recommendations more closely, looking at tuning the application, tuning memory, and tuning I/O.
Steps 4 and 5 amount to tuning the application, which we'll cover in Chapter 30, "Application Tuning." Steps 6 and 7 amount to tuning memory and are covered in Chapter 31, "Tuning Memory." Steps 8 and 9 amount to tuning I/O and appear in Chapter 32. Steps 10 and 11 deal with components even more external to the database than the OS: the network and the client. Step 12 offers some advanced and less-common solutions, which should be tried only after the standard methods (steps 1 through 11) have been applied and the desired performance gains have not been realized. Chapters 30, 31, and 32 contain Oracle specifics to help you do proper performance tuning.
For now, let's review the steps in a little more detail, in descending order of return on investment.
As covered in Chapter 2, you can reduce storage. In practice, this often means more tables with fewer rows per table. In turn, this means the capability for faster searching. The fewer rows that are stored and must be searched through, regardless of the searching technique, the quicker you'll be able to find what you're looking for.
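To make the "fewer rows, faster search" point concrete, here is a toy cost model (illustrative only, not Oracle's actual costing): a full scan examines on the order of n rows, while a balanced-tree (B-tree-like) index probe examines about log2(n), and shrinking the table helps both access paths:

```python
import math

def full_scan_cost(rows):
    # Expected rows examined by a full scan to find one match: ~ rows / 2.
    return rows / 2

def index_probe_cost(rows):
    # Comparisons for a balanced-tree search: ~ log2(rows).
    return math.log2(rows)

# Shrinking the table reduces the cost of every searching technique:
assert full_scan_cost(1_000_000) == 500_000
assert full_scan_cost(10_000) == 5_000      # 100x fewer rows -> 100x cheaper scan
assert index_probe_cost(1_048_576) == 20.0  # 2**20 rows
assert index_probe_cost(1_024) == 10.0      # 2**10 rows -> half the probe depth
```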
In Chapter 3, we offered a general-purpose hardware layout. In the section "Step 12: If All Else Fails, Consider More Exotic Solutions," we'll offer some more application-specific twists to this general hardware layout.