

sometimes required instead of, or in addition to, TP monitors to help defray the overhead in servicing the numerous users of high-concurrency systems.

In addition to FIFO buffering, this type of software often guarantees delivery of the client request to the server. Hence, this software speeds things up indirectly not only by buffering client/server requests in a networked system, but also by preventing unnecessary retransmissions of the same requests.

Revisiting Application Types

Chapter 3 covered the three major application types (OLTP, DSS, and batch) and mentioned some lesser ones (OLAP, VCDB). Now let's reconsider Oracle tuning recommendations for these different types and some new, application-specific twists on the general hardware layout we presented. First, let's compare and contrast OLTP and DSS, the two major application types.

OLTP Issues

An OLTP system has high concurrency (a large number of interactive users) and is update intensive (has large amounts of insert, update, and delete statements). Because OLTP applications might grow or shrink significantly due to inserts and deletes, the extents in the tablespaces supporting the most volatile tables and indexes should be pre-allocated to their maximum expected size. Dynamic extension only hurts performance in this case. Basically, you must plan for capacity and implement your findings now, long before you reach those maximums.
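
As a rough sketch (the table name, tablespace, and sizes here are hypothetical and would come from your own capacity planning), preallocating extents for a volatile OLTP table might look like this:

    CREATE TABLE orders (
        order_id    NUMBER(10)  NOT NULL,
        customer_id NUMBER(10)  NOT NULL,
        order_date  DATE
    )
    TABLESPACE oltp_data
    STORAGE (INITIAL     200M       -- first extent sized from capacity planning
             NEXT        50M
             MINEXTENTS  1
             PCTINCREASE 0);        -- uniform extents; no geometric growth

    -- Additional extents can also be allocated explicitly, ahead of need:
    ALTER TABLE orders ALLOCATE EXTENT (SIZE 50M);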

You must have enough redo and rollback to handle the transaction needs. If transactions are relatively short or modify relatively small amounts of data, you need many smaller rollback segments. Because you have many concurrent users, you need many rollback segments to minimize contention. You should likewise have many redo logs to handle frequent checkpointing, if the system has this critical need. The rollback segments and redo logs should be separated and can be sized using Oracle diagnostic tools (see the section "Using Oracle Diagnostic Tools").
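
For illustration only (the segment name, tablespace, and sizes are hypothetical), creating one of several small rollback segments for short OLTP transactions might look like the following; you would repeat this for as many segments as your concurrency demands:

    CREATE ROLLBACK SEGMENT rbs01
        TABLESPACE rbs
        STORAGE (INITIAL    512K
                 NEXT       512K
                 MINEXTENTS 20       -- many small extents suit short transactions
                 OPTIMAL    10M);    -- shrink back to this size after a spike

    ALTER ROLLBACK SEGMENT rbs01 ONLINE;

In practice, each segment created this way is also listed in the ROLLBACK_SEGMENTS initialization parameter so that it comes online automatically at instance startup.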

Database check constraints and referential integrity constraints cost less overall in terms of computing time than placing the same logic in the application code. In any case, make sure that the application code is as reusable as possible to promote sharing. Oracle uses the library cache of its shared pool to reuse this code efficiently when it is called multiple times. For OLTP, reduce application process overhead by using bind variables instead of literals: statements that differ only in their literal values then become textually identical, so they can be shared from the library cache instead of being reparsed each time.
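
As a simple illustration (the table and column names are made up), the two literal statements below are parsed as two distinct statements, whereas the bind-variable form is parsed once and then shared:

    -- Literals: each statement is textually unique, so each one is parsed separately
    SELECT order_date FROM orders WHERE order_id = 1001;
    SELECT order_date FROM orders WHERE order_id = 1002;

    -- Bind variable: one statement text, parsed once and shared for every value
    SELECT order_date FROM orders WHERE order_id = :ord_id;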

In addition, with OLTP systems you need more indexes, because you are accessing relatively small amounts of data within tables in a more or less random fashion. But you don't want so many indexes that you actually wind up slowing down the application. As a rule of thumb, establish indexes on all of your primary and foreign keys and add others sparingly. Remember, updating and rebuilding indexes is very costly, and high DML (insert, update, delete) activity will trigger just this sort of index restructuring.
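
A minimal sketch of this rule of thumb, again with hypothetical names: the primary key constraint builds its own unique index, and a separate index covers the foreign key column.

    -- Primary key: the constraint builds (or uses) a unique index on order_id
    ALTER TABLE orders
        ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);

    -- The foreign key column joining back to the parent table gets its own index
    CREATE INDEX orders_customer_fk_idx ON orders (customer_id)
        TABLESPACE oltp_indexes;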


As mentioned earlier, in "Step 12: If All Else Fails, Consider More Exotic Solutions," the MultiThreaded Server (MTS) option might be used when your database system is under extremely heavy load, or on a daily basis if required. Remember, though, that the primary goal of MTS is to improve throughput of highly concurrent systems, and many OLTP systems are concurrent to varying degrees. However, maximizing throughput often increases response time delays. So using MTS is a matter of policy. That is, if high throughput defines good performance to you, use it. If you want low response times, don't.
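
Enabling MTS is done through initialization (init.ora) parameters rather than SQL. The sketch below uses approximate Oracle7/Oracle8-era parameter syntax with made-up values; check your release's documentation for the exact form before using it.

    # init.ora entries for the MultiThreaded Server (values are illustrative only)
    mts_dispatchers     = "(protocol=tcp)(dispatchers=3)"
    mts_max_dispatchers = 10
    mts_servers         = 5
    mts_max_servers     = 20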

DSS Issues

With DSS, the databases are often large, historical, and read only. Hence, queries (selects) are the meat of DSS activity. Oracle's Parallel Query can be used to speed up particularly slow queries that can be parallelized, sometimes with some physical (tablespace and data file) restructuring. As mentioned before, you want to read as many rows as possible with one Oracle block read. And you want to maximize your block reads by reading multiple, sequential blocks, because DSS queries typically trigger full table scans, which are sequential by nature. So set the DB_BLOCK_SIZE and DB_FILE_MULTIBLOCK_READ_COUNT parameters as high as possible. These parameter settings are explained further in Chapter 32.
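
For example (the values and the query below are illustrative only; DB_BLOCK_SIZE is fixed when the database is created, so it cannot simply be raised afterward), the init.ora settings and a parallelized full scan might look like this:

    # init.ora (illustrative values; DB_BLOCK_SIZE is chosen at database creation)
    db_block_size                 = 8192
    db_file_multiblock_read_count = 32    # blocks fetched per sequential read

    -- A hypothetical DSS query directing a parallel full table scan
    SELECT /*+ FULL(s) PARALLEL(s, 4) */ region, SUM(amount)
    FROM   sales s
    GROUP  BY region;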

However, here are some words of caution:

You have three methods for using indexes with DSS. Because most queries perform full table scans of large tables, you might wonder why you should use indexes at all with a DSS. (In fact, not using them is Method 1.) The three methods are as follows:

Method 1: Don't use any indexes at all, especially if you never have any highly selective queries or can't afford the additional storage required to create conventional indexes on every table.
Method 2: Keep only a few indexes on those tables that are selected by a small percentage of your DSS queries.
Method 3: Use bitmap indexing, as briefly discussed in the "Step 12: If All Else Fails, Consider More Exotic Solutions" section. Because DSS queries tend not to be highly selective and would otherwise trigger full table scans even if conventional indexes existed, bitmap indexes are a useful alternative (an example follows below).

Of course, in reality a DBA will probably use all of these methods from time to time.
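
A brief sketch of Method 3, using a hypothetical low-cardinality column on a DSS fact table:

    -- Bitmap indexes suit low-cardinality columns such as region or status
    CREATE BITMAP INDEX sales_region_bix ON sales (region)
        TABLESPACE dss_indexes;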

Don't use bind variables in DSS application code. Why? This is in direct contrast to our OLTP recommendation: with OLTP, you want to minimize application process overhead (parsing), but with relatively long-running DSS queries, parsing is a much smaller percentage of the overall query time. Instead, you want to ensure that the optimizer chooses the best access plan. If bind variables are used, the optimizer cannot call upon its stored statistics (via the
