You now have SQL Server installed and running. Maybe you are about to roll out your first production database and your boss is breathing down your neck asking, "Mister (or Madam) DBA, have you tuned and optimized the server?" You tell your boss, no, but tuning and optimizing SQL Server was next on your list. You bring up the SQL Enterprise Manager. You begin to stare at the Enterprise Manager configuration screen and ask yourself, "So, which knobs do I turn?"
In this chapter, you learn how to modify SQL Server's configuration parameters. The chapter examines each configuration parameter, its function, and its effect on SQL Server performance and tuning. When SQL Server is installed, the configuration parameters are given an initial default value. Microsoft has done a good job of selecting default values for the configuration parameters; in many cases, the default values provide the optimum performance for your database server. SQL Server enables you to set close to 50 parameters. We use tips to highlight the most commonly tuned parameters. This is not to say that you should never touch the other parameters, but in general, the default settings for the others are sufficient.
To Tune Or Not To Tune?
As you start to tune SQL Server, keep in mind that you are tuning the database server. Other factors, such as the hardware configuration chosen for the database, the network the database clients and SQL Server belong to, and the overall size and structure of your databases also affect the performance of the database server. I have a friend who used to work for a Fortune 500 company that was bringing several SQL Servers online. He had spent some time tuning SQL Server and everything was up and running quite smoothly. As the days went on, the organization started to experience problems with a particular application running progressively slower. Upper management thought the problem must be a SQL Server configuration problem.
My friend tried to explain to them that the problem was not a SQL Server tuning issue but an application issue. He explained how they had done everything right: researched and purchased a very fast RAID 5 machine, tuned Windows NT Server, and then used SQL Server tools to properly configure SQL Server. Management didn't buy it, so they brought in another consulting firm with a highly certified and expensive specialist. The specialist examined the SQL Server and did not change any configuration parameters because they were all reasonably set. So the hired guns left without fixing anything and people in upper management were left scratching their heads. I stopped by to see if I could help them out, and as it turned out, their problem was a failure to issue a simple command that needed to be executed on three of their tables. (UPDATE STATISTICS, what else?)
The moral of the story is that many things affect the overall performance of SQL Server. Performance issues start from square 1 when you research and purchase the machine and set up Windows NT Server. Too often, the real problems are not understood, so people think tuning SQL Server is the answer. Tuning SQL Server enables the server to use the available resources on the machine optimally but does not prevent problems resulting from poor database and application design or failure to perform periodic maintenance. It is important to understand the value and limitations of tuning SQL Server. Oh yeah, when queries are running slow and the table has the correct indexes--remember the UPDATE STATISTICS command. In many cases, it can fix the problem!
Before discussing the many different configuration parameters, you must learn how to modify SQL Server parameters using the Enterprise Manager. Start up the Enterprise Manager, select a SQL Server, and perform the following steps:

1. From the Server menu, choose SQL Server and then Configure. The Server Configuration/Options dialog box appears (see Figure 19.1).

2. Select the Configuration tab to display the configuration parameters (see Figure 19.2).

3. Enter a new value in the Current column for the parameter you want to change; then click the OK button (or the Apply Now button) to apply the change.
Figure 19.1.
The Server Configuration/Options dialog box.
Figure 19.2.
The Configuration tab in the Server Configuration/Options dialog box.
Stranger than Fiction!
While I was modifying a configuration value that could be updated immediately, I noticed that the value displayed in the Running column of the Server Configuration/Options dialog box did not change to the Current column value when I clicked the Apply Now button. I thought this was pretty strange because apply now means just that--change the configuration and running value now if the value can be changed without restarting SQL Server. So I pressed F1 to display the help screen, which states:
Apply Now - Applies the information you have specified. For values that take effect immediately, you must close the Server Configuration/Options dialog box for the value to take effect.

After reading the help, I was even more confused: now I couldn't determine the difference between the OK button and the Apply Now button. Do they both take effect when you close the Server Configuration/Options dialog box?
Well, I brought up a second instance of the Enterprise Manager and did some testing, using the Server Configuration/Options dialog box on one instance to change values and using sp_configure on the second instance to check values. Guess what I found? The configuration values do take effect when you click the Apply Now button; however, the Server Configuration/Options dialog box is not refreshed! The value takes effect when you click the Apply Now button, not when you close the window. You simply have to close and redisplay the Server Configuration/Options dialog box (that is, refresh the display) to see the changed values. Hmmm, because it's in the help file, it must be a feature, not a bug, right?
To change system configuration parameters the way those poor folks stuck in the dark ages of ISQL (Sybase) do, use the system stored procedure sp_configure, which has the following syntax:
sp_configure [configuration_name [, configuration_value]]
In this syntax, configuration_name is the configuration parameter name you want
to change and configuration_value is the new value for the configuration parameter.
NOTE: Using sp_configure with no parameters displays the current SQL Server configuration.
SQL Server configuration values are stored in the system table sysconfigures. If you use sp_configure to modify an option, you must use the RECONFIGURE command to make the change take effect. RECONFIGURE has the following syntax:
RECONFIGURE [WITH OVERRIDE]
The WITH OVERRIDE parameter is required only when you set the allow updates configuration parameter to 1. This provides an added security check to make sure that you really want to modify allow updates.
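For example, a minimal sp_configure session might look like the following sketch (allow updates is used here because it is the one option that requires WITH OVERRIDE):

EXEC sp_configure                      -- no parameters: display the current configuration
EXEC sp_configure 'allow updates', 1   -- change a configuration value
RECONFIGURE WITH OVERRIDE              -- WITH OVERRIDE is required only for allow updates
EXEC sp_configure 'allow updates', 0   -- turn the option back off when finished
RECONFIGURE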
We have mentioned several times that some configuration parameters can take effect without restarting SQL Server. These parameters are referred to as dynamic configuration variables; each parameter description later in this chapter notes whether the parameter is a dynamic variable.
The following sections examine each of the SQL Server configuration parameters. The configuration parameters are in alphabetical order, except in special cases when parameters are grouped together (memory configuration parameters, for example). Also, the advanced options are reviewed separately from the standard options. Start with the most important tunable configuration parameters--memory.
NOTE: Several configuration parameters are new for SQL Server 6.5; they are identified as such in the parameter descriptions later in this chapter.
By this time, you have already made several decisions that affect the performance of your SQL Server, such as the type of computer and disk system you have chosen. Now comes the question, "Which knobs do I turn?" or "Which parameters do I change?" As a Microsoft SQL Server DBA, the most important parameters for you to configure are the memory parameters. In general, SQL Server loves memory; that is, the performance of your SQL Server system can be enhanced by adding more memory. SQL Server is not a memory hog and is very smart with memory usage, particularly data caching. So how does SQL Server use memory? Look at Figure 19.3 to understand how memory is allocated.
Figure 19.3.
Windows NT and SQL Server memory usage.
Before you can give memory to SQL Server, you must allocate enough memory for Windows NT Server to run sufficiently. The minimum memory requirement for Windows NT Server is 12M, but the amount of memory required by Windows NT varies, depending on the overall use of the NT Server. In the case of a dedicated SQL Server machine, I like to start with 16M of memory for the NT Server and work my way up, if necessary. Once the operating system requirements are met, you can allocate memory to SQL Server. At startup, SQL Server requests the amount of memory specified by the memory configuration parameter from the Windows NT operating system and then uses the allocated memory to set up shop. SQL Server uses the memory as follows.
First, a certain amount of memory is allocated for SQL Server overhead, which includes the size of the SQL Server executable. The amount of SQL Server static overhead is roughly 2M and is not affected in any way by configuration parameters.
SQL Server then allocates memory for various configuration options, such as user connections and open databases (for example, each user connection configured in SQL Server requires 37K of memory up front). The remaining memory is then used for the procedure cache (which stores the most recently used stored procedures and query trees) and the data cache (which stores the most recently used data and index pages). As you increase the amount of SQL Server memory, the sizes of the procedure cache and data cache increase, boosting performance because SQL Server can retrieve more information from memory without performing disk I/O.
If the temporary database (tempdb) has been placed in RAM, SQL Server requests
the memory to create the database from the Windows NT operating system during startup.
CAUTION: An important point to remember: If the temporary database, tempdb, is placed in RAM, the memory allocated to the temporary database is not part of the memory configured for SQL Server. I have seen many SQL Server systems fail to start because someone switched the temporary database to RAM and allocated all the remaining memory to SQL Server, assuming that tempdb in RAM was taken from SQL Server's memory allocation. If you find yourself in this situation, you can restart SQL Server using the default settings by adding the startup parameter -f to the SQL Server startup parameter list. You can find the startup parameter list under the option Set Server Options in the SQL setup program.
Monitoring Memory

This section offers some tips to help you determine the correct amount of memory for your SQL Server. The primary tools are the SQL Performance Monitor and the DBCC MEMUSAGE command. To determine whether you have enough memory or too much memory, use the Performance Monitor and watch the counters listed in Table 19.1.
Table 19.1. Performance Monitor counters used to monitor SQL Server memory.

Performance Monitor Object | Counter
Memory                     | Page Faults/sec
SQLServer                  | Cache Hit Ratio
SQLServer                  | I/O Page Reads/sec
NOTE: When tuning SQL Server memory as just described, many of the counter scenarios used the word may instead of will to describe whether or not adding more memory could benefit your SQL Server. Why so vague? In many cases, adding additional memory to SQL Server can increase the performance of your system by keeping more information in the cache. However, other factors come into play that cannot be accounted for in a generalized sense. You need to be aware of the type of actions taking place on SQL Server while you are watching the Performance Monitor and tuning SQL Server. For example, if the work load executing on SQL Server is accessing the same values in a table over and over, you should see a high cache-hit ratio, indicating that you have sufficient memory allocated to SQL Server.
But if, in your actual production environment, many different tables and databases are being accessed instead of one table, your tuning session did not reflect your production environment and your assumption of sufficient memory may be wrong. This can also work in reverse. Your tuning session may access many different tables and databases simultaneously, indicating insufficient memory allocation. However, in your actual production environment, the typical situation may be that the same data is being retrieved over and over (that is, enhancing the chance of being in the cache), which would change your assumption of your required memory allocation. Make sure that you tune to your production environment and carefully evaluate your results.
To determine the actual memory pages used by the procedure cache and the data
cache, subtract the total SQL Server static overhead and the overhead from SQL Server
configuration values from the total amount of memory allocated to SQL Server.
TIP: A new Performance Monitor counter object called SQLServer-Procedure Cache Object has been added in SQL Server 6.5. One of the object's counter values is Procedure Cache Size, which gives you the size of the procedure cache in 2K pages. The new counter object is discussed later in this chapter in more detail in the section, "procedure cache."
To get the size of the procedure cache, multiply the total cache page value by the default percentage value of the procedure cache. For the data cache, subtract the procedure cache from the total cache value. For example, assume a SQL Server system has a configuration value of 16M of memory and that the static and configuration overhead total 3M. The amount of memory available for the caches is 16M - 3M = 13M.
To determine the size of the procedure cache using the default value of 30: 13M x 30% (.30) = 3.9M for the procedure cache. The data cache is 13M - 3.9M (the size of the procedure cache) = 9.1M.

NOTE: To determine the amount of memory consumed by configuration variables, multiply the configured value for each of the following configuration parameters by the approximate amount of memory consumed per object and add the totals:

Locks: 32 bytes each
User Connections: 37 kilobytes each
Open Databases: 1 kilobyte each
Open Objects: 40 bytes each
Devices: 17.9 kilobytes total (255 configured devices in SQL Server 6.x at roughly 70 bytes each)
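As a rough illustration of the arithmetic, here is the configuration overhead for a hypothetical server left at the default values listed later in this chapter (5000 locks, 20 user connections, 20 open databases, 500 open objects); the per-object sizes are the approximations from the note above:

SELECT locks       = 5000 * 32,       -- 160,000 bytes
       connections = 20 * 37 * 1024,  -- 757,760 bytes
       databases   = 20 * 1024,       -- 20,480 bytes
       objects     = 500 * 40,        -- 20,000 bytes
       devices     = 18330            -- ~17.9K total for 255 devices

The total is roughly 976,570 bytes (just under 1M); adding the approximately 2M of static overhead gives the 3M overhead figure assumed in the example above.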
TIP: The DBA Assistant utility on the CD that accompanies this book provides a graphical representation of SQL Server memory allocation using the formulas discussed above.
To double-check your memory computations, use the DBCC MEMUSAGE command to get the exact sizes of SQL Server and configuration overhead, as well as the size of the procedure and data cache. The partial listing in Listing 19.1 is actual output from a DBCC MEMUSAGE command.
Listing 19.1. Partial output from the DBCC MEMUSAGE command.

Memory Usage:

                     Meg.     2K Blks     Bytes

Configured Memory:   8.0000      4096   8388608
Code Size:           1.7166       871   1800000
Static Structures:   0.2385       123    250048
Locks:               0.2480       127    260000
Open Objects:        0.1068        55    112000
Open Databases:      0.0031         2      3220
User Context Areas:  0.8248       423    864824
Page Cache:          3.3020      1691   3462416
Proc Headers:        0.0795        41     83326
Proc Cache Bufs:     1.3359       684   1400832
Using the terminology established earlier to compute the SQL Server memory breakdown, the DBCC MEMUSAGE output translates as follows:
Configured Memory = Total Amount of Memory Allocated to SQL Server

Code Size + Static Structures = SQL Server Overhead

Locks + Open Objects + Open Databases + User Context Areas = Configuration Overhead

Page Cache = Data Cache

Proc Headers + Proc Cache Bufs = Procedure Cache
The DBCC MEMUSAGE command also prints out the top 20 buffered tables and indexes in the data cache, as well as the top 20 items in the procedure cache. Listing 19.2 shows a partial listing of the top 20 procedures from a DBCC MEMUSAGE command output.
Listing 19.2. Partial DBCC MEMUSAGE output: the top 20 items in the procedure cache.

Procedure Cache, Top 20:
Procedure Name: sp_MSdbuserprofile
Database Id: 1
Object Id: 233051866
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.144749 Mb, 151780.000000 bytes, 76 pages
Procedure Name: sp_help
Database Id: 1
Object Id: 1888009757
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 1
Size of plans: 0.051249 Mb, 53738.000000 bytes, 27 pages
NOTE: For each SQL Server 6.5 configuration parameter, the Minimum, Maximum, and Default values are listed in table format; so is the parameter's dynamic variable status.
memory

The memory configuration parameter sets the amount of memory allocated to SQL Server from the operating system. The values are represented in 2K (2048-byte) units. For example, 16M would be 16M/2K = 8192 pages. In general, adding memory to SQL Server increases the performance of your system.
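For example, to allocate 16M to SQL Server (a sketch; because memory is not a dynamic variable, SQL Server must be stopped and restarted before the new value takes effect):

EXEC sp_configure 'memory', 8192   -- 8192 2K pages = 16M
RECONFIGURE
-- Stop and restart SQL Server for the new value to take effect.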
TIP: If you make calls to many different extended stored procedures, you can use the DBCC command option dllname(FREE) to unload the DLL from SQL Server memory, freeing up the memory to be used by other processes. Otherwise, the DLL remains in memory until SQL Server is shut down.
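For example, if extended stored procedures were loaded from a hypothetical DLL named xp_sample.dll, the following statement unloads it:

DBCC xp_sample(FREE)   -- the DLL name is given without the .dll extension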
Use the Performance Monitor and the DBCC command MEMUSAGE to help properly
configure your SQL Server with the appropriate amount of memory.
TIP: Is it possible to have too much memory? The answer is yes! If you allocate too much memory to SQL Server and not enough to Windows NT, the performance of your SQL Server can decrease because of excessive paging. To determine whether you have allocated too much memory to SQL Server and not enough to Windows NT, use the Performance Monitor and watch the Page Faults/sec counter. If page faults are being continuously generated (after system and SQL Server startup), you are running SQL Server with too much memory. Reduce the configuration amount and check again.
Microsoft prints out a table of suggested memory allocation for SQL Server, based on the amount of memory on the computer, which can be found in Books Online. Remember that the table is just a suggested starting point; you should always determine the correct amount of memory by monitoring your SQL Server.
Minimum: 1000 (2M)
Maximum: 1048576 (2G)
Default: Depends on setup, 8M for computers with <= 32M, 16M for computers with > 32M
Dynamic Variable: No
CAUTION: Be careful about allocating more memory to SQL Server and Windows NT than is physically available (that is, relying on virtual memory). Configuring SQL Server so that it uses virtual memory exceeding a ratio of 1:1 can hurt SQL Server performance and shows up as excessive paging in the Performance Monitor.
The tempdb Database in RAM

The temporary database, tempdb, is used by SQL Server, applications, and developers as a temporary work area for creating temporary tables or storing temporary information. For example, a developer may create a temporary work table in a stored procedure or SQL Server may create a temporary work table as a result of a query with a group by or order by clause. The temporary database resides on disk as part of the master device and has a default size of 4M. Like a regular database, the size of tempdb can be increased using the ALTER database command. In SQL Server 4.21 for Windows NT, Microsoft included the option to allow the temporary database to reside in memory.
Following are the minimum, maximum, and default values for the tempdb in RAM configuration parameter:
Minimum: 0
Maximum: 2044 (Note: Depends on the size of your temporary database.)
Default: 0
Dynamic Variable: No
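A sketch of placing a 4M tempdb in RAM, assuming the sp_configure option name tempdb in ram (MB) used by SQL Server 6.x:

EXEC sp_configure 'tempdb in ram (MB)', 4   -- setting the value back to 0 returns tempdb to disk
RECONFIGURE
-- Stop and restart SQL Server; remember that this memory is taken from
-- Windows NT, not from the memory configured for SQL Server.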
Developers Feel the Need for tempdb in RAM
When the tempdb in RAM feature first came out, I was working with a group of developers who had sa privileges on one of the development servers. They could not wait to put the temporary database in RAM to help speed up their stored procedures, which made heavy use of temporary tables. The problem was that the development machine had only a modest 32M of memory. SQL Server had been configured with 16M of memory and tempdb had the default size of 4M. At the time, the developers thought that the memory from tempdb was part of the 16M configured with SQL Server, because the documentation for version 4.21 was not clear about exactly where the memory for tempdb in RAM was coming from. These guys placed tempdb in RAM, stopped and restarted the system, and were as happy as could be thinking that they had just gotten a huge performance boost. Later on, they had to shut down SQL Server; when they tried to restart SQL Server, it would not start. This is where I came in. Turns out they had several other applications running on the NT Server that grabbed the memory needed to create the tempdb in RAM. When SQL Server failed while creating the tempdb, the server halted. I was able to shut down the other processes, restart the server, and place tempdb back on disk. The moral of the story is, "Watch out for the developers; they will do everything they can to convince you they need tempdb in RAM!"
Microsoft recommends that you not place tempdb in RAM unless your NT Server has at least 64M to 128M of memory. I make the same recommendation because it has been my experience that, in systems with less memory, the available memory is better used as part of the data or procedure cache. If you have enough memory to place tempdb in memory, run a series of benchmark tests with your queries or stored procedures that use the temporary database. Make sure that placing the temporary database in memory gives you the benefits you want. If you find substantial performance gains, leave tempdb in RAM; otherwise, place tempdb back on disk and use the memory for larger procedure and data caches.

procedure cache

The procedure cache configuration parameter is often tuned to enhance SQL Server performance. SQL Server 6.5 has added a new Performance Monitor counter object called SQLServer-Procedure Cache that can be used to help determine the correct procedure cache value.
Refer back to Figure 19.3 and notice that SQL Server maintains two cache areas that increase as the amount of memory is increased: the procedure cache and the data cache. The procedure cache stores the most recently used stored procedures and also is used for compiling SQL for ad-hoc queries. The data cache is used to store the most recently used data or index pages. Both caches use an LRU/MRU (least recently used/most recently used) management scheme to determine what stays in the cache and what is overwritten.
So how do you know whether you have the correct values for your procedure cache? First, you must understand that stored procedures are not reentrant; that is, if the same stored procedure is executed simultaneously by two users, SQL Server has separate copies of the compiled stored procedure in the procedure cache--one for each user (that is, the two users cannot share the same stored procedure). If 50 users run the same or different stored procedures simultaneously, you have 50 copies of the stored procedures in the procedure cache.
NOTE: In the SQL Server Books Online section, "Estimating SQL Server Overhead," the default size of the procedure cache is incorrectly listed as 20 rather than the correct default value of 30. Other parts of the documentation have the correct default value for the procedure cache.
Minimum: 1
Maximum: 99
Default: 30
Dynamic Variable: No
TIP: Because the procedure cache is of a limited size, it is possible to run out of available memory in the procedure cache, causing error 701 to occur. For example, if your procedure cache is large enough to run 200 stored procedures, and 200 stored procedures are being executed, the next stored procedure or query tree that attempts to load in the cache will be rejected for lack of space. If this occurs frequently, increase the size of your procedure cache.
Now that you understand how users affect the number of stored procedures or query trees in the procedure cache, what about the size of a stored procedure? The minimum size of a stored procedure is 2K. To determine the correct size of a stored procedure, use the DBCC MEMUSAGE command. The following formula, found in the Microsoft SQL Server Administrator's Companion in Books Online, can be used to estimate the size of the procedure cache:
Procedure Cache = (Max. Number of Concurrent Users) x (Size of Largest Plan) x 1.25

To come up with the configuration value, use the following:

Procedure Cache Config. Value = (Procedure Cache / (SQL Server Memory - (Static Overhead + Configuration Overhead))) x 100%
A better method of getting a starting value for procedure cache is to determine the most widely used stored procedures and use an average size of the procedures in the preceding formula rather than the size of the largest procedure. If your SQL Server has a large amount of memory available for the cache (greater than 64M), you may want to consider lowering the procedure cache value. For example, if you have 100M available and use the default setting, your procedure cache would be 30M. If the maximum number of concurrent users is 175 with an average plan size of 55K, the following would be true:
Procedure Cache = 175 users x (55K plan size) x 1.25 = 12M

Procedure Cache Config. Value = (12M / 100M) x 100% = 12
By using the formulas, you find that you could probably get by with a procedure cache configuration value of 12 percent instead of 30 percent, freeing up an additional 18M of memory for the data cache!
TIP: You have spent some time determining the correct size of the data cache. Now it's time to tune the procedure cache; however, here's the catch: If you modify the procedure cache configuration value, you change the data cache! If you add more memory to SQL Server, you change the size of the data cache and the procedure cache. To size either the procedure cache or data cache without changing the other cache value requires setting both the memory configuration value and the procedure cache value simultaneously, adjusting the two values so that the other cache remains the same. Use this technique when fine-tuning the correct cache values.
Once you have determined a configuration value for the procedure cache, use the Performance Monitor and the new SQL Server 6.5 object SQLServer-Procedure Cache to further fine-tune the procedure cache value. Using the SQLServer-Procedure Cache object takes the guesswork out of selecting a correct value for the procedure cache. Monitor the following counter values during normal and peak periods for your SQL Server:
- Max Procedure Cache Used
- Procedure Cache Used
If the Max Procedure Cache Used counter value exceeds 95 percent while you are monitoring it, increase the size of the procedure cache--you are very close to running out of procedure cache memory (which manifests as error 701). If, during maximum load periods, the Max Procedure Cache Used value does not exceed 70 percent, consider decreasing the amount of memory allocated to the procedure cache.
A DBCC command you can use to help establish the correct cache size is the DBCC SQLPERF( LRUSTATS) command. Execute the command after your SQL Server has been running for a few days. Examine the Cache Flushes and Free Page Scan (Avg) values. The Free Page Scan (Avg) value should be less than 10 and the value for Cache Flushes should be less than 100. If your values are higher, your current SQL Server cache is not large enough and you can improve your SQL Server performance by increasing the cache size (that is, by adding memory).
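For example (run after the server has been up for a few days so the counters reflect real activity):

DBCC SQLPERF(LRUSTATS)
-- In the output, check:
--   Free Page Scan (Avg): should be less than 10
--   Cache Flushes:        should be less than 100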
Determining the correct size of the procedure cache involves trying different values and monitoring the SQL Server-Procedure Cache object. Getting the correct size for your procedure cache enhances the performance of your system.
If the value of allow updates is set to 1, the SQL Server system tables can be modified. But do not set this configuration value to 1 unless told to do so by Microsoft Technical Support.
CAUTION: Directly updating system tables is risky business and could prevent your SQL Server from running.
If you need to update system tables, use the system stored procedures. If you need to turn on this option, start SQL Server in single-user mode (command-line option -m) to prevent any other users from accidentally modifying the system tables.
Minimum: 0
Maximum: 1
Default: 0
Dynamic Variable: Yes
NOTE: Stored procedures created to modify system tables while the allow updates option is on can always modify the system tables, even after the allow updates option is turned off.
A SQL Server 6.x option, backup buffer size, enables you to set the size of the buffer used to load/dump a database to increase the performance of backup/load operations. The value is in 32-page increments.
Minimum: 1
Maximum: 10
Default: 1
Dynamic Variable: Yes
The backup threads option is another SQL Server 6.x feature added to speed up dump/load processing. The backup threads configuration value reserves a number of threads to be used for parallel striped backups and recoveries.
Minimum: 0
Maximum: 32
Default: 5
Dynamic Variable: No
The database size option determines the default size allocated to each new database created without specifying the database size. The configuration values are in megabytes.
Minimum: 1
Maximum: 10000
Default: 2
Dynamic Variable: No
The default language option determines the number of the language used to display system messages.
Minimum: 0
Maximum: 9999
Default: Varies (US_English = 0)
Dynamic Variable: No
The fill factor option specifies how densely packed you want your index and data pages while creating an index. The default is 0, which leaves room on the nonleaf pages but makes the leaf pages 100 percent full. Use a low fill factor value to spread data over more pages. For more information on using fill factor, see Chapter 21, "Understanding Indexes."
Minimum: 0
Maximum: 100
Default: 0
Dynamic Variable: No
NOTE: The fill factor value is not maintained by SQL Server after index creation; it is applied only when the index is built.
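For example, here is a sketch of building an index that leaves roughly 30 percent free space on each page (the table and index names are hypothetical):

CREATE INDEX idx_lastname
ON customer (last_name)
WITH FILLFACTOR = 70   -- pack index pages 70 percent full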
The language in cache option determines the number of languages that can be held simultaneously in the language cache.
Minimum: 3
Maximum: 100
Default: 3
Dynamic Variable: No
The LE threshold maximum is a new configuration parameter that can help ease potential locking problems in environments with extremely large tables.
LE threshold maximum stands for Lock Escalation threshold maximum and is a SQL Server 6.x configuration parameter. In previous versions of SQL Server, when many page lock requests began to pile up on a table, SQL Server escalated the lock request to a table lock, preventing other users from accessing the table. The magical number--regardless of the size of the table--was 200 and could not be tuned. Developers working in environments with large amounts of data were penalized because the majority of their transactions could affect more than 200 pages. Now you can tune the number of page locks SQL Server holds before a table lock is placed on the table.
Minimum: 2
Maximum: 500000
Default: 200
Dynamic Variable: Yes
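For example, here is a sketch of raising the escalation point for a large-table environment (the value shown is arbitrary); because LE threshold maximum is a dynamic variable, the change takes effect without a restart:

EXEC sp_configure 'LE threshold maximum', 10000
RECONFIGURE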
NOTE: The use of configurable locking parameters is just one of many ways SQL Server 6.x has improved SQL Server locking capabilities and available options for the developer and DBA. I take my hat off to the folks in Redmond, WA!
The LE threshold percent option (a SQL Server 6.x option) enables you to control locking escalation as a percentage rather than a fixed number. A zero value causes a table lock to occur when the LE threshold minimum is reached.
Minimum: 1
Maximum: 100
Default: 0
Dynamic Variable: Yes
The locks configuration variable sets the number of available locks. If you are getting error messages that the SQL Server is out of locks, increase the number.
TIP: The locks configuration parameter is an often-tuned parameter.
Minimum: 5000
Maximum: 214748364
Default: 5000
Dynamic Variable: No
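For example, a sketch of doubling the available locks; because locks is not a dynamic variable, the change requires a restart:

EXEC sp_configure 'locks', 10000   -- each configured lock consumes 32 bytes
RECONFIGURE
-- Stop and restart SQL Server for the new value to take effect.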
The logwrite sleep (ms) option specifies the number of milliseconds to delay before writing to disk a buffer that is not full. This is done in the hope that other users will fill up the buffer before it is written to disk.
Minimum: -1
Maximum: 500
Default: 0
Dynamic Variable: No
The max async IO option is the number of outstanding asynchronous I/Os that can be issued. Modify this option only if you have databases that span multiple physical drives or if you are using disk striping and your database server has separate disk controllers or a smart disk controller (like a Compaq smart array) that supports asynchronous I/O. If you meet these requirements, you may be able to increase your system throughput by modifying the max async IO parameter. Using the Performance Monitor, monitor the SQL Server object counter I/O Batch Writes/sec and the Transactions/sec counter before and after you modify the parameter. You should notice an increase in both values; if you do not, reset the value back to the default. Be careful when increasing this parameter. A value that is too high can cause performance degradation by causing excessive overhead.
Minimum: 1
Maximum: 255
Default: 8
Dynamic Variable: No
The max text repl size option is a new SQL Server 6.5 feature that controls the maximum number of bytes that can be added to a text or image data column during replication in a single data modification statement (INSERT, UPDATE, or DELETE).
Minimum: 0
Maximum: 2147483647
Default: 65536
Dynamic Variable: Yes
Worker threads are used by SQL Server for things such as checkpoints, users, and network support. The max worker threads configuration parameter sets the maximum number of worker threads SQL Server can use. If the configured value is greater than the number of concurrent user connections, each user connection has its own thread; otherwise, user connections share a pool of worker threads.
Minimum: 10
Maximum: 1024
Default: 255
Dynamic Variable: Yes
The media retention option sets the number of days you want to retain backup media before overwriting it with a new dump. If you attempt to overwrite the media before the number of retention days has expired, you get a warning message.
Minimum: 0
Maximum: 365
Default: 0
Dynamic Variable: No
When the nested triggers option is set to 1, a trigger can call another trigger (that is, triggers can be nested). When this option is set to 0, calling a trigger from another trigger is prohibited.
Minimum: 0
Maximum: 1
Default: 1
Dynamic Variable: Yes
NOTE: You can have up to 16 levels of nesting.
The network packet size option was first introduced in SQL Server 4.21 for Windows NT, but with the major restriction that it could be set only using NETBEUI. With SQL Server 6.x, however, you can now set the packet size for any of the network protocols supported by SQL Server. If you have a network that supports a large packet size, you can increase the network performance with SQL Server by increasing the packet size. The default of 4096 bytes is a welcome change to the anemic 512-byte packet size used in previous versions.
Minimum: 512
Maximum: 32767
Default: 4096
Dynamic Variable: Yes
The open databases option specifies the maximum number of databases that can be open at one time on SQL Server. If you receive error messages that indicate you have exceeded the number of open databases, increase the number. The overhead is fairly insignificant (about 4K per configured open database).
Minimum: 5
Maximum: 32767
Default: 20
Dynamic Variable: No
NOTE: Unlike locks--which are not shared and are per user/per object--the open databases configuration is for the entire server, regardless of the number of users.
The open objects option specifies the maximum number of database objects that can be open at one time on SQL Server.
Minimum: 100
Maximum: 2147483647
Default: 500
Dynamic Variable: No
Setting recovery flags to 0 displays minimum information during the SQL Server recovery process at startup. When this option is set to 0, a recovery message, along with the database name, is displayed. Setting this option to 1 results in the display of more informational messages (information about individual transactions).
Minimum: 0
Maximum: 1
Default: 0
Dynamic Variable: No
SQL Server uses the recovery interval option, the database truncate log on checkpoint setting, and the amount of database activity to determine when a checkpoint should be performed to write the "dirty pages" (modified pages not yet flushed to disk). The recovery interval specified is not the amount of time between SQL Server checkpoints; it is the maximum amount of time per database that SQL Server needs to recover the database in the event of a system failure. The checkpoint process checks each database every minute to see whether the database needs to be checkpointed.
The recovery interval configuration parameter is an often-tuned parameter.
Minimum: 1
Maximum: 32767
Default: 5
Dynamic Variable: Yes
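For example, a sketch of lengthening the recovery interval on a busy server to reduce checkpoint activity (the value is illustrative); recovery interval is a dynamic variable, so no restart is required:

EXEC sp_configure 'recovery interval', 15   -- allow up to 15 minutes of recovery time per database
RECONFIGURE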
The remote access option controls the logins from remote SQL Servers. When set to 1, users from remote SQL Servers have access to the server.
Minimum: 0
Maximum: 1
Default: 1
Dynamic Variable: No
The remote conn timeout option determines the amount of time to wait before timing out inactive connections from a remote server. The timeout value is in minutes. This option has no effect on connections involved in a distributed transaction.
Minimum: -1
Maximum: 32767
Default: 10
Dynamic Variable: No
When set to 1, the remote proc trans option causes remote stored procedure calls to be carried out within a DTC (Distributed Transaction Coordinator) distributed transaction, protecting the ACID properties of the transaction.
Minimum: 0
Maximum: 1
Default: 0
Dynamic Variable: Yes
The show advanced options option displays the advanced configuration options when using the SQL Server Enterprise Manager or the sp_configure system stored procedure. Set this value to 1 to display the advanced options.
Minimum: 0
Maximum: 1
Default: 0
Dynamic Variable: Yes
The user connections option specifies the maximum number of simultaneous user connections allowed on SQL Server. If the maximum number is exceeded, you get an error and are unable to establish the new connection until one becomes available. Be careful about setting this parameter too high because each user connection takes up approximately 40K of memory overhead, regardless of whether or not the connection is used.
The user connections configuration parameter is an often-tuned parameter.
Minimum: 5
Maximum: 32767
Default: 20
Dynamic Variable: No
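For example, a sketch of raising the limit to 100 simultaneous connections; at roughly 40K each, this commits about 4M of memory up front, and a restart is required because user connections is not a dynamic variable:

EXEC sp_configure 'user connections', 100
RECONFIGURE
-- Stop and restart SQL Server; 100 connections x 40K = ~4M allocated up front.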
Parameter Overkill
I was once at a site with a well-configured machine (128M of memory for SQL Server); those at the site were wondering whether they had properly configured SQL Server for optimum performance. I issued a DBCC MEMUSAGE command to get an overview of how things looked in memory and I was astounded to see a very large amount of memory being used for user connections (40M)! It turned out that someone had bumped up the number of user connections to 1000, not realizing how SQL Server allocated the memory up front for user connections. The funny thing was they only had 30 users, with at most 60 connections at any one time. We quickly got back an extra 38M of memory for the data and procedure cache.
The user_option option allows you to set global default options for all users. Using this parameter, you can control implicit transactions, ANSI warnings, ANSI NULLs, ANSI defaults, the distinction between a single quote and a double quote, and several other options. The options set take effect during the user's login session; the user can override them by using the SET statement.
Minimum: 0
Maximum: 4095
Default: 0
Dynamic Variable: Yes
CAUTION: The following configuration parameters are considered to be advanced configuration parameters and can be seen only by turning on the show advanced options configuration option. I highly suggest leaving these parameters alone. Microsoft has done a good job setting the default values and you can easily hinder the performance of your SQL Server by incorrectly setting one of the advanced configuration options. If you do modify them, make sure that you fully understand the options and the overall impact of your changes!
The affinity mask option allows you to associate SQL Server threads with specific processors.
Minimum: 0
Maximum: 2147483647
Default: 0
Dynamic Variable: No
The cursor threshold option determines how the keyset for a cursor is generated. If the option is set to -1, all cursor keysets are generated synchronously (which is good for small cursor sets). If the option is set to 0, all cursor keysets are generated asynchronously. Otherwise, the query optimizer compares the number of expected rows in the cursor set; if the number of expected rows exceeds the cursor threshold configuration variable, the keyset is built asynchronously.
Minimum: -1
Maximum: 2147483647
Default: -1
Dynamic Variable: Yes
The default sortorder id option shows the current sort order ID installed on SQL Server. Do not use sp_configure or the SQL Enterprise Manager Configuration dialog box to change the sort order! Changing the sort order is done through the SQL Server setup program and is a major change to your SQL Server.
Minimum: 0
Maximum: 255
Default: Varies
Dynamic Variable: No
The free buffers option determines the threshold of free buffers available to SQL Server. The values automatically change as the SQL Server memory is changed. The value equals approximately 5 percent of the available memory.
Minimum: 20
Maximum: Varies
Default: Varies
Dynamic Variable: Yes
The hash buckets option sets the number of buckets available for hashing, which speeds access time when retrieving data from the data cache. The standard default is sufficient for systems with less than 160M of memory.
Minimum: 4999
Maximum: 265003
Default: 7993
Dynamic Variable: No
The LE threshold minimum option sets the minimum number of page locks required before escalating to a table lock.
Minimum: 2
Maximum: 500000
Default: 20
Dynamic Variable: Yes
Use the max lazywrite IO option to tune the priority of batched asynchronous I/O performed by the Lazy Writer process. Do not modify this option unless told to do so by Microsoft Tech Support.
Minimum: 1
Maximum: 255
Default: 8
Dynamic Variable: Yes
If the priority boost configuration value is set to 1, SQL Server runs at a higher priority on the Windows NT server.
Minimum: 0
Maximum: 1
Default: 0
Dynamic Variable: No
CAUTION: Even if you have a dedicated machine for SQL Server, do not boost the priority of SQL Server. It runs fine as a regular Windows NT service; boosting the priority can cause some unexpected problems when trying to bring down SQL Server or when trying to use other NT tools on the server.
The remote login timeout option specifies the number of seconds to wait before returning from a remote login attempt. The default of 0 specifies an infinite timeout value.
Minimum: 0
Maximum: 2147483647
Default: 0
Dynamic Variable: Yes
The remote query timeout option specifies the number of seconds to wait before timing out as a result of a remote query. The default of 0 specifies an infinite timeout value.
Minimum: 0
Maximum: 2147483647
Default: 0
Dynamic Variable: Yes
The resource timeout option specifies the number of seconds to wait for a resource to be released.
Minimum: 5
Maximum: 2147483647
Default: 1
Dynamic Variable: Yes
If the value of the set working set size option is set to 1 when SQL Server starts, Windows NT locks all the memory in the memory configuration value and the tempdb value (if it is in RAM), as a working set to increase performance. You can disable the creation of the memory working set by setting the option to 0. When this option is disabled, SQL Server asks the cache manager for memory as needed up to the value in the memory configuration parameter. Memory is still reserved for tempdb (if it is in RAM).
Minimum: 0
Maximum: 1
Default: 0
Dynamic Variable: No
The SMP concurrency option determines the number of threads SQL Server releases to Windows NT for execution. The default value for this parameter assumes a dedicated computer for SQL Server, allowing SQL Server to automatically configure itself in an SMP environment or single-processor environment for the best performance.
Minimum: -1
Maximum: 64
Default: -1
Dynamic Variable: No
The sort pages option specifies the maximum number of pages allowed to a user performing a sort. This option may require adjusting if SQL Server performs large sorts.
Minimum: 64
Maximum: 511
Default: 64
Dynamic Variable: Yes
The spin counter option specifies the maximum number of attempts a process will make to obtain a resource.
Minimum: 1
Maximum: 2147483647
Default: 10000
Dynamic Variable: Yes
The time slice option specifies the amount of time a user process can pass a yield point without yielding.
Minimum: 50
Maximum: 1000
Default: 1000
Dynamic Variable: No
SQL Server 6.x ships with an exciting new feature called Parallel Data Scan, also referred to as Asynchronous Read Ahead. The read-ahead technology (RA for short) decreases the time required to perform logical sequential data reads, which translates to improved performance for table scans, index creation, DBCC commands, UPDATE STATISTICS, and covered queries. Microsoft claims that, depending on the hardware platform, databases, and so on, RA technology can boost performance up to a factor of three on some queries over performance in version 4.21. The idea behind the parallel data scan is simple. Using separate threads, SQL Server reads extents (8 data pages or 16K) into memory before the thread running the query needs the extent. When the extent is needed to satisfy the query, the data pages are already in the memory cache thanks to the read-ahead. Figure 19.4 gives you an idea of how RA works.
A user issues a query that performs a table scan on a table called big_table. SQL Server gets the query request and begins to process the query. When SQL Server goes to retrieve the first data page, SQL Server first checks to see whether the data page is already in the data cache (memory). If the page is not in memory, SQL Server reads the page from disk and places it in the data cache.
Figure 19.4.
A parallel data scan.
Regardless of whether RA is available, SQL Server always checks the data cache first
before reading the data page. So, how does the RA technology fit in? First, a Read-Ahead
Manager operates in the background to manage several threads that perform the asynchronous
data reads. SQL Server also has several configuration variables that determine how
the read ahead will work. For example, there is a parameter that determines the number
of read-ahead threads that exist and the number of slots handled per each thread.
There also is a parameter that determines how many cache misses can occur in a sequential
data operation before the RA Manager assigns a thread to prefetch the data into the
data cache.
Look again at Figure 19.4. The query Select * from big_table is being executed by SQL Server. SQL Server checks to see whether the data page is in the data cache. The page is not in the cache, so SQL Server goes out to disk. The RA Manager counts 1 cache miss and compares the number of misses to the SQL Server configuration parameter RA cache miss limit, which defaults to 3 (the value used for this example).
Because the number of cache misses is less than the configuration parameter, nothing happens. The query continues to run and SQL Server checks to see whether the next data page is in the data cache. Again, it's not, so SQL Server reads the page and the RA Manager checks the number of cache misses against the configuration value.
This process continues until the number of cache misses exceeds the configuration value. Then the RA Manager checks thread 1 for an empty slot and assigns the slot to handle read-aheads for the big_table query. Thread 1 then begins to prefetch an extent (8 data pages, or 16K) into the data cache, as shown in Figure 19.5.
Figure 19.5.
The Read-Ahead Manager using a thread to prefetch data.
When the query checks to see whether the next page is in the data cache, the answer
is yes. The thread performing the query does not have to wait for SQL Server
to read the data page into memory.
The RA thread prefetches up to the configured number of extents into the cache before becoming idle and maintains that number of extents ahead of the operation. If another request comes along and thread 1 has an empty slot, the new request is also assigned to thread 1. Thread 2 is not assigned any requests until all the slots in thread 1 are filled. The next sections examine the different configuration parameters for RA technology.
CAUTION: SQL Server 6.x documentation recommends that you do not modify the RA cache hit limit parameter unless instructed to do so by your primary SQL Server support provider.
The RA cache hit limit option specifies the number of cache hits that a read-ahead request can have before it is canceled.
Minimum: 1
Maximum: 255
Default: 4
Dynamic Variable: Yes
CAUTION: SQL Server 6.x documentation recommends that you do not modify the RA cache miss limit parameter unless instructed to do so by your primary SQL Server support provider.
The RA cache miss limit option specifies the number of cache misses that can occur before the Read Ahead Manager assigns a thread and slot to begin to prefetch data.
Minimum: 1
Maximum: 255
Default: 3
Dynamic Variable: Yes
The RA delay option sets the amount of time in milliseconds to wait between the time the read-ahead event is set and when the read-ahead thread is awakened by the operating system. This configuration parameter is required only in non-SMP systems.
Minimum: 0
Maximum: 500
Default: 15
Dynamic Variable: Yes
The RA prefetches configuration parameter specifies the number of data extents the read-ahead thread prefetches in the data cache before becoming idle.
Minimum: 1
Maximum: 1000
Default: 3
Dynamic Variable: Yes
The RA slots per thread option specifies the number of slots maintained by each RA thread. A slot corresponds to a read-ahead request and each RA thread simultaneously manages the number of RA slots per thread.
Minimum: 1
Maximum: 255
Default: 5
Dynamic Variable: No
The RA worker threads option specifies the number of threads available to handle read-ahead requests.
Minimum: 0
Maximum: 255
Default: 3
Dynamic Variable: No
This chapter examined all the SQL Server configuration parameters. Following are some of the more important points to remember about tuning and configuring SQL Server:
The answer to the question, "Which knobs do I turn?" is "Not many!"
The nice thing about SQL Server is that many of the default values provide optimal
SQL Server performance for most database installations right out of the box. I have
noticed that, with the newer versions of SQL Server, Microsoft has increased the
default values for several configuration parameters to meet more real-world needs
(and probably to reduce the number of calls to tech support). For example, the default
size of the procedure cache is now 30 instead of 20; the configuration value
devices has been removed (the old default value was 10), increasing the
number of devices to the previous versions' configured maximum of 255. These are
just a few of the changes; if you are familiar with previous versions of SQL Server
or a Sybase SQL Server, I'm sure that you will also notice the differences. Tune
your SQL Server installation for the best possible performance but do not forget
that tuning is a many-phase process. It includes the hardware, the installation of
the operating system, the installation and tuning of SQL Server, and the overall
design of the databases and applications.