Teach Yourself Microsoft SQL Server 6.5 in 21 Days



Day 20

Performance Tuning and Optimization


On Day 19, "Data Distribution (Replication)," you looked at data distribution, including replication. SQL Server can replicate data to multiple servers, with significant replication functionality built into the product.

Now take a look at SQL Server's performance tuning options. There are many things you can do to affect your SQL Server database's performance, most of which are beyond the scope of this chapter. Don't let that stop you from continuing on today. Most of the performance tuning you can do rests on a good logical model that has been properly implemented. Indexing is also extremely important, as you saw on Day 13, "Indexing."

This is not to say that what you are doing today isn't important. You can make important, sometimes dramatic, performance improvements by tuning both Windows NT and SQL Server options. Setting up your hardware correctly is critical to your database's reliability and performance. Even more performance gains are based on how the database is designed and built.

The day begins by examining hardware selection and tuning, then examines tuning parameters for Windows NT as well as performance monitoring for Windows NT. You then examine SQL Server tuning parameters and SQL Server performance monitoring, both with the Windows NT Performance Monitor and SQL Enterprise Manager.

Hardware Selection and Tuning

The first step is the proper selection of hardware. People often ask what the most important component of a SQL Server system is. The answer is that a well-functioning machine requires components that work well together. If you had to pick just one component, though, it would be memory: SQL Server rarely has too much RAM. Now examine each component in detail.

Memory

SQL Server uses RAM to cache data, index, and log pages in memory, as well as compiled queries and stored procedures. You'll focus on memory in more detail shortly and then examine several options for tuning it. A good starting point for systems being deployed with SQL Server is 128MB of RAM. If you can afford more, memory is a good place to invest your money. For small implementations of SQL Server (and small is a hard term to define), 64MB of RAM may well be enough, but given today's memory prices, starting with the larger amount is smart.

Processor

As with any system, it never hurts to have plenty of CPU power. Although SQL Server is not necessarily the most CPU-intensive program, it certainly can use the CPU, depending on the workload being run. SQL Server and Windows NT can both take advantage of multiple processors. SQL Server 6.5 can currently take advantage of up to four processors. The Enterprise Edition of SQL Server 6.5 is supposed to be able to support eight processors. SQL Server is programmed to run a workload spread over multiple CPUs and to provide thread parallelization (running SQL Server simultaneously on multiple CPUs).

Network

Although often overlooked, the network is critical to the success of your SQL Server deployment. The first consideration is what kind of network card you should use in your server. It is recommended that you use a 32-bit bus mastering network card. Not only does it have the best throughput in terms of raw amounts of data, but it also tends to have the least impact on the CPU. Another consideration is the type of network itself. Most networks are still 10Mb (megabit) Ethernet, although variations of 100Mb networks--as well as ATM, FDDI, and so on--are becoming more popular. Remember that all of your database activity has to be sent through the network. If you have a 10Mb Ethernet network card, you can expect throughput of about 1.2MB per second. If you want more than that amount of data to be sent out of SQL Server, you need to add a second, or even a third, network card.



NOTE: Make sure that you put each network card on a separate physical network. If you place multiple network cards on a single network, you get duplicate name errors, and some of the network cards--or worse, all network cards--could be disabled. This is because your SQL Server machine would announce its presence on the network, one network card at a time. When the second network card announced its server name, the first card would announce that it already had this name. Because NETBIOS has a flat name space, all names must be unique on the network. Therefore, the secondary network cards return errors about name problems and function improperly. Again, simply place each network card on its own subnet to avoid this problem.

Disks

Disks are a critical component of any server. That's where the data lives; if you want access to your data, you typically need to read it from your hard disks. The speed at which data is written back to the disks is also important, and then there are bandwidth considerations. You first need to examine what kind of disks you should use. More small disks are better than fewer large disks. SCSI disks tend to perform better than IDE disks but cost a little more; however, they're worth the money. It is also worthwhile to use multiple SCSI cards in your computer, with disks distributed among them. A newer specification, SCSI-3, may provide even further performance and speed enhancements. Some vendors are calling this Ultra SCSI. Make sure that all components of the SCSI system are the same (all fast/wide, for instance). Mismatches can cause devices to default to the slowest possible speed.

Another question that comes up frequently is the recommended layout of disks. The system files and the transaction log files should be duplexed; this means that they are mirrored, preferably on different SCSI controllers. The data files have two options, either of which works. The more expensive option is to build stripe sets of disks and then mirror the stripes. This can be done by using hardware striping and then mirroring those stripes with Windows NT. That way, any disk in the system can be lost with no performance loss. You can also set up the disks using RAID-5. This is the less expensive option; however, your performance won't be as good, and you see a significant loss of performance if you lose one of the disks. Recommended intermediate and advanced configurations are shown in Figures 20.1 and 20.2.

With all of this put together, there's quite a bit to consider. Most well-known server vendors do an excellent job of putting together packages with all of this in mind.

Figure 20.1. A high fault tolerant/moderate performing server configuration.

Windows NT Configuration Parameters

Most of the Windows NT configuration options that need to be set are handled by default by the setup program. The most important configuration option is Maximize Throughput for Network Applications (set as a property of the Server service in the Control Panel's Network applet). Take a look at Figure 20.3. The SQL Server setup program sets this value correctly, and you should not change it.

It is recommended that SQL Server be installed on a member server (or standalone server, they're really the same thing). SQL Server functions on a domain controller, but it is not a recommended implementation. Domain controllers have additional demands on their memory, which can conflict with the memory needs of SQL Server.

This brings up the topic of additional services running on the SQL Server machine. Being a domain controller is mostly a function of another service running on the server. Other services, such as DHCP Server, WINS server, DNS server, or other BackOffice applications, most likely interfere with SQL Server's performance. It is recommended that you not run these services on the same machine with your database.

Another consideration is the Windows NT paging file. You should place the paging file on a disk not being used by SQL Server. Otherwise, usage of the page file might interfere with SQL Server.

Figure 20.2. A high fault tolerant/advanced performing server configuration.

Figure 20.3. The Network icon's Services tab from control panel.

Using RAID to protect your data was discussed earlier. You should look for a hardware implementation of mirroring or RAID-5 rather than using Windows NT Server's RAID capabilities. Most hardware implementations will outperform NT server's built-in RAID capabilities.

NT Performance Monitor Counters

This book does not attempt to provide a thorough discussion of Windows NT performance monitoring or performance counters. If you want to know more, pick up Sams' Windows NT 4 Server Unleashed, by Jason Garms (ISBN 0-672-30933-5), for a good discussion of Windows NT performance monitoring. However, there are a few counters that are very important for SQL Server.

The objects and counters mentioned here can be monitored with the Windows NT Performance Monitor tool. Now look at each of the four types of resources you started with today.

Memory

Memory is the first resource to examine on a machine dedicated to SQL Server.

In general, you should keep page faults to a minimum. Page faults occur when a page was needed in memory but wasn't found there and had to be fetched from disk. You can monitor this with the MEMORY object, Page Faults/Sec counter. The value won't be zero, but it should not be high. What's a high value? Whatever is not normal for your server. In general, keep this value as close to zero as possible.

Another memory counter to look at is the MEMORY object, Available Bytes counter. This value should not fall below 4MB. If it does, Windows NT attempts to reclaim memory from applications (including SQL Server) to free up some memory. If this value frequently falls below 4MB, it may indicate that you have assigned too much memory to SQL Server.

Processor

The counter most commonly monitored is the PROCESSOR object, % Processor Time counter. This tells you how busy the CPU is overall. There is an instance number, starting at 0, for each CPU in the system.

Network

Network monitoring can be done using either the Performance Monitor tool or a network monitoring tool, such as Microsoft Network Monitor. There are too many network objects and counters to mention here. You can monitor objects based on each separate network protocol, such as NetBEUI, TCP, UDP, IP, and so on. You look at some SQL Server-specific counters shortly.

Disk

You can monitor disk usage only after you have turned on the Windows NT disk counters. You enable these with the command diskperf -y from a Windows NT command prompt, and then restart the server. Otherwise, all the disk counters reflect a value of 0.

After you've enabled the counters, you can monitor the LogicalDisk object, Avg. Disk Queue Length counter. If this value is consistently higher than the number you've configured for max async IO (which defaults to 8), you may want to increase the max async IO setting. On systems with multiple disks, increasing it can yield significant performance benefits.
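If monitoring suggests a higher setting is warranted, you can change it with sp_configure. The value of 32 shown here is purely illustrative; test on your own hardware, and remember that the new value does not take effect until you stop and restart the MSSQLServer service.

sp_configure 'max async IO', 32   /* value is an example only */
go
reconfigure
go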

You may also choose to monitor the PhysicalDisk object. The difference between this and the LogicalDisk object is that the PhysicalDisk object monitors physical hard drives, whereas the LogicalDisk object tells you about drive letters. Hence, if you have two logical disks on a single physical drive, you can use the PhysicalDisk counters to show how much activity is occurring on the physical disk, as well as break it down on a logical-disk-by-logical-disk basis.

SQL Server Tuning

There are a variety of tuning options available with SQL Server. If you turn on the show advanced options setting with sp_configure, you see quite a few of them.

sp_configure 'show advanced options', 1
Configuration option changed.  Run the RECONFIGURE command to install.
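As the message indicates, the new setting doesn't take effect until you run RECONFIGURE:

reconfigure
go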

Now run sp_configure by itself.

sp_configure

name                    minimum     maximum     config_value run_value
----                   --------    --------    ------------ -----------
affinity mask           0           2147483647  0            0
allow updates           0           1           0            0
backup buffer size      1           32          1            1
backup threads          0           32          5            5
cursor threshold        -1          2147483647  -1           -1
database size           2           10000       2            2
default language        0           9999        0            0
default sortorder id    0           255         52           52
fill factor             0           100         0            0
free buffers            20          524288      409          409
hash buckets            4999        265003      7993         7993
language in cache       3           100         3            3
LE threshold maximum    2           500000      200          200
LE threshold minimum    2           500000      20           20
LE threshold percent    1           100         0            0
locks                   5000        2147483647  5000         5000
LogLRU buffers          0           2147483647  0            0
logwrite sleep (ms)     -1          500         0            0
max async IO            1           1024        8            8
max lazywrite IO        1           1024        8            8
max text repl size      0           2147483647  65536        65536
max worker threads      10          1024        255          255
media retention         0           365         0            0
memory                  2800        1048576     8192         8192
nested triggers         0           1           1            1
network packet size     512         32767       4096         4096
open databases          5           32767       20           20
open objects            100         2147483647  500          500
priority boost          0           1           0            0
procedure cache         1           99          20           20
Protection cache size   1           8192        15           15
RA cache hit limit      1           255         4            4
RA cache miss limit     1           255         3            3
RA delay                0           500         15           15
RA pre-fetches          1           1000        3            3
RA slots per thread     1           255         5            5
RA worker threads       0           255         3            3
recovery flags          0           1           0            0
recovery interval       1           32767       5            5
remote access           0           1           1            1
remote conn timeout     -1          32767       10           10
remote login timeout    0           2147483647  5            5
remote proc trans       0           1           0            0
remote query timeout    0           2147483647  0            0
remote sites            0           256         10           10
resource timeout        5           2147483647  10           10
set working set size    0           1           0            0
show advanced options   0           1           1            1
SMP concurrency         -1          64          0            1
sort pages              64          511         64           64
spin counter            1           2147483647  10000        0
tempdb in ram (MB)      0           2044        0            0
time slice              50          1000        100          100
user connections        5           32767       25           25
user options            0           4095        0            0
(1 row(s) affected)

ANALYSIS: Most of the configuration parameters shown here can have some impact on performance. Perhaps the most important options are related to how memory is used and configured for SQL Server. To start, you should examine a way to view how memory is used. DBCC MEMUSAGE is the command that you need to run:

DBCC MEMUSAGE

Yes, that's really it. There is nothing more to say about this command. It couldn't be simpler.

Here's the output of DBCC MEMUSAGE on one computer, corresponding to the previous sp_configure output. The output is somewhat lengthy, but has been trimmed down somewhat.

DBCC MEMUSAGE

Memory Usage:

                             Meg.         2K Blks     Bytes
      Configured Memory:     16.00        8192        16777216
              Code size:      2.45        1254         2568192
      Static Structures:      0.25         129          263232
                  Locks:      0.29         147          300000
           Open Objects:      0.12          61          124000
         Open Databases:      0.02          13           26080
     User Context Areas:      1.01         515         1054326
             Page Cache:      9.38        4804         9836864
           Proc Headers:      0.13          68          138226
        Proc Cache Bufs:      2.21        1134         2322432
Buffer Cache, Top 20:
           DB Id     Object Id        Index Id    2K Buffers
           1         5                0           176
           1         3                0           64
           1         1                0           30
           5         5                0           20
           1         99               0           16
           1         6                0           10
           1         1                2           8
           1         5                1           8
           1         2                0           7
           1         6                1           4
           5         1                0           4
           5         2                0           4
           5         3                0           4
           5         5                1           4
           1         36               0           3
           1         704005539        0           3
           2         2                0           3
           5         1                2           3
           5         99               0           3
           1         37               0           2
Procedure Cache, Top 16:
Procedure Name: sp_MSdbuserprofile
Database Id: 1
Object Id: 107147427
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.171616 Mb, 179952.000000 bytes, 90 pages
Procedure Name: sp_configure
Database Id: 1
Object Id: 1952009985
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 1
Size of plans: 0.051842 Mb, 54360.000000 bytes, 27 pages
Procedure Name: sp_helpdistributor
Database Id: 1
Object Id: 171147655
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 1
Size of plans: 0.021484 Mb, 22528.000000 bytes, 12 pages

ANALYSIS: Start with the top section first, as it has a lot of information in a small amount of space.

Memory Usage:

                             Meg.         2K Blks        Bytes
      Configured Memory:     16.00        8192        16777216
              Code size:      2.45        1254         2568192
      Static Structures:      0.25         129          263232
                  Locks:      0.29         147          300000
           Open Objects:      0.12          61          124000
         Open Databases:      0.02          13           26080
     User Context Areas:      1.01         515         1054326
             Page Cache:      9.38        4804         9836864
           Proc Headers:      0.13          68          138226
        Proc Cache Bufs:      2.21        1134         2322432

ANALYSIS: Configured Memory is the first line. This corresponds to the listed MEMORY setting. The configuration parameter is specified in 2K pages, so the setting of 8192 indicates 16MB of memory is assigned to SQL Server on this machine.
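For example, to assign 32MB to SQL Server, you would set memory to 16,384 2K pages. The value shown is only an example; the change takes effect after you stop and restart the MSSQLServer service.

sp_configure 'memory', 16384   /* 16,384 2K pages = 32MB */
go
reconfigure
go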

Microsoft has recommended settings for memory, which you can find in Microsoft's knowledge base or in Technet (in article number Q110983). These settings are recommended only for machines that are not running anything except SQL Server. These settings may be a little high, especially if the machine is doing anything besides running SQL Server. Tuning memory precisely involves monitoring and adjusting over a period of time and is somewhat machine-specific.

Now back to the chart. The next line in the output is Code size. Code size is the overhead of the SQL Server executable and other fixed requirements. There is no tuning that the authors know of to change this value.

Locks is the fourth item in the list. This parameter is set with, surprisingly enough, the LOCKS parameter. The default setting is 5000 locks, which is probably sufficient for most small installations. Each shared, update, or exclusive lock used by the server must come from this pool of available locks. If you run low on locks, a message to that effect appears in the Windows NT event log. Each lock takes 60 bytes, for a total of 300,000 bytes by default. Increase this value if your server has moderate-to-heavy activity. You can monitor lock usage using the SQL Server Performance Monitor object SQLServer-Locks with the counter Total Locks. You look at how to see this counter later in this chapter. The proper value for this parameter is one high enough that you never run out.
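For example, here is a sketch of raising the lock pool to 10,000 locks; the number is only illustrative, and the new value takes effect after the MSSQLServer service is restarted.

sp_configure 'locks', 10000   /* 10,000 locks * 60 bytes = 600,000 bytes of memory */
go
reconfigure
go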

Open Objects and Open Databases are the next two parameters. They again correspond to configuration values of the same name. The defaults are 500 and 20, respectively. Each of these takes a relatively small amount of memory and should be configured similarly to locks. If you don't get any errors, you probably have the proper amount set for these options.

User Context Areas is something to which you should pay attention. This value is directly related to USER CONNECTIONS. Each user connection you configure requires a varying amount of memory, roughly 50KB each, up to the number of max worker threads (another configuration option set with the sp_configure command). If you do the math, it averages to about 42KB per connection on this server. The exact figure seems to vary from one service pack to another; these settings came from a server with service pack 3 installed (build 258). Relative to the other configuration parameters, it's easy to use large amounts of memory here. For instance, assume that you want to allow 100 concurrent connections to SQL Server, so you set user connections to 100. Even using the smaller figure of 42KB per connection, that's more than 4MB of RAM taken away from SQL Server just to allow users to connect to the server. SQL Server can be configured for as many connections as you have memory (and money), but avoid overextending your resources.
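Continuing the example, here is how you might configure the 100 connections discussed in the text. Again, the value is illustrative, and the change requires restarting the MSSQLServer service.

sp_configure 'user connections', 100   /* roughly 100 * 42KB = about 4MB of RAM reserved */
go
reconfigure
go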

Skip Page Cache for a moment, and look at Proc Headers and Proc Cache Bufs. Added together, they relate to the PROCEDURE CACHE configuration option. On this server the parameter is set to 20, meaning 20 percent of remaining memory is used to store compiled versions of ad hoc queries, triggers, and stored procedures. You might ask where that 20 percent comes from. Taking the value of MEMORY (16MB earlier) and subtracting all the values except the last three (Proc Headers, Proc Cache Bufs, and Page Cache) leaves 12,441,386 bytes, or 11.8MB. Twenty percent of that value should be the total of these two items, or 2,488,277 bytes. The actual value comes out to 2,460,658, which is pretty close. It's rare to see them match up exactly.

To understand how memory works for SQL Server, examine Figure 20.4.

Figure 20.4. Memory configuration for SQL Server.

The default value for procedure cache is 30, meaning that 30 percent of the remaining memory is used to hold the objects listed previously. For anything but the smallest installations of SQL Server, this value may be a little high. You can monitor it using SQL Server Performance Monitor. The counter to monitor is the SQLServer-Procedure Cache object, Max Procedure Cache Used % counter. This tells you whether the amount you've assigned is being used.
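If monitoring shows that the configured percentage is more than you need, you might lower it with something like the following. The value of 15 is an example only; read the warning that follows before choosing a number, and restart the MSSQLServer service for the change to take effect.

sp_configure 'procedure cache', 15   /* percent of remaining memory reserved for procedures */
go
reconfigure
go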


WARNING: Be careful not to set the value for Procedure Cache too low. If you do, you may receive error 701:

There is insufficient system memory to run this query.

For that matter, you may not even be able to start Enterprise Manager if you set it too low. One of the first things Enterprise Manager does is run several stored procedures when you log in. If there's not enough memory to store those procedures in the procedure cache, you can only log on with ISQL or ISQL/w. For systems with less than 64MB of RAM, try not to set this value below 10 percent.


After all that, whatever is left is used as Page Cache. Page Cache is the memory actually used to hold data, index, and log pages. It's this number that should be optimized most of the time. In this example, out of the 16MB assigned to SQL Server, only 9.38MB is actually being used for the data/page cache (you can use these two terms interchangeably).

Buffer Cache, Top 20:

       DB Id     Object Id     Index Id    2K Buffers
       1         5             0           176
       1         3             0           64
       1         1             0           30
       5         5             0           20
       1         99            0           16
       1         6             0           10
       1         1             2           8
       1         5             1           8
       1         2             0           7

This section tells you about the 20 largest objects in the Page Cache referenced earlier (note that the naming conventions aren't even consistent within the DBCC MEMUSAGE). The DB ID is the database number (1 is master on all installations of SQL Server). You can always look up the DB ID values from the sysdatabases table in master, or run the sp_helpdb system stored procedure to find these values.

The Object ID can be found in the sysobjects table in each database. For instance, Object ID 5 in your master database is the sysprocedures table. This report indicates that it takes 176 2KB pages in memory, or about 350KB. You can look at the value over time; it changes. The largest objects on your server are most likely your production user tables. This should help give you an idea of how memory is being used over time.
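For example, you could translate the IDs from the sample output into names with queries like these (the ID values are taken from the report above):

select name from master..sysdatabases where dbid = 5    /* which database is DB Id 5? */
select name from master..sysobjects where id = 5         /* which object is Object Id 5 in master? */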

Procedure Cache, Top 16:

Procedure Name: sp_MSdbuserprofile
Database Id: 1
Object Id: 107147427
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.171616 Mb, 179952.000000 bytes, 90 pages

The last section is a reflection of which objects are in the procedure cache, including how many 2K pages are being taken by the trees and the plans in memory. The trees are the precompiled versions of the procedures (copies of the rows from sysprocedures in each database), and the plans are the compiled and optimized versions of these trees. You can see which stored procedures are taking the most memory. See the previous comments on which counter to monitor with the SQL Server Performance Monitor tool.
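If you prefer a Transact-SQL view of overall procedure cache usage, the DBCC PROCCACHE command returns a one-line summary of the cache size and the buffers in use; treat this as a supplemental check rather than a replacement for the Performance Monitor counters.

DBCC PROCCACHE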

tempdb in RAM

tempdb in RAM is one more configuration option that should be discussed. tempdb is the database used to hold temporary tables, intermediate query results, and other data that is temporary in nature. By default, it is 2MB in size, residing on the master device. You have the option of configuring tempdb to reside entirely in RAM. If you set this option, you are specifying the number of megabytes of memory you would like to reserve just for tempdb, in addition to the memory requirements of SQL Server. If you assign memory to tempdb, you should reduce the amount of memory assigned to the SQL Server memory configuration parameter. The default value of 0 means that tempdb resides on disk.

When you change this configuration parameter, and then stop and restart the MSSQLServer service, tempdb resides in memory, in a virtual device known as TEMP_DB. For that reason, you should never use that name on any of your devices. The device's size reflects the size of the configuration parameter. You can alter the tempdb database while it's still in memory. You would run a command such as the following to increase the size of tempdb while it is still in memory.

ALTER DATABASE TEMPDB on TEMP_DB = 5

You can alter the size of the tempdb database up to 10 times while it's still in memory. If you need to increase the size after that, you must readjust the configuration parameter and then stop and restart the MSSQLServer service.

Placing tempdb in RAM can positively impact performance on servers with sufficient memory to allow this option. Only specific testing on your server will show whether you benefit from using it. You should probably not even consider it if your server has less than 128MB of RAM. Remember that any memory you assign to tempdb is essentially taken away from SQL Server's data and procedure cache. Most of the time, tempdb's pages are cached in memory just like any other database's; this option simply forces the entire database to remain in memory. If you do a significant amount of ORDER BY, GROUP BY, and join processing, or make extensive use of temporary tables and stored procedures, setting tempdb in RAM may significantly speed these operations.

The downside of setting tempdb in RAM is that it is a fixed size. If you set tempdb in RAM to 10MB and you later need more space, you must increase the size with the ALTER DATABASE command. It does not dynamically grow in size. If you have tempdb on disk, you have the ability to set it to a larger size so that you do not run out of space. Running out of space in tempdb has a very negative effect on query processing.

After you've placed tempdb in RAM and would like to put it back on disk, set the value of tempdb in RAM back to 0. At that time, it is resized to 2MB and placed on the default device with the lowest name, alphabetically speaking. The master device is the only device that has the default device option turned on by default. If you turn this option off and don't enable it on any other device, tempdb remains in RAM sized at 2MB. You won't necessarily see any error messages indicating that something didn't work. Instead, if you look at the configuration screen in Enterprise Manager, you see something like that shown in Figure 20.5.

Figure 20.5. Server Configuration/Options Configuration Tab.

tempdb is still in RAM at 2MB, even though the tempdb in RAM option is set to 0. To fix this situation, set a device to be a default device, and then stop and restart the MSSQLServer service.


NOTE: One other performance option might be considered here.

You may want to move tempdb off to its own device to avoid excessive I/O on the master database device. Here are the directions:

1. Create a new device (preferably on another physical disk).

DISK INIT
NAME = 'tempdb_device',
PHYSNAME = 'd:\mssql\data\tempdb_device.dat',
VDEVNO = 100,
SIZE = 51200 /* or whatever size you need (this is 100MB) */

2. Move tempdb into RAM.

exec sp_configure "tempdb in ram", 2
go
reconfigure with override
go

3. Stop and restart the MSSQLServer service.

From a command prompt, type NET STOP MSSQLSERVER.

Then type NET START MSSQLSERVER.

Alternatively, you can use SQL Enterprise Manager to stop and restart the server.

4. Next, turn the Default Device option off for the master device, and on for the tempdb_device you created.




exec sp_diskdefault master, defaultoff
go
exec sp_diskdefault tempdb_device, defaulton
go

(You can also use SQL Enterprise Manager to do this.)

5. Now configure tempdb to be on disk again.

exec sp_configure "tempdb in ram",0
go
reconfigure with override
go

6. Stop and then restart the MSSQLServer service as done in Step 3.

7. Now, tempdb is 2MB on the tempdb_device device. Alter the database to fill the rest of the device. For example:

ALTER DATABASE tempdb ON tempdb_device = 98
/* to fill up the 100MB device created earlier */

tempdb is now 100MB and exists only on the tempdb_device device. Do not reset the default device options, or tempdb may revert back to the master device if you ever change the tempdb in RAM option again.

If you want to undo this change (for instance, to move tempdb to another device), simply set the master device Default Device option on, and the tempdb_device Default Device option off, and stop and restart the MSSQLServer service. tempdb reverts to being 2MB in the master device.


SQL Server Performance Monitor Counters

Now focus on the SQL Server-specific counters from the previously mentioned objects.

Memory

Look at the SQLServer Object, Cache Hit Ratio counter to determine how efficiently memory is being used on your server. This value reflects how frequently data was found in memory when SQL Server went to find it. This value should be higher than 90 percent, preferably as close to 100 percent as possible.
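If you would rather check cache statistics from a query window than from Performance Monitor, DBCC SQLPERF can report similar information; consider this a supplemental option rather than a replacement for the counter.

DBCC SQLPERF (LRUSTATS)   /* reports cache hit ratio and related buffer statistics */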

You may also want to monitor the SQLServer Object, Cache - Number of Free Buffers counter to view how much free memory space is still available that has been assigned to SQL Server.

One other counter you may want to view is the SQLServer Object, Max Tempdb Space Used (MB) counter. This helps you keep track of how full tempdb gets.

Processor

If you want to monitor how much CPU SQL Server is using, you could monitor the PROCESS object, % Processor Time counter, with an instance of SQLSERVR. You can monitor how much processor time the SQLExecutive service is using by monitoring the instance of SQLEXEC.

Network

If you monitor the SQLServer Object, NET - Network Reads/Sec counter, it tells you how much data is coming in from the network. The NET - Network Writes/Sec counter reflects how much data is being transmitted out of SQL Server.

Disk

There are a significant number of IO counters to monitor for the SQLServer Object. All of them, oddly enough, begin with I/O. For instance, the I/O - Transactions/sec. counter measures the overall throughput of the server. The I/O - Page Reads/sec counter monitors read activity, and I/O - Single Page Writes/sec counter monitors how much write activity is occurring on the server. You may want to compare this to the I/O - Log writes/sec counter. If single page writes are occurring that are not the result of Log writes, this may indicate that your server doesn't have enough memory.
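As with the cache statistics, a rough Transact-SQL view of I/O activity is also available; again, this is a convenience rather than a substitute for the Performance Monitor counters.

DBCC SQLPERF (IOSTATS)   /* reports cumulative read and write statistics */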

Current Activity Window

Now that you've looked into all of these performance monitoring counters, look at an easier way to get a good feel for the activity on your server: the Current Activity window inside SQL Enterprise Manager. You can display this window by clicking the bar graph icon on the toolbar or by selecting Current Activity from the Server menu. Figure 20.6 shows the Current Activity window.

All of the red lines here (which indicate SQL Server locks) are completely normal when Enterprise Manager is running. If you are not sure what an icon means, click the last icon to the right on the icon bar to see the legend (see Figure 20.7). This window gives you a quick snapshot of what activity is occurring on your server. You can see what activity is occurring if you double-click a connection.

Figure 20.6. The Current Activity window.

Figure 20.7. The Activity Legend in the Current Activity window.

Try the following:

Open ISQL/w and run the following query:

USE PUBS
GO
SELECT * FROM AUTHORS
GO

Now, switch over to Enterprise Manager and start the Current Activity window. Find the connection labeled something like this:

11:pubs.dbo/AWAITING COMMAND/SQLGUY (Microsoft ISQL/w)

Your server name and session number will likely be different, but what you are looking for is your ISQL/w session. Now double-click that entry. You should see a Process Details window similar to that shown in Figure 20.8.

Figure 20.8. The Process Details window.

You can directly observe the last SQL statement submitted by this connection, kill the connection, or send a message to that connection.


NOTE: There is nothing magical about Enterprise Manager. If you turn on SQL Trace and run the previous sequence, you can see everything that the Graphical Interface has shown you. Try running the stored procedure sp_processinfo and see if you recognize the results compared to the Current Activity window.

You can use the DBCC INPUTBUFFER command to view the last SQL statement sent to SQL Server by a connection. You can even see the last query results with the DBCC OUTPUTBUFFER command.


DBCC INPUTBUFFER (spid)
DBCC OUTPUTBUFFER (spid)

spid is the system process ID. You can find this number either in the Current Activity window or by running sp_who or sp_processinfo.
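For example, if sp_who shows your ISQL/w session as spid 11 (the number here is hypothetical), you could examine its last statement like this:

sp_who
go
DBCC INPUTBUFFER (11)   /* 11 is the spid of the connection you want to inspect */
go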

Summary

No single day of study could possibly cover all the SQL Server performance tuning issues. Microsoft offers a five-day class on the topic, and even after that class, people want to learn more about the product. This chapter should have given you enough information to get started with performance tuning. For additional information, see Microsoft SQL Server 6.5 Unleashed from Sams Publishing.

Q&A

Q What counter measures whether I have enough memory allocated to SQL Server?

A
The counter most directly related to this setting that gives the most global view is the Cache Hit Ratio counter.

Q If I would like a quick snapshot of activity on SQL Server, where should I look?


A
In the Current Activity window of Enterprise Manager.

Q What kind of network card should I install in my server?


A
A 32-bit bus mastering network card.

Workshop

The Workshop provides quiz questions to help you solidify your understanding of the material covered, as well as exercises to provide you with experience using what you've learned. Try to understand the quiz and exercise answers before continuing onto the next day's lesson. Answers are provided in Appendix B, "Answers."

Quiz

1. In terms of overall performance impact, which component of the system deserves the most attention for performance tuning?

2. Why should the transaction logs be kept on different disks than the Windows NT paging file?

3. If you were concerned that you had over-allocated memory for SQL Server, what Windows NT counter would you monitor?

4. What parameter would you monitor to view Windows NT paging?

5. Which hardware solution provides the best performance but doesn't necessarily provide fault tolerance?

Exercises

Performance tuning usually involves monitoring a system with many users. Because you may or may not have such a system available to you, this exercise does its best to illustrate what a multiple-user system might look like.

1. Start several ISQL/w sessions and run stored procedures and queries from the windows.

2. Start SQL Enterprise Manager and monitor the activity with the Current Activity window. Observe what information can be seen and try each of the options, such as the Kill command, and observe the behavior of the client programs (ISQL/w in this case).



