This chapter presents several issues to keep in mind as you administer your SQL Server system. For the day-to-day management of SQL Server, you need to know about many different concepts, routines, and ideas.
The everyday operation of SQL Server requires that you spend some time managing the database engine so that the system operates at its fullest potential. By staying on top of the system, you can ensure the optimum response times for users, and you can prevent some common problems with the system altogether.
Some of the concepts in this chapter relate to strictly administrative tasks, and some relate more to management of performance and system tuning. Understanding these topics and the correct definition of your system can ensure that your SQL Server installation succeeds.
Checkpoints are a mechanism that SQL Server uses to commit changes to a database or configuration option at a known, good point in time. When you configure or modify the server setup and have to restart it, you might want to initiate the checkpoint process manually.
When a checkpoint is issued, whether by a manual intervention process or by naturally occurring server-based processes, all dirty pages are saved to disk. A dirty page is one containing updates not yet applied to the database's disk image. Checkpoints normally occur approximately every 60 seconds unless you intervene. The actual time interval depends on server loading, recovery options that you've set, and general performance tuning that SQL Server handles. However, the interval should always be quite close to 60 seconds.
After your SQL Server shuts down unexpectedly, it can take longer to start up the next time. This is because SQL Server rolls back and rolls forward transactions to the last checkpoint. As it does so, SQL Server restores the database to the last known good state--the state recorded when the last checkpoint was issued and successfully executed.
NOTE: You can also shut down the server manually by issuing the SHUTDOWN command. By issuing CHECKPOINT followed by a SHUTDOWN, you can ensure that SQL Server saves all transaction information appropriately.
When shutting down the server, you can avoid longer startup times by manually issuing the CHECKPOINT command. Manually issuing this command has the same effect as letting the server issue the command automatically. The system saves all information to disk and, at the next startup, can simply "turn on" the databases for access by your client applications. Manually issuing CHECKPOINT is helpful if you're shutting down a server quickly, such as when a power failure occurs and the UPS sustaining the server is nearing the end of its battery life.
NOTE: The CHECKPOINT command is issued at a database level and applied against the current database. If your system has more than one database, you must issue the command against each database. To issue the CHECKPOINT command, you must be the database owner.
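As a minimal sketch of the sequence described in these notes, assuming the server hosts Pubs plus a hypothetical Sales database and that you are the owner of each:

-- Checkpoint each database in turn before stopping the server.
USE pubs
CHECKPOINT
go
USE sales        -- sales is a hypothetical second database
CHECKPOINT
go
-- With all databases checkpointed, the server can be stopped.
SHUTDOWN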
Another helpful option is Truncate Log on Checkpoint, which automatically truncates the transaction log whenever a checkpoint is reached. To set this option, select and right-click the database in the SQL Enterprise Manager. Choose Edit and click the Options tab. Figure 19.1 shows the available options.
NOTE: If you select the Truncate Log on Checkpoint option, you cannot back up the transaction log during the course of standard backups. This limitation might not present a problem in your installation, because you can still back up databases. However, you should consider your overall backup plan before selecting this option. See the sections later in this chapter for more information on backup and restore operations.
FIG. 19.1
In the SQL Enterprise Manager's Options page, you can control the interval at which
SQL Server truncates the transaction log.
If you enable Truncate Log on Checkpoint for a database, SQL Server truncates the transaction log up to the point of the last successfully committed transaction, unless you are using replication. If you are using replication, SQL Server truncates the log up to the last successfully replicated transaction and successfully committed transaction. Replication requires the use of the transaction log, so you cannot truncate the log if replication has not been propagated to the subscribers for a given publication. Chapter 17, "Setting Up and Managing Replication," presents more information about replication.
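You can also set this option from ISQL with sp_dboption rather than through the Options tab. The following is a sketch; the option name shown is the one SQL Server uses for Truncate Log on Checkpoint, but verify it against the sp_dboption documentation for your release:

USE master
go
-- Turn on Truncate Log on Checkpoint for the pubs database.
EXEC sp_dboption 'pubs', 'trunc. log on chkpt.', true
go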
The Database Consistency Checker (DBCC) is a tool that you use to generate detailed information about the database objects that SQL Server manages. Because there are so many different facets to the SQL Server system and its handling of tables, objects, rules, triggers, stored procedures, and more, it is helpful to be able to go into the server and run a "sanity check" to make sure that all is well. The DBCC statement provides this functionality.
Before getting into the use and utility of DBCC, you need to understand two different conditions that are usually in effect whenever you want to use the DBCC statement. First, you should try to ensure that as little activity as possible is affecting SQL Server. If users are accessing the server to make updates or changes, you might receive errors when DBCC runs. Such errors are due to the nature of the calls that DBCC performs. They are very low-level and often require nearly exclusive use of the database.
Second, you often must ensure exclusive access to the database. To do so, use the sp_dboption statement to set the database to single-user mode. The following is the statement's syntax:
sp_dboption <database name>, 'single user', True
For example, to run some checks on the Pubs database, you use the following statement:
sp_dboption 'pubs', 'single user', True
This statement prevents other users from using the system while you perform your checks.
NOTE: Before updating the system's options, you must be in the Master database. Make sure that you issue a Use Master before attempting to set options using sp_dboption.
After turning on single-user mode, you can perform the checks that you need to ensure that the database is running in top shape. After completing your work with DBCC, you can set the database back to multiuser mode by changing True to False in the sp_dboption command, as in the following example:
sp_dboption 'pubs', 'single user', False
DBCC supports many different options. The next few sections discuss the most frequently used options and how they can assist you in administering your SQL Server implementation.

Using DBCC NEWALLOC

NEWALLOC replaces CHECKALLOC. With CHECKALLOC, the system process stopped if DBCC found an error, sometimes obscuring other problems with the database. NEWALLOC doesn't stop when DBCC finds an error, but continues and reports all errors that it finds in the database structures. The option has the following syntax:
DBCC NEWALLOC <database name>
If you omit the <database name> parameter, SQL Server checks the current database.
When NEWALLOC runs, it returns detailed information about your system and its database objects. You can use this information to direct you toward any problems that might be occurring on the system. Listing 19.1 shows a portion of a report run against the standard Pubs database. This sample indicates the type of information that you can expect to receive from the NEWALLOC option.
Checking pubs
***************************************************************
TABLE: sysobjects     OBJID = 1
INDID=1  FIRST=1  ROOT=8  DPAGES=4  SORT=0
    Data level: 1.  4 Data Pages in 1 extents.
    Indid : 1.  1 Index Pages in 1 extents.
INDID=2  FIRST=40  ROOT=41  DPAGES=1  SORT=1
    Indid : 2.  3 Index Pages in 1 extents.
TOTAL # of extents = 3
***************************************************************
TABLE: sysindexes     OBJID = 2
INDID=1  FIRST=24  ROOT=32  DPAGES=4  SORT=0
    Data level: 1.  4 Data Pages in 1 extents.
    Indid : 1.  1 Index Pages in 1 extents.
TOTAL # of extents = 2
...
...
***************************************************************
TABLE: pub_info     OBJID = 864006109
INDID=1  FIRST=568  ROOT=584  DPAGES=1  SORT=0
    Data level: 1.  1 Data Pages in 1 extents.
    Indid : 1.  2 Index Pages in 1 extents.
INDID=255  FIRST=560  ROOT=608  DPAGES=0  SORT=0
TOTAL # of extents = 2
***************************************************************
Processed 49 entries in the Sysindexes for dbid 4.
Alloc page 0 (# of extent=32 used pages=57 ref pages=57)
Alloc page 256 (# of extent=25 used pages=34 ref pages=34)
Alloc page 512 (# of extent=15 used pages=39 ref pages=39)
Alloc page 768 (# of extent=1 used pages=1 ref pages=1)
Alloc page 1024 (# of extent=1 used pages=1 ref pages=1)
Alloc page 1280 (# of extent=1 used pages=1 ref pages=1)
...
...
Alloc page 31744 (# of extent=1 used pages=1 ref pages=1)
Alloc page 32000 (# of extent=1 used pages=1 ref pages=1)
Total (# of extent=196 used pages=261 ref pages=254) in this database
DBCC execution completed. If DBCC printed error messages, see your System Administrator.
As you can see, NEWALLOC returns a great deal of information. Usually, you examine the returned report and look only for problems; focus on any problems reported and work with them. The balance of the information confirms database table structures, page allocations, and so on. If you receive an error message, you should also receive specific instructions on what you must do to correct the problem.
NOTE: If objects reside on removable devices, NEWALLOC might return warning message 2558. You can ignore this warning; it results from a setting required for objects residing on removable devices.
Using DBCC CHECKDB

When you run CHECKDB, DBCC validates each table and its associated data pages, indexes, and pointers. DBCC tests each to ensure that it properly links to the related information. CHECKDB has the following syntax:
DBCC CHECKDB <database name>
If you omit the <database name> parameter, SQL Server checks the current database.
Listing 19.2 consists of excerpts of output generated by the CHECKDB option.
Checking pubs
Checking 1
The total number of data pages in this table is 4.
Table has 70 data rows.
Checking 2
The total number of data pages in this table is 4.
Table has 49 data rows.
Checking 3
The total number of data pages in this table is 10.
Table has 283 data rows.
Checking 4
The total number of data pages in this table is 1.
Table has 29 data rows.
Checking 5
The total number of data pages in this table is 23.
Table has 165 data rows.
...
...
Checking 592005140
The total number of data pages in this table is 1.
Table has 14 data rows.
Checking 688005482
The total number of data pages in this table is 2.
Table has 43 data rows.
Checking 864006109
The total number of data pages in this table is 1.
The total number of TEXT/IMAGE pages in this table is 73.
Table has 8 data rows.
DBCC execution completed. If DBCC printed error messages, see your System Administrator.
Notice that the CHECKDB option also checks system tables. When you run the command, it checks all tables in all aspects of the database that you specify in the command.
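Pulling these pieces together, a complete consistency-check session might look like the following sketch. It assumes you can take Pubs to single-user mode briefly; note that DBCC arguments are typically enclosed in parentheses:

USE master
go
EXEC sp_dboption 'pubs', 'single user', true
go
-- Check allocation structures, then table and index linkage.
DBCC NEWALLOC (pubs)
DBCC CHECKDB (pubs)
go
-- Return the database to multiuser operation.
EXEC sp_dboption 'pubs', 'single user', false
go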
TIP: If you find that checking the entire database takes too long, you can use the CHECKTABLE option. You simply specify CHECKTABLE as the DBCC option and then supply the name of the table that you want to check. The command examines only the specified table and thus can save you significant time on the analysis steps.
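For example, to check only the authors table in Pubs, the command would look something like this sketch:

USE pubs
go
-- Examine only the authors table rather than the whole database.
DBCC CHECKTABLE (authors)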
Using DBCC SHRINKDB

When you create a database, you often predict a greater disk volume than you actually need to support the database's tables. After putting the database into production, you can tell how big the database really needs to be. If you've determined that you can make the database smaller and thus free up space for other databases on your system, you can use the SHRINKDB option. Keep in mind that the SHRINKDB option works in pages. Therefore, you have to determine the database's size in bytes and divide that size by 2,048 (the size of a database page in bytes) to get the value to specify for this command. The following is the syntax for SHRINKDB:
DBCC SHRINKDB <database name>, <new size>
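As a worked example of the page arithmetic (the database name and target size here are hypothetical): to shrink a database named Sales to roughly 20M, divide 20,971,520 bytes by 2,048 bytes per page to get 10,240 pages:

-- 20M target / 2,048 bytes per page = 10,240 pages
DBCC SHRINKDB (sales, 10240)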
Shrinking the Master database is generally a bad idea, because this database contains all the information about your system. If you find that shrinking it is necessary, make sure that you back it up first. When you shrink the Master database, you are modifying the database that manages the different recovery mechanisms for the system, and if a system failure occurs during the operation, you might not be able to recover the database without a backup.
TIP: If you issue the SHRINKDB statement, as shown in the following listing, without a <new size> parameter, SQL Server returns the size of the smallest possible database. You can use this value to determine your database's new size.

Current Size of Database Size Database Can be Shrunk To
------------------------ ------------------------------
32256                    17152

(1 row(s) affected)

Objects pvnt further shrink Index
--------------------------- -----
syslogs                     data

(1 row(s) affected)

DBCC execution completed. If DBCC printed error messages, see your System Administrator.
NOTE: You cannot shrink a database to a size that is less than that of the model database.
If you request the smallest possible size, SQL Server shows how far you can shrink the database. In addition, an indicator informs you if you can do something to shrink the database further. In the code example shown in the preceding tip, you can reduce the database in its current configuration to 17,152 pages (about 35M). If you can manipulate the other listed items that prevent further shrinking, you can make the database even smaller.
NOTE: You should rarely use the absolute minimum value when shrinking a database, because any active database is bound to grow over time and with use. If you've decreased the database's size so severely that you limit growth for a given application, this size is at cross-purposes with the design goals of your client/server SQL Server-based application.
SQL Server gains much of its performance from intelligent processing of data stored in the tables. This analysis comes in different forms, but the most significant is the examination of real data in the system to determine the optimal path for retrieving the information.
For example, consider the route to your home. You've likely figured out the best, fastest way to get to your home from your office. After finding the best route, you are of course more likely to take that route, even if someone suggests that you select a different route, because you've taken the time to consider the different roads available and selected the best one possible.
For you to consider seriously a new route home, a new road would have to offer a better route to your home, or there would have to be construction on the existing road that makes it an inefficient route home.
With SQL Server, this analogy holds true. When you implement a stored procedure, SQL Server reviews the SELECT logic of the stored procedure, along with any other data-affecting events. The server determines the best route to take in fulfilling the stored procedure's function. After selecting this route, SQL Server remembers this information so that the next time you run the stored procedure, it's optimized for the data that it affects. SQL Server's "roads" consist of the indexes on the data in the system. The server considers these paths when analyzing the best way to retrieve a given value set.
If you add several rows to a table, comprising, for example, more than 20 percent of the table's original size, you should consider updating the statistics associated with the table. The syntax for updating a table's statistics is as follows:
update statistics <table name> [<index name>]
Consider the following example:
update statistics authors
NOTE: In this syntax, you do not include quotation marks around the table name that you want to update.
You can specify that only a given index be updated or that the information concerning the entire table be updated. If you specify the index, you must specify it as shown in the syntax, indicating the table in which the index can be found.
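For example, to refresh only the statistics for the nonclustered index that the standard Pubs database defines on the authors table (named aunmind in the shipped Pubs scripts; substitute your own index name), you might run:

update statistics authors aunmind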
The final step to implement these updated indexes and make the stored procedures aware of them is to indicate to SQL Server that it should reconsider the route to take when retrieving the information from the database. To do so, you must recompile the different stored procedures affected by the update statistics that you just ran.
Usually, stored procedures are compiled the first time that they're called after you start SQL Server. Stored procedures are also recompiled automatically in certain other cases, such as when you drop an index on which a stored procedure relies, but simply issuing update statistics doesn't trigger an automatic recompile. You might have noticed that the first time you call a particular stored procedure, it executes a bit slowly, but that subsequent calls are often noticeably faster. This is because the optimizer compiles the stored procedure based on the current statistics of the database tables and any relevant indexes.
You recompile stored procedures by setting a flag on a table, which essentially invalidates any copies in the procedure cache of stored procedures that reference that table. SQL Server then reloads and recompiles the affected stored procedures the next time they are accessed. To set the flag, use the following command:
sp_recompile <table name>
where <table name> is the table to mark so that all stored procedures referencing it are recompiled. If successful, the command returns a simple acknowledgment message indicating that the stored procedures will be reloaded and recompiled, as shown in the following example:
sp_recompile "authors" go Each stored procedure and trigger that uses table authors will be recompiled the next time it is executed.
TIP: If you don't recompile the stored procedures that update statistics affects, you won't notice any performance gain until after you stop and restart the server, because that will be the next time that SQL Server reloads and compiles the procedures.
You can't hurt anything by running the update statistics and recompile operations, but you should run them when the fewest users are on the system, particularly if your databases are substantial in size. Updating these statistics can noticeably slow the system for your users while it runs. You should make updating statistics part of your regular, ongoing maintenance, perhaps running the process on each of your high-use tables approximately once a month during periods of high database throughput. When you first bring up a system and are adding information at a high rate, you might want to consider more frequent calls to this procedure, perhaps as often as once per day of heavy, new data input.
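A minimal sketch of such a recurring maintenance batch, using two Pubs tables as stand-ins for your own high-use tables, might look like this:

-- Refresh distribution statistics, then flag each table so that
-- dependent stored procedures recompile on their next execution.
update statistics authors
EXEC sp_recompile authors
go
update statistics titles
EXEC sp_recompile titles
go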
One of the most dreaded questions when you call for help with your application is, "When was your last backup?" Getting caught without a backup can be quite painful.
With SQL Server, backing up is often much more important than it is on a stand-alone system. If you support a database with many users, keeping those users' confidence in the system's information is very important. Data loss might incur costs across departments and might dictate significant data reentry.
SQL Server provides a couple of different ways to protect from data loss, or at least minimize it, should the worst happen. These techniques range from physical duplication of the information to backing up the databases and transaction logs at specific intervals during the course of business.
The upcoming sections explain how to back up the system and the trade-offs to consider in determining your optimum backup scenario.
One of the first questions that a new SQL Server customer ponders is how often he or she needs to back up. The answer is simple but can sound sarcastic when first given: How much data can you afford to lose?
The initial answer is usually "none." This answer is fine, but then you must determine what the effect will be on the system plan to incur no loss of data if a catastrophic system failure occurs. Typically, such a plan involves mirroring databases, which costs money to implement. For more information on mirroring a database on your system, see the section "Understanding and Using Mirroring" later in this chapter.
Because the cost of mirroring a system can be quite steep, the next step is to determine the amount of data that can truly be at risk at any given time. To determine this amount, you must first understand that you have to back up two different components of the system. As you've seen throughout this book, the database contains the different tables and all other objects associated with the database. The transaction log contains an incremental log of the things that happen to a database and change it. Essentially, the transaction log contains a "before and after" picture of the data that changes with each block of work the server performs.
Keep in mind that the database backup is a snapshot in time, and the transaction logs contain all the changes since you took that snapshot. At a minimum, you'll want to back up the database weekly and the transaction log daily.
The best plan is a system of rotating backups. Such a system is the safest way to ensure that you have the correct fault tolerance for any system failures that you might experience. All computer professionals agree on one thing: Any system will fail; it's only a question of when. You should have complete, concise, and accurate backups. They are the only lifeline you have in times of hardware failure.
Table 19.1 shows a suggested backup schedule. If you're backing up to tape, this approach requires 29 tapes over the course of a year. The weekly backup of your system requires 14 tapes, and maintaining the monthly archives requires 13. You use the remaining two tapes as working transaction log backups, and as an initial, baseline backup of the database. Remember, this schedule is to maintain backups, not to retain historical data on the system. You should not count on backups for retention and research on historical information. For those purposes, you must rely on alternative backups. Number the tapes sequentially on a permanent label on each tape cassette.
NOTE: If your database is large, you might need more than one tape per day, week, or month. In such cases, consider the tape-numbering scheme to be a tape set numbering approach.
This backup schedule gives you a full two weeks of backups to fall back on if a problem arises. You should implement a two-week backup cycle because you might not immediately recognize when a problem has arisen. For example, a program change or other system event might cause a problem that isn't detected for a few days. By keeping the two weeks of backups, the odds are good that you will have a clean copy of the information in the recent archives.
Table 19.1  A Suggested Backup Schedule

Tape Number | Used for | Comments |
1 | Monday backup | Backup for the first Monday of a two-week cycle |
2 | Tuesday backup | |
3 | Wednesday backup | |
4 | Thursday backup | |
5 | Friday backup | |
6 | Saturday backup | |
7 | Sunday backup* | |
8 | Monday backup | Backup for the second Monday of cycle |
9 | Tuesday backup | |
10 | Wednesday backup | |
11 | Thursday backup | |
12 | Friday backup | |
13 | Saturday backup | |
14 | Sunday backup* |
When you back up your databases and transaction logs, you do so by dumping the information in your system to a dump device, which SQL Server recognizes as a repository for information and can be either a disk file or a tape device. If you dump information to a disk-based device, for example, the file that SQL Server creates can be backed up to tape, another server, or some other location where you will manage the backup files.
The best way to manage the backup of information is with the SQL Enterprise Manager. Choose Tools, Database Backup/Restore to start working with the backup subsystem. The main Database Backup/Restore dialog box then appears (see Figure 19.2).
FIG. 19.2
The main control panel for backup and restore operations enables you to select and
create devices and set scheduling options.
Setting Up Backup Devices

You first must establish the backup devices to use as the destination for information dumped from the working databases and tables. The Database Backup/Restore dialog box's Backup Devices window lists the devices already known to the system. You can accept one of these devices or create your own. To create a device, choose the New button below the Backup Devices window. The SQL Enterprise Manager then displays the New Backup Device dialog box, shown in Figure 19.3, and prompts you to name the device.
FIG. 19.3
You can create new devices that identify a tape or disk destination for backups.
Notice that as you type the name for the device, the location is updated to include the name that you use.
TIP: You can specify a path that points to another physical system when you indicate the backup devices you want to use. By doing so, you can provide good backup coverage without the need for additional storage media.
Consider creating a new device for each day in the 14-day cycle mentioned previously. Then, when you create a scheduled backup, you don't necessarily have to use a tape to store the information if you point to the device on the remote system and use it as a backup destination. If you set up the system to back up to the devices without the append flag, each of the devices you create will be only as big as needed for a single copy of the database you're backing up.
Also, because both your core SQL Server system and the remote system are very unlikely to experience downtimes simultaneously, you are assured of solid backups for your system.
This approach also has the following disadvantages:
- The remote system must always be accessible to the backup and restore process.
- The remote system must have enough disk space to support the dump devices.
- If both systems are in the same building and that building suffers catastrophic damage such as a fire, you'll lose your backup system. This is a major reason for using offsite tape or other media backups.
Keeping these rules in mind, you can see that a solution offering the best of both worlds combines remote-system backups with offsite storage of tape backups made slightly less frequently.
NOTE: Be sure to note the file name if you're backing up to a file on disk rather than tape. When you copy the backup file from the system for storage, you need to use this file name to back up to tape or elsewhere on the system.
After indicating the name of the device that you want to create, simply choose Create. SQL Server then creates the new device and lists it in the Backup Devices window of the Database Backup/Restore dialog box. You're all set to start creating the backups that you'll need to support your SQL Server. Figure 19.4 shows a database with 14 backup devices, one for each day of a two-week backup cycle.
FIG. 19.4
The Master database now has a backup device for each day of a two-week backup cycle.
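If you prefer to script device creation instead of using the dialog box, sp_addumpdevice does the same job; the following sketch creates the first two disk devices of the 14-day cycle (the logical names and the remote path are hypothetical):

-- Day 1 and Day 2 dump devices pointing at a remote file server.
EXEC sp_addumpdevice 'disk', 'backup_day01', '\\BACKUPSRV\SQLDUMPS\day01.dat'
EXEC sp_addumpdevice 'disk', 'backup_day02', '\\BACKUPSRV\SQLDUMPS\day02.dat'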
TIP: The Master database does not have an explicit transaction log device because it always has the Truncate Log on Checkpoint option set. For this reason, you cannot do a transaction log backup of this database; you must do a full database dump instead.
The next sections describe in detail what's involved in setting up the backups on the system.
Running the Backup

After creating the devices that you need to support the backups, you must perform the actual backup. You have two options. The first is to run backups on demand. Although this option works fine, it requires manual intervention each time you want to back up the system, which leaves the process more vulnerable than an automated solution to forgetfulness, unexpected meetings, and similar mishaps. The second option is to implement a scheduled backup series and have the system complete the database dumps for you automatically. As you determine your backup strategy, keep in mind that you must select the type of backup, either manual or scheduled/automatic, that best fits the systems you are supporting.
Completing Manual Backups

Manual backups are quite straightforward and are a good way to get a feel for how the system works, the time that a backup takes, and how much impact a database backup has on your system if other users are active when a backup starts.
NOTE: When you first back up, you must initialize the device. To do so, select the Initialize Device check box in the Options panel of the Database Backup/Restore dialog box's Backup page. If you attempt a backup without initializing the device, the SQL Enterprise Manager displays a warning that the device is either offline or uninitialized.
To complete the backup, select the device that you want to back up to and choose Backup Now. The Backup Volume Labels dialog box appears (see Figure 19.5).
FIG. 19.5
If you select a device that has already had a backup completed, the Backup Volume
Labels dialog box prompts you to pick the label to which to append information.
After you select the device and choose OK, the SQL Enterprise Manager completes the backup. A status indicator shows the backup's progress. Finally, a dialog box confirms that the backup succeeded.
As you run your backups of the system, keep an eye on how they affect other users who are on the system at the time, and confirm that each backup completes successfully before moving on.
Scheduling Automated Backups

The safest bet, especially for incremental backups, is a regularly scheduled, automated backup. To set up automated backups, indicate the database and the device to which you want to back up, and then choose the Schedule button in the Database Backup/Restore dialog box. The Schedule Backup dialog box appears (see Figure 19.6).
FIG. 19.6
Scheduled backups are a way to avoid relying on human intervention for the completion
of this critical job.
If you're doing append-type backups, the SQL Enterprise Manager prompts you to indicate the volume to which you want to append. After indicating the volume, the Task Schedule dialog box appears as shown in Figure 19.7, enabling you to indicate the details for the scheduled backup.
NOTE: For future reference, take note of the SQL Command text box, shown in Figure 19.6. This text box lists the commands that you can run from ISQL to complete the backup manually. Make a note of these commands, because you might want to implement them in your own application later.
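The statements you see there are ordinary DUMP commands, so you can reuse them from ISQL or your own applications. The following is a sketch of the two common forms, using the hypothetical devices created earlier; check the exact WITH options against what the dialog box generates:

-- Full database backup, overwriting whatever is on the device.
DUMP DATABASE pubs TO backup_day01 WITH INIT
-- Transaction log backup, appended to the device.
DUMP TRANSACTION pubs TO backup_day02 WITH NOINIT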
FIG. 19.7
You have complete flexibility in setting the times for your backups, the recurring
intervals, and more. In this example, the user is setting up Monday, Wednesday, and
Friday backups.
The dialog box's first two options (refer to Figure 19.6), Immediately and One Time, enable you to run a single backup either right away or at a predetermined time. These are good first steps. Your first backups should be scheduled, attended backups so that you can ensure that all is working well, the files are created correctly, and your backup process works in general.
After determining that the backup process has succeeded, you can change the allotted times, create recurring backups, and more. To change the times at which the backup occurs, first choose the Recurring option button, then choose Change, as shown in Figure 19.6. The Task Schedule dialog box then appears as shown in Figure 19.7. This dialog box indicates the different options that you have for determining the times at which your backups are to run.
The final step is to set up any options you want to enable. To do so, choose the Options button in the Schedule Backup dialog box. The TSQL Task Options dialog box then appears, presenting several options (see Figure 19.8). The option that you are most likely to select is Write to Windows NT Application Event Log. You should select both the On Success and On Failure options to record to the log successes as well as failures. Together, these options give you a definitive mechanism for checking whether the backup succeeded or failed. Figure 19.8 shows these options set. You can use the TSQL Task Options dialog box's remaining options for e-mail notification if you're using SQL Server's mail services, and for establishing retry parameters. If you are experiencing failures because of network traffic to a remote server that maintains your device, you might want to increase the number of retries.
FIG. 19.8
Be sure to check the option to log on to the event log for successes as well as failures.
This mechanism is useful for following up on your automated processes.
CAUTION: If you need to modify your system's retry parameters to address a problem, you probably should investigate the cause of the problem before you implement retries. If you have a less reliable connection to the server containing the device that you use for backup, consider moving the device. Backups are too important to leave open to possible problems, such as network traffic, that force you to modify the retry parameters.
Choose OK to close the dialog boxes that define the backup schedule, and you return to the Database Backup/Restore dialog box. You won't immediately notice any indication that the system has scheduled the backup. This is a bit disconcerting, but take heart. The information about the backup is now part of a scheduled background task. For this reason, you must review it from the task-scheduling portion of the SQL Enterprise Manager rather than the backup and restore dialog boxes. Choose Server, Scheduled Tasks. The Manage Scheduled Tasks dialog box appears as shown in Figure 19.9, listing your backup job as one of the first items in the list of things to do.
CAUTION: If you are at all uncertain about which tasks you have created to handle the backup job, make sure that you determine which backup jobs you need before you start working with tasks. Because all background activity is managed through the Manage Scheduled Tasks dialog box's Task List page, removing or modifying an incorrect item can damage processes that control replication, general cleanup, and many other system jobs.
FIG. 19.9
SQL Enterprise Manager creates scheduled backups as background tasks that you
can manage from the SQL Enterprise Manager's Tasks utility.
You can also use the Task List page to check the status of backups, which is another way to check whether any problems are occurring. To determine the cause of any problems that you might encounter during the backup process, you should use a combination of the Task List page and the Windows NT event log.
Don't forget that you must be absolutely certain you've backed up everything you need, including each user database, its transaction log, and the Master database itself.
After creating the backups and continuing to back up your database faithfully, how do you recover the database if something does go wrong? This is where you'll be glad that you put into place a formal backup plan, because such a plan will guide you toward restoring your system to full functionality as quickly as possible.
The following are the steps for restoring your system:

1. If necessary, reinstall SQL Server and re-create the devices and the database, using the Create for Load option described later in this section.
2. Restore the most recent database backup from the Restore page.
3. Apply, in order, each transaction log backup made since that database backup.
4. Reset the database options so that users can access the database again.
After completing these steps, you'll have a fully functional system that is up-to-date as of the last transaction log backup.
You've already learned about installing SQL Server and creating devices, but if you want more information on this topic, see Chapter 5, "Creating Devices, Databases, and Transaction Logs," and Appendix H, "What's on the CD?"
TIP: When creating a new database to restore into, use the Create for Load option, available in the Create Database dialog box. This option causes SQL Server to create the database without initializing its pages. You load this information from the backup anyway, so there is no reason to do the work twice. Creating the new database this way can take nearly 50 percent less time than creating it with the option deselected.
When you choose Tools, Database Backup/Restore and then click the Restore tab, the SQL Enterprise Manager first presents a dialog box containing all the valid backups available. This listing includes the date and time of the backup, the type of backup, the database or transaction log, and so on. Figure 19.10 shows the options available on the Restore page.
SQL Server automatically presents the list of databases for which a database backup has been completed. From this list, you can select the different backups that you want to apply to the database you designate in the drop-down list box.
If you know the information on a given backup is good until a certain time, you can indicate that time in the Until Time text box. Note that any transactions pending at the specified time are rolled back and it will appear as though they never occurred. In most cases, you must restore the database and transaction logs in their entirety, so you can accept the default options.
Choose Restore Now to restore the database and transaction logs.
FIG. 19.10
The Restore page's options enable you to designate the source and destination for
restoration efforts.
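Behind the scenes, the Restore page runs LOAD statements, so you can also perform a restore from ISQL. The following is a sketch using the hypothetical devices from earlier in the chapter, applying the database dump first and then the log dumps in order:

-- Load the most recent full database dump, then apply the log dumps in sequence.
LOAD DATABASE pubs FROM backup_day01
go
LOAD TRANSACTION pubs FROM backup_day02
go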
If you created the new database with the Create for Load option, you must update the database options before anyone else can use the database. Until you do so, you'll notice that the Enterprise Manager's display flags the database with which you're working as (loading). During this time, no user except the DBO can use the database. To update the options and give other users access to the database, choose Manage, Databases. The Manage Databases dialog box then appears as shown in Figure 19.11.
FIG. 19.11
You must remember to update the options for any database created explicitly for loading.
Otherwise, users cannot access the database.
In the Manage Databases dialog box, double-click the bar representing the database you need to update. The familiar Edit Database dialog box appears. Click the Options tab and deselect the DBO Use Only check box (see Figure 19.12).
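Alternatively, you can clear the flag with sp_dboption from ISQL. The following is a sketch; the option name shown is the one SQL Server uses for DBO Use Only:

USE master
go
-- Allow users other than the database owner back into the database.
EXEC sp_dboption 'pubs', 'dbo use only', false
go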
After completing this step, you're restored to the point of the most recent transaction log backup and should be ready to begin some quick testing to see whether everything restored correctly (and to boost your confidence in the process). At this point, the data and all objects in the database are restored. Users should not notice any change in operation.
FIG. 19.12
By selecting the DBO Use Only check box, you indicate that the
database is being loaded and is not ready for production use. Deselect the option
to "turn on" the database for other users.
When you are recovering from a system failure, practice will certainly play a key role. As a test of this process, consider using the Pubs database as a guinea pig: back it up, drop or otherwise damage it, and then restore it using the procedure described in the preceding sections.
You can and should experiment with different scenarios using the test database approach. Then, when you have to restore your database in an actual crisis, you can rest assured that all will go smoothly. Think of these tests as the fire drill for database recovery.
In addition to the backup and restore techniques covered in the previous sections, SQL Server also supports an additional fault-tolerant option: mirroring. Because hard drives are the most likely suspect for a system failure that affects data, you might want to consider this option to protect your databases.
NOTE: Windows NT also supports several fault-tolerant options that are enforced at the operating-system level. For more information, refer to Que's Special Edition Using Microsoft Windows NT Server and see the information regarding striped disk storage.
Mirroring takes place at the device level and is transparent to your applications. When you create a device and indicate that it should be mirrored, you simply tell SQL Server where you want to locate the mirror. Be careful when selecting the mirror's location. You do yourself little or no good if you place the mirror device on the same physical drive as the source device. If you do, and the drive fails, you will lose both the source device and the mirror.
With mirroring, SQL Server writes changes to a device to two locations at once. The primary location is your standard device, the one that you normally use and have incorporated in your system's applications. Because both devices are identical, if a switch from the main device to the secondary device is required, no data loss occurs and your system's users are not interrupted as they use the system.
If SQL Server is using a device and can no longer access the main device, the server automatically switches to the mirror device, making all future changes to the mirror. When you've recovered the drives containing the problem device, you can move the information back to the original device and restart SQL Server using the corrected devices as the default once again.
When you create a device, or later when you edit it, you'll notice an option to establish mirroring. Figure 19.13 shows the options that enable you to work with a device.
FIG. 19.13
Select the Mirroring option to set up mirroring parameters.
Choose Mirroring to display the Mirror Database Device dialog box, which enables you to indicate the mirror file's location (see Figure 19.14). If you're working from a workstation, keep in mind that the paths you designate are relative to the server. You can select a location that is either on the current server or on a fully qualified network path.
FIG. 19.14
When you select the mirror file's location, SQL Server mirrors the requested database.
After the process is complete, SQL Server mirrors the device and takes care of the rest. Thereafter, the mirror device reflects any informational changes to the primary device.
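Mirroring can also be established with the Transact-SQL DISK MIRROR statement rather than the dialog box. The following sketch mirrors a hypothetical device named DATADEV1 to a file on a second physical drive; check the DISK MIRROR entry in your Transact-SQL reference for the options available on your release:

-- Mirror the DATADEV1 device to a file on a different physical drive.
DISK MIRROR
    NAME = 'DATADEV1',
    MIRROR = 'D:\MSSQL\MIRROR\DATADEV1.MIR'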
CAUTION: If you implement mirroring for a device, make sure that you also mirror the Master database. This database is responsible for maintaining the system information necessary to continue processing if mirroring becomes necessary for any device. By mirroring the Master database, you can enable SQL Server to recover to the mirrored devices that have failed.If you do not mirror the Master database, you might find that you cannot utilize the mirrors for other devices. The Master database maintains information about the mirrored devices that SQL Server manages, and that information could be lost.
The final step is to tell SQL Server about the mirrored device. This step is especially important for the Master database. Because the Master database helps start the server, you need to indicate to the server where it can find the mirror if necessary. Other devices are managed automatically, so this step is required only for the Master database.
When SQL Server starts up, two required parameters control the Master database's mirrored state: -r and -d. Choose Server, SQL Server, Configure to open the SQL Server Configuration/Options dialog box shown in Figure 19.15. Choose the Parameters button on the Server Options page and then add the startup parameters. The Server Parameters dialog box then appears as shown in Figure 19.16. In this dialog box, you can indicate the startup options.
FIG. 19.15
In the SQL Server configuration subsystem, you can set startup options that enable
you to mirror the Master database.
FIG. 19.16
Be sure to specify both -r and -d options for the Master device.
Unless you specify both, the mirroring will not work correctly.
Add a new parameter, -r<mirror location>, to indicate the location of the Master device's mirror. Even after SQL Server switches to the mirror device during normal use, when you restart the system, it still tries the primary Master device first. Therefore, you must leave in place the -d specification that shows the location of the "normal" Master device. When you add the -r option, you're specifying the fallback position that SQL Server should use when it determines that the Master device has become unusable.
Choose the Add button to create a new parameter. Be sure to specify the entire path and file name to the mirror device file when you create the -r parameter.
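For example, with a hypothetical mirror file on drive D, the two entries might look like the following (keep the existing -d entry for the normal Master device and add the -r entry for its mirror):

-dC:\MSSQL\DATA\MASTER.DAT
-rD:\MSSQL\MIRROR\MASTER.MIR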
When SQL Server has switched to a mirror, it automatically suspends mirroring with the original device. This frees up the device to be replaced or otherwise corrected. When the mirroring is active, your system's users will not notice any difference in the functioning of their applications. They can continue working just as they did before the problem was found with the original device.
After replacing the original device or otherwise correcting the detected problem, you can remirror the active device back to the primary device. Remirroring is identical to the original job of mirroring a device. You simply specify the file name to contain the mirror, and SQL Server takes care of the rest.
If you want to switch from the mirror back to the original device, as might be the case if you've just brought online a replacement for a failed device, the sequence runs from the creation of the original mirror, to failure of the device, to replacement of the device, and finally to remirroring to return the device to its original state.
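In Transact-SQL terms, the switch back for a hypothetical device named DATADEV1 might look roughly like the following sketch; the DISK UNMIRROR and DISK REMIRROR options vary by release, so check your Transact-SQL reference before relying on them:

-- If mirroring has not already been suspended automatically, suspend it
-- while the failed primary device is replaced, keeping the mirror file.
DISK UNMIRROR
    NAME = 'DATADEV1',
    MODE = RETAIN
-- Once the primary device is back in place, re-establish the mirror.
DISK REMIRROR
    NAME = 'DATADEV1'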
At this point, you're back where you started with the original device being the active device and the mirror standing by as needed to back up the active device.
Mirroring your devices can provide powerful backup capabilities to your key production systems. For more information about backups, be sure to review the backup and restore options presented earlier in this chapter.
The SQL Transfer Object utility provides an excellent way to move information between your system and databases. To start the Transfer Object utility, select the source database. Next, choose Object, Transfer, or right-click the database. Figure 19.17 shows the initial Transfer Objects dialog box.
FIG. 19.17
Select the options carefully when you use the Transfer Object utility. They control
the objects that are copied to the new database.
By default, copying an entire database to a new location is as simple as selecting the source and destination database and choosing the Start Transfer button shown in Figure 19.17. SQL Server copies all the different objects to the new database and sets it up to match the source database.
You can also initiate the copy process at a scheduled time, such as when server traffic is lighter and the copy process will not affect the system's users quite as adversely. By choosing Schedule, you can indicate the time at which you want the copy to occur. As shown in Figure 19.18, the scheduling capabilities offered by the Schedule Transfer dialog box are nearly identical to those that you use to set up backups for your server.
FIG. 19.18
You can use the Transfer Object utility to make periodic copies of key information
in your system.
If necessary, you can even use the Transfer Object utility to make rudimentary backups of your system by copying databases to backup devices and databases possibly located on a different server altogether. You should, however, avoid using the Transfer Object utility as a production solution for backing up your system. The process isn't meant to be a total solution to your system's backup needs and doesn't support the same level of logging or critical processing checks that the backup process affords. The process, for example, provides no way to transfer only a transaction log for a given database.
If you deselect the Transfer All Objects check box in the Transfer Objects dialog box, as shown in Figure 19.19, you can specify exactly which objects to include in the copy by choosing the Choose Objects button. The Choose Objects to Be Transferred dialog box appears as shown in Figure 19.20. The dialog box lists the different objects available with a typical copy.
You can also indicate the different options that are transferred as part of the scripting process. While putting information from your source database into the destination system, the Transfer Object utility scripts the creation of tables, logon IDs, and other such items that relate to structure or configuration rather than to data content. You can control these items by deselecting the Use Default Scripting Options check box in the Transfer Objects dialog box and choosing the Scripting Options button. The Transfer Scripting Options dialog box appears as shown in Figure 19.21.
FIG. 19.19
You can select several options that control how objects are transferred during this
process.
FIG. 19.20
You can selectively copy specific objects or attributes for your database. You
can use this feature to update another system to keep it in sync.
FIG. 19.21
You have complete control over the objects and configurations to include in the transfer.
You can deselect logons if you've already set up security on the destination device,
for example.
The Transfer Object utility is a good companion to the backup and mirroring features that you've seen earlier in this chapter. By putting these features to use in your installation, you can ensure a maximum amount of uptime for your system's users. Use the Transfer Object utility to make copies of critical system components, then use mirroring and backups for ongoing protection against catastrophic system failures.
In real life, you have to weigh the costs involved with running backups, monitoring your system, and providing the best possible performance for your system's users. Not running backups also entails a cost, however. First, recovery from a catastrophic failure will cost you a lot--possibly your job. If you run a large production system and have not planned for recovering it when something in the database breaks, you'll be in hot water.
Second, depending on how you set up your database and the logging associated with it, not backing up your system requires additional disk space: the transaction log continues to grow until you dump it. You'll have to decide how to balance frequent backups against physical hardware requirements.
It's not a question of if but when you'll need to restore to address a database problem. Hard drives fail, bugs in software exist, and people want to revert from that latest and greatest change that you installed. Figure out your backup schedule and stick to it. You might add the dump process as a scheduled task, so that the dump file is always available on the system. Afterward, from a workstation designated to perform the backups, you can run consistent, scheduled backups and can always get the dump files for your SQL Server systems by pointing your backup software to your dump files' location.
Put simply, there is no excuse for not having a backup. When your system fails, all that you should have to tell the system's users is the time frame between backups, so that the users know how much information they need to reenter into the system.
One common tendency regarding database sizing is to make the device and associated databases huge, often as large as you can make them. Try to avoid this temptation. Keep in mind that you cannot easily shrink a database, but you can pretty easily extend it to make it larger.
This chapter explored some important administrative utilities that provide the support for good, solid, fault-tolerant system design for your users. The time to implement these items is now, not later. If you wait until you need them, it will be too late. Keep in mind the old computer-support adage mentioned earlier: Components will fail, it's only a matter of time. Plan for the failure; be ready for it. Then recovery will be far less painful and difficult.
To learn more about some of the topics covered in this chapter, see the following additional sources that will help you administer SQL Server: