The Internet Information Server (IIS) can serve multiple client requests simultaneously. If the IIS were not able to handle multiple clients simultaneously, users of your Web site would have severe delays when accessing the information they need.
ISAPI extensions must also be able to handle requests from multiple clients simultaneously. This allows clients faster access to information and prevents user frustration. IIS and other information servers that allow ISAPI extensions can handle multiple clients in part because of the multithreading capabilities of Windows NT.
At this point you may be asking yourself, why do I need to know about threads if I am writing an ISAPI extension? After all, the information server simply calls my extension with input data and I return the correct output.
The answer is simple: you don't know how an information server handles client requests. So you must protect common resources and information from being changed or accessed by simultaneous requests.
An information server can handle separate client requests by different threads of execution. This means that your extension may be called from multiple places at the same time. Your extension must provide safeguards for thread synchronization to prevent data corruption.
To illustrate thread synchronization and the creation of multiple threads in an ISAPI extension, we look at the Lottery extension provided as part of the ISAPI developer's SDK. The complete source code and makefiles for the extension are on the CD-ROM in the CHAP19 directory.
The Lottery extension provides clients with a lucky lottery number. This example shows how to create a thread, and how to use critical sections and semaphores to provide synchronized access to the extension's resources.
A thread is a path of code execution in a process. When a process such as an information server is initialized, the operating system creates a primary thread for the process. The primary thread continues until the process ends.
A process is an executable program. This is typically an *.exe file. Although tasking in Windows NT is controlled at the thread level, the process has a main thread of execution when started. This main thread can then start other threads as needed.
A process may need to start other threads that handle specific tasks to increase the performance of the application. For example, an information server might handle each client request in a separate thread. This allows each client to access the Web site without being affected by requests from other clients.
Threads are an important concept because the Windows NT operating system schedules code for execution on a thread basis. Each thread is scheduled for execution according to its class and priority.
In a preemptive operating system like Windows NT, each thread is guaranteed CPU time. This is in direct contrast to Windows 3.x, in which multitasking depends on the cooperation of each running process.
Threads can increase the performance of your applications and the use of your CPU. But you need a thorough understanding of threads and the new kinds of errors that can result from faulty design. Otherwise, threads can make a software engineer's life very hard.
Windows NT schedules threads based on priority levels. Each thread is assigned a priority level. Priority levels range from 0 (the lowest) to 31 (the highest).
You cannot assign a thread a priority level of 0, even though 0 is a valid priority level. Priority level 0 is reserved for a special system thread called the zero page thread.
The zero page thread is responsible for zeroing free pages in memory. Since the zero page thread is at priority level 0, it only runs when no other threads are executing.
Figure 18.1 illustrates these concepts.
Fig. 18.1
The Process Viewer can be used to show the priority class and relative thread priorities of any thread in a Win32 process.
Windows NT groups threads according to their priority levels. All threads at a given priority level are treated equally. The Windows NT scheduler starts by assigning all level-31 threads to the CPU.
After each level-31 thread executes, if no more level-31 threads are waiting to execute, the level-30 threads are assigned. This assignment continues down through all the priority levels until the scheduler reaches level 0. Then the process starts over at level 31.
If, during this progression down the priority levels, a thread with a higher priority than the one currently running becomes ready to run, it immediately preempts the currently executing thread. For example, if a level-20 thread is executing and a level-25 thread needs to execute, the level-20 thread is suspended and the level-25 thread starts executing.
At first the scheduling algorithm may seem unfair. In fact, it may even look as if thread starvation could set in. But the reality is that threads do not need to run very often.
For example, an information server may sit idle most of the time, waiting for client requests. User interface threads sit idle unless messages are placed in the process's message queue.
Windows NT also places applications in a sleep state. For example, when an application calls GetMessage to retrieve its messages and the message queue is empty, the application is put into an efficient sleep state.
Thread starvation occurs when a thread never gets a time slice from the processor. In other words, the thread never executes. Windows NT scheduling guarantees that starvation never occurs.
Thread Priority Classes
Windows NT does not allow you to set a thread's priority directly in terms of the 0-to-31 priority levels discussed in the previous section. Instead, a thread's priority level is determined by a two-step classification.
The first step is to assign a priority class to a process. Windows NT compares the priority class of each process in the system. The second step is to assign relative priority levels to each thread owned by the process.
The Win32 API allows four priority classes: Idle, Normal, High, and Realtime. Table 18.1 shows the priority classes and the default thread priority level for each class.
Table 18.1 Win32 Process Priority Class Default Thread Priority Level

| Priority Class | Priority Level |
| --- | --- |
| Idle | 4 |
| Normal | 7 - 9 |
| High | 13 |
| Realtime | 24 |
The default thread priority level is assigned to each thread created in the process.
The Normal priority class has a default priority level shown as 7 to 9 rather than a single level like the other priority classes. This is because the priority level of threads in a Normal-class process changes depending on the tasking mode of the Windows NT computer. As shown in Figure 18.2, the user can set the tasking mode for normal processes.
Fig. 18.2
Changing the thread priority level of normal Windows NT processes.
Most processes, including the IIS, run in the Normal priority class. The Normal priority class provides the best overall results for system performance.
A process can change its priority class through the SetPriorityClass function. The prototype for this function is shown in Listing 18.1.
Listing 18.1 Prototype for the SetPriorityClass Function
#define NORMAL_PRIORITY_CLASS 0x00000020
#define IDLE_PRIORITY_CLASS 0x00000040
#define HIGH_PRIORITY_CLASS 0x00000080
#define REALTIME_PRIORITY_CLASS 0x00000100
BOOL SetPriorityClass( HANDLE hProcess, DWORD fdwPriority );
The SetPriorityClass function takes two parameters. The first is the handle of the process whose priority class you want to change. The second is the new priority class for the process. All the priority class defines are shown in the code listing.
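As a minimal sketch (not part of the Lottery sample), a standalone background utility might lower its own priority class so it does not compete with interactive applications. The GetCurrentProcess call returns a pseudohandle for the calling process.

#include <windows.h>

BOOL RunAtIdlePriority( VOID )
{
    //
    // Lower this process to the Idle priority class. GetCurrentProcess
    // returns a pseudohandle for the calling process.
    //
    if ( !SetPriorityClass( GetCurrentProcess(), IDLE_PRIORITY_CLASS ) )
    {
        return FALSE;    // call GetLastError() for extended information
    }
    return TRUE;
}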
Thread Priority Levels
When a thread is created, it is assigned the default priority level of its process's priority class. For a process in the Idle priority class, the default priority level is 4. You can change the priority level of a thread with the SetThreadPriority function. The prototype of this function is shown below.
BOOL SetThreadPriority( HANDLE hThread, int nPriority);
The first parameter, hThread, is the handle of the thread to be changed. The second parameter, nPriority, can be one of the values shown in Table 18.2.
Table 18.2 SetThreadPriority nPriority Values

| nPriority Definition | Description |
| --- | --- |
| THREAD_PRIORITY_LOWEST | Change the thread's priority level to 2 less than the priority class default. |
| THREAD_PRIORITY_BELOW_NORMAL | Change the thread's priority level to 1 less than the priority class default. |
| THREAD_PRIORITY_NORMAL | Change the thread's priority level to the priority class default. |
| THREAD_PRIORITY_ABOVE_NORMAL | Change the thread's priority level to 1 more than the priority class default. |
| THREAD_PRIORITY_HIGHEST | Change the thread's priority level to 2 more than the priority class default. |
| THREAD_PRIORITY_IDLE | Change the thread's priority level to 1 unless the process priority class is Realtime. If the process priority class is Realtime, the thread's priority level is set to 16. |
| THREAD_PRIORITY_TIME_CRITICAL | Change the thread's priority level to 15 unless the process priority class is Realtime. If the process priority class is Realtime, the thread's priority level is set to 31. |
You are probably wondering how Windows NT uses priority levels when the Win32 API deals with process priority classes and relative thread priority levels. Table 18.3 shows a mapping of the process priority class and relative thread priority level to the Windows NT priority level.
Table 18.3 How Windows NT Determines a Thread's Base Priority Level
| Relative thread priority | Idle | Normal, in background | Normal, in foreground (Boost+1) | Normal, in high foreground (Boost+2) | High | Real-time |
| --- | --- | --- | --- | --- | --- | --- |
| Time-Critical | 15 | 15 | 15 | 15 | 15 | 31 |
| Highest | 6 | 9 | 10 | 11 | 15 | 26 |
| Above Normal | 5 | 8 | 9 | 10 | 14 | 25 |
| Normal | 4 | 7 | 8 | 9 | 13 | 24 |
| Below Normal | 3 | 6 | 7 | 8 | 12 | 23 |
| Lowest | 2 | 5 | 6 | 7 | 11 | 22 |
| Idle | 1 | 1 | 1 | 1 | 1 | 16 |
Rather than exposing the 32 priority levels directly, the priority class/level mechanism provides logical performance boundaries that can be used to affect the performance of applications and threads.
From Table 18.3, we can see that the process priority class directly affects the absolute priority level of each thread. For a thread running at the Normal relative priority, the absolute priority level changes as the process class changes from Idle to Realtime. The two-step mechanism allows for across-the-board changes to all threads of a process, as well as control of individual threads.
When to Change a Thread's Priority Level
There are a few reasons to change a thread's priority level in an ISAPI extension. You may decide that the performance of an ISAPI extension could be enhanced by changing the priority level of the thread that is calling your extension.
You may choose to boost the priority level of the calling thread while processing a time-consuming information server request; this improves the response time for the waiting client. When the results have been returned, the extension can reset the priority level of the calling thread to its original setting, as shown in the sketch below.
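A minimal sketch of this boost-and-restore pattern follows. The DoTimeConsumingWork routine is hypothetical and simply stands in for whatever lengthy processing the extension performs; only the priority calls are the point here.

#include <windows.h>

VOID DoTimeConsumingWork( VOID );    // hypothetical work routine

VOID ProcessRequestWithBoost( VOID )
{
    HANDLE hThread = GetCurrentThread();        // pseudohandle to the calling thread
    int    nOldPriority = GetThreadPriority( hThread );

    //
    // Raise the calling thread one step above its class default,
    // do the work, then restore the original priority level.
    //
    SetThreadPriority( hThread, THREAD_PRIORITY_ABOVE_NORMAL );
    DoTimeConsumingWork();
    SetThreadPriority( hThread, nOldPriority );
}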
If your extension creates its own thread, it may improve the performance of the extension and system to increase or decrease the thread's priority, depending on the function of the new thread.
Because your extension is loaded as a DLL, it is not good practice to change the priority class of the process. Changing the priority class of a process could adversely affect the overall performance of your system.
If you are raising a thread's priority level, avoid the Time Critical level. It is rarely necessary to raise a priority level by much.
Programming in a multithreaded environment is vastly different from programming in an environment that is not multithreaded. For example, several HTTP page requests may be in your ISAPI extension at the same time. If the requests are accessing shared data, some sort of thread synchronization must be provided to protect the data from corruption.
If your extension accesses data global to the extension, this must be protected by a synchronization object. Likewise, if your extension shares resources such as an open database connectivity (ODBC) database access handle, these resources must be protected so that multiple threads are not accessing the resource simultaneously.
Fortunately, Windows NT provides many forms of thread synchronization objects, such as critical sections, mutexes, semaphores, and events. Each of these objects is used for different purposes and is explained in the following sections.
Creating a new thread of execution is relatively easy. The two steps are

1. Write a thread function containing the code that the new thread executes.
2. Call the CreateThread function, passing it the address of the thread function.
Let's take a look at the Lottery extension. In the Lottery extension, a separate thread is created to receive incoming requests for new lottery numbers. As the requests are received, a new lucky lottery number is generated and returned to the client. The function shown in Listing 18.2 is called PoolThread and can be found in the lottery.c file.
Listing 18.2 The PoolThread Function
DWORD WINAPI PoolThread( LPDWORD lpParams )
{
WORK_QUEUE_ITEM * pwqi;
DWORD res;
while ( TRUE )
{
res = WaitForSingleObject( hWorkSem, INFINITE );
if ( res == WAIT_OBJECT_0 )
{
//
// There's work to do, grab the queue lock and get the next
// work item
//
EnterCriticalSection( &csQueueLock );
if ( WorkQueueList.Flink != &WorkQueueList )
{
pwqi = CONTAINING_RECORD( WorkQueueList.Flink,
WORK_QUEUE_ITEM,
ListEntry );
pwqi->ListEntry.Flink->Blink = &WorkQueueList;
WorkQueueList.Flink = pwqi->ListEntry.Flink;
cQueueItems--;
}
else
{
pwqi = NULL;
}
LeaveCriticalSection( &csQueueLock );
if ( !pwqi )
continue;
//
// Impersonate the specified user so security is maintained
// accessing system resources
//
ImpersonateLoggedOnUser( pwqi->hImpersonationToken );
SendLotteryNumber( pwqi->pecb );
RevertToSelf();
//
// Cleanup this work item
//
pwqi->pecb->ServerSupportFunction( pwqi->pecb->ConnID,
HSE_REQ_DONE_WITH_SESSION,
NULL,
NULL,
NULL );
CloseHandle( pwqi->hImpersonationToken );
FreeWorkItem( pwqi );
}
}
return 0;
}
Once the thread function is created, the next step is to call the CreateThread function.
HANDLE CreateThread( LPSECURITY_ATTRIBUTES lpThreadAttributes,
DWORD dwStackSize,
LPTHREAD_START_ROUTINE lpStartAddress,
LPVOID lpParameter,
DWORD dwCreationFlags,
LPDWORD lpThreadId
);
The Lottery extension creates all its worker threads during extension initialization, as shown in Listing 18.3. The extension creates two PoolThread instances for each processor on the computer, with a maximum of eight threads allowed.
When the Lottery extension calls CreateThread, it passes in the following parameters. NULL is passed in for the first parameter, the thread security attributes; NULL means that the thread is created with the default security attributes.
dwStackSize is set to zero, indicating use of the system default stack size for the thread. The address of the thread function PoolThread is the third parameter. The fourth parameter is a pointer to any parameters that are to be passed into the thread.
Lottery sets this to NULL, indicating no parameters. The dwCreationFlags parameter is set to 0, and the last parameter is the address of a DWORD that gets the thread ID of the newly created thread.
Listing 18.3 DllMain Function-Entry Point for All DLLs
BOOL WINAPI DllMain( IN HINSTANCE hinstDll,
IN DWORD fdwReason, IN LPVOID lpvContext OPTIONAL
)
/*++
Routine Description:
This function, DllMain(), is the main initialization function for
this DLL. It initializes local variables and prepares it to be invoked
subsequently.
Arguments:
hinstDll Instance Handle of the DLL
fdwReason Reason why NT called this DLL
lpvReserved Reserved parameter for future use.
Return Value:
Returns TRUE if successful; otherwise FALSE is returned.
--*/
{
BOOL fReturn = TRUE;
SYSTEM_INFO si;
DWORD i;
DWORD dwThreadId;
switch (fdwReason )
{
case DLL_PROCESS_ATTACH:
//
// Initialize various data and modules.
//
if ( !InitializeLottery() )
{
fReturn = FALSE;
break;
}
WorkQueueList.Flink = WorkQueueList.Blink = &WorkQueueList;
FreeQueueList.Flink = FreeQueueList.Blink = &FreeQueueList;
hWorkSem = CreateSemaphore( NULL,
0, // Not signalled initially
0x7fffffff, // Max reference count
NULL );
if ( !hWorkSem )
{
return FALSE;
}
InitializeCriticalSection( &csQueueLock );
//
// We don't care about thread attach/detach notifications
//
DisableThreadLibraryCalls( hinstDll );
//
// Do an extra LoadLibrary on ourselves so we get terminated when
// the process gets terminated (avoids worrying about thread cleanup
// issues on dll detach).
//
LoadLibrary( MODULE_NAME );
//
// Create our thread pool, two times the number of processors
//
GetSystemInfo( &si );
for ( i = 0;
i < THREADS_PER_PROCESSOR * si.dwNumberOfProcessors &&
i < MAX_THREADS;
i++ )
{
HANDLE hThread;
hThread = CreateThread( NULL,
0,
(LPTHREAD_START_ROUTINE) PoolThread,
NULL,
0,
&dwThreadId );
if ( !hThread )
{
CloseHandle( hWorkSem );
DeleteCriticalSection( &csQueueLock );
return FALSE;
}
//
// We don't use the thread handle so close it
//
CloseHandle( hThread );
}
break;
case DLL_PROCESS_DETACH:
{
//
// Note we should never get called because we did an extra
// LoadLibrary in our initialization
//
if ( lpvContext != NULL)
{
TerminateLottery();
DeleteCriticalSection( &csQueueLock );
CloseHandle( hWorkSem );
}
break;
} /* case DLL_PROCESS_DETACH */
default:
break;
} /* switch */
return ( fReturn);
} /* DllMain() */
One of the simplest yet most effective forms of thread synchronization is the critical section. A critical section is a small section of code that needs exclusive access to a shared data object or resource before the code can execute. A critical section allows only one thread at a time to gain access to a shared resource.
A critical section can only be used to synchronize threads in a single process. Since ISAPI extensions are used as DLLs that are loaded into the information server's process space, the critical section works very well for thread synchronization.
Creating a critical section is easy enough. The first step is to allocate a CRITICAL_SECTION data structure globally in the ISAPI extension. This allows access to the critical section by the different threads calling the extension.
Typically, the CRITICAL_SECTION data structure is declared as a global variable. The complete code for the lottery extension is shown in Listing 18.4 (lottery.c) and in Listing 18.5 (worker.c). These examples illustrate the declaration of the critical section csQueueLock as a global variable for the extension.
The CRITICAL_SECTION data structure has member variables within it. These variables are initialized and used by Windows NT, and should not be accessed or changed by your extension.
Once the CRITICAL_SECTION data structure is declared, the critical section must be initialized before it can be used by your extension. Since the data structure is global, it should be initialized in the DllMain function for your extension.
DllMain is the function called by the Windows NT operating system whenever it loads, initializes, or unloads a DLL from memory. Since ISAPI extensions are used as DLLs, this is the logical place for initializing or destroying all variables global to the extension.
Listing 18.4 Code for the Lottery Extension (lottery.c)
#include <windows.h>
#include <httpext.h>
#include "worker.h"
//
// Constants
//
//
// This is the maximum number of threads we'll allow in the pool
//
#define MAX_THREADS 8
//
// This is the number of threads per processor to create. If the threads
// are heavily IO bound (waiting on network connections for example), a higher
// number might be appropriate. If the threads are CPU bound, a lower number
// would be appropriate.
//
#define THREADS_PER_PROCESSOR 2
//
// This is the maximum number of items we'll allow on the work queue. If
// this number is exceeded we send a message to the client indicating
// there are too many users currently and they should try again later.
//
#define MAX_WORK_QUEUE_ITEMS 100
//
// The text to display when there are too many outstanding work items
//
#define SERVER_TOO_BUSY_TEXT "<head><title>Server too busy</title></head>" \
"<body><h2>The server is too busy to give " \
"your lucky lottery number right now. " \
"Please try again later.\n</body>"
//
// Must be the external .dll name of this module
//
#define MODULE_NAME "LOTTERY.DLL"
//
// Definitions
//
//
// This is the structure of a work queue item
//
typedef struct _WORK_QUEUE_ITEM
{
LIST_ENTRY ListEntry;
HANDLE hImpersonationToken;
EXTENSION_CONTROL_BLOCK * pecb;
} WORK_QUEUE_ITEM;
//
// Globals
//
//
// Protects the work queue and free queue
//
CRITICAL_SECTION csQueueLock;
//
// List of work items in a doubly linked circular list
//
LIST_ENTRY WorkQueueList;
//
// List of free WORK_QUEUE_ITEM structures
//
LIST_ENTRY FreeQueueList;
//
// Number of items on the work queue
//
DWORD cQueueItems = 0;
//
// Use a semaphore to indicate there's work to be performed. We use a
// semaphore rather then an event because a semaphore tracks how many times
// it has been signalled
//
HANDLE hWorkSem = NULL;
//
// Functions
//
BOOL WINAPI DllMain(
IN HINSTANCE hinstDll,
IN DWORD fdwReason,
IN LPVOID lpvContext OPTIONAL
)
/*++
Routine Description:
This function, DllMain(), is the main initialization function for
this DLL. It initializes local variables and prepares it to be invoked
subsequently.
Arguments:
hinstDll Instance Handle of the DLL
fdwReason Reason why NT called this DLL
lpvReserved Reserved parameter for future use.
Return Value:
Returns TRUE if successful; otherwise FALSE is returned.
--*/
{
BOOL fReturn = TRUE;
SYSTEM_INFO si;
DWORD i;
DWORD dwThreadId;
switch (fdwReason )
{
case DLL_PROCESS_ATTACH:
//
// Initialize various data and modules.
//
if ( !InitializeLottery() )
{
fReturn = FALSE;
break;
}
WorkQueueList.Flink = WorkQueueList.Blink = &WorkQueueList;
FreeQueueList.Flink = FreeQueueList.Blink = &FreeQueueList;
hWorkSem = CreateSemaphore( NULL,
0, // Not signalled initially
0x7fffffff, // Max reference count
NULL );
if ( !hWorkSem )
{
return FALSE;
}
InitializeCriticalSection( &csQueueLock );
//
// We don't care about thread attach/detach notifications
//
DisableThreadLibraryCalls( hinstDll );
//
// Do an extra LoadLibrary on ourselves so we get terminated when
// the process gets terminated (avoids worrying about thread cleanup
// issues on dll detach).
//
LoadLibrary( MODULE_NAME );
//
// Create our thread pool, two times the number of processors
//
GetSystemInfo( &si );
for ( i = 0;
i < THREADS_PER_PROCESSOR * si.dwNumberOfProcessors &&
i < MAX_THREADS;
i++ )
{
HANDLE hThread;
hThread = CreateThread( NULL,
0,
(LPTHREAD_START_ROUTINE) PoolThread,
NULL,
0,
&dwThreadId );
if ( !hThread )
{
CloseHandle( hWorkSem );
DeleteCriticalSection( &csQueueLock );
return FALSE;
}
//
// We don't use the thread handle so close it
//
CloseHandle( hThread );
}
break;
case DLL_PROCESS_DETACH:
{
//
// Note we should never get called because we did an extra
// LoadLibrary in our initialization
//
if ( lpvContext != NULL)
{
TerminateLottery();
DeleteCriticalSection( &csQueueLock );
CloseHandle( hWorkSem );
}
break;
} /* case DLL_PROCESS_DETACH */
default:
break;
} /* switch */
return ( fReturn);
} /* DllMain() */
BOOL WINAPI GetExtensionVersion ( HSE_VERSION_INFO * pver )
/*++
Routine Description:
This is the first function that is called when this ISAPI DLL is loaded.
We should fill in the version information in the structure passed in.
Arguments:
pVer - pointer to Server Extension Version Information structure.
Returns:
TRUE for success and FALSE for failure.
On success the valid version information is stored in *pVer.
--*/
{
pver->dwExtensionVersion = MAKELONG( HSE_VERSION_MINOR, HSE_VERSION_MAJOR );
strcpy( pver->lpszExtensionDesc,
"Multi-threaded ISAPI Application example, v 1.0" );
return TRUE;
}
DWORD WINAPI HttpExtensionProc( EXTENSION_CONTROL_BLOCK * pecb )
/*++
Routine Description:
This is the main function that is called for this ISAPI Extension.
This function processes the request and sends out appropriate response.
Arguments:
pecb - pointer to EXTENSION_CONTROL_BLOCK, which contains most of the
required variables for the extension called. In addition,
it contains the various callbacks as appropriate.
Returns:
HSE_STATUS code indicating the success/failure of this call.
--*/
{
WORK_QUEUE_ITEM * pwqi;
DWORD cb;
BOOL fRet;
HANDLE hImpersonationToken;
//
// Is the list too long? If so, tell the user to come back later
//
if ( cQueueItems + 1 > MAX_WORK_QUEUE_ITEMS )
{
//
// Send a message back to client indicating we're too busy, they
// should try again later.
//
fRet = SendError( pecb,
"503 Server too busy",
SERVER_TOO_BUSY_TEXT );
pecb->dwHttpStatusCode = 503;
return fRet ? HSE_STATUS_SUCCESS : HSE_STATUS_ERROR;
}
//
// Capture the current impersonation token so we can impersonate this
// user in the other thread
//
if ( !OpenThreadToken( GetCurrentThread(),
TOKEN_QUERY | TOKEN_IMPERSONATE,
TRUE, // Open in unimpersonated context
&hImpersonationToken ))
{
fRet = SendError( pecb,
"500 Failed to open thread token",
"Failed to open thread token" );
pecb->dwHttpStatusCode = 500;
return fRet ? HSE_STATUS_SUCCESS : HSE_STATUS_ERROR;
}
//
// Take the queue lock, get a queue item and put it on the queue
//
EnterCriticalSection( &csQueueLock );
pwqi = AllocateWorkItem();
if ( !pwqi )
{
fRet = SendError( pecb,
"500 Not enough memory",
"Failed to allocate work queue item" );
pecb->dwHttpStatusCode = 500;
LeaveCriticalSection( &csQueueLock );
CloseHandle( hImpersonationToken );
return fRet ? HSE_STATUS_SUCCESS : HSE_STATUS_ERROR;
}
//
// Initialize the work queue item and put it at the end of the list
//
pwqi->pecb = pecb;
pwqi->hImpersonationToken = hImpersonationToken;
pwqi->ListEntry.Flink = &WorkQueueList;
pwqi->ListEntry.Blink = WorkQueueList.Blink;
WorkQueueList.Blink->Flink = &pwqi->ListEntry;
WorkQueueList.Blink = &pwqi->ListEntry;
cQueueItems++;
LeaveCriticalSection( &csQueueLock );
//
// Signal the pool threads there is work to be done
//
ReleaseSemaphore( hWorkSem, 1, NULL );
return HSE_STATUS_PENDING;
}
DWORD WINAPI PoolThread( LPDWORD lpParams )
/*++
Routine Description:
This is an ISAPI pool thread
--*/
{
WORK_QUEUE_ITEM * pwqi;
DWORD res;
while ( TRUE )
{
res = WaitForSingleObject( hWorkSem, INFINITE );
if ( res == WAIT_OBJECT_0 )
{
//
// There's work to do, grab the queue lock and get the next
// work item
//
EnterCriticalSection( &csQueueLock );
if ( WorkQueueList.Flink != &WorkQueueList )
{
pwqi = CONTAINING_RECORD( WorkQueueList.Flink,
WORK_QUEUE_ITEM,
ListEntry );
pwqi->ListEntry.Flink->Blink = &WorkQueueList;
WorkQueueList.Flink = pwqi->ListEntry.Flink;
cQueueItems--;
}
else
{
pwqi = NULL;
}
LeaveCriticalSection( &csQueueLock );
if ( !pwqi )
continue;
//
// Impersonate the specified user so security is maintained
// accessing system resources
//
ImpersonateLoggedOnUser( pwqi->hImpersonationToken );
SendLotteryNumber( pwqi->pecb );
RevertToSelf();
//
// Cleanup this work item
//
pwqi->pecb->ServerSupportFunction( pwqi->pecb->ConnID,
HSE_REQ_DONE_WITH_SESSION,
NULL,
NULL,
NULL );
CloseHandle( pwqi->hImpersonationToken );
FreeWorkItem( pwqi );
}
}
return 0;
}
BOOL SendError(
EXTENSION_CONTROL_BLOCK * pecb,
CHAR * pszStatus,
CHAR * pszMessage
)
/*++
Routine Description:
Sends the specified error to the client
Arguments:
pecb - pointer to EXTENSION_CONTROL_BLOCK
pszStatus - Status line of response ("501 Server busy")
pszMessage - HTML message explaining the failure
Returns:
TRUE on success, FALSE on failure
--*/
{
BOOL fRet;
DWORD cb;
//
// Send the headers
//
fRet = pecb->ServerSupportFunction( pecb->ConnID,
HSE_REQ_SEND_RESPONSE_HEADER,
pszStatus,
NULL,
(LPDWORD) "Content-Type: text/html\r\n\r\n" );
//
// If that succeeded, send the message
//
if ( fRet )
{
cb = strlen( pszMessage );
fRet = pecb->WriteClient( pecb->ConnID,
pszMessage,
&cb,
0 );
}
return fRet;
}
WORK_QUEUE_ITEM * AllocateWorkItem( VOID )
/*++
Routine Description:
Allocates a work queue item by either retrieving one from the free list
or allocating it from the heap.
Note: THE QUEUE LOCK MUST BE TAKEN BEFORE CALLING THIS ROUTINE!
Returns:
Work queue item on success, NULL on failure
--*/
{
WORK_QUEUE_ITEM * pwqi;
//
// If the list is not empty, take a work item off the list
//
if ( FreeQueueList.Flink != &FreeQueueList )
{
pwqi = CONTAINING_RECORD( FreeQueueList.Flink,
WORK_QUEUE_ITEM,
ListEntry );
pwqi->ListEntry.Flink->Blink = &FreeQueueList;
FreeQueueList.Flink = pwqi->ListEntry.Flink;
}
else
{
pwqi = LocalAlloc( LPTR, sizeof( WORK_QUEUE_ITEM ));
}
return pwqi;
}
VOID FreeWorkItem( WORK_QUEUE_ITEM * pwqi )
/*++
Routine Description:
Frees the passed work queue item to the free list
Note: This routine takes the queue lock.
Arguments:
pwqi - Work queue item to free
--*/
{
//
// Take the queue lock and put on the free list
//
EnterCriticalSection( &csQueueLock );
pwqi->ListEntry.Flink = FreeQueueList.Flink;
pwqi->ListEntry.Blink = &FreeQueueList;
FreeQueueList.Flink->Blink = &pwqi->ListEntry;
FreeQueueList.Flink = &pwqi->ListEntry;
LeaveCriticalSection( &csQueueLock );
}
Listing 18.5 Code for the Lottery Extension (worker.c)
#include <windows.h>
#include <httpext.h>
#include "worker.h"
#include <time.h>
#include <stdlib.h>
//
// Constants
//
//
// The set of response headers we want to send with the response. Note
// this includes the header terminator
//
#define RESPONSE_HEADERS "Content-Type: text/html\r\n\r\n"
//
// Globals
//
//
// This global variable maintains the current state of
// the lottery number generated.
//
// The lottery number is generated using a combination
// of the sequence number and a random number generated on the fly.
//
DWORD g_dwLotteryNumberSequence = 0;
//
// Critical section to protect the global counter.
//
CRITICAL_SECTION g_csGlobal;
//
// Prototypes
//
VOID
GenerateLotteryNumber(
LPDWORD pLotNum1,
LPDWORD pLotNum2
);
//
// Functions
//
BOOL
InitializeLottery(
VOID
)
/*++
Routine Description:
Sets up the initial state for the lottery number generator
Returns:
TRUE on success, FALSE on failure
--*/
{
time_t pTime;
//
// Seed the random number generator
//
srand(time(&pTime));
g_dwLotteryNumberSequence = rand();
InitializeCriticalSection( &g_csGlobal );
return TRUE;
}
BOOL
SendLotteryNumber(
EXTENSION_CONTROL_BLOCK * pecb
)
/*++
Routine Description:
This function sends a randomly generated lottery number back to the client
Arguments:
pecb - pointer to EXTENSION_CONTROL_BLOCK for this request
Returns:
TRUE on success, FALSE on failure
--*/
{
BOOL fRet;
char rgBuff[2048];
//
// Send the response headers and status code
//
fRet = pecb->ServerSupportFunction(
pecb->ConnID, /* ConnID */
HSE_REQ_SEND_RESPONSE_HEADER, /* dwHSERRequest */
"200 OK", /* lpvBuffer */
NULL, /* lpdwSize. NULL=> send string */
(LPDWORD ) RESPONSE_HEADERS); /* header contents */
if ( fRet )
{
CHAR rgchLuckyNumber[40];
DWORD dwLotNum1, dwLotNum2;
DWORD cb;
CHAR rgchClientHost[200] = "LT";
DWORD cbClientHost = 200;
if ( !pecb->GetServerVariable(pecb->ConnID,
"REMOTE_HOST",
rgchClientHost,
&cbClientHost))
{
// No host name is available.
// Make up one
strcpy(rgchClientHost, "RH");
}
else
{
// terminate with just two characters
rgchClientHost[2] = '\0';
}
//
// Generate a lottery number, generate the contents of body and
// send the body to client.
//
GenerateLotteryNumber( &dwLotNum1, &dwLotNum2);
// Lottery Number format is: Number-2letters-Number.
wsprintf( rgchLuckyNumber, "%03d-%s-%05d",
dwLotNum1,
rgchClientHost,
dwLotNum2);
//
// Body of the message sent back.
//
cb = wsprintf( rgBuff,
"<head><title>Lucky Number</title></head>\n"
"<body><center><h1>Lucky Corner </h1></center><hr>"
"<h2>Your lottery number is: "
" <i> %s </i></h2>\n"
"<p><hr></body>",
rgchLuckyNumber);
fRet = pecb->WriteClient (pecb->ConnID, /* ConnID */
(LPVOID ) rgBuff, /* message */
&cb, /* lpdwBytes */
0 ); /* reserved */
}
return ( fRet ? HSE_STATUS_SUCCESS : HSE_STATUS_ERROR);
} /* SendLotteryNumber */
VOID
TerminateLottery(
VOID
)
{
DeleteCriticalSection( &g_csGlobal );
}
VOID
GenerateLotteryNumber(
LPDWORD pLotNum1,
LPDWORD pLotNum2
)
{
DWORD dwLotteryNum;
DWORD dwModulo;
//
// Obtain the current lottery number and increment the counter.
// To keep this thread-safe, use a critical section around it.
//
EnterCriticalSection( &g_csGlobal);
dwLotteryNum = g_dwLotteryNumberSequence++;
LeaveCriticalSection( &g_csGlobal);
// obtain a non-zero modulo value
do {
dwModulo = rand();
} while ( dwModulo == 0);
// split the lottery number into two parts.
*pLotNum1 = (dwLotteryNum / dwModulo);
*pLotNum2 = (dwLotteryNum % dwModulo);
return;
} // GenerateLotteryNumber()
For developers using the Microsoft Foundation Classes (MFC) to build extensions, MFC does not provide direct access to the DllMain function. Instead, the extension's application object has a virtual method that can be overridden. This method is MyApp::InitApplication.
The InitApplication method is called when the extension is first loaded into memory. The CriticalSection object can be initialized in this method.
To initialize a critical section, call the InitializeCriticalSection function. The prototype for this function is shown below.
VOID InitializeCriticalSection( LPCRITICAL_SECTION lpCriticalSection );
As shown in the prototype, a pointer to a CRITICAL_SECTION structure is passed in as the only parameter to the function.
Once the critical section is initialized, it is easy to protect your code with a critical section. Two functions control the entrance and exit to a block of code protected by a critical section: EnterCriticalSection and LeaveCriticalSection. The prototypes for these functions are shown below.
VOID EnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection );
VOID LeaveCriticalSection(LPCRITICAL_SECTION lpCriticalSection );
Both LeaveCriticalSection and EnterCriticalSection take only one parameter, a pointer to a CRITICAL_SECTION object.
Only destroy a critical section in the exit routine of the extension. This ensures that the process that loaded the extension has finished using the extension. It also ensures that there are no threads from the process using the variables in the extension.
Destroy the critical section object in the DllMain function. DllMain is called with the fdwReason parameter set to DLL_PROCESS_DETACH when the process has finished using the extension.
For MFC users, the application framework provides a virtual method, MyApp::ExitApplication, that can be overridden by the extension developer. This method is called when the extension is unloaded from memory.
You destroy a critical section by calling the DeleteCriticalSection function. The prototype for this function is
VOID DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection );
Like the initialization function, DeleteCriticalSection takes a pointer to a CRITICAL_SECTION structure as its only parameter. Once the critical section is destroyed, it should not be used again without reinitialization.
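The sketch below ties the four calls together. It assumes a hypothetical global request counter; the initialization and cleanup routines would be called from DllMain on DLL_PROCESS_ATTACH and DLL_PROCESS_DETACH, as described above.

#include <windows.h>

CRITICAL_SECTION g_csCounter;        // declared globally, as described earlier
DWORD            g_cRequests = 0;    // hypothetical shared counter

VOID InitCounter( VOID )             // call from DllMain on DLL_PROCESS_ATTACH
{
    InitializeCriticalSection( &g_csCounter );
}

VOID CountRequest( VOID )            // call from any thread handling a request
{
    EnterCriticalSection( &g_csCounter );
    g_cRequests++;                   // only one thread at a time executes this
    LeaveCriticalSection( &g_csCounter );
}

VOID CleanupCounter( VOID )          // call from DllMain on DLL_PROCESS_DETACH
{
    DeleteCriticalSection( &g_csCounter );
}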
In the previous sections, you have seen how to protect a block of code using a critical section. What if there are multiple blocks of code to be protected from simultaneous access? Two options are available: create another critical section or use an existing critical section.
If you create another critical section object, you must follow the procedures under "Protecting a Block of Code with a Critical Section" earlier in this chapter. Another useful approach is to use the same critical section to protect multiple blocks of code.
Let's look at the sample code in Listing 18.6 to see the use of a single critical section to protect multiple code blocks.
Listing 18.6 Single Critical Section Protecting Multiple Blocks
CRITICAL_SECTION gh_GlobalCriticalSection;
long gl_GlobalTime[20];
.
.
.
DWORD WINAPI ExampleThread( LPVOID lpvParameter )
{
int li_Index = (int)lpvParameter;
EnterCriticalSection(&gh_GlobalCriticalSection);
if ( gl_GlobalTime[li_Index] < time(NULL) )
UpdateTimeSlot(li_Index);
LeaveCriticalSection(&gh_GlobalCriticalSection);
return(0);
}
void UpdateTimeSlot(int vi_Index )
{
EnterCriticalSection(&gh_GlobalCriticalSection);
// increment the time index by one hour
gl_GlobalTime[vi_Index] = time(NULL) + (60 * 60);
LeaveCriticalSection(&gh_GlobalCriticalSection);
}
In Listing 18.6, the ExampleThread function enters the critical section when it first starts to execute. After it enters the critical section, the thread tests the value stored in one slot of an array of time values against the current time. In this example, the array of time values is protected by the critical section. This way, no value in the array can be updated while the thread is testing the value.
If the value in the slot being tested is earlier than the current time, the UpdateTimeSlot function is called. UpdateTimeSlot is an independent function that can be called without knowledge of what functions call it. The global critical section is used to protect the array while the function updates a slot in the array.
In Listing 18.6, ExampleThread is calling UpdateTimeSlot while the thread is in possession of the critical section. This is a perfectly legal operation.
Windows NT knows that the thread that is in possession of the critical section is again requesting the critical section. So NT simply increments the internal reference count of the critical section and the function proceeds.
The thread relinquishes possession of the critical section only after LeaveCriticalSection has been called twice, once in the UpdateTimeSlot function and once in the ExampleThread function.
Thus multiple code blocks are protected using the same critical section.
For ISAPI extensions, critical section objects are the most logical choices for protecting resources from simultaneous access. Critical sections provide fast access for serializing data access in a single process.
However, depending on the function of the extension, you may need to use other synchronization objects for different circumstances. In addition to critical sections, Windows NT offers events, semaphores, and mutexes for thread and process synchronization.
Events, semaphores, and mutexes all run on the same premise: each object can be in one of two states at any time. Valid object states are signaled and nonsignaled.
An object can be accessed by a thread when in the signaled state. A thread is put into an efficient wait state while waiting for an object to be signaled. Once the object is signaled, the thread begins to execute.
Two functions are used by threads to wait for an object to be signaled: WaitForSingleObject and WaitForMultipleObjects. These two functions are polymorphic in nature because they work with all of the Win32 synchronization objects. The prototypes for these functions are shown below.
DWORD WaitForSingleObject( HANDLE hObject, DWORD dwTimeout);
DWORD WaitForMultipleObjects( DWORD dwObjects, LPHANDLE lpHandles,
BOOL bWaitAll, DWORD dwTimeout);
The first function, WaitForSingleObject, takes two parameters. The first parameter, hObject, is a handle to a synchronization object that could be an event, a mutex, or a semaphore. The second parameter, dwTimeout, specifies, in milliseconds, how long to wait for the object to become signaled.
When a thread calls WaitForSingleObject, the thread waits for the object to be signaled. Once the object state is signaled or if the wait time expires, the function returns and the thread continues to execute.
The action that the thread takes should be based on the return value. WaitForSingleObject returns one of the following values:
| Return Value | Description |
| --- | --- |
| WAIT_OBJECT_0 | The object's state is signaled. |
| WAIT_TIMEOUT | The object did not reach the signaled state in the time specified in the timeout parameter. |
| WAIT_ABANDONED | This is returned for mutex objects. This return value specifies that the mutex was abandoned by another thread, so its state is signaled. |
| WAIT_FAILED | This value indicates that an error has occurred. The GetLastError function can be called to retrieve additional information. |
Two special values can be passed in as the timeout parameter to WaitForSingleObject. A value of 0 specifies that the operating system should return the current state of the object without waiting.
A return value of WAIT_OBJECT_0 indicates that the object's state is signaled; a return value of WAIT_TIMEOUT indicates that the object's state is not signaled.
At the other extreme, a value of INFINITE can be passed in as the timeout parameter. This value indicates that the operating system should wait until the object's state is signaled, however long that takes. If the object never reaches a signaled state, the thread remains suspended until the process exits.
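A minimal sketch of checking these return values follows. The handle hWorkSem is assumed to be a valid synchronization object, such as the semaphore created in Listing 18.4; the five-second timeout is arbitrary.

#include <windows.h>

extern HANDLE hWorkSem;              // assumed to be a valid synchronization object

BOOL WaitForWork( VOID )
{
    DWORD res = WaitForSingleObject( hWorkSem, 5000 );   // wait up to five seconds

    switch ( res )
    {
        case WAIT_OBJECT_0:          // the object became signaled
            return TRUE;

        case WAIT_TIMEOUT:           // still nonsignaled after five seconds
            return FALSE;

        case WAIT_FAILED:            // an error occurred; call GetLastError
        default:
            return FALSE;
    }
}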
The second function, WaitForMultipleObjects, is like WaitForSingleObject except that it can wait for one or more objects to be signaled. WaitForMultipleObjects takes four parameters.
The first parameter, dwObjects, specifies how many object handles are being passed into the function. This parameter cannot exceed MAXIMUM_WAIT_OBJECTS, which is 64. The second parameter, lpHandles, is a pointer to an array of object handles.
An error occurs if WaitForMultipleObjects is called with the same object appearing more than once in the list. This error occurs even if the same object is referenced through different handles.
The third parameter, bWaitAll, indicates whether the function should wait for all objects in the list to be signaled (TRUE) or wait for any object in the list to be signaled (FALSE). Like WaitForSingleObject, the last parameter, dwTimeout, specifies how long to wait for the list or a single object to be signaled.
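The sketch below shows one common use of WaitForMultipleObjects: waiting for two worker threads to terminate. The thread handles are assumed to have been returned by earlier calls to CreateThread; a thread handle becomes signaled when the thread ends.

#include <windows.h>

BOOL WaitForWorkers( HANDLE hThread1, HANDLE hThread2 )
{
    HANDLE rgHandles[2];
    DWORD  res;

    rgHandles[0] = hThread1;
    rgHandles[1] = hThread2;

    //
    // bWaitAll is TRUE, so the call returns only when both handles
    // are signaled (both threads have terminated).
    //
    res = WaitForMultipleObjects( 2, rgHandles, TRUE, INFINITE );

    return ( res != WAIT_FAILED );
}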
For mutex, semaphore, and event objects, one function call destroys or invalidates the object: CloseHandle.
BOOL CloseHandle( HANDLE hObject );
CloseHandle is a generic function that closes any Win32 HANDLE such as mutexes, events, semaphores, and communication ports.
Mutex objects are like critical sections except that mutexes can be used to synchronize resource access across multiple processes. Unlike critical sections, which are declared variables that are initialized, mutex objects must be created by the operating system.
To create a mutex object you call the CreateMutex function. The prototype is shown below.
HANDLE CreateMutex(LPSECURITY_ATTRIBUTES lpSecurity,
BOOL bInitialOwner, LPTSTR lpzMutexName );
When the CreateMutex function is called successfully, a handle of a mutex object is returned. The first parameter to CreateMutex, lpSecurity, is a pointer to a SECURITY_ATTRIBUTES structure.
Passing in NULL creates a mutex with the default security settings for the current user. The second parameter, bInitialOwner, indicates whether the thread creating the mutex should gain immediate ownership, placing the object in the nonsignaled state so other threads cannot access it.
If TRUE is passed in, the thread creating the mutex has ownership, which means that the mutex is in a nonsignaled state. If FALSE is passed in, the mutex is not owned and its state is signaled.
The third parameter, lpzMutexName, is a pointer to a string used for naming the mutex. The mutex name is used to identify a common mutex object to be shared between multiple threads or processes.
The most common method for using a mutex is for each thread that needs access to the mutex to call CreateMutex, passing the same string in the lpzMutexName parameter. The first thread calling CreateMutex creates the mutex object.
As subsequent calls are made to CreateMutex, the operating system determines that a mutex with an identical name already exists. If the mutex already exists, the operating system creates a new handle to the existing mutex object.
To use the mutex, the calling thread calls WaitForSingleObject, passing in the handle to the mutex returned from CreateMutex. When WaitForSingleObject sees that the mutex has reached a signaled state, the waiting thread immediately gets ownership of the mutex. The mutex is placed back in a nonsignaled state, and the thread continues to execute.
When a thread is finished accessing the resources protected by the mutex, it can release the mutex. Releasing the mutex puts the mutex back into a signaled state. The mutex can be released by calling the ReleaseMutex function. The prototype for ReleaseMutex is shown below.
BOOL ReleaseMutex( HANDLE hMutex );
The ReleaseMutex function takes a handle to the mutex as its only parameter.
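A minimal sketch of the create, wait, and release sequence follows. The mutex name and the UpdateSharedFile routine are hypothetical; they simply stand in for whatever shared resource your extension protects.

#include <windows.h>

VOID UpdateSharedFile( VOID );       // hypothetical shared-resource routine

BOOL UpdateWithMutex( VOID )
{
    HANDLE hMutex;
    DWORD  res;

    //
    // Every thread calls CreateMutex with the same name. The first call
    // creates the mutex; later calls return a new handle to the same object.
    //
    hMutex = CreateMutex( NULL, FALSE, "HypotheticalExtensionMutex" );
    if ( !hMutex )
        return FALSE;

    res = WaitForSingleObject( hMutex, INFINITE );
    if ( res == WAIT_OBJECT_0 || res == WAIT_ABANDONED )
    {
        UpdateSharedFile();          // only the owning thread executes this
        ReleaseMutex( hMutex );      // return the mutex to the signaled state
    }

    CloseHandle( hMutex );
    return TRUE;
}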
Mutex objects have characteristics that other synchronization objects lack. A mutex is owned by a thread, not by a process.
Like other synchronization objects such as events and semaphores, a mutex has two states, signaled and nonsignaled. Unlike them, a mutex also remembers which thread currently owns it.
This is significant because a thread can gain ownership of a mutex and then terminate, either normally or abnormally, without releasing the mutex. Such a mutex is said to be abandoned.
When the operating system sees an abandoned mutex, it automatically switches the state of the mutex to signaled, allowing other threads waiting for the mutex to continue executing.
Like events or semaphores, mutexes are destroyed by calling the CloseHandle function.
Semaphores represent a different type of synchronization object from events, critical sections, or mutexes. Semaphores are used to keep track of the number of available resources.
A semaphore allows a thread to query for the availability of a resource. If the resource is available, the count of available resources is decremented.
A semaphore is created by calling the CreateSemaphore function.
HANDLE CreateSemaphore( LPSECURITY_ATTRIBUTES lpSecurity,
LONG cInitialCount, LONG cMaxCount, LPTSTR lpszSemName);
CreateSemaphore takes four parameters. The first parameter, lpSecurity, is a pointer to a SECURITY_ATTRIBUTES structure. NULL can be used to specify the default security attributes for the current user.
The second parameter, cInitialCount, specifies how many of the resources are available initially. Typically, this is set to the same value as cMaxCount.
cMaxCount specifies the maximum number of resources available. This parameter indicates how many threads can simultaneously access the resource(s) associated with the semaphore.
For example, if your extension maintains two open database connectivity (ODBC) database handles that can be used to process hypertext transport protocol (HTTP) requests, 2 is passed in for both cInitialCount and cMaxCount, indicating that two handles are available.
lpszSemName is a string used to name the semaphore. The name identifies the semaphore so that other processes or threads can get a handle to it.
Once a semaphore is created, a thread calls WaitForSingleObject, passing in the handle of the semaphore, to determine whether one of the resources protected by the semaphore is available. If WaitForSingleObject returns WAIT_OBJECT_0, a resource is available. WaitForSingleObject decrements the semaphore count before it returns to the calling thread.
The operating system operates on a semaphore atomically. When a resource is requested from a semaphore, the operating system tests whether the resource is available and, if so, decrements the count without letting another thread use the semaphore in between. Only after the count is decremented can another thread access the semaphore.
A semaphore can be released (count incremented) by calling the ReleaseSemaphore function.
BOOL ReleaseSemaphore( HANDLE hSemaphore, LONG dwRelease, LPLONG lpPrevious);
The first parameter, hSemaphore, is a handle to a semaphore. The second parameter, dwRelease, is a count that indicates how much to increment the semaphore count by.
This is typically set to 1. But if a thread has called WaitForSingleObject several times, the semaphore count can be restored with a single call to ReleaseSemaphore by setting dwRelease to the number of times WaitForSingleObject was called.
The last parameter, lpPrevious, is a pointer to a long integer. ReleaseSemaphore fills this parameter with the resource count of the semaphore before adding dwRelease to it.
Unlike a mutex or a critical section, a semaphore is not owned by a thread. This means that a semaphore can be released by any thread, as long as the thread has a valid handle to the semaphore.
Even though any thread can release a semaphore, it is not good practice for a thread other than the one that acquired the semaphore to release it.
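The sketch below shows the semaphore pattern from the ODBC example above: a count of two guards two database handles. The UseDatabaseHandle routine is hypothetical; only the semaphore calls are the point here.

#include <windows.h>

VOID UseDatabaseHandle( VOID );      // hypothetical database work

HANDLE g_hDbSem = NULL;              // guards two hypothetical database handles

BOOL InitConnectionPool( VOID )
{
    g_hDbSem = CreateSemaphore( NULL,
                                2,       // two resources available initially
                                2,       // two resources at most
                                NULL );
    return ( g_hDbSem != NULL );
}

BOOL QueryDatabase( VOID )
{
    //
    // The wait succeeds only when at least one handle is free; the
    // semaphore count is decremented before the call returns.
    //
    if ( WaitForSingleObject( g_hDbSem, INFINITE ) != WAIT_OBJECT_0 )
        return FALSE;

    UseDatabaseHandle();

    ReleaseSemaphore( g_hDbSem, 1, NULL );   // add one back to the count
    return TRUE;
}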
Events are another form of synchronization object, different from mutexes and semaphores. Events are generally used to indicate that an operation has completed. This is in contrast to semaphores, mutexes, and critical sections, which are generally used to control access to resources.
The two kinds of events are manual-reset and auto-reset. Manual-reset events are typically used to signal several threads simultaneously that an operation has completed. Auto-reset events are typically used to signal a single thread that an operation has completed.
Events are most often used in situations where one thread does some kind of initialization work. Then one or more threads can do some work after the initialization is completed.
For example, if your extension needs to read in a large file or get database information before any requests can be accepted, the extension can start a thread to get the necessary data. The initialization thread sets the event to a nonsignaled state.
Other threads trying to execute are blocked when waiting for the event to be signaled. After the data is obtained, the initialization thread sets the event to a signaled state, thus releasing any threads waiting for the event.
To create an event, the CreateEvent function is called.
HANDLE CreateEvent( LPSECURITY_ATTRIBUTES lpSecurity,
BOOL bManualReset, BOOL bInitialState, LPTSTR lpszEventName);
The first parameter, lpSecurity, is a pointer to a SECURITY_ATTRIBUTES structure. NULL can be passed in to get the security attributes of the current user.
The second parameter, bManualReset, indicates if the event is to be reset manually (TRUE) or if the event is auto-reset (FALSE). The third parameter, bInitialState, can set the initial state of the event to signaled (TRUE) or nonsignaled (FALSE).
The fourth parameter, lpszEventName, is a pointer to a string that can be used to name the event. If the event is named, other threads and other processes can get a handle to the event by passing in the same string to CreateEvent.
Like mutexes and semaphores, threads waiting for an event to be signaled must call either the WaitForSingleObject function or the WaitForMultipleObjects function. The effect these functions have on an event varies, depending on whether the event is manual or auto-reset.
Using Manual-Reset Events
When a thread calls WaitForSingleObject or WaitForMultipleObjects on a manual-reset event, the event is not automatically reset to a nonsignaled state. This action is important because if multiple threads are waiting for the event to occur, all of the threads can execute when the event occurs. In other words, when a manual-reset event is signaled, all threads waiting on the event are allowed to run.
A thread sets an event object to signaled by calling the SetEvent function.
BOOL SetEvent( HANDLE hEvent );
This function takes a handle of an event as its only parameter and returns TRUE if successful. When a manual-reset event is signaled, it remains signaled until the event is explicitly reset to a nonsignaled state. An event can be reset by calling the ResetEvent function.
BOOL ResetEvent(HANDLE hEvent );
One more function that is useful when using manual-reset events is PulseEvent.
BOOL PulseEvent( HANDLE hEvent );
PulseEvent is equivalent to calling SetEvent to release any waiting threads and then immediately calling ResetEvent to return the event to a nonsignaled state.
Using Auto-Reset Events
An auto-reset event behaves differently from manual-reset events. If multiple threads are waiting for an event to occur, only one thread is released when the event is set to a signaled state.
When you use auto-reset events, both WaitForSingleObject and WaitForMultipleObjects reset the event back to a nonsignaled state before returning to the calling thread. This ensures that only one waiting thread is released each time the event is signaled.
In other words, after a thread calls SetEvent to set the event to a signaled state, the thread does not have to call ResetEvent. The event is reset by either WaitForSingleObject or WaitForMultipleObjects before the function returns to the calling thread.
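The sketch below shows the one-time initialization pattern described earlier in this section, using a manual-reset event. LoadConfiguration is a hypothetical routine that reads whatever file or database information the extension needs before it can accept requests.

#include <windows.h>

VOID LoadConfiguration( VOID );      // hypothetical initialization work

HANDLE g_hInitDone = NULL;

BOOL CreateInitEvent( VOID )
{
    // Manual-reset event, initially nonsignaled
    g_hInitDone = CreateEvent( NULL, TRUE, FALSE, NULL );
    return ( g_hInitDone != NULL );
}

DWORD WINAPI InitThread( LPVOID lpParam )
{
    LoadConfiguration();
    SetEvent( g_hInitDone );         // release every thread waiting on the event
    return 0;
}

VOID WaitForInitialization( VOID )   // called by each request-handling thread
{
    //
    // Because the event is manual-reset, it stays signaled after SetEvent,
    // so threads that arrive later pass through without blocking.
    //
    WaitForSingleObject( g_hInitDone, INFINITE );
}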
In this chapter, you learned how Windows NT schedules threads, how to add multithreading capabilities to ISAPI extensions, and how to synchronize access to resources in ISAPI extensions.