Programming Microsoft Windows CE .NET, 3rd Edition

Synchronization

With multiple threads running around the system, you need a way to coordinate their activities. Fortunately, Windows CE supports almost the entire extensive set of standard Win32 synchronization objects. The concept of synchronization objects is fairly simple. A thread waits on a synchronization object. When the object is signaled, the waiting thread is unblocked and is scheduled (according to the rules governing the thread's priority) to run.

Windows CE doesn't support some of the synchronization primitives supported by Windows XP. These unsupported elements include file change notifications and waitable timers. The lack of waitable timer support can easily be worked around using other synchronization objects or, for longer-period timeouts, the more flexible Notification API, unique to Windows CE.

One aspect unique to Windows CE is that the different synchronization objects don't share the same namespace. This means that if you have an event named Bob, you can also have a mutex named Bob. (I'll talk about mutexes later in this chapter.) This is different from Windows XP's rule, where all kernel objects (of which synchronization objects are a part) share the same namespace. While reusing a name across object types is possible in Windows CE, it's not advisable. Not only does the practice make your code incompatible with Windows XP, there's no telling whether a redesign of the internals of Windows CE might enforce this restriction in the future.


Events


The first synchronization primitive I'll describe is the event object. An event object is a synchronization object that can be in a signaled or nonsignaled state. Events are useful for letting a thread know that, well, an event has occurred. Event objects can either be created to automatically reset from a signaled state to a nonsignaled state or require a manual reset to return the object to its nonsignaled state. Events can be named and therefore shared across different processes, allowing interprocess synchronization.

An event is created by means of this function:

HANDLE CreateEvent (LPSECURITY_ATTRIBUTES lpEventAttributes, 
                    BOOL bManualReset, BOOL bInitialState,
                    LPCTSTR lpName);

As with all calls in Windows CE, the security attributes parameter, lpEventAttributes, should be set to NULL. The second parameter indicates whether the event being created requires a manual reset or will automatically reset to a nonsignaled state immediately after being signaled. Setting bManualReset to TRUE creates an event that must be manually reset. The bInitialState parameter specifies whether the event object is initially created in the signaled or nonsignaled state. Finally, the lpName parameter points to an optional string that names the event. Events that are named can be shared across processes. If two processes create event objects of the same name, the processes actually share the same object. This allows one process to signal the other process using event objects. If you don't want a named event, the lpName parameter can be set to NULL.

To share an event object across processes, each process must individually create the event object. You shouldn't just create the event in one process and send the handle of that event to another process. To determine whether a call to CreateEvent created a new event object or opened an already created object, you can call GetLastError immediately following the call to CreateEvent. If GetLastError returns ERROR_ALREADY_EXISTS, the call opened an existing event.
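
To make this concrete, here's a minimal sketch of a function each process might call to get a handle to a shared, named event. The event name and the function are my own inventions for illustration, not part of any real application.

#include <windows.h>

HANDLE OpenSharedEvent (BOOL *pfAlreadyExisted) {
    HANDLE hEvent;

    // Create (or open) a named auto-reset event, initially nonsignaled.
    hEvent = CreateEvent (NULL, FALSE, FALSE, TEXT("MyAppDataReady"));
    if (hEvent == NULL)
        return NULL;                     // The call itself failed.

    // ERROR_ALREADY_EXISTS means another process created this event
    // first and the call above simply opened the existing object.
    *pfAlreadyExisted = (GetLastError () == ERROR_ALREADY_EXISTS);
    return hEvent;
}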

Once you have an event object, you'll need to be able to signal the event. You accomplish this using either of the following two functions:

BOOL SetEvent (HANDLE hEvent);

or

BOOL PulseEvent (HANDLE hEvent);

The difference between these two functions is that SetEvent doesn't automatically reset the event object to a nonsignaled state. For autoreset events, SetEvent is all you need because the event is automatically reset once a thread unblocks on the event. For manual reset events, you must manually reset the event with this function:

BOOL ResetEvent (HANDLE hEvent);

These event functions sound like they overlap, so let's review. An event object can be created to reset itself or require a manual reset. If it can reset itself, a call to SetEvent signals the event object. The event is then automatically reset to the nonsignaled state when one thread is unblocked after waiting on that event. An event that resets itself doesn't need PulseEvent or ResetEvent. If, however, the event object was created requiring a manual reset, the need for ResetEvent is obvious.

PulseEvent signals the event and then resets the event, which allows all threads waiting on that event to be unblocked. So the difference between PulseEvent on a manually resetting event and SetEvent on an automatic resetting event is that using SetEvent on an automatic resetting event frees only one thread to run, even if many threads are waiting on that event. PulseEvent frees all threads waiting on that event.
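
As a quick illustration of that difference, here's a minimal sketch. The two handles are assumed to have been created elsewhere, hAutoEvent as an auto-reset event and hManualEvent as a manual-reset event, each with several threads blocked on it.

#include <windows.h>

void WakeWaiters (HANDLE hAutoEvent, HANDLE hManualEvent) {
    // Auto-reset event: exactly one waiting thread is released, and the
    // event returns to the nonsignaled state as that thread unblocks.
    SetEvent (hAutoEvent);

    // Manual-reset event: PulseEvent releases every thread currently
    // waiting and then leaves the event nonsignaled again.
    PulseEvent (hManualEvent);
}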

An application can associate a single DWORD value with an event by calling

BOOL SetEventData (HANDLE hEvent, DWORD dwData);

The parameters are the handle of the event and the data to associate with that event. Any application can retrieve the data by calling

DWORD GetEventData (HANDLE hEvent);

The single parameter is the handle to the event. The return value is the data previously associated with the event.
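
Here's a minimal sketch of how the pair might be used to pass a value along with the signal. The handle, the function names, and the meaning of the value are illustrative assumptions.

#include <windows.h>

// Signaling side: attach a DWORD to the event, then signal it.
void PassValueWithEvent (HANDLE hEvent, DWORD dwBufferIndex) {
    SetEventData (hEvent, dwBufferIndex);
    SetEvent (hEvent);
}

// Waking side: after the wait completes, read back the associated value.
DWORD ReadValueFromEvent (HANDLE hEvent) {
    return GetEventData (hEvent);
}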

You destroy event objects by calling CloseHandle. If the event object is named, Windows maintains a use count on the object, so one call to CloseHandle must be made for every call to CreateEvent.


Waiting...


It's all well and good to have event objects; the question is how to use them. Threads wait on events, as well as on the soon-to-be-described semaphores and mutexes, using one of the following functions: WaitForSingleObject, WaitForMultipleObjects, MsgWaitForMultipleObjects, or MsgWaitForMultipleObjectsEx. Under Windows CE, the WaitForMultiple functions are limited in that they can't wait for all the objects in a set to be signaled; they support only waiting until any one object in the set is signaled. Whatever the limitations of waiting, I can't emphasize enough that waiting is good. While a thread is blocked in one of these functions, the thread enters an extremely efficient state that takes very little CPU processing power and battery power.

Another point to remember is that the thread responsible for handling a message loop in your application (usually the application's primary thread) shouldn't be blocked by WaitForSingleObject or WaitForMultipleObjects because the thread can't be retrieving and dispatching messages in the message loop if it's blocked waiting on an object. The function MsgWaitForMultipleObjects gives you a way around this problem, but in a multithreaded environment, it's usually easier to let the primary thread handle the message loop and secondary threads handle the shared resources that require blocking on events.

Waiting on a Single Object


A thread can wait on a synchronization object with the function

DWORD WaitForSingleObject (HANDLE hHandle, DWORD dwMilliseconds);

The function takes two parameters: the handle to the object being waited on and a timeout value. If you don't want the wait to time out, you can pass the value INFINITE in the dwMilliseconds parameter. The function returns a value that indicates why the function returned. Calling WaitForSingleObject blocks the thread until the event is signaled, the synchronization object is abandoned, or the timeout value is reached.

WaitForSingleObject returns one of the following values:



WAIT_OBJECT_0 The specified object was signaled.



WAIT_TIMEOUT The timeout interval elapsed, and the object's state remains nonsignaled.



WAIT_ABANDONED The thread that owned a mutex object being waited on ended without freeing the object.



WAIT_FAILED The handle of the synchronization object was invalid.



You must check the return code from WaitForSingleObject to determine whether the event was signaled or simply that the timeout had expired. (The WAIT_ABANDONED return value will be relevant when I talk about mutexes soon.)
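
Here's a minimal sketch of that check, assuming hEvent was created earlier; the 5-second timeout is an arbitrary choice for illustration.

#include <windows.h>

BOOL WaitForDataReady (HANDLE hEvent) {
    DWORD rc = WaitForSingleObject (hEvent, 5000);

    switch (rc) {
    case WAIT_OBJECT_0:
        return TRUE;              // The event was signaled.
    case WAIT_TIMEOUT:
        return FALSE;             // Five seconds passed without a signal.
    case WAIT_ABANDONED:          // Only meaningful when waiting on a mutex.
    case WAIT_FAILED:             // Invalid handle.
    default:
        return FALSE;
    }
}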

Waiting on Processes and Threads


I've talked about waiting on events, but you can also wait on handles to processes and threads. These handles are signaled when their processes or threads terminate. This allows a process to monitor another process (or thread) and perform some action when the process terminates. One common use for this feature is for one process to launch another and then, by blocking on the handle to the newly created process, wait until that process terminates.

The rather irritating routine below is a thread that demonstrates this technique by launching an application, blocking until that application closes, and then relaunching the application:

DWORD WINAPI KeepRunning (PVOID pArg) {
    PROCESS_INFORMATION pi;
    TCHAR szFileName[MAX_PATH];
    int rc = 0;

    // Copy the filename.
    lstrcpy (szFileName, (LPTSTR)pArg);
    while (1) {
        // Launch the application.
        rc = CreateProcess (szFileName, NULL, NULL, NULL, FALSE,
                            0, NULL, NULL, NULL, &pi);
        // If the application didn't start, terminate thread.
        if (!rc)
            return -1;
        // Close the new process's primary thread handle.
        CloseHandle (pi.hThread);

        // Wait for user to close the application.
        rc = WaitForSingleObject (pi.hProcess, INFINITE);
        // Close the old process handle.
        CloseHandle (pi.hProcess);
        // Make sure we returned from the wait correctly.
        if (rc != WAIT_OBJECT_0)
            return -2;
    }
    return 0; // This should never get executed.
}

This code simply launches the application using CreateProcess and waits on the process handle returned in the PROCESS_INFORMATION structure. Notice that the thread closes the child process's primary thread handle and, after the wait, the handle to the child process itself.

Waiting on Multiple Objects


A thread can also wait on a number of events. The wait can end when any one of the events is signaled. The function that enables a thread to wait on multiple objects is this one:

DWORD WaitForMultipleObjects (DWORD nCount, CONST HANDLE *lpHandles, 
BOOL bWaitAll, DWORD dwMilliseconds);

The first two parameters are a count of the number of events or mutexes to wait on and a pointer to an array of handles to these events. The bWaitAll parameter must be set to FALSE to indicate that the function should return if any of the events are signaled. The final parameter is a timeout value, in milliseconds. As with WaitForSingleObject, passing INFINITE in the timeout parameter disables the timeout. Windows CE doesn't support the use of WaitForMultipleObjects to enable waiting for all events in the array to be signaled before returning.

Like WaitForSingleObject, WaitForMultipleObjects returns a code that indicates why the function returned. If the function returned because of a synchronization object being signaled, the return value will be WAIT_OBJECT_0 plus an index into the handle array that was passed in the lpHandles parameter. For example, if the first handle in the array unblocked the thread, the return code would be WAIT_OBJECT_0; if the second handle was the cause, the return code would be WAIT_OBJECT_0 + 1. The other return codes used by WaitForSingleObject (WAIT_TIMEOUT, WAIT_ABANDONED, and WAIT_FAILED) are also returned by WaitForMultipleObjects for the same reasons.
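
The sketch below shows how that index might be recovered, assuming the two event handles were created elsewhere.

#include <windows.h>

int WaitForEither (HANDLE hEvent1, HANDLE hEvent2) {
    HANDLE ah[2];
    DWORD rc;

    ah[0] = hEvent1;
    ah[1] = hEvent2;

    // bWaitAll must be FALSE under Windows CE.
    rc = WaitForMultipleObjects (2, ah, FALSE, INFINITE);

    if (rc == WAIT_OBJECT_0)
        return 0;                 // The first handle was signaled.
    if (rc == WAIT_OBJECT_0 + 1)
        return 1;                 // The second handle was signaled.
    return -1;                    // WAIT_FAILED or some other problem.
}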

Waiting While Dealing with Messages


The Win32 API provides other functions that allow you to wait on a set of objects as well as messages: MsgWaitForMultipleObjects and MsgWaitForMultipleObjectsEx. Under Windows CE, these functions act identically, so I'll describe only MsgWaitForMultipleObjectsEx. This function essentially combines the wait function WaitForMultipleObjects with an additional check of the message queue, so that the function returns if any of the selected categories of messages are received during the wait. The prototype for this function is the following:

DWORD MsgWaitForMultipleObjectsEx (DWORD nCount, LPHANDLE pHandles, 
BOOL fWaitAll, DWORD dwMilliseconds,
DWORD dwWakeMasks);

This function has a number of limitations under Windows CE. As with WaitForMultipleObjects, MsgWaitForMultipleObjectsEx can't wait for all objects to be signaled. Nor are all the dwWakeMask flags supported by Windows CE. Windows CE supports the following flags in dwWakeMask. Each flag indicates a category of messages that, when received in the message queue of the thread, causes the function to return.



QS_ALLINPUT Any message has been received.



QS_INPUT An input message has been received.



QS_KEY A key up, key down, or syskey up or down message has been received.



QS_MOUSE A mouse move or mouse click message has been received.



QS_MOUSEBUTTON A mouse click message has been received.



QS_MOUSEMOVE A mouse move message has been received.



QS_PAINT A WM_PAINT message has been received.



QS_POSTMESSAGE A posted message, other than those in this list, has been received.



QS_SENDMESSAGE A sent message, other than those in this list, has been received.



QS_TIMER A WM_TIMER message has been received.



The function is used inside the message loop so that an action or actions can take place in response to the signaling of a synchronization object while your program is still processing messages.

The return value is WAIT_OBJECT_0 up to WAIT_OBJECT_0 + nCount - 1 for the objects in the handle array. If a message causes the function to return, the return value is WAIT_OBJECT_0 + nCount. An example of how this function might be used follows. In this code, the handle array has only one entry, hSyncHandle.

fContinue = TRUE;
while (fContinue) {
    rc = MsgWaitForMultipleObjects (1, &hSyncHandle, FALSE,
                                    INFINITE, QS_ALLINPUT);
    if (rc == WAIT_OBJECT_0) {
        //
        // Do work as a result of sync object.
        //
    } else if (rc == WAIT_OBJECT_0 + 1) {
        // It's a message; process it.
        PeekMessage (&msg, hWnd, 0, 0, PM_REMOVE);
        if (msg.message == WM_QUIT)
            fContinue = FALSE;
        else {
            TranslateMessage (&msg);
            DispatchMessage (&msg);
        }
    }
}


Semaphores


Earlier I described the event object. That object resides in either a signaled or a nonsignaled state; it's all or nothing. Semaphores, on the other hand, maintain a count. As long as that count is above 0, the semaphore is signaled. When the count is 0, the semaphore is nonsignaled.

Threads wait on semaphore objects as they do on events, using WaitForSingleObject or WaitForMultipleObjects. Each time a thread's wait on a semaphore is satisfied, the semaphore's count is decremented. A thread that waits while the count is 0 is blocked until another thread releases the semaphore; the release increments the count, and the blocked thread returns from the wait function. The maximum count value is defined when the semaphore is created so that a programmer can define how many threads can access a resource protected by a semaphore.

Semaphores are typically used to protect a resource that can be accessed only by a set number of threads at one time. For example, if you have a set of five buffers for passing data, you can allow up to five threads to grab a buffer at any one time. When a sixth thread attempts to access the buffer array protected by the semaphore, it will be blocked until one of the other threads releases the semaphore.

To create a semaphore, call the function

HANDLE CreateSemaphore (LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, 
LONG lInitialCount, LONG lMaximumCount,
LPCTSTR lpName);

The first parameter, lpSemaphoreAttributes, should be set to NULL. The parameter lInitialCount is the count value when the semaphore is created and must be greater than or equal to 0. If this value is greater than 0, the semaphore will be initially signaled. The lMaximumCount parameter should be set to the maximum allowable count value the semaphore will allow. This value must be greater than 0.

The final parameter, lpName, is the optional name of the object. This parameter can point to a name or be NULL. As with events, if two threads call CreateSemaphore and pass the same name, the second call to CreateSemaphore returns the handle to the original semaphore instead of creating a new object. In this case, the other parameters, lInitialCount and lMaximumCount, are ignored. To determine whether the semaphore already exists, you can call GetLastError and check the return code for ERROR_ALREADY_EXISTS.

When a thread returns from waiting on a semaphore, it can perform its work with the knowledge that only lMaximumCount threads or fewer are running within the protection of the semaphore. When a thread has completed work with the protected resource, it should release the semaphore with a call to

BOOL ReleaseSemaphore (HANDLE hSemaphore, LONG lReleaseCount, 
LPLONG lpPreviousCount);

The first parameter is the handle to the semaphore. The lReleaseCount parameter contains the number by which you want to increase the semaphore's count value. This value must be greater than 0. While you might expect this value to always be 1, sometimes a thread might increase the count by more than 1. The final parameter, lpPreviousCount, is set to the address of a variable that will receive the previous resource count of the semaphore. You can set this pointer to NULL if you don't need the previous count value.
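
Putting the pieces together, here's a minimal sketch of the five-buffer scenario mentioned earlier. The names are my own, and the actual buffer management is omitted.

#include <windows.h>

HANDLE g_hBufSem;

BOOL InitBufferPool (void) {
    // Five buffers available, so both the initial and maximum counts are 5.
    g_hBufSem = CreateSemaphore (NULL, 5, 5, TEXT("MyAppBufferPool"));
    return (g_hBufSem != NULL);
}

void UseBuffer (void) {
    // Blocks only while all five buffers are in use (count is 0).
    if (WaitForSingleObject (g_hBufSem, INFINITE) == WAIT_OBJECT_0) {

        // ... work with one of the five buffers here ...

        // Give the buffer back; the semaphore count goes up by one.
        ReleaseSemaphore (g_hBufSem, 1, NULL);
    }
}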

To destroy a semaphore, call CloseHandle. If more than one thread has created the same semaphore, all threads must call CloseHandle, or more precisely, CloseHandle must be called as many times as CreateSemaphore was called before the operating system destroys the semaphore.

Another function, OpenSemaphore, is supported on the desktop versions of Windows but not supported by Windows CE. This function is redundant on Windows CE because a thread that wants the handle to a named semaphore can just as easily call CreateSemaphore and check the return code from GetLastError to determine whether it already exists.


Mutexes


Another synchronization object is the mutex. A mutex is a synchronization object that's signaled when it's not owned by a thread and nonsignaled when it is owned. Mutexes are extremely useful for coordinating exclusive access to a resource such as a block of memory across multiple threads.

A thread gains ownership by waiting on that mutex with one of the wait functions. When no other threads own the mutex, the thread waiting on the mutex is unblocked and implicitly gains ownership of the mutex. After the thread has completed the work that requires ownership of the mutex, the thread must explicitly release the mutex with a call to ReleaseMutex.

To create a mutex, call this function:

HANDLE CreateMutex (LPSECURITY_ATTRIBUTES lpMutexAttributes, 
BOOL bInitialOwner, LPCTSTR lpName);

The lpMutexAttributes parameter should be set to NULL. The bInitialOwner parameter lets you specify that the calling thread should immediately own the mutex being created. Finally, the lpName parameter lets you specify a name for the object so that it can be shared across other processes. When calling CreateMutex with a name specified in the lpName parameter, Windows CE checks whether a mutex with the same name has already been created. If so, a handle to the previously created mutex is returned. To determine whether the mutex already exists, call GetLastError. It returns ERROR_ALREADY_EXISTS if the mutex has been previously created.

Gaining immediate ownership of a mutex using the bInitialOwner parameter works only if the mutex is being created. Ownership isn't granted if you're opening a previously created mutex. If you need ownership of a mutex, be sure to call GetLastError to determine whether the mutex had been previously created. If so, call WaitForSingleObject to gain ownership of the mutex.

You release the mutex with this function:

BOOL ReleaseMutex (HANDLE hMutex);

The only parameter is the handle to the mutex.

If a thread owns a mutex and calls one of the wait functions to wait on that same mutex, the wait call immediately returns because the thread already owns the mutex. Since mutexes retain an ownership count for the number of times the wait functions are called, a call to ReleaseMutex must be made for each nested call to the wait function.
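
A minimal sketch of the typical pattern follows; the mutex handle and the data it protects are assumptions for illustration.

#include <windows.h>

HANDLE g_hDataMutex;    // Assume this was created earlier with CreateMutex.

void UpdateSharedData (void) {
    // Block until no other thread owns the mutex; ownership is implicit
    // in the successful return from the wait.
    if (WaitForSingleObject (g_hDataMutex, INFINITE) != WAIT_OBJECT_0)
        return;

    // ... modify the shared data here; only one thread at a time ...

    // Every successful wait needs a matching ReleaseMutex.
    ReleaseMutex (g_hDataMutex);
}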

To close a mutex, call CloseHandle. As with events and semaphores, if multiple threads have opened the same mutex, the operating system doesn't destroy the mutex until it has been closed the same number of times that CreateMutex was called.


Duplicating Synchronization Handles


Event, semaphore, and mutex handles are process specific, meaning that they shouldn't be passed from one process to another. The ability to name each of these kernel objects makes it easy for each process to "create" an event of the same name, which, as we've seen, simply opens the same event for both processes. There are times, however, when having to name an event is overkill. An example of this situation might be using an event to signal the end of asynchronous I/O between an application and a driver. The driver shouldn't have to create a new and unique event name and pass it to the application for each operation.

The DuplicateHandle function exists to avoid having to name events, mutexes, and semaphores all the time. It is prototyped as follows:

BOOL DuplicateHandle (HANDLE hSourceProcessHandle, HANDLE hSourceHandle,
                      HANDLE hTargetProcessHandle, LPHANDLE lpTargetHandle,
                      DWORD dwDesiredAccess, BOOL bInheritHandle,
                      DWORD dwOptions);

The first parameter is the handle of the process that owns the source handle. If a process is duplicating its own handle, it can get this handle by using GetCurrentProcess. The second parameter is the handle to be duplicated. The third and fourth parameters are the handle of the destination process and a pointer to a variable that will receive the duplicated handle. The dwDesiredAccess parameter is ignored, and the bInheritHandle parameter must be FALSE. The dwOptions parameter must have the flag DUPLICATE_SAME_ACCESS set. The parameter can optionally have the DUPLICATE_CLOSE_SOURCE flag set, indicating that the source handle should be closed if the handle is successfully duplicated.

DuplicateHandle is restricted on Windows CE to only duplicating event, mutex, and semaphore handles. Passing any other type of handle will cause the function to fail.
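
Here's a minimal sketch of duplicating an unnamed event into another process. The hTargetProc handle is assumed to have been obtained elsewhere; how the duplicated handle gets to the other process is a separate problem.

#include <windows.h>

HANDLE GiveEventToProcess (HANDLE hEvent, HANDLE hTargetProc) {
    HANDLE hDup = NULL;

    // dwDesiredAccess is ignored, bInheritHandle must be FALSE, and
    // DUPLICATE_SAME_ACCESS is required under Windows CE.
    if (!DuplicateHandle (GetCurrentProcess (), hEvent,
                          hTargetProc, &hDup,
                          0, FALSE, DUPLICATE_SAME_ACCESS))
        return NULL;

    return hDup;    // This handle is for use by the target process.
}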


Critical Sections


Using critical sections is another method of thread synchronization. Critical sections are good for protecting sections of code from being executed by two different threads at the same time. Critical sections work by having a thread call EnterCriticalSection to indicate that it has entered a critical section of code. If another thread calls EnterCriticalSection referencing the same critical section object, it's blocked until the first thread makes a call to LeaveCriticalSection. Critical sections can protect more than one linear section of code. All that's required is that all sections of code that need to be protected use the same critical section object. The one limitation of critical sections is that they can be used to coordinate threads only within a process.

Critical sections are similar to mutexes, with a few important differences. On the downside, critical sections are limited to a single process, whereas mutexes can be shared across processes. But this limitation is also an advantage. Because they're isolated to a single process, critical sections are implemented so that they're significantly faster than mutexes. If you don't need to share a resource across a process boundary, always use a critical section instead of a mutex.

To use a critical section, you first create a critical section handle with this function:

void InitializeCriticalSection (LPCRITICAL_SECTION lpCriticalSection);

The only parameter is a pointer to a CRITICAL_SECTION structure that you define somewhere in your application. Be sure not to allocate this structure on the stack of a function, where it will be deallocated as soon as the function returns. You should also not move or copy the critical section structure. Since the other critical section functions require a pointer to this structure, you'll need to allocate it within the scope of all functions using the critical section. While the CRITICAL_SECTION structure is defined in WINBASE.H, an application doesn't need to manipulate any of the fields in that structure. So for all practical purposes, think of a pointer to a CRITICAL_SECTION structure as a handle instead of as a pointer to a structure of a known format.

When a thread needs to enter a protected section of code, it should call this function:

void EnterCriticalSection (LPCRITICAL_SECTION lpCriticalSection);

The function takes as its only parameter a pointer to the critical section structure initialized with InitializeCriticalSection. If the critical section is already owned by another thread, this function blocks the new thread and doesn't return until the other thread releases the critical section. If the thread calling EnterCriticalSection already owns the critical section, a use count is incremented and the function returns immediately.

If you need to enter a critical section but can't afford to be blocked waiting for that critical section, you can use the function

BOOL TryEnterCriticalSection (LPCRITICAL_SECTION lpCriticalSection);

TryEnterCriticalSection differs from EnterCriticalSection because it always returns immediately. If the critical section was unowned, the function returns TRUE and the thread now owns the critical section. If the critical section is owned by another thread, the function returns FALSE. This function, added in Windows CE 3.0, allows a thread to attempt to perform work in a critical section without being forced to wait until the critical section is free.

When a thread leaves a critical section, it should call this function:

void LeaveCriticalSection (LPCRITICAL_SECTION lpCriticalSection);

As with all the critical section functions, the only parameter is the pointer to the critical section structure. Since critical sections track a use count, one call to LeaveCriticalSection must be made for each call to EnterCriticalSection by the thread that owns the section.

Finally, when you're finished with the critical section, you should call

void DeleteCriticalSection (LPCRITICAL_SECTION lpCriticalSection);

This action cleans up any system resources used to manage the critical section.
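
Here's a minimal sketch of the complete lifecycle, protecting a simple counter. The names are illustrative.

#include <windows.h>

CRITICAL_SECTION g_csCount;     // Global, so its address never goes stale.
int g_nCount = 0;               // The data the critical section protects.

void CountInit (void) {
    InitializeCriticalSection (&g_csCount);
}

void CountBump (void) {
    EnterCriticalSection (&g_csCount);    // Blocks if another thread owns it.
    g_nCount++;                           // The protected work.
    LeaveCriticalSection (&g_csCount);    // One leave for each enter.
}

BOOL CountTryBump (void) {
    // TryEnterCriticalSection returns immediately instead of blocking.
    if (!TryEnterCriticalSection (&g_csCount))
        return FALSE;                     // Another thread owns it right now.
    g_nCount++;
    LeaveCriticalSection (&g_csCount);
    return TRUE;
}

void CountCleanup (void) {
    DeleteCriticalSection (&g_csCount);
}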


Interlocked Variable Access


Here's one more low-level method for synchronizing threads—using the functions for interlocked access to variables. While programmers with multithread experience already know this, I need to warn you that Murphy's Law[2] seems to come into its own when you're using multiple threads in a program. One of the sometimes overlooked issues in a preemptive multitasking system is that a thread can be preempted in the middle of incrementing or checking a variable. For example, a simple code fragment such as

if (!i++) {
// Do something because i was 0.
}

can cause a great deal of trouble. To understand why, let's look into how that statement might be compiled. The assembly code for that if statement might look something like this:

load     reg1, [addr of i]             ;Read variable 
add      reg2, reg1, 1                 ;reg2 = reg1 + 1
store    reg2, [addr of i]             ;Save incremented var
bne      reg1, zero, skipblk           ;Branch reg1 != zero

There's no reason that the thread executing this section of code couldn't be preempted by another thread after the load instruction and before the store instruction. If this happened, two threads could enter the block of code when that isn't the way the code is supposed to work. Of course, I've already described a number of methods (such as critical sections and the like) that you can use to prevent such incidents from occurring. But for something like this, a critical section is overkill. What you need is something lighter.

Windows CE supports the full set of interlocked functions from the Win32 API. The first three, InterlockedIncrement, InterlockedDecrement, and InterlockedExchange, allow a thread to increment, decrement, and in some cases optionally exchange a variable without your having to worry about the thread being preempted in the middle of the operation. The other functions allow variables to be added to and optionally exchanged. The functions are prototyped here:

LONG InterlockedIncrement(LPLONG lpAddend);
LONG InterlockedDecrement(LPLONG lpAddend);
LONG InterlockedExchange(LPLONG Target, LONG Value);
LONG InterlockedCompareExchange (LPLONG Destination, LONG Exchange,
LONG Comperand);
LONG InterlockedTestExchange (LPLONG Target, LONG OldValue, LONG NewValue);
LONG InterlockedExchangeAdd (LPLONG Addend, LONG Increment);
PVOID InterlockedCompareExchangePointer (PVOID* Destination, PVOID ExChange,
PVOID Comperand);
PVOID InterlockedExchangePointer (PVOID* Target, PVOID Value);

For the interlocked increment and decrement, the one parameter is a pointer to the variable to increment or decrement. The returned value is the new value of the variable after it has been incremented or decremented. The InterlockedExchange function takes a pointer to the target variable and the new value for the variable. It returns the previous value of the variable. Rewriting the previous code fragment so that it's thread safe produces this code:

if (!InterlockedIncrement(&i)) {
// Do something because i was 0.
}

The InterlockedCompareExchange and InterlockedTestExchange functions exchange a value with the target only if the target value is equal to the test parameter. Otherwise, the original value is left unchanged. The only difference between the two functions is the order of the parameters.

InterlockedExchangeAdd adds the second parameter to the LONG pointed to by the first parameter. The value returned by the function is the original value before the add operation. The final two functions, InterlockedCompareExchangePointer and InterlockedExchangePointer, are identical to the InterlockedCompareExchange and InterlockedExchange functions, but the parameters have been type cast to pointers instead of longs.
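
As one last sketch, InterlockedCompareExchange can be used to let exactly one of several racing threads claim a one-time task; the flag name and the scenario are my own.

#include <windows.h>

LONG g_fClaimed = 0;    // 0 = task unclaimed, 1 = someone owns it.

BOOL TryClaimTask (void) {
    // Atomically: if g_fClaimed is still 0, store 1 and return the old
    // value. Only the thread that sees 0 returned wins the race.
    return (InterlockedCompareExchange (&g_fClaimed, 1, 0) == 0);
}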

[2] Murphy’s Law: Anything that can go wrong will go wrong. Murphy’s first corollary: When something goes wrong, it happens at the worst possible moment.
