Category Archives: .NET

The foreach statement.

The following example illustrates low-level use of IEnumerable and IEnumerator:

string s = "Hello";
// Because string implements IEnumerable, we can call GetEnumerator():
IEnumerator rator = s.GetEnumerator();
while (rator.MoveNext())
{
    char c = (char) rator.Current;
    Console.Write (c + ".");
}
// Output: H.e.l.l.o.

However, it’s rare to call methods on enumerators directly in this manner, because C# provides a syntactic shortcut: the foreach statement. Here’s the same example rewritten using foreach:

string s = "Hello"; // The String class implements IEnumerable
foreach (char c in s)
    Console.Write (c + ".");

From “C# 5.0 in a Nutshell”.

Signaling with Monitors.

Controlling access to data is a primary concern with thread synchronization. However, another important aspect is the ability for threads to coordinate their actions by signaling to each other. So, in addition to providing a mechanism for mutually exclusive access to state, monitors also provide an API for signaling.
The goal of signaling is for one thread to be able to inform one or more other threads that a particular event has occurred. The Monitor class exposes three methods—Wait, Pulse, and PulseAll—for precisely this purpose. All three of these methods can only be invoked when the calling thread owns the monitor. Wait gives up the monitor but leaves the thread in a waiting, alertable state. Pulse wakes up one alertable thread. PulseAll wakes up all threads that have called Wait on the monitor in question.
To illustrate Wait and Pulse, let’s look at an example implementing the producer/consumer pattern: one thread produces work to be performed and enqueues it; another thread pulls the data off the queue and processes it. The key job of the signaling is to ensure that the consumer thread consumes resources only when there is actually work available on the queue. Listing 4-13 shows the implementation with a monitor.
Listing 4-13. Producer/Consumer with Monitor

private static void Produce(object obj)
{
    var queue = (Queue)obj;
    var rnd = new Random();
    while (true)
    {
        lock (queue)
        {
            queue.Enqueue(rnd.Next(100));
            Monitor.Pulse(queue); // signal a waiting consumer that work is available
        }
        Thread.Sleep(rnd.Next(2000));
    }
}

private static void Consume(object obj)
{
    var queue = (Queue)obj;
    while (true)
    {
        int val;
        lock (queue)
        {
            while (queue.Count == 0)
            {
                Monitor.Wait(queue); // give up the monitor and block until the producer calls Pulse
            }
            val = (int)queue.Dequeue(); // the non-generic Queue returns object, so cast back to int
        }
        ProcessValue(val);
    }
}

The Produce method generates the work and then acquires the queue’s monitor so it can safely enqueue the work (Queue is not internally thread safe). Once the work is enqueued, it calls Pulse on the monitor to wake up a waiting thread, in this case the consumer. Note that at this point the producer still owns the monitor. Finally, the producer releases the monitor and sleeps before enqueuing more work.
Meanwhile, the Consume method starts its processing loop. The queue’s Count and Dequeue must be bundled into an atomic operation to avoid a race condition (Count returning 1, then another thread dequeuing before we get to the call to Dequeue), so Consume first acquires the queue’s monitor. However, if there is nothing on the queue, we need to give up the monitor so the producer can enqueue some work. If we simply called Monitor.Exit, the only way we would know when there was work on the queue would be to poll, which would be inefficient. Therefore, Consume calls Wait to give up the monitor while remaining alertable by a Pulse. When Wait returns, the consumer once again owns the monitor and so can call Dequeue safely. Once the data is dequeued, Consume releases the monitor so it can process the data without blocking the producer.
One possibly strange detail in Listing 4-13 is the use of a while loop around the Wait in the Consume method; what is that for? There is another subtle race condition: the sequencing of operations is nondeterministic, so in theory the producer could reacquire the monitor and enqueue a second item before a woken consumer has been rescheduled. Pulse would then have been called twice, waking more than one consumer thread (assuming several consumers are waiting). One of those threads could consume both items, so when the second consumer finally comes out of Wait, it will own the monitor but there will be nothing on the queue. Therefore, instead of immediately dequeuing, it needs to check that there is still work on the queue by checking the Count.
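As a rough sketch (not taken from the book), the producer and consumer could be wired up against a shared queue as follows; the thread setup and variable names here are illustrative assumptions:

var queue = new Queue();

// Both Produce and Consume match ParameterizedThreadStart, so the shared
// queue is handed to each thread via the object parameter of Start.
var producer = new Thread(Produce) { IsBackground = true };
var consumer = new Thread(Consume) { IsBackground = true };

producer.Start(queue);
consumer.Start(queue);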

From “Pro Asynchronous Programming with .NET”.

Lock keyword.

Now, what if Monitor.Enter were to throw an exception? You really need to bring this call inside the try block. But then you need to cater for two possibilities: the exception happening before the monitor was acquired, and the exception being thrown after the monitor was acquired. So you need some way of telling whether or not you should release the monitor. Fortunately, version 4.0 of .NET introduced a new overload of Monitor.Enter that reports, via a ref bool parameter, whether the lock was actually taken. Listing 4-8 shows the code necessary to use this new overload.

Listing 4-8. Monitor.Enter Inside the try Block

public void ReceivePayment(decimal amount)
{
    bool lockTaken = false;
    try
    {
        Monitor.Enter(stateGuard, ref lockTaken);
        cash += amount;
        receivables -= amount;
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(stateGuard);
        }
    }
}

If this is the code you should write to use monitors correctly, you have a problem: the chances of getting developers to write this every time and get it right every time are not great. And so the C# language has a keyword that makes the compiler emit the code in Listing 4-8.

The lock Keyword

The idea of the lock keyword is to allow you to concentrate on the work that needs to be protected rather than the semantics of using monitors correctly. Listing 4-9 shows how the lock keyword makes the code in Listing 4-8 much simpler. This is surely something you can expect developers to write.

Listing 4-9. Using the lock Keyword

public void ReceivePayment(decimal amount)
{
    lock(stateGuard)
    {
        cash += amount;
        receivables -= amount;
    }
}

WHAT SHOULD I LOCK?

In the early days of .NET it was very common to see people write code like the following:

lock(this)
{
    // change state here
}

In fact, this is also what the event keyword generated until .NET 4.0. The problem is that an object’s this reference is effectively public: anyone with a reference to the object can lock on exactly the same thing. This means that, although you are using this for your own internal synchronization, other code may also choose your object for synchronization. You then end up with needless contention at best and hard-to-diagnose deadlocks at worst.

Objects are cheap to allocate, and so the simplest way to ensure you only have contention where necessary is to use private instance or static variables, depending on whether the state to be protected is instance or static data, respectively.
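
A minimal sketch of this advice follows; the class and member names are illustrative, with stateGuard simply echoing the field used in the listings above:

public class Account
{
    // Private object used purely as a monitor; no code outside this class
    // can take a lock on it, so contention is limited to the class itself.
    private readonly object stateGuard = new object();
    private decimal cash;

    public void Deposit(decimal amount)
    {
        lock (stateGuard)
        {
            cash += amount;
        }
    }
}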

Even though the lock keyword appears to be a big improvement on manually manipulating the monitor, there is still a fundamental problem: when trying to acquire synchronization primitives it’s a very good idea to be able to time out of a wait. Failing to do this can result in hard-to-identify deadlocks and other synchronization bugs (such as a thread failing to release a monitor). The problem is that neither Monitor.Enter nor lock supports the passing of a timeout.
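
One way to get a timeout is the Monitor.TryEnter overload that accepts one, at the cost of going back to the manual pattern of Listing 4-8. The sketch below is an illustration rather than the book’s own solution; the five-second timeout and the choice of TimeoutException are arbitrary assumptions:

public void ReceivePayment(decimal amount)
{
    bool lockTaken = false;
    try
    {
        // Unlike Monitor.Enter, TryEnter gives up after the timeout instead of blocking forever.
        Monitor.TryEnter(stateGuard, TimeSpan.FromSeconds(5), ref lockTaken);
        if (!lockTaken)
        {
            throw new TimeoutException("Failed to acquire stateGuard within 5 seconds");
        }
        cash += amount;
        receivables -= amount;
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(stateGuard);
        }
    }
}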

From “Pro Asynchronous Programming with .NET”, page 67.