Monday, October 25, 2010

The Case of NUnit Hanging During Startup

Recently, a coworker of mine encountered a strange behavior in NUnit. Every time he opened NUnit's graphical interface, it would freeze and stop responding. Even though no test had even started to execute, the application simply froze, leaving the user to wait patiently and stare helplessly at the screen.

In order to investigate the issue, I fired up WinDbg and attached it to the hung process.
After loading SOS, I issued the !clrstack command to get a better understanding of what was keeping the application busy. Since NUnit's graphical interface had stopped responding, one could already assume that the main thread had, for some reason, stopped pumping messages.
Reviewing the command's output confirmed that assumption (the outputs in this post have been edited for brevity).
0:007> ~*e!clrstack
PDB symbol for mscorwks.dll not loaded
OS Thread Id: 0xe4 (0)
ESP       EIP     
0012ee44 7c90e4f4 Win32Native.GetFileAttributesEx(...)
0012ee58 792e03f6 System.IO.File.FillAttributeInfo(...)
0012eeec 7927ff71 System.IO.File.InternalExists(System.String)
0012ef20 792e96a6 System.IO.File.Exists(System.String)
0012ef4c 03a214c3 NUnit.Util.RecentFileEntry.get_Exists()
0012ef54 00e0fb7a NUnit.Gui.NUnitForm.NUnitForm_Load(Object, EventArgs)
0012ef8c 7b1d4665 System.Windows.Forms.Form.OnLoad(EventArgs)
0012efc0 7b1d4105 System.Windows.Forms.Form.OnCreateControl()
0012efcc 7b1c6d11 System.Windows.Forms.Control.CreateControl(Boolean)
0012f008 7b1c6b14 System.Windows.Forms.Control.CreateControl()
0012f020 7b1d2fc8 System.Windows.Forms.Control.WmShowWindow(Forms.Message)
0012f05c 7b1c8906 System.Windows.Forms.Control.WndProc(Forms.Message)
0012f060 7b1d1d6a [InlinedCallFrame: 0012f060] 
...
0012f418 7b195911 System.Windows.Forms.Application.Run(System.Windows.Forms.Form)
0012f42c 00e00643 NUnit.Gui.AppEntry.Main(System.String[])
0012f464 00e00076 NUnit.Gui.Class1.Main(System.String[])
0012f688 79e71b4c [GCFrame: 0012f688] 
OS Thread Id: 0x1464 (1)
Unable to walk the managed stack. The current thread is likely not a 
managed thread. You can run !threads to get a list of managed threads in
the process
OS Thread Id: 0xdac (2)
Failed to start stack walk: 80004005 
Evidently, NUnit is performing a synchronous IO operation on its main thread, and for some reason that operation refuses to complete. This happens to be a good example of why it's considered bad practice to do "heavy lifting", or synchronous IO in general, on the main UI thread.
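As a minimal sketch (not NUnit's actual code; the form, path and member names below are made up), such a check could be kept off the UI thread, so an unreachable UNC path can't freeze the message pump:

using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;

public class RecentFileForm : Form
{
    // hypothetical path of a previously loaded test assembly
    private readonly string m_recentPath = @"\\server\share\tests.dll";

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // run the potentially slow check on a thread-pool thread
        ThreadPool.QueueUserWorkItem(delegate
        {
            bool exists = File.Exists(m_recentPath); // may block if the host is unreachable

            // marshal the result back to the UI thread
            BeginInvoke((MethodInvoker)delegate
            {
                Text = exists ? "Recent project found" : "Recent project missing";
            });
        });
    }
}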

After obtaining this piece of information, the next step was to understand why the operation wasn't completing. Perhaps the path of the requested file could shed some light on the observed behavior.
Since the path is passed across several managed methods before entering the Win32 API, I attempted to dump the values of the parameters in the managed call stack using the !clrstack -p command.
0:000> !clrstack -p
OS Thread Id: 0xe4 (0)
ESP       EIP     
0012ee44 7c90e4f4 Win32Native.GetFileAttributesEx(...)
0012ee58 792e03f6 File.FillAttributeInfo(System.String, ...)
    PARAMETERS:
        path = no data
        data = 0x0012eeec
        tryagain = no data
        returnErrorOnNotFound = 0x00000001

0012eeec 7927ff71 System.IO.File.InternalExists(System.String)
    PARAMETERS:
        path = no data

0012ef20 792e96a6 System.IO.File.Exists(System.String)
    PARAMETERS:
        path = no data

0012ef4c 03a214c3 NUnit.Util.RecentFileEntry.get_Exists()
    PARAMETERS:
        this = no data

0012ef54 00e0fb7a NUnit.Gui.NUnitForm.NUnitForm_Load(Object, EventArgs)
    PARAMETERS:
        this = 0x013349ec
        sender = no data
        e = no data
Unfortunately, SOS wasn't able to retrieve the address of the managed string (thus stating "no data"). It's likely that either the variable was stored in a register that was later overwritten by other executing code, or perhaps we're just witnessing some of SOS's good old "buggy nature".

At this point, I took a small step into the world of native debugging and issued the kb command to display the native call stack, including the first three parameters passed to each method.
0:000> kb
ChildEBP RetAddr  Args to Child              
0012ed84 7c90d79c 7c8111ff 0012eddc 0012eda4 ntdll!KiFastSystemCallRet
0012ed88 7c8111ff 0012eddc 0012eda4 00151bb8 ntdll!ZwQueryFullAttributesFile+0xc
0012ee08 0097a310 0139272c 00000000 0012eeec KERNEL32!GetFileAttributesExW+0x84
WARNING: Frame IP not in any known module. Following frames may be wrong.
0012ee2c 792e03f6 0012ee5c 00000000 01392764 +0x97a30f
0012eedc 7927ff71 00000001 00000000 00000000 mscorlib_ni+0x2203f6
0012ef18 792e96a6 00000000 00000000 00000000 mscorlib_ni+0x1bff71
0012ef80 7b1d4665 01336c40 0134f25c 00000000 mscorlib_ni+0x2296a6
0012efb8 7b1d4105 00000004 0012f000 7b1c6d11 System_Windows_Forms_ni+0x204665
0012efc4 7b1c6d11 0133797c 013349ec 00000001 System_Windows_Forms_ni+0x204105
0012f000 7b1c6b14 00000000 00000000 013349ec System_Windows_Forms_ni+0x1f6d11
0012f018 7b1d2fc8 013349ec 00000000 00000000 System_Windows_Forms_ni+0x1f6b14
0012f054 7b1c8906 3cc1af52 79e7a6b8 0012f2d4 System_Windows_Forms_ni+0x202fc8
0012f0ac 7b1d1d6a 013349ec 0012f0c0 7b1d1d20 System_Windows_Forms_ni+0x1f8906
0012f0b8 7b1d1d20 0012f0d0 7b1d2f11 0012f10c System_Windows_Forms_ni+0x201d6a
0012f0c0 7b1d2f11 0012f10c 013349ec 0012f0e4 System_Windows_Forms_ni+0x201d20
0012f0d0 7b1d1af4 00e111ec 0012f10c 01335448 System_Windows_Forms_ni+0x202f11
0012f0e4 7b1c8640 0012f100 7b1c85c1 00000000 System_Windows_Forms_ni+0x201af4
0012f0ec 7b1c85c1 00000000 01335448 0012f130 System_Windows_Forms_ni+0x1f8640
0012f100 7b1c849a 01335448 00291106 00000018 System_Windows_Forms_ni+0x1f85c1
0012f164 7e418734 00291106 00000018 00000001 System_Windows_Forms_ni+0x1f849a
As you can see, the thread eventually made its way into the kernel by calling GetFileAttributesExW.
Knowing that the first parameter passed to that function (lpFileName) represents the name of the file or directory in question, all that's left is to dump its value using the du command.
0:000> du 0139272c
0139272c  "\\123.123.123.123\foo.dll"
With that, the culprit was found. Apparently, some time ago a unit-test assembly was loaded into NUnit from a certain IP address that is no longer available on the network, causing the thread to stall indefinitely.

In order to fix the issue, the problematic path had to be removed from the RecentProjects list in NUnit's settings file (located at %AppData%\NUnit\NUnitSettings.xml).

Monday, October 11, 2010

Hot/Cold Data Separation Considered Harmful in Multithreaded Environments

When optimizing your code for performance, an important step along the way is to check that your application correctly utilizes its processor's cache. Of course, the magnitude of importance here depends solely on your own scenario, but in general, it's always a good idea to keep your cache usage patterns in mind when attempting to resolve a performance bottleneck, or when you're simply looking for ways to enhance your application's performance.

Due to the relatively high cost of accessing main memory, modern processors make use of data and instruction caches in an attempt to lower the latency involved in accessing it.
A processor may choose to keep the values of frequently accessed memory addresses in its cache, or prefetch the values of adjacent memory addresses for possible future use.
When the processor attempts to access a memory address, it first checks whether it already exists in one of its caches (L1, L2, L3, etc.). If the address isn't found, we have a "cache miss" and the processor has to perform a round-trip to main memory in order to obtain the required data, wasting valuable CPU cycles along the way.

Not too long ago, one of the most popular guidelines for designing cache-aware data structures was to split them into "hot" and "cold" parts. This kind of separation makes sense since it can lead to more efficient utilization of the cache: frequently used fields (the hot part) are grouped together in the same cache lines instead of being mixed with infrequently used fields (the cold part). This way, the cache is able to hold more hot data, and cache lines aren't wasted holding cold data.
For those of you who are interested, the subject is covered in greater detail in the article Splitting Data Objects to Increase Cache Utilization by Michael Franz and Thomas Kistler.

(Figure: red = hot data, green = cold data)
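To make the split concrete, here's a made-up illustration; Order and its fields are hypothetical, and the choice of which fields are "hot" is just for the example:

using System;

// Cold part: fields that are touched only on rare code paths are moved into a
// separate object, so they don't occupy the cache lines fetched by the hot path.
class OrderColdData
{
    public string Comment;
    public DateTime LastAuditDate;
}

class Order
{
    // Hot part: read/written on almost every access, packed together so
    // they tend to share the same cache line(s).
    public int Id;
    public double Price;
    public int Quantity;

    // single reference to the rarely used, cold part
    public OrderColdData ColdData;
}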

While following this guideline can lead to more efficient cache utilization, it won't necessarily lead to a performance improvement in a multithreaded environment. In fact, it's more likely that you'll face a performance degradation than gain any improvement at all.
The problem at hand is that highly utilized cache lines can easily send your valuable CPU cycles down the drain, due to the effects of false sharing. Once a single cache line holds several frequently written-to fields, the chance that it will get invalidated by another processor grows. Packing multiple frequently modified fields into a single cache line can easily have a negative effect on your cache's locality.
In multithreaded environments, one could actually benefit from allocating frequently used fields sparsely across cache lines. There's an obvious trade-off here between cache utilization (memory usage) and locality (performance): while the cache will hold less hot data (which could result in more round-trips to main memory), it will benefit from better locality, and hence better scalability across multiple processors that might attempt to modify the same object simultaneously.

When designing a cache-aware data structure, you don't necessarily have to order your fields in such a way that a hot field is always followed by a cold one. Instead, "artificial" padding can be used to fill the excess space left in the cache line that holds the structure. In .NET, decorating the type with a StructLayout(LayoutKind.Explicit) attribute and assigning each field an appropriate offset will do the job.
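As a minimal sketch (assuming 64-byte cache lines, which is typical but not universal), two frequently written counters could be forced onto separate cache lines like this:

using System.Runtime.InteropServices;

// Each counter gets its own 64-byte slot, so a processor writing CounterA
// won't invalidate the cache line holding CounterB (and vice versa).
[StructLayout(LayoutKind.Explicit, Size = 128)]
struct PaddedCounters
{
    [FieldOffset(0)]
    public long CounterA;   // updated by one thread

    [FieldOffset(64)]
    public long CounterB;   // updated by another thread
}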

Friday, September 24, 2010

Writing a Semi-Local Object Pool

Using object pooling in managed environments can usually benefit us in two ways:
  • Reducing the amount of time required to create "heavyweight" objects (whose construction might involve executing time-consuming tasks).
  • Reducing the amount and rate of dynamic memory allocations, and thus the GC latency of future collections.
Nevertheless, it's important to remember that under certain scenarios, using object pools might actually have a negative impact on your application's performance. Since dynamic memory allocation in managed environments (e.g., the CLR or the JVM) is considerably fast, pooling "lightweight" objects could add unnecessary overhead to the allocation process. Brian Goetz summarized this issue:
Allocation in JVMs was not always so fast -- early JVMs indeed had poor allocation and garbage collection performance, which is almost certainly where this myth got started. In the very early days, we saw a lot of "allocation is slow" advice -- because it was, along with everything else in early JVMs -- and performance gurus advocated various tricks to avoid allocation, such as object pooling. (Public service announcement: Object pooling is now a serious performance loss for all but the most heavyweight of objects, and even then it is tricky to get right without introducing concurrency bottlenecks.)
A common, simple pattern for implementing an object pool is to create a single pool instance that is shared across the entire application. To achieve thread-safety, you would usually find a single, global lock around the Allocate and Free methods.
It's obvious that this type of design could introduce major concurrency bottlenecks. The more objects we attempt to pool, the greater the chance that threads will contend for the pool's lock. And since we only maintain a single, global pool, contention around that lock is bound to appear, effectively ruining our application's scalability.
To demonstrate the issue, I've written a small benchmark that uses all of the available processors to allocate and free a constant number of objects (each thread gets an equal share of objects to pool). Logically, the more processors we use, the faster we should be able to finish allocating and freeing that constant number of objects. However, as the results show, we actually experience a slowdown that gets worse as we add more and more processors.
The results aren't surprising, since they are easily explained by the massive contention around our single lock (the time axis in the chart is expressed in milliseconds).
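For reference, here's a hedged sketch of the kind of benchmark loop described above (the exact harness wasn't published; Foo is a made-up pooled type, and SimpleMainPool is the wrapper shown below):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Foo { }

    class PoolBenchmark
    {
        private const int TOTAL_OBJECTS = 1000000;

        public static void Run(int threadCount)
        {
            SimpleMainPool pool = new SimpleMainPool(new Type[] { typeof(Foo) });
            int perThread = TOTAL_OBJECTS / threadCount;

            Thread[] threads = new Thread[threadCount];
            Stopwatch sw = Stopwatch.StartNew();

            for (int i = 0; i < threadCount; i++)
            {
                threads[i] = new Thread(delegate()
                {
                    // each thread allocates and frees its equal share of objects
                    for (int j = 0; j < perThread; j++)
                    {
                        object obj = pool.Allocate(typeof(Foo));
                        pool.Free(obj);
                    }
                });
                threads[i].Start();
            }

            foreach (Thread worker in threads)
                worker.Join();

            Console.WriteLine("{0} thread(s): {1}ms", threadCount, sw.ElapsedMilliseconds);
        }
    }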

The first implementation of the pool, which was used in the test:
(Mind you, the code samples in this post are purely meant to demonstrate the conceptual differences between the pools.)
    // holds a dictionary that makes a pool-per-type correlation
    public class SimpleMainPool
    {
        private Dictionary<Type, ISubPool> m_main;

        // to make things simpler, the dictionary isn't modified
        // after the first initialization
        public SimpleMainPool(Type[] pooledTypes)
        {
            m_main = new Dictionary<Type, ISubPool>();

            foreach (Type curType in pooledTypes)
                m_main.Add(curType, new SimplePool(curType));
        }

        public object Allocate(Type type)
        {
            ISubPool sub = m_main[type];

            object pooledObj = sub.Allocate();
            return pooledObj;
        }

        public void Free(object obj)
        {
            ISubPool sub = m_main[obj.GetType()];
            sub.Free(obj);
        }
    }

    // our simple thread-safe pool
    class SimplePool : ISubPool
    {
        private const int PRIME = 50;

        private Type m_type;

        private Stack<object> m_sharedPool;

        public SimplePool(Type type)
        {
            m_sharedPool = new Stack<object>(PRIME);
            m_type = type;

            for (int i = 0; i < PRIME; i++)
            {
                object sharedObj = Activator.CreateInstance(m_type);
                m_sharedPool.Push(sharedObj);
            }
        }

        public object Allocate()
        {
            lock (m_sharedPool)
            {
                if (m_sharedPool.Count == 0)
                {
                    for (int i = 0; i < PRIME; i++)
                    {
                        object newAlloc = Activator.CreateInstance(m_type);
                        m_sharedPool.Push(newAlloc);
                    }
                }

                object fromLocal = m_sharedPool.Pop();
                return fromLocal;
            }
        }

        public void Free(object obj)
        {
            lock (m_sharedPool)
            {
                m_sharedPool.Push(obj);
            }
        }
    }

    interface ISubPool
    {
        object Allocate();
        void Free(object obj);
    }  
As with all things related to concurrency, if you don't have locality, then you've got sharing, and once you have sharing, you will probably end up with contention that is bound to harm your application's performance, wasting valuable CPU cycles.
So if we'd like to improve our scalability, our goal is clear: reduce the amount of shared data. For example, if pools weren't shared across different threads, we wouldn't have to worry about synchronizing them, and we could avoid the contention altogether. A simple way to achieve this is to use TLS (thread-local storage) to allocate an independent pool for every thread. On the one hand, we'd achieve perfect scalability by avoiding shared state, but on the other hand, this kind of implementation could lead to excessive memory usage. For instance, if a single instance of our pool (including all of its pre-allocated objects) weighs about 10MB, then on a machine with 16 processors we could find ourselves dedicating no less than 160MB to our thread-local pools, even though it's unlikely that every single thread needs all of the object types allocated in its local pool.
For example, if we're parallelizing some algorithm using three threads, where thread 1 needs objects of type A, thread 2 needs objects of type B and thread 3 needs objects of type C, then it makes no sense for each of those threads to hold a pool containing objects of all three types.

A possible solution to this problem is to use a pool hierarchy, where every time a thread attempts to create an object, it directs itself to its "closest" pool instance. If that pool doesn't contain available instances of the requested object, it continues to navigate up the hierarchy until it reaches a pool that does. Once the thread finishes using the object, it returns it to a pool located "closer" to that thread; this way we maintain a level of locality between a thread and the objects it uses.
Instead of getting tangled in unclear and overly complex hierarchies, I'll demonstrate the concept using a flat hierarchy: a single "global" pool that is shared across all threads, plus a local pool for every thread.
Basically, the idea is that the only place synchronization is involved is the shared pool. So in the optimal scenario, each local pool will eventually hold just enough objects to keep its thread from ever accessing the shared pool.
Every time a thread needs to create an object, it first checks its local pool. Since this pool serves only the requesting thread, no synchronization is needed here. Only when we've run out of objects do we move on to the shared pool and transfer N more instances of the requested object to the local pool. It could be wise to transfer more objects than the thread initially asked for, in order to avoid future accesses to the shared pool. Also, in order to cap the amount of memory we're willing to dedicate to each thread, we could decide that each local pool may hold a maximum of X objects. Once we exceed that number, every time a thread frees an object it returns it to the shared pool instead of its local pool (of course, this could cause some contention, depending on the implementation details [e.g., the pool may buffer object returns], but it's entirely up to the developer to perform this kind of fine-tuning [memory usage vs. scalability]).

To demonstrate the concept, I've come up with this simplistic pool implementation:
    class SemiLocalPool : ISubPool
    {
        private const int SHARED_PRIME = 50;

        private const int LOCAL_PRIME = 20;
        private const int LOCAL_MAX = 1000;

        [ThreadStatic]
        private static Stack<object> t_localPool;

        private Type m_type;
        private Stack<object> m_sharedPool;

        public SemiLocalPool(Type type)
        {
            m_sharedPool = new Stack<object>(SHARED_PRIME);

            m_type = type;

            for (int i = 0; i < SHARED_PRIME; i++)
            {
                object sharedObj = Activator.CreateInstance(m_type);
                m_sharedPool.Push(sharedObj);
            }
        }

        public static void Init()
        {
            t_localPool = new Stack<object>(LOCAL_PRIME);
        }

        public object Allocate()
        {
            // first, try to allocate from the local pool
            if (t_localPool.Count > 0)
            {
                object localObj = t_localPool.Pop();
                return localObj;
            }

            int allocated = 0;

            lock (m_sharedPool)
            {
                // pass objects from shared to local pool
                for (; m_sharedPool.Count > 0 && allocated < LOCAL_PRIME - 1; allocated++)
                {
                    object sharedObj = m_sharedPool.Pop();
                    t_localPool.Push(sharedObj);
                }

                // prime the shared pool
                if (m_sharedPool.Count == 0)
                {
                    for (int i = 0; i < SHARED_PRIME; i++)
                    {
                        // bad practice: holding the lock while executing external code
                        object sharedObj = Activator.CreateInstance(m_type);

                        m_sharedPool.Push(sharedObj);
                    }
                }
            }

            // if the shared pool didn't contain enough elements, prime the remaining items
            for (; allocated < LOCAL_PRIME - 1; allocated++)
            {
                object newAlloc = Activator.CreateInstance(m_type);
                t_localPool.Push(newAlloc);
            }

            // the last object isn't pushed to the local pool; it's returned directly to the caller
            object newObj = Activator.CreateInstance(m_type);
            return newObj;
        }

        public void Free(object obj)
        {
            // first return to local pool
            if (t_localPool.Count < LOCAL_MAX)
            {
                t_localPool.Push(obj);
                return;
            }

            // only after reaching LOCAL_MAX push back to the shared pool
            lock (m_sharedPool)
            {
                m_sharedPool.Push(obj);
            }
        }
    }
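As a usage sketch (Foo and PoolWorker are made up), note that each worker thread has to call Init() once before its first allocation, since this simplistic version doesn't create the thread-static local stack lazily:

    using System;
    using System.Threading;

    class Foo { }

    class PoolWorker
    {
        private readonly SemiLocalPool m_pool = new SemiLocalPool(typeof(Foo));

        public void Start()
        {
            new Thread(Run).Start();
        }

        private void Run()
        {
            SemiLocalPool.Init();   // create this thread's local pool

            for (int i = 0; i < 100000; i++)
            {
                object obj = m_pool.Allocate();
                // ... use the pooled object ...
                m_pool.Free(obj);
            }
        }
    }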
The scalability difference between the two implementations is closely related to each thread's pool usage pattern and to the values given to LOCAL_MAX, LOCAL_PRIME, etc. If we reach a situation where there are always enough objects in the local pool, we should enjoy perfect scalability.
For the purpose of the demonstration, here are the results of the previous benchmark, now using the new pool implementation (besides exhausting the predefined values at the beginning of the benchmark, the benchmark exhibits an optimal usage pattern [accessing only the local pool after a while]).

One problematic characteristic of this type of design is its reliance on thread affinity. While in some scenarios this could actually benefit us, in others it could make the semi-local pool irrelevant.
If every thread in our application is affinitized to a certain section of the code (one that allocates a constant set of objects), then this design could be optimal, since we dedicate each local pool to a managed thread. We essentially assume that the thread will always allocate objects from a specific, constant set of types.
However, if the threads don't comply with this assumption, it's only a matter of time until each local pool holds the entire set of pooled objects in the application (which will, of course, lead to high memory usage).
In order to handle such scenarios better, we could add another hierarchy level that separates the shared pools according to different sections of the code. Meaning, threads that are currently executing code from, say, a network module will access pool X, while threads that are currently executing some algorithm will access pool Y. This way we achieve object locality not by relying on thread affinity, but on "category affinity" (each section of the code uses a certain set of objects relevant to it). When a thread wants to allocate an object, it tells the pool which area of the code it's currently executing, so it receives the appropriate "category pool". It's likely that this pool already contains the types of objects the current thread will request, since they were already allocated by other threads that previously executed the same code section.
And some code to illustrate the concept:
    public class CategorizedMainPool
    {
        private Dictionary<string, SimpleMainPool> m_main;

        public CategorizedMainPool(Tuple<string, Type[]>[] pooledCategories)
        {
            m_main = new Dictionary<string, SimpleMainPool>();

            foreach (Tuple<string, Type[]> curCategory in pooledCategories)
            {
                SimpleMainPool curSub = new SimpleMainPool(curCategory.Item2);
                m_main.Add(curCategory.Item1, curSub);
            }
        }

        public object Allocate(string category, Type type)
        {
            SimpleMainPool sub = m_main[category];

            object pooledObj = sub.Allocate(type);
            return pooledObj;
        }

        public void Free(string category, object obj)
        {
            SimpleMainPool sub = m_main[category];
            sub.Free(obj);
        }
    }
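And a hypothetical usage sketch of the categorized pool (MessageBuffer and WorkItem are made-up pooled types; the category names are arbitrary):

    using System;

    class MessageBuffer { }
    class WorkItem { }

    class CategorizedPoolUsage
    {
        public static void Example()
        {
            CategorizedMainPool pool = new CategorizedMainPool(new Tuple<string, Type[]>[]
            {
                Tuple.Create("network",   new Type[] { typeof(MessageBuffer) }),
                Tuple.Create("algorithm", new Type[] { typeof(WorkItem) })
            });

            // a thread executing networking code asks only its own category pool
            object buffer = pool.Allocate("network", typeof(MessageBuffer));
            // ... use the buffer ...
            pool.Free("network", buffer);
        }
    }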

Friday, September 10, 2010

"!CLRStack -p" Isn't Always Reliable

One of the most commonly used commands in SOS is !CLRStack. When combined with the -p switch, SOS will attempt to display the values of the parameters that were passed to the functions in our managed call stack. It's important to emphasize that SOS will only attempt to display the correct values, since sometimes it's just going to get it all wrong.

In cases where SOS concludes that it cannot retrieve the value of a parameter, the string "no data" will be displayed. This happens in situations where SOS honestly can't track the value of the parameter just by looking at a specific stack frame. For example, when using a calling convention such as fastcall, the first two parameters (starting from the left) that fit in a register are passed in the ECX and EDX registers instead of being pushed onto the stack. For member functions, the value of the this pointer is usually passed in the ECX register.
This kind of behavior can lead to situations where the values of some of the function parameters are missing, since the registers were already overwritten by other functions called further down the stack.
As opposed to situations where SOS is able to conclude that it isn't sure what the correct value of a parameter is, every once in a while it's just going to get things wrong and display incorrect values. This could terribly mislead the person attempting to debug the application.
The thing that might be most concerning about this phenomenon is that it's not very hard to reproduce. Let's take the following application as an example:

    class Program
    {
        static void Main(string[] args)
        {
            Foo a = new Foo();
            a.Work(1, 2, 3, 4, 5);
        }
    }
 
    class Foo
    {
        [MethodImpl(MethodImplOptions.NoInlining)]
        public void Work(int x, int y, int z, int k, int p)
        {
            // break with debugger here
        }
    }
Now we'll run WinDbg, load SOS and see what the !CLRStack command has to say about the parameters that were passed to the Work method.
(Note: the output might vary slightly, depending on your version of SOS. The following debugging session was performed on version 4.0.30319.1.)

0:000> !clrstack -p
OS Thread Id: 0xfbc (0)
Child SP IP       Call Site
0012f3fc 030300b3 ConsoleApplication1.Foo.Work(Int32, Int32, Int32, Int32, Int32) [Program.cs @ 24]
    PARAMETERS:
        this () = 0x00b3c358
        x () = 0x00000001
        y (0x0012f40c) = 0x00000003
        z (0x0012f408) = 0x00000004
        k (0x0012f404) = 0x00000005
        p (0x0012f400) = 0x03030092
0012f414 03030092 ConsoleApplication1.Program.Main(System.String[]) [Program.cs @ 16]
    PARAMETERS:
        args =
0012f648 791421db [GCFrame: 0012f648]

0:000> r ecx  // holds "this" pointer
ecx=00b3c358
0:000> r edx  // holds "x" parameter
edx=00000001
If so, it isn't difficult to see that besides the this pointer and the x parameter (which were passed in registers), SOS got all of the other parameters wrong. In fact, one may notice a "shift" in the displayed values (y got the value of z, while z got the value of k, and so on...).
In order to better understand what went on here, we'll dump the memory between the relevant stack pointers (in previous versions of SOS, this value was represented by the ESP column instead of the Child SP column).

0:000> dp /c 1 0012f3fc  0012f414
0012f3fc  0012f414 // EBP
0012f400  03030092 // Return Address
0012f404  00000005 // p 
0012f408  00000004 // k
0012f40c  00000003 // z
0012f410  00000002 // y
0012f414  0012f424 // EBP

Now, when we compare the stack addresses of the parameters that SOS gave us against the real addresses we see here, we can confirm that a small address shift was applied to the memory addresses of the parameters that were passed on the stack. So every time SOS attempts to display the value of a parameter, it actually displays the value of the parameter that was passed next to it.
This scenario is a classic example of the somewhat "buggy" nature of SOS. It doesn't mean that we have to immediately stop using the !CLRStack command, but it should remind us not to take SOS's output as the "absolute truth" when debugging, and to keep ourselves alert to all kinds of "weird" behaviors such as this one.

Thursday, September 2, 2010

DateTime.Now in v4.0 Causes Dynamic Memory Allocations

A while back I mentioned in a post that calling DateTime.Now causes boxing. Following that post, a feedback item was also posted on Microsoft Connect, reporting the issue.
Yesterday, the feedback's status turned to Resolved after Greg cleverly remarked that this issue was fixed in v4.0 (the boxing occurred up until v3.5).

However, after reading the response I took the same benchmark code from the previous post and ran it again, this time using .NET Framework 4.0. Surprisingly, perfmon kept reporting high allocation rates in the Allocated Bytes/sec counter. So I opened up Reflector and took a look at DateTime.Now's new implementation. And indeed, the boxing issue was resolved, since the new implementation uses the TimeZoneInfo type instead of TimeZone. Unable to find the source of the allocations just by looking at the code, it was time to launch WinDbg.
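For context, the benchmark boils down to a tight loop around DateTime.Now while watching the Allocated Bytes/sec counter; a rough reconstruction (not the original code) looks like this:

using System;

class Program
{
    static void Main()
    {
        long ticks = 0;

        // Watch "Allocated Bytes/sec" in perfmon while this runs; in v4.0 each call
        // ends up allocating a DaylightTime instance under the hood, as the heap
        // dumps below show.
        while (true)
        {
            ticks += DateTime.Now.Ticks;    // keeps the call from being optimized away
        }
    }
}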
After letting the application run for a while, I attached WinDbg to the process and executed the !DumpHeap -stat command a couple of times, so I could take a general look at the kinds and amounts of objects currently living on the managed heap. The output was as follows:

0:003> !dumpheap -stat
total 0 objects
Statistics:
      MT    Count    TotalSize Class Name
...
79ba1d88       13         1532 System.Char[]
79ba2f20        1         3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac      449        12220 System.String
001663d8       53        13212      Free
79b56c28       30        17996 System.Object[]
79b8e7b0     9119       291808 System.Globalization.DaylightTime
Total 9771 objects

0:003> !dumpheap -stat 
total 0 objects
Statistics:
      MT    Count    TotalSize Class Name
...
79ba1d88       13         1532 System.Char[]
79ba2f20        1         3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac      449        12220 System.String
001663d8       53        13212      Free
79b56c28       30        17996 System.Object[]
79b8e7b0    20654       660928 System.Globalization.DaylightTime
Total 21306 objects

This output reveals the identity of our memory-consuming bandit: DaylightTime.
Now, all that's left is to spot where this type is being instantiated and used. For this purpose, we can use the !BPMD -md command to place breakpoints on specific managed methods that DaylightTime exposes (you can dump the type's methods using the !DumpMT -md <MethodTable address> command).
After setting the breakpoint, the application continues its execution and immediately breaks. Looking at the managed call stack using !CLRStack reveals the source of the allocation: TimeZoneInfo.GetIsDaylightSavingsFromUtc. This method creates an instance of DaylightTime, and since DaylightTime is a class (hence, a reference type), a dynamic memory allocation is made.

   // a snippet from the implementation:
   TimeSpan span = utc;
   DaylightTime daylightTime = GetDaylightTime(Year, rule);
   DateTime startTime = daylightTime.Start - span;
   DateTime endTime = (daylightTime.End - span) - rule.DaylightDelta;

In conclusion, DateTime.Now's implementation was updated in BCL v4.0 and the boxing of Int32s was removed, since the implementation now uses the TimeZoneInfo type instead of TimeZone. However, using TimeZoneInfo introduces a new source of dynamic allocations; this time, instead of redundant boxing, the allocations are caused simply because a reference type is being used under the hood. And since each instance of DaylightTime weighs in at 32 bytes (including the object header), the update in the BCL could be considered a turn for the worse regarding memory usage, since DaylightTime instances consume more memory than boxed instances of Int32.

Sunday, August 29, 2010

Brain Teasing With Strings

And in today's post... a riddle.
Actually, it comes down to a short C# program that deals with string comparisons and demonstrates a series of "unexpected" behaviors. How "unexpected" are the results? Well, that depends on the reader's degree of experience with string comparisons.
Lately this sort of "brain teaser" has become quite popular on different forums and newsgroups, but it seems that no matter where these teasers are posted, they never come with a full, in-depth explanation of all the "dark magic" going on behind the scenes that could explain the demonstrated "weird" and "unexpected" behaviors. So basically, I'll attempt to make things a little bit clearer than they are now.
So without any more introductions, please tell me: what would be the output of the following program?

static void Main()
{
    char[] a = new char[0];
    string b = new string(a);
    string c = new string(new char[0]);
    string d = new string(new char[0]);

    Console.WriteLine(object.ReferenceEquals(a, b));
    Console.WriteLine(object.ReferenceEquals(b, c));
    Console.WriteLine(object.ReferenceEquals(c, d));
}

When this question is presented to a group of experienced developers, it's likely to result in a variety of different answers, where each answer is backed up by some explanation that usually "makes sense". The problem is that these explanations usually just "make sense", and don't rely on hard proof that can uncover the mystery completely.
So, are you ready with your answers? Let's check the output:

False
True
True
"Hmm... that was interesting".
Let's see what we've got here. On the one hand, a isn't equal to b. This makes sense since, after all, they are different instances of different types. But on the other hand, we can also see that b is equal to c, and to d as well. This could appear a little confusing since, as we know, strings are supposed to be immutable, so according to the code we are "supposed" to create three different string instances (mind you, we aren't dealing with user-hardcoded, interned string literals here).
It appears that even though we've created three different arrays, we've only allocated a single string (the one that b, c and d reference).
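A quick way to convince yourself of this (on the .NET Framework builds discussed here; other runtimes may behave differently) is to compare the result against the interned String.Empty:

static void Main()
{
    string b = new string(new char[0]);

    // Prints True here: the "newly constructed" empty string is actually
    // the shared, interned String.Empty instance.
    Console.WriteLine(object.ReferenceEquals(b, string.Empty));
}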

Usually, when a char[] (or some other kind of string-representing structure) is passed to a string constructor, a new string is allocated to represent the array's characters. However, as our brief program demonstrates, this isn't always the case.
To understand why this happens, we'll have to dig into the implementation of the string class, and more specifically, into its internal implementation in SString (the following code samples are taken from the SSCLI's implementation).
Looking at SString's internal constructor, we can notice that the char[] argument it receives is checked for being NULL or having a length of zero. If so, the implementation completely "forgets" about that argument, and instead makes the new SString instance reference an interned version of an empty string (so the redundant allocation of a new, empty string is optimized away).
We can see this behavior in SString's Set and Clear functions (the Set function is invoked from the constructor).

void SString::Set(const WCHAR *string)
{
    if (string == NULL || *string == 0)
        Clear();
    else
    {
        Resize((COUNT_T) wcslen(string), REPRESENTATION_UNICODE);
        wcscpy_s(GetRawUnicode(), GetBufferSizeInCharIncludeNullChar(), string);
    }

    RETURN;
}

void SString::Clear()
{
    SetRepresentation(REPRESENTATION_EMPTY);

    if (IsImmutable())
    {
        // Use shared empty string rather than allocating a new buffer
        SBuffer::SetImmutable(s_EmptyBuffer, sizeof(s_EmptyBuffer));
    }
    else
    {
        SBuffer::TweakSize(sizeof(WCHAR));
        GetRawUnicode()[0] = 0;
    }

    RETURN;
}

Tuesday, August 24, 2010

Reducing AutoResetEvent's Synchronization Overhead

One of the most common design patterns in multithreaded programming is the producer-consumer pattern. In the usual scenario, a consumer thread is spawned into an infinite loop, where it waits on some handle until new data arrives from the producer threads. Then it wakes up and processes the new data.
The overall idea can be expressed via this code:

AutoResetEvent m_event = new AutoResetEvent(false);

private void WorkCycles()
{
    while(true)
    {
        // wait for a signal
        m_event.WaitOne();

        // do work..
    }
}

Using the AutoResetEvent class seems very suitable for this kind of scenario, since it supplies exactly the behavior we're looking for: causing a thread to wait on a handle until a signal arrives. However, calls to WaitOne/Set come with a price, and it isn't cheap. The AutoResetEvent class is simply a managed-code wrapper around a Win32 event object (with WaitOne boiling down to WaitForSingleObject), and it doesn't keep track of the internal event's state (whether it's currently signaled or not); thus, every call on the AutoResetEvent results in diving all the way into kernel mode, even when "we don't have to" (had we tracked the event's state ourselves).
In some scenarios, especially those where producer threads pass data to the consumer at high frequency, most of the calls to Set are redundant. If, for example, a producer passes new data every 100ms (which results in a call to Set), but it takes the consumer an entire second to process that data, then we've spent time calling Set 9 times more than we had to. Additionally, after the consumer finishes its processing, it has to call WaitOne again, only to realize the event was already set by a producer.

So, how expensive are those redundant calls to AutoResetEvent? The following benchmark should demonstrate:

while (true)
{
    AutoResetEvent eventObj = new AutoResetEvent(false);

    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < 1000000; i++)
    {
        eventObj.Set();

        eventObj.WaitOne();
    }

    Console.WriteLine(sw.ElapsedMilliseconds);
}

On my machine, the average result was 1035ms.
A whole second. That is the ridiculously large amount of time it took us to complete those few lines of code. We are wasting an entire second just on a synchronization primitive that, in our real application, might seem quite insignificant at first glance. Seemingly, we shouldn't waste more than a few milliseconds running that code, but as you can see, things are a little different in practice.

EconomicAutoResetEvent
The solution to this problem is quite straightforward, since our goal is to reduce the number of redundant calls to Set/WaitOne to a minimum. How could we achieve that? One option is to use an alternative synchronization primitive that, for the sake of this post, will be named EconomicAutoResetEvent. The purpose of this new primitive is to improve on AutoResetEvent's performance by keeping track of the event's state, thus avoiding unnecessary transitions into the kernel.
I'll demonstrate a possible implementation of such a primitive, and explain its behavior in more detail afterwards:

public class EconomicResetEvent
{
    private volatile int m_eventState;
    private AutoResetEvent m_event;

    private Thread m_worker;

    private const int EVENT_SET     = 1;
    private const int EVENT_NOT_SET = 2;
    private const int EVENT_ON_WAIT = 3;

    public EconomicResetEvent(bool initialState, Thread workerThread)
    {
        m_event = new AutoResetEvent(initialState);
        m_eventState = initialState ? EVENT_SET : EVENT_NOT_SET;

        m_worker = workerThread;
    }

    public void WaitForWork()
    {
        verifyCaller();

        if (m_eventState == EVENT_SET && Interlocked.CompareExchange(
            ref m_eventState, EVENT_NOT_SET, EVENT_SET) == EVENT_SET)
        {
            return;
        }

        if (m_eventState == EVENT_NOT_SET && Interlocked.CompareExchange(
            ref m_eventState, EVENT_ON_WAIT, EVENT_NOT_SET) == EVENT_NOT_SET)
        {
            m_event.WaitOne();
        }
    }

    public void SignalWork()
    {
        if (m_eventState == EVENT_NOT_SET && Interlocked.CompareExchange(
            ref m_eventState, EVENT_SET, EVENT_NOT_SET) == EVENT_NOT_SET)
        {
            return;
        }

        if (m_eventState == EVENT_ON_WAIT && Interlocked.CompareExchange(
            ref m_eventState, EVENT_NOT_SET, EVENT_ON_WAIT) == EVENT_ON_WAIT)
        {
            m_event.Set();
        }
    }

    // [Conditional("DEBUG")]
    private void verifyCaller()
    {
        if (m_worker != Thread.CurrentThread)
        {
            string errMsg = string.Format("Only the pre-defined worker thread may " +
                "call WaitOne (Current: {0}, Worker: {1})", Thread.CurrentThread, m_worker);

            throw new SynchronizationLockException(errMsg);
        }
    }
}
 
I assume the question most of you are asking right now is "how fast is it?". Well, after running the same benchmark as before, the average result I get on my machine is only 9ms. An outstanding improvement of some 11,500% over using AutoResetEvent directly. And to think that all we did is avoid some calls to the internal kernel object.
The most important part of the implementation is the tracking of the internal kernel object's state (signaled/not signaled). The state itself is kept in m_eventState; that is how we know which calls to WaitForWork/SignalWork can avoid accessing m_event (the AutoResetEvent).
Additionally, in order to make the store/load operations on m_eventState thread-safe, I've used a couple of CAS operations which, even though they are cheaper than a "full blown" lock, are still fairly expensive instructions that are best avoided when they're not necessary. This is exactly why the double tests in the if statements are performed. This technique is usually referred to as TATAS (test-and-test-and-set).
It's worth noting that some mild race conditions can occur when the consumer thread is entering/leaving WaitForWork while a producer calls SignalWork at a very specific moment. This race condition won't cause any functional harm; at worst, it will cause us to perform a redundant call to Set/WaitOne. Resolving this "issue" would mean we prefer to enforce fairness, and such enforcement could have a negative effect on performance.
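For completeness, here's a hedged sketch of how EconomicResetEvent could be wired into the producer/consumer loop from the beginning of the post (the Consumer class and its members are made up):

using System.Threading;

class Consumer
{
    private EconomicResetEvent m_event;
    private Thread m_workerThread;

    public void Start()
    {
        m_workerThread = new Thread(WorkCycles);
        m_event = new EconomicResetEvent(false, m_workerThread);
        m_workerThread.Start();
    }

    // called by producer threads whenever new data has been queued
    public void OnNewData()
    {
        m_event.SignalWork();
    }

    private void WorkCycles()
    {
        while (true)
        {
            // wait for a signal, skipping the kernel when one is already pending
            m_event.WaitForWork();

            // dequeue and process the new data...
        }
    }
}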

Since we're already discussing the subject, it would be appropriate to mention that the Parallel Extensions library introduced a new synchronization primitive named ManualResetEventSlim, which also supplies extended functionality and better performance than the standard event objects. Beyond keeping track of the state of the internal kernel event, it also spins around the flag for a while before actually blocking the thread, and it uses lazy initialization to create the kernel object (it's created only when needed). The good guys at Microsoft were kind enough to publish a short but informative document that demonstrates the performance differences between the new and old synchronization primitives under a selected group of usage scenarios.

Friday, August 20, 2010

The Case of Delayed ACKs and Nagle's Algorithm

Even though the negative consequences of combining Nagle's algorithm with delayed ACKs over TCP are well documented in the literature, this is still a common pitfall that many people tend to forget about. Since the negative effects of the combination appear only under certain preconditions, it could be a while until one correlates the application's "weird" behavior with the usage of these two algorithms. Only as the performance penalty grows more distinct does it become easier to identify its source.

Delayed ACKs
Since TCP guarantees that every packet sent on the network will arrive at its designated remote host, it needs some kind of acknowledgment from that remote host that the packet managed to arrive. So, every time a host receives a packet, it needs to send back an ACK message as an acknowledgment.
The problem with using dedicated ACK messages is that we could find ourselves "flooding" the network with messages that have nothing to do with our application (and contain almost no data at all), all just to say "yes, I've received your packet" (if you combine the minimal size of a TCP and IP header [even without the Ethernet header], you already reach 40 bytes [and for IPv6, that number increases to 60 bytes]).
So, in order to reduce that overhead, "delayed ACKs" were defined. The idea is that instead of sending a dedicated ACK message for each received packet, we assume that our application (the receiver) is probably about to send "some" message (not necessarily a response message) back to the remote host that sent us the packet in the first place. This way, we can "piggyback" on the application's message and add the ACK for the previously received packet to its header. Hence, we can substantially reduce the amount of redundant data circulating in our network.
Usually it's customary to use a 200ms delay for sending ACKs (the exact behavior of the timeout depends on the protocol's implementation details: is a 200ms timer created when the socket opens, or perhaps only when the application needs to send ACKs?).
In Windows, the default timeout is 200ms as well. But if you want to, you can change it by modifying the TcpDelAckTicks value in the registry and setting it somewhere between 0ms and 600ms (as an anecdote, the Host Requirements RFC forbids ACK delays greater than 500ms).
It's worth noting that according to the RFC, there's no need to send back an ACK for every packet we receive, since a single ACK message can acknowledge multiple received packets. If, for example, machine A sends 5 different messages at 10ms intervals to machine B, then the minute the first message arrives at machine B, B starts a timer with a 200ms timeout so it will be alerted to send back an ACK for that first message (assuming this is a one-way interface, so machine B won't send back any applicative messages). Before that timer elapses, the machine will receive the other 4 messages, so if we're already going to send an ACK for the first message, we might as well modify it to acknowledge the entire set of 5 messages.
Additionally, receiving packets isn't the only trigger for sending an ACK message. For example, if our receive window (the maximum number of bytes we can receive without sending back an ACK message) is about to be filled, we have to send back an ACK message immediately. Another trigger is the "send an ACK for every second packet" rule, which does exactly what its name implies. In Windows, the default value (2) can be changed by modifying the TcpAckFrequency registry value.

Nagle's Algorithm
Even though it isn't directly related to the usage of delayed ACKs, this algorithm attempts to resolve a similar problem that occurs on the sender's side. For the same reason we shouldn't flood the network with small ACK messages, we shouldn't send small applicative messages ourselves, due to the overhead involved in sending an entire header while the message's body is very small.
According to the algorithm, the protocol may delay small "send" operations so it can buffer them together (and thus a single packet can be sent for more than a couple of small applicative messages).
Deciding when to stop buffering the data isn't arbitrary, as it depends on the rate at which ACKs are received from the remote host. The idea is that until we've received an ACK for our previously sent packet, there's no point in sending an additional packet. So, while we are waiting to receive an ACK from the remote host, we buffer all of the data we are about to send (within the limits of the MSS, of course). The minute we receive the ACK for our previous packet, all of the buffered data is combined and sent as a single packet.

Seemingly, both of the mentioned algorithms justify their existence, since each attempts to resolve a real problem. However, what happens if we combine them? On one hand, both of them try to reduce the number of tinygrams being sent on their side (either ACK messages or applicative messages). But on the other hand, under certain conditions, they may introduce a significant delay when attempting to send messages over the network. The most obvious example is a one-way interface, where one side only sends messages and the other side just receives them (without "answering"). In this case, even if the application continuously sends messages at a high rate, we can expect latencies of up to 200ms between the time a message is "sent" by the application and the time it is received by the recipient (since it was delayed by Nagle's algorithm). In other cases, even on two-way interfaces, there might be occasions in which the client/server stops sending messages for a few moments. This can also cause latencies of up to 200ms in the receive rate. In these kinds of cases, identifying the source of the latencies might be more difficult, since they tend to appear randomly and with latencies that aren't always constant (less than 200ms). The exact behavior is determined by the characteristics of the application.


In order to illustrate this behavior in code, I've written the following program, which measures the time it takes to send two messages that seemingly should be sent instantly, one right after the other.

void Server()
{
    Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    server.Bind(ServerEndPoint);
    server.Listen(1);
    Socket s = server.Accept();

    while (true)
    {
        // measure how long it takes to receive both messages
        Stopwatch stopwatch = Stopwatch.StartNew();

        s.Receive(new byte[8]);
        s.Receive(new byte[8]);

        // Output: around 200ms
        Console.WriteLine(stopwatch.ElapsedMilliseconds);
    }
}

void Client()
{
    Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    client.Bind(ClientEndPoint);
    client.Connect(ServerEndPoint);

    while (true)
    {
        client.Send(new byte[8]); // will be sent immediately
        client.Send(new byte[8]); // delayed for 200ms

        // wait for an imaginary response
        client.Receive(new byte[0]);
    }
}

Due to the unpleasant effects that the combination of these algorithms might cause, the RFC states that implementations of the TCP protocol that use Nagle's algorithm must also support a way to disable it, so applicative messages won't be delayed by the protocol when they are sent. This ability is exposed through the TCP_NODELAY flag, and in .NET its usage is wrapped by the Socket.NoDelay property.
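For example, a client socket like the one in the sample above could disable Nagle's algorithm with either of the following (both set TCP_NODELAY on the underlying socket):

using System.Net.Sockets;

class NoDelayExample
{
    static Socket CreateClientSocket()
    {
        Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        // disable Nagle's algorithm through the dedicated property...
        client.NoDelay = true;

        // ...or, equivalently, through the socket-options API
        client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);

        return client;
    }
}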

Saturday, August 14, 2010

Don't Rely on Environment.ProcessorCount

One of the most elusive pieces of knowledge in multithreaded programming is the answer to the question "how many threads should I use in my application to get the best performance out of the available hardware?". The answer may vary, since it depends on the characteristics of the application: is it CPU bound or IO bound? How much of the work can be parallelized? And so on... But at the bottom line, all of these "formulas" are based on the number of available processors.

Usually, when an application runs its initialization routine and gets ready to create a collection of worker threads, it will check how many processors are installed on the machine so it can decide exactly how many threads to create. To get the number of installed processors, you would usually call Environment.ProcessorCount. This property simply calls the Win32 GetSystemInfo function and returns the dwNumberOfProcessors field to the caller. The problem with this value is that it doesn't necessarily represent the number of processors available to our process. In a certain scenario, the user that launched our application might have decided to give it a certain processor affinity, which causes our application's threads to execute only on the processors allowed by that affinity, instead of on all installed processors. The problem in this case is that the application will completely ignore the processor affinity and create a larger number of threads than it should (in an even worse case, it could assign the threads affinities set to processors that aren't even available to the process).
What will eventually happen is that we'll have too many threads running on our subset of available processors, causing us to suffer a performance penalty due to excessive context switching.
The scenario in which we set our process to run on only a subset of processors is far from far-fetched. In situations where we have a couple of applications that are designed to take advantage of all available processors and are meant to run in parallel on the same machine (without using some kind of virtualization), we would probably like to split our installed processors in half, giving each process only a subset of processors to use. If the applications aren't aware of this subset, we'll find ourselves wasting valuable CPU cycles for no good reason.

In .NET, you can get the processor affinity using the Process.ProcessorAffinity property. It returns a bitmask where each bit represents a logical CPU on which our process can execute (in case the mask is set to 0, the scheduler decides which processors our application will use, so basically all of the processors are available).
Current versions of Windows provide support for more than 64 logical CPUs. In order to address all of those CPUs, they are divided into groups, where each group can address up to 64 processors. So when looking at the processor affinity bitmask, you should be aware that it belongs only to a specific group (by default, only a single group is used).
So next time you're checking how many processors are available to your process, remember to count the number of lit bits in your processor affinity bitmask.

 static void PrintAffinitizedProcessors()
 {
     // gets the number of affinitized processors in the
     // current processor group (up to 64 logical processors)
     Process currentProcess = Process.GetCurrentProcess();
     long affinityMask = (long)currentProcess.ProcessorAffinity;
 
     if (affinityMask == 0)
         affinityMask = (1L << Environment.ProcessorCount) - 1; // treat as "all installed processors"
 
     const int BITS_IN_BYTE = 8;
     int numberOfBits = IntPtr.Size * BITS_IN_BYTE;
 
     int counter = 0;
 
     for (int i = 0; i < numberOfBits; i++)
     {
         if ((affinityMask >> i & 1) == 1)
         {
             Console.WriteLine(i);
             counter++;
         }
     }
 
     Console.WriteLine("Total: " + counter);
 }

Saturday, August 7, 2010

Accurately Measuring GC Suspensions

When you analyze the performance of a managed application and look for its main bottlenecks, one place you should always check is the amount of time your application spends in GC. In order to get that information, you can always run perfmon and get a general analysis of how the GC behaves in your application (where most of the time you'll be looking at the % Time in GC counter).
While looking at those data sets can give us a good overview of how the GC affects our application's performance, it just isn't enough to get more in-depth insights about observed "freezes" in the application. For instance, even if we see that 90% of the time we only spend 2% in GC collections, it still doesn't mean that at some critical moment in our application there wasn't a generation 2 collection that froze the application for about 200ms, which could cause serious damage in some scenarios.
The problem is that GC collections don't just cause the application to "run more slowly"; they bring the application to a complete stop for unknown amounts of time. So looking solely at the % Time in GC graph doesn't tell us for sure what the real damage of each collection was. Also, remember that perfmon's highest sampling resolution is only 1 second, so if we suffered a single generation 2 collection, we might not even notice it as we should. More than that, it's almost impossible to make sure that perfmon runs, gathers and records the correct data on every machine that runs your application, at any given time.

This is why we need a reliable way to monitor when the GC decides to perform a collection, and when it does, to check the collection's generation and the amount of time it takes to complete.
Today, the System.GC class doesn't expose an accurate way to collect this data. However, by writing a custom CLR host, we can integrate our code with the CLR and receive the proper notifications that will tell us exactly when each and every GC starts and ends in the process.
In order to do so, we will need to implement the IHostGCManager interface and use two of its three available callback functions. The first is SuspensionStarting, and the second is SuspensionEnding, which also passes us a single DWORD parameter that represents the generation of the collection that just ended. So, as you might have already figured out, the CLR makes sure to call SuspensionStarting right before it starts the collection, and afterwards it calls SuspensionEnding.
One thing to pay attention to about SuspensionEnding is that the documentation available on MSDN isn't entirely accurate, and might confuse users about the possible triggers for calling the callback. This is what the documentation says today:

"...Notifies the host that the common language runtime (CLR) is resuming execution of tasks on threads that had been suspended for a garbage collection.
[parameter: generation] The garbage collection generation that is just finishing, from which the thread is resuming."
So according to the documentation, the callback is only invoked due to freezes caused by a GC. However, this isn't the only trigger. Actually, the CLR will also invoke it when it resumes execution after other kinds of "stop-the-world" operations it might perform (e.g., loading and unloading of AppDomains). If we look at where that callback is invoked in the SSCLI's implementation, we can see the following code:

if (pGCThreadControl)
{
    // If we the suspension was for a GC, tell the host what generation GC.
    DWORD   Generation = (bFinishedGC
        ? GCHeap::GetGCHeap()->GetCondemnedGeneration()
        : ~0U);

    pGCThreadControl->SuspensionEnding(Generation);
}

So we can see that for every freeze that wasn't caused by a GC, the generation parameter passed to the callback will contain the value UINT_MAX, so when implementing your CLR host, you should remember to check for this special value.

As for the measuring itself, we'll use the QueryPerformanceCounter function (wrapped in .NET by the Stopwatch class) to achieve the highest possible time resolution for every collection.
Since most of the collections we'll encounter will be very short ones (mostly ephemeral collections that may take only a few milliseconds each), we'll likely want to avoid spending time recording the data (and thus avoid unnecessary IO). In such a case it could be useful to use a logging framework that filters collections according to their severity (e.g., Debug for very short collections, Info for more notable collections, and Warn for lengthy collections that might indicate a problem). After attaching an appender that writes all of our logs to the console window and running an application that constantly allocates memory, we could get an output such as this:


As a reference, I'm including a sample CLR host that monitors collections and writes their duration to the console:


#include <windows.h>
#include <tchar.h>
#include <mscoree.h>
#include <iostream>
#include <cassert>

using namespace std;

#define APP_STARTUP_EXE L"TestApplication.exe"
#define APP_ENTRY_TYPE L"SomeNamespace.Program"
#define APP_ENTRY_METHOD L"Main"

class MyCLRHost : public IHostControl, public IHostGCManager
{
private:
    LONG m_refCount;
    LARGE_INTEGER m_lastGCStart;
    LARGE_INTEGER m_frequency;

public:
    MyCLRHost() { QueryPerformanceFrequency(&m_frequency); }

    // IHostControl
    HRESULT __stdcall GetHostManager(REFIID riid, void** ppObject)
    {
        if(riid == IID_IHostGCManager)
        {
            *ppObject = static_cast<IHostGCManager*>(this);
            return S_OK;
        }

        *ppObject = NULL;
        return E_NOINTERFACE;
    }

    // IUnknown
    HRESULT __stdcall QueryInterface(REFIID riid, void** ppvObject)
    {
        if (riid == IID_IHostGCManager)
        {
            *ppvObject = static_cast<IHostGCManager*>(this);
            return S_OK;
        }

        *ppvObject = NULL;
        return E_NOINTERFACE;
    }

    HRESULT __stdcall SetAppDomainManager(DWORD appDomain, IUnknown* domainManager)
    {
        return S_OK;
    }

    ULONG __stdcall AddRef() { return InterlockedIncrement(&m_refCount); }
    ULONG __stdcall Release() { return InterlockedDecrement(&m_refCount); }

    // IHostGCManager
    HRESULT __stdcall ThreadIsBlockingForSuspension() { return S_OK; }

    HRESULT __stdcall SuspensionStarting()
    {
        QueryPerformanceCounter(&m_lastGCStart);

        return S_OK;
    }

    HRESULT __stdcall SuspensionEnding(DWORD gen)
    {
        LARGE_INTEGER gcEnd;
        QueryPerformanceCounter(&gcEnd);
        double duration = ((gcEnd.QuadPart - m_lastGCStart.QuadPart))
            * 1000.0 / (double)m_frequency.QuadPart;

        if(gen != UINT_MAX)
            cout<<"GC generation "<<gen<<" ended: "<<duration<<"ms"<<endl;
        else
            cout<<"CLR suspension ended: "<<duration<<" ms"<<endl;

        return S_OK;
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    ICLRRuntimeHost* pCLR;
    DWORD startupFlags = STARTUP_CONCURRENT_GC;
    HRESULT hr = CorBindToRuntimeEx(L"v2.0.50727", L"wks", startupFlags,
        CLSID_CLRRuntimeHost, IID_ICLRRuntimeHost, (LPVOID*)&pCLR);
    assert(SUCCEEDED(hr));

    MyCLRHost customHost;
    hr = pCLR->SetHostControl(&customHost);
    assert(SUCCEEDED(hr));

    hr = pCLR->Start();
    assert(SUCCEEDED(hr));

    DWORD retcode;
    hr = pCLR->ExecuteInDefaultAppDomain(APP_STARTUP_EXE,
        APP_ENTRY_TYPE, APP_ENTRY_METHOD, L"" , &retcode);
    assert(SUCCEEDED(hr));

    return 0;
};