Friday, September 24, 2010

Writing a Semi-Local Object Pool

Using object pooling in managed environments can usually benefit us in two ways:
  • Reducing the time required to create "heavyweight" objects (whose construction might involve executing time-consuming tasks).
  • Reducing the amount and rate of dynamic memory allocations, thus reducing GC latency in future collections.
Nevertheless, it's important to remember that under certain scenarios, using object pools might actually have a negative impact on your application's performance. Since dynamic memory allocation in managed environments (e.g., the CLR, the JVM) is considerably fast, using object pools for "lightweight" objects could add unnecessary overhead to the object's allocation process. Brian Goetz summarized this issue:
Allocation in JVMs was not always so fast -- early JVMs indeed had poor allocation and garbage collection performance, which is almost certainly where this myth got started. In the very early days, we saw a lot of "allocation is slow" advice -- because it was, along with everything else in early JVMs -- and performance gurus advocated various tricks to avoid allocation, such as object pooling. (Public service announcement: Object pooling is now a serious performance loss for all but the most heavyweight of objects, and even then it is tricky to get right without introducing concurrency bottlenecks.)
A common, simple pattern for implementing an object pool is to create a single pool instance that is shared across the entire application. To achieve thread-safety, you would usually find a single, global lock around the Allocate and Free methods.
It's obvious that this type of design can introduce major concurrency bottlenecks. The more objects we attempt to pool, the greater the chance that multiple threads will try to acquire the pool's lock at the same time. And since we maintain only a single, global pool, contention on that lock is bound to appear, effectively ruining our application's scalability.
To demonstrate the issue, I've written a small benchmark that uses all of the available processors to allocate and free a constant number of objects (each thread gets an equal share of the objects to pool). Logically, the more processors we use, the faster we should finish allocating and freeing the constant number of objects. However, as the results show, we actually experience a slowdown that gets worse as we add more processors.
The results aren't surprising, since they're easily explained by the massive contention on our single lock. (The time axis in the chart is expressed in milliseconds.)

The first implementation of the pool used in the test:
(Mind you, the code samples in this post are purely meant to demonstrate the conceptual differences between the pools.)
    // holds a dictionary that maps each pooled type to its own sub-pool
    public class SimpleMainPool
    {
        private Dictionary<Type, ISubPool> m_main;

        // to make things simpler, the dictionary isn't modified
        // after the first initialization
        public SimpleMainPool(Type[] pooledTypes)
        {
            m_main = new Dictionary<Type, ISubPool>();

            // each type maps to a SimplePool in this first benchmark
            // (the semi-local benchmark swaps in a SemiLocalPool here instead)
            foreach (Type curType in pooledTypes)
                m_main.Add(curType, new SimplePool(curType));
        }

        public object Allocate(Type type)
        {
            ISubPool sub = m_main[type];

            object pooledObj = sub.Allocate();
            return pooledObj;
        }

        public void Free(object obj)
        {
            ISubPool sub = m_main[obj.GetType()];
            sub.Free(obj);
        }
    }

    // our simple thread-safe pool
    class SimplePool : ISubPool
    {
        private const int PRIME = 50;

        private Type m_type;

        private Stack<object> m_sharedPool;

        public SimplePool(Type type)
        {
            m_sharedPool = new Stack<object>(PRIME);
            m_type = type;

            for (int i = 0; i < PRIME; i++)
            {
                object sharedObj = Activator.CreateInstance(m_type);
                m_sharedPool.Push(sharedObj);
            }
        }

        public object Allocate()
        {
            lock (m_sharedPool)
            {
                if (m_sharedPool.Count == 0)
                {
                    for (int i = 0; i < PRIME; i++)
                    {
                        object newAlloc = Activator.CreateInstance(m_type);
                        m_sharedPool.Push(newAlloc);
                    }
                }

                object fromShared = m_sharedPool.Pop();
                return fromShared;
            }
        }

        public void Free(object obj)
        {
            lock (m_sharedPool)
            {
                m_sharedPool.Push(obj);
            }
        }
    }

    interface ISubPool
    {
        object Allocate();
        void Free(object obj);
    }  
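The benchmark driver itself isn't shown in the post; a minimal sketch of what it might have looked like (the total object count, the pooled type and the output format are my own placeholders):

    // assumes: using System; using System.Diagnostics; using System.Text; using System.Threading;
    static void RunBenchmark(int threadCount)
    {
        const int TOTAL_OBJECTS = 1000000;
        int objectsPerThread = TOTAL_OBJECTS / threadCount;

        SimpleMainPool pool = new SimpleMainPool(new[] { typeof(StringBuilder) });
        Thread[] threads = new Thread[threadCount];
        Stopwatch sw = Stopwatch.StartNew();

        for (int t = 0; t < threadCount; t++)
        {
            threads[t] = new Thread(() =>
            {
                // each thread allocates and frees an equal share of the objects
                for (int i = 0; i < objectsPerThread; i++)
                {
                    object obj = pool.Allocate(typeof(StringBuilder));
                    pool.Free(obj);
                }
            });
            threads[t].Start();
        }

        foreach (Thread thread in threads)
            thread.Join();

        Console.WriteLine("{0} thread(s): {1}ms", threadCount, sw.ElapsedMilliseconds);
    }

Running this with an increasing thread count on a multi-core machine should reproduce the slowdown from the chart above.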
As in all things related to concurrency: if you don't have locality, you've got sharing, and once you have sharing, you'll probably end up with contention that is bound to harm your application's performance, wasting valuable CPU cycles.
So if we'd like to improve our scalability, our goal is clear: reducing the amount of shared data. For example, if pools weren't shared across different threads, we wouldn't have to worry about synchronizing them, and we could avoid the involved contention altogether. A simple way to achieve this is to use thread-local storage (TLS) to allocate an independent pool for every thread. On the one hand, we'd achieve perfect scalability by avoiding shared state; on the other hand, this kind of implementation could lead to excessive memory usage. For instance, if a single instance of our pool (including all of its pre-allocated objects) weighs about 10MB, then on a machine with 16 processors we could find ourselves dedicating no less than 160MB to our thread-local pools, even though it's unlikely that every single thread needs all the types of objects that were allocated in its local pool.
For example, if we're parallelizing some algorithm using 3 threads, where thread 1 needs objects of type A, thread 2 needs objects of type B and thread 3 needs objects of type C, then it makes no sense for each of those threads to hold a pool containing objects of all three types.
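To illustrate the trade-off, here's a minimal sketch of such a purely thread-local pool (my own illustration, assuming one pooled type per pool): no lock is ever taken, but every thread that touches the pool duplicates its memory.

    class ThreadLocalOnlyPool<T> where T : new()
    {
        // one independent stack per thread (and per T); no synchronization needed
        [ThreadStatic]
        private static Stack<T> t_pool;

        public T Allocate()
        {
            if (t_pool == null)
                t_pool = new Stack<T>();

            return t_pool.Count > 0 ? t_pool.Pop() : new T();
        }

        public void Free(T obj)
        {
            if (t_pool == null)
                t_pool = new Stack<T>();

            t_pool.Push(obj);
        }
    }

This is the extreme opposite of the single global pool: perfect scalability, at the cost of a worst-case memory footprint.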

A possible solution to this problem is to use a pool hierarchy: every time a thread attempts to create an object, it directs itself to its "closest" pool instance. If that pool doesn't contain available instances of the requested object, it continues to navigate up the hierarchy until it reaches a pool that holds available instances. Once the thread finishes using the object, it returns it to a pool located "closer" to that thread; this way, we maintain a level of locality between a thread and the objects it uses.
Instead of getting tangled up in an unclear, overly complex hierarchy, I'll demonstrate the concept using a flat hierarchy that offers a single "global" pool shared across all threads, plus a local pool for every thread.
Basically, the idea is that the only place where synchronization is involved is the shared pool. So in the optimal scenario, each local pool will eventually hold just enough objects to keep its thread from ever accessing the shared pool.
Every time a thread needs to create an object, it first checks its local pool. Since this pool serves only the requesting thread, we don't have to deal with synchronization here. Only when we've run out of objects do we move on to the shared pool and transfer N more instances of the requested object to the local pool. It could be wise to transfer more objects than the thread initially asked for, in order to avoid future accesses to the shared pool. Also, in order to cap the amount of memory we dedicate to each thread, we could decide that each local pool can hold a maximum of X objects. Once we've exceeded that number, every time a thread frees an object, it returns it to the shared pool instead of its local pool (of course, this could cause some contention, depending on the implementation details [e.g. the pool may buffer object returns], but it's entirely up to the developer to perform this kind of fine-tuning [memory usage vs. scalability]).

To demonstrate the concept, I've come up with this simplistic pool implementation:
    class SemiLocalPool : ISubPool
    {
        private const int SHARED_PRIME = 50;

        private const int LOCAL_PRIME = 20;
        private const int LOCAL_MAX = 1000;

        // note: a [ThreadStatic] static field would be shared by every
        // SemiLocalPool instance (i.e. by all pooled types), so each pool
        // instance holds its own per-thread stack via ThreadLocal<T>
        // (System.Threading, .NET 4.0) instead
        private readonly ThreadLocal<Stack<object>> m_localPool =
            new ThreadLocal<Stack<object>>(() => new Stack<object>(LOCAL_PRIME));

        private Type m_type;
        private Stack<object> m_sharedPool;

        public SemiLocalPool(Type type)
        {
            m_sharedPool = new Stack<object>(SHARED_PRIME);

            m_type = type;

            for (int i = 0; i < SHARED_PRIME; i++)
            {
                object sharedObj = Activator.CreateInstance(m_type);
                m_sharedPool.Push(sharedObj);
            }
        }

        public object Allocate()
        {
            Stack<object> localPool = m_localPool.Value;

            // first, try to allocate from the local pool (no lock required)
            if (localPool.Count > 0)
            {
                object localObj = localPool.Pop();
                return localObj;
            }

            int allocated = 0;

            lock (m_sharedPool)
            {
                // pass objects from the shared pool to the local pool
                for (; m_sharedPool.Count > 0 && allocated < LOCAL_PRIME - 1; allocated++)
                {
                    object sharedObj = m_sharedPool.Pop();
                    localPool.Push(sharedObj);
                }

                // re-prime the shared pool once it has been emptied
                if (m_sharedPool.Count == 0)
                {
                    for (int i = 0; i < SHARED_PRIME; i++)
                    {
                        // bad practice: holding the lock while executing external code
                        object sharedObj = Activator.CreateInstance(m_type);

                        m_sharedPool.Push(sharedObj);
                    }
                }
            }

            // if the shared pool didn't contain enough elements, prime the remaining items
            for (; allocated < LOCAL_PRIME - 1; allocated++)
            {
                object newAlloc = Activator.CreateInstance(m_type);
                localPool.Push(newAlloc);
            }

            // the object handed to the caller is allocated directly,
            // completing the batch of LOCAL_PRIME objects
            object newObj = Activator.CreateInstance(m_type);
            return newObj;
        }

        public void Free(object obj)
        {
            Stack<object> localPool = m_localPool.Value;

            // first, return to the local pool
            if (localPool.Count < LOCAL_MAX)
            {
                localPool.Push(obj);
                return;
            }

            // only after reaching LOCAL_MAX, push back to the shared pool
            lock (m_sharedPool)
            {
                m_sharedPool.Push(obj);
            }
        }
    }
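A hypothetical usage snippet (Parallel.For and the pooled type are stand-ins for the benchmark's real workload):

    // assumes: using System.Text; using System.Threading.Tasks;
    SemiLocalPool pool = new SemiLocalPool(typeof(StringBuilder));

    Parallel.For(0, 1000000, i =>
    {
        object obj = pool.Allocate(); // touches the shared pool only on a local miss
        // ... use the pooled object ...
        pool.Free(obj);               // usually returns to the calling thread's local pool
    });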
The scalability difference between the two implementations is closely related to each thread's pool-usage pattern and to the values given to LOCAL_MAX, LOCAL_PRIME, etc. If we reach a situation where there are always enough objects in the local pool, then we should enjoy perfect scalability.
For the purpose of the demonstration, here are the results of the previous benchmark, now using the new pool implementation (besides exceeding the predefined values at the beginning of the benchmark, the benchmark exhibits an optimal usage pattern [accessing only the local pool after a while]).

One problematic characteristic of this type of design is its reliance on thread affinity. While in some scenarios this could actually benefit us, in others it could make the semi-local pool irrelevant.
If every thread in our application is affinitized to a certain section of the code (one that allocates a constant set of objects), then this design could be optimal, since we dedicate each local pool to a managed thread. We essentially assume that each thread will always allocate from a specific, constant set of object types.
However, if the threads don't comply with this assumption, then it's only a matter of time until each local pool holds the entire set of pooled objects in the application (which will, of course, lead to high memory usage).
In order to better handle such scenarios, we could add another hierarchy level that separates the shared pools according to different sections of the code. Meaning, threads that are currently executing code from a network module, for example, will access pool X, while threads that are currently executing some algorithm will access pool Y. This way, we achieve object locality by relying not on thread affinity, but on "category affinity" (each section of the code uses a certain set of objects relevant to it). When a thread wants to allocate an object, it tells the pool which area of the code it's currently executing, so that it receives the appropriate "category pool". It's likely that this pool already contains the types of objects the current thread will request, since they were already allocated by other threads that previously executed the same code section.
And some code to illustrate the concept:
    public class CategorizedMainPool
    {
        private Dictionary<string, SimpleMainPool> m_main;

        public CategorizedMainPool(Tuple<string, Type[]>[] pooledCategories)
        {
            m_main = new Dictionary<string, SimpleMainPool>();

            foreach (Tuple<string, Type[]> curCategory in pooledCategories)
            {
                SimpleMainPool curSub = new SimpleMainPool(curCategory.Item2);
                m_main.Add(curCategory.Item1, curSub);
            }
        }

        public object Allocate(string category, Type type)
        {
            SimpleMainPool sub = m_main[category];

            object pooledObj = sub.Allocate(type);
            return pooledObj;
        }

        public void Free(string category, object obj)
        {
            SimpleMainPool sub = m_main[category];
            sub.Free(obj);
        }
    }
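A hypothetical usage snippet (the category names and the pooled types are placeholders of my own):

    CategorizedMainPool pool = new CategorizedMainPool(new[]
    {
        Tuple.Create("network", new[] { typeof(MemoryStream), typeof(StringBuilder) }),
        Tuple.Create("algorithm", new[] { typeof(List<double>) })
    });

    // a thread currently executing the network module:
    object buffer = pool.Allocate("network", typeof(MemoryStream));
    // ... use the pooled object ...
    pool.Free("network", buffer);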

Friday, September 10, 2010

"!CLRStack -p" Isn't Always Reliable

One of the most commonly used commands in SOS is !CLRStack. When combined with the -p switch, SOS attempts to display the values of the parameters that were passed to the functions in our managed call stack. It's important to emphasize that SOS will only attempt to display the correct values, since sometimes it's just going to get it all wrong.

In cases where SOS concludes that it cannot retrieve the value of a parameter, the string <no data> will be displayed. This happens in situations where SOS honestly can't track the value of the parameter just by looking at a specific stack frame. For example, if we're using a calling convention such as fastcall, the first two parameters (starting from the left) that can fit in a register will be passed in the ECX and EDX registers instead of being pushed onto the stack. For member functions, the value of the this pointer is usually passed in the ECX register.
This kind of behavior can lead to situations where the values of some of the function parameters will be missing, since the registers were already overwritten by other functions that were called further down the stack.
As opposed to situations where SOS can conclude that it isn't sure about a parameter's correct value, every once in a while it's just going to get things wrong and display incorrect values. This could obviously mislead the person attempting to debug the application.
The thing that might be most concerning about this phenomenon is that it's not very hard to reproduce. Let's take the following application as an example:

    using System.Runtime.CompilerServices; // for MethodImplOptions.NoInlining

    class Program
    {
        static void Main(string[] args)
        {
            Foo a = new Foo();
            a.Work(1, 2, 3, 4, 5);
        }
    }
 
    class Foo
    {
        [MethodImpl(MethodImplOptions.NoInlining)]
        public void Work(int x, int y, int z, int k, int p)
        {
            // break with debugger here
        }
    }
Now we'll run WinDbg, load SOS and see what the !CLRStack command has to say about the parameters that were passed to the Work method.
(Note: the output might vary slightly, depending on your version of SOS. The following debugging session was performed on version 4.0.30319.1.)

0:000> !clrstack -p
OS Thread Id: 0xfbc (0)
Child SP IP       Call Site
0012f3fc 030300b3 ConsoleApplication1.Foo.Work(Int32, Int32, Int32, Int32, Int32) [Program.cs @ 24]
    PARAMETERS:
        this (<CLR reg>) = 0x00b3c358
        x (<CLR reg>) = 0x00000001
        y (0x0012f40c) = 0x00000003
        z (0x0012f408) = 0x00000004
        k (0x0012f404) = 0x00000005
        p (0x0012f400) = 0x03030092
0012f414 03030092 ConsoleApplication1.Program.Main(System.String[]) [Program.cs @ 16]
    PARAMETERS:
        args = <no data>
0012f648 791421db [GCFrame: 0012f648]

0:000> r ecx  // holds "this" pointer
ecx=00b3c358
0:000> r edx  // holds "x" parameter
edx=00000001
If so, it isn't difficult to see that besides the this pointer and the x parameter (which were passed in registers), SOS got all of the other parameters wrong. In fact, one may notice a "shift" in the displayed values (y got the value of z, while z got the value of k, and so on...).
In order to better understand what went on here, we'll dump the memory between the relevant stack pointers (in previous versions of SOS, this value was represented by the ESP column instead of the Child SP column).

0:000> dp /c 1 0012f3fc  0012f414
0012f3fc  0012f414 // EBP
0012f400  03030092 // Return Address
0012f404  00000005 // p 
0012f408  00000004 // k
0012f40c  00000003 // z
0012f410  00000002 // y
0012f414  0012f424 // EBP

Now, when we compare the stack addresses of the parameters that SOS gave us against the real addresses we see here, we can confirm that a small shift was applied to the memory addresses of the parameters that were passed on the stack: SOS reported y at 0x0012f40c, for example, which is actually z's slot. So every time SOS attempts to display the value of a parameter, it actually displays the value of the parameter that was passed next to it.
This scenario is a classic example of the somewhat "buggy" nature of SOS. It doesn't mean that we have to immediately stop using the !CLRStack command, but it should remind us not to take SOS's output as the "absolute truth" when debugging, and to keep ourselves alert for all kinds of "weird" behaviors such as this one.

Thursday, September 2, 2010

DateTime.Now in v4.0 Causes Dynamic Memory Allocations

A while back I mentioned in a post that calling DateTime.Now causes boxing. Following that post, a feedback item was posted on Microsoft Connect, reporting the issue.
Yesterday, the feedback's status turned to Resolved, after Greg cleverly remarked that this issue was fixed in v4.0 (the boxing occurred up until v3.5).

However, after reading the response, I took the same benchmark code from the previous post and ran it again, this time using .NET Framework 4.0. Surprisingly, perfmon kept reporting high allocation rates in the Allocated Bytes/sec counter. So I opened up Reflector and took a look at DateTime.Now's new implementation. And indeed, the boxing issue was resolved, since the new implementation uses the TimeZoneInfo type instead of TimeZone. Unable to find the source of the allocations just by looking at the code, it was time to launch WinDbg.
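That benchmark isn't reproduced here; conceptually, it boils down to something like this sketch:

    // hammer DateTime.Now in a loop while watching the "Allocated Bytes/sec"
    // counter (under ".NET CLR Memory") in perfmon
    static void Main()
    {
        long ticks = 0;

        while (true)
        {
            // in v4.0, each call ends up allocating a DaylightTime
            // instance under the hood (as shown below)
            ticks += DateTime.Now.Ticks;
        }
    }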
After letting the application run for a while, I attached WinDbg to the process and executed the !DumpHeap -stat command a couple of times, so I could take a general look at the types and numbers of objects currently living on the managed heap. The output was as follows:

0:003> !dumpheap -stat
total 0 objects
Statistics:
      MT    Count    TotalSize Class Name
...
79ba1d88       13         1532 System.Char[]
79ba2f20        1         3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac      449        12220 System.String
001663d8       53        13212      Free
79b56c28       30        17996 System.Object[]
79b8e7b0     9119       291808 System.Globalization.DaylightTime
Total 9771 objects

0:003> !dumpheap -stat 
total 0 objects
Statistics:
      MT    Count    TotalSize Class Name
...
79ba1d88       13         1532 System.Char[]
79ba2f20        1         3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac      449        12220 System.String
001663d8       53        13212      Free
79b56c28       30        17996 System.Object[]
79b8e7b0    20654       660928 System.Globalization.DaylightTime
Total 21306 objects

This output reveals the identity of our memory-consuming bandit: DaylightTime.
Now, all that's left is to spot where this type is being instantiated and used. For this purpose, we could use the !BPMD -md command in order to place breakpoints on specific managed methods that DaylightTime exposes (you could dump the type's methods using the following command: !DumpMT -md <MethodTable address>).
After setting the breakpoint, the application continues its execution and immediately breaks. Looking at the managed call stack using !CLRStack reveals the source of the allocation: TimeZoneInfo.GetIsDaylightSavingsFromUtc. This method creates an instance of DaylightTime, and since DaylightTime is a class (hence, a reference type), a dynamic memory allocation is made.

   // a snippet from the implementation:
   TimeSpan span = utc;
   DaylightTime daylightTime = GetDaylightTime(Year, rule);
   DateTime startTime = daylightTime.Start - span;
   DateTime endTime = (daylightTime.End - span) - rule.DaylightDelta;

In conclusion, DateTime.Now's implementation was updated in BCL v4.0 and the boxing of Int32s was removed, since the implementation now uses the TimeZoneInfo type instead of TimeZone. However, using TimeZoneInfo introduces a new source of dynamic allocations; this time, instead of redundant boxing, the allocations are caused simply because a reference type is being used under the hood. And since each instance of DaylightTime weighs in at 32 bytes (including the object header), the update to the BCL could be considered a turn for the worse regarding memory usage, since DaylightTime instances consume more memory than boxed Int32 instances.