Showing posts with label Performance. Show all posts

Friday, August 12, 2011

Writing a Manual Memory Manager in C#

Garbage collection. Aye? or Nay?
As usual, it depends. That is, on which developer you ask. Some like to have as much control as possible over the way their code executes, while others simply love the fact that they don't have to deal with the "mundane" job of keeping track of their memory allocations.
Since there aren't really any "absolute truths" in anything related to programming, in reality you'd sometimes want complete control over your memory management, while at other times you wouldn't really care about it "as long as it gets done".
Since we're mostly discussing .Net here, we could say that we've got the "as long as it gets done" part covered quite well by the CLR's garbage collection mechanism. So it's time to see how we could approach implementing a manual memory manager in C#.

What we'd eventually like to have is an API that enables us to allocate and deallocate typed memory on demand. Of course, C# doesn't natively support the new and delete keywords we so kindly remember from C++, so we'll have to come up with our own utility functions to do the job.
Eventually, our code should look something like this:

static void Main()
{
    ITypeAllocator mgr = new ManalocManager();

    IFooData obj = mgr.New<IFooData>();

    obj.Bar = 1;
    obj.Car = 2;
    obj.Jar = 3;
    obj.Tar = 4;

    mgr.Delete(obj);
}
Disabling the garbage collector completely is an unreasonable thing to do on a platform such as .Net. Doing so would defeat the platform's purpose. Anyone who truly wants complete control over the execution of their program wouldn't bother using C# anyway (or any other managed language, for that matter).
However, even while using C#, there might be times when we'll want to manage our own memory instead of having the garbage collector do it for us. And even if not, it's still a subject interesting enough to explore and, mostly, play with.

In order to demonstrate how we could achieve manual memory management in C#, let's have a look at the following interface:

public interface IFooData
{
    int Bar { get; set; }
    long Jar { get; set; }
    double Car { get; set; }
    byte Tar { get; set; }
}

The classic way to implement this interface would be to create a class with four members matching the corresponding properties. However, doing so results in a 21-byte structure residing in the GC heap (not counting padding and the preceding object header).
Instead, we could allocate the required memory block on the native heap (using AllocHGlobal) and modify our properties to access the native memory at the required offsets (0 for Bar, 4 for Jar, 12 for Car and 20 for Tar). Using a Delete method, we could then free the native memory block on demand.

public class FooData : IFooData
{
    private unsafe byte* _native;

    public FooData()
    {
        unsafe
        {
            _native = (byte*)(Marshal.AllocHGlobal(21).ToPointer());
        }
    }

    public int Bar
    {
        get { unsafe { return *(int*)&_native[0]; } }
        set { unsafe { *(int*)&_native[0] = value; } }
    }

    public long Jar
    {
        get { unsafe { return *(long*)&_native[4]; } }
        set { unsafe { *(long*)&_native[4] = value; } }
    }

    public double Car
    {
        get { unsafe { return *(double*)&_native[12]; } }
        set { unsafe { *(double*)&_native[12] = value; } }
    }

    public byte Tar
    {
        get { unsafe { return *(byte*)&_native[20]; } }
        set { unsafe { *(byte*)&_native[20] = value; } }
    }

    public void Delete()
    {
        unsafe
        {
            Marshal.FreeHGlobal((IntPtr)_native);
            _native = null;
        }
    }
}
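As a side note, the hard-coded offsets in FooData fall straight out of the property types: each field starts where the previous one ended, with no alignment padding. This small standalone sketch (the type and names here are mine, purely illustrative) reproduces the computation with Marshal.SizeOf, just like the generator shown later does:

```csharp
using System;
using System.Runtime.InteropServices;

public static class OffsetDemo
{
    public static void Main()
    {
        // the property types of IFooData, in declaration order
        Type[] fields = { typeof(int), typeof(long), typeof(double), typeof(byte) };
        string[] names = { "Bar", "Jar", "Car", "Tar" };

        // packed layout: each field begins where the previous one ended
        int offset = 0;
        for (int i = 0; i < fields.Length; i++)
        {
            Console.WriteLine($"{names[i]}: offset {offset}");
            offset += Marshal.SizeOf(fields[i]);
        }
        Console.WriteLine($"Total: {offset} bytes");
    }
}
```

Running it prints offsets 0, 4, 12 and 20 and a total of 21 bytes, matching the constants in FooData.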
The problem with such an implementation is that it is very tedious to write. Even for simple structures like IFooData, the resulting implementation can be quite daunting.
Fortunately, we can automate the implementation process by adding a code generator that will implement our interfaces on the fly, at runtime.
The following interface should loosely describe the capabilities our manual memory manager should support:
 
public interface ITypeAllocator
{
    T New<T>();
    void Delete<T>(T instance);

    void PreGenerate(params Type[] types);
}

The generic parameter T accepts user-defined interfaces such as IFooData.
Once the New method is called, our manager should generate code, compile it and instantiate it at runtime. The resulting instance is then returned to the caller for it to be used. Once the caller finishes using it and wants to release its memory, it calls the Delete method.
The PreGenerate method's purpose is to optimize the code generation/compilation process. Once the user pre-generates a type, there's no waiting on the first call to the New method (much like the process of forcing the JIT compiler to precompile your assemblies).

When it comes to code generation, there are basically two approaches to choose from: CodeDOM and templates. Each has its pros and cons; personally, I tend to prefer the CodeDOM way of doing things. While using it can result in quite verbose code, I believe it's easier to maintain in larger projects than templates.
Unfortunately, .Net's CodeDOM model doesn't support unsafe code, so I had to resort to some workarounds to represent all of the unsafe code blocks.
This would be a good time to mention the Refly library, which wraps the .Net CodeDOM API, making it much simpler and more intuitive to use.
The demonstrated implementation is very naive and limited in the kinds of types it is able to generate, but it should illustrate the concept being discussed.

public class ManalocManager : ITypeAllocator
{
    // key: userType, value: generatedType
    private Dictionary<Type, Type> m_generatedTypesCache;

    public ManalocManager()
    {
        m_generatedTypesCache = new Dictionary<Type, Type>();
    }

    public void Delete<T>(T instance)
    {
        if (!(instance is IManalocGeneratedType))
            throw new ArgumentException("Attempted to delete an unexpected type");

        IManalocGeneratedType generatedType = (IManalocGeneratedType)instance;
        generatedType.Delete();
    }

    public void PreGenerate(params Type[] types)
    {
        foreach (Type curUserType in types)
            generateAndAddToCache(curUserType);
    }

    public T New<T>()
    {
        Type userType = typeof(T);

        Type generatedType;
        bool alreadyGenerated = m_generatedTypesCache.TryGetValue(userType, out generatedType);
        if (!alreadyGenerated)
            generatedType = generateAndAddToCache(userType);

        object result = Activator.CreateInstance(generatedType);
        return (T)result;
    }

    private Type generateAndAddToCache(Type userType)
    {
        Type generatedType = generateProxy(userType);
        m_generatedTypesCache.Add(userType, generatedType);

        return generatedType;
    }

    private Type generateProxy(Type userType)
    {
        NamespaceDeclaration ns;
        string typeName = createType(userType, out ns);

        string sourceFile = generateCode(ns);
        Assembly compiledAssembly = compile(userType, sourceFile);

        Type compiledType = compiledAssembly.GetType(typeName);

        return compiledType;
    }

    private string createType(Type userType, out NamespaceDeclaration namespaceDec)
    {
        PropertyInfo[] userProperties = userType.GetProperties();

        namespaceDec = new NamespaceDeclaration("Manaloc.AutoGenerated");
        namespaceDec.Imports.Add(userType.Namespace);
        ClassDeclaration classDec = namespaceDec.AddClass(userType.Name + "_Manaloc_AutoGenerated");
        classDec.Interfaces.Add(userType);
        classDec.Interfaces.Add(typeof(IManalocGeneratedType));

        FieldDeclaration nativeMember = classDec.AddField("unsafe byte*", "native");

        addConstructor(userProperties, classDec, nativeMember);
        addDeleteMethod(classDec, nativeMember);
        addProperties(userProperties, classDec, nativeMember);

        string typeName = namespaceDec.Name + "." + classDec.Name;
        return typeName;
    }

    private void addConstructor(PropertyInfo[] userProperties, ClassDeclaration classDec, FieldDeclaration nativeMember)
    {
        int totalSize = sumSize(userProperties);

        ConstructorDeclaration ctor = classDec.AddConstructor();
        ctor.Body.Add(Stm.Snippet("unsafe{"));

        ctor.Body.AddAssign(
            Expr.This.Field(nativeMember),
            Expr.Cast(typeof(byte*), Expr.Type(typeof(Marshal)).Method("AllocHGlobal").Invoke(Expr.Prim(totalSize)).
            Method("ToPointer").Invoke()));

        ctor.Body.Add(Stm.Snippet("}"));
    }

    private void addDeleteMethod(ClassDeclaration classDec, FieldDeclaration nativeMember)
    {
        MethodDeclaration disposeMethod = classDec.AddMethod("Delete");
        disposeMethod.Attributes = MemberAttributes.Final | MemberAttributes.Public;

        disposeMethod.Body.Add(Stm.Snippet("unsafe{"));
        disposeMethod.Body.Add(
            Expr.Type(typeof(Marshal)).Method("FreeHGlobal").Invoke(
            Expr.New(typeof(IntPtr), Expr.Cast("void*", Expr.This.Field(nativeMember)))));
        disposeMethod.Body.AddAssign(Expr.This.Field(nativeMember),
            Expr.Cast("byte*", Expr.Type(typeof(IntPtr)).Field("Zero")));
        disposeMethod.Body.Add(Stm.Snippet("}"));
    }

    private void addProperties(PropertyInfo[] userProperties, ClassDeclaration classDec, FieldDeclaration nativeMember)
    {
        int offset = 0;
        foreach (PropertyInfo curProperty in userProperties)
        {
            Type propType = curProperty.PropertyType;
            int propSize = Marshal.SizeOf(propType);

            PropertyDeclaration propDec = classDec.AddProperty(propType, curProperty.Name);
            propDec.Attributes = MemberAttributes.Final | MemberAttributes.Public;

            if (curProperty.CanRead)
                addGetter(nativeMember, offset, propType, propDec);

            if (curProperty.CanWrite)
                addSetter(nativeMember, offset, propType, propDec);

            offset += propSize;
        }
    }

    private void addSetter(FieldDeclaration nativeMember, int offset, Type propType, PropertyDeclaration propDec)
    {
        propDec.Set.Add(Stm.Snippet("unsafe{"));
        propDec.Set.Add(Stm.Snippet("*(" + propType.Name + "*)&"));
        propDec.Set.AddAssign(Expr.This.Field(nativeMember).Item(offset), Expr.Value);
        propDec.Set.Add(Stm.Snippet("}"));
    }

    private void addGetter(FieldDeclaration nativeMember, int offset, Type propType, PropertyDeclaration propDec)
    {
        propDec.Get.Add(Stm.Snippet("unsafe{"));
        propDec.Get.Add(Stm.Snippet("return *(" + propType.Name + "*)&"));
        propDec.Get.Add(Expr.This.Field(nativeMember).Item(offset));
        propDec.Get.Add(Stm.Snippet("}"));
    }

    private string generateCode(NamespaceDeclaration ns)
    {
        string sourceFile = null;

        const string outDir = "ManalocAutoGenerated";
        if (!Directory.Exists(outDir))
            Directory.CreateDirectory(outDir);

        Refly.CodeDom.CodeGenerator generator = new Refly.CodeDom.CodeGenerator();
        generator.CreateFolders = false;
        generator.FileCreated += (object sender, StringEventArgs args) => { sourceFile = args.Value; };

        generator.GenerateCode(outDir, ns);

        if (sourceFile == null)
            throw new Exception("Failed to generate source file");

        return sourceFile;
    }

    private Assembly compile(Type userType, string sourceFile)
    {
        CompilerParameters compilerParams = new CompilerParameters();
        compilerParams.CompilerOptions = "/unsafe /optimize";
        compilerParams.ReferencedAssemblies.Add(userType.Assembly.Location);
        CompilerResults result =
            Refly.CodeDom.CodeGenerator.CsProvider.CompileAssemblyFromFile(compilerParams, new string[] { sourceFile });

        Assembly compiledAssembly = result.CompiledAssembly;
        return compiledAssembly;
    }

    private int sumSize(PropertyInfo[] userProperties)
    {
        int size = 0;
        foreach (PropertyInfo curProperty in userProperties)
            size += Marshal.SizeOf(curProperty.PropertyType);

        return size;
    }
}

public interface IManalocGeneratedType
{
    void Delete();
}

Monday, October 11, 2010

Hot/Cold Data Separation Considered Harmful in Multithreaded Environments

When optimizing your code for performance, an important step along the way is to check that your application correctly utilizes its processor's cache. Of course, the magnitude of importance here depends solely on your own scenario, but in general, it's always a good idea to keep your cache usage patterns in mind when attempting to resolve a performance bottleneck, or when you're just looking for possible ways to enhance your application's performance.

Due to the relatively high cost of accessing main memory, modern processors make use of a data cache and an instruction cache in an attempt to lower the latency involved in accessing main memory.
A processor may choose to keep the values of frequently accessed memory addresses in its cache, or prefetch the values of adjacent memory addresses for possible future use.
When the processor attempts to access a memory address, it first checks whether it already exists in one of its caches (L1, L2, L3 etc.). If the address isn't found, we have a "cache miss" and the processor will have to perform a round-trip to main memory in order to obtain the required data, wasting valuable CPU cycles along the way.
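To get a feel for the cost, here's a small experiment of my own (a sketch, not from the original article): summing the same array sequentially versus with a cache-line-sized stride touches exactly the same data, but with very different cache-hit rates. Absolute timings vary from machine to machine.

```csharp
using System;
using System.Diagnostics;

public static class CacheDemo
{
    public static void Main()
    {
        const int size = 1 << 24; // 16M ints, ~64MB, far larger than any cache
        int[] data = new int[size];

        // sequential pass: the prefetcher and cache lines work in our favor
        long sum = 0;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < size; i++)
            sum += data[i];
        sw.Stop();
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

        // strided pass: same elements, but each touch lands on a new cache line
        const int stride = 16; // 16 ints = 64 bytes, one element per cache line
        sw.Restart();
        for (int offset = 0; offset < stride; offset++)
            for (int i = offset; i < size; i += stride)
                sum += data[i];
        sw.Stop();
        Console.WriteLine($"Strided:    {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(sum); // prints 0; also keeps the loops from being optimized away
    }
}
```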

Not too long ago, one of the most popular guidelines in designing cache-aware data structures was to split them into "hot" and "cold" parts. This kind of separation makes sense since it could lead to a more efficient utilization of the cache. Frequently used fields (the hot part) are grouped together in the same cache lines, instead of being mixed with infrequently used fields (the cold part). This way, the cache is able to contain more hot data, and cache lines aren't wasted to hold cold data.
For those of you who are interested, the subject is covered in greater detail in the article Splitting Data Objects to Increase Cache Utilization by Michael Franz and Thomas Kistler.
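As a rough illustration of the idea (the types here are hypothetical, not taken from the article), a hot/cold split keeps the frequently accessed fields inline and pushes the rarely touched ones behind a reference, so they stop occupying the hot cache lines:

```csharp
using System;

public class Particle
{
    // hot part: read and written every frame, packed together
    public float X, Y, VelX, VelY;

    // cold part: touched only on creation or serialization,
    // kept out of the hot cache lines behind a single reference
    public ParticleColdData Cold = new ParticleColdData();
}

public class ParticleColdData
{
    public string DebugName = "particle";
    public DateTime CreatedAt = DateTime.UtcNow;
}

public static class Program
{
    public static void Main()
    {
        Particle p = new Particle { X = 1f, VelX = 0.5f };
        p.X += p.VelX; // the hot loop only ever touches the inline fields
        Console.WriteLine(p.X == 1.5f);
    }
}
```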

 Red: Hot data, Green: Cold data

While following this guideline can lead to more efficient cache utilization, it won't necessarily lead to a performance improvement in a multithreaded environment. In fact, you're more likely to face a performance degradation than to gain any improvement at all.
The problem at hand is that highly utilized caches can easily send your valuable CPU cycles down the drain, due to the effects of false sharing. Once a single cache line holds several frequently written-to fields, the chance that it will get invalidated by another processor grows. Packing multiple frequently modified fields into a single cache line can easily have a negative effect on your cache's locality.
In multithreaded environments, one could benefit from sparsely allocating frequently used fields across cache lines. There's an obvious trade-off between cache utilization (memory usage) and locality (performance). While the cache will contain less hot data (which could result in more round-trips to main memory), it will benefit from better locality, hence better scalability across multiple processors that might attempt to modify the same object simultaneously.

When designing a cache-aware data structure, you don't necessarily have to order your fields so that a hot field is always followed by a cold field. Instead, "artificial" padding can be used to fill the excess space left in the cache line that holds the structure. In .Net, decorating the type with a StructLayout(LayoutKind.Explicit) attribute and assigning each field its appropriate offset will do the job.
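For example, a minimal sketch of such explicit padding (assuming 64-byte cache lines; the type and field names are illustrative): the two hot counters are forced onto separate cache lines, so a thread hammering one of them no longer invalidates the line holding the other.

```csharp
using System;
using System.Runtime.InteropServices;

// Size = 128 reserves two full 64-byte cache lines for the struct;
// the explicit offsets place each hot field at the start of its own line.
[StructLayout(LayoutKind.Explicit, Size = 128)]
public struct PaddedCounters
{
    [FieldOffset(0)]
    public long HotCounterA;  // first cache line

    [FieldOffset(64)]
    public long HotCounterB;  // second cache line
}

public static class Program
{
    public static void Main()
    {
        // the runtime honors the explicit size: 128 bytes, not 16
        Console.WriteLine(Marshal.SizeOf(typeof(PaddedCounters)));
    }
}
```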

Friday, September 24, 2010

Writing a Semi-Local Object Pool

Using object pooling in managed environments can usually benefit us in two ways:
  • Reducing the amount of time required to create "heavyweight" objects (whose construction might involve executing time-consuming tasks).
  • Reducing the amount and rate of dynamic memory allocations, thus reducing GC latency in future collections.
Nevertheless, it's important to remember that under certain scenarios, using object pools might actually have a negative impact on your application's performance. Since dynamic memory allocation in managed environments (e.g. the CLR, the JVM) is considerably fast, using object pools for "lightweight" objects could add unnecessary overhead to the object's allocation process. Brian Goetz summarized this issue:
Allocation in JVMs was not always so fast -- early JVMs indeed had poor allocation and garbage collection performance, which is almost certainly where this myth got started. In the very early days, we saw a lot of "allocation is slow" advice -- because it was, along with everything else in early JVMs -- and performance gurus advocated various tricks to avoid allocation, such as object pooling. (Public service announcement: Object pooling is now a serious performance loss for all but the most heavyweight of objects, and even then it is tricky to get right without introducing concurrency bottlenecks.)
A common, simple pattern for implementing an object pool is to create a single pool instance that is shared across the whole application. To achieve thread safety, you would usually find a single, global lock around the Allocate and Free methods.
It's obvious that this type of design could introduce major concurrency bottlenecks. The more objects we attempt to pool, the greater the chance that multiple threads will attempt to acquire the pool's lock at once. And since we only maintain a single, global pool, contention around that lock is bound to appear, effectively ruining our application's scalability.
To demonstrate the issue, I've written a small benchmark that uses all of the available processors to allocate and free a constant number of objects (each thread gets an equal share of objects to pool). Logically speaking, the more processors we use, the faster we should be able to finish allocating and freeing the constant number of objects. However, as the results show, we actually experience a slowdown that gets worse as we add more and more processors.
The results aren't surprising, since they're easily explained by the massive amount of contention we're experiencing around our single lock. (The time axis in the chart is expressed in milliseconds.)
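For reference, here is a self-contained sketch of that kind of benchmark. This is my own simplified reconstruction, not the post's actual benchmark code: every worker funnels its allocations through one globally locked pool, so adding workers mostly adds contention rather than throughput.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

public static class PoolBenchmark
{
    // one global pool, one global lock: the design under test
    private static readonly Stack<object> s_pool = new Stack<object>();

    private static object Allocate()
    {
        lock (s_pool)
            return s_pool.Count > 0 ? s_pool.Pop() : new object();
    }

    private static void Free(object obj)
    {
        lock (s_pool)
            s_pool.Push(obj);
    }

    public static void Main()
    {
        const int totalObjects = 1000000;
        int workers = Environment.ProcessorCount;
        int perWorker = totalObjects / workers;

        // each worker allocates and frees its equal share of the constant total
        Stopwatch sw = Stopwatch.StartNew();
        Parallel.For(0, workers, _ =>
        {
            for (int i = 0; i < perWorker; i++)
                Free(Allocate());
        });
        sw.Stop();

        Console.WriteLine($"{workers} workers: {sw.ElapsedMilliseconds} ms");
    }
}
```

On a multi-core machine, comparing runs with different degrees of parallelism (e.g. by capping ParallelOptions.MaxDegreeOfParallelism) shows the slowdown described above.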

The first implementation of the pool that was being used in the test:
(Mind you that the code samples in this post are purely meant to demonstrate the conceptual differences between the pools).
    // holds a dictionary that makes a pool-per-type correlation
    public class SimpleMainPool
    {
        private Dictionary<Type, ISubPool> m_main;

        // to make things simpler, the dictionary isn't modified
        // after the first initialization
        public SimpleMainPool(Type[] pooledTypes)
        {
            m_main = new Dictionary<Type, ISubPool>();

            foreach (Type curType in pooledTypes)
                m_main.Add(curType, new SimplePool(curType));
        }

        public object Allocate(Type type)
        {
            ISubPool sub = m_main[type];

            object pooledObj = sub.Allocate();
            return pooledObj;
        }

        public void Free(object obj)
        {
            ISubPool sub = m_main[obj.GetType()];
            sub.Free(obj);
        }
    }

    // our simple thread-safe pool
    class SimplePool : ISubPool
    {
        private const int PRIME = 50;

        private Type m_type;

        private Stack<object> m_sharedPool;

        public SimplePool(Type type)
        {
            m_sharedPool = new Stack<object>(PRIME);
            m_type = type;

            for (int i = 0; i < PRIME; i++)
            {
                object sharedObj = Activator.CreateInstance(m_type);
                m_sharedPool.Push(sharedObj);
            }
        }

        public object Allocate()
        {
            lock (m_sharedPool)
            {
                if (m_sharedPool.Count == 0)
                {
                    for (int i = 0; i < PRIME; i++)
                    {
                        object newAlloc = Activator.CreateInstance(m_type);
                        m_sharedPool.Push(newAlloc);
                    }
                }

                object fromLocal = m_sharedPool.Pop();
                return fromLocal;
            }
        }

        public void Free(object obj)
        {
            lock (m_sharedPool)
            {
                m_sharedPool.Push(obj);
            }
        }
    }

    interface ISubPool
    {
        object Allocate();
        void Free(object obj);
    }  
As in all things related to concurrency, if you don't have locality, you've got sharing, and once you have sharing, you'll probably end up with contention that is bound to harm your application's performance, wasting valuable CPU cycles.
So if we'd like to improve our scalability, our goal is clear: reducing the amount of shared data. For example, if pools weren't shared across different threads, we wouldn't have to worry about synchronizing them and could avoid the involved contention altogether. A simple way to achieve this is to use the TLS to allocate an independent pool for every thread. On the one hand, we'd achieve perfect scalability by avoiding shared state; on the other hand, this kind of implementation could lead to excessive memory usage. For instance, if a single instance of our pool (including all of its pre-allocated objects) weighs about 10MB, then on a machine with 16 processors we could find ourselves dedicating no less than 160MB to our thread-local pools, even though it's not likely that every single thread needs all the types of objects allocated in its local pool.
For example, if we're parallelizing some algorithm using 3 threads, where thread 1 needs objects of type A, thread 2 needs objects of type B and thread 3 needs objects of type C, then it makes no sense for each of those threads to hold a pool containing objects of all three types.

A possible solution to this problem is to use a pool hierarchy, where every time a thread attempts to create an object, it directs itself to its "closest" pool instance. If that pool doesn't contain available instances of the requested object, the thread continues to navigate up the hierarchy until it reaches a pool that does. Once the thread finishes using the object, it returns it to a pool located "closer" to that thread. This way we're able to maintain a level of locality between a thread and the objects it uses.
Instead of getting lost in unclear, overly complex hierarchies, I'll demonstrate the concept using a flat hierarchy that offers a single "global" pool shared across all threads, plus a local pool for every thread.
Basically, the idea is that the only place synchronization is involved is the shared pool. So in the optimal scenario, each local pool eventually holds just enough objects to keep its thread from ever accessing the shared pool.
Every time a thread needs to create an object, it first checks its local pool. Since this pool only serves the requesting thread, we don't have to deal with synchronization here. Only in case we've run out of objects do we move on to the shared pool and transfer N more instances of the requested object to the local pool. It could be wise to transfer more objects than the thread initially asked for, in order to avoid future accesses to the shared pool. Also, in order to cap the amount of memory we'd like to dedicate to each thread, we could decide that each local pool can hold a maximum of X objects. Once we've exceeded that number, every time a thread wants to free an object, it returns it to the shared pool instead of its local pool (of course, this could cause some contention, depending on the implementation details [e.g. the pool may buffer object returns], but it's entirely up to the developer to perform this kind of fine-tuning [memory usage vs. scalability]).

To demonstrate the concept, I've come up with this simplistic pool implementation:
    class SemiLocalPool : ISubPool
    {
        private const int SHARED_PRIME = 50;

        private const int LOCAL_PRIME = 20;
        private const int LOCAL_MAX = 1000;

        [ThreadStatic]
        private static Stack<object> t_localPool;

        private Type m_type;
        private Stack<object> m_sharedPool;

        public SemiLocalPool(Type type)
        {
            m_sharedPool = new Stack<object>(SHARED_PRIME);

            m_type = type;

            for (int i = 0; i < SHARED_PRIME; i++)
            {
                object sharedObj = Activator.CreateInstance(m_type);
                m_sharedPool.Push(sharedObj);
            }
        }

        public static void Init()
        {
            t_localPool = new Stack<object>(LOCAL_PRIME);
        }

        public object Allocate()
        {
            // first, try to allocate from the local pool
            if (t_localPool.Count > 0)
            {
                object localObj = t_localPool.Pop();
                return localObj;
            }

            int allocated = 0;

            lock (m_sharedPool)
            {
                // pass objects from shared to local pool
                for (; m_sharedPool.Count > 0 && allocated < LOCAL_PRIME - 1; allocated++)
                {
                    object sharedObj = m_sharedPool.Pop();
                    t_localPool.Push(sharedObj);
                }

                // prime share pool
                if (m_sharedPool.Count == 0)
                {
                    for (int i = 0; i < SHARED_PRIME; i++)
                    {
                        // bad practice: holding the lock while executing external code
                        object sharedObj = Activator.CreateInstance(m_type);

                        m_sharedPool.Push(sharedObj);
                    }
                }
            }

            // if the shared pool didn't contain enough elements, prime the remaining items
            for (; allocated < LOCAL_PRIME - 1; allocated++)
            {
                object newAlloc = Activator.CreateInstance(m_type);
                t_localPool.Push(newAlloc);
            }

            object fromLocal = Activator.CreateInstance(m_type);
            return fromLocal;
        }

        public void Free(object obj)
        {
            // first return to local pool
            if (t_localPool.Count < LOCAL_MAX)
            {
                t_localPool.Push(obj);
                return;
            }

            // only after reaching LOCAL_MAX push back to the shared pool
            lock (m_sharedPool)
            {
                m_sharedPool.Push(obj);
            }
        }
    }
The scalability difference between the two implementations is closely related to each thread's pool usage pattern and to the values given to LOCAL_MAX, LOCAL_PRIME etc. If we reach a situation where there are always enough objects in the local pool, we should enjoy perfect scalability.
For the purpose of the demonstration, here are the results of the previous benchmark, now using the new pool implementation (besides exceeding the predefined values at the beginning of the run, the benchmark exhibits an optimal usage pattern [accessing only the local pool after a while]).

One problematic characteristic of this type of design is its reliance on thread affinity. While in some scenarios that reliance could actually benefit us, in others it could make the semi-local pool irrelevant.
If every thread in our application is affinitized to a certain section of the code (one that allocates a constant set of objects), then this design could be optimal, since we dedicate each local pool to a managed thread. We effectively assume that the thread will always attempt to allocate objects from a specific, constant set.
However, if the threads don't comply with this assumption, it's only a matter of time until each local pool holds the entire set of pooled objects in the application (which will, of course, lead to high memory usage).
In order to better handle such scenarios, we could add an additional hierarchy level that separates the shared pools according to different sections of the code. Meaning, threads currently executing code from a network module, for example, will access pool X, while threads currently executing some algorithm will access pool Y. This way we could achieve object locality by relying not on thread affinity, but on "category affinity" (each section of the code uses a certain set of objects relevant to it). When a thread wants to allocate an object, it tells the pool which area of the code it's currently executing, so it receives the appropriate "category pool". It's likely that this pool already contains the types of objects the current thread will request, since they were already allocated by other threads that previously executed the same code section.
And some code to illustrate the concept:
    public class CategorizedMainPool
    {
        private Dictionary<string, SimpleMainPool> m_main;

        public CategorizedMainPool(Tuple<string, Type[]>[] pooledCategories)
        {
            m_main = new Dictionary<string, SimpleMainPool>();

            foreach (Tuple<string, Type[]> curCategory in pooledCategories)
            {
                SimpleMainPool curSub = new SimpleMainPool(curCategory.Item2);
                m_main.Add(curCategory.Item1, curSub);
            }
        }

        public object Allocate(string category, Type type)
        {
            SimpleMainPool sub = m_main[category];

            object pooledObj = sub.Allocate(type);
            return pooledObj;
        }

        public void Free(string category, object obj)
        {
            SimpleMainPool sub = m_main[category];
            sub.Free(obj);
        }
    }

Thursday, September 2, 2010

DateTime.Now in v4.0 Causes Dynamic Memory Allocations

A while back I mentioned in a post that calling DateTime.Now causes boxing. Following that post, a feedback item was posted on Microsoft Connect, reporting the issue.
Yesterday, the feedback's status turned to Resolved after Greg cleverly remarked that the issue was fixed in v4.0 (the boxing occurred up until v3.5).

However, after reading the response, I took the same benchmark code from the previous post and ran it again, this time using .Net Framework 4.0. Surprisingly, perfmon kept reporting high allocation rates in the Allocated Bytes/sec counter. So I opened up Reflector and took a look at DateTime.Now's new implementation. And indeed, the boxing issue was resolved, since the new implementation uses the TimeZoneInfo type instead of TimeZone. Unable to find the source of the allocations just by looking at the code, it was time to launch WinDbg.
After letting the application run for a while, I attached WinDbg to the process and executed the !DumpHeap -stat command a couple of times, so I could take a general look at the kinds and amounts of objects currently living on the managed heap. The output was as follows:

0:003> !dumpheap -stat
total 0 objects
Statistics:
      MT    Count    TotalSize Class Name
...
79ba1d88       13         1532 System.Char[]
79ba2f20        1         3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac      449        12220 System.String
001663d8       53        13212      Free
79b56c28       30        17996 System.Object[]
79b8e7b0     9119       291808 System.Globalization.DaylightTime
Total 9771 objects

0:003> !dumpheap -stat 
total 0 objects
Statistics:
      MT    Count    TotalSize Class Name
...
79ba1d88       13         1532 System.Char[]
79ba2f20        1         3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac      449        12220 System.String
001663d8       53        13212      Free
79b56c28       30        17996 System.Object[]
79b8e7b0    20654       660928 System.Globalization.DaylightTime
Total 21306 objects

This output reveals the identity of our memory-consuming bandit: DaylightTime.
Now, all that's left is to spot where this type is being instantiated and used. For this purpose, we can use the !BPMD -md command to place breakpoints on specific managed methods that DaylightTime exposes (you can dump the type's methods using the !DumpMT -md command).
After setting the breakpoint, the application continues its execution and immediately breaks. Looking at the managed call stack using !CLRStack reveals the source of the allocation: TimeZoneInfo.GetIsDaylightSavingsFromUtc. This method creates an instance of DaylightTime, and since DaylightTime is a class (hence, a reference type), a dynamic memory allocation is made.

   // a snippet from the implementation:
   TimeSpan span = utc;
   DaylightTime daylightTime = GetDaylightTime(Year, rule);
   DateTime startTime = daylightTime.Start - span;
   DateTime endTime = (daylightTime.End - span) - rule.DaylightDelta;

In conclusion, DateTime.Now's implementation was updated in BCL v4.0 and the boxing of Int32s was removed, since the implementation now uses the TimeZoneInfo type instead of TimeZone. However, using TimeZoneInfo introduces a new source of dynamic allocations; this time, instead of redundant boxings, the allocations occur simply because a reference type is being used under the hood. And since each instance of DaylightTime weighs in at 32 bytes (including the object header), the update in the BCL could be considered a turn for the worse regarding memory usage, since DaylightTime instances consume more memory than boxed instances of Int32.
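A common mitigation worth benchmarking yourself (an assumption on my part, not something the Connect feedback suggests): when a UTC timestamp is good enough, DateTime.UtcNow skips the local-time conversion that drags TimeZoneInfo into the picture in the first place.

```csharp
using System;

// DateTime.UtcNow reads the system clock without any time-zone math,
// so the TimeZoneInfo/DaylightTime code path is never exercised.
DateTime utcStamp = DateTime.UtcNow;           // no time-zone conversion
DateTime localStamp = utcStamp.ToLocalTime();  // convert once, only for display

Console.WriteLine(utcStamp.Kind);    // Utc
Console.WriteLine(localStamp.Kind);  // Local
```

If you're stamping thousands of events per second, converting to local time only at the display boundary keeps the hot path allocation-free.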

Tuesday, August 24, 2010

Reducing AutoResetEvent's Synchronization Overhead

One of the most common design patterns in multithreaded programming is the producer-consumer pattern. In the usual scenario, a consumer thread is spawned into an infinite loop, where it waits on some handle until new data arrives from the producer threads. It then wakes up and processes the new data.
The overall idea can be expressed via this code:

AutoResetEvent m_event = new AutoResetEvent(false);

private void WorkCycles()
{
    while(true)
    {
        // wait for a signal
        m_event.WaitOne();

        // do work..
    }
}

Using the AutoResetEvent class seems very suitable for this kind of scenario, since it supplies exactly the behavior we're looking for: causing a thread to wait on a handle until a signal arrives. However, calls to WaitOne/Set come with a price, and it ain't cheap. The AutoResetEvent class is simply a managed-code wrapper around Win32's WaitForSingleObject, and it doesn't keep track of the internal event's state (whether it's currently signaled or not). Thus, every call to AutoResetEvent results in diving all the way into kernel mode, even though "we don't have to" (if we had tracked the event's internal state).
In some scenarios, especially those where producer threads pass data to the consumer at high frequencies, most of the calls to Set() are redundant. If, for example, a producer passes new data every 100ms (which results in a call to Set), but it takes the consumer an entire second to process that data, then we've spent time calling Set 9 times more than we had to. Additionally, after the consumer finishes its processing, it has to call WaitOne again, only to realize the event was already set by a producer.
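The arithmetic behind that "9 times more than we had to" claim can be sketched in a few lines:

```csharp
using System;

int producerPeriodMs = 100;   // a producer signals every 100ms
int consumerWorkMs = 1000;    // the consumer needs a full second per batch

// While the consumer is busy, this many Set() calls pile up:
int setsPerCycle = consumerWorkMs / producerPeriodMs;   // 10
int redundantSets = setsPerCycle - 1;                   // only the first one mattered

Console.WriteLine(redundantSets); // 9
```

Every one of those 9 calls is a round-trip into the kernel that accomplishes nothing, since the event is already signaled.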

So, how expensive are those redundant calls to AutoResetEvent? The following benchmark should demonstrate:

while (true)
{
    AutoResetEvent eventObj = new AutoResetEvent(false);

    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < 1000000; i++)
    {
        eventObj.Set();

        eventObj.WaitOne();
    }

    Console.WriteLine(sw.ElapsedMilliseconds);
}

On my machine, the average result was 1035ms.
A whole second. That is the ridiculously large amount of time it took us to complete those few lines of code. We are wasting an entire second just by using a synchronization primitive that, in our real application, could be considered quite insignificant at first glance. Seemingly, we shouldn't waste more than a few milliseconds running that code, but as you can see, things are a little different in practice.

EconomicAutoResetEvent
The solution to this problem is quite straightforward, since our goal is to reduce the number of redundant calls to Set/WaitOne to a minimum. How can we achieve that? One option is to use an alternative synchronization primitive, which for the sake of this post will be named EconomicAutoResetEvent. The purpose of this new primitive is to improve AutoResetEvent's performance by keeping track of the event's state, thus avoiding transitions into the kernel.
I'll demonstrate a possible implementation of such a primitive, and explain its behavior in more detail afterwards:

public class EconomicResetEvent
{
    private volatile int m_eventState;
    private AutoResetEvent m_event;

    private Thread m_worker;

    private const int EVENT_SET     = 1;
    private const int EVENT_NOT_SET = 2;
    private const int EVENT_ON_WAIT = 3;

    public EconomicResetEvent(bool initialState, Thread workerThread)
    {
        m_event = new AutoResetEvent(initialState);
        m_eventState = initialState ? EVENT_SET : EVENT_NOT_SET;

        m_worker = workerThread;
    }

    public void WaitForWork()
    {
        verifyCaller();

        if (m_eventState == EVENT_SET && Interlocked.CompareExchange(
            ref m_eventState, EVENT_NOT_SET, EVENT_SET) == EVENT_SET)
        {
            return;
        }

        if (m_eventState == EVENT_NOT_SET && Interlocked.CompareExchange(
            ref m_eventState, EVENT_ON_WAIT, EVENT_NOT_SET) == EVENT_NOT_SET)
        {
            m_event.WaitOne();
        }
    }

    public void SignalWork()
    {
        if (m_eventState == EVENT_NOT_SET && Interlocked.CompareExchange(
            ref m_eventState, EVENT_SET, EVENT_NOT_SET) == EVENT_NOT_SET)
        {
            return;
        }

        if (m_eventState == EVENT_ON_WAIT && Interlocked.CompareExchange(
            ref m_eventState, EVENT_NOT_SET, EVENT_ON_WAIT) == EVENT_ON_WAIT)
        {
            m_event.Set();
        }
    }

    // [Conditional("DEBUG")]
    private void verifyCaller()
    {
        if (m_worker != Thread.CurrentThread)
        {
            string errMsg = string.Format(
                "Only the pre-defined Worker thread may call WaitOne " +
                "(Current: {0}, Worker: {1})", Thread.CurrentThread, m_worker);

            throw new SynchronizationLockException(errMsg);
        }
    }
}
 
I assume the question most of you are probably asking right now is "How fast is it?". Well, after running the same benchmark as before, the average result I get on my machine is only 9ms. An outstanding improvement of about 11,500% over using AutoResetEvent directly. And to think that all we did is avoid some calls to the internal kernel object.
The most important part of the implementation is the tracking of the internal kernel object's state (signaled/not-signaled). The state itself is kept in m_eventState; this is how we know which calls to WaitForWork/SignalWork can avoid accessing m_event (the AutoResetEvent).
Additionally, in order to make the store/load operations on m_eventState thread-safe, I've used a couple of CAS operations. Even though a CAS can be cheaper than a "full blown lock", it is still quite an expensive instruction, one that is usually best avoided when it isn't necessary. This is exactly why the double tests in the if statements are performed. This technique is usually referred to as test-and-test-and-set (TTAS).
As the comment at the class's declaration mentions, some mild race conditions could occur when the consumer thread is entering/leaving WaitForWork while a producer calls SignalWork at a very specific moment. This race condition won't cause any functional harm; in the worst case, it would cause us to perform a redundant call to Set/WaitOne. Resolving this "issue" would mean enforcing fairness, and such enforcement could have negative effects on performance.

Since we're already discussing the subject, it would be appropriate to mention that the Parallel Extensions library introduced a new synchronization primitive named ManualResetEventSlim, which supplies extended functionality and better performance than the standard event objects. Beyond keeping track of the state of the internal kernel event, it also spins around the flag for a while before actually blocking the thread, and it uses lazy initialization to create the kernel object (it's created only when it's needed). The good guys at Microsoft were kind enough to publish a short but informative document that demonstrates the performance differences between the new and old synchronization primitives under a selected group of usage scenarios.
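A quick sketch of ManualResetEventSlim in action (note that, unlike AutoResetEvent, it has manual-reset semantics: once set, it stays set until Reset() is called). The spinCount constructor argument controls how long Wait() spins on the flag before falling back to a kernel wait:

```csharp
using System;
using System.Threading;

// spinCount: how many spin iterations Wait() performs before blocking.
var slimEvent = new ManualResetEventSlim(false, spinCount: 100);

var worker = new Thread(() =>
{
    slimEvent.Wait();   // spins briefly, then blocks on the (lazily created) kernel event
    Console.WriteLine("worker signaled");
});
worker.Start();

Thread.Sleep(50);       // give the worker time to reach Wait()
slimEvent.Set();        // release it; the event remains set until Reset()
worker.Join();

Console.WriteLine(slimEvent.IsSet); // True
```

Because the fast path is a volatile flag check, a Wait() on an already-set event never leaves user mode, which is the same trick EconomicResetEvent pulls above.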

Friday, August 20, 2010

The Case of Delayed ACKs and Nagle's Algorithm

Even though the negative consequences of combining Nagle's Algorithm with Delayed ACKs over TCP are well documented in the literature, the combination is still a common pitfall that many people tend to forget about. Since its negative effects appear only under certain pre-conditions, it could be a while until one correlates the application's "weird" behavior with the usage of those two algorithms. Only as the performance penalty grows more distinct does it become easier to identify its source.

Delayed ACKs
Since TCP guarantees that every packet sent over the network will arrive at its designated remote host, it needs some kind of acknowledgment from that remote host that the packet managed to arrive. So, every time a host receives a packet, it needs to send back an ACK message as an acknowledgment.
The problem with using dedicated ACK messages is that we could find ourselves flooding the network with messages that have nothing to do with our application (and contain almost no data at all), all just to say "Yes, I've received your packet" (if you combine the minimal sizes of the TCP and IP headers [even without the Ethernet header], you already reach 40 bytes [and for IPv6, that number increases to 60 bytes]).
So, in order to reduce that overhead, "Delayed ACKs" were defined. The idea is that instead of sending a dedicated ACK message for each received packet, we assume that our application (the receiver) is probably about to send "some" message (not necessarily a response message) back to the remote host that sent us the packet in the first place. This way, we can "piggyback" on the application's message and add the ACK for the previously received packet to its header. Hence, we can substantially reduce the amount of redundant data circulating in our network.
It's customary to use a 200ms delay for sending ACKs (the exact behavior of the timeout depends on the protocol's implementation details: is a 200ms timer created when the socket opens? Or perhaps only when the application needs to send ACKs?).
In Windows, the default timeout is 200ms as well. But if you want to, you can change it by modifying the TcpDelAckTicks value in the registry, setting it somewhere between 0ms and 600ms (as an anecdote, the Host Requirements RFC forbids ACK delays greater than 500ms).
It's worth noting that according to the RFC, there's no need to send back an ACK for every packet we receive, since a single ACK message can acknowledge multiple received packets. If, for example, machine A sends 5 different messages at 10ms intervals to machine B, the minute the first message arrives at machine B, it starts a timer with a 200ms timeout so it will be alerted to send back an ACK for that first message (assuming this is a one-way interface, so machine B won't send back any messages). Before that timer elapses, the machine will receive the other 4 messages. So if we're already going to send an ACK for the first message, we might as well modify it so it acknowledges the entire set of 5 messages.
Additionally, receiving packets isn't the only trigger for sending an ACK message. For example, if our receive window (the maximum number of bytes we can receive without sending back an ACK message) fills up, we will have to immediately send back an ACK message. Another trigger is the "Send ACK for Every Second Packet" rule, which does exactly what its name implies. In Windows, the default value (2) can be changed by modifying the TcpAckFrequency registry value.

Nagle's Algorithm
Even though it isn't directly related to the usage of Delayed ACKs, this algorithm attempts to resolve a similar problem that occurs on the sender's side. For the same reason we shouldn't flood the network with small ACK messages, we shouldn't send small application messages ourselves, due to the overhead involved in sending an entire header while the message's body is very small.
According to the algorithm, the protocol may delay small "send" operations so it can buffer them together (and thus, a single packet can carry more than a couple of small applicative messages).
Deciding when to stop buffering the data isn't arbitrary, as it depends on the rate of ACKs received from the remote host. The idea is that as long as we haven't received an ACK for our previously sent packet, there's no point in sending an additional packet. So, while we are waiting to receive an ACK from the remote host, we buffer all the data we are about to send (under the limits of the MSS, of course). The minute we receive the ACK for our previous packet, all of the buffered data is combined and sent as a single packet.

Seemingly, both of the mentioned algorithms justify their existence, since they attempt to resolve real problems. But what happens if we combine them? On one hand, each of them tries to reduce the amount of tinygrams sent on its side (either ACK messages or applicative messages). On the other hand, under certain conditions, their combination may cause a significant delay when attempting to send messages over the network. The most obvious example is a one-way interface in which one side only sends messages and the other only receives them (without "answering"). In this case, even if the application continuously sends messages at a high rate, we can expect latencies of up to 200ms between the time a message was "sent" by the application and the time it was received by the recipient (since it was delayed by Nagle's algorithm). In other cases, including two-way interfaces, there might be occasions in which the client/server stops sending messages for a few moments. This too can cause latencies of up to 200ms in the receive rate. In these kinds of cases, identifying the source of the latencies can be harder, since they tend to appear randomly and with latencies that aren't always constant (less than 200ms). The exact behavior is determined by the characteristics of the application.


In order to illustrate this behavior in code, I've written the following program, which measures the time it takes to receive two messages that seemingly should have been sent instantly, one right after the other.

void Server()
{
    Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    server.Bind(ServerEndPoint);
    server.Listen(1);
    Socket s = server.Accept();

    while (true)
    {
        // measure how long it takes to receive both messages
        Stopwatch stopwatch = Stopwatch.StartNew();

        s.Receive(new byte[8]);
        s.Receive(new byte[8]);

        // Output: around 200ms
        Console.WriteLine(stopwatch.ElapsedMilliseconds);
    }
}

void Client()
{
    Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    client.Bind(ClientEndPoint);
    client.Connect(ServerEndPoint);

    while (true)
    {
        client.Send(new byte[8]); // will be sent immediately
        client.Send(new byte[8]); // delayed for 200ms

        // wait for an imaginary response
        client.Receive(new byte[0]);
    }
}

Due to the unpleasant effects that might be caused by the combination of these algorithms, the RFC states that implementations of TCP that use Nagle's algorithm must also support a way to disable it, so applicative messages won't be delayed by the protocol when they are sent. This ability is exposed through the TCP_NODELAY socket option, and in .Net, its usage is wrapped by the Socket.NoDelay property.
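A minimal sketch of disabling Nagle's algorithm from .Net; applied to the client socket from the example above, this should remove the ~200ms delay on the second Send:

```csharp
using System;
using System.Net.Sockets;

// Socket.NoDelay = true applies TCP_NODELAY, so small sends are no longer
// buffered while waiting for the previous packet's ACK.
Socket client = new Socket(AddressFamily.InterNetwork,
                           SocketType.Stream, ProtocolType.Tcp);
client.NoDelay = true;

bool noDelay = client.NoDelay;
Console.WriteLine(noDelay); // True
client.Close();
```

The option can be set before the socket connects, so it's typically flipped right after construction, once, rather than per-send.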

Saturday, August 14, 2010

Don't Rely on Environment.ProcessorCount

One of the best-hidden pieces of knowledge in multithreaded programming is the answer to the question "How many threads should I use in my application to achieve the best performance from my available hardware?". The answer may vary, since it depends on the characteristics of the application. Is it CPU bound or IO bound? How much of the work can be parallelized? And so on... But at the bottom line, all of these "formulas" are based on the number of available processors.

Usually, when an application runs its initialization routine and gets ready to create a collection of worker threads, it checks how many processors are installed on the machine, so it can decide exactly how many threads it should create. To get the number of installed processors, you would usually call Environment.ProcessorCount. This property simply calls the Win32 GetSystemInfo function and returns the dwNumberOfProcessors field to the caller. The problem with this value is that it doesn't necessarily represent the number of processors available to our process. The user who launched our application might have given it a certain processor affinity, which causes our application's threads to execute only on the processors allowed by that affinity, instead of on all the installed processors. In that case, the application will completely ignore the processor affinity and create a larger number of threads than it should (in an even worse case, it could assign the threads affinities set to processors that aren't even available to the process).
What will eventually happen is that we'll have too many threads running on our subset of available processors, causing us to suffer a performance penalty due to excessive context switching.
The scenario in which we set our process to run on only a subset of processors is far from far-fetched. In situations where a couple of applications are each designed to take advantage of all available processors, and are meant to run in parallel on the same machine (without using some kind of VM), we would probably like to split the installed processors between them, giving each process only a subset of processors to use. If the applications aren't aware of this subset, we'll find ourselves wasting valuable CPU cycles for no good reason.

In .Net, you can get the processor affinity via the Process.ProcessorAffinity property. It returns a bitmask in which each lit bit represents a logical CPU on which our process may execute (in case the mask is set to 0, the scheduler decides which processors our application will use, so basically all of the processors are available).
Current versions of Windows support more than 64 logical CPUs. In order to address all of those CPUs, they are divided into groups, where each group can address up to 64 processors. So when looking at the processor affinity bitmask, you should be aware that it belongs only to a specific group (by default, only a single group is used).
So the next time you're checking how many processors are available to your process, remember to count the number of lit bits in your processor affinity bitmask.

 static void PrintAffinitizedProcessors()
 {
     // counts the affinitized processors in the
     // current processor group (up to 64 logical processors)
     Process currentProcess = Process.GetCurrentProcess();
     long affinityMask = (long)currentProcess.ProcessorAffinity;
 
     if (affinityMask == 0)
         affinityMask = (1L << Environment.ProcessorCount) - 1; // all bits lit
 
     const int BITS_IN_BYTE = 8;
     int numberOfBits = IntPtr.Size * BITS_IN_BYTE;
 
     int counter = 0;
 
     for (int i = 0; i < numberOfBits; i++)
     {
         if ((affinityMask >> i & 1) == 1)
         {
             Console.WriteLine(i);
             counter++;
         }
     }
 
     Console.WriteLine("Total: " + counter);
 }