Debugging unhandled exceptions in your code might not be how you planned to spend your afternoon. On the other hand, at least it's your code you're debugging and not, let's say... the Framework's. In this post, I'll talk about the latter, and more specifically, a case that popped up in the PerformIOCompletionCallback function.
The main problem with exceptions thrown before the CLR has a chance to execute your own code is that the call stack is almost useless. A NullReferenceException thrown while handling an IO completion might consist of nothing more than the following line:
System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)+0x54
Not very helpful. Since none of our code was executed, we can't simply see which variable was null, or even tell which module in our application caused the exception. In order to proceed with the analysis, it's time to pull out the big guns and load a full memory dump of the process into WinDbg.
Once the dump, SOS and the appropriate symbols are loaded, we should be able to find the problematic thread using the ~*e!clrstack command. Then, using the faulting instruction pointer (IP), we can issue the !u command to dump the assembly of the failing function. The output should be similar to the following (truncated for brevity):
0:073> !u 000007FEF33888A4
preJIT generated code
System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)
Begin 000007fef3388850, size 11e. Cold region begin 000007fef39340f0, size 3b
Hot region:
000007fe`f3388850 53 push rbx
000007fe`f3388851 55 push rbp
000007fe`f3388852 57 push rdi
000007fe`f3388853 4883ec40 sub rsp,40h
000007fe`f3388857 488d6c2420 lea rbp,[rsp+20h]
000007fe`f338885c 4c894550 mov qword ptr [rbp+50h],r8
000007fe`f3388860 895548 mov dword ptr [rbp+48h],edx
000007fe`f3388863 894d40 mov dword ptr [rbp+40h],ecx
000007fe`f3388866 48896500 mov qword ptr [rbp],rsp
000007fe`f338886a 33c0 xor eax,eax
000007fe`f338886c 48894510 mov qword ptr [rbp+10h],rax
000007fe`f3388870 48c7450800000000 mov qword ptr [rbp+8],0
000007fe`f3388878 488d5508 lea rdx,[rbp+8]
000007fe`f338887c 488d0d6d7e71ff lea rcx,[mscorlib_ni+0x1b06f0]
000007fe`f3388883 e820a886ff call mscorlib_ni+0x3030a8
000007fe`f3388888 ba9d010000 mov edx,19Dh
000007fe`f338888d b901000000 mov ecx,1
000007fe`f3388892 e889a786ff call mscorlib_ni+0x303020
000007fe`f3388897 488b4d50 mov rcx,qword ptr [rbp+50h]
000007fe`f338889b e8d07b89ff call mscorlib_ni+0x330470 (System.Threading.OverlappedData.GetOverlappedFromNative(...))
000007fe`f33888a0 488b7820 mov rdi,qword ptr [rax+20h]
000007fe`f33888a4 488b4708 mov rax,qword ptr [rdi+8]
000007fe`f33888a8 488b5818 mov rbx,qword ptr [rax+18h]
000007fe`f33888ac 4885db test rbx,rbx
000007fe`f33888af 741a je mscorlib_ni+0xa988cb (000007fe`f33888cb)
...
According to Microsoft's x64 register usage convention, at the time of the exception the RAX register contained the return value of the GetOverlappedFromNative(...) call. The faulting instruction (at offset +0x54, matching the crash line above) then dereferences a couple of field offsets off that value - rax+20h into rdi, then rdi+8 - and since rdi turns out to be null, a NullReferenceException is thrown. Such errors often occur when the owner object (socket, file stream etc.) is disposed while there's an outstanding IO operation. Once the asynchronous operation completes, the CLR attempts to use the disposed object - and we enter the wonderful world of undefined behavior. The usual result, though, is quite deterministic: a complete crash of the process due to the unhandled exception. Needless to say, you should never dispose an object before you're sure it has no remaining asynchronous IO operations in progress.
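To make the failure mode concrete, here's a contrived sketch of the problematic pattern (the endpoint is hypothetical and the code is deliberately simplified; since this is a race, it won't reproduce deterministically):

using System;
using System.Net;
using System.Net.Sockets;

class DisposeRaceSketch
{
    static void Main()
    {
        Socket socket = new Socket(AddressFamily.InterNetwork,
                                   SocketType.Stream, ProtocolType.Tcp);
        socket.Connect(new IPEndPoint(IPAddress.Loopback, 9000)); // hypothetical server

        SocketAsyncEventArgs args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[32768], 0, 32768);
        args.Completed += (s, e) => Console.WriteLine(e.BytesTransferred);

        socket.ReceiveAsync(args); // an overlapped IO operation is now in flight

        // BUG: disposing while the receive may still be outstanding. When the
        // completion is eventually dispatched, it runs against disposed state -
        // the m_DisposeCalled == 1 situation we'll observe later in the dump.
        args.Dispose();
        socket.Dispose();
    }
}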
This brings us to the final step. How can we check that the target object was indeed already disposed, and how can we tell which part of our codebase is responsible for this unfortunate bug?
First, we'll need to retrieve the OverlappedData instance that the native pointer maps to. This is exactly the purpose of the GetOverlappedFromNative function, but since it's implemented inside the CLR, we can't simply use a decompiler to see its implementation. Instead, we can use the !u command again. This time, the output should be similar to this:
0:073> !u 000007fe`f2c20470
Unmanaged code
000007fe`f2c20470 e9bb8d1a01 jmp clr!GetOverlappedFromNative (000007fe`f3dc9230)
0:073> !u 000007fe`f3dc9230
Unmanaged code
000007fe`f3dc9230 488d41b0 lea rax,[rcx-50h]
000007fe`f3dc9234 c3 ret
After a short redirection into the CLR, we can see that the function simply moves the pointer back by 0x50 bytes (that is, the native structure is actually embedded within the managed object). This is consistent with the function's implementation in the SSCLI:
static OverlappedDataObject* GetOverlapped (LPOVERLAPPED nativeOverlapped)
{
    ...
    return (OverlappedDataObject*)((BYTE*)nativeOverlapped - offsetof(OverlappedDataObject, Internal));
}
Dumping the contents of the OverlappedData instance should give an output similar to this:
0:073> !do 0x0000000005043e08-0x50
Name: System.Threading.OverlappedData
MethodTable: 000007fef2903180
EEClass: 000007fef2a0a110
Size: 120(0x78) bytes
File: C:\...\mscorlib.dll
Fields:
... Offset Type VT Attr Value Name
... 8 System.IAsyncResult 0 instance 0000000000000000 m_asyncResult
... 10 ...ompletionCallback 0 instance 0000000005a46028 m_iocb
... 18 ...ompletionCallback 0 instance 0000000000000000 m_iocbHelper
... 20 ...eading.Overlapped 0 instance 0000000005a46010 m_overlapped
... 28 System.Object 0 instance 0000000013335c18 m_userObject
... 30 ...ppedDataCacheLine 0 instance 00000000050436d8 m_cacheLine
... 38 System.IntPtr 1 instance 221828 m_pinSelf
... 40 System.IntPtr 1 instance 0 m_userObjectInternal
... 48 System.Int32 1 instance 1 m_AppDomainId
... 4c System.Int16 1 instance 13 m_slot
... 4e System.Byte 1 instance 0 m_isArray
... 4f System.Byte 1 instance 0 m_toBeCleaned
... 50 ....NativeOverlapped 1 instance 0000000005043e08 m_nativeOverlapped
Dumping the m_iocb member and then its m_target field should give us the problematic object. In the demonstrated case, the object is an instance of SocketAsyncEventArgs, so dumping it and looking at the m_DisposeCalled field confirms the theory:
0:073> !do 0000000005a53b58
Name: System.Net.Sockets.SocketAsyncEventArgs
MethodTable: 000007fef21248c0
EEClass: 000007fef1e6d550
Size: 440(0x1b8) bytes
File: C:\...\v4.0_4.0.0.0__b77a5c561934e089\System.dll
Fields:
... Offset Type VT Attr Value Name
... 80 System.EventArgs 0 shared static Empty
... >> Domain:Value 0000000000112a40:0000000003326168 <<
... 8 ...et.Sockets.Socket 0 instance 0000000000000000 m_AcceptSocket
... 10 ...et.Sockets.Socket 0 instance 0000000000000000 m_ConnectSocket
... 18 System.Byte[] 0 instance 0000000013335c18 m_Buffer
... 170 System.Net.WSABuffer 1 instance 0000000005a53cc8 m_WSABuffer
... e8 System.IntPtr 1 instance 13335c28 m_PtrSingleBuffer
... 130 System.Int32 1 instance 32768 m_Count
... 134 System.Int32 1 instance 0 m_Offset
... 20 ...lib]], mscorlib]] 0 instance 0000000000000000 m_BufferList
... 16c System.Boolean 1 instance 0 m_BufferListChanged
... 28 ...m.Net.WSABuffer[] 0 instance 0000000000000000 m_WSABufferArray
... 138 System.Int32 1 instance 32768 m_BytesTransferred
... 30 ...entArgs, System]] 0 instance 0000000000000000 m_Completed
...
... 16f System.Boolean 1 instance 1 m_DisposeCalled
...
In order to find out which part of our code initiated the current IO operation, we can use !gcroot on the target object or on the m_userObject member. In some cases, simply examining the contents of m_userObject can give us a rough idea regarding the identity of the culprit.
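As for prevention, one way to uphold the "no dispose while IO is outstanding" rule is to make the completion path the only place where disposal happens. A minimal sketch under the same hypothetical socket setup as above (names are illustrative, error handling omitted):

using System;
using System.Net.Sockets;

static class SafeDisposeSketch
{
    public static void ReceiveThenDispose(Socket socket, SocketAsyncEventArgs args)
    {
        args.Completed += (s, e) => Finish(socket, e);

        // ReceiveAsync returns false when the operation completed synchronously;
        // in that case the Completed event will not be raised, so finish inline.
        if (!socket.ReceiveAsync(args))
            Finish(socket, args);
    }

    static void Finish(Socket socket, SocketAsyncEventArgs args)
    {
        Console.WriteLine("Received {0} bytes", args.BytesTransferred);
        args.Dispose();  // safe: no IO is outstanding at this point
        socket.Dispose();
    }
}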
Monday, October 25, 2010
The Case of NUnit Hanging During Startup
Recently, a coworker of mine encountered a strange behavior in NUnit: every time he'd open NUnit's graphical interface, it would freeze and stop responding. Even though a test hadn't even started executing, the application simply froze, leaving the user to wait patiently and stare helplessly at the screen.
In order to investigate the issue, I fired up WinDbg and attached it to the hung process.
After loading SOS, I issued the !clrstack command to get a better understanding of what was keeping the application busy. Since NUnit's graphical interface had stopped responding, one could already assume that the main thread had stopped handling messages from the message pump for some reason.
Reviewing the command's output confirmed that assumption (the outputs in this post were edited for brevity):
0:007> ~*e!clrstack
PDB symbol for mscorwks.dll not loaded
OS Thread Id: 0xe4 (0)
ESP      EIP
0012ee44 7c90e4f4 Win32Native.GetFileAttributesEx(...)
0012ee58 792e03f6 System.IO.File.FillAttributeInfo(...)
0012eeec 7927ff71 System.IO.File.InternalExists(System.String)
0012ef20 792e96a6 System.IO.File.Exists(System.String)
0012ef4c 03a214c3 NUnit.Util.RecentFileEntry.get_Exists()
0012ef54 00e0fb7a NUnit.Gui.NUnitForm.NUnitForm_Load(Object, EventArgs)
0012ef8c 7b1d4665 System.Windows.Forms.Form.OnLoad(EventArgs)
0012efc0 7b1d4105 System.Windows.Forms.Form.OnCreateControl()
0012efcc 7b1c6d11 System.Windows.Forms.Control.CreateControl(Boolean)
0012f008 7b1c6b14 System.Windows.Forms.Control.CreateControl()
0012f020 7b1d2fc8 System.Windows.Forms.Control.WmShowWindow(Forms.Message)
0012f05c 7b1c8906 System.Windows.Forms.Control.WndProc(Forms.Message)
0012f060 7b1d1d6a [InlinedCallFrame: 0012f060]
...
0012f418 7b195911 System.Windows.Forms.Application.Run(System.Windows.Forms.Form)
0012f42c 00e00643 NUnit.Gui.AppEntry.Main(System.String[])
0012f464 00e00076 NUnit.Gui.Class1.Main(System.String[])
0012f688 79e71b4c [GCFrame: 0012f688]
OS Thread Id: 0x1464 (1)
Unable to walk the managed stack. The current thread is likely not a managed thread. You can run !threads to get a list of managed threads in the process
OS Thread Id: 0xdac (2)
Failed to start stack walk: 80004005

Evidently, NUnit is performing a synchronous IO operation on its main thread, which for some reason refuses to complete. This happens to be a good example of why it's considered a bad practice to perform "heavy-weight lifting" on the main UI thread, or synchronous IO in general.
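For illustration, here's a minimal sketch of the alternative (not NUnit's actual code; names and behavior are hypothetical): probe the file off the UI thread and marshal the result back, so that a dead network path can't freeze the message pump.

using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;

public class MainForm : Form
{
    public MainForm()
    {
        Load += MainForm_Load;
    }

    void MainForm_Load(object sender, EventArgs e)
    {
        string path = @"\\123.123.123.123\foo.dll"; // e.g. a stale recent-projects entry

        ThreadPool.QueueUserWorkItem(ignored =>
        {
            // May block for a long time on an unreachable path,
            // but no longer on the UI thread.
            bool exists = File.Exists(path);

            // Marshal the result back to the UI thread before touching control state.
            BeginInvoke((Action)(() =>
                Text = exists ? "Project available" : "Project missing"));
        });
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new MainForm());
    }
}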
After obtaining this piece of information, the next step was to understand why the operation doesn't complete. Perhaps the path of the requested file could shed some light on the observed behavior.
Since the path is passed across several managed methods before entering the Win32 API, I attempted to dump the values of the parameters in the managed call stack using the !clrstack -p command:
0:000> !clrstack -p
OS Thread Id: 0xe4 (0)
ESP      EIP
0012ee44 7c90e4f4 Win32Native.GetFileAttributesEx(...)
0012ee58 792e03f6 File.FillAttributeInfo(System.String, ...)
    PARAMETERS:
        path = no data
        data = 0x0012eeec
        tryagain = no data
        returnErrorOnNotFound = 0x00000001
0012eeec 7927ff71 System.IO.File.InternalExists(System.String)
    PARAMETERS:
        path = no data
0012ef20 792e96a6 System.IO.File.Exists(System.String)
    PARAMETERS:
        path = no data
0012ef4c 03a214c3 NUnit.Util.RecentFileEntry.get_Exists()
    PARAMETERS:
        this = no data
0012ef54 00e0fb7a NUnit.Gui.NUnitForm.NUnitForm_Load(Object, EventArgs)
    PARAMETERS:
        this = 0x013349ec
        sender = no data
        e = no data

Unfortunately, SOS wasn't able to retrieve the address of the managed string (thus stating "no data"). It's likely that either the variable was stored in a register that was already overwritten by code that executed later, or perhaps we're just witnessing some of SOS's good old "buggy nature".
At this point, I took a small step into the world of native debugging and issued the kb command to display the native call stack, including the first three parameters passed to each method:
0:000> kb
ChildEBP RetAddr Args to Child
0012ed84 7c90d79c 7c8111ff 0012eddc 0012eda4 ntdll!KiFastSystemCallRet
0012ed88 7c8111ff 0012eddc 0012eda4 00151bb8 ntdll!ZwQueryFullAttributesFile+0xc
0012ee08 0097a310 0139272c 00000000 0012eeec KERNEL32!GetFileAttributesExW+0x84
WARNING: Frame IP not in any known module. Following frames may be wrong.
0012ee2c 792e03f6 0012ee5c 00000000 01392764 0x97a30f
0012eedc 7927ff71 00000001 00000000 00000000 mscorlib_ni+0x2203f6
0012ef18 792e96a6 00000000 00000000 00000000 mscorlib_ni+0x1bff71
0012ef80 7b1d4665 01336c40 0134f25c 00000000 mscorlib_ni+0x2296a6
0012efb8 7b1d4105 00000004 0012f000 7b1c6d11 System_Windows_Forms_ni+0x204665
0012efc4 7b1c6d11 0133797c 013349ec 00000001 System_Windows_Forms_ni+0x204105
0012f000 7b1c6b14 00000000 00000000 013349ec System_Windows_Forms_ni+0x1f6d11
0012f018 7b1d2fc8 013349ec 00000000 00000000 System_Windows_Forms_ni+0x1f6b14
0012f054 7b1c8906 3cc1af52 79e7a6b8 0012f2d4 System_Windows_Forms_ni+0x202fc8
0012f0ac 7b1d1d6a 013349ec 0012f0c0 7b1d1d20 System_Windows_Forms_ni+0x1f8906
0012f0b8 7b1d1d20 0012f0d0 7b1d2f11 0012f10c System_Windows_Forms_ni+0x201d6a
0012f0c0 7b1d2f11 0012f10c 013349ec 0012f0e4 System_Windows_Forms_ni+0x201d20
0012f0d0 7b1d1af4 00e111ec 0012f10c 01335448 System_Windows_Forms_ni+0x202f11
0012f0e4 7b1c8640 0012f100 7b1c85c1 00000000 System_Windows_Forms_ni+0x201af4
0012f0ec 7b1c85c1 00000000 01335448 0012f130 System_Windows_Forms_ni+0x1f8640
0012f100 7b1c849a 01335448 00291106 00000018 System_Windows_Forms_ni+0x1f85c1
0012f164 7e418734 00291106 00000018 00000001 System_Windows_Forms_ni+0x1f849a

As you can see, the thread eventually made its way into the kernel, calling GetFileAttributesExW.
Knowing that the first parameter passed to the method (lpFileName) represents the name of the file or directory in hand, we're only left to dump its value using the du command:
0:000> du 0139272c
0139272c  "\\123.123.123.123\foo.dll"

With that, the culprit is found. Apparently, some time ago a unit-test assembly was loaded into NUnit from a certain IP address that is no longer available on the network, causing the thread to stall indefinitely.
In order to fix the issue, the problematic path had to be removed from the RecentProjects list in NUnit's settings file (located at %AppData%\NUnit\NUnitSettings.xml).
Friday, September 10, 2010
"!CLRStack -p" Isn't Always Reliable
One of the most commonly used commands in SOS is !CLRStack. When combined with the -p switch, SOS will attempt to display the values of the parameters that were passed to the functions in our managed call stack. It's important to emphasize that SOS will only attempt to display the correct values, since sometimes it's just going to get them all wrong.
In the case where SOS comes to the conclusion that it cannot retrieve the value of a parameter, the string <no data> will be displayed. This happens in situations where SOS honestly can't track the value of the parameter by just looking at a specific stack frame. For example, if we're using a calling convention such as fastcall, the first two parameters (starting from the left) that fit in a register will be passed in the ECX and EDX registers instead of being pushed onto the stack. For member functions, the value of the this pointer is usually passed in the ECX register.
This kind of behavior can lead to situations where the values of some of the function parameters appear to be missing, since the registers were already overwritten by other functions that were called further down the stack.
As opposed to situations where SOS is able to conclude that it isn't sure what the correct value of a parameter is, every once in a while it's just going to get things wrong and display incorrect values. This could obviously mislead the person attempting to debug the application.
The thing that might be most concerning about this phenomenon is that it's not very hard to reproduce. Let's take the following application as an example:
class Program
{
    static void Main(string[] args)
    {
        Foo a = new Foo();
        a.Work(1, 2, 3, 4, 5);
    }
}

class Foo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    public void Work(int x, int y, int z, int k, int p)
    {
        // break with debugger here
    }
}

Now we'll run WinDbg, load SOS and see what the !CLRStack command has to say about the parameters that were passed to the Work method.
(Note: the output might slightly vary depending on your version of SOS. The following debugging session was performed on version 4.0.30319.1.)
0:000> !clrstack -p
OS Thread Id: 0xfbc (0)
Child SP IP Call Site
0012f3fc 030300b3 ConsoleApplication1.Foo.Work(Int32, Int32, Int32, Int32, Int32) [Program.cs @ 24]
PARAMETERS:
this (<CLR reg>) = 0x00b3c358
x (<CLR reg>) = 0x00000001
y (0x0012f40c) = 0x00000003
z (0x0012f408) = 0x00000004
k (0x0012f404) = 0x00000005
p (0x0012f400) = 0x03030092
0012f414 03030092 ConsoleApplication1.Program.Main(System.String[]) [Program.cs @ 16]
PARAMETERS:
args = <no data>
0012f648 791421db [GCFrame: 0012f648]
0:000> r ecx // holds "this" pointer
ecx=00b3c358
0:000> r edx // holds "x" parameter
edx=00000001
If so, it isn't difficult to see that besides the this pointer and the x parameter (which were passed in registers), SOS got all of the other parameters wrong. In fact, one may notice a "shift" in the displayed values (y got the value of z, while z got the value of k, and so on...).
In order to better understand what went on here, we'll print the memory between the relevant stack pointers (in previous versions of SOS, this value was represented by the ESP column instead of the Child SP column):
0:000> dp /c 1 0012f3fc 0012f414
0012f3fc 0012f414 // EBP
0012f400 03030092 // Return Address
0012f404 00000005 // p
0012f408 00000004 // k
0012f40c 00000003 // z
0012f410 00000002 // y
0012f414 0012f424 // EBP
Now, when we compare the stack addresses of the parameters that SOS gave us against the real addresses that we see here, we can confirm that a small shift was applied to the memory addresses of the parameters that were passed on the stack. So every time SOS attempts to display the value of a parameter, it actually displays the value of the parameter that was passed next to it.
This scenario is a classic example of the somewhat "buggy" nature of SOS. It doesn't mean that we have to immediately stop using the !CLRStack command, but it should remind us not to take SOS's output as the "absolute truth" when debugging, and to keep ourselves alert for all kinds of "weird" behaviors such as this one.
Thursday, September 2, 2010
DateTime.Now in v4.0 Causes Dynamic Memory Allocations
A while back, I mentioned in a post that calling DateTime.Now causes boxing. Following that post, a feedback item was also posted on Microsoft Connect, reporting the issue.
Yesterday, the feedback's status turned to Resolved, after Greg cleverly remarked that this issue was fixed in v4.0 (the boxing occurred up until v3.5).
However, after reading the response I took the same benchmark code from the previous post and ran it again, this time using .NET Framework 4.0. Surprisingly, perfmon kept reporting high allocation rates in the Allocated Bytes/sec counter. So I opened up Reflector and took a look at DateTime.Now's new implementation. And indeed, the boxing issue was resolved, since the new implementation uses the TimeZoneInfo type instead of TimeZone. Unable to find the source of the allocations from just looking at the code, it was time to launch WinDbg.
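For reference, a minimal stand-in for that benchmark (not the original post's exact code) is just a tight loop over DateTime.Now, observed with perfmon's .NET CLR Memory counters:

using System;

class NowBenchmark
{
    static void Main()
    {
        long ticks = 0;

        // Let this spin while watching "Allocated Bytes/sec" in perfmon;
        // under v4.0 the counter still reports a high allocation rate.
        for (long i = 0; i < long.MaxValue; i++)
        {
            ticks += DateTime.Now.Ticks; // keeps the call from being optimized away
        }

        Console.WriteLine(ticks);
    }
}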
After letting the application run for a while, I attached WinDbg to the process and executed the !DumpHeap -stat command a couple of times, so I could take a general look at the kinds and amounts of objects currently living on the managed heap. The output was as follows:
0:003> !dumpheap -stat
total 0 objects
Statistics:
MT Count TotalSize Class Name
...
79ba1d88 13 1532 System.Char[]
79ba2f20 1 3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac 449 12220 System.String
001663d8 53 13212 Free
79b56c28 30 17996 System.Object[]
79b8e7b0 9119 291808 System.Globalization.DaylightTime
Total 9771 objects
0:003> !dumpheap -stat
total 0 objects
Statistics:
MT Count TotalSize Class Name
...
79ba1d88 13 1532 System.Char[]
79ba2f20 1 3164 Dictionary[[String, mscorlib],[String, mscorlib]][]
79b9f9ac 449 12220 System.String
001663d8 53 13212 Free
79b56c28 30 17996 System.Object[]
79b8e7b0 20654 660928 System.Globalization.DaylightTime
Total 21306 objects
This output reveals the identity of our memory-consuming bandit: DaylightTime.
Now, all that's left is to spot where this type is being instantiated and used. For this purpose, we can use the !BPMD -md command to place breakpoints on specific managed methods that DaylightTime exposes (you can dump the type's methods using the !DumpMT -md <MethodTable address> command).
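For example, using the DaylightTime MethodTable address from the !DumpHeap output above (the MethodDesc address passed to !BPMD is a placeholder here; pick it from the !DumpMT listing):

0:003> !DumpMT -md 79b8e7b0
0:003> !bpmd -md <MethodDesc address picked from the listing>
0:003> g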
After setting the breakpoint, the application continues its execution and immediately breaks. Looking at the managed call stack using !CLRStack reveals the source of the allocation: TimeZoneInfo.GetIsDaylightSavingsFromUtc. This method creates an instance of DaylightTime, and since DaylightTime is a class (hence, a reference type), a dynamic memory allocation is made:
// a snippet from the implementation:
TimeSpan span = utc;
DaylightTime daylightTime = GetDaylightTime(Year, rule);
DateTime startTime = daylightTime.Start - span;
DateTime endTime = (daylightTime.End - span) - rule.DaylightDelta;
In conclusion: DateTime.Now's implementation was updated in BCL v4.0, and the boxing of Int32s was removed since the implementation now uses the TimeZoneInfo type instead of TimeZone. However, using TimeZoneInfo introduces a new source of dynamic allocations; this time, instead of redundant boxing, the allocations are caused simply because a reference type is being used under the hood. And since each instance of DaylightTime weighs in at 32 bytes (including the object header), the update in the BCL could be considered a turn for the worse regarding memory usage, as DaylightTime instances consume more memory than boxed instances of Int32.