Memory Leak – Microsoft.Reporting.WinForms.LocalReport

Several weeks ago I was asked to assist with a sluggish ASP.NET C# MVC application. This application is an OLTP system and processes several thousand orders a day. It was hosted on a Windows Server 2012 R2 64-bit, 16-core machine. After discussing with the developer, I was told that the application performs acceptably for a while and then becomes sluggish (this behavior suggests some kind of leak). At times it becomes extremely slow, and rebooting the machine solves the problem. The problem exhibits itself more frequently on days with above-average volumes.

I downloaded Process Explorer from Sysinternals and examined the w3wp.exe processes (there were several, but two of them stood out). Process Explorer does not display handle information by default; I will probably blog about that separately. There are many articles out there that explain how to look for handles using Process Explorer, and this should get you started. Here is another excellent blog explaining the theoretical limit of handles. It’s a theoretical limit because there are many other types of thresholds that an application can reach which cause severe performance degradation.

I knew that we had a potential handle leak. There were also other issues within the application that were not related to the handle leak. The challenge was to isolate the handle leak from defects in the program logic, third-party hardware behaviors (like non-genuine printer cartridges) and various other user errors. Debugging a live system with several hundred users connected wasn’t an option, but we were able to get a 4.5 GB memory dump.

When I moved the memory dump to a machine where I could analyze it, I found that the production server was running an older version of the .NET Framework, so the MS symbol server wasn’t much help. I ended up copying the complete Framework64 folder to the local machine (a different folder, not the Microsoft.NET\Framework folder) and loaded SOS using the .load command.

[Screenshot: loading SOS with the .load command]
The .time command compares the overall CPU time consumed by the process against the system uptime; the numbers showed that this wasn’t CPU contention.

[Screenshot: .time output]

The !dumpheap command displays the managed heap. The output contains the Method Table, Object Instance Count, Memory Allocated and the Type, from left to right. !dumpheap -stat reported about 164k lines of output and 9.7M object references. Many of these objects pointed to dynamically created Report Expressions.

[Screenshot: !dumpheap -stat output]

Typically the next logical step is to use !dumpheap -mt <<method table address>> to display all the individual object instances and then review the individual objects to understand them better.
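In command form, the drill-down looks roughly like this (the method table and object addresses are placeholders, not values from this dump):

$$ Summarize the heap by type (Method Table, Count, TotalSize, Class Name)
!dumpheap -stat
$$ List every instance behind one method table (address is a placeholder)
!dumpheap -mt 6dcf1230
$$ Inspect an individual instance and check what keeps it rooted
!do 02a4b3c8
!gcroot 02a4b3c8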

[Screenshot: !dumpheap output]

Looking for loaded modules using the !eeheap -loader command showed that the worker process had 2622 AppDomains (with a loader heap size of 537 MB). Dumping one of the modules using the !DumpModule command pointed to a dynamically loaded expression library associated with SQL Server Reporting Services.

[Screenshots: !eeheap -loader output]

At this point, I knew that the issue was likely related to the dynamically loaded expression assembly and its associated AppDomain that failed to unload. What was keeping the AppDomain from unloading? The dynamic expression library is used by the LocalReport object, which is part of the Microsoft.Reporting.WinForms namespace.

Further research showed that creating a new AppDomain is by design. The developers at Microsoft had identified an issue with the dynamic expression library being loaded into the default AppDomain and using up more memory each time a LocalReport object was created. An assembly, once loaded into an AppDomain, can never be unloaded. Hence the dynamic assembly was loaded into a new AppDomain, and if all goes well, that AppDomain can be safely unloaded. As per the documentation, calling LocalReport.ReleaseSandboxAppDomain should unload the AppDomain. Unfortunately, the unload never happens because there is a problem with the internal implementation of the Dispose method.
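For context, here is a minimal sketch of the usage pattern in question (the report path, the data source name and the orders collection are illustrative, not from the actual application):

// Assumes a reference to Microsoft.Reporting.WinForms; names below are illustrative.
using (var report = new LocalReport())
{
    report.ReportPath = @"Reports\Label.rdlc";
    report.DataSources.Add(new ReportDataSource("Orders", orders));
    byte[] pdf = report.Render("PDF");

    // Per the documentation this should unload the sandbox AppDomain;
    // in practice the leaked event subscriptions keep it alive.
    report.ReleaseSandboxAppDomain();
}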

Using JetBrains dotPeek to decompile the assembly, I was able to identify the event delegate.

[Screenshot: the event delegate in the decompiled source]

Looking at the implementation, the handlers are not removed in the Dispose method, which in turn keeps the AppDomain from unloading. I hope this gets resolved in a future release. For now, we are recycling the application pool more frequently to work around the memory leak, and the developers are looking at an alternate solution to print the labels.

[Screenshot: the Dispose implementation]

One of the questions I was asked while presenting my findings to the project manager was why this suddenly became a problem, since the application had been in production for over 2 years. The issue had been there from the very beginning. The out-of-the-box application pool recycle settings were causing the host process to shut down after a period of inactivity (this app was used only between 8am and 5pm). So it had enough time to shut down after hours, and the problem would never surface until the transaction volume increased, at which point the application started hitting thresholds.

It always helps to set baselines. Keep tabs on System configuration, Transaction Volume and major system change events. This will greatly shorten the time it takes to troubleshoot performance issues.


WinDbg – Welcome to Logging

Performing dump analysis is oftentimes challenging due to the nature of the beast: the UI, or the lack thereof :). Some commands are extremely time consuming, or, like me, you may just hate scrolling. Also, there is only so much that you can view by scrolling.

Logging can be enabled before running commands that produce tons of output. We can also create a separate log file for each command by switching to a new log file before executing it. From there, we can open the log files in an external editor like Notepad++ and work with WinDbg on another monitor.

To enable logging, go to the Edit menu -> Open/Close Log File…

[Screenshot: Edit menu -> Open/Close Log File…]

Provide the log file name and click OK, and you are all done. Check the Append checkbox to append to an existing file; otherwise the file gets overwritten.

[Screenshot: log file dialog with the Append checkbox]

After I am done capturing the logs that I need, I come back and click the Close Open Log File button to close the file. This disables logging while I take the file offline for analysis.
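The same thing can be done from the command line, which also works nicely in scripts (the path is illustrative):

$$ Start logging before a long-running command
.logopen c:\temp\dumpheap.log
!dumpheap -stat
$$ Close the log so the file can be opened in the external editor
.logclose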

The output can also be copied to Excel and formatted to quickly sift through.

.foreach in WinDbg

Repeatedly executing a command over a list of values is one of the coolest features in WinDbg. In the example below, I used one method table from the output of a !FinalizeQueue command. !FinalizeQueue, similar to !DumpHeap, provides the Method Table, Object Count, Size allocated and the Type.

  • Here 6dcf1230 is one of the Method Tables that I was interested in. I wanted to see if any objects in this table were rooted.
  • myvar is a variable that holds each address returned by the !DumpHeap command; -short returns only the addresses. Maybe in another article I will cover how to process the tokens.
  • the .echo command needs no introduction; in this context it is used twice, once to display the address and once to separate the output.
.foreach (myvar {!DumpHeap /d -mt 6dcf1230 -short}) {.echo myvar;!gcroot -all myvar;.echo **************;}
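The same pattern works with any inner command that emits one address per line; for example, dumping each instance instead of walking its roots (same placeholder method table):

.foreach (obj {!DumpHeap /d -mt 6dcf1230 -short}) {!do obj}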


Fun with WinDbg

I was looking into a memory leak using a Win32 w3wp memory dump. Looking into the !FinalizeQueue output, I saw System.Data.DataTableCollection objects in Gen2. Based on a hunch, I dumped the method table.

[Screenshot: dumping the method table]

I dumped a couple of the addresses listed above.

[Screenshot: dumping individual object addresses]

I looked at the dataset and the _list, which brought me to the _items below.

[Screenshot: the dataset and its _list field]

The list of items is actually an Array.

[Screenshot: the _items field]

I dumped the array and reviewed one of the array elements, as shown below.

[Screenshot: dumping the array and one of its elements]

Viewing the value for tablename showed the string below. This is one of the objects used to retain Trace data.

[Screenshot: the tablename string value]

I am not posting the rest of the table structure, but if you are interested, you are welcome to follow the steps. Trace.axd was still active, so I ended up disabling it. Now back to looking into the actual issue.
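For reference, the whole walk in command form looks roughly like this (the addresses are placeholders, not values from the actual dump):

$$ The statistics section at the end of !FinalizeQueue gives the method table
!FinalizeQueue
$$ List the DataTableCollection instances and dump one of them
!dumpheap -mt 79f2a1b0 -short
!do 0b1c2d30
$$ Follow the _list field to the ArrayList, then dump its _items array
!do 0b1c2d58
!DumpArray -details 0b1c2d80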

Identify Locked objects that are Invalid

Here is a query that is useful for identifying locks on objects that are INVALID, which prevent the objects from being recompiled. These sessions need to be identified and stopped or killed.

with src as
(
    select sid, ao.owner, object, status
      from v$access v, all_objects ao
     where ao.object_name = v.object
       and ao.status = 'INVALID'
)
select v.sid,
       src.owner,
       src.object,
       serial#,
       username,
       osuser,
       logon_time,
       program,
       action,
       src.status
  from v$session v, src
 where v.sid = src.sid;
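Once an offending session is confirmed, it can be stopped with the usual command (the sid and serial# come from the query output; the values below are illustrative):

alter system kill session '123,45678';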

Oracle Performance Issues – FTS

The software that we design needs to be periodically reviewed and optimized for performance. We have the option of throwing cheaper hardware at the problem (horizontal or vertical scaling) to mitigate performance issues, not to mention the licensing implications. Unless revisited and optimized, the solution does not scale well. Leaving behind boilerplate code just because it can be reused causes more headaches down the line.

While analyzing one such performance issue, I identified that a dozen staging tables had silently grown to several gigabytes over the years. Even though only a few thousand records get processed during each pass, SELECT queries against these tables were using full table scans. Under stress, the performance issue was noticeable.

Now I had to know what other queries were being executed that used a full table scan. The query below helps with just that.

WITH src
     AS (  SELECT TRUNC (MAX (sn.begin_interval_time)) last_used,
                  ds.owner,
                  hsp.object_name,
                  ROUND (ds.bytes / 1048576, 0) size_in_mb,
                  hs.sql_id
             FROM dba_hist_sql_plan hsp,
                  dba_hist_sqlstat hs,
                  dba_hist_snapshot sn,
                  dba_segments ds
            WHERE     hsp.object_owner <> 'SYS' --Exclude System Objects
                  AND hsp.object_owner = ds.owner
                  AND hsp.object_name = ds.segment_name
                  AND ds.bytes > 20971520 --Find Objects Greater than 20 MB
                  AND hsp.operation LIKE '%TABLE ACCESS%'
                  AND hsp.options LIKE '%FULL%'
                  AND hsp.sql_id = hs.sql_id
                  AND hs.snap_id = sn.snap_id
         GROUP BY owner,
                  object_name,
                  hs.sql_id,
                  ds.bytes
         ORDER BY object_name)
  SELECT src.*, a.sql_text
    FROM src LEFT OUTER JOIN v$sqlarea a ON a.sql_id = src.sql_id
   WHERE last_used > SYSDATE - 7
ORDER BY object_name, last_used

The size of the object is reported in the size_in_mb column, to help identify high-value items.

If the sql_text column gets truncated, go to the view below instead and use the SQL_ID. Some parsing may be required, or use LISTAGG to concatenate the pieces into a single value, as sketched after the query.

select sql_text
  from v$sqltext
 where sql_id = 'b2t8xugd549k5'
 order by piece;
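A minimal LISTAGG version might look like this (keep in mind LISTAGG output is limited to 4000 characters unless overflow handling is used):

select listagg(sql_text) within group (order by piece) as sql_text
  from v$sqltext
 where sql_id = 'b2t8xugd549k5';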

Troubleshooting – Erratic Load Balanced Web Application behavior

Have you ever run into situations where the customer reports an issue, but the dev team is not able to reproduce it? Well, I have had several of these, and sometimes one of the web servers is configured differently than the rest in a load-balanced environment. This behavior can also show up when we deploy a new farm or add new servers to an existing farm.

Recently a customer reported that attachments on a particular customer portal were not viewable. Sure enough, when we initially tested the issue, it worked for us. We were able to quickly use this technique to isolate the server and then add the missing MIME type.

The actual issue/behavior can be identified using Fiddler or the Developer Tools. Identifying the server that is causing the erratic behavior is key to either quickly resolving the issue or removing the server from the farm for offline resolution.

When adding each server to the farm, add an HTTP Response Header that uniquely identifies the server, as shown in the screenshot below. Don’t include any sensitive details that might compromise your server.

[Screenshot: custom HTTP Response Header configured in IIS]
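In IIS this can be set declaratively in web.config on each server. A minimal sketch, with an assumed header name and value:

<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Use an opaque id per server, nothing sensitive -->
        <add name="X-Server-Id" value="WEB01" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>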

Shown below is a network capture using Firefox / Firebug, but the same steps apply to Fiddler and other Developer Tools. After you have captured enough events to reproduce the issue, sift through the requests to find the offending one, inspect the HTTP Response Header, and you will be able to locate the server.

[Screenshot: response headers in the Firebug Net panel]

Keep in mind that the Developer Tools have limitations on the total number of requests/responses that can be captured without degrading performance. Fiddler seems to be the better tool of choice for this particular debugging scenario.

It’s also possible to set the HTTP Response Header programmatically. Here is a minimal sketch (the header name and value are assumptions; use an opaque id rather than anything sensitive):
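// Global.asax.cs - a minimal sketch; the header name and value are illustrative.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Emit an opaque per-server id on every response
    Response.AppendHeader("X-Server-Id", "WEB01");
}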