Memory Leak – Microsoft.Reporting.WinForms.LocalReport

Several weeks ago I was asked to assist with a sluggish ASP.NET C# MVC application. The application is an OLTP system that processes several thousand orders a day, hosted on a Windows Server 2012 R2 64-bit, 16-core machine. The developer told me that the application performs acceptably for a while and then becomes sluggish (behavior that suggests some kind of leak). At times it becomes extremely slow, and rebooting the machine solves the problem. The problem showed up more frequently on days with above-average volumes.

Using Process Explorer from Sysinternals, I looked at the w3wp.exe processes (there were several, but two of them stood out). Process Explorer does not display handle information by default; I will probably blog about that separately. There are many articles out there that explain how to look for handles using Process Explorer, and an excellent blog post explains the theoretical limit of handles. It is a theoretical limit because there are many other thresholds an application can reach that cause severe performance degradation.

I knew that we had a potential handle leak. There were also other issues within the application that were not related to the handle leak. The challenge was to isolate the handle leak from defects in the program logic, third-party hardware behaviors (like non-genuine printer cartridges) and various other user errors. Debugging a live system with several hundred users connected wasn't happening, but we were able to get a 4.5 GB memory dump.

When I moved the memory dump to a machine where I could analyze it, I found that the Production server was running an older version of the .NET Framework, so the Microsoft symbol server wasn't much help. I ended up copying the complete framework64 folder from the Production server to the local machine (into a separate folder, not the local Microsoft.NET\Framework folder) and loaded SOS from it using the .load command.

[Screenshot: loading SOS with the .load command]
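Roughly, the commands looked something like this (the folder name below is just a placeholder for wherever the copied framework64 folder ended up):

$$ load SOS from the copied Production framework folder (path is a placeholder)
.load C:\ProdFramework64\sos.dll
$$ sanity check that the CLR version matches the dump
!eeversion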
Comparing the overall CPU time consumed by the process against the system uptime (the .time command) showed that this wasn't CPU contention.

[Screenshot: .time output]

The dumpheap command displays the managed heap. The output of !dumpheap -stat contains, from left to right, the method table, the object instance count, the memory allocated and the type. It reported about 164k lines of output and 9.7M object references, and many of these objects pointed to dynamically created report expressions.

[Screenshot: !dumpheap -stat output]

Typically the next logical step is to use !dumpheap -mt <method table address> to display all the individual object instances and then review the individual objects to understand them better.

[Screenshot: !dumpheap output]
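For reference, the general SOS flow described above looks something like this (the method table and object addresses are placeholders; in practice they come out of the -stat output):

$$ histogram of the managed heap: method table, instance count, total size, type
!dumpheap -stat
$$ list every instance that belongs to a particular method table
!dumpheap -mt 000007fe91a2b3c0
$$ inspect an individual instance
!do 000000010a2b4c68
$$ see what is keeping an instance alive
!gcroot 000000010a2b4c68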

Looking at the loaded modules using the !eeheap -loader command showed that the worker process had 2,622 AppDomains (with a loader heap size of 537 MB). Dumping one of the modules with the !DumpModule command pointed to a dynamically loaded expression library associated with SQL Server Reporting Services.

[Screenshots: !eeheap -loader output]
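The commands for that step, roughly (the module address is a placeholder taken from the loader heap listing):

$$ loader heap usage, broken down per AppDomain
!eeheap -loader
$$ list the AppDomains in the process
!dumpdomain
$$ inspect one of the modules listed under an AppDomain
!dumpmodule 000007fe91b40120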

At this point I knew the issue was likely related to the dynamically loaded expression assembly and an associated AppDomain that failed to unload. What was keeping the AppDomain from unloading? The dynamic expression library is used by the LocalReport object, which is part of the Microsoft.Reporting.WinForms namespace.

Further research showed that creating a new AppDomain is by design. The developers at Microsoft had identified an issue with the dynamic expression library being loaded into the default AppDomain and using up more memory each time a LocalReport object was created. An assembly, once loaded into an AppDomain, can never be unloaded on its own; the only way to reclaim it is to unload the whole AppDomain. Hence the dynamic assembly is loaded into a new AppDomain, and if all goes well that AppDomain can be safely unloaded. Per the documentation, calling LocalReport.ReleaseSandboxAppDomain should unload the AppDomain. Unfortunately, the unload never happens because there is a problem with the internal implementation of the Dispose method.
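To make the pattern concrete, here is a minimal sketch of how the object is typically used. The helper name, report path and data source name are made up for illustration; only ReleaseSandboxAppDomain and Dispose come from the actual API.

using System.Data;
using Microsoft.Reporting.WinForms;

// Hypothetical label-printing helper; report path and data source name are placeholders.
public static class LabelPrinter
{
    public static byte[] RenderLabel(DataTable labelData)
    {
        var report = new LocalReport();
        try
        {
            report.ReportPath = @"C:\Reports\Label.rdlc";
            report.DataSources.Add(new ReportDataSource("LabelData", labelData));

            // Rendering compiles the report expressions into a dynamic assembly,
            // which gets loaded into a separate sandbox AppDomain.
            return report.Render("PDF");
        }
        finally
        {
            // Per the documentation this should let the sandbox AppDomain unload.
            // In the build we were running it never did, because Dispose does not
            // unhook the internal event handlers that keep the AppDomain rooted.
            report.ReleaseSandboxAppDomain();
            report.Dispose();
        }
    }
}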

Using JetBrains dotPeek to decompile the assembly, I was able to identify the event delegate.


Looking at the implementation, the handlers are not removed in the Dispose method, which in turn keeps the AppDomain from unloading. Hopefully this gets resolved in a future release. For now we are recycling the application pool more frequently to work around the memory leak, and the developers are looking at an alternate way to print the labels.


One of the questions I was asked while presenting my findings to the project manager was why this suddenly became a problem when the application had been in Production for over two years. The issue had been there from the very beginning. The out-of-the-box application pool recycle settings shut the host process down after a period of inactivity (this app was used only between 8am and 5pm), so the process got a fresh start after hours and the problem never surfaced until the transaction volume increased, at which point the application started hitting those thresholds.

It always helps to set baselines. Keep tabs on system configuration, transaction volume and major system change events; it will greatly shorten the time it takes to troubleshoot performance issues.


Oracle Performance Issues – FTS

The software that we design needs to be periodically reviewed and optimized for performance. We can throw cheaper hardware at a performance issue (scaling horizontally or vertically), though that often brings additional licensing costs. Unless the solution is revisited and optimized, it does not scale well, and boilerplate code that gets left in place because it can be reused causes more headaches down the line.

While analyzing one such performance issue, I identified that a dozen staging tables had silently grown to several gigabytes over the years. Even though only a few thousand records get processed during each pass, the SELECT queries against these tables were using full table scans. Under stress, the performance hit was noticeable.

Now I wanted to know what other queries were being executed that used a full table scan. The query below helps with just that.

WITH src
     AS (  SELECT TRUNC (MAX (sn.begin_interval_time)) last_used,
                  ds.owner,
                  hsp.object_name,
                  ROUND (ds.bytes / 1048576, 0) size_in_mb,
                  hs.sql_id
             FROM dba_hist_sql_plan hsp,
                  dba_hist_sqlstat hs,
                  dba_hist_snapshot sn,
                  dba_segments ds
            WHERE     hsp.object_owner <> 'SYS' --Exclude System Objects
                  AND hsp.object_owner = ds.owner
                  AND hsp.object_name = ds.segment_name
                  AND ds.bytes > 20971520 --Find Objects Greater than 20 MB
                  AND hsp.operation LIKE '%TABLE ACCESS%'
                  AND hsp.options LIKE '%FULL%'
                  AND hsp.sql_id = hs.sql_id
                  AND hs.snap_id = sn.snap_id
         GROUP BY owner,
                  object_name,
                  hs.sql_id,
                  ds.bytes
         ORDER BY object_name)
  SELECT src.*, a.sql_text
    FROM src LEFT OUTER JOIN v$sqlarea a ON a.sql_id = src.sql_id
   WHERE last_used > SYSDATE - 7
ORDER BY object_name, last_used

The size of the object is denoted in the size_in_mb column, to help identify the high-value items.

If the sql_text column gets truncated, query the view below instead using the SQL_ID. Some parsing may be required, or use LISTAGG to concatenate the pieces into a single value (a sketch follows the query below).

select sql_text from   v$sqltext
 where sql_id = 'b2t8xugd549k5'
 order by piece
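A rough LISTAGG version of the same lookup (same hard-coded sql_id; note that LISTAGG output is limited to 4,000 characters unless extended string sizes are enabled, so very long statements can still get cut off):

  SELECT sql_id,
         LISTAGG (sql_text, '') WITHIN GROUP (ORDER BY piece) AS full_sql_text
    FROM v$sqltext
   WHERE sql_id = 'b2t8xugd549k5'
GROUP BY sql_id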

Bandwidth Leeching – latimes.com

I was using Web Inspector to check out how websites were using Facebook cookies for ad tracking when I stumbled upon the latimes.com home page using anywhere from 14 to 50 MB.

I was surprised to see 900+ HTTP requests go through in 14 seconds, with a total of 20 MB of bandwidth usage, all for responsiveness and ad tracking. The page keeps lazy loading content without any interaction.

[Screenshot: bandwidth usage on the latimes.com home page]

I let the page sit there for about a minute; it ended up downloading about 50 MB of junk and redirected me to a page asking me to download a Flash Player that I don't trust, yikes. Do you think this is responsible design?



Excel Slowness – Refreshing External data

We recently ran into a performance issue: an Excel 2013 workbook executing a query against an Oracle database. Refreshing the worksheet took several minutes, at a rate of 100+ records every few seconds, and the behavior exhibited itself on only one machine. If the query is set to execute in the background, a status like the one shown below shows up in the status bar.

[Fig 1.0: Excel status bar during a background query refresh]

I used Process Monitor to troubleshoot this performance issue. I reset all the filters, started the capture and refreshed the Excel worksheet for a couple of seconds so that I had enough events to work with. I then right-clicked on the Process Name column and filtered to view only events generated by "EXCEL.EXE". This showed a lot of write activity to a particular sql.log file. The file was too big to view in Notepad++ or Notepad, but the good old DOS type command piped through more gave me a glimpse of the trace events.
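Something along these lines, where the path is just a placeholder for wherever the trace file was being written:

REM page through the trace file one screen at a time (path is a placeholder)
type C:\SQL.LOG | more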


Process Explorer showed that the handle to sql.log was being held by a service host and not by Excel. This confirmed my suspicion that a process outside of Excel had system-wide debug or trace logging enabled. Since the worksheet was using a DSN, the next step was to go to the ODBC Data Source Administrator console (Control Panel). The Tracing tab showed that a session-level trace was enabled. Stopping the trace resolved the performance issue.


Process Monitor and Process Explorer are important tools IMO.