This section includes the following topics:

How to identify performance problems

In most cases, a Precise for .NET workspace displays information in the context of a specific instance and time frame. However, if you want to view findings for another time frame or instance, you can change these settings using the respective dropdown lists.

How to investigate findings

When you start investigating the findings, it is good practice to start with the findings that have the highest severity rankings in the Findings table.

To investigate a finding

  1. Identify the findings with the highest severity rankings (red, orange, and yellow icons where red is the highest severity and yellow the lowest) in the Findings table on the Dashboard tab.
  2. Select the finding you want to investigate.
  3. Read the information under the Highlights tab to understand what the problem is about.
  4. After you have studied the information provided, select the Advice tab and follow the recommendation or recommendations that best suit your needs. If you need additional information, read the information available under the Background tab.

Heavy Entry Point

Heavy entry points may indicate a potential performance irregularity in the entry point's underlying call path. When viewing information for "All Instances" in the invocation tree, a Heavy Entry Point finding that appears across multiple instances clarifies whether the irregularity is caused by the entry point itself or by a specific instance.

The finding is based on total service time values, whereas by default the invocation tree is sorted by average service time. As a result, the specified heavy entry point is not necessarily found near the top of the invocation tree.

Working with the finding

To effectively locate the root cause of the performance finding, perform the following:

Frequent SLA Breaches

SLA thresholds are defined to help the user pinpoint HTTP entry points experiencing performance issues according to specific criteria. Frequent SLA breaches and near breaches can be caused by an underlying performance issue.

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

Heavy Method Contribution

A high work time for a specific method (reflecting the method's work time only, without its underlying call path) can indicate a performance issue within the context of that method.

In the same way, a high work time for a specific occurrence of a method invoked multiple times in the invocation tree can indicate a performance issue within the context of that specific occurrence.

Working with the finding

To effectively locate the root cause of the performance finding, perform the following:

The invocation tree opens to the method's heaviest call path, facilitating effective navigation to the root cause. Examine the displayed information, the overtime graph, and the related findings to drill down to the root cause of the performance issue.

By default, information is displayed for the method's heaviest call path. To investigate the other call paths, select them from the invocation tree (they are highlighted in bold).

Excessive Lock Time

A significant percentage of the selected entity's total service time is spent waiting for lock acquisitions.

Waiting for locks means that the online activity (or batch process) is put on hold until another activity or process releases the lock. Typically, eliminating locks can improve the performance of your transactions.

A possible solution is to tune your transaction to reduce the time spent waiting for locks.
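One common way to reduce lock wait time is to narrow the scope of each lock so that it protects only the shared state rather than the entire operation. The following C# sketch is illustrative only; the SharedCounter class and its methods are assumptions for the example, not part of Precise.

  using System.Threading;

  // Illustrative only: a hypothetical counter shared by several threads.
  public class SharedCounter
  {
      private readonly object _sync = new object();
      private int _count;

      // Anti-pattern: the lock is held while slow work runs, so every
      // other thread waits for the whole operation to finish.
      public void IncrementSlow()
      {
          lock (_sync)
          {
              DoExpensiveWork();   // unrelated to the shared state
              _count++;
          }
      }

      // Better: do the expensive work outside the lock and hold the lock
      // only for the brief update of the shared state.
      public void IncrementFast()
      {
          DoExpensiveWork();
          lock (_sync)
          {
              _count++;
          }
      }

      private static void DoExpensiveWork() => Thread.Sleep(50);
  }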

Slow DB Request Execution

A significant percentage of the selected entity's total service time is spent waiting for a specific DB request execution. A possible solution is to tune your DB requests to optimize transaction performance.
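One frequent tuning step is to pass values as parameters instead of concatenating them into the SQL text, so the database can reuse a single execution plan. The ADO.NET sketch below is a hedged example; the Orders table, the CustomerId column, and the connection string are assumptions for illustration.

  using System.Data.SqlClient;

  public static class OrderQueries
  {
      // Illustrative only: "Orders" and "CustomerId" are assumed names.
      public static int CountOrders(string connectionString, int customerId)
      {
          using (var connection = new SqlConnection(connectionString))
          using (var command = new SqlCommand(
              "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId", connection))
          {
              // A parameter lets the database cache and reuse one execution plan,
              // instead of parsing a new statement for every literal value.
              command.Parameters.AddWithValue("@customerId", customerId);
              connection.Open();
              return (int)command.ExecuteScalar();
          }
      }
  }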

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

The SQL and Exit Points tab opens, displaying details for SQL statements that the selected entity or its underlying call tree is waiting for, according to their contribution to the overall performance. From this view of the overall external activity:

Slow Web Service Execution

A significant percentage of the selected entity's total service time is spent waiting for a specific external Web service invocation.

A possible solution is to tune the external Web service to optimize transaction performance.

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

Heavy Exit Point

A significant percentage of the selected entity's total service time is spent waiting for external activity.

A possible solution is to tune your transaction to reduce the time spent executing external activity.

Working with the finding

To effectively locate the root cause of the performance finding, perform the following:

The SQL and Exit Points tab opens, displaying details for exit points that the selected entity or its underlying call tree is waiting for, according to their contribution to the overall performance. DB activity is featured in this list as SQL statements.

Significant ADO.NET Activity

A significant percentage of the selected entity's total service time is spent waiting for DB activity invoked by an ADO.NET request.

This can result from a single heavy statement or from many relatively short statements being executed. A possible solution is to tune your transaction to reduce the time spent executing DB statements.
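When the time comes from many short statements rather than a single heavy one, reducing the per-statement overhead helps. The C# sketch below is illustrative only (the Orders table, the Status and Id columns, and the connection string are assumptions); it reuses one prepared ADO.NET command inside the loop instead of creating a new command per statement.

  using System.Collections.Generic;
  using System.Data.SqlClient;

  public static class StatusUpdater
  {
      // Illustrative only: "Orders", "Status", and "Id" are assumed names.
      // Reusing one connection and one prepared command reduces the
      // per-statement overhead when many short statements are executed.
      public static void MarkShipped(string connectionString, IEnumerable<int> orderIds)
      {
          using (var connection = new SqlConnection(connectionString))
          using (var command = new SqlCommand(
              "UPDATE Orders SET Status = 'Shipped' WHERE Id = @id", connection))
          {
              var idParam = command.Parameters.Add("@id", System.Data.SqlDbType.Int);
              connection.Open();
              command.Prepare();

              foreach (var id in orderIds)
              {
                  idParam.Value = id;          // only the parameter value changes
                  command.ExecuteNonQuery();   // the same plan is reused for every row
              }
          }
      }
  }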

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

The SQL and Exit Points tab opens, displaying details for exit points that the selected entity or its underlying call tree is waiting for, according to their contribution to the overall performance. DB activity is featured in this list as SQL statements. From this view of the overall external activity:

Significant External Activity

A significant percentage of the selected entity's total service time is spent waiting for external activity.

This can indicate either that a specific external activity is experiencing performance issues, or that a large number of external calls, each relatively short, is producing a high total service time.
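When the total is driven by many short calls, reducing the number of calls is often more effective than speeding up each one. The C# sketch below is a hypothetical example (the ExchangeRateCache class and the external lookup delegate are assumptions); it caches results so that repeated identical external calls are served locally.

  using System;
  using System.Collections.Concurrent;

  // Illustrative only: caches the result of a hypothetical external lookup so
  // that repeated calls with the same key do not each pay the external round trip.
  public class ExchangeRateCache
  {
      private readonly ConcurrentDictionary<string, decimal> _cache =
          new ConcurrentDictionary<string, decimal>();
      private readonly Func<string, decimal> _fetchFromExternalService;

      public ExchangeRateCache(Func<string, decimal> fetchFromExternalService)
      {
          _fetchFromExternalService = fetchFromExternalService;
      }

      public decimal GetRate(string currencyCode)
      {
          // Only the first request per currency reaches the external service.
          return _cache.GetOrAdd(currencyCode, _fetchFromExternalService);
      }
  }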

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

The SQL and Exit Points tab opens, displaying details for exit points that the selected entity or its underlying call tree is waiting for, according to their contribution to the overall performance. From this view of the overall external activity:

Tuning Opportunities Detected

The selected method showed a high work time. This may indicate that the selected method is the last branch of the call tree, or that visibility into the selected method's underlying call path can be enhanced by adding instrumentation.

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

Locks Detected

While executing the selected context and its underlying call tree, time was spent waiting for lock acquisitions.

While waiting for lock acquisition, the online activity (or batch process) is put on hold until another activity or process releases the lock. Therefore, eliminating locks can improve the performance of your transactions.
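Where the shared state is simple, an atomic operation can replace an explicit lock entirely, so no thread is ever put on hold. The C# sketch below is illustrative only; the RequestCounter class is an assumption for the example, not part of Precise.

  using System.Threading;

  // Illustrative only: for a simple shared counter, an atomic increment avoids
  // taking a lock at all, so no thread waits for lock acquisition.
  public class RequestCounter
  {
      private long _count;

      public void Record()
      {
          Interlocked.Increment(ref _count);
      }

      public long Current
      {
          get { return Interlocked.Read(ref _count); }
      }
  }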

Unbalanced Activity

The selected entity was invoked in multiple CLRs and encountered significantly different service times across the CLRs.

A transaction executed on a cluster of CLRs may encounter different service levels per execution, depending on the activity of the specific CLR in the cluster that executes the transaction.

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

The Load Balancing tab opens, displaying an overview of the selected entity's behavior across the different CLRs, and providing the ability to select the specific CLR and examine its relative performance against the application's average.

Impact on Multiple Entry Points

The selected method/SQL is invoked from multiple call paths and therefore affects the performance of the instance in a cumulative manner.

Improving the performance of the selected method/SQL will therefore impact more than its current call path, and may improve the performance of additional paths and transactions that contain it.

Working with the finding

To effectively locate the root cause of the performance finding, perform the following:

The Impact tab opens, displaying a list of entry points and call paths affected by the selected method. Tuning method/SQL invocations with notably high impact rates will positively affect the overall performance.

Excessive DB Connection Strings Detected

Your application may be experiencing inefficient connection string usage.

Too many different connection strings were used when connecting from your application to the DB. Each connection string in ADO.NET uses a specific connection pool, so many different connection strings result in poor pooling and connection-opening overhead.
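Because ADO.NET keys its connection pools on the exact connection string text, even small variations create separate pools. The C# sketch below is illustrative only (the server, database, and application names are assumptions); it contrasts composing a slightly different string per call with sharing one canonical string.

  using System.Data.SqlClient;

  public static class Database
  {
      // Illustrative only: server and database names are assumptions.
      // ADO.NET pools connections per distinct connection string text, so even a
      // change in spacing, option order, or an embedded value creates another pool.

      // Anti-pattern: composing a slightly different string per call defeats pooling.
      public static SqlConnection OpenPerUser(string userId)
      {
          var text = "Data Source=DBSRV01;Initial Catalog=Sales;" +
                     "Integrated Security=True;Application Name=App-" + userId;
          var connection = new SqlConnection(text);
          connection.Open();
          return connection;
      }

      // Better: one canonical connection string shared by the whole application,
      // so all requests draw from the same pool.
      private static readonly string CanonicalConnectionString =
          new SqlConnectionStringBuilder
          {
              DataSource = "DBSRV01",
              InitialCatalog = "Sales",
              IntegratedSecurity = true,
              ApplicationName = "App"
          }.ConnectionString;

      public static SqlConnection Open()
      {
          var connection = new SqlConnection(CanonicalConnectionString);
          connection.Open();
          return connection;
      }
  }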

Working with the finding

To effectively locate the root cause of the performance finding, perform the following:

Explicit Calls to Garbage Collection

During the selected time frame, your application made explicit calls to GC methods.

Calling GC methods from application code is not recommended, because the .NET runtime is designed to manage garbage collection by itself. Explicit interference with the garbage collector can harm performance.
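Instead of forcing a collection, release expensive resources deterministically and let the runtime decide when to collect. The C# sketch below is illustrative only; the ReportWriter class and its file handling are assumptions for the example.

  using System;
  using System.IO;

  public static class ReportWriter
  {
      // Anti-pattern: forcing a full collection stalls the application and
      // interferes with the runtime's own collection scheduling.
      public static void WriteAndForceCollect(string path, string content)
      {
          File.WriteAllText(path, content);
          GC.Collect();                  // explicit GC call flagged by this finding
          GC.WaitForPendingFinalizers();
      }

      // Better: release resources deterministically with "using" and leave
      // garbage collection timing to the runtime.
      public static void Write(string path, string content)
      {
          using (var writer = new StreamWriter(path))
          {
              writer.Write(content);
          }
      }
  }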

Working with the finding

To effectively locate the root cause of the performance finding, perform the following: