This section includes the following topics:

- How to identify performance problems
- How to investigate a finding
- About J2EE findings

How to identify performance problems

In most cases, a Precise for J2EE workspace displays information in the context of a specific instance and time frame. However, if you want to view findings for another time frame or instance, you can change these settings using the respective drop-down lists.

See About J2EE findings.

How to investigate a finding

When investigating findings, it is recommended to handle them according to their severity, starting with the most severe.

To investigate a finding

  1. In the All J2EE Instances table, select the instance you want to investigate. The top findings for the selected instance are displayed in the Findings table in the Finding Details area of the workspace.
  2. In the Findings table, identify the findings with the highest severity rankings (red, orange, and yellow icons, where red is the highest severity and yellow the lowest).
  3. In the Findings table, select a finding to analyze the problem further.
  4. In the expanded view of the selected finding, read the data displayed for the finding and follow any links provided to view additional information (advice) or next steps (bullets), then perform the recommendations that best suit your needs.

About J2EE findings

The Findings table displays several types of findings that help the user pinpoint performance problems in a workspace.

The following is a list of the current Precise for J2EE findings for an instance or method:

Heavy Entry Point

Heavy entry points may indicate a performance irregularity in the entry point's underlying call path. When viewing information for "All Instances" in the execution tree, a heavy entry point finding that appears across multiple instances clarifies whether the irregularity results from the entry point itself or from a specific instance.

The finding is based on total response time values, whereas, by default, the execution tree is displayed according to average response time. As a result, the specified heavy entry point is not necessarily found near the top of the execution tree.

Working with the finding

Frequent SLA Breaches

SLA thresholds are defined to help the user pinpoint HTTP entry points experiencing performance issues according to specific criteria. Frequent SLA breaches and near breaches can be caused by an underlying performance issue.

Working with the finding

To effectively locate the root cause of the performance finding, perform one of the following:

Heavy Method Contribution

A high work time for a specific method (reflecting the method's own work time, excluding its underlying call path) can indicate a performance issue within the context of that method.

In the same way, a high work time for a specific occurrence of a method invoked multiple times in the execution tree can indicate a performance issue within the context of that specific occurrence.

Working with the finding

By default, information is displayed for the heaviest method's call path. To investigate the other call paths, select them from the execution tree (they are highlighted in bold).
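
As a generic illustration, independent of Precise for J2EE, the following sketch shows a method whose own work time (rather than that of its callees) dominates: repeated string concatenation in a loop copies the accumulated string on every iteration, whereas a StringBuilder keeps the method's own work linear. The class and method names are hypothetical.

```java
public class StringBuildExample {
    // High work time inside this method itself: each += copies the whole
    // accumulated string, so the total work grows quadratically with input size.
    static String joinSlow(String[] parts) {
        String result = "";
        for (String part : parts) {
            result += part + ",";
        }
        return result;
    }

    // Same logic with StringBuilder: the method's own work time stays linear.
    static String joinFast(String[] parts) {
        StringBuilder result = new StringBuilder();
        for (String part : parts) {
            result.append(part).append(',');
        }
        return result.toString();
    }
}
```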

Excessive CPU Usage

High CPU usage may indicate that the JVM is experiencing a lack of resources. This influences the performance of applications running on the JVM.

High CPU consumption can be directly related to your JVM's activity. However, it can also result from a general lack of free CPU resources on that server.

Before working with the finding, verify that the high CPU consumption is caused by your JVM's activity and not by a general lack of free CPU resources on the server.

Working with the finding

To obtain a clearer view of your application's overall CPU consumption trends, it is recommended to increase the time frame of the displayed information.
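
Outside of Precise for J2EE, one way to verify whether the JVM itself or the server as a whole is short on CPU is to compare the process CPU load with the system CPU load. The following sketch (class name hypothetical) assumes a HotSpot-based JVM that exposes the com.sun.management.OperatingSystemMXBean extension; a negative value means the metric is unavailable.

```java
import java.lang.management.ManagementFactory;

public class CpuLoadCheck {
    public static void main(String[] args) throws InterruptedException {
        // The com.sun.management extension exposes per-process and system-wide CPU load
        // on HotSpot-based JVMs; other JVMs may not provide these values.
        // On newer JDKs, getCpuLoad() replaces the deprecated getSystemCpuLoad().
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();

        for (int i = 0; i < 5; i++) {
            Thread.sleep(1000);
            double processLoad = os.getProcessCpuLoad(); // CPU used by this JVM (0.0-1.0)
            double systemLoad = os.getSystemCpuLoad();   // CPU used by the whole server (0.0-1.0)
            System.out.printf("JVM CPU: %.0f%%  Server CPU: %.0f%%%n",
                    processLoad * 100, systemLoad * 100);
        }
    }
}
```

If the server-wide load is high while the JVM's own load is moderate, the bottleneck is likely outside this JVM.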

Excessive Garbage Collection Time

The application spends too much time performing Garbage Collection activities on the specified JVM. This can result from several causes, for example an undersized heap, a high rate of temporary object allocation, or a memory leak.

Working with the finding

To obtain a clearer view of your application's overall Garbage Collection times trends, it is recommended to increase the time frame of the displayed information.
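
As a complementary, product-independent cross-check, the JVM's standard management beans report the cumulative time each collector has spent in Garbage Collection since startup. The following sketch (class name hypothetical) prints those totals and the rough percentage of uptime spent in GC.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeCheck {
    public static void main(String[] args) {
        long totalGcMillis = 0;
        // Each collector (for example, young and old generation) reports its own totals.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            totalGcMillis += gc.getCollectionTime();
        }
        long uptimeMillis = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("Time in GC since JVM start: %.1f%%%n",
                100.0 * totalGcMillis / uptimeMillis);
    }
}
```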

Memory Usage Near Maximum

Each JVM has a defined maximum allowed heap size. Reaching the maximum allowed heap size can cause the JVM, and with it the application, to crash. This finding is triggered when the peak used heap gets dangerously close to that limit.
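
As a product-independent illustration, the standard MemoryMXBean reports how close the used heap is to the configured maximum (the -Xmx limit). The following sketch (class name hypothetical) prints the current headroom.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapHeadroomCheck {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long max = heap.getMax();   // the -Xmx limit, or -1 if undefined
        long used = heap.getUsed();
        if (max < 0) {
            System.out.println("No explicit maximum heap size is defined.");
        } else {
            System.out.printf("Heap used: %d MB of %d MB (%.0f%%)%n",
                    used / (1024 * 1024), max / (1024 * 1024), 100.0 * used / max);
        }
    }
}
```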

Working with the finding

High Exceptions Rate

Exception handling has a high price in terms of processing time. It also takes time away from performing the application's business logic. Therefore, a high exceptions rate can significantly influence application performance.

A high exceptions rate can also indicate that the application code is using exceptions as part of its natural flow, and not only as a means to handle errors and unexpected situations.
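
The following hedged sketch (hypothetical class and method names) illustrates the "exceptions as natural flow" pattern mentioned above: parsing frequently invalid input by catching NumberFormatException throws an exception for every bad value, while validating first reserves exceptions for genuinely unexpected errors.

```java
public class ExceptionFlowExample {
    // Anti-pattern: using an exception for an expected, frequent condition.
    static int parseOrZeroWithException(String value) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            return 0;   // thrown for every non-numeric input -> high exceptions rate
        }
    }

    // Preferred: validate first so exceptions are reserved for unexpected errors.
    static int parseOrZeroWithCheck(String value) {
        if (value == null || !value.matches("-?\\d+")) {
            return 0;
        }
        return Integer.parseInt(value);
    }

    public static void main(String[] args) {
        System.out.println(parseOrZeroWithException("abc")); // 0, via a thrown exception
        System.out.println(parseOrZeroWithCheck("abc"));     // 0, no exception thrown
    }
}
```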

Working with the finding

Excessive Lock Time

A significant percentage of the selected entity's total response time is spent waiting for lock acquisitions.

Waiting for locks means that the online activity (or batch process) is on hold until another activity or process releases the lock. Typically, eliminating locks can improve the performance of your transactions.

A possible solution is to consider tuning your transaction performance to reduce the time spent waiting for locks.
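
One common way to reduce time spent waiting for locks is to shrink the critical section so that expensive work is done outside the lock. The following sketch (hypothetical class with a purely illustrative workload) contrasts the two variants.

```java
public class LockScopeExample {
    private final Object lock = new Object();
    private long total;

    // Before: the expensive computation runs while holding the lock,
    // so other threads spend most of the call waiting for the lock.
    public void addSlow(int input) {
        synchronized (lock) {
            long result = expensiveComputation(input);
            total += result;
        }
    }

    // After: only the shared-state update is protected, so lock wait time drops.
    public void addFast(int input) {
        long result = expensiveComputation(input); // no lock held here
        synchronized (lock) {
            total += result;
        }
    }

    private long expensiveComputation(int input) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += (long) input * i;
        }
        return sum;
    }
}
```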

Working with the finding

Slow DB Request Execution

A significant percentage of the selected entity's total response time is spent waiting for a specific DB request execution.

A possible solution is to tune your DB requests to optimize transaction performance.
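
As a generic illustration of isolating a slow DB request on the application side, the following sketch times a single parameterized query over plain JDBC. The connection URL, credentials, table, and column names are placeholders, and the timing only approximates what Precise for J2EE measures.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SlowQueryCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; a matching JDBC driver must be on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "appuser", "secret")) {

            String sql = "SELECT id, status FROM orders WHERE customer_id = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, 42L);

                long start = System.nanoTime();
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        // process rows
                    }
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                // If this consistently dominates the transaction's response time,
                // review the statement's execution plan and the indexing on customer_id.
                System.out.println("Query took " + elapsedMs + " ms");
            }
        }
    }
}
```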

Working with the finding

Slow Web service Execution

A significant percentage of the selected entity's total response time is spent waiting for a specific external Web service execution.

A possible solution is to tune the external Web service to optimize transaction performance.

Working with the finding

Heavy Exit Point

A significant percentage of the selected entity's total response time is spent waiting for external activity (for example, SAP RFCs, Tuxedo service requests, RMI calls, and so on).

A possible solution is to consider tuning your transaction performance to reduce the time spent executing the external activity.

Working with the finding

Significant JDBC Activity

A significant percentage of the selected entity's total response time is spent waiting for DB activity invoked by a JDBC request.

This can result from a specific heavy statement or from a collection of many relatively short statements being executed. A possible solution is to consider tuning your transaction performance to reduce the time spent executing DB statements.
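
When the time is dominated by many short statements rather than one heavy statement, a common tuning option is JDBC batching, which replaces per-statement round trips with a single batched execution. The following sketch uses placeholder connection details and table names.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class JdbcBatchExample {
    // Connection details and table/column names are illustrative placeholders.
    public static void insertAudit(List<String> messages) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "appuser", "secret")) {
            conn.setAutoCommit(false);
            String sql = "INSERT INTO audit_log (message) VALUES (?)";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                // Instead of executing one INSERT per message (many short statements,
                // each with its own round trip), queue them and send a single batch.
                for (String message : messages) {
                    stmt.setString(1, message);
                    stmt.addBatch();
                }
                stmt.executeBatch();
            }
            conn.commit();
        }
    }
}
```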

Working with the finding

Significant External Activity

A significant percentage of the selected entity's total response time is spent waiting for external activity (for example, SAP RFCs, Tuxedo service requests, RMI calls, and so on).

This can indicate either that a specific external activity is experiencing performance issues, or that a large number of relatively short external calls produces a high total response time.

Working with the finding

Tuning Opportunities Detected

The selected method showed a high work time. This may indicate that the selected method is the last branch of the call tree, or that visibility into the selected method's underlying call path can be enhanced by adding instrumentation.

Working with the finding

Locks Detected

While executing the selected context and its underlying call tree, time was spent waiting for lock acquisitions.

While waiting for lock acquisition, the online activity (or batch process) is put on hold until another activity or process releases the lock. Therefore, eliminating locks can improve the performance of your transactions.

Working with the finding

Exceptions Detected

Exceptions were thrown in the selected context or its underlying call tree.

This may cause users (or batch activities) to experience performance or availability issues.

Working with the finding

Unbalanced Activity

The selected entity was invoked in multiple JVMs and encountered significantly different response times across the JVMs.

A transaction executed on a cluster of JVMs may encounter different service levels per execution, depending on the activity of the specific cluster's JVMs executing the transaction.

Working with the finding

Impact on Multiple Entry Points

The selected method/SQL is invoked from multiple call paths and therefore affects the performance of the instance (JVM) in a cumulative manner.

As a result, improving the performance of the selected method/SQL will affect more than its current call path, and may improve the performance of additional paths and transactions that contain the method.

Working with the finding

