Left Menu
The left menu consists of the following categories:
- View
- Actions
- Favorites
- Alerts
- Settings
- Reports
- Template
This section describes the actions a user can perform from the left menu.
View
The View menu is further categorized into the following sub-menus/menu items:
- Transactions: This displays details of all transactions in the current test run.
- Show Vector in Title: This is used to view the graph title with a breadcrumb, such as the last 1, 2…5 levels.
- Run Time Progress: This shows the run-time progress of the test. It is available only while the test is running.
- Events: This shows all events generated in the current test run.
- Virtual User Trace: This is used for tracing the users of different groups.
- Stats: This shows the TCP connection information and is available only while the test is running.
- Logs
- Test Output: This shows the test output in the browser as well as in the panel.
- Debug Trace Log: This displays debug trace logs if the Debug feature is enabled in the scenario.
- Pause Resume Log: This is used to pause or resume logs (if any).
- Graph Tree: This is used to position the group in the graph tree at the beginning, end, or a specified level.
- Run Time Changes Logs: This is used to view the list of run-time change logs.
Transaction Details
Any sequence of page clicks is grouped as a transaction. Transactions are added to measure the response time of a particular logical action performed by the user. A user can see the detailed reports by clicking a particular transaction link.
To view a transaction and its details, follow the below mentioned steps:
- Go to View menu and click the Transactions menu item. The Transactions Details page is displayed:
The Transaction Details page contains the following information:
Time Period: User can fetch the transactions for a specified duration, such as the last 1/2/4/6/12 hours, days, weeks, months, and so on. User can also apply a custom time by selecting the Custom option from the drop-down list. On selecting Custom, two options are displayed – Absolute and Elapsed. Absolute time is the exact time. Elapsed time is simply the amount of time that passes from the beginning of an event to its end. In case of Absolute, enter the start date, start time, end date, and end time. In case of Elapsed, enter the start time and end time. The specified date/time should be within the range of the current session. After providing the specifications, click the Apply button.
The transaction details section contains the following columns:
- Transaction Group: This contains a group of transactions executed on a page. A page is considered a group. Each group shows the number of transactions it contains inside brackets (). To view the transaction details of a group, click the expand icon corresponding to that group; all the transactions inside that group are displayed and the icon changes to a collapse icon. To view all the transactions of all the groups, click the expand icon on the header.
- Transaction Name: The name of the transaction executed in the test run. To view the transaction summary report, click the transaction name link.
- Min: This field shows the minimum time (sec) taken by the transaction in a sample period/cumulative.
- Avg: This field shows the average time (sec) taken by the transactions in a sample period/cumulative. Click the link to open the transaction detail report.
- Max: This field shows the maximum time (sec) taken by the transaction in a sample period/cumulative.
- Std Dev: This field shows the standard deviation in the data (sec) in a sample period/cumulative.
- Completed: This field specifies the number of completed transactions in a sample period/cumulative. Click the link to open the transaction instance report.
- Success: This field specifies the number of successful transactions in a sample period/cumulative. Click the link to open the successful transaction instance report.
- Failure (%): This field shows the transaction failure percentage in a sample period/cumulative.
- TPS: This field shows the transactions completed per second in a sample period/cumulative.
- TPS (%): This field shows the percentage of transactions per second in a sample period/cumulative.
- NetCache Hits: This field shows the number of NetCache hits.
Note: The Enable transaction drill down report links check box at the bottom is for enabling/disabling the drill-down report links.
Pause/Refresh Transactions
User can stop auto-refreshing of data in the transaction details table by using the Pause button. To enable auto-refreshing of transaction data again, click the Refresh button. User can also refresh the transaction data at any point of time by using the Refresh button.
Filters in Transaction Details
To enable filters on transactions, user needs to click the Enable Filters button and click the Apply button. Once enabled, user can click the Disable Filters button to disable the filters.
The following options are available for filtering transaction details:
- Average Response time: User can specify a range of average response time and can get the filtered result.
- Failure percentage: User can specify a failure percentage. Transactions with a failure percentage greater than the specified value are displayed.
- TPS: User can specify minimum and maximum TPS value and can get filtered results based on that.
Downloading Transaction Details
User can download the transaction details in PDF, Word, and Excel format using the icons on the bottom left corner of the window.
Show Pie/Donut chart for Failed Transactions
User can view the pie/donut chart for all the failed transactions as well as an individual failed transaction by using the icon on the top-right corner of the window.
The pie chart displays two sections. One section displays the error percentages across all transactions; the other displays the error percentages of the specific transaction.
Chart Operations
User can perform the following operations on the chart by clicking the icon.
- Change Chart: User can change the chart type to donut using the drop-down list.
- Show Chart: User can view various error results, such as overall, top 5, top 10, or top N. On selecting Top N, user needs to specify the number of top failure transactions to view.
Drill-Down Report from Transaction Detail Report
To see drill down report of a particular transaction, click the Transaction Name link, and to see the drill down report of all transactions, click the All Transactions link. When user clicks the All Transactions link, the Transaction Summary report for all transactions is displayed.
The transaction summary report page displays the following details (a computation sketch follows this list):
- Transaction name: Name of the transaction
- Script Count: Number of scripts in the transaction
- Tried: Total number of tries for the transaction
- Success: Number of success tries for the transaction
- Fail: Number of fail tries for the transaction
- % Fail: Percentage of fail tries for the transaction
- Min: Minimum duration of the transaction
- Average: Average duration of the transaction
- Max: Maximum duration of the transaction
- Median: Median of the overall duration of the transaction
- 80%tile: 80th percentile of the transaction duration
- 90%tile: 90th percentile of the transaction duration
- 95%tile: 95th percentile of the transaction duration
- 99%tile: 99th percentile of the transaction duration
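The summary statistics above are standard aggregations over the recorded transaction durations. As a rough illustration only (not the product's actual implementation), the following Python sketch computes the same values from a list of transaction response times; the sample data and counts are hypothetical.

```python
import statistics

def transaction_summary(durations_sec, tried, success):
    """Summarize transaction response times the way the report columns do.

    durations_sec: response times (seconds) of completed transaction instances.
    tried/success: counts taken from the test run; fail = tried - success.
    """
    durations = sorted(durations_sec)

    def pct(p):  # nearest-rank percentile
        idx = max(0, int(round(p / 100.0 * len(durations))) - 1)
        return durations[idx]

    fail = tried - success
    return {
        "Tried": tried,
        "Success": success,
        "Fail": fail,
        "% Fail": 100.0 * fail / tried if tried else 0.0,
        "Min": durations[0],
        "Average": sum(durations) / len(durations),
        "Max": durations[-1],
        "Median": statistics.median(durations),
        "Std Dev": statistics.pstdev(durations),
        "80%tile": pct(80),
        "90%tile": pct(90),
        "95%tile": pct(95),
        "99%tile": pct(99),
    }

if __name__ == "__main__":
    sample = [0.42, 0.55, 0.61, 0.73, 0.80, 0.95, 1.10, 1.42, 2.05, 3.90]
    print(transaction_summary(sample, tried=12, success=10))
```

The percentile calculation here uses the nearest-rank method; the product may use a different interpolation.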
Comparison in Transactions
User can view the comparison of various sets of transactions. To do this, user first needs to apply a comparison using the compare feature of the Web Dashboard. Follow the below mentioned steps to perform comparison between transactions:
- Go to Actions menu on the left pane and click the Compare menu item. The Compare Settings window is displayed.
- Enter the Measurement options and click Add. User can add multiple measurements; here, only one (M1) is added.
- Click Apply. The comparison is displayed on the graph panel.
- Go to View menu and click the Transactions menu item. The Transactions window is displayed.
- On the Compare Settings section, click Apply. The comparison details are displayed.
Show Vector in Title
This feature is used to view the graph title with a breadcrumb, such as the last 1, 2…5 levels. To view an example of this, first select a graph from the graph tree view. By default, graphs are displayed with the last 2 breadcrumb levels. For example – NSAppliance>Tomcat-Appliance.
Now, go to View > Show Vector in Title, and select the level of breadcrumb, such as Last 3. The selected graph is then displayed with the last 3 breadcrumb levels. For example – Cavisson>NSAppliance>Tomcat-Appliance.
Run Time Progress
RunLogic Progress
This section is used to view the RunLogic runtime progress which is configured via script manager. To do this, go to View > Run Time Progress > RunLogic Progress. The Run Logic Progress window is displayed.
This section displays two sub-sections, one is Scenario Group and the other is RunLogic Tree. A scenario group consists of the group name, the script used in the group, and the number of users in that group. The RunLogic Tree consists of blocks and the flows inside each block. It also displays the pages inside the flow that are captured in the script. The details displayed with each page are – count, rate per second, and expected rate per second. User can use the check boxes to view/hide the details.
Events
This shows all event(s) generated in current test run.
User can apply filters on the events. User can also pause/resume and refresh the event logs.
Virtual User Trace
This is used for tracing the users for different group(s). To view virtual user trace of a test run, go to View > Virtual User Trace. The Available Virtual User Trace window is displayed. It contains group name, user profile, type, script name/URL, and number of users.
Select a group and click the User Trace button. The user trace is displayed with session information.
Stats
It shows the TCP connection information and is available only while the test is running. To view the stats of a test run, go to View > Stats. The Stats window is displayed.
Logs
User can view logs, such as test output, debug trace logs, and pause/resume logs.
Test Output
This shows the test output in browser as well as on panel. To view the test output, go to View > Test Output > View in Panel. The Test Output window is displayed.
Pause/Resume Log
This is used to pause or resume logs (if any). To perform this, go to View > Pause Resume Log. The Pause Resume Log window is displayed.
Debug Trace Log
This displays debug trace logs if the Debug feature is enabled in the scenario. To view the debug trace logs, go to View > Debug Trace Log. The Debug Trace Log window is displayed.
Graph Tree
This feature is used to specify the position of the group in the graph tree. To use this feature, go to View > Graph Tree. There are three options available there:
Group at Beginning
Upon selecting this option, group position is at the beginning in the graph tree. This is the default option selected.
Group at End
Upon selecting this option, group is positioned at the end. Below is an illustration:
Group at N Level
Upon selecting this option, user can specify at which level to position the group. For example, if user specifies the group level as two, the groups are positioned at level 2 in the graph tree. Below is an illustration:
Actions
The Actions menu is further categorized into the following sub-menus/menu items:
- Compare: This feature is used to compare two test runs. User can perform the following post-compare actions:
- Disable Compare: To disable the applied comparison and return to the original form.
- Update Compare: To update the inputs provided for comparison.
- Scenario Difference: This displays scenario difference between current and baseline test run.
- Script Difference: This displays script difference between current and baseline test run.
- Server Signature Difference: This displays server signature used in current test.
- Derived Graph: This feature is used to create a derived graph from two or more graphs.
- Color Management: It is used to define the color of the graphs.
- Update User/Session Rate: This updates user/session rate (Online mode only).
- Update Data File: To update a data file; it is supported in online mode only.
- Diagnostics: It is used for diagnostics and contains following options:
- Thread Dump: To take thread dump, analyse thread dump, and schedule thread dump.
- Heap Dump: A heap dump is a dump of the state of the Java heap memory. This is useful for analyzing the memory use an application is making at a point in time, which is handy in diagnosing memory issues and, if done at intervals, in diagnosing memory leaks. User can take a heap dump from here.
- TCP Dump: This option is used to take the TCP dump.
- Mission Control: It is a property through which user can get thread and JVM information for JRockit. This includes two features – Memory Analyzer and Flight Recorder.
- Run command: This is used to run command on server.
- Import data from Access Log file: This imports data from CSV file and access log file.
- Sync Point(s): This shows event log details for the test run.
- Check Profile: It is used to filter graphs based on certain defined conditions.
- Update Running Scenario: This section is used to update sections in the running scenario. It is applicable in online mode only.
Compare
Compare option is used to compare values of two test runs/sessions. Using compare feature of Dashboard, current running test/session can be compared with existing baseline test run/session. Graphs can be compared both in online and offline modes. In online mode, two graphs are displayed, one static baseline graph and one current running test run/session graph. Compare is allowed only for single graphs. User can set a baseline test run/session number from the existing sessions. User can change or remove baseline test run/session number.
Note: In NetStorm, user can compare both test runs and sessions of a test run, but in NetDiagnostics Enterprise (continuous monitoring), user can compare different sessions of a test run only.
Key Points in Comparing Test Runs/Sessions
There are following key points in comparing test runs/sessions:
- Current test run/session can be compared with other test runs/sessions respectively.
- Different instances of current test run/session can be compared.
- User can compare same or different instances of different test runs/sessions.
- Current test run/session is always included in compare. No need to select current test run/session in compare window.
- If compare is done and user changes the time from Graph Time or View by phase, then data changes in all graph panels for current test run/session only.
Comparing Test Runs/Sessions
User needs to follow the below mentioned steps for comparing test runs/sessions:
- On the Actions menu, click the Compare menu item. The Compare Settings window is displayed.
- In the Advanced Settings section, specify whether to include the current test run/session or not using the Include Current Test Run or Include Current Session check box. It is used to include/exclude the current test run/session in the compare. The current test run/session is included by default.
- Enter the Measurement Name in the specified box. Measurement Name is a unique name which is assigned for the test run measurement. It is an alias of one compare settings. Measurement name can be maximum of 25 characters. All characters are allowed except ‘|’ as it is used as separator between fields. Also it can’t be duplicated.
- In case of NetStorm, select the Scenario and Test run.
- Select the time duration from Preset. On selecting Custom, two options are displayed – Absolute and Elapsed. In case of Absolute, enter Start Date, Start Time, End Date and End Time. In case of Elapsed, enter start time and end time. The specified date/time should be in range with current session.
- Next, select the Color by clicking on it. By default, there are colors defined for each compare setting but user can change the color of any compare setting.
- Then, click the Add button. This adds the values of the compare settings from the input fields to the table. The Delete button, on the right side, is used for deleting the data added for a particular session.
- Click the Apply button. After comparison, the graphs are displayed.
Operation on Compared Graphs
Following operations can be performed on compared graphs in Dashboard GUI:
- Zoom: User can apply zoom on the compared graph panel.
- Drag single graph: If user drags a single graph on a compared graph, then that graph is compared if it is present in both test runs. If the graph is not present in the baseline test run, then an alert message is displayed: “Graph is not present in Baseline Test Run. So cannot perform comparison”.
- Drag multiple graphs: If user drags multiple graphs (all graphs of a group) on a compared graph, then only the first graph is compared.
- Load Favorite: If a favorite is loaded on a compared graph, then only the first graph of each panel of the favorite is compared with the baseline test run.
- Change Color: User can change the color of the baseline from the lower panel.
Compare Changes for Auto Scaling
This feature allows a user to compare test runs even if the indices are not available in measurements. In this case, indices are mapped with indices of other measurements.
The following example illustrates this feature.
Graphs of first test run
In this test run, Indices for Sys Stats Linux Extended are:
- Cavisson>Ubuntu54
- Cavisson>NSAppliance
- GUIDevTier>Ubuntu52
- GUIDevTier>Ubuntu51
Graphs of second test run
In this test run, Indices for Sys Stats Linux Extended are:
- Cavisson>Ubuntu54
- Cavisson>NSAppliance
- QATier>Ubuntu47
- QATier>Ubuntu48
Now, on applying compare between these two test runs, the following window is displayed:
In the highlighted section, it can be seen that if indices are not available in a measurement, the system maps them to the indices of the other measurement.
Post Compare Actions
User can perform following actions post compare operation:
- Disable Compare
- Update Compare
- View Scenario difference
- View Script Difference
- View Server Signature Difference
Disable Compare
This feature is used to disable the comparison applied to the test runs. On clicking the Disable Compare menu item, the graphs in the panel are displayed in their original form (without comparison).
Update Compare
This feature is used to update the comparison parameters, such as time, addition/removal of measurements, and so on.
Scenario Difference
This displays scenario difference between current and baseline test run. To view the scenario difference, first user needs to compare two test runs via Actions > Compare. Once compared, go to View > Scenario Difference. The Scenario Difference window is displayed that provides the scenario difference between two test runs. Click the Close button to close the window.
Script Difference
This displays script difference between current and baseline test run. To view the script difference, first user needs to compare two test runs via Actions > Compare. Once compared, go to View > Script Difference. The Script Manager window is displayed that provides the script difference between two test runs.
Server Signature Difference
This displays server signature used in current test. To view the server signature difference, go to View > Server Signature Difference. The Server Signature Difference is displayed.
Derived Graph
Derived graphs are those graphs that are derived from two or more graphs by applying formulas provided by user. Sometimes, there is a requirement to do some analysis on reports, and for that, some calculations need to be done on the report data to get derived data. For example, if the average of 3 reports is needed, the derived data is obtained by adding the results of the 3 reports and dividing the sum by 3. So, derived data is extracting new information from existing data by performing some calculations on it.
Before adding a derived graph in custom metrics, user should be aware of the options in the derived graph window, the format of expressions, and the derived graph rules.
Options in “Add Derived Graph” window
There are following options in “Add Derived Graph” window.
- Operators like +, -, /, *, () are used in formula to add derived graph.
- On the lower side, a text area is shown where the complete formula is displayed; it is editable so that user can modify the formula as per requirements.
- User can manually type the formula or copy paste in a given format. This format is explained in separate section.
- User can add the derived graphs in new group or in existing group by defining the Report Group Name.
- All Derived Graphs nodes are added under custom metrics node.
Format of Expression
- Group name and graph name come in curly brackets { } in the text area.
- Vector name comes in square brackets [ ].
- Vector name is optional as scalar graphs don’t have vector names.
- Format: {Group Name}{Graph Name}[Vector Name]
- Example for scalar graphs:
User can enter the group name and graph name in the following format: {Vusers Info}{Running Vusers}
Derived Graph Rules
If all graphs included in the expression are scalar, then the output of this expression is scalar. That is, it creates one derived graph.
Example:
{Vusers Info}{Running Vusers} + {Vusers Info}{Active Vusers Info}
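The expression format above is essentially a small arithmetic language over graph references. The following Python sketch is a minimal, hypothetical illustration of how such an expression could be resolved against sample values and evaluated; it is not the product's parser, and the graph values are made up.

```python
import re

# Matches {Group Name}{Graph Name} with an optional [Vector Name].
GRAPH_REF = re.compile(r"\{([^}]+)\}\{([^}]+)\}(?:\[([^\]]+)\])?")

# Hypothetical sample values keyed by (group, graph, vector); vector is None for scalar graphs.
SAMPLE_DATA = {
    ("Vusers Info", "Running Vusers", None): 120.0,
    ("Vusers Info", "Active Vusers Info", None): 95.0,
}

def evaluate(expression, data):
    """Replace each {Group}{Graph}[Vector] reference with its value, then
    evaluate the remaining +, -, *, / and parentheses arithmetic."""
    def lookup(match):
        key = (match.group(1), match.group(2), match.group(3))
        return repr(data[key])

    arithmetic = GRAPH_REF.sub(lookup, expression)
    if not re.fullmatch(r"[\d\.\s()+\-*/]+", arithmetic):
        raise ValueError("Unsupported tokens in expression: " + arithmetic)
    return eval(arithmetic)  # safe here: only numbers and operators remain

print(evaluate("{Vusers Info}{Running Vusers} + {Vusers Info}{Active Vusers Info}",
               SAMPLE_DATA))  # -> 215.0
```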
Add a Derived Graph
User needs to follow the below mentioned steps for adding a derived graph:
- On the Actions menu, click the Derived Graph menu item. The Add Derived Graph(s) window is displayed.
- Specify the following derived graph details:
- Report Group Name
- Report Name
- Report Description
Note: If user selects graphs of a scalar group, then the All and Specified options are disabled. The All and Specified options are enabled in case of vector graphs.
- Select the Group Name and Graph Name from the drop-down list and click Add Graph button.
- Select the operator from the list of operators based on the operations performed on the derived graph. The basic operators are displayed on the left side and advanced operators are displayed on the right side of the window.
- Again, add some more graphs required for calculation. For this, select the Group Name and Graph Name from the drop-down list and click Add Graph button.
Note: Repeat steps 4 and 5 for more operations if required.
- After adding the required graph information and operators, the expression looks as displayed below.
- Click the Add button. The derived graph is displayed on the widget and is also added in the custom metrics. Expand the graph to have a better view.
Specified
If user selects Specified option, the Specified Indices Selection window is displayed. There are two tabs displayed – Specified and Advanced. User can select the specified graphs from this section using the Add button.
On clicking Advanced tab, user can add some advanced options for adding graphs.
Select All or other options (Tier, Server, Instance, and Business Transactions). Specify a pattern for the graphs to be added and click Test. A list of tiers is displayed.
Click Apply to apply the settings. The specified pattern is displayed in the Specified text box.
Click the Add Graph button. The graph string is displayed. Click the Add button to generate the derived graph.
The derived graph is displayed. The derived graph is added in the custom metrics with the following options:
- Open All Members
- Merge with Selected panel
- Open Derived Members
- Merge Derived Members
- Edit Derived Graph
Edit a Derived Graph
User can also edit a derived graph to make further modifications. To do this, follow the below mentioned steps:
- Go to Custom metrics section, then go to Metrics > Group > Report level. Right-click at the report level and click the Edit Derived Graph option.
- This displays the Derived metrics in edit mode.
- Update the details as per requirements and click the Update button.
Open all Members
Using this option, all the individual member nodes are opened in individual panels:
Merge with Selected Panel
Using this option, all the members are merged in a selected panel.
Open Derived Members
Using this option, all the indices of members that are involved in the creation of derived graph are displayed on the graph panel in separate widgets:
Merge Derived Members
Using this option, all the derived members are merged on the selected panel.
The options – Merge with selected panel, Open derived members, and Merge derived members are also available at graph level.
Color Management
Color Management feature enables a user to have specific color of the graphs using the hierarchy structure of the graphs. Using this feature, the graphs assigned in a particular hierarchy are displayed with different colors. To use this functionality, follow the below mentioned steps:
- Go to Actions menu and click the Color Management menu item. The Color Management window is displayed.
- Select the hierarchy levels. Here, user needs to define the color by hierarchy level. There are drop-downs based on the maximum number of hierarchical levels in the test. Each drop-down has “Any” and the metadata names of that level in the whole test. If “Any” is selected at any level, then the next drop-down displays all options available at that level; otherwise, it displays options specific to the previous selection.
- Select whether to include graphs in color management. On selecting this, different graphs are displayed with different colors. For example, if user selects Any > Any > Instance in the hierarchy level, then all graphs of the same instance are displayed with the same color. But, on selecting the Include Graphs option, different graphs of the same tier are displayed with different colors.
Example
Case 1: Hierarchy Level – Any > Any > Instance (without selecting include graph option)
In this case, the Elapsed time and CPU graphs of the same instance are displayed with the same color.
Case 2: Hierarchy Level – Any > Any > Instance (with selecting include graph option)
In this case, all graphs of same instance are displayed with different colors.
Update
Using this feature, user can update user/session rate, data file, and running scenario. This is applicable for running test only.
Update User/Session Rate
This option is used to do run time changes configuration that includes updating user(s) and session rate. To update user/session rate, follow the below mentioned steps:
- Go to Actions > Update > Update User / Session Rate. The Run Time Changes Configuration window is displayed. This section is categorized into two tabs – General Settings and Advanced Settings.
General Settings
- Select a user group from the Fix Concurrent User Groups section.
- Change the user configuration (if required), such as user distribution, total virtual users, action (add, remove).
- Increase the session rate (if required) by specifying settings.
- Decrease the session rate (if required) by specifying settings.
Advanced Settings
- This section is used to delete the user options.
- Click the Apply button to apply the settings.
Update Data File
This option is used to update data file. User can append content to a data file or replace the content of a data file with another data file. It is supported in online mode only. To update data file, follow the below mentioned steps:
- Go to Actions > Update > Update Data File. The Update File tab is displayed.
- Select the script name whose content needs to be updated.
- Select the parameter name, such as Id/name.
- Select the mode, such as append/replace.
- Browse the data file(s) with which the selected script’s data file needs to be updated. The file(s) are displayed.
- Click the Apply button.
Update Running Scenario
This option is used to update sections in the running scenario. It is applicable in online mode only.
Diagnostics
This section is used to diagnose issues that occurred in a test run. The Diagnostics menu consists of the following menu items:
- Thread Dump
- Heap Dump
- TCP Dump
- Mission Control
- Java Flight Recording
- Run Command
The detailed description of each section is provided in the subsequent sections.
Thread Dump
A JVM thread dump is a snapshot taken at a given time which provides the user with a complete listing of all created Java threads. Each individual Java thread provides information such as the thread name (often used by middleware vendors to identify the thread), the thread ID along with its associated thread pool name, and its state (running, stuck, etc.).
How to take Thread dump
Benefits
- To see how many threads are there as per configuration
- Analyze the behavior of the threads
- To identify if any thread is stuck (is in waiting/blocked state), etc.
To open the Thread dump window, go to Actions > Diagnostics > Thread Dump. The Thread dump window is displayed. User can filter the thread dump details based on the Tier, Server, and Instances.
Operations on Thread Dump
User can perform the following operations on a thread dump:
- View Thread Dump
- Take Thread Dump
- Analyze Thread Dump
- Schedule Thread Dump
View Thread Dump
This window displays a section from where user can view the thread dump summary and full thread dump.
Thread Dump Summary
This window provides details, such as the tier name, server name, and instance name for which the thread dump was taken, along with the time when it was taken. It also provides the file name along with its path where the thread dump is stored. Apart from this, it also displays the user name and user notes (if any). User can navigate through the pages using the pagination provided and can also download the report (in Word/Excel/PDF format) using the download icons available.
Full Thread Dump
This window also displays a section at the bottom where user can view the full thread dump.
Take Thread Dump
User needs to follow the below mentioned steps for taking thread dump:
- Select tier name and server name from drop down menu.
- To capture all instances, click the All button, and for ND instances, click the ND button. Available instance information is displayed according to the selected tier name and server name. This instance information displays – process ID, instance name, and process arguments along with the log path.
- Select the process ID for which the thread dump is to be taken and click the Take Thread Dump button. The path where the thread dump is stored is displayed in a pop-up box.
Analyze Thread Dump
User can analyse a thread dump based on the selection of the thread dump file from the Thread dump table. This section is categorized into various tabs where user can get further details related to threads, such as:
- Thread State
- Thread Category
- Common Method
- Most Used Methods
- Thread Group
- Deadlock
- Hotstacks
The illustration of each is provided in the subsequent sections:
Thread State
It displays a pie chart that contains percentage of runnable, waiting, and timed waiting threads. On the right, thread details (thread name, priority, state, ID) are displayed. On clicking a thread name, its corresponding stack trace is displayed.
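For reference, this breakdown can also be reproduced outside the dashboard from any standard Java thread dump (for example, jstack output), since each thread entry carries a java.lang.Thread.State line. The sketch below is an illustrative Python counter, not the dashboard's analyzer; the dump file path is a placeholder.

```python
import re
from collections import Counter

STATE_LINE = re.compile(r"java\.lang\.Thread\.State:\s+(\w+)")

def thread_state_counts(dump_text):
    """Count RUNNABLE / WAITING / TIMED_WAITING / BLOCKED threads in a jstack-style dump."""
    return Counter(STATE_LINE.findall(dump_text))

def thread_state_percentages(dump_text):
    counts = thread_state_counts(dump_text)
    total = sum(counts.values()) or 1
    return {state: 100.0 * n / total for state, n in counts.items()}

if __name__ == "__main__":
    with open("thread_dump.txt") as f:  # placeholder path to a previously taken thread dump
        text = f.read()
    print(thread_state_counts(text))
    print(thread_state_percentages(text))
```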
Thread Category
It displays a pie chart that contains percentage of daemon and non-daemon threads. On the right, thread details (thread name, priority, state, ID) are displayed. On clicking a thread name, its corresponding stack trace is displayed.
Common Methods
This displays a bar graph representing a list of common methods along with its count. On clicking a bar, user can view the method details at the right.
Most Used Methods
This displays a bar graph representing a list of most used common methods along with its count. On clicking a bar, user can view the thread details within that method along with stack trace of the selected thread.
Thread Group
This displays a pie chart representing all the thread groups. On clicking a group, user can view associated threads of that group at the right. On clicking a thread name, user can view the stack trace of that thread.
Deadlock
This displays details of the methods that created deadlock along with the thread details.
Hotstacks
This displays a bar graph representing the hotstacks along with its count. On the right, the detailed description of the hotstack is displayed.
Heap Dump
A heap dump is a dump of the state of the Java heap memory. This is useful for analyzing the memory use an application is making at a point in time, which is handy in diagnosing memory issues and, if done at intervals, in diagnosing memory leaks. In the dashboard, an option is provided for taking a heap dump.
Steps to take Heap Dump
User needs to follow the below mentioned steps to take Heap dump:
- Go to Actions > Diagnostics > Heap Dump. A window is displayed; select the tier and server.
- To get all instances of the selected tier and server, click the All button. For ND instances, click the ND button. A list of instances is displayed with the related information, such as process ID, instance name, and process arguments.
- User can take a heap dump after selecting the PID of the particular selected tier with respect to the server.
- Click Take Heap Dump for taking the heap dump.
- Mention the heap dump file name with the full path of storage.
- Click the OK button. After the successful heap dump at the specified path, a confirmation message is displayed.
- For ND instances, user can apply some settings on the heap dump window. These settings include download configuration – whether to keep a copy on the server, whether to download the heap dump file in compressed format – along with some other advanced settings.
- Click the Take Heap Dump button. The system takes the heap dump and displays a confirmation message.
- To download the heap dump files, click the Heap Dump List button. This displays a list of heap dump files; click the download icon corresponding to a file to download it (an equivalent command-line sketch follows these steps).
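Outside the dashboard, a heap dump of a running JVM can be taken with the standard JDK jmap utility; the dashboard automates an equivalent capture for the selected process ID. The sketch below simply wraps that command from Python – the PID and output path are placeholders, not values from the product.

```python
import subprocess

def take_heap_dump(pid, out_file):
    """Take a binary heap dump of a live JVM process using the JDK's jmap tool.

    'live' restricts the dump to reachable objects; the resulting .hprof file
    can then be loaded into a heap analyzer.
    """
    cmd = ["jmap", f"-dump:live,format=b,file={out_file}", str(pid)]
    subprocess.run(cmd, check=True)
    return out_file

if __name__ == "__main__":
    # Placeholder PID and path; in the dashboard these come from the selected instance.
    take_heap_dump(12345, "/tmp/appserver_heap.hprof")
```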
Analyze Heap Dump
Once the heap dump is taken, the user can analyze the heap dump. There are following options from where the user can analyze it:
- From Actions > Diagnostics > Heap dump > Heap dump list > Heap analyzer icon
- From Tier > Server > Instance graph > Widget menu > Diagnostics > Heap dump > Heap dump list > Heap analyzer icon
- From Actions > Diagnostics > Heap dump > Heap dump list > Analyze From local
- From Actions > Diagnostics > Heap dump > Heap dump list > Analyze From NDE Box
- After clicking the Heap Dump Analyzer icon, the heap dump file is parsed and the Java Heap Dump Analyzer window is displayed.
This window further contains the following tabs:
- System Overview
- Leak Suspects
- Top components
System Overview
This tab contains the information about Machine details, system properties, Threads Overview, Top Consumers, and Class histograms.
Leak Suspects
This tab displays the overview of leaks, Problem suspects, and other details.
Top Components
This tab contains information about the top consumers, Retained set, possible memory waste, soft memory stats, and weak memory stats.
Analyze Java Heap Dump from Local
A user can also analyze heap dump from local. For that, the user needs to click the Analyze Heap from Local button.
User needs to select the heap dump file from the local.
The uploaded heap dump file is parsed and displayed on the window.
Note: If user clicks the Close icon, it does not cancel the file upload process; it just hides the file upload status.
TCP Dump
TCP Dump is a powerful and widely used command-line packet sniffer or packet analyzer tool which is used to capture or filter TCP/IP packets that are received or transferred over a network on a specific interface. It is available under most Linux/Unix based operating systems. tcpdump also gives an option to save captured packets in a file for future analysis. It saves the file in pcap format, which can be viewed by the tcpdump command or by Wireshark, an open-source GUI-based network protocol analyzer that reads tcpdump pcap format files.
TCP Dump Usage
- Capture Packets from Specific Interface
- Capture Only N Number of Packets
- Print Captured Packets in ASCII
- Display Captured Packets in HEX and ASCII
- Capture and Save Packets in a File
- Read Captured Packets File
- Capture IP address Packets
- Capture only TCP Packets
- Capture Packet from Specific Port
- Capture Packets from source IP
Taking TCP Dump
To take TCP dump, user needs to open the TCP Dump window. For this, follow the below mentioned steps:
- In the Web Dashboard, go to Actions > Diagnostics > TCP Dump.
- The TCP Dump Settings window is displayed.
- Select the tier name from the drop-down list, the server list is populated according to the selected tier.
- Select the server from the drop-down list.
- On selecting the server, the interface list is populated automatically, select the interface from the drop-down list on which user needs to capture the TCP dump.
- Selecting a server name also fills the destination path automatically. Destination path is the path where the TCP dump is stored. User can also change the destination path according to the requirements.
- User needs to fill some fields, such as Maximum Duration, Size, Number of Packets, and port.
If any field value reaches its specified limit, then the tcpdump is stopped. For example, suppose user specifies eth0 as the interface, a maximum duration of 120 seconds, a size of 20 MB, and 1200 packets. If within 70 seconds the size of the pcap file reaches 20 MB, then the tcpdump is stopped (see the sketch after these steps).
- To specify some extra attributes, mention them in the Additional Attributes field.
- To view the TCP command that is going to be executed based on the specifications provided, click the View TCP Command icon.
- After providing all the required specifications, click the Take TCP Dump button.
- After processing the TCP Dump, a confirmation message is displayed for successful operation.
- Click OK to close the dialog box.
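The View TCP Command icon shows the exact command the product composes. As an illustration only, the sketch below builds and runs a comparable tcpdump capture from Python, using standard tcpdump options (interface, packet count, port filter, output file) and a process timeout for the maximum duration; how the product itself enforces the size limit is not shown here.

```python
import subprocess

def capture_tcp_dump(interface="eth0", packets=1200, port=80,
                     out_file="/tmp/capture.pcap", max_duration=120):
    """Run tcpdump until either the packet-count limit or the duration limit is hit.

    Note: capturing packets typically requires root privileges.
    """
    cmd = [
        "tcpdump",
        "-i", interface,     # capture interface
        "-c", str(packets),  # stop after this many packets
        "-w", out_file,      # write raw packets in pcap format
        "port", str(port),   # capture filter: only this port
    ]
    try:
        subprocess.run(cmd, timeout=max_duration, check=True)
    except subprocess.TimeoutExpired:
        pass  # duration limit reached first; tcpdump is terminated
    return out_file

if __name__ == "__main__":
    print(capture_tcp_dump())
```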
Viewing TCP Dump List
To view the TCP dump list, follow the below mentioned steps.
- Open the TCP Dump Settings window via Actions > Diagnostics > TCP Dump (as mentioned earlier). The TCP Dump Settings window is displayed.
- Click the TCP Dump List button. The TCP Dump List is displayed.
Details of TCP Dump List page:
- Tier: This denotes the tier name selected while taking the TCP dump.
- Server: This denotes the server name (corresponding to the tier) selected at the time of taking TCP Dump.
- TCP Dump Name: This denotes the file name in which the TCP dump is stored. The file is stored in pcap format. To open/view the file, user needs to install Wireshark on the system.
- File Size: This denotes the size of the TCP dump file in KB.
- Duration: This denotes the duration specified (in seconds) in the TCP dump settings.
- Packets: This denotes the number of packets received.
- Max Duration: This denotes the actual duration of the TCP dump (in seconds).
- Size: This denotes the size specified for the TCP dump file in MB.
- Date: This denotes the date when the TCP dump is taken.
- Time: This denotes the time when the TCP dump is taken.
- TCP Command: This denotes the TCP command executed for taking the TCP dump.
Downloading TCP Dump File
To download the TCP dump file, follow the below mentioned steps:
- Select a record from the list.
- Click the Download button on the TCP Dump List window.
Note: User can also download the TCP dump file by clicking the respective TCP Dump Name.
- Once the file gets downloaded, it can be opened using the Wireshark application. User needs to install the Wireshark application on the system.
Note: To download the Wireshark application, click here.
- Open the TCP dump file. It can be viewed as below:
Deleting TCP Dump File
To delete a TCP dump file, select the file first and then, click the Delete button. The system prompts a confirmation message for deletion. Click OK to delete the file.
Mission Control
How to do mission control
Flight Recorder and Mission Control together create a complete tool chain to continuously collect low level and detailed run time information enabling after-the-fact incident analysis.
Prerequisites: JRockit should be installed on the server.
Flight Recorder
Flight Recorder is a profiling and event collection framework which allows Java administrators and developers to gather detailed low level information about how the Java Virtual Machine (JVM) and the Java application are behaving.
Steps for Flight Recording
User needs to follow the below mentioned steps to work with various options in flight recording:
- Go to Actions > Diagnostics > Java Flight Recording. This displays the Java Flight Recording window.
- Select a tier and corresponding server from the list and click the Show All Recordings button.
- To capture all instances, click the All button, and for ND instances, click the ND button. Available instance information is displayed according to the selected tier name and server name. This instance information displays – process ID, instance name, and process arguments along with the log path.
- To download a recording, click the Download Available Recording button. This displays the recordings window from where user can download the file to local by clicking the download icon (an equivalent command-line sketch follows these steps).
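On a current HotSpot JDK, the equivalent of these recording operations can be driven with the standard jcmd utility (JFR.start / JFR.dump / JFR.stop); tooling on the older JRockit JVM mentioned in the prerequisites differs. The Python sketch below is only an illustrative wrapper – the PID, recording name, and file path are placeholders.

```python
import subprocess
import time

def jcmd(pid, *args):
    """Invoke the JDK's jcmd diagnostic command against a running JVM."""
    return subprocess.run(["jcmd", str(pid), *args], check=True,
                          capture_output=True, text=True).stdout

def record_flight(pid, name="rec1", out_file="/tmp/recording.jfr", seconds=30):
    # Start a named recording, let it run for a while, dump it to a .jfr file, then stop it.
    print(jcmd(pid, "JFR.start", f"name={name}"))
    time.sleep(seconds)
    print(jcmd(pid, "JFR.dump", f"name={name}", f"filename={out_file}"))
    print(jcmd(pid, "JFR.stop", f"name={name}"))
    return out_file

if __name__ == "__main__":
    record_flight(12345)  # placeholder PID of the target JVM instance
```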
Run Command
How to execute commands from GUI
This feature is used to run command on server. Follow the below mentioned steps to run command on server:
- Click the Run Command menu item under Actions > Diagnostics. The Run Command window is displayed.
- Select tier name, server name, group name, and command name.
- Other options also get enabled based on the selection; for example, for the process management group, the options – select all processes, full format, long list, and show threads – get enabled. User can select them according to requirements.
- User can also apply certain filters and can save the output on server.
- After specifying the details, click the Run button to run the command. The output is displayed in the Command Output section. To refresh the command output, click the Refresh button.
- User can change the view type and delimiter according to the requirement.
Create Task
Create Task basically adds the current command that user has just fired from GUI as a Task. To add a task, first run a command and click the Create Task button. The Create Task window is displayed.
Enter the task name and task description. Then, click the Create Task button, a confirmation message is popped up stating that the task is saved. Now, once created, that task can be re-run at backend after a specific interval of time or on any date through Scheduler Management. In the subsequent section, the scheduling of task is described.
Scheduler Management
The created task is listed in the scheduler management window, which can be accessed by clicking the Scheduler Management button.
To schedule a task to re-run, double-click over the specified task row, the scheduling options are displayed.
User can schedule it for hourly/weekly/monthly/custom basis. First, select the required tab and provide the details accordingly. Post this, user needs to specify the schedule expiry date and click the Save button. This is the date at which the schedule is expired and the task is not executed further. User can use the icons to enable/disable or delete a task.
Import data from Access Log file
An access log is a file or a combination of files that contains a list of all user interactions or accesses made to the server. Access log files are stored on a server with structure specific to a server type. For example, access log for WebLogic may have different structure than access log for Tomcat or any other application server. During load testing, access log data can be imported and merged in test-run data. This data can then be analyzed for numerous metrics and visually represented to gather meaningful decision-making information for diagnostics purposes.
The access log data analysis can help performance engineering teams to gather insights into application behavior and health by merging access log data in the application test-run data.
This section provides steps to import the external Access log data within a test run and perform in-depth analysis using dashboard.
To view Access Log window
On the Web Dashboard left pane, click Actions > External Monitors > Import Data from Access Log File. The Access Log window is displayed.
Steps for Importing Access Log
User needs to follow the below mentioned steps for importing Access log.
Step – 1: Selecting a Tier
Select a Tier from the list of available tiers in Tier(s) drop-down list. To import a tier, which is not in the list, select ‘Others’, and mention the Tier name. Select only one tier at a time.
Step – 2: Browsing a File
Click the Browse File/Folder icon to browse the file/folder to import access log details. The File Manager dialog box is displayed showing the list of file(s)/folder(s) available on the server.
- Select the file(s)/folder(s) from the list. The default path is root (/), user can navigate to the desired file/folder, or search for a particular file/folder using the search box.
- To view an access log from a file that is currently stored on the client (i.e. on the local system), it first needs to be uploaded to the server. Click the upload icon and select the file to be uploaded. Once it is uploaded on the server, it can be browsed using the Browse File/Folder icon. From the client, user can select one or more files. To upload a folder to the server from the client, zip that folder and upload it.
Note: To view the file details, click the file name. This displays the file details in three tabs – View Used Fields, View All Fields, and View as Text. To delete a file, click the Delete icon.
Step – 3: Enable Auto Hierarchy Mode [Optional]
Auto Hierarchy mode checks whether the folder/file selection made is in the specified format (Tier > Server > Instance). On selecting the Auto Hierarchy check box, user needs to select only one folder (Tier) that constitutes the matching format (Tier > Server > Instance) with its sub-folders, and all the sub-folders/files matching the criteria under that tier are selected automatically. Files at the first level/second level of the folder are not considered; only files at the third level, i.e. the instance level, are considered (see the validation sketch after the note below). If the criteria do not match, the system does not allow selecting the folder/file and displays an alert message. Selecting Auto Hierarchy mode is optional.
If the criteria are matched, the Tier selected in Auto Hierarchy mode overwrites the Tier selected from the drop-down list.
Note: The server name and instance are displayed in the table if Auto Hierarchy mode is enabled; otherwise, user needs to enter the server and instance. The file name is displayed in both cases.
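As a rough illustration of the Tier > Server > Instance check (the product's actual validation logic is not documented here), the sketch below walks a selected tier folder and keeps only files found at the third, instance-level depth, skipping files that lie at the tier or server level.

```python
import os

def instance_level_files(tier_dir):
    """Collect access-log files only at Tier/Server/Instance depth."""
    selected = []
    for server in sorted(os.listdir(tier_dir)):
        server_dir = os.path.join(tier_dir, server)
        if not os.path.isdir(server_dir):
            continue  # file at tier (first) level: ignored
        for instance in sorted(os.listdir(server_dir)):
            instance_dir = os.path.join(server_dir, instance)
            if not os.path.isdir(instance_dir):
                continue  # file at server (second) level: ignored
            for name in sorted(os.listdir(instance_dir)):
                path = os.path.join(instance_dir, name)
                if os.path.isfile(path):
                    selected.append((server, instance, path))
    if not selected:
        raise ValueError("Folder does not match the Tier > Server > Instance format")
    return selected

# Example with a hypothetical path: instance_level_files("/logs/AppTier")
```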
Step – 4: Selecting Access Log Format
Product comes with a well-defined format with different application servers. Each application server has some predefined fields in a particular sequence along with the separator.
Select the Access Log Format from the drop-down list. User can also make his/her own Access log format of the application server. If user selects a different application server, it will change the field number and separator.
To configure the Access log format, click the Setting icon.
Note:
- In the Access log format configuration, user can edit the field number and its associated unit.
- User can also configure the separator – space/tab/others. For others, specify the separator.
- User can save the Access log format with a specified name. Select the Save this Access Log Format as check box, specify the format name, and click Apply. This makes a copy of the application server format.
- To make the selected log format the default selected log format, select the Make <Log Format> as Default Selected Log Format check box.
- Save the configuration by clicking OK.
Step – 5: Getting Access Log Details in Tabular Format
To get the access log details displayed, select the Show Access Log Details check box. This displays the first line of the file in the table. For files whose access log details cannot be displayed due to a wrong log format configuration, the system displays an alert message.
Step – 6: Selecting Access Log Date Time
Select Access Log Date Time. There are following options:
- Import data for test run duration only
- Import data for a specific percentage of test duration (from 0% to 100%)
- Import data for a particular duration (in Hr:Min:Sec)
Step – 7: Selecting Time Zone / Adjustments
Select Time Zone / Adjustments section. There are following options:
- Synchronize all log files with test start time: This will synchronize all log files with the time when the test is started.
- Synchronize all log files with test start time (with offset): Here, user can specify a time duration by which the imported data is shifted ahead or behind.
- Same time zone: This will consider the same time zone where the test was started.
- Specific time zone: User can select specific time zone from the list.
Step – 8: Selecting Metrics Type
These metrics are the graphical representation of access log data. Based on the level of analysis required, user can select from two types of metrics – Standard metrics and Extended metrics. Standard metrics are created for basic analysis whereas extended metrics are created for detailed analysis of access log data.
Standard Metrics
These metrics contain basic details of the access log. The system displays 2 metrics under the Access Log Stats group in the graph panel (a computation sketch follows the table below).
Metrics:
SR No. | Metrics Name | Description |
--- | --- | --- |
1 | Number of requests/second | Number of requests per second |
2 | Average service time (Sec) | Average Service Time in seconds |
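To make the two standard metrics concrete, the sketch below aggregates already-parsed access-log records into per-second request counts and average service time. The record layout (timestamp, service time in seconds) is hypothetical and would in practice follow the configured access log format; this is not the product's importer.

```python
from collections import defaultdict

def standard_metrics(records):
    """records: iterable of (epoch_second, service_time_sec) tuples parsed
    from the access log according to the configured format."""
    per_second = defaultdict(lambda: {"count": 0, "total_time": 0.0})
    for ts, service_time in records:
        bucket = per_second[int(ts)]
        bucket["count"] += 1
        bucket["total_time"] += service_time

    return {
        second: {
            "Number of requests/second": b["count"],
            "Average service time (Sec)": b["total_time"] / b["count"],
        }
        for second, b in sorted(per_second.items())
    }

# Hypothetical parsed records: (timestamp, service time in seconds)
sample = [(1000, 0.12), (1000, 0.20), (1001, 0.35)]
print(standard_metrics(sample))
```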
Extended Metrics
These metrics contain extended details of the access log. System displays 22 metrics under Access Log Stats Extended group in the graph panel.
Metrics:
SR No. | Metrics Name | Description |
--- | --- | --- |
1 | Number of requests/second | Number of requests per second |
2 | Average service time (Sec) | Average Service Time in seconds |
3 | Receive throughput (Kb/Sec) | Receive Throughput in kilobits/sec |
4 | Number of requests 1xx/Sec | Number of requests per second with HTTP status code of 1xx series |
5 | Average service time 1xx (Sec) | Average Service Time in seconds with HTTP status code of 1xx series |
6 | Receive throughput 1xx (Kb/Sec) | Receive Throughput in kilobits/sec with HTTP status code of 1xx series |
7 | Number of requests 2xx/Sec | Number of requests per second with HTTP status code of 2xx series |
8 | Average service time 2xx (Sec) | Average Service Time in seconds with HTTP status code of 2xx series |
9 | Receive throughput 2xx (Kb/Sec) | Receive Throughput in kilobits/sec with HTTP status code of 2xx series |
10 | Number of requests 3xx/Sec | Number of requests per second with HTTP status code of 3xx series |
11 | Average service time 3xx (Sec) | Average Service Time in seconds with HTTP status code of 3xx series |
12 | Receive throughput 3xx (Kb/Sec) | Receive Throughput in kilobits/sec with HTTP status code of 3xx series |
13 | Number of requests 4xx/Sec | Number of requests per second with HTTP status code of 4xx series |
14 | Average service time 4xx (Sec) | Average Service Time in seconds with HTTP status code of 4xx series |
15 | Receive throughput 4xx (Kb/Sec) | Receive Throughput in kilobits/sec with HTTP status code of 4xx series |
16 | Number of requests 5xx/Sec | Number of requests per second with HTTP status code of 5xx series |
17 | Average service time 5xx (Sec) | Average Service Time in seconds with HTTP status code of 5xx series |
18 | Receive throughput 5xx (Kb/Sec) | Receive Throughput in kilobits/sec with HTTP status code of 5xx series |
19 | Number of requests others/Sec | Number of requests per second with other HTTP status code |
20 | Average service time others (Sec) | Average Service Time in seconds with other HTTP status code |
21 | Receive throughput others (Kb/Sec) | Receive Throughput in kilobits/sec with other HTTP status code |
22 | Requests greater than threshold response time (pct) | Percentage of requests which have response time greater than threshold |
Step – 9: Selecting Aggregation Level
Next, select the aggregation level. User can select any or all options. This is a mandatory field for importing access log data.
- Aggregate for All URLs: Aggregation will be done for all URLs of all vectors.
- Aggregate for Specified URLs: Aggregation will be done for specific URLs for all pages. Enter the URL pattern and page name at which aggregation is to be performed.
- Aggregate at Tier level: Aggregation will be done at Tier level and a consolidated vector is created for all servers and instances.
Step – 10: Using Request Line [Optional]
Some application servers support a request line in which the provided Method, URL, and HTTP version come in a common field. At the time of pattern matching, there are two approaches.
Request Line: If the Request Line check box is selected, then the search pattern is applied to the whole request line. If the Request Line check box is not selected, then the system extracts the URL pattern and matches the whole URL in the log file.
Step – 11: Importing Access Logs
Select the server or instance (by clicking over it) in the table and click the Import button to import the access log details. The access log details are displayed on graph panel.
Note: To view the Access log metrics, click the Advance Options icon.
Access Log Details on Graph panel:
Check Profile
A check profile is used to filter graphs based on certain defined conditions. These conditions may be positive or negative for test runs. Check profiles are used in different modules of the product to verify whether tests have passed or failed. They are useful to automate thread dumps, alerts, heap dumps, etc.
Check Profile Management
To open the Check Profile Management window, go to Actions > Check Profile. The Check Profile Management window is displayed.
This window displays a list of available check profiles with their descriptions. User can perform the following operations:
- Add: This adds a new check profile.
- Delete: This deletes the selected check profile. Multiple deletions are also possible.
- Update: User can modify any check profile by selecting the specific row of table.
- Copy: In the copy feature, the system prompts for a new name and by default fills the same name with the prefix CopyOf. For example, if the profile name is Demo and user is making a copy of it, then the suggested profile name is CopyOfDemo. User can either keep the auto-suggested name or update it.
- Close: This closes the Check Profile Management window.
The table in the Check Profile Management window displays following attributes:
- Profile Name: It displays a list of all the profiles created.
- Profile Description: It displays the description of each profile.
- Check Rules: It provides information about number of check rules related with that particular profile.
- Compare Rule: It provides information about number of compare rules related with that particular profile.
- Last Updated On: It provides information on date and time of last update.
- Updated By: It provides information on the user who last updated the profile.
Adding a Check Profile
To add a check profile, user needs to follow the below mentioned steps:
- On the Check Profile Management window, click the Add button. The following window is displayed:
Check Rule
A check rule works based on the defined condition and returns true or false accordingly (a small evaluation sketch follows these steps).
- Enter the following details:
- Profile Name: Specify the name of Check profile. Profile name is mandatory.
- Profile Description: Specify the description of check profile.
- Group Name: Select the specific group from the drop-down list.
- Graph Name: Select the associated graph from the drop-down list.
- Node: If either group type or graph type is a vector, then at least one or all indices needs to be selected.
- Attribute: Select an attribute from the list of attributes.
- Operation: Select an operation from the list of operations and provide the value for comparison on the basis of operation.
- Condition: Define conditions by using Arithmetic as well as Logical operators.
- Rule Name: Define a unique name for the rule.
- Rule Success Criteria: Specify the success criteria either ‘Pass’ or ‘Fail’ for the specific rule.
- Message: Specify a message according to the requirement.
- Click the Save button to save the Check Profile.
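Conceptually, a check rule aggregates the selected metric with the chosen attribute, compares the result against the given value using the chosen operation, and marks the rule passed or failed. The Python sketch below is a simplified, hypothetical evaluator reflecting one possible interpretation of the Rule Success Criteria field; the attribute names, operations, and sample data are illustrative only.

```python
import operator

OPERATIONS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
              "<=": operator.le, "==": operator.eq, "!=": operator.ne}

ATTRIBUTES = {  # how each attribute aggregates the metric samples
    "Avg": lambda xs: sum(xs) / len(xs),
    "Min": min,
    "Max": max,
}

def evaluate_check_rule(samples, attribute, op, value, success_criteria="Pass"):
    """Return (condition_result, rule_outcome) for one check rule."""
    aggregated = ATTRIBUTES[attribute](samples)
    condition = OPERATIONS[op](aggregated, value)
    # Rule Success Criteria decides whether a true condition counts as Pass or Fail.
    outcome = "Pass" if condition == (success_criteria == "Pass") else "Fail"
    return condition, outcome

# Hypothetical rule: average CPU utilization must stay below 80%.
cpu_samples = [55.0, 62.5, 71.0, 68.2]
print(evaluate_check_rule(cpu_samples, "Avg", "<", 80.0, success_criteria="Pass"))
```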
Catalogue Management
Catalogue management provides a common platform for the selection of graphs and storing them as a catalogue. Catalogue means the collection of items in a systematic manner. Here, collection of items is the collection of graphs. This catalogue is saved at a common place, so that it can be used with different modules of Web Dashboard, such as Favorites, Alert, Template, and Pattern Matching by just importing the catalogue as per requirements.
Key Features:
- User is able to save any type of graph (Normal / Derived).
- User is able to save any series type (Simple / Percentile / Slab Count).
- User is able to select any chart type (Line / Bar / Pie / Area / Stacked Area / Stacked Bar).
- User is able to add / delete a catalogue.
Accessing Catalogue Management UI
To access the catalog management UI, follow the below mentioned steps:
- Go to Actions menu and click the Manage Catalogue option.
- This displays the Catalogue Management window. It contains two tabs – Manage Catalogue and Add / Edit Catalogue.
Manage Catalogue
This section contains a list of catalogues with details, such as catalogue name, graph type, description, created by, created on, and actions.
From the actions column, user can Edit or Delete a catalogue. Upon clicking the Edit icon, the created catalogue is displayed in edit mode where user can make changes and overwrite the catalogue settings.
Add / Edit Catalogue
To add a catalogue, go to the Add /Edit Catalogue section and provide the following details:
Catalogue Detail
It contains information such as Catalogue Name and Catalogue Description:
Chart Detail
The Chart Detail section includes the metric type selected by the user. By default, Normal is selected while adding a new catalogue.
Metric Types:
- Normal
- Derived
Normal Metric Type
Derived Metric Type
For the Derived metric type, an additional Derived Metric Name option is displayed under Select Graphs, where the user needs to provide a name for the added derived graph.
Select Graphs
It contains the area to select graphs. If the graph type is Normal, the following window is displayed, where the user can select graphs in normal mode (without performing any mathematical operation, unlike derived graphs).
If the graph type is Derived, the user can apply mathematical operations to get the required derived graph. A minimal sketch of this idea follows.
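The exact expression syntax for derived graphs is product-specific, but conceptually a derived graph is just a mathematical combination of one or more normal graph series. The sketch below assumes two hypothetical metric series and derives a ratio from them purely for illustration.

```python
# Illustrative sketch: deriving one series from two normal metric series.
# The metric names and the ratio formula are hypothetical examples.
errors_per_sec = [2, 0, 5, 1]          # samples of an assumed "Errors/Sec" metric
requests_per_sec = [100, 80, 125, 90]  # samples of an assumed "Requests/Sec" metric

# Derived graph: error percentage = (errors / requests) * 100 per sample.
error_pct = [
    (e / r) * 100 if r else 0.0
    for e, r in zip(errors_per_sec, requests_per_sec)
]
print(error_pct)  # [2.0, 0.0, 4.0, ~1.11]
```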
If the selected group is vector based, there are two options as follows:
- All
- Specified
For “Specified”, a new window is displayed with following tabs:
- Specified Indices Selection
- Advanced Indices Selection
In case of “Specified” Indices selection:
In case of “Advanced” Indices selection, user needs to select the Tier, Server, and Instance:
Tabular View
This displays the graph data in tabular format. Select a normal or derived graph from the graph area above; on clicking this button, that graph is added to the table. It contains following fields:
- Graph/Derived Name
- Metrics/Indices_Derived_Formula
- Metric Type
- Action
Each table row also provides an option, under Action, to delete the graph.
In case of Normal graph type:
In case of Derived graph type:
Favorites
Introduction
A favorite allows a user to save the current view of Real Time Graphs (RTG). User can create a new profile by saving the current view of RTG as a profile. To see the current view of graphs next time in the Web Dashboard, user needs to add the graphs to a favorite. When user loads a saved favorite, all graphs of that favorite are displayed on the graph panel of the Web Dashboard.
Adding Favorite
User needs to follow the below mentioned steps for adding favorite in web dashboard.
- On the left pane, click the Favorites menu, and then click the Add Favorite menu item.
- The Add Favorite dialog box is displayed.
- Enter Favorite name and its description. Favorite description is optional. To make this favorite as public, select the Public check box.
- To set the favorite path, click the Browse button. A window to set the favorite path is displayed.
- Click the Apply button.
Organizing Favorite
User needs to follow the below mentioned steps for organizing favorite in Web dashboard:
- Go to Favorites menu and click the Organize Favorites menu item. A list of favorites is displayed.
- User can perform following actions on the favorites:
- Load: When the user selects a favorite and clicks Load, the graphs of the selected favorite are loaded in the Web Dashboard.
- Edit: This option is used to edit the favorite details, such as favorite name, description, and path.
- Copy: This option copies the favorite. User needs to specify the destination favorite name, favorite description, and destination path.
- Delete: This deletes the favorite from the list.
The user cannot delete a favorite that is currently loaded.
- User Default: This option is used to save the favorite as default for that particular user. Suppose the user saves “AdvanceOpenMergeSysStat” as the default favorite by clicking the User Default button.
On opening the Web Dashboard, graphs of the “AdvanceOpenMergeSysStat” favorite are loaded by default for that user.
- Project Default: This option is used to save the favorite as project default for all users, except those who have set a favorite as user default. An admin user can set this. Suppose the “AdvanceOpenMerge” favorite is saved as project default by clicking the Project Default button.
On opening the Web Dashboard, graphs of the “AdvanceOpenMerge” favorite are loaded by default for all users except those who have set a favorite as user default.
- Select Multiple: A user can select and delete multiple favorites and directories at once (by using the ‘Select Multiple’ button), even if the favorites lie in different directories. On the Organize Favorites window, click the Select Multiple button.
This displays a check box preceding each favorite / directory. The user can select the favorite / directory and click the Delete button for deletion.
Note:
- ‘Admin’ user can delete all favorites.
- Default favorite cannot be deleted.
- Currently loaded favorite cannot be deleted.
- Close: This closes the Organize Favorites window.
Loading a Favorite
To load a favorite from the list of favorites, go to Favorites menu, and click the desired favorite to load. The favorite is loaded and displayed at the top panel. All the related graphs saved in the favorite are displayed on the widget panel.
Save Open/Merge Rule in Favorite
A user can save, in a favorite, the rule applied while performing open/merge on members of the metrics tree. For example, if ‘All’ or ‘Pattern’ is used in any hierarchical component while doing open/merge, the rule is saved in the favorite so that the user can find the new set of values of that component later or in another test.
- On saving the rule for a favorite, the result of this rule overrides the graphs available at each widget.
- This rule is removed if the user performs any action that changes the graph of any widget, for example, dragging a graph from the tree, merging with the selected panel, dual-axis line/bar, or line stacked graph from the tree.
- A rule can be either global or widget-wise.
- A global rule is applicable when the user does open/merge with the new layout option. It is generally used when graphs are displayed on multiple widgets.
- A widget-wise rule is applicable when the user does open/merge without the new layout option. It is generally used while doing merge all.
- Global and widget-wise rules cannot be kept together; one overwrites the other. If the user applies a widget-wise rule, the global rule is removed.
- If the user applies the open/merge operation multiple times, the latest applied rule is saved.
Update / Refresh a Favorite
User can update/refresh favorites on the widgets. On clicking the icon next to the loaded favorite, a few options are displayed – Update, Refresh, Parameters, Filter, and Copy Link.
On clicking Update, the current view of the favorite is updated and saved in the favorite. On clicking Refresh, the list of favorites is refreshed.
Parameterization of Favorite
Using this feature of favorite management, user can parameterize a favorite and can get the results based on the parameterized values provided. For this, user first needs to load a favorite, then click the Parameters option from the icon.
A dialog box is displayed prompting for the parameters to be passed.
Specify the values for parameterization; the parameterized graphs are displayed on the graph panel.
Favorite Filter
This feature is used to filter the graphs that are currently visible on the graph panel. To do this, click the Filter option next to the loaded favorite. A dialog box is displayed.
Specify the filter values and click the OK button. The filtered graphs are displayed on the graph panel.
Copy Link
Using this feature, user can copy the link of the favorite and paste it in another window/new tab to view the favorite separately. The link is provided for the current view of the favorite.
Alerts
What are Alerts
Cavisson products have in-built support for proactive alerting, through which users get notified whenever a KPI (Key Performance Indicator metric, such as CPU utilization, requests per second, or average response time) breaches the threshold configured as part of the alert rules. This allows users to get an early notification of performance degradation even before an actual issue happens.
Types of Alerts
There are following types of alerts:
Capacity Alerts
In this type of alert, alert generation is done based on a fixed threshold value. User needs to define a Rule and Condition with threshold value. If condition is met, then the alert is generated.
Behavior Alerts
In case of Behavior alerts, alerts are generated based on the % deviation from auto-generated baseline trends. The purpose of the dynamic threshold is to:
- Provide adaptive threshold functionality for alerts.
- Generate alerts by identifying the baseline for the current load at a specific hour of the day.
Load Index based Alerts
This is a highly advanced type of baseline alert, unique to the Cavisson NetDiagnostics alert framework, which works based on the load on the system instead of a time-based trend. In this baseline, the alert engine learns the system behavior at all loads and then uses this learning to compare the current data at the current load with the baseline value at that load. Upon closely monitoring application behavior, it is seen that the response time of an application is proportional to the load (PVS – page views per second) on the system. For example, if the load is 10 PVS and the response time is 100 ms, then at 20 PVS the response time also rises, say to 200 ms, and so on. Therefore, if at some point the response time goes to 500 ms at a load of 20 PVS, there is most likely an issue and an alert should be generated. A minimal sketch of this comparison follows.
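To make the idea concrete, the sketch below compares the current response time against a baseline value learned for the current load bucket; the numbers, bucket size, and deviation threshold are illustrative assumptions, not the alert engine's actual values.

```python
# Illustrative sketch of a load-indexed comparison (all values are assumed).
baseline_by_load_bucket = {10: 100.0, 20: 200.0, 30: 310.0}  # load (PVS) -> avg resp time (ms)

def is_anomalous(current_load_pvs, current_resp_ms, max_deviation_pct=50.0, bucket_size=10):
    """Flag the sample if it deviates from the baseline learned at the same load."""
    bucket = (current_load_pvs // bucket_size) * bucket_size
    baseline = baseline_by_load_bucket.get(bucket)
    if baseline is None:
        return False  # no learning yet for this load level
    deviation_pct = (current_resp_ms - baseline) / baseline * 100.0
    return deviation_pct > max_deviation_pct

print(is_anomalous(20, 210))  # False: ~5% above the 200 ms baseline at 20 PVS
print(is_anomalous(20, 500))  # True: 150% above the baseline, so an alert should fire
```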
Alert Properties
Following alert properties are available:
Alert Actions
These refer to the actions associated with an alert, which are executed on a certain trigger. Alert actions are mainly divided into three categories:
- Notification Actions like sending e-mails or SMS or sending SNMP Traps. Besides this, there are multiple extensions available to send notification to third-party programs like ServiceNow, Cisco Spark, and PagerDuty.
- Diagnostics actions like taking TCP Dump, Thread Dump, and Heap Dump.
- Remediation action like running some custom scripts to restart a server/instance upon consuming high CPU/Memory.
Alert Settings
Alert settings are used to configure various parameters required for the configuration of alerts.
Alert Policy
Alert policies enable a user to take different actions when an alert condition occurs and work as a link between alert rules and alert actions. User can configure one or more policies with different actions. Policies are associated with selected or all configured alert rules.
Alert Maintenance Window
Alert maintenance window configuration is required to disable alert generation at the time of maintenance period or a scheduled down-time.
Alert Rules
Alert rule is the key element where user defines the metric and associated threshold to generate an alert for a specific severity.
Alert History
Alert history is useful in obtaining insights into past-generated alerts. Alert history is also used to understand how severity of specific alerts may have changed over a period of time.
The detailed description is provided in the subsequent sections.
Alert Actions
These refer to the actions associated with an alert, which are executed on a certain trigger. Alert actions are mainly divided into three categories:
- Notification Actions like sending e-mails or SMS or sending SNMP Traps. Besides this, there are multiple extensions available to send notification to third-party programs like ServiceNow, Cisco Spark, and PagerDuty.
- Diagnostics actions like taking TCP Dump, Thread Dump, and Heap Dump.
- Remediation action like running some custom scripts to restart a server/instance upon consuming high CPU/Memory.
Create an Alert Action
To create alert actions, follow the below mentioned steps:
- Go to Alert Actions that is displayed in the Alert window.
- Click the Add button to add an alert action. The Add Action window is displayed.
- Specify the name of the action. There are various sections under this, such as Notification, Diagnostics, and Remediation.
Notification
- Under Notification section, select the notification type either email, SMS, or SNMP and specify the details.
- Turn on the Send an Email toggle key.
- Type the Email Receiver.
- Specify the mail subject by clicking the Add button.
- Similarly, specify the Pre Text and Post Text.
SMS
- Turn on the Send an SMS message toggle key.
- Type the phone number in SMS Receiver to which the SMS message needs to be sent.
SNMP
- Turn on the Send SNMP Traps toggle key. The user can also enable or disable the Consolidated notification for a rule.
- Specify the SNMP server and SNMP port.
- Specify the SNMP version. There are three versions:
- v1: On selecting this, specify the community to which the alert is to be sent.
- v2c: On selecting this, specify the community to which the alert is to be sent.
- v3: On selecting this, SNMP v3 security level section is displayed. This section contains three options:
- NO_AUTH_NO_PRIV: Authentication protocol and Privacy protocol are not required. On selecting this, specify the username.
- AUTH_NO_PRIV: Authentication protocol is required, but Privacy protocol is not required. On selecting this, specify the username, select the authentication protocol, and provide the authentication password.
- AUTH_PRIV: Both Authentication protocol and Privacy protocol are required. On selecting this, specify the username, select the authentication protocol, and specify the authentication password. Also, select the privacy protocol and provide the password.
Send Alert to Extension
Select the extension to which the alert is to be sent. Extensions are groups of packages and classes bundled in a single jar and made available to the user when the need arises. Examples of extensions are ServiceNow, PagerDuty, and Cisco Spark.
Forward alerts to a central Dashboard
This feature enables forwarding of alerts from one Cavisson product to another, so that all the alerts are visible on one central dashboard. This is useful in scenarios where the user wants to see alerts from multiple products in a single dashboard. For example, in one environment there are multiple Cavisson products configured, say NetVision (NV) and NetDiagnostics (NDE), and there is a requirement to send all NV alerts automatically to NDE so that the alerts generated from both products are visible at one place. This feature enables:
- Alert integration added for sending alerts to other Cavisson product(s).
- Sending alerts from multiple products to a single Dashboard (many to one).
Diagnostics
This section contains sub-sections for configuration of thread dump, heap dump, and TCP dump.
Thread Dump
- To take a thread dump, turn on the Take a Thread dump toggle key and specify the number of thread dumps along with the interval. Thread dump is applicable for all applications other than node.js.
Heap Dump
- To take a heap dump, turn on the Take a Heap dump toggle key and specify the number of heap dumps along with the intervals. Heap dump is applicable for all types of applications.
CPU Profiling
- CPU profile is used to identify how the virtual machine (VM) has been spending its time when executing a method. To enable CPU profiling, turn on the CPU Profiling toggle key and specify the duration.
TCP Dump
- To take a TCP dump, turn on the Take a TCP dump toggle key and specify – Interface, Max Duration, Size, Number of Packets, and Port. Mention the additional attributes, if required.
Remediation
- To run a script or an .exe file on problematic nodes, turn on the Run a script or executable on problematic Nodes toggle key and type a script name. To run this on the server, turn on the Run on server toggle key.
- After the configuration, click Save.
Advanced Configuration
This feature is used for sending the location in Alert Mail, SNMP Trap, and Cisco Spark notifications. A regex is used to extract locations from alert vectors.
The user needs to provide a pattern to fetch the Indices Identifier ($INDICES_ID) and then test the provided indices identifier pattern on a string. After that, click the Test button to see the result of applying the pattern on the provided string.
Specify the indices identifier in the Subject Mail section along with the pre text and post text.
The email is generated in the below format.
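As an illustration of the regex-based extraction described above, the following sketch pulls a location token out of a sample alert vector; the vector format, the pattern, and the $INDICES_ID substitution shown are assumed for demonstration only and are not the product's actual format.

```python
# Illustrative sketch: extracting a location token from an alert vector with a regex.
# The vector string format and the pattern are hypothetical examples.
import re

alert_vector = "Tier1>Store_NYC_01>Instance2"   # assumed vector string
indices_id_pattern = r"Store_([A-Z]+)_\d+"      # assumed pattern for $INDICES_ID

match = re.search(indices_id_pattern, alert_vector)
location = match.group(1) if match else "UNKNOWN"

# The extracted value would then be substituted into the mail subject template.
subject_template = "ALERT at location $INDICES_ID - high response time"
print(subject_template.replace("$INDICES_ID", location))  # ALERT at location NYC - ...
```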
Import and Export
The Import and Export features are also provided for Alert Action; earlier, they were limited to Alert Rule, Alert Maintenance, and Alert Settings. This feature is used to export an Alert Action and Policy from one machine and import it into another machine. It reduces the effort of adding the same policy and action on each machine.
Import: This feature is used to import data (policy and action) from server or the local machine. The changes can be viewed immediately after refreshing the UI.
Export: This feature is used to export data (policy and action) to the local machine or to other machines. It can then be imported on other machines to see the changes.
Export an Alert Action
A user can export one or more alert actions to server or to a local machine. To export an alert action, follow the below mentioned steps:
1. Select an alert action and click the Export button.
2. This displays the Export window and prompts the user to export to local or to server.
Export to Server
3. To export the alert action to a server, select the Server option and click OK. This prompts the user for the server IP and path.
- Server IP: Specify the IPv4 address of the machine to which the alert action is to be exported.
- Path: Specify the path on that machine where the alert action is to be exported.
4. Click OK. This exports the data at the specified server location and a confirmation message is displayed.
Export to Local
5. To export the alert action to the local machine, select the Local option and click OK.
6. This exports the file to the local machine and a confirmation message is displayed.
Import an Alert Action
A user can import one or more alert actions from server or from a local machine. To import an alert action, follow the below mentioned steps:
1. Click the Import button. This prompts the user to import the alert action from the server or from the local machine.
Import from Server
2. Select the From Server option and click Choose.
3. Browse the file from the server and click Select.
4. This displays the added alert action in the Import window.
5. Click OK. A confirmation message is displayed for successful import.
Import from Local
6. Select the From Local option and click Choose.
7. Select the file from local machine. The added file is displayed in the Import window.
8. Click Upload. A confirmation message is displayed for successful upload of file.
9. Click OK. A confirmation message is displayed that alert action is imported successfully.
Note: Import and Export operations can also be performed in Alert Policy, Alert Rule, Alert Maintenance, and Alert Settings.
Alert Settings
Alert settings are used to configure various parameters required for the configuration of alerts.
- Click the Alert Settings sub-menu within the Alerts menu. The following window is displayed.
Alert Settings window consists of following sections:
Rule Triggering Alert Settings
- Enable/Disable Alerts: Use the toggle button to enable/disable the capturing of alerts. On disabling, the user can remove all active alerts. Once enabled, the user can further enable capacity alerts, behavior alerts, or both. On disabling capacity/behavior alerts, the user can remove all active capacity/behavior alerts or both. On enabling the Alert Policy toggle, the user can enable/disable the alert notifications (e-mail, SMS, SNMP trap, and Extension), diagnostics (thread dump, heap dump, TCP dump, and CPU profiling), and remediation (run script).
- Enable Maintenance Window: To enable/disable the alert maintenance window, use the toggle button.
- Enable Alert History Logging: To enable / disable the saving of alert history logs in the log file.
- Minimum Baseline value criteria for Behavior Alerts generation: This is used to restrict behavior alert generation. The alert is generated only when baseline value is greater than this value.
- Skip number of samples on session restart: Specify the number of samples that need to be skipped upon restart of a session. For that number of samples, alerts will not be generated.
Rest API Triggered Alert Settings
To enable/disable Rest API triggered alerts, use the toggle button. User can specify the time after which the system clears all the Rest API triggered alerts. On disabling the toggle, user can remove all active Rest API triggered alerts.
Debug Settings
Here, user can configure the following:
- Debug log: A debug level is a set of log levels for debug log categories. The value ranges from 0 to 4.
- Profiling log: The value ranges from 0 to 4.
- Baseline view format: Either Basic or Extended. In the ‘Basic’ view, only average/sum values are displayed. In the ‘Extended’ view, the average/sum along with the count is displayed.
Email/SMS Settings
This section is used to configure the email/SMS settings. To open this, click the Email/SMS Settings button at the top-right corner of the window. Email/SMS settings can also be configured from the top menu.
This displays the Email/SMS Settings window:
Provide the required details related to the mail configuration and SMS carrier. Test the configuration and click the Apply button to apply the settings.
Alert Maintenance
Alert maintenance window configuration is required to disable alert generation at the time of maintenance period or a scheduled down-time.
Here, user can add a maintenance schedule and can view the applied maintenance schedules. In addition, user can search or delete a maintenance schedule based on the requirements.
Configure Maintenance Schedule
1. Select the first indices level (i.e. tier). Upon selecting ‘All’, the maintenance schedule is applicable for all tiers. Upon selecting pattern, the user needs to specify a pattern for the tier(s) to which the maintenance schedule is to be applied.
2. Select the second indices level (i.e. server). Upon selecting ‘All’, the maintenance schedule is applicable for all servers. Upon selecting pattern, the user needs to specify a pattern for the server(s) to which the maintenance schedule is to be applied.
3. In the same manner, select the third indices level (i.e. instance).
User can view the further elements of each level of indices by clicking the Test button. For example, if user selects pattern as *Cav at first indices, and upon clicking the Test button, the details are displayed based on the selected pattern.
This functionality can be used at any indices level based on the pattern or for specific tier/ server/ instance.
Note: If a user selects a single tier, the server level displays the details for that tier; if a pattern is applied, the indices or server list is displayed based on the applied pattern. When the user selects a tier, only that tier gets selected, even if other tiers start with the same name.
4. Then, select the schedule type from the following list of options:
- Every Day of Every Month: This maintenance schedule is meant for every day of every month. Select the starting and ending hour at which the maintenance schedule is applicable for each day of each month.
- Day of Every Month: This maintenance schedule is meant for a particular day of every month. First, select the day from the list and then select the starting and ending hour at which the maintenance schedule is applicable for the selected day of each month.
- Last Day of Every Month: This maintenance schedule is meant for last day of every month. Select the starting and ending hour at which the maintenance schedule is applicable for last day of each month.
- Weekday of Every Month: This maintenance schedule is meant for a particular weekday of every month. First, select the week, then day, and then starting and ending hour at which the maintenance schedule is to be applied.
- Day of Every Year: This maintenance schedule is meant for a particular day of a month. First, select the month, then day, and then starting and ending hour at which the maintenance schedule is to be applied.
- Weekday of Every Year: This maintenance schedule is meant for a particular weekday of a year. First, select the week, then weekday, then month, and then the starting and ending hour at which the maintenance schedule is to be applied.
- Custom: This maintenance schedule is meant for a custom duration. Select the starting date and time and the ending date and time at which the maintenance schedule is to be applied.
5. Provide a description for the maintenance schedule and click Add. The system prompts whether to apply the maintenance schedule as soon as the rule is applied. Click Yes.
6. This adds the alert maintenance schedule and displays it in the Applied Maintenance Schedule section.
Other Operations
- To view the maintenance window history, enable the Show Maintenance Window History toggle button.
- To use the advanced filters, click the corresponding icon.
- To delete the selected maintenance schedule and make the schedule in-effective, first select a record and click the corresponding icon.
- To delete the selected maintenance schedule and make the schedule effective, first select a record and click the corresponding icon.
- To download the report in Word, Excel, or PDF format, click the corresponding icon.
Alert Policy
Alert policies enable a user to take different actions when an alert condition occurs. User can configure one or more policies with different actions. Policies are attached to selected or all configured alert rules.
Go to Alert Policy that is displayed in the Alert window.
There are following sections within this:
- Policy Name: Name of the alert policy
- When to Trigger: The policy is triggered when a generated alert satisfies the given criteria. For example, the policy gets triggered only if the severity of the alert changes from Critical to Major or from Critical to Minor.
- Enabled: Indicates whether the policy is enabled or disabled.
- Action(s) to Trigger: Actions to be executed based on trigger, such as taking heap dump, taking thread dump, alert notification via email etc.
Working with Policy
A user can perform following actions with a policy:
- Add a policy: To add a policy, click the Add button.
- Edit a policy: To edit a policy, select the policy and click the Edit button.
- Delete a policy: To delete a policy, select the policy and click the Delete button.
- Copy a policy: To copy a policy, select the policy and click the Copy button.
- Enable a policy: To enable a disabled policy, select the policy and click the Enable button.
- Disable a policy: To disable an enabled policy, select the policy and click the Disable button.
Add a Policy
- Click the Add button on the Alert Policy window. The Add Policy window is displayed.
- Type the policy name. To enable it, select the Enable Policy checkbox.
- Alert mail and SNMP traps can be sent when an alert is generated using the REST API. For that, turn on the Applicable Only for Rest API toggle key. If the user turns it off, the policy is applicable for both – alerts and alerts generated through the REST API.
- In the Policy Events section, turn on the Enable/Disable all criteria toggle key.
- Select the severity of the alert rule, such as Critical, Major, or Minor, for both starting and ending the rule violation.
- Specify the alert rule – Behavior/Capacity/All. To specify alert rules, select Specified Alert Rule(s).
- Click Action.
Add Actions to Execute
This has already been covered in Alert Actions section.
Alert Rules
An alert rule defines the conditions under which alerts are generated. To view the Alert Rule window, go to the Alerts menu and click the Rules menu item. The Alert Rules window is displayed.
There are following columns:
- Status: Represents whether the Rule is enabled/disabled.
- Rule Type: Represents whether the rule is configured for tier level or individual indices level.
- Rule Name: Name of the rule.
- Condition Expression: Displays the summarized view of the conditions. Upon mouse hover, the detailed condition is displayed.
- Alert Message: Message to be displayed when an alert is encountered.
- Alert Description: Description of the alert to understand the cause and other insights.
- Recommendation: Preventive measures to follow in order to improve and overcome the cause of alert generation.
There are following actions a user can apply on rules:
- Edit: To edit a rule
- Add: To add a new rule
- Delete: To delete a rule
- Update: To update a rule
- Copy: To copy a rule
- Close: To close the Alert rule window
Creating an Alert Rule
User can create a rule at the tier level or at the individual indices level. The configuration differs for the two levels; both are described below.
To create an alert rule, follow the below mentioned steps:
1. Click the Add button on the Alert Rules window. This displays the Alert Rule Configuration window.
2. Specify the rule name. To enable it, select the Enable Rule check box.
3. Specify the moving window timelines. For moving window advance settings, click the settings icon. This displays the Moving Window Advanced Settings window.
4. In this window, there are the following options for calculating the graph data value (a minimal sketch of these averaging modes follows the steps below):
- Moving Average: The graph data value for the alert is recalculated on each new sample generated.
- Fixed Window Average: The graph data value for the alert is calculated for a fixed time window as specified.
- Moving Advanced: The graph data value for the alert is calculated for the last / any stated percentage of samples individually with the specified rule condition.
User can also enable checking of logs for a specified duration if the alert remains in the same state, and initiate an action when the ‘Alert rule continues in the same state’ flag is enabled in the policy.
5. To enable alerts for business health rules, select the Business Health Rule check box.
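The difference between the moving-average and fixed-window options described above can be sketched as follows; the window length and sample data are illustrative assumptions only.

```python
# Illustrative sketch of the two basic windowing modes (values are assumed).
from collections import deque

samples = [10, 12, 11, 30, 31, 29, 12, 11]   # metric samples, newest last
window = 4                                    # assumed moving-window length (samples)

# Moving average: recomputed on every new sample over the last `window` samples.
moving = deque(maxlen=window)
moving_averages = []
for s in samples:
    moving.append(s)
    moving_averages.append(sum(moving) / len(moving))

# Fixed window average: one value per non-overlapping block of `window` samples.
fixed_averages = [
    sum(samples[i:i + window]) / len(samples[i:i + window])
    for i in range(0, len(samples), window)
]

print(moving_averages)  # one value per sample
print(fixed_averages)   # one value per fixed block
```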
Applying Rule at Tier Level
Here, user can apply rule at tier level and alert is generated on the overall results of the tier based on the conditions applied.
- First, select the Tier radio button and then select the tier from the drop-down list. User can also apply a pattern for selecting the tiers from the list.
- Specify whether to consider all conditions or any of the condition (for the alert generation) by selecting ‘Every’ or ‘Any’ respectively.
- Provide the condition name.
- Select for which value (such as average, sample count, sum, minimum, and maximum) the alert condition is to be configured.
- Then, select the metric group and metric on which the condition is to be applied.
- Specify the condition and its threshold value. The condition can be one of the following (a minimal sketch of these comparisons follows this list):
Comparing with Baseline Data: The absolute value / percentage specified for the alert condition is compared with the selected trend of the baseline data.
- Increases from Baseline: If the current value increases by the specified value / percentage as compared to the baseline data for that particular trend, an alert is generated. User can also specify the minimum increase from the baseline.
- Decreases from Baseline: If the current value decreases by the specified value / percentage as compared to the baseline data for that particular trend, an alert is generated. User can also specify the minimum decrease from the baseline.
- Changes from Baseline: If the current value changes (either increases or decreases) by the specified value / percentage as compared to the baseline data for that particular trend, an alert is generated. User can also specify the minimum change from the baseline.
Comparing with Current Data: The absolute value / percentage specified for the alert condition is compared with the current data.
- Increases: If the current value increases by the specified value / percentage, an alert is generated. User can also specify the minimum increase value / percentage.
- Decreases: If the current value decreases by the specified value / percentage, an alert is generated. User can also specify the minimum decrease value / percentage.
- Changes: If the current value changes (either increases or decreases) by the specified value / percentage, an alert is generated. User can also specify the minimum change value / percentage.
- Greater than Equals to: If the current value is greater than or equal to the specified value, an alert is generated.
- Less than Equals to: If the current value is less than or equal to the specified value, an alert is generated.
User can also notify the alert engine to consider the condition as true when data is not present for a common index by selecting the Consider as true when data is not present for a common indices check box.
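The sketch below shows, under assumed numbers, how the "increases/decreases/changes from baseline" style of comparison can be evaluated; it is an illustration of the idea, not the product's alert engine.

```python
# Illustrative sketch of threshold comparisons against baseline data (values assumed).
def breaches(current, baseline, condition, pct):
    """Return True if `current` breaches `condition` relative to `baseline` by pct%."""
    change_pct = (current - baseline) / baseline * 100.0
    if condition == "increases from baseline":
        return change_pct >= pct
    if condition == "decreases from baseline":
        return -change_pct >= pct
    if condition == "changes from baseline":
        return abs(change_pct) >= pct
    raise ValueError(condition)

# Baseline average response time 200 ms; alert if it increases by 25% or more.
print(breaches(current=260, baseline=200, condition="increases from baseline", pct=25))  # True
print(breaches(current=230, baseline=200, condition="increases from baseline", pct=25))  # False
```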
Adding Multiple Conditions
User can add multiple conditions and specify whether to consider ‘all’ or ‘any’ of the conditions for alert generation.
- To add a new condition, click the Add icon at the top-right corner. This displays placeholders for the new condition. User can either provide the configuration details manually or copy the configuration from any previous condition and edit it later.
- To copy the configuration from a condition, click the Copy icon of the condition whose configuration is to be copied, and then click the Paste icon of the condition to which the configuration is to be copied.
- To delete a condition, click the Delete icon.
Alert Severity
In this section, user can configure the alert severity based on the indices affected if the specified conditions are satisfied. The severity can be specified either for:
- % of Indices: If the condition is met, user can define the severity (such as critical, major, and minor) based on the percentage of affected indices. For example, if 12 out of 20 indices are affected, then 60% of the indices are affected. If the severity configuration specifies Critical for 80%, Major for 60%, and Minor for 40%, a Major alert is generated in this case, as 60% of the indices are affected. A sketch of this mapping follows the steps below.
- Number of Indices: If the condition is met, user can define the severity (such as critical, major, and minor) based on the number of affected indices.
- Aggregate value of Indices: If the condition is met, user can define the severity (such as critical, major, and minor) based on the aggregate value of indices.
- Individual Indices: If the condition is met, user can define the severity (such as critical, major, and minor) based on the individual indices.
- Specify the Alert message that is to be displayed when the alert is generated.
- Specify the Alert description that provides an insight about the generated alert.
- Specify the recommendation that is to be followed to overcome the alert in a production environment.
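Continuing the "% of Indices" example above, the sketch below maps the percentage of affected indices onto a severity using assumed thresholds; it is only an illustration of the arithmetic.

```python
# Illustrative sketch: mapping % of affected indices to a severity (thresholds assumed).
def severity_from_indices(affected, total, critical_pct=80, major_pct=60, minor_pct=40):
    pct = affected / total * 100.0
    if pct >= critical_pct:
        return "Critical"
    if pct >= major_pct:
        return "Major"
    if pct >= minor_pct:
        return "Minor"
    return None  # no alert

print(severity_from_indices(12, 20))  # 'Major' -> 60% of indices affected
```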
Applying Rule at Indices Level
Here, user needs to specify the condition at indices level corresponding to a severity. User can add multiple conditions for each severity.
- Select the Individual Indices level radio button.
- Specify the condition for Critical severity with the same process as defined in the Tier level alert configuration. In the same way, specify condition(s) for other severity, such as Major and Minor. User can apply multiple conditions for any severity and can mention whether to consider each or any condition for a particular severity.
- For adding a condition, user needs to provide the condition name, value on which the condition is to be applied (average, sample count, sum, minimum, or maximum), metrics group, metrics, and indices.
- For indices selection, multiple options are displayed upon clicking the Select Indices link.
Here user can have following options for indices selection:
- All: Condition is applied on all indices.
- Specified: Condition is applied only on the selected indices. Select the indices by using the corresponding button.
- Advanced: This is an advanced level indices selection. User can also specify pattern for the tier, server, and instances.
- Specify the Alert message, alert description, and recommendation.
Baseline Data Viewer
Alert baseline viewer is used to view the baseline data condition-wise. User can view this if the condition is behavior type. Click the icon at the top-right corner of the window to view the alert baseline.
This displays the Baseline Data Viewer window:
User can download the baseline data in Word, Excel and PDF format.
Baseline
A baseline is a dynamic threshold derived from the historical trend. A baseline can move or change over a period of time based on the current trend. A user can configure the trend timeline to be considered to create a baseline. Behavior alerts work by comparing current data points (metric data) with the baseline data; upon its breach, a behavior alert is triggered.
Baseline Trends
Normally, web applications do not have a constant flow of data. Rather, most of the time, the data flow keeps changing with time. There are times when a high number of transactions take place, whereas in some time intervals minimal transactions happen. Due to this, a baseline trend needs to be created that takes time into consideration, so that current time data is compared with the baseline data of a similar time.
So, one baseline trend can be based on the different time of the day, different day of the week, or any special (event) day. The other baseline trend can be based on the load on the system, as most system and application metrics behave differently at different load values. Support is provided for both types of trends.
Baseline Creation: In case of behavior alerts, alerts are generated based on trends, and trends are defined in the baseline. So, the baseline needs to be configured first.
Type of Trends supported for Baseline in Behavior Alert:
No Trend: It is the most basic type of time-based baseline. In this case, an average over the configured period is taken as the baseline, hence no trend is actually considered. For example, the user may want a baseline for the last 30 days irrespective of the time of day or day of week.
Suppose we need to find the No Trend (single threshold) baseline for the last 14 days per tier/per metric. Then we calculate the average per tier/per metric on the basis of No Trend for the last 14 days. For example, for PVS/Number of nodes, we need to calculate the average of the averages of the last 14 days.
If the sample interval is 2 minutes, we receive 1 sample every 2 minutes, so we have 30 samples every hour. In 1 day, we have 24*30 = 720 samples. For 14 days, we have 14 buckets of 720 samples each. Now, we need to find the overall average for the last 14 days on the basis of No Trend.
In the above screens, select No Trend from the Trends drop-down list. Define the name of the baseline in the Name text box. There are two types of time period. On selecting Moving Time Window, the user needs to fill in the number of last days; that number of last days is counted from the current date. On selecting Specific Time Range, the user needs to fill in the from and to dates with time. Once the user applies the baseline settings, a corresponding entry is displayed in the Available Baseline table.
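The following sketch reproduces the arithmetic above: with a 2-minute sample interval there are 720 samples per day, and the No Trend baseline is simply the overall average across the 14 daily buckets. The random data is only a stand-in for real metric samples.

```python
# Illustrative sketch of the No Trend (single threshold) baseline arithmetic.
import random

SAMPLES_PER_DAY = 24 * 30   # 2-minute interval -> 30 samples/hour -> 720 samples/day
DAYS = 14

# Stand-in data: one bucket of 720 samples per day (random values for illustration).
daily_buckets = [[random.uniform(80, 120) for _ in range(SAMPLES_PER_DAY)] for _ in range(DAYS)]

# Average of the daily averages over the last 14 days = No Trend baseline value.
daily_averages = [sum(day) / len(day) for day in daily_buckets]
no_trend_baseline = sum(daily_averages) / len(daily_averages)
print(round(no_trend_baseline, 2))
```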
Daily Trend: It is a specific type of trend-based baseline where baseline data is computed as an hourly average over the days specified. For example, the user may want to compare the current hour data (say it is currently 4:30 AM) with the average value of all 4:00 AM – 5:00 AM hours for the last seven days.
Suppose we need to find the Daily Trend (24 thresholds per day) for the last 30 days per tier/per metric. Then we calculate the average per tier/per metric on the basis of Daily Trend for the last 30 days. For example, for Web store PVS/Number of nodes, we need to calculate the average of the averages of the last 30 days for each individual hour. We receive 1 sample every 2 minutes, so we have 30 samples every hour. In 30 days, we have 30*30 = 900 samples (approximately) for each individual hour. For 1 day, we have 24 buckets of 900 samples, one per hour, kept in two-dimensional buckets (Days*Hours). Now, we need to find the overall average for the last 30 days on the basis of Daily Trend.
In the above screens, select Daily Trend from the Trends drop-down list. Define the name of the baseline in the Name text box. There are two types of time period. On selecting Moving Time Window, the user needs to fill in the number of last days; that number of last days is counted from the current date. On selecting Specific Time Range, the user needs to fill in the from and to dates with time. Once the user applies the baseline settings, a corresponding entry is displayed in the Available Baseline table.
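The hourly bucketing described above can be sketched as follows: each sample is grouped by its hour of day, and the baseline for a given hour is the average of all samples that fell in that hour over the configured period. The timestamps and values are assumed for illustration; the Weekly Trend simply adds the day of the week as a second grouping key.

```python
# Illustrative sketch of a Daily Trend baseline: 24 hourly thresholds.
from collections import defaultdict
from datetime import datetime

# Stand-in data: (timestamp, value) samples collected over the configured period.
samples = [
    (datetime(2024, 1, 1, 4, 10), 95.0),
    (datetime(2024, 1, 2, 4, 40), 105.0),
    (datetime(2024, 1, 1, 5, 5), 140.0),
]

hourly = defaultdict(list)
for ts, value in samples:
    hourly[ts.hour].append(value)   # bucket by hour of day (0-23)

daily_trend_baseline = {hour: sum(v) / len(v) for hour, v in hourly.items()}
print(daily_trend_baseline)  # {4: 100.0, 5: 140.0} -> 4:30 AM data compares with 100.0
```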
Weekly Trend with Annual Override: Using this option, the baseline is created based on the average hourly value, and the day of the week is also considered. For example, the user may want to compare the current hour data (say it is currently 4:30 AM on a Sunday) with the average value of all 4:00 AM – 5:00 AM hours of all Sundays for the last 30 days. This trend has an additional option to override a particular weekday with any special day.
Suppose we need to find the Weekly Trend (168 thresholds per week) for the last 90 days per tier/per metric. Then we calculate the average per tier/per metric on the basis of Weekly Trend for the last 90 days. For example, for Web store PVS/Number of nodes, we need to calculate the average of the averages of the last 90 days for each individual hour of each day of the week.
We receive 1 sample every 2 minutes, so we have 30 samples every hour. In 90 days, we have 90*30 = 2700 samples (approximately) for each individual hour. For 1 day, we have 24 buckets of 2700 samples, one per hour, kept in two-dimensional buckets (Days*Hours). Also, we need to calculate an overall sum/average in 7 days * 24 hours two-dimensional buckets for each day of the week. Now, we need to find the overall average for the last 90 days on the basis of Weekly Trend.
User can choose a special day in the following ways:
- Day of a month (1–28)
- Last day of every month
- Weekday of a month (1st, 2nd, 3rd, 4th, or last) (Monday through Sunday) of the month
- Day of a year (1–31) of a month (January through December)
- Weekday of a year (1st, 2nd, 3rd, 4th, or last) (Monday through Sunday) of a month (January through December)
- Event Day
Gold Baseline: A single day is used as the baseline – 24 thresholds, one for each hour. Each day is tagged with which golden baseline to use for that specific day; if none is specified, the default one is used (the default may also be the Weekly Trend). Baseline buckets are created and there is no auto-update of the baseline.
The below screen is displayed for the Gold Baseline Day.
Load Based Index Baseline: This is a highly advanced type of baseline, unique to the Cavisson NetDiagnostics alert framework, which works based on the load on the system instead of a time-based trend. In this baseline, the system behavior is learned at all loads and this learning is then used to compare the current data at the current load with the baseline value at that load. On closely monitoring application behavior, it can be seen that the response time of an application is proportional to the load (page views per second) on the system. For example, if the load is 10 page views per second and the response time is 100 ms, then at a load of 20 the response time becomes higher, say 200 ms, and so on. Therefore, if at some point the response time goes to 500 ms at a load of 20, there is most likely an issue and an alert should be generated.
- A load-index based baseline learns the system behavior for all loads.
- All system parameters (system/network/application) are learned for various Page Views Per Second loads.
- Average values are computed over the last 90 days.
In the above screens, select Load Index Based Baseline from the Trends drop-down list. There are two types of time period – Moving Time Window and Specified Time Range. On selecting Moving Time Window, the user needs to fill in the number of last days; that number of last days is counted from the current date. On selecting Specified Time Range, the user needs to fill in the From and To dates with time.
User needs to click the Add button to specify the load settings. The Add Load Setting window is displayed. Fill in the following details (a minimal sketch of the load-bucket averaging follows these steps):
- Tier Name
- Minimum value, maximum value, and bucket size
- Derived graph expression
- Click Add.
Once user applies the setting of baseline, then a corresponding entry is displayed in Available Baseline table.
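The sketch below shows how load buckets of the configured size could be built from historical (load, value) pairs; the bucket size and sample data are assumptions for illustration, not the product's learning algorithm.

```python
# Illustrative sketch: building a load-indexed baseline from (load, value) history.
from collections import defaultdict

BUCKET_SIZE = 10   # assumed bucket size for load (page views per second)

history = [(9, 95.0), (11, 105.0), (19, 190.0), (21, 210.0), (22, 205.0)]  # (PVS, resp ms)

buckets = defaultdict(list)
for load, value in history:
    buckets[(load // BUCKET_SIZE) * BUCKET_SIZE].append(value)

# Average metric value learned per load bucket becomes the baseline at that load.
load_baseline = {bucket: sum(v) / len(v) for bucket, v in buckets.items()}
print(load_baseline)  # {0: 95.0, 10: 147.5, 20: 207.5}
```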
Alert History
Alert history is useful in obtaining insights into past-generated alerts. Alert history is also used to understand how severity of specific alerts may have changed over a period of time. To open the Alert History page, click the Alert History menu item on the Alerts menu. This displays the Alert History window.
This window consists of two panes – left pane and right pane. On the left pane, filters are displayed, and on the right pane, alert details are displayed.
Alert Details
On the right pane, following alert details are displayed – alert type, alert severity, rule type, status, rule name, indices, alert value, time when alert is generated, condition expression (on mouse hover it displays the complete condition), and message displayed on alert generation.
To view detailed information, double-click the alert. A section with detailed alert information is displayed at the bottom of the window.
To hide this detailed section, double-click the alert again.
Show Graph
To view the graph of a particular alert, select the alert and click the Show Graph button. User can select multiple alerts and get their corresponding graphs displayed on the widget panel.
Row Grouping
This feature enables a user to group alerts based on certain parameter. For this, user needs to click the Apply Row Grouping icon on the top-right corner of the window. Post that, user needs to select the grouping parameter from the drop-down list (for example, Rule Name). The alerts are grouped and get displayed on the alert history window.
To remove the row grouping, click the icon again.
Advance Alert Filters
Time Filter: This section is used for time filter, such as last 10 minutes, 30 minutes, and so on.
Alert Severity: This section is used to filter alerts based on severity/state, such as new alerts, continued alerts, upgraded alerts, and downgraded alerts.
Alert Type: This section is used to filter alerts based on alert type such as capacity, behavior, and all.
Alert Rules: This section is used to filter alerts based on alert rules. Alert rules are displayed in a list. User can search for an alert rule using the search icon.
String Filter: This filter is used to filter alerts based on a matching rule name, baseline name, or message.
Topology Filter: This filter is based on topology (tier, server, and instance).
Other: This filter is based on other parameters, such as rule changes, baseline changes, alert setting changes, maintenance window changes, and tomcat changes.
Actions on Alert Filters
- Apply: This applies the selected filter and displays the result accordingly.
- Reset: This resets the filters and selects only the default filter values.
- Clear All: This clears all the specified values for the filters.
- Select All: This selects all the filters.
Global Filter
User can apply global filters (in the form of a string) on the resultant alerts displayed. This is not specific to any column. The filter is applied on all columns of the displayed result.
Hide Filters
To hide all filters from the left pane, click the Hide Filter icon. To get them back, click the same icon again.
Records per page
By default, 20 records per page are displayed. User can change this to 50, 100, or 200.
Columns level Filter
User can also apply column level filter on the generated alerts. To do this, click the Show Filter icon on the top-right corner of the window. The column level filters get enabled and user can use those filters for filtering the alerts.
Delete Records
To delete a record(s), select the record(s) and click the Delete Record(s) icon on the top-right corner of the window. The record gets deleted from the alert history.
Download Alert History
User can download the alert history in word, excel, and PDF format by clicking the icons provided at the bottom of the window.
Active Alert
Active alert window is similar to Alert History window. This window displays all the active alerts generated. The details displayed here are – alert severity, rule type, rule name, alert message, alert time, time ago the alert was generated, indices, alert value, and condition expression.
Here also, user can perform certain actions:
- From the Alert Type drop-down list, user can specify for which alert type (capacity/behavior/all) the details are to be displayed.
- User can also view the alerts based on severity, such as critical, major, or minor.
- User can search for an alert using the Global Filter.
- User can view the graphs of an individual alert or of all alerts.
- User can also navigate to the alert history by clicking the Alert History button.
- To clear alert(s) from the records, select the row and click the Force Clear button.
Active Alert Graphs
This section displays graphs of active alerts. To view Active alert graphs, follow the below mentioned steps:
Go to the Alerts menu on the left pane and click the Active Alert Graphs menu item. The active alert graphs are displayed.
User can perform all the graph operations on alert graphs, such as widget settings, show graph data, add to custom metrics, show graph in tree, view reports, run command, and download.
Alert Stats Report
User can view the stats report of the graph data as generated in the active alert graphs. To view the stats report, follow the below mentioned steps:
Go to Alerts menu on the left pane and click the Alert Stats Report menu item.
The Alert Stats Report is displayed.
In this report, graph details of alert graphs are displayed, such as graph name, minimum value, maximum value, average value, standard deviation, last value, and number of samples.
On clicking a graph link, the graph is displayed.
User can download a graph in PDF, word, and excel format.
Settings
Configuration settings are used to configure the system settings, such as logs settings, percentile settings, transaction details settings, favorite settings, graph settings, and graph tree settings. To open the Configuration Settings window, click the Settings menu on the left pane, then click the Configuration menu item. The Configuration Settings window is displayed.
Using the Configuration Settings window, user can configure the debug level via Logs tab, percentile settings for percentile graphs via Percentile tab, percentile settings for transactions via Transaction Details tab, favorite settings via Favorite Settings tab, graphs gradient settings via Graph Settings tab, and graph tree refresh settings via Graph Tree Settings tab.
Logs
This tab is used to configure the debug levels for logs. User can select the debug level from 0 to 4. This defines information level to be stored at trace level. The default level is 0, which contains the most basic information. Further levels (i.e. 1, 2, 3, and 4) contain more verbose information as the level increases.
To configure the debug level of server logs and dashboard server logs, select the debug level from the debug level section. For module level debug, first select the module name and then select the debug level for that module.
To save the configuration in config file, select the Save in Config File check box and click OK. This makes the settings permanent and applicable for every session. To apply the settings for the current session only, click the OK button (without selecting the Save in Config File check box). To cancel the settings, click the Cancel button. In the similar way, this operation can be performed for other tabs also, such as Percentile, Transaction details, Favorite settings, and so on.
Percentile
A percentile is the value of a variable below which a certain percent of observations fall. So the 20th percentile is the value (or score) below which 20 percent of the observations may be found. The pth percentile is a value so that roughly p% of the data is smaller and (100-p) % of the data is larger. Percentiles can be computed for ordinal, interval, or ratio data.
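As a concrete illustration of the definition above, the short sketch below computes the pth percentile of a small sample using the nearest-rank method; it is only an example of the general concept, not the dashboard's internal calculation.

```python
# Illustrative sketch: pth percentile of a sample using the nearest-rank method.
import math

def percentile(data, p):
    """Return the smallest value at or below which roughly p% of observations fall."""
    ordered = sorted(data)
    rank = math.ceil(p / 100.0 * len(ordered))   # nearest-rank index (1-based)
    return ordered[max(rank, 1) - 1]

response_times = [120, 80, 200, 150, 90, 300, 110, 95, 130, 140]
print(percentile(response_times, 20))  # 20th percentile -> 90
print(percentile(response_times, 90))  # 90th percentile -> 200
```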
To configure the percentile view for a percentile graph, click the Percentile tab on the Configuration Settings window. The Percentile setting section is displayed.
The default percentile views are displayed. Select the percentiles to view from the drop-down list. To select all percentile views, select the Select All check box. To view the default percentiles, click the Select Default Percentiles button. To save the configuration setting in the config file, select the Save In Config File check box. Apply the settings by clicking the OK button. To cancel the settings, click the Cancel button.
Example:
Suppose the user selects 90, 85, 80, and 70 as percentile views and applies the settings.
The percentile graph is displayed with the selected percentile views.
To view the percentile views, open a graph and change the graph type to percentile graph from the widget settings. Once the percentile graph is displayed, double-click on the graph to view the lower pane. The percentile views configured are displayed in the lower pane.
Transaction Details
User can also configure percentile views for transaction as configured for graphs in the above section. To configure percentile view for transactions, click the Transaction Details tab on the Configuration Settings window. The Transaction Details section is displayed.
Select the Allow Percentile Data Creation for Transactions check box to enable percentile data creation for transactions. To view the default percentiles, click the Select Default Percentiles button. User can select the percentiles to view from the drop-down list. User can also select all percentiles by using the Select All check box. To save the configuration setting in the config file, select the Save In Config File check box. Apply the settings by clicking the OK button. To cancel the settings, click the Cancel button.
Example:
Suppose the user selects 99, 95, 90, and 80 as percentile views for transactions and applies the settings.
The selected percentile views are displayed in the Transaction summary report.
To view the Transaction summary report, go to View menu, click the Transactions menu item. Then, click the All Transactions link or a particular transaction link.
Favorite Settings
User can specify whether to apply the default time of favorite or the time specified in the current view. User can also specify whether to load favorite with compare settings. It also provides a feature to enable dynamic web dashboard. For these settings, user needs to click the Favorite Settings tab on the Configuration Settings window. The Favorite Settings section is displayed.
If the user selects the Load Graph Time Settings From Favorite check box, the graph time settings are loaded from the favorite. If the user does not select this check box, the time period of the favorite is not saved and the system considers the last saved value (default value). For example, if the current view of the favorite is last 1 hour and the default view is last 4 hours, then on selecting this check box, the favorite's current value (last 1 hour) is saved; on not selecting it, no value of the favorite is saved and the system takes the last specified value (default value), i.e. last 4 hours, on loading the favorite.
If the user applies a comparison and saves it as a favorite, then, on selecting the Load Favorite with Compare Setting check box, the favorite is loaded with the compare settings.
The Dynamic Web Dashboard feature allows a user to view the most recent updates in the graphs. User can enable the dynamic web dashboard by selecting the Enable Dynamic WebDashboard check box. On enabling this, the graphs are displayed based on the runtime changes.
To save this setting in configuration file, select the Save in Config File check box. Click OK to apply the settings.
Graph Settings
User can also specify whether to apply a gradient color on chart types and up to how many decimals the value is to be displayed on the y-axis. For this, user needs to click the Graph Settings tab on the Configuration Settings window. The Graph Settings section is displayed.
Here, the chart types are displayed. To apply a gradient color on a particular type of chart, select the check box corresponding to that chart type and click OK.
For example, a gradient color applied on a Donut chart.
Graph Tree Settings
User can specify whether to refresh graph tree on the occurrence of run time changes. For this, user needs to click the Graph Tree Settings tab on the Configuration Settings window. The Graph Tree Settings section is displayed.
Reports & Templates
Overview
Reports is a reporting module available in Cavisson products. These reports are generated using templates. User can select the reporting format as Word, HTML, or Excel. A report includes tabular data with or without graphical illustration.
A template is a well-defined structure of information gathered from graphs. User does not need to recreate the file each time; once a template is created, it can be reused. So, if the user finds themselves creating similar reports over and over again, it is a good idea to save one of them as a template. Then the user does not have to format the report each time to make a new one; just use the template and start from there.
Layout of Template Management Window
Template Management window consists of two panes – left pane and right pane. Left pane consists of System templates and User templates. Right pane consists of description of the templates.
Left Pane
- System Templates: These are system-defined templates. User cannot add/edit/delete such templates, but can only view them.
- User Templates: These are user defined templates. User can add/edit/delete/view such templates and can create a report(s) under them.
Right Pane
- Template Name: Name of the template.
- Report Sets: Number of reports under the template.
- Last Modification Date: Date and time when the template was last modified.
- Owner: Owner of the template.
- Template Description: Description of the template.
- Action: Actions that can be performed on the template, such as adding a report, editing the template, and deleting the template, using the corresponding icons.
Creating a Template
User can create a template by following the below mentioned steps:
- On the Report Template Management window, click the icon next to the available templates. The New Template page is displayed.
- Enter the following Template details:
- Template Name: Name of the template.
- Template Description: Description of the template.
- Enter the following Report details:
- Graph Type: Select the graph type, such as Normal graph, percentile graph, or slab count graph from the list. Then, select the chart type based on the selected graph type.
- Chart Type: Graphs are the most important part of any report and need to be selected in all types of templates. All graphs are time-based on the X-axis.
- Simple: A simple graph plots its data on the Y-axis and displays a single graph on each panel.
- Multi: Multi graphs are also called merged graphs. They display multiple graphs on the Y-axis with the same X-axis.
- Tile: Tile graphs display multiple simple graphs in tiled form, showing the graphical view of two or more graphs at the same time in multiple panels from top to bottom.
- Correlated: These graphs show correlation with other graphs.
- Percentile: A percentile is the value of a variable below which a certain percent of observations fall. So, the 20th percentile is the value (or score) below which 20 percent of the observations may be found. The pth percentile is a value so that roughly p% of the data is smaller and (100-p) % of the data is larger. Percentiles can be computed for ordinal, interval, or ratio data.
- Slab count: A slab count graph counts the number of samples in a particular time interval (a minimal sketch of this counting appears after the note below). User can change all graphs to a Slab Count graph; earlier, only those graphs which have a Percentile Data File (PDF) associated with them could be converted.
- Frequency Distribution: This chart is similar to slab count chart, but instead of bars, system represents the data points via dots.
- Multi with Layout: This is similar to multi graphs, but instead of displaying all graphs on the same panel, system displays them in separate panels based on the reports. It is applicable for Word report only.
- Bar: In this case, the line chart is converted to bars.
- Pie: This is useful in case of multiple graphs. If a single graph is converted into a pie, a simple circle filled with one color is displayed. Either the average value or the last value of the graph is used; if the graph type is cumulative, the system displays the sample value, otherwise the average value.
- Area: An area chart displays quantitative data graphically. The area between the axis and the line is commonly emphasized with colors and textures. An area chart is used to represent cumulative data using numbers or percentages over time, and is suited to showing trends over time among related attributes. The area chart is like the plot chart except that the area below the plotted line is filled in with color to indicate volume.
- Stacked Area: In a stacked area chart, the area chart of each graph is stacked on top of the others.
- Stacked Bar: In this case, the graph is converted to a stacked bar chart, in which the bar of each graph is stacked on top of the others. It is useful in case of multiple graphs. If only one graph is converted into a stacked bar, it looks like a normal bar graph.
- Dual Axis Area Chart: In this type of graph, one graph is displayed as area chart and another graph is displayed as line chart. It is also known as multi-line area graph.
- Donut Chart: Data is represented in the form of a donut. User needs to specify the criteria: either last or average value.
- Line Stacked Bar: If user wants to see a combined view of a stacked bar and a line chart, the Line Stacked Bar chart is used. In this chart, the first selected graph is displayed as a line graph and the others are displayed as a stacked bar graph.
Note: User can select scaling on graphs, such as Multi, Tile, Percentile, Slab count, Frequency distribution, Multi with layout, Bar, Area, Stacked Area, and Stacked Bar.
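As a rough illustration of the slab count idea mentioned above (an assumed sketch, not the product's implementation), the following counts how many hypothetical samples fall into each fixed time interval:

```python
from collections import Counter

# Hypothetical (timestamp_in_seconds, value) samples from a test run
samples = [(3, 0.9), (12, 1.4), (17, 1.1), (31, 2.0), (44, 1.8), (47, 0.7), (75, 1.5)]

slab_width = 30  # assumed slab (interval) size in seconds

# Map each sample to its slab index and count samples per slab
slab_counts = Counter(ts // slab_width for ts, _ in samples)

for slab in sorted(slab_counts):
    start, end = slab * slab_width, (slab + 1) * slab_width
    print(f"{start:>3}-{end:<3} sec: {slab_counts[slab]} sample(s)")
```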
- Graph Type
- Normal: If user selects Normal graphs, the graphs are selected as-is from the graph metrics and user cannot perform arithmetic operations on them.
- Derived: If user selects Derived graphs, user can perform arithmetic operations on the graphs.
- Report Set: Enter a valid report set name. The report set name must start with an alphabetic character, can be at most 128 characters long, and may contain alphanumeric characters, spaces, and the special characters /%()_;:,.-
- Group Metric(s): Select the group metrics from the drop-down list. All the available graphs are displayed under Graph Metrics.
- Graph Metric(s):
- In case of Graph type as Normal: Select one or more graphs and click the add icon; the selected graphs are added on the right side under the Selected Graphs area.
Note: To remove a selected graph, first select the graph in the Selected Graphs area and then click the remove icon.
In case of Graph type as Derived: Select a graph from the graph metrics and click the add icon. Then, add any arithmetic operation (that needs to be performed) and select another graph with the same process.
This process can be repeated and user can perform any arithmetic operation (from the available operations) and create a derived graph.
Note: To clear the selected graphs, click the clear icon.
- Click Add Report; the report is added.
Note: User can add any number of reports under a template by following the above-mentioned process. To edit a report, click the edit icon; to delete a report, click the delete icon.
- Filter by Value Options: User can discard the undesired graphs in the report by using the 'Filter by Value' option. The following filter options are available:
- All Non-Zero: Upon selecting this, all non-zero graphs are included in the report.
- All Zero: Upon selecting this, all zero graphs are included in the report.
- Advanced: Upon selecting this, user gets advanced options based on value, such as <, >, <=, >=, Top, Bottom, and In between, which help to achieve the desired output. User can filter graphs by the Min, Max, or Avg value of their graph data, and can use the Include or Exclude options to get the filtered results. The Include/Exclude options include/exclude the graphs that lie within the specified values. For example, if we filter for graphs that have values between 50-100, then Include displays the graphs whose values lie in 50-100, while Exclude skips those graphs and displays the rest. To enable the filters, user needs to enable the toggle button.
- Click the Save button to save the template. The saved template is displayed on the Report Template Management window.
Types of Reports
There are various types of reports, such as:
- Stat
- Compare
- Excel
- Hierarchical
- Summary Report
Stat Reports
Stat reports provide a core set of information about what is going on in a particular business area. A Stat report is a manually designed report that presents data in a manually specified layout. Stat reports can be based on either tabular or graphical data.
Categorization of Stat Reports
Stat reports are further categorized into the following sub-reports based on the view type:
- Tabular Report
- HTML Report
- Word Report
Preset Options
Preset options are used to configure the specific options required to generate the report, such as time duration, view by mode, and time format. The following options are available under this section:
- Presets: This is the time period for which the report needs to be generated, for example, Last 15 minutes, Last 8 hours, and so on. The date and time are filled automatically. On selecting the Custom option, user needs to specify the Start date/time and End date/time.
- View by: In the View by option, user can specify the granularity of aggregated data (such as hours, minutes, or seconds) based on the selected Preset option. For example, if user selects 'Last 15 minutes' as the preset, the 'View by' option contains 'Seconds' and 'Minutes' only; if user selects 'Last 4 hours', 'View by' contains 'Seconds', 'Minutes', and 'Hours' (see the sketch after this list).
- Time Format: On selecting Custom, two options are displayed – Absolute and Elapsed. In case of Absolute, enter Start Date, Start Time, End Date and End Time. In case of Elapsed, enter start time and end time. The specified date/time should be in range with current session.
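The following is a minimal sketch of the rule described above; the exact threshold at which 'Hours' becomes available is an assumption, since only the 'Last 15 minutes' and 'Last 4 hours' cases are documented here.

```python
def view_by_options(preset_minutes):
    """Return the aggregation units offered for a given preset duration (assumed rule)."""
    options = ["Seconds", "Minutes"]
    if preset_minutes >= 60:  # assumed: presets of an hour or longer also allow hourly aggregation
        options.append("Hours")
    return options

print(view_by_options(15))      # Last 15 minutes -> ['Seconds', 'Minutes']
print(view_by_options(4 * 60))  # Last 4 hours    -> ['Seconds', 'Minutes', 'Hours']
```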
Metrics Options
Metrics are the source of capturing the report data. There are various options available to select the metrics:
- All Metrics: On this selection, all metrics are captured for reporting.
- Selected Metrics: On this selection, user needs to specify certain other details, such as Chart type, graph type, Report set name, group metrics, and graph metrics.
- Templates: On this selection, user needs to select the template from the drop-down list.
- Favorite: User can generate the report from a favorite also. To do this, select the Using Favorite radio button and select the favorite from the drop-down list for which the report is to be generated.
Include Charts: To include charts in the report, select this check box.
Override Charts: On selecting this, the original charts are replaced with the chart type selected from the drop-down list.
Filter by Value Options
User can discard the undesired graphs in the report by using the 'Filter by Value' option. The following filter options are available:
- All Non-Zero: Upon selecting this, all non-zero graphs are included in the report.
- All Zero: Upon selecting this, all zero graphs are included in the report.
- Advanced: Upon selecting this, user gets advanced options based on value, such as <, >, <=, >=, Top, Bottom, and In between, which help to achieve the desired output. User can filter graphs by the Min, Max, or Avg value of their graph data, and can use the Include or Exclude options to get the filtered results. The Include/Exclude options include/exclude the graphs that lie within the specified values. For example, if we filter for graphs that have values between 50-100, then Include displays the graphs whose values lie in 50-100, while Exclude skips those graphs and displays the rest (see the sketch after this list). To enable the filters, user needs to enable the toggle button.
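A rough sketch of the Include/Exclude behaviour described above; the graph names, data, and the choice of the Avg statistic are illustrative assumptions, not product internals.

```python
def filter_graphs(graphs, low, high, mode="include", stat="avg"):
    """Keep or drop graphs whose chosen statistic lies within [low, high]."""
    def stat_value(data):
        return {"min": min(data), "max": max(data), "avg": sum(data) / len(data)}[stat]

    selected = {}
    for name, data in graphs.items():
        in_range = low <= stat_value(data) <= high
        if (mode == "include" and in_range) or (mode == "exclude" and not in_range):
            selected[name] = data
    return selected

# Hypothetical graph data keyed by graph name
graphs = {
    "CPU Utilization": [40, 55, 60],
    "Disk IO": [120, 150, 130],
    "Heap Used": [70, 80, 90],
}

print(list(filter_graphs(graphs, 50, 100, mode="include")))  # ['CPU Utilization', 'Heap Used']
print(list(filter_graphs(graphs, 50, 100, mode="exclude")))  # ['Disk IO']
```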
View Type – Tabular
A graph report is a representation of data in a visual format that can help to see overall trends easily, identify medians and so on. Tabular reports display information in a column and row format. Tabular reports include column and row headers forming a grid pattern that displays fields from one or more sources.
Categorization of Tabular reports:
- Other (Simple, Multi, Tiles, Bar, Stacked Bar, Area, Stacked Area, PIE and correlated graph).
- Percentile Graph
- Slab Count Graph
- Frequency Distribution Graph
In Stat reports, each graph name is a hyperlink. After clicking a graph name, the chart is shown in a new tab. User can also download a specific chart (in Word, Excel, or PDF format). In case of a simple graph, each chart shows a single series; in case of a multi graph, each chart shows multiple series. In case of Simple, Multi, Tile, Correlated, and Derived graphs, the Tabular view has columns such as graph name and description, along with:
- Min
- Max
- Average
- Std-dev
- Last Value
- Simple Count
Creating a Tabular Report
- Right-click Stats on the left pane, then click New Report.
- On the Report Management window, select the View Type as Tabular. Make sure the Report Type is Stats.
- Specify the Preset Options details, and select the template from the drop-down list.
- Click the Generate Reports button.
Output
- Stats Report for Others (Simple, Multi, Tiles, Bar, Stacked Bar, Area, Stacked Area, PIE and correlated graph).
- Stats Report for Percentile, Slab Count, and Frequency Distribution graph:
- Simple Graph
- Multi Graph
View Type – HTML
HTML report consists of a file in HTML format and pictures in PNG format.
Creating an HTML Report
- Right-click Stats on the left pane, then click New Report.
- On the Report Management window, select the View Type as HTML. Make sure the Report Type is Stats.
- Specify the Preset Options details, and select the template from the drop-down list.
- Specify the executive summary and conclusion. (Optional)
- Click the Generate Reports button.
Output
Analysis Summary
Average Transaction Response Time
Transaction Time
Transaction Summary
View Type – Word
The Word report is a pre-formatted file type that can be used to quickly create a specific file. In the report, everything such as font, size, color, and background pictures is pre-formatted, but users can also edit them.
Creating a Word Report
- Right-click Stats on the left pane, then click New Report.
- On the Report Management window, select the View Type as Word. Make sure the Report Type is Stats.
- Specify the Preset Options details, and select the template from the drop-down list.
- Specify the executive summary and conclusion. (Optional)
- Click the Generate Reports button.
Output
Summary
Metrics Summary
Average Transaction Response Time
Compare/Trend Report
Compare Report compares two or more Test runs or releases. Trend Report shows the trends of configured monitors over a specific time period. Compare/Trend Report gives an insight about how a particular website or application has performed during a particular trend period chosen by user.
Creating a Compare Report
Follow the below mentioned steps for creating a compare report:
- On the Report Management window, right-click on Compare, and then click New Report.
- Enter the report name.
- Specify the View Type, such as Tabular/Word/HTML (same as mentioned in the Stats report).
- Specify the View by option.
- Select the template from the drop-down list.
- Enter the measurement name. Measurement name is a unique name which is assigned for the test run measurement.
Note: User can also set the Preset value and Time format.
- Then, click the Add button; a new row is added.
- Specify the measurement name, select the test run (for which comparison needs to be done), and specify other options, such as Preset and Time format. Click the Add button.
Note: User can add multiple test runs for comparison by following this process (Step 6). Only the selected measurements are included in the report. User can also update/delete a measurement.
- Click the Generate Report button; the report is generated and displayed. User can add further columns, such as Minimum Value, Maximum Value, and Max Sample Timestamp, in the Compare report.
In the Compare report, a heading named 'Selection Mode' is provided with a check box. This adds a check box preceding each metric name, so that the user can perform further operations, such as view selected graphs, show selected, hide selected, and so on.
Note: User can apply a filter on the report and can download the report in PDF, Excel, and Word format.
Creating a Trend Report
Follow the below mentioned steps for creating a trend report:
- On the Report Management window, right-click on Compare, and then click New Report.
- Select the Trend Compare check box.
- Enter the report name and select a template from the drop-down list.
- Next, select the project, sub-project, and scenario. Click the Apply button. The measurement details are filled automatically.
- Select the measurement name for comparison, and click the Generate Report button. The report is generated and displayed.
Output
Transaction Time (Secs) Trend
Tabular View of Trend Report
Excel Template Based Reports
An Excel template is a report layout that you design in Microsoft Excel for retrieving and formatting your enterprise reporting data in Excel. Excel templates provide a set of special features for mapping data to worksheets and for performing additional processing to control how your data is output to Excel workbooks.
Creating Excel Based Templates
To create a template, create two sheets, one for the formula and another for the actual report, and save them as a template. The formula sheet captures the data from graphs, performs mapping with the report sheet, and posts the data to the report sheet. User needs to create certain columns and specify the formula for the respective columns. A formula is a collection of graph instances. The report sheet is called when scheduling is performed via the portal.
User can change the layout of the Report sheet in the template also, such as color coding, setting labels, setting threshold values and so on based on requirements.
In a template, some headers are optional, such as Page Name, Metrics, Competitor, and Browser, while others are mandatory, such as DC, Formula, Type, Graph Type, Decimal, Value 1, Value 2, and so on.
Creating a Formula
User can get a formula either from the derived graph feature of the dashboard or by navigating manually to the tree structure of the graph (i.e. Tier > Server > Instance) in the left pane of the Dashboard GUI.
Creating a Formula Manually
To create a formula manually, use the following syntax:
- Group ID.Graph ID.[Vector Name] For example – 107.9.[Cavisson>NDAppliance]
- {Group Name}{Graph Name}[Vector Name] For example – {SysStats Linux Extended}{Process Waiting For Run Time}[Cavisson>NDAppliance]
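As a small illustration of the two syntaxes above (the values are taken from the examples in this section; the helper functions themselves are hypothetical, not part of the product):

```python
def formula_by_id(group_id, graph_id, vector):
    """Build a formula in the Group ID.Graph ID.[Vector Name] format."""
    return f"{group_id}.{graph_id}.[{vector}]"

def formula_by_name(group_name, graph_name, vector):
    """Build a formula in the {Group Name}{Graph Name}[Vector Name] format."""
    return f"{{{group_name}}}{{{graph_name}}}[{vector}]"

print(formula_by_id(107, 9, "Cavisson>NDAppliance"))
# -> 107.9.[Cavisson>NDAppliance]
print(formula_by_name("SysStats Linux Extended",
                      "Process Waiting For Run Time",
                      "Cavisson>NDAppliance"))
# -> {SysStats Linux Extended}{Process Waiting For Run Time}[Cavisson>NDAppliance]
```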
Getting Formula from Derived graph
- Open the Dashboard GUI, right-click on Custom Metrics and click Add Derived Graph. The Derived graph window is displayed.
- Select the Group name, Graph name, and Indices.
- Click Add Graph. The formula is displayed.
- Copy the formula and paste it into the Formula sheet under the Formula column.
Note: Graphs can also be added by providing the Group ID and Graph ID in the format: Group ID.Graph ID.[Vector Name]
Working with Formula Sheet:
- Mention the User name, current date time, report time period, and Aggregate.
- Create the column headers as Page name, Metrics, Competitor, Browser, and DC.
- Next, create the Formula column. In this column, specify the graph path:
Group ID.Graph ID.[Vector Name] or {Group Name}{Graph Name}[Vector Name]
- Create the Type column. This stores the type of value required (i.e. Min/Max/Avg and so on). Enter the value based on your requirement.
- Create the Graph Type column. Mention 'Derived' by default.
- Create the Decimal column. It specifies up to what decimal places the result is required.
- Create the columns in which the graph results are stored. These columns are prefixed with 'Value', such as Value1, Value2, Value3, and so on. The Value columns are created based on the specified aggregation.
For example, if user wants to see the data of the last one week aggregated by one day, then seven value columns need to be created. Each value column captures the aggregated data of one day (a layout sketch follows the note below).
Note: Specify the column names in sequence. There can be multiple formula sheets; each formula sheet is mapped to one report sheet.
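Purely as an illustration of the layout described above (this is not a Cavisson-provided template; the use of the openpyxl library, the column values, and the one-week/one-day aggregation are assumptions), a formula sheet with seven Value columns could be laid out like this:

```python
from openpyxl import Workbook  # third-party library: pip install openpyxl

wb = Workbook()
formula = wb.active
formula.title = "Formula"

# One week of data aggregated by one day -> seven Value columns
value_columns = [f"Value{i}" for i in range(1, 8)]

# Header row: optional headers first, then the mandatory ones
formula.append(["Page Name", "Metrics", "Competitor", "Browser", "DC",
                "Formula", "Type", "Graph Type", "Decimal"] + value_columns)

# One sample formula row (graph path, required statistic, graph type, decimal places)
formula.append(["Home", "Response Time", "", "", "DC1",
                "107.9.[Cavisson>NDAppliance]", "Avg", "Derived", 2] + [""] * 7)

# The report sheet is mapped to the Value columns of the formula sheet
report = wb.create_sheet("Report")
report.append(["Metric"] + [f"Day {i}" for i in range(1, 8)])

wb.save("excel_report_template.xlsx")
```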
Working with Report Sheet
The report sheet contains the results mapped from the formula sheet. Suppose the data of the last one week needs to be captured, aggregated by one day; then seven value columns are created in the formula sheet and mapped to the value columns in the report sheet.
Creating an Excel Report
Follow the below mentioned steps for creating an Excel report:
- On the Report Management window, right-click on Excel, and then click New Report.
- Enter the report name, specify the preset options, and select the template from the drop-down list.
Note: To upload a template, click the Upload button. To download a template, click the download icon. To delete a template, click the delete icon.
- To aggregate the entire report in a single column, select the Aggregated over the entire Report Time Period check box. For example, if user wants to capture the data of the last one week and selects this check box, the aggregated data is displayed in a single column.
- If the Show Last week's data day-wise option is selected and user needs graph data for two weeks aggregated week-wise, then 8 value columns are created: one for the complete week's data and the remaining seven for the other week's data day-wise.
- To show the report in either columns or tabs, select the corresponding option from the Show report in drop-down list. If user selects the Column option, the graph data is captured in columns; if user selects the Tab option, the graph data is displayed in separate sheets/tabs.
- Click the Generate Report button; the report is generated and displayed on the Report Management window.
- To view the report, click on the report link. The report gets displayed.
Hierarchical Reports
This report is based on the meta data component. There are multiple drop-downs, up to the maximum level of hierarchy in the test run. For example, if there are a maximum of 5 levels of meta data in the test run, then there are 5 drop-downs. Each drop-down is filled with the unique list of the specific level of the meta data component, and is filtered by the previous selection. The next drop-down is enabled only after selecting a value in the current drop-down.
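A minimal sketch of this cascading drop-down behaviour; the metadata tuples below are hypothetical examples, not values from an actual test run.

```python
# Hypothetical meta data hierarchy captured in a test run: (Tier, Server, Instance)
metadata = [
    ("AppTier", "server-01", "instance-1"),
    ("AppTier", "server-01", "instance-2"),
    ("AppTier", "server-02", "instance-1"),
    ("DBTier",  "server-03", "instance-1"),
]

def options_at(level, selection):
    """Unique values for the drop-down at `level`, filtered by the previous selections."""
    return sorted({row[level] for row in metadata if row[:level] == tuple(selection)})

print(options_at(0, []))                        # ['AppTier', 'DBTier']
print(options_at(1, ["AppTier"]))               # ['server-01', 'server-02']
print(options_at(2, ["AppTier", "server-01"]))  # ['instance-1', 'instance-2']
```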
Creating a Hierarchical Report
Follow the below mentioned steps for creating a hierarchical report:
- On the Report Management window, right-click on Hierarchical, and then click New Report.
- Enter the report name and specify the preset options.
- Select a template along with other options (if required).
- Select the meta data details (such as Tier > Server > Instance) and click the Generate Report button. The report is generated and displayed.
Summary Report
Summary report contains the summarized information of the test, such as test summary, transaction details, HTTP status code details, and other important information. It is organized into predefined sections and headers covering the most relevant information.
Generating a Summary Report
Follow the below mentioned steps for generating the summary report:
- On the Report Management window, right-click on Summary, and then click New Report.
- Enter the report name and specify the report percentile from the drop-down list.
- Specify the preset options. User can select multiple durations in the summary report.
- Click the Generate Report button. The report is generated. Click the report name to view its details.
The Summary report contains the following sections:
- Test Information: This section contains information on test run, scenario, machine, and so on.
- Test Summary: This section contains information on URL, page, transaction, and session.
- Other Information: This section contains information on TCP connections, SSL Sessions, HTTP hit rate, and throughput.
- Detail Transaction Report: This section contains transaction details, such as Min, Max, Avg, success, Failure, and so on.
- HTTP Status Codes: This section contains the HTTP status codes that occurred in the test run.
Summary Report with multiple selection presets
There is a provision to generate the Summary report for multiple presets. For example, if user wants to generate the Summary report for all duration phases of a test run, such as Duration1, Duration2, and so on, then user needs to select multiple durations from the preset drop-down list and generate the report.
For example, in the below screen, three durations are selected: Duration 3, 4, and 5 of Group-1.
Generated Report: Upon clicking the Generate Report button, the stats for the selected durations are generated and displayed in a report.
Ready Reports
Ready reports are those reports which do not require any process for generation. User just needs to click the respective report icon on the left pane and the corresponding report is displayed.
Progress Report
Click the Progress Report icon on the left pane. The Progress report is displayed.
Failure Report
Click the Failure Report icon on the left pane. The Failure report is displayed.
Page Dump Report
Click the Page Dump Report icon on the left pane. The Page dump report is displayed.
Page Detail Report
Click the Page Detail icon on the left pane. The Page Detail Report is displayed.
Page Breakdown Report
Click the icon to view the page breakdown report.
Scheduler
User can schedule a report by specifying certain information. Scheduling is performed to generate reports on continuous intervals.
- In the Reports window, go to Ready Reports and click Scheduler. This displays a window, which contains the generated reports.
- Click the scheduler icon; the Scheduled Tasks window is displayed.
- Click the add icon to add a task; this navigates to the first step of task creation.
Step – 1: Select the Report Type and Template
- Select the Report Type (for example, Excel/Custom/Alert Digest Report). By default, it is EXCEL.
- Enter the Report name. By default, it is ExcelReport.
- Select the Template from the drop-down list.
Note: After creating a template, user needs to upload it to the system. The uploaded templates are displayed in the list. Refer to Uploading/Downloading Templates to know how to upload a template. If user selects the Report Type as CUSTOM, the default report name changes to CustomReport. Further, user can choose from Using Template or Using Favorite to add a task.
- Click the Next button to navigate to the next step.
Step – 2: Select Preset Option
- Select the Preset option from the drop-down list. It is the time period for which the report needs to be generated. For example – Last 15 minutes, Last 8 hours and so on. The date and time are filled automatically.
Note: If user selects the Custom option, then user needs to specify the Start date/time and End date/time.
- In the View by option, user can specify the format of aggregated data (such as hours, minutes, seconds) based on the selection of Preset option. For example, if user selects ‘Last 15 minutes’ in preset option, then ‘View by’ option displays ‘Seconds’ and ‘Minutes’ only. If user selects ‘Last 4 hours’ in preset option, then ‘View by’ displays ‘Seconds’, ‘Minutes’ and ‘Hours’.
- To aggregate the entire report in a single column, select the Aggregated over the entire Report Time Period check box. For example, if user wants to capture the data of the last one week and selects this check box, the aggregated data is displayed in a single column.
- If the Show Last week's data day-wise option is selected and user needs graph data for two weeks aggregated week-wise, then 8 value columns are created: one for the complete week's data and the remaining seven for the other week's data day-wise.
- To show the report in either columns or tabs, select the corresponding option from the Show report in drop-down list. If user selects the Column option, the graph data is captured in columns; if user selects the Tab option, the graph data is displayed in separate sheets/tabs.
Step – 3: Schedule the Report
Hourly
For Report generation on hourly mode, follow the below mentioned steps.
- Specify the report generation frequency in hours. For example – 4 hours, 8 hours, and so on.
OR
- Specify the time for the report generation.
- Specify the schedule expiry date and time.
- Enter task description.
- Click the Next button to navigate to the next step.
Weekly
For Report generation on weekly mode, follow the below mentioned steps.
- Select the day of the week on which report needs to be generated.
- Specify the start time.
- Specify the schedule expiry date and time.
- Enter task description.
- Click the Next button to navigate to the next step.
Monthly
For Report generation on monthly mode, follow the below mentioned steps.
- Specify a day of a month on which reports need to be generated.
- Specify the start time.
- Specify the schedule expiry date and time.
- Enter task description.
- Click the Next button to navigate to the next step.
Step – 4: Enter Mailing Information
Note: A predefined variable is provided in the Scheduler Report as 'GenerationDateTime = $genDateTime', which is used to pass the date and time when the report is generated. This predefined variable can be used in the Subject or Body of the mail and is available in the Hints section of the Scheduler Report window. The date/time format is MM-DD-YYYY HH:mm:ss.
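As a sketch only, the substitution could work roughly as below; the simple string replacement is an assumption, and only the MM-DD-YYYY HH:mm:ss format comes from the note above.

```python
from datetime import datetime

# MM-DD-YYYY HH:mm:ss, as stated in the note above
gen_date_time = datetime.now().strftime("%m-%d-%Y %H:%M:%S")

subject_template = "Excel Report generated at $genDateTime"
print(subject_template.replace("$genDateTime", gen_date_time))
```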
- Enter the mail subject.
- Enter recipients of the mail.
- Enter the description in the Body section.
User can pick Report name, Start date, and End date from the Hints section. This automatically fills the specified information in the mail body.
- Click the Next button to navigate to the next section.
Step – 5: Report Scheduling Summary
- This step shows the summary of the report specified by the user. It contains the following specifications mentioned in the earlier steps:
- Report name
- Template name
- Report type
- Report format
- Report duration
- Schedule
- Schedule expiry time
- Mail to
- Mail subject
- Mail body
- Click Finish; the task is added to the Scheduled Tasks page.
The Scheduled Tasks page has the following headers:
- Task Type: This denotes the type of the task, for example, Word/Excel/HTML report.
- Description: This denotes the description of the task.
- Schedule Time: This denotes the scheduled time for the task execution.
- Schedule Expiry Time: This denotes the schedule expiry time for the task.
- Status: This denotes the status of the task, either enabled or disabled.
- Actions: User can delete/update/disable the task.
Viewing Generated Reports
User needs to follow the below mentioned steps for viewing the generated reports.
- Click the Available Reports icon under the Reports tab. The Generated Reports page is displayed.
- Next, click the Report name that needs to be viewed. A dialog box to open/save the report is displayed. Save and open the report.
Uploading/Downloading Templates
User can upload a created template to the system and can download an uploaded template from the system. User needs to click the corresponding icon under the Reports tab.
To Upload a Template
- Click the Browse button under the Upload Options section.
- Locate the template from the system.
- Click Upload.
Note: If user selects the 'Overwrite if exists' check box, the template is overwritten if it already exists in the system.
To Download a Template
- Select the template from the drop-down list under the Download Options section.
- Click Download.
- A dialog box is displayed to open/save the template.
Note: User can make changes in the downloaded template and re-upload it.