31st Jan, 2020 – Release Notes

Common Features

Support for Brotli (br) Compression Type: Brotli (br) is a compression type developed by Google and is best suited to text compression. It uses a dictionary of common keywords and phrases on both the client and server side, and thus gives a better compression ratio. It can be used to compress HTTPS responses sent to a browser, in place of gzip or deflate. This compression type is now supported in NetStorm, NetCloud, and NetOcean (at the Template level, Service level, and Configuration level).
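Choosing between br, gzip, and deflate follows standard HTTP content negotiation on the Accept-Encoding header. A minimal server-side sketch (illustrative only, not Cavisson's implementation; q-values are ignored for brevity):

```python
def choose_encoding(accept_encoding: str) -> str:
    """Pick the best compression for an HTTP response from the client's
    Accept-Encoding header, preferring Brotli ('br') when offered."""
    offered = {token.split(";")[0].strip().lower()
               for token in accept_encoding.split(",")}
    for encoding in ("br", "gzip", "deflate"):  # preference order
        if encoding in offered:
            return encoding
    return "identity"  # no supported compression requested
```

For example, a browser sending `Accept-Encoding: gzip, deflate, br` would receive a Brotli-compressed response.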

Cavisson Processes: The user can see the details of all daemon services/processes (such as ndc, lps, tomcat, postgres, cmon, hpd, and nsu_system_health_check) that are in stopped, running, or sleep status on the server, and can perform various actions (start, stop, restart, view logs, view configurations). Apart from this, the user can run commands, such as top, iotop, netstat, and ifconfig, from the UI (earlier these could be run only from the backend). A user can also check the number of files accessed by each process.

It is implemented in NetStorm, NetDiagnostics, and NetCloud.

  • In NetStorm: View > Health > Cavisson Services
  • In NetDiagnostics: Advanced > Health > Cavisson Services
  • In NetCloud: View > Health > Cavisson Services



Protocol Buffer (protobuf): NetStorm supports Protocol buffers, which are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data. It consists of two C-APIs:

  1. First API encodes the proto file and provides result in protobuf encoded format (binary).
  2. Second API decodes the response in normal text format.

The key features of the protocol buffer support in NetStorm are:

  • Provide NetStorm API to support encoding of the provided data into Google’s Protocol Buffer format. The data can be provided through C variable or NS parameter.
  • Provide NetStorm API to support decoding of the provided data from Google’s Protocol Buffer format into normal text. The data can be provided through a C variable or NS parameter.

Protocol Buffers are widely used at Google for storing and interchanging all kinds of structured information. The method serves as a basis for a custom Remote Procedure Call (RPC) system that is used for nearly all inter-machine communication at Google. The design goals for Protocol Buffers emphasized simplicity and performance. In particular, it was designed to be smaller and faster than XML.
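Under the hood, the Protocol Buffers wire format encodes integers as base-128 varints. A minimal sketch of that primitive (illustrative only; the actual NetStorm C-APIs are not shown here):

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint:
    7 payload bits per byte, high bit set while more bytes follow."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a protobuf base-128 varint back into an integer."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:          # continuation bit clear: last byte
            break
        shift += 7
    return result
```

This compactness (300 fits in two bytes, where fixed-width or XML encodings need more) is one reason protobuf is smaller and faster than XML.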

Amazon Simple Queue Service (SQS): Amazon SQS offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS supports both standard and FIFO (ends with .fifo suffix) queues.

High Level Usage

  1. A producer sends message A to a queue, and the message is distributed across the Amazon SQS servers redundantly.
  2. When a consumer is ready to process messages, it consumes messages from the queue, and message A is returned. While message A is being processed, it remains in the queue and is not returned to subsequent receive requests for the duration of the visibility timeout.
  3. The consumer deletes message A from the queue to prevent the message from being received and processed again when the visibility timeout expires.
  4. Amazon SQS automatically deletes messages that have been in a queue for longer than the maximum message retention period.
  5. The message retention period is 4 days by default. However, you can set it to a value from 60 seconds to 1,209,600 seconds (14 days) using the SetQueueAttributes action.
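Steps 2–3 above hinge on the visibility timeout: a received message stays in the queue but is hidden until the timeout expires or the consumer deletes it. A toy in-memory model of that behavior (a sketch only, not the real SQS API; class and method names are invented):

```python
import time

class SimpleQueue:
    """Toy model of the SQS visibility timeout: a received message stays
    in the queue but is hidden until the timeout expires or it is deleted."""
    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self._messages = {}   # message id -> (body, visible_at)
        self._next_id = 0

    def send(self, body):
        self._next_id += 1
        self._messages[self._next_id] = (body, 0.0)  # visible immediately
        return self._next_id

    def receive(self, now=None):
        now = time.time() if now is None else now
        for msg_id, (body, visible_at) in self._messages.items():
            if visible_at <= now:
                # hide the message for the visibility-timeout window
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None           # nothing currently visible

    def delete(self, msg_id):
        """Consumer deletes the message so it is never redelivered."""
        self._messages.pop(msg_id, None)
```

If the consumer fails to delete the message before the timeout expires, a later receive returns it again, which is why SQS guarantees at-least-once delivery.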


Benefits of Message Queues

  1. Better Performance: Message queues enable asynchronous communication, which means that the endpoints that are producing and consuming messages interact with the queue, not each other.
  2. Increased Reliability: Queues make your data persistent and reduce the errors that happen when different parts of your system go offline.
  3. Granular Scalability: Message queues make it possible to scale precisely where it is needed. When workloads peak, multiple instances of the application can add all requests to the queue without risk of collision.
  4. Simplified Decoupling: Message queues remove dependencies between components and significantly simplify the coding of decoupled applications.

Creating a New Flow in an Existing Script: A new option ‘Create Flow’ is added in the ‘Edit’ menu to create a new blank flow in the script. On clicking this option, a pop-up window is displayed in which the user can select a project name and a sub-project name to create a flow file in that script. Until now, the user could get a recorded flow only; after this enhancement, the user can create a blank flow and add pages/URL transactions manually into the flow file.

Breakpoint Functionality Enhancements: Shortcut keys are now available for the breakpoint-related operations:

  • Continue – F5
  • Run Step by Step – F10
  • Remove Breakpoint – F9

A menu item (‘Remove All Breakpoint’) is available on right-click for removing all the breakpoints in the current Flow.

Breakpoints in RBU Type Script: The breakpoint feature is now provided for CLICK_AND_TYPE scripts, which earlier was limited to C-type scripts only. This feature allows a user to debug an RBU script by adding breakpoints, so that the user can analyze/verify the data or flow by printing the value of any variable during step execution to verify the condition.

Revert Option for Auto Correlation: A user can now revert an auto correlated string (selected text including curly braces) by right-clicking and selecting the ‘Revert Auto Correlate’ option in the pop-up menu.

There are two options for reverting:

  1. Replace
  2. Replace All

UI Improvements in Kafka Security Protocol Settings: The following UI improvements are implemented in the Kafka security protocol settings window:

  1. The ‘Use SSL’ check box is renamed to ‘Use Security Protocols’.
  2. The ‘Advance Settings for SSL’ dialog box header is renamed to ‘Advance Settings for Security’.
  3. The ‘Plain text’ option is removed from the ‘Security Protocol’ field.

Generic Absolute Path of Input File in File Encryption Window: In the ‘File Encryption’ window, the generic absolute path, which is displayed on clicking the ‘Browse’ button for both input and output options, is fixed and is displayed as: /home/cavisson/Controller_Name(Current Script Manager)/scripts

Search Scripts in Script Manager: The user can search for a script by entering either the complete script name or its starting characters in the text field. All the scripts matching that name are highlighted.

Searching Mechanism Enhancements in RunLogic: In RunLogic, the user can now search by suggested page name apart from searching by the exact page name. This functionality is applicable only to page names, not to Blocks or Conditions (if, else ..).


Sync Point: A Sync point is a ‘meeting point’. A Sync point creates an intense user load on the server at one time to measure server performance under load. For example, to test a bank server, the user could create a scenario that contains two Sync points, ‘Deposit’ and ‘Withdraw’. The first Sync point ensures that 1000 VUsers simultaneously deposit cash. The second Sync point ensures that another 1000 VUsers simultaneously withdraw cash. To measure how the server performs when only 500 VUsers deposit cash, the user can deactivate the withdraw Sync point and instruct 500 VUsers to participate in the deposit Sync point only.

Earlier, when all the VUsers reached a Sync point, they were released from the system at once. Now, the release behavior can be customized: VUsers are released based on a release type and a release schedule.
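Conceptually, a Sync point released on target-reached behaves like a thread barrier: every VUser blocks on arrival until the target count is met, then all are released together. A minimal sketch using Python's threading.Barrier (illustrative only; NetStorm's internals are not shown):

```python
import threading

target_vusers = 5
# Barrier releases all waiters once `target_vusers` threads have arrived,
# mirroring the "Release Target VUsers Arrive" policy.
barrier = threading.Barrier(target_vusers)
released = []
lock = threading.Lock()

def vuser(i):
    barrier.wait()          # block at the sync point until target is met
    with lock:
        released.append(i)  # all VUsers pass this line together

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(target_vusers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Barrier.wait() also accepts a timeout, which corresponds roughly to the overall-timeout release policy described below.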


The user can apply Sync Points in two modes:

  • Offline Mode (Using Scenario): The user can configure Sync Points before the start of a test from Scenario UI. Sync Points can be applied either while creating a new scenario or on an existing scenario. These Sync Points are also displayed in the online mode. The user can make changes in Sync Points at run time, but those are not reflected in offline mode.
  • Online Mode: The user can add Sync points at run time (i.e., while a test is running) and apply the changes there directly. Here, the user can add/delete a Sync point and update its properties.

Release Policy

A Sync point is released for any of the following reasons:

  • Release Target VUsers Arrive at the Sync Point: When the target number of VUsers arrives at the Sync point, they are released.
  • Overall Timeout: The maximum time allowed for each Sync point to reach the target after the first VUser has entered the system. If the Sync point is not released within the timeout period, then all the VUsers are released from the Sync point.
  • Inter User Arrival Timeout: The maximum time allowed between two VUsers arriving at a Sync point.
  • Manual Release: Immediate release for any Sync point at any state (by using the ‘Release’ button).

Release Type

Defines how the VUsers are released – on target reached, at an absolute time, or after a relative time period. This is applicable in ‘Auto’ mode only.

  • Target VUsers: If the target number of VUsers (as mentioned in the ‘Release Target VUsers’ field) is reached, the Sync point is released.
  • Absolute Time: Define the absolute date (in mm/dd/yyyy format) and time (in hh:mm:ss format) at which the VUsers need to be released. The absolute date and time should be greater than the current date and time. An option “VUsers will be released when either Absolute time has occurred or VUsers target is met” is displayed to allow releasing the VUsers on either completing the specified absolute time or reaching the targeted VUsers.
  • Period: Provide the period (in HH:MM:SS format starting from the time as soon as first VUser reached to the Sync point). An option “VUsers will be released when either specified period is elapsed or VUsers target is met” is displayed to allow releasing the VUsers on either completing the specified period or reaching the targeted VUsers.

Release Schedule

It defines when the VUsers are released after they have reached their respective Sync point. The following options are available within this category:

  • Immediate: Immediate release of all the VUsers that reach the Sync point.
  • Duration: Release of VUsers within the duration (in HH:MM:SS format) specified by the user. For example – to release all the VUsers within 2 hours and 30 minutes, the user needs to mention the duration as 02:30:00.
  • Rate: Release of VUsers at the specified rate of VUsers per minute. For example – if the total VUsers are 120 and the rate is defined as 20 users/minute, the total time required to release all VUsers is 6 minutes.
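The ‘Rate’ schedule reduces to simple division; a tiny sketch (function name is illustrative):

```python
def release_minutes(total_vusers: int, rate_per_minute: int) -> float:
    """Total time (in minutes) to release all VUsers at a fixed rate."""
    return total_vusers / rate_per_minute
```

release_minutes(120, 20) gives 6.0 minutes, matching the example above.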

Updated Scenario Details after Runtime Changes: When the user applies runtime changes in a running scenario from the dashboard, those changes are now reflected in the scenario (edit mode). The user can apply runtime changes in two ways:

  1. The user can apply runtime changes from Action > Update > Update User/Session Rate.
    • The user can increase or decrease the number of VUsers from Update User/Session Rate window.
    • The changes are saved in a configuration file.
    • These changes are also applied to scenario file.
  2. The user can apply runtime changes from a running scenario or a scenario in online mode, which can be opened while the scenario is running from Action > Update > Running Scenario.
    • In running scenario or scenario in online mode, the user can apply changes in some configuration and change the phases.
    • These changes are also applied to scenario.

Addition of New Module Masks in Debug Logging: NetStorm has different modules, and for each module there is a mask identifying it. A few more module masks, such as Parsing, NJVM, HTTP2.0, TR069, LDAP, RTE, User Trace, IMAP, DB_AGG, Runtime, JRMI, SVRIP, Authorization, WS, SP, Proxy, and MM Percentile, are added in Group Based Settings > Advanced > Debug Logging. When the user selects a module mask, its value is saved in the Scenario file.

Second Level Authorization in Starting a Test: To avoid unwanted hits on the server, a feature is provided where a user (with the required privileges) first needs to set a 6-digit authorization key (via Admin > Second Level Authorization), which is then used while starting a test on that machine. On starting a test, the user is required to provide the authorization key. If the key matches the one set by the privileged user, the test starts and the ‘Test Initialization Screen’ is displayed; otherwise, an error message is displayed. In addition, when a user clicks the ‘Save’ button in the Scenario Schedule window to apply scheduling, the user is required to provide the activation key if second level authorization is enabled.

Real Device Testing (RDT)

New Attributes in Page Detail Report: When a user runs an RDT test, the following attributes can be viewed in the page detail report:

  1. Speed Index
  2. Visual complete time
  3. Visual comparison

‘Native App’ Script and ‘Network Speed’ Selection while Running an RDT Test: While adding a scenario group for the RDT type, a user now has the option to select a ‘Native App’ script. In addition, the user can now set the network speed, such as 2G, 3G, and 4G.

Execution of RDT Test for Local and Cloud Setup: RDT test can be executed for local setup as well as Cloud Setup (BrowserStack) using the same script.

Real Browser User (RBU)

Right-Click and Double-Click for ClickAndScript: The user can now perform click operations, such as single-click, double-click, and right-click, on the elements of a webpage at run time while running a NetStorm test with the RBU/NVSM configuration.

Discover CDN Requests Served: The user can now see requests being served by Cloudflare in the Waterfall model. The web resources coming from the Cloudflare cache server can also be identified.

‘Bandwidth Simulation in Lighthouse’: An accordion tab, ‘Lighthouse Settings’, is provided under Real Browser Settings in the Scenario. The user can enable/disable network throttling from there. Network throttling is enabled only if Lighthouse is enabled. It also enables the download throughput, upload throughput, and network latency fields. These fields accept only values greater than 0; if the user provides a value less than or equal to 0, an error message is displayed.

Configuration of Phase Interval: A user can now configure the phase interval used in the calculation of Page Load Time. The Page Load Time is measured as the time from the start of the initial navigation until there is 4 seconds of no network activity after Document Complete. This usually includes any activity that is triggered by JavaScript after the main page loads (onload event). If any request is initiated after the 4-second gap, it is considered to be in another phase and its timing is not included in the page load time. This interval is now configurable.
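The phase rule above can be sketched as a grouping function, with the no-activity gap as a parameter since the interval is now configurable (illustrative only, not Cavisson's implementation):

```python
def page_load_phases(request_starts, gap_seconds=4.0):
    """Group request start times into phases: a new phase begins whenever
    the quiet gap since the previous request exceeds `gap_seconds`."""
    phases = []
    for t in sorted(request_starts):
        if phases and t - phases[-1][-1] <= gap_seconds:
            phases[-1].append(t)   # still within the same activity burst
        else:
            phases.append([t])     # quiet gap exceeded: start a new phase
    return phases
```

Only requests in the first phase would count toward Page Load Time; a request starting 7 seconds after the previous one falls into a second phase.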


‘Trend’ Link for New CI/CD Report Format: Using this feature, the user can view the trend of all the test runs for the respective scenario. The Trend link displays the trend of all the test runs that ran with the same scenario. It includes all the test runs of that scenario if a baseline is not selected; otherwise, it takes all test runs between the current TR and the baseline TR.

Addition of ‘Page Detail Report’ Section in CI/CD Report: In CI/CD report, a new section ‘Page Detail Report’ is now added. Here, a user can view the list of all the pages and their details, such as host name, group, browser, session count, requests, browser cache, and others.

API-Wise Transaction Count Details in Jenkins Report: On selecting the ‘Show Transaction Count’ option in ‘Check Script’ window, the user can add a column ‘Count’ in the report which displays transaction specific counts.


MSSQL Monitoring Enhancements

  • Display of Actual Execution Plan & Deadlock Stats Fetched from Ring Buffer: The Actual Execution Plan is the actual estimation of the work done by SQL Server to get the data of executed T-SQL queries or batches, while Deadlock stats shows the queries that have gone into a deadlock state. The data for the actual execution plan and deadlock stats is now fetched from the ring buffer instead of from views (i.e., sys.dm).

    When the Actual Execution Plan monitor is configured for a DB Source, the user can also view the actual plan for a selected query by clicking the ‘Actual Execution Plan’ icon. Upon clicking a node in the Actual Execution Plan, the user can view the node details with some extra ‘Actual’ parameters. The user can download the actual plan in XML and graphical format; a button is provided in the DB Query Stats – Plan panel to download the Actual Execution Plan, with the tool tip ‘Actual Execution Plan’. When downloading the Actual Execution Plan, the user gets all of the nodes present in the plan with proper descriptions and arrows. A deadlock detail XML file is also available for download for each deadlock process. The complete process query can also be viewed in a pop-up by clicking the query displayed in the table.

  • Drill-Down from Memory Utilization Graph to Memory Stats: A user can now drill down from memory utilization graph of dashboard to memory stats of MSSQL monitoring UI.
  • Drill-Down Option from Dashboard to DB Connection Window: Support is provided for drill-down from DB Connection Stats graph of Dashboard to Connection Stats window of MSSQL Monitoring UI.
  • Multi-Column Filter Functionality: Multi-column filter functionality of tabular data has been provided in the following windows: Memory Stats, Connections Stats, IOByFile, Waits Stats, Session, Locks Stats, and TempDB Stats.
  • Display of Queries Corresponding to the Wait Type Selected from Graph: When aggregate mode is ON, the graph and tabular data are displayed on the screen. Here, the graph displays the Average Wait Time for each Wait Category.
    • Drilldown 1: When the user selects a Wait Category from the graph, the table below displays the stats for that Wait Category only, during the time period for which the graph is displayed.
    • Drilldown 2: Further, when the user selects a Session Id from the Wait Statistics table, the Session Statistics corresponding to that Session Id are displayed below it in the Session Statistics table.
    • Drilldown 3: Further, when there is any query present in the Batch Text column corresponding to the selected Session Id, the user can click the Session Id to see its Execution Plan and complete query.

Application Attributes in ND: There was a requirement to capture and monitor performance metrics for one or more applications running on different Tiers/Servers in different Regions/Zones. This feature enables ND to capture and show all metrics at the Region > Zone > Application > Tier > Server > Instance level. In other words, one or more levels are added above Tier in the existing hierarchy.

Monitoring Files Count in a Bucket Mounted to a Server: The user can now monitor the number of files in each bucket (not all buckets). The user is notified with an alert message when the count of files exceeds 5000.

NoSQL Generic Monitor: Previously, in Generic NoSQL MongoDB monitors, only the custom type was supported for vectors. Now, the Dynamic Vector type is also supported, which means vectors can be added/deleted at run time.

Generic REST Monitoring Enhancements: Since Generic REST monitoring monitors any API or JSON response, it now handles the following:

  • Token expiry response
  • Multi array and complex JSON response

Addition of Graphs to Validate ‘Fill up Disk Fault’ in NetHavoc: The ‘Used Disk Space Pct’ graphs along with existing ‘Available Disk Space Pct’ graphs are added in GDF files to validate ‘Fill Up Disk Fault’ in NetHavoc.

‘Exclude Server’ Option: The user can exclude those servers for which monitoring data is not required on the dashboard. Suppose there are 10 servers in a tier and the user does not want data for 4 of them; using this feature, the user can exclude those 4 servers, capture the data for the remaining 6, and get that monitoring data displayed on the Dashboard.

Option to Add Controller Name from Scenario UI: A drop-down ‘Controller Name’ is added in Scenario > Monitors from where the user can select a controller to which NetOcean monitors are to be applied. The controller list is enabled only when the ‘Enable NetOcean Monitor’ check box is selected.

Option for Passing Token in Generic Rest Based Monitors: In the GenericRestMonitor framework, tokens are supported with the following modes:

  1. Token: The user needs to provide the token directly.
  2. TokenFilePath: The user needs to provide the token file path.
  3. TokenURL: With a token URL, the user gets a JSON response that contains the token. The user also needs to provide the path of the token within the response.
  4. TokenAPI: Along with tokenAPI, the user needs to provide token argument related to this tokenAPI.
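A sketch of how the first three modes might resolve a token (function and parameter names are illustrative assumptions; the TokenURL HTTP fetch is elided and its JSON response is assumed to be already in hand):

```python
import json

def resolve_token(mode: str, value: str, json_path: str = None) -> str:
    """Resolve a token under the first three modes described above.
    TokenAPI (mode 4) would invoke the configured tokenAPI with its
    arguments and is elided here."""
    if mode == "Token":
        return value                       # token passed directly
    if mode == "TokenFilePath":
        with open(value) as f:             # token stored in a file
            return f.read().strip()
    if mode == "TokenURL":
        # the URL's JSON response is assumed to be already fetched into
        # `value`; the token is extracted by walking the key path
        doc = json.loads(value)
        for key in json_path.split("."):
            doc = doc[key]
        return doc
    raise ValueError(f"unsupported token mode: {mode}")
```

For example, a response of `{"auth": {"token": "xyz"}}` with the path `auth.token` resolves to `xyz`.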

Agentless Monitoring: Support of Server Health Extended Stats and Overall Server Health Extended Stats Monitor: The Server Health Extended Stats monitor and Overall Server Health Extended Stats monitor are now supported from the Health Check Monitor UI. In addition, a new Help icon is provided beside each monitor name to show the stats of that monitor. On clicking this icon, a dialog box is displayed showing the monitor stats.

‘Refresh’ Option in Monitor Up/Down Status Window: If a user clicks the ‘Refresh’ button, the data is refreshed for the previously selected time period.

Addition of New Monitor/Metrics Group

The following new monitors/metrics groups are added:

  • Active MQ: To collect all the Active MQ broker details, such as Consumer Count, Message Count, and others.
  • Kubernetes: To fetch CPU Utilization container-wise in a Kubernetes environment.
  • Sealed Status: To know the sealed status from the response of a URL.
  • Maxwell: MaxwellMessageStats, MaxwellTransactionStats, MaxwellReplicationStats, MaxwellMessagePublishStats.
  • Pgpool: PgpoolQueryDistributionStats – number of times a query type (insert, select, delete, etc.) has been executed on a node; PgpoolSessionWaitingStatus – current waiting status of a session.
  • CRDB Redis: Incoming Traffic Compresses, Incoming Traffic Uncompresses, Pending Writes Max, Pending Writes Min.
  • PostgreSQL: PostgreSQLDbTempSpaceStats, PostgreSQLDbHitRatio, PostgreSQLDbTableHitRatio, PostgreSQLDbConnectionStatistics, PostgreSQLQueryResponseTime.
  • Spark: Spark Message Stats, Spark Job Stages Stats, Spark Stream Stats, Spark Executor Stats.

Enhancements: Pause / Resume Logs

  1. A ‘Pause/Resume Test Schedule’ option is displayed in the ‘Advance’ menu of the top panel and in the drop-down of the test run number to pause/resume a test schedule. It provides the following options:
    • Pause till resumed or stopped
    • Pause for specified time
  2. A ‘Pause/Resume Dashboard’ icon is displayed in the ‘Advance’ menu of the top panel and in the drop-down of ‘Last Sample Time’.
  3. In addition, ‘Pause Test’ and ‘Resume Test’ buttons are displayed separately in the ‘Pause Resume Log’ window.

Dual Axis Stacked Bar Chart: A new chart type, ‘Dual Axis Stacked Bar’, is introduced in Widget. It displays one data series as a dual-axis line and another as a stacked bar.

‘SystemProcessHealth’ Dashboard: A new predefined favorite, ‘SystemProcessHealth’, is added to show the system and process health. One can see the current state and the trend over the last N hours, days, or weeks.

  • The first row displays system performance metrics, such as Current CPU Utilization, Current Memory Utilization, Avg CPU Utilization, Load Avg of last 1 min, and CPU I/O Wait.
  • The second row displays metrics for CPU Usage, Memory Usage, and Thread Count for multiple processes.
  • The third row displays process IO stats and provides the CPU usage of the top five services. High IO by a process can make the system unhealthy.

Enhancement in Excel Template for Different Time Period: A check box, ‘Override Preset from Template’, is added in the Reports Generation UI to use the time present in the report’s template.

  • A delimiter ‘to’ is used between times, in either of two forms:
    • MM/DD/YYYY HH:MM:SS to MM/DD/YYYY HH:MM:SS
    • HH:MM:SS to HH:MM:SS

In the first case, that specific date-time is used in report generation. In the second case, the nearest date in the past is used for the start and end time. Example: if the period is provided as ‘23:45 to 2:15’:

  • If the current time is greater than 2:15, the date-time is selected as: (today’s date – 1) 23:45 to (today’s date) 2:15.
  • If the current time is less than 2:15, the date-time is selected as: (today’s date – 2) 23:45 to (today’s date – 1) 2:15.
  • If the user does not provide seconds in the start time or end time, they are considered as ‘00’ seconds. Example: if the user provides the input in the template as 12/17/2019 01:30 to 12/17/2019 03:30, the time is considered as 01:30:00 to 03:30:00.
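The nearest-past-date rule for the time-only form can be sketched as follows (illustrative; the function name and exact behavior are assumptions based on the description above):

```python
from datetime import datetime, timedelta

def resolve_window(start_s: str, end_s: str, now: datetime):
    """Resolve 'HH:MM[:SS] to HH:MM[:SS]' into concrete datetimes,
    picking the nearest window wholly in the past."""
    def parse(t):
        parts = [int(p) for p in t.split(":")]
        while len(parts) < 3:
            parts.append(0)           # missing seconds default to 00
        return parts
    sh, sm, ss = parse(start_s)
    eh, em, es = parse(end_s)
    end = now.replace(hour=eh, minute=em, second=es, microsecond=0)
    if end > now:
        end -= timedelta(days=1)      # end time hasn't occurred yet today
    start = end.replace(hour=sh, minute=sm, second=ss)
    if start > end:
        start -= timedelta(days=1)    # window crosses midnight
    return start, end
```

With the ‘23:45 to 2:15’ example and a current time of 03:00, this yields yesterday 23:45 through today 02:15, as described above.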

Enhancements in Widget Settings: The following options are implemented in Widget settings from a usability point of view:

  • ‘Remove’ icon on data widget
  • Hidden title bar and other buttons from data widget
  • Changed font size of data widget
  • Changed background color of data widget

Scheduled Alert Report: Earlier, this report gave the alert count per vector or per rule for a given time duration. One more grouping, Tier, is added. Now, the user can get the alert count per Vector/Rule/Tier.

Support to Display Alert Graphs in Multi DC Environments: When DC is set to ‘ALL’, all alert graphs are displayed in the Dashboard.

Taking Heap Dump and Thread dump from Alert using Application Agent (Node and Java): The user can now take heap dump and thread dump from alert by using the Java agent and NodeJS agent.

Skip Samples for New Scaled VMs: The user can configure the number of samples to be skipped, from Alert settings, for newly scaled / upcoming VMs. This helps in decreasing false alert generation.

Using the Value Generated from Graph A as a Threshold for Graph B: An alert is generated on a metric by comparing it with the threshold value of another metric, based on certain conditions. A new condition, ‘Percentage of’, is used to select a threshold graph. This is applicable for the Increases, Decreases, Changes, Increases from Baseline, Decreases from Baseline, and Changes from Baseline conditions. Using this feature, the user can change the threshold value dynamically at run time based on graph B.
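The ‘Percentage of’ condition reduces to comparing graph A against a fraction of graph B's current value; a sketch (function and parameter names are illustrative, and only two of the conditions are shown):

```python
def threshold_breached(metric_a: float, metric_b: float,
                       pct: float, condition: str = "Increases") -> bool:
    """Dynamic threshold: graph A is compared against pct% of graph B's
    current value, so the threshold moves as graph B moves at run time."""
    threshold = metric_b * pct / 100.0
    if condition == "Increases":
        return metric_a > threshold
    if condition == "Decreases":
        return metric_a < threshold
    raise ValueError(f"unsupported condition: {condition}")
```

For example, with pct=80, an alert on graph A fires whenever A exceeds 80% of B's current value, whatever B happens to be at that sample.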

Reporting in Alert: Whenever an alert is generated, a report (html, stats, and tabular) is sent in alert email as a link and an attachment. This helps the user to reduce manual processing time to analyze the alert. The report can also be generated by using a template (For example: Favorite).

Display of User Configuration Difference in Alert History: Now, on alert configuration changes, the details of all the fields that have been changed are displayed in the alert history in a pop-up window on clicking the message field. In the alert history, the message field of all rule-change and setting-change rows is clickable (the rest remain as before). On clicking the message field text, a pop-up is displayed, showing all the changed fields in a tabular format.

Display of Satisfied Indices Details in Active Alert/Alert History/Alert Mail: As per the previous design, if an alert was configured with multiple conditions, the alert could be generated at a level of the hierarchy common to both conditions. Due to that, the user was unable to get the exact indices for which the alert was generated. Now, complete details are provided for all the satisfied indices along with the partial ones. With this enhancement, details of all the satisfied metrics are displayed in the Active Alert/Alert History/Alert Mail.

Highlighting Alert Email Graph for Alert Duration: The alert graphs are highlighted with a specified color and are displayed via the ‘Show Graph’ option and in the alert email.

Auto Baseline in Weekly and Daily Trend at Widget Level: Support of Auto Baseline is provided in ‘weekly’ and ‘daily’ trend at widget level. In weekly mode, comparison is done on all the graphs with data of the previous week for same graphs, in same time frame. Similarly, in the daily trend mode, comparison is done with the previous day data.


Performance Stats Report: Performance Testing is defined as a type of software testing to ensure software applications will perform well under their expected workload. Once the performance test is executed, the user can generate its report by using the Cavisson Reporting feature. For this, first the user needs to define the configurations for the performance stats report and then needs to generate it. Once the performance stats report is generated, the user can view it for the detailed investigation of the results. Cavisson performance stats report is broadly categorized into two sections, Performance Report and Metrics Report. Performance Report is further categorized into Analysis Summary and Performance Summary. The Analysis Summary report contains details, such as overall analysis summary, executive summary, and statistics summary. The Performance Summary contains details, such as overview of the test, sessions, transactions, pages, network, and errors. Metrics Report is further categorized into Metrics Summary and other charts, such as URL, Page, Transaction, Session.

User Session Report: This report displays the number of users and sessions per day, the number of sessions per user, and the count of each activity (and its module) performed by a user within a specified period, in three different tables as follows:

  1. Summary: Displays the date-wise number of active users and their session count. The details include the columns Date, LOB, Environment, Application, Application URL, number of Unique Users, and number of Sessions.
  2. Top N Users: Displays the number of sessions per (unique) user within the specified period. The details include LOB, Environment, Application, Application URL, Users, and number of Sessions.
  3. Top N Activities: Displays the count of each activity performed by users in the specified period. The details include LOB, Environment, Application, Application URL, Module, Activity, and Activity Count.

Key Enhancements:

  • Generation of User Session Report from Audit Logs: This report provides users count, sessions count, and activities performed in a time frame. It has three tables – Summarized table, Detail table, and Activity table. In case of Multi DC, all DCs’ audit logs are displayed in the UI and User Session Report. One more field ‘DC Name’ is added in ‘Audit Log Table’ to display the Data Center name regarding the log.
  • LOB, Environment, and Application Name in User Session Report: The user can now view the Line of Business (LOB), Environment (for example, MosaicLLE), Application (node name), and Application URL along with the Summary, Top 50 Users, and Top 100 Activities in the User Session Report. In a Multi-DC environment, charts for the number of unique users and sessions are always displayed as a ‘Line’ chart. In the Activity table, entries for Login and Logout are provided, and the ‘Module name’ column is changed to ‘Module’.
  • Generation of Excel Report: An HTML link is provided on the number of unique users in the summary table of the User Session Report. To see the date-wise user list, click or Ctrl-click the user count in the summary table. A click displays the HTML in the same tab, and a Ctrl-click displays it in a new tab. This HTML contains the date-wise user list, scrolled down to the date that was clicked in the PDF.
  • Addition of Current Timestamp: Earlier, in the User Session Report (PDF), no date-time was passed in the URL. Due to this, when the same user generated a report for the same machine, the values of the report got updated, displaying incorrect data. Now, the current date and time is appended to the report URL to get the actual data at the time the report is generated.

Flowpath Analyzer Report through Dashboard: A user can now view the Flowpath Analyzer Report via the Drill Down Report menu. The Flowpath Analyzer report contains Pattern stats, Flowpath Signature, Pattern Summary, and Top Methods. Earlier, it was supported from ‘Application End-to-End View’ only, but now it is supported from the Dashboard too.

Compare Report: Support for All Formula Types: In the compare report, support is provided for all formula types, such as Minimum, Maximum, Average, Sample Count, and Last value of sample.

Audit Log
  • For Different Modules in Dashboard: Test Run Pause and Resume, LDAP Setting, Dashboard Setting, Load Favorite, All widget time period apply, Selected widget time period apply, Change viewby, Apply parameterization, Save layout, Modified layout, Change widget setting, Show graph data, Open Merge, Open Related, Derived Graph Creation, Derived Graph Edit Mode, Multi Node Save and Apply, Pattern Matching, and Compare.

*A ‘Response Time’ column is added in the Audit Log.

  • For All DDR Operations: Dashboard > DDR, System Defined > DDR, Flowpath > Split View (all eight reports), Side Bar, Tier Status, Store View, BT Trend, IP Stat, NS-DDR Reports, Thread Dump, Heap Dump, JFR, Process Dump
  • For Event Logs in NetForest: Authentication successes and failures, authorization (access control) failures that include use of higher-risk functionality (addition or deletion of users or changes in privileges), and invalid login attempts.


Handling of Havocs if Machine Agent is Down: After clicking ‘Configure Havoc’ or ‘Configure & Apply Havoc’, a pop-up displays a havoc summary table listing all the servers whose machine agent is not running, along with two options: ‘Proceed with remaining servers’ and ‘Abort Havoc’. If ‘Proceed with remaining servers’ is clicked, the UI navigates to the ‘Report’ section and another havoc summary table pops up listing the servers that were injected with havocs successfully. When ‘Abort Havoc’ is clicked, the whole havoc is terminated.

End Time for Dynamic Server Selection Mode Havocs in ‘Report’ Section: For Dynamic server selection mode, the ‘End Time’ is now displayed in parent node along with child nodes, after the fault is completed.

Up Scaling of New Servers into the Running Tests: New servers can be up-scaled into the running tests of NetHavoc. The users can apply havocs to the up-scaled servers both specifically and dynamically. These servers can also be up-scaled to the havocs that are scheduled in running tests dynamically.

Viewing All the Havocs: An option ‘Show All Havocs’ is provided in the form of a check box. When a user selects this check box, all the havocs that are in ‘Ready to Apply’ / ‘Running’ / ‘Scheduled’ state in any test run are displayed.

Addition of ‘Mode’ Column in Overall Reports Section: In the ‘Reports’ section under the ‘Overall’ tab, a new column ‘Mode’ is added that displays the mode in which a fault was configured. There are two modes – UI and API.


Display of Number of Generators on Generator Information Window: On the Generator Information window, a user can view the number of generators used at the lower-left corner.

Getting Updated Scenario Details after Runtime Changes: Now, when a user applies RTC, the changes are reflected in the original scenario only if the user allows it (the user is prompted to apply changes to the original scenario on applying RTC). Changes are applied to the original scenario only if RTC for all the groups is successful. Only changes in ‘Number of VUsers’ and ‘Schedule Phases’ are reflected in the original scenario.

Capturing TCP Dump When There Is a Delay in Progress Report: A user can now enable TCP dump in NetCloud and capture a TCP dump on the controller/generator when there is a delay in the progress report. The required inputs are:

  1. Controller Mode
  2. Generator Mode
  3. Connection Type: Control connection, Data connection, or both
  4. Duration: Time in seconds, till which tcpdump is captured

Highlight in Row Border Color on Failure of Any Field: In the ‘Generator Test Init Status’ window, if any phase of test initialization fails or the test is not started on a generator, the border of the corresponding row is highlighted in red.

Addition of ‘Script’s Data’ Column in Generator Status Window: A new column ‘Script’s Data’ is added in the Generator Status window that displays the script’s external data file upload status.


Options for Duplicate Templates/Services: For any service that is already simulated in NetOcean, the user can create services according to their needs after recording the application URL. This avoids adding templates with the same URL into an existing service.

Options available for the user:

  1. Delete the Existing Services and create new Recorded Services
  2. Disable the Existing Services and create new Recorded Services
  3. Create Backup (Disabled Mode) and Delete Existing Services and create new Recorded Services
  4. Create the template in the Existing Services

Server Push Support from UI at Configuration Level: A page named ‘HTTP/2 Settings (1.X/2)’ is added in ‘Configuration Based HTTP Settings’. By using this setting, the user can enable/disable server push globally. To make changes in Server-Push, the user needs to change the HTTP mode from 1.X to 1.X/2 and then enable the Server-Push mode. If server push is enabled by the user, a setting is enabled in the static services’ configuration. In this page, a tab named ‘Server Push’ is added. The user can enable server push from the UI and has two options for uploading the Server-Push file:

  • Uploading a new file.
  • Uploading an existing saved file from the same host as well as other hosts.

It displays a drop-down containing all the hosts present in real directory and their corresponding URLs in a table. The user has a ‘Push’ option to upload the already saved URLs.


Benefits of server push:

  • Less Response Time
  • Reduced Traffic
  • Low Bandwidth Consumption

Enhancements in Recording Window

  • ‘Client Certificate’ and ‘Endpoint Certificate’ Options in Recording Configurations: NetOcean uses SSL certificates in recording. It uses the default certificates present in the ‘cert’ directory of the controller. Earlier, if a different certificate was required for a specific domain, the user had to copy that certificate into the ‘cert’ directory from the backend. The user can now provide inputs for the ‘client certificate’ and ‘endpoint certificate’ from the UI. If not provided, the default certificates in the ‘cert’ directory of the controller are used.
    • A client certificate is a variant of a digital certificate that is widely used by a client to authenticate itself so that trusted requests go to a remote server. This certificate plays a crucial role in mutual authentication designs, offering a strong guarantee of a requester’s identity.
    • Endpoint certificates are small data files that digitally bind a cryptographic key to an organization’s details. When SSL is installed on a web server, it activates the HTTPS protocol (over port 443), allowing secure connections from the web server to a browser.
  • Display of Total Recorded Service Count: On the Request Response List table header, a user can view the total recorded service count after a recording is completed.
  • Prefix in Service Name: A new field ‘Recording Name’ is added in the service recording window. The user is required to provide the recording name, which works as a prefix while creating a service through recording. For example:

Recorded Service name: ‘shoppingCart’
Prefix by user: ‘cavisson_’
Service name added in record: ‘cavisson_shoppingCart’

  • Dynamic Recording Configurations based on Recording Type: Recording configurations are dynamic based on the recording type, such as ‘Web Service Recording’ and ‘Web Page Recording’.
    • Web Service Recording: Provide recording name, recording port, protocol, service endpoint hostname/IP, service endpoint port, client certificate, and endpoint certificate.
    • Web Page Recording: Provide the recording name and recording port. Further, upload the certificate in respective browser and set proxy of machine (Machine IP and Recording Port).

Addition of Application/JSON Content Type: Content type signifies what kind of content the recipient is dealing with. ‘Content-Type: application/json; charset=utf-8’ designates the content to be in JSON format, encoded in the UTF-8 character encoding. Support for the ‘application/json’ content type is provided from the UI for normal mode and RTC mode.
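As an illustration of this content type (the endpoint URL and request body below are hypothetical; java.net.http is used only as a convenient standard client, not a Cavisson API), a request carrying the header can be built and inspected without any network call:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class JsonContentType {
    public static void main(String[] args) {
        // Build a POST request whose body is declared as JSON, encoded in UTF-8.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://example.com/service"))
                .header("Content-Type", "application/json; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"cart\"}"))
                .build();
        // Print the header the recipient would use to interpret the body.
        System.out.println(request.headers().firstValue("Content-Type").orElse(""));
    }
}
```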

Add / Delete New Correlation Directory: The user can now add / delete a correlation directory within the Configuration > HTTP Settings > Correlation Settings window.

Option to Change the Sequence Number or Priority of Specific Templates: In Templates UI, there is an option named as ‘Template Ordering’. Using this, the user can change the priority or sequence of templates inside a service.

Smart Editor: Smart editor is provided in ‘Add Service’ window, ‘Test Service’ window, and ‘JMS Message Information’ window. The user can now add color-codes and formatting to the content to make it easier to read and analyze.

Text Compression for Static Services: An option ‘Compression Type’ is provided in HPD Configuration > HTTP2 Setting Details for static services.

View/Add/Modify SMTP Protocol Services: A user can now View/Add/Modify SMTP protocol services from configuration level.

External Libraries in Callback Function: In Call Back Details section, ‘Use External Library’ option is added for uploading external C library.

Host (Domain) Column in ‘NetOcean Manage Services’ Window: A new column ‘Host’ is added in ‘NetOcean Manage Services’ window that displays the host name mentioned at the time of recording.


Application End-to-End View

Search Option to View Flowmap for Tier(s): In the Configure Flow Map section, a search option is added that helps a user filter the tier/integration point nodes.

Option to View Full Flowmap of Tier Status: A user can double-click a tier to view all the connected tiers and integration points up to ‘n’ level. In addition to this, the user can right-click any node to view the option ‘Show FlowMap for selected Tier > upto 1 level or upto n level’.

FlowPath Filters (by IP / Response Time) in any Integration Point: A user can now view ‘Flowpath By IP’ and ‘Flowpath By IP Response Time’ options in Drill-down through any Integration Point.

More Configuration Options in ‘Edit Current FlowMap’: A user can now configure Integration Point character length and calls per second / calls per minute in ‘Edit Current Flowmap’ option.

Support to Search a FlowPath Using Correlation ID from the Geo Map UI: A user can search a FlowPath using Correlation ID from the Geo Map UI just like the ‘Application End-to-End View’.

Addition of Apigee Metrics in KPI Dashboard: A new section to display Apigee metrics is added in the KPI dashboard. A user can view details such as Requests/Sec, Average Response Time (ms), Errors /Sec, Average Request Processing Latency (ms), Average Response Processing Latency (ms), and Cache Hit/Sec.

Aggregate Transaction FlowMap: An icon ‘View Tier Merge View’ is added under the Action column in the BT Trend report. This enables a user to view details of all the flowpaths of a particular BT in the form of an Aggregate Transaction FlowMap (End-to-End View) and understand the BT flow at different levels across multiple Data Centers. In case of tier-merge, all FlowPaths of one tier are merged and shown as a single node, and the call flow is shown between the tiers. This helps the user understand at which tier the BT is taking time and how the call flows from tier to tier.

Renaming and Undoing/Resetting Multiple Integration Points at Once: A user can rename multiple Integration Points at the same time by right-clicking an Integration Point and selecting the ‘Rename multiple IP(s)’ option. When this option is selected, all the Integration Points are listed in one place, making it easier for the user to rename them.

In addition, the user can undo/reset the renamed Integration Points at the same time by right-clicking an Integration Point and selecting the ‘Reset Integration Point Names’ option. When this option is selected, all the Integration Points are listed in one place, making it easier for the user to undo/reset their names.

Display of Inactive Stores in Grey Color: In GeoMap, if BT is configured and there is no traffic, the store is visible in grey color. The legend for these stores is displayed at the bottom of the map.

Infrastructure View

Changes in Upcoming Project, Running Project, and Released Details: In the sections Upcoming Projects, Running Projects, and Release Details, the names of the labels in the tables are changed.

  • Upcoming Project:
    • ‘Release’ is changed to ‘Release Number’
    • ‘Project’ is changed to ‘Project Name (Upcoming)’
  • Running Project:
    • ‘Detail Info’ is changed to ‘Description’
    • ‘Ongoing Project’ is changed to ‘Project Name (Running)’
  • Released Details:
    • ‘Ongoing Project’ is changed to ‘Project Name (Released)’

Color-Coding Based on Threshold Values of CPU Utilization: In Inventory sheet, the color-coding is displayed for CPU utilization (%) on reaching a specified threshold value.

  • 0-30 : Green color
  • 30-50 : Orange color
  • More than 50 : Red color
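The mapping above can be sketched as a small function. How the boundary values 30 and 50 are classified is an assumption, since the notes only give the three ranges; the class and method names are illustrative:

```java
public class CpuThresholdColor {
    // Maps CPU utilization (%) to the inventory-sheet color.
    // Assumption: a value exactly on a boundary falls in the lower band.
    static String colorFor(double cpuPercent) {
        if (cpuPercent <= 30) return "Green";
        if (cpuPercent <= 50) return "Orange";
        return "Red";
    }

    public static void main(String[] args) {
        // One sample from each band.
        System.out.println(colorFor(25) + " " + colorFor(45) + " " + colorFor(75));
    }
}
```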

ACL Support for Infrastructure View: With read-only permission, the user can only view the configurations and reports and cannot edit the configurations and generate the manual report.

Agent Config

New Service Entry Points: Support is added for the following service entry point types. Entry point names and backend rules, where applicable, are listed with each type.

  • HTTP Integration Type
  • JBOSS and Generic
  • React Netty and Spring Webflux:
    • reactor.ipc.netty.http.server.HttpServerOperations.onHandlerStart()V|AsyncHttpService|1 (for 0.7 series)
    • reactor.netty.http.server.HttpServerOperations.onInboundNext(Lio/netty/channel/ChannelHandlerContext;Ljava/lang/Object;)V|AsyncHttpService|1 (for 0.8 series)
    The entry points for Spring Webflux are:
  • Kafka Producer (Backend Rules: Host, Port, and Topic)
  • Kafka Consumer (Kafkac)
  • Cassandra DB (Backend Rules: Host and Port)
  • AsyncHttpCallout entry points:
    • org.apache.cxf.jaxws.JaxwsClientCallback.handleResponse(Ljava/util/Map;[Ljava/lang/Object;)V|AsyncHttpCallout|0
    • org.apache.cxf.endpoint.ClientImpl.doInvoke(Lorg/apache/cxf/endpoint/ClientCallback;Lorg/apache/cxf/service/model/BindingOperationInfo;[Ljava/lang/Object;Ljava/util/Map;Lorg/apache/cxf/message/Exchange;)[Ljava/lang/Object;|AsyncHttpCallout|0
    • org.apache.cxf.jaxrs.client.JaxrsClientCallback.handleResponse(Ljava/util/Map;[Ljava/lang/Object;)V|AsyncHttpCallout|0
    • org.apache.cxf.jaxrs.client.WebClient.doInvokeAsync(Ljava/lang/String;Ljava/lang/Object;Ljava/lang/Class;Ljava/lang/reflect/Type;Ljava/lang/Class;Ljava/lang/reflect/Type;Ljavax/ws/rs/client/InvocationCallback;)Ljava/util/concurrent/Future;|AsyncHttpCallout|0
    • org.apache.cxf.jaxrs.client.JaxrsClientCallback.handleException(Ljava/util/Map;Ljava/lang/Throwable;)V|AsyncHttpCallout|0
    • org.apache.cxf.jaxws.JaxwsClientCallback.handleException(Ljava/util/Map;Ljava/lang/Throwable;)V|AsyncHttpCallout|0
  • Backend Rules: Host, Port, and Operation
  • Liberty Websphere
  • LoggerMDC entry points:
    • #ch.qos.logback.classic.spi.LoggingEvent.<init>()V|LoggerMDC|0
    • #ch.qos.logback.classic.spi.LoggingEvent.<init>(Ljava/lang/String;Lch/qos/logback/classic/Logger;Lch/qos/logback/classic/Level;Ljava/lang/String;Ljava/lang/Throwable;[Ljava/lang/Object;)V|LoggerMDC|0
    • #org.apache.log4j.spi.LoggingEvent.<init>(Ljava/lang/String;Lorg/apache/log4j/Category;Lorg/apache/log4j/Priority;Ljava/lang/Object;Ljava/lang/Throwable;)V|LoggerMDC|0
    • #org.apache.log4j.spi.LoggingEvent.<init>(Ljava/lang/String;Lorg/apache/log4j/Category;JLorg/apache/log4j/Priority;Ljava/lang/Object;Ljava/lang/Throwable;)V|LoggerMDC|0
    • #org.apache.log4j.spi.LoggingEvent.<init>(Ljava/lang/String;Lorg/apache/log4j/Category;JLorg/apache/log4j/Level;Ljava/lang/Object;Ljava/lang/String;Lorg/apache/log4j/spi/ThrowableInformation;Ljava/lang/String;Lorg/apache/log4j/spi/LocationInfo;Ljava/util/Map;)V|LoggerMDC|0
    • #org.apache.logging.log4j.core.impl.ReusableLogEventFactory.createEvent(Ljava/lang/String;Lorg/apache/logging/log4j/Marker;Ljava/lang/String;Lorg/apache/logging/log4j/Level;Lorg/apache/logging/log4j/message/Message;Ljava/util/List;Ljava/lang/Throwable;)Lorg/apache/logging/log4j/core/LogEvent;|LoggerMDC|0
    • #org.apache.logging.log4j.core.impl.ReusableLogEventFactory.createEvent(Ljava/lang/String;Lorg/apache/logging/log4j/Marker;Ljava/lang/String;Ljava/lang/StackTraceElement;Lorg/apache/logging/log4j/Level;Lorg/apache/logging/log4j/message/Message;Ljava/util/List;Ljava/lang/Throwable;)Lorg/apache/logging/log4j/core/LogEvent;|LoggerMDC|0
  • mongodb Async
  • Custom Executor

Capture Integration Point Call External Transaction: Earlier, calls that occurred outside the normal transaction flow could not be captured. This feature is designed to capture calls outside the normal transaction when the entry points that generate the callout are not present. Go to Agent Config > Profile > Debug Tool and define the number of calls to be captured for outside transaction(s).

NV Tag Injection for WebLogic, JBOSS, and Other Application Servers via ND Agent: For Real User Monitoring (RUM), NV tag needs to be injected into HTML content. With this enhancement, the Agent identifies HTML content automatically and inserts the NV Tag. Go to Configurations > Agent Config > Profile > NV-ND Auto Inject.

Thread Callout Detection using Executor, ExecutorService Interfaces: A ScheduledThreadPoolExecutor is a ThreadPoolExecutor that can additionally schedule commands to run after a given delay, or to execute periodically. This class is preferable to Timer when multiple worker threads are needed, or when the additional flexibility or capabilities of ThreadPoolExecutor (which this class extends) are required. This feature captures thread calls from classes implementing the Executor, ExecutorService, and ScheduledExecutorService interfaces. Go to Configuration > Agent Config > Profile > Custom Configuration.
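A minimal sketch of the kind of call this feature instruments: a task scheduled on a ScheduledThreadPoolExecutor (obtained here via Executors.newScheduledThreadPool) runs on a worker thread after a delay. The task and its return value are illustrative, not part of the product:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedTask {
    public static void main(String[] args) throws Exception {
        // ScheduledThreadPoolExecutor behind the ScheduledExecutorService interface;
        // agents instrument such classes to stitch child threads into the flowpath.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        ScheduledFuture<String> result =
                scheduler.schedule(() -> "backend-call-finished", 100, TimeUnit.MILLISECONDS);
        System.out.println(result.get());   // blocks until the delayed task completes
        scheduler.shutdown();
    }
}
```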

Capturing Method Thread ID: An option is provided as a check box for the user in Flowpath settings to capture method thread ID.

Merge Thread Callouts: An option is provided as a check box for the user in Flowpath settings to merge thread callouts.

New Integration Point ‘Custom Executor’: A new integration point ‘Custom Executor’ is added under the ‘Detection By Classes’ tab in the ‘Integration Point’ section, which is used to display the child flowpaths.

‘Python’ and ‘Go’ Agent Types: New agent types ‘Python’ and ‘Go’ are now supported.

  • ‘Python’ is an interpreted, high-level, general-purpose programming language. Python’s design philosophy emphasizes code readability with its notable use of significant whitespace.
  • ‘Go’ is a statically typed, compiled programming language, which is syntactically similar to C, but with memory safety, garbage collection, structural typing, and CSP-style concurrency.

ND Collector Settings Enhancements:

  • Option to Block a Server: In the ND Collector Settings window of an application, a check box is provided which, when selected, enables a user to configure settings to block a server.
  • Option to Delay Capturing of ND Data: A check box is provided that enables a user to delay the ‘start instrumentation’ process so that the ND agent does not start capturing data immediately. The default delay is 60 seconds.
  • Option of Copying Profiles inside a Test Run: A user can select among various options for copying profiles inside a test run: Do not Copy Profiles, Copy Profiles, and Copy Profiles in Thread Mode (copying is done by a separate thread after a configurable delay; the default delay is 60 seconds).

Java Agent

Stack Trace Limit for Socket API: Earlier, there was no limit on the number of stack traces. After this enhancement, a stack trace limit can be defined; stack traces are printed in the agent error logs per transaction. Go to Agent Config > Profile > Debug Tool, select the ‘Enable Capture Socket Trace’ option, and define the limit for the stack traces to be dumped. If this option is selected, 100 stack traces are dumped by default; the user can provide a different value in the area provided.

Option to Capture All Handled Exceptions Using Try Catch Instrumentation: A check box is added under the Exception Capturing Settings; by selecting it, a user can capture all the handled exceptions using try-catch instrumentation.

.NET Agent

Exception Monitor Support: Exception monitor is now supported for .NET agent also.

Percentile Settings: The percentile feature is now supported from the UI in the .NET agent as well, allowing a user to capture BT percentile and IP percentile. The user can also store percentile data at the instance level.

Custom Callout: Support for custom callout is now provided, which a user can view on the Transaction Flow Map and Method Call Details.


Flowpath Report of Integration Points: A user can drill down a tier to view the Flowpath Report by going to Application Metrics > Integration Point Stats > Tier > Overall > Overall > Overall.

Maintaining Sessions While Applying Filters: In the flowpath report, the session is no longer terminated when filters are applied in the global search filter for visible columns.

Details of Integration Point Calls in Thread Hotspots Window: A user can view the details of Integration Point calls in tabular format in the Thread Hotspots window. The details include integration point name, discovered IP name, type of IP, duration (total, average, max, min), count (total, max, min), error (if occurred), network (average and total).

Download MCT Repeated Method Table and Display of Arguments in Text Area: While opening repeated method under method count in Method Calling Tree report, the user can now download the data. In addition, the data for argument is now displayed inside the text area of the Method Calling Tree’s repeated method table.

Support for Opening Flowpath Report from NF Query Logs: On doing a fetch query search in NetForest, a flowpath link is displayed in the search result. A user can click this link to open the flowpath report.

LDAP Callout Support from DDR Side in All Reports: LDAP callout support is now provided in Transaction Flow Report, Method Calling Tree, and IP summary Report.

Highlighted First Method in Asynchronous Thread Callout: When a user clicks ‘Asynchronous thread callout’, the first method of corresponding child tree of the new thread is now highlighted so that the user can easily view the new thread tree and analyze it.

Support for Multi-DC in NV-ND Case: In the new design, the user can get the data in a Multi-DC environment from the below combinations, based on availability:

  1. First level search from NVSessionID + NDSessionID + PageID
  2. Second level search from NVSessionID + NDSessionID
  3. Third level search from NDSessionID + Custom Time

If data is not available in Master DC, it is searched in child DC one by one and displayed on screen.
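The fallback order above can be sketched as follows. The key formats, session data, and method names are illustrative assumptions, and trying each search level across the master DC and then each child DC before moving to the next level is one reading of the text:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class SessionSearch {
    // Hypothetical stand-in for one DC's session store: search key -> session data.
    static Optional<String> lookup(Map<String, String> dc, String key) {
        return Optional.ofNullable(dc.get(key));
    }

    // Tries the three key combinations in order, checking the master DC first
    // and then each child DC, returning the first hit found.
    static Optional<String> find(Map<String, String> master, List<Map<String, String>> children,
                                 String nvSession, String ndSession, String pageId, String time) {
        List<String> keys = List.of(
                nvSession + "|" + ndSession + "|" + pageId,  // level 1
                nvSession + "|" + ndSession,                 // level 2
                ndSession + "|" + time);                     // level 3 (custom time)
        List<Map<String, String>> dcs = new ArrayList<>();
        dcs.add(master);
        dcs.addAll(children);
        for (String key : keys)
            for (Map<String, String> dc : dcs) {
                Optional<String> hit = lookup(dc, key);
                if (hit.isPresent()) return hit;
            }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, String> master = Map.of();                       // no data in master DC
        Map<String, String> child = Map.of("nd42|10:30", "session-data"); // level-3 hit in child
        System.out.println(find(master, List.of(child), "nv1", "nd42", "p7", "10:30").orElse("none"));
    }
}
```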


Performance Timing 2: Performance Timing Level 2 is used for retrieving high-resolution performance metric data. At present, onload time is used to show the time taken by a web page to load, but in the case of a single-page application it fails. To counter that, Performance Timing 2 is introduced. Performance Timing 2 can return the render time of a page when the page is actually available for the user to interact with.

Store Availability and Active Store Metric: In the NV Dashboard, ‘active’ and ‘available’ fields are displayed along with each store in the ‘PageStats by Store’ metrics. This helps to know which registers are in the ‘active’ and ‘available’ states, to produce a timely and detailed report in the UI.

  • Active register: Registers in which some activity is still going on.
  • Available register: Registers with no current activity; a count is provided and monitored on a timer basis. If there is no session with any of the available registers in a certain time interval, the count of total available registers is decreased.

Dynamic Addition of Events: Earlier when a new event was added, RTC was applied and NetVision had to be restarted to reflect the new event on the UI. However, now when a user adds a new event, it is immediately reflected on the UI.

Data Search in Active Session for Current Date: In case of deep link, a user can search data in Active Session based on current date too.

Capturing Static Resource of Web Apps: Until now, while generating the replay, the presence of resources was checked on the server. If they existed, they were fetched from the server; otherwise, a request was sent to the web page. Checking at the NV server and then fetching the resources, if not present there, took a lot of time. To reduce these pauses, the static resources are now captured before replay generation: they are captured with the help of a Chrome extension and stored at the NV server. Later, these resources are used to generate the replay.

Capturing Mouse Move: In general, for an e-commerce site, mouse movement accounts for ~60% to ~90% of user actions. To provide a more realistic view and accurate information about user activity, the user’s mouse movements are now captured.

Display of HTTP Protocol Version in Resource Timing Waterfall Table: A user can now view the HTTP Protocol Version in the Resource Timing Waterfall table under the ‘Protocol’ tab.

Using ‘AND’ Condition on Applying Same Type of Filters: In Session filter, when a user applies two or more filters of same type to filter out events, the ‘AND’ condition is applied instead of ‘OR’ condition.
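Conceptually, combining two same-type filters with ‘AND’ means a session must satisfy both predicates rather than either one. A sketch, where the event names and the session model (a list of event names) are hypothetical:

```java
import java.util.List;
import java.util.function.Predicate;

public class AndFilters {
    public static void main(String[] args) {
        // Two filters of the same type (event filters); with AND semantics a
        // session must contain both events to pass, instead of either one (OR).
        Predicate<List<String>> hasLogin = events -> events.contains("login");
        Predicate<List<String>> hasCheckout = events -> events.contains("checkout");
        Predicate<List<String>> combined = hasLogin.and(hasCheckout);

        List<List<String>> sessions = List.of(
                List.of("login", "browse", "checkout"),
                List.of("login", "browse"));
        long matches = sessions.stream().filter(combined).count();
        System.out.println(matches);   // only the first session has both events
    }
}
```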

Business Process – Funnel Edit Mode: ‘Funnel Edit Mode’ is an edit function of the funnel used to check the number of users that can enter a flow, the conversion rate, and the revenue according to the flow, if there is any progress in the flow.

Admin Control to Disable Store on High Usage: In Admin control, the user can enable / disable the stores from the provided list according to the usage. 

A/B Testing: A/B testing (sometimes called split testing) compares two versions of a web page to see which one performs better. The two variants (let’s call them A and B) are shown to similar visitors at the same time, and the one that gives a better conversion rate wins. Now, a user has the option to filter A/B testing sessions. The filters currently available are A/B Testing, All, and Non AB-Testing.

Admin Menu: Change in Menu Items: The menu items ‘Access Control’ and ‘Store Admin Control’ are now moved inside the ‘Admin’ menu item on the left navigation bar.

Icon Added to View the XHR Details: An icon is added that enables a user to expand HTTP entry in HTTP Request tab. This helps the user to view the XHR details more effectively.

Navigation to HTTP Request in Page Details Tab for Corresponding Sessions: When a user performs DDR to Ajax calls under the Action tab in the Replay section, it navigates to the HTTP request in page details tab of that particular session.

Addition of ‘Referrer URL’ Column: A new column ‘Referrer URL’ is added that displays the complete address of the previous webpage from where the user navigated to the current webpage. The user can view it in two ways:

  • Go to Search > Sessions > Entry Page > Pages tab > Referrer URL
  • Go to Search > Page > Referrer URL

Display of Different Colors for Different States of a Session: For a session, its three different states – already visited, currently opened, and last viewed are now displayed in three different colors – light grey, white, and dark grey respectively.

Detailed Event Data for Similar Events: For similar types of events, now a user can view the detailed event data by clicking the aggregated event data.

Edit Option Added for a Goal: For a variation, now a user can edit the already existing goal by clicking the ‘edit’ icon.

Extended Geo Map: Filter Based on Health Type and Specific Pattern: In Extended Geo Map, a user can now filter the stores based on health type and specific pattern. The options for health are Critical, Warning, and Healthy. When the user selects the option ‘By specific pattern’, a text box is displayed where the user can enter the specific pattern. In this case, only specific pattern related Stores are displayed.

Sessions Page: Filter Based on Event Type: A user can now filter the sessions page based on event type. This enables the user to check the pages that have any event.

Synthetic Monitoring (SM)

Capture Header Size: In the waterfall, the request and response header sizes are now captured. They are shown alongside Request Headers and Response Headers in brackets. Example: (Size: 123 bytes).

Improvements in Video Clip Settings Window: The user needs to provide the value of frequency in milliseconds (ms). For quality, a range/slider input field is provided instead of a number input field. When the user finishes selecting the quality value, it is reflected beside the slider.

‘Browser’ Column in the Page Average Report: A ‘Browser’ column in Page Average Reports is provided. It now becomes easy to identify from which browser the request was initiated.


NetTest is a Cavisson methodology used to identify the unique functional test cases that need to be automated to cover application functionality as observed in production deployment, using NetVision-captured data.

Key Components:

  • Test Case Generation: It processes real user sessions captured in NetVision, uses ML techniques (which consume the knowledge base collected so far), and generates quality test cases. NetTest’s core concepts, like Template and Page State, help in generating a test suite that covers all possible flows taken by real users.
  • Test Plan Management: Before executing an actual test case, the user has to create a test plan. A test plan may contain test cases of the same category (feature), or tests of the same priority. NetTest provides an easy-to-use interface to manage test plans.
  • Test Plan Execution: Two things are needed for test plan execution:
    • Execution Profile: Guides NetTest on how to execute a test plan.
    • Test Data Mapping: Another big challenge, which is taken care of in NetTest. The user can upload data sets in known formats like CSV, XLS, etc.
  • Dashboard and Reports: NetTest has a feature-rich, real-time dashboard to monitor a project’s testing life cycle closely.
  • Test Database Management: Execution of a test case requires some user inputs, such as name and password. These user inputs are stored in the test database. Test database management provides an interface to upload test datasets, define test data selection rules, and define form mapping.

Key Enhancements

  • Partial (Substring) Replacement in Deprecated Selector With New Selector: Earlier, during execution of a test case, the complete value of ‘oldSelector’ was replaced with ‘newSelector’. Now, the user can search for a small substring; if it is found in the ‘oldSelector’, only that part is replaced.
  • Multiple Selectors With ‘ns_js_checkpoint()’ API: Earlier, only CSSPATH was supported as selector with ns_js_checkpoint() API, whereas other click&script APIs support following selectors: ID, XPATH, CLASS, DOMSTRING, CSSPATH, NAME, SRC, HREF, ALT, TITLE and TEXT. All these selectors are now supported with ns_js_checkpoint() API.
  • Importing Multiple Test Cases: A user can now select multiple or all test cases and can import them at once.
  • Display of Script Name: The user can now view the name of the script for the imported test cases.
  • Stopping a Running Test: The user can now stop a test from UI after its execution by clicking the ‘Stop’ button.
  • Page Dump While Exporting Test Case: The user can now view the page dump while exporting a test.
  • Exporting of Imported Test Cases: A user can now export the imported test cases in an excel file.
  • Addition of Check Boxes in the ‘Imported TestCases Detail’ Table: Check boxes are added in the ‘Imported TestCases Detail’ table. This enables a user to select multiple test cases and edit them.
  • Option Added to Restart a Test: In the Test Execution History table, a ‘Resume Stopped Test’ option is added under the Action column to restart a test from where it was stopped.
  • Timewise Option Added to Specify Time Period: In the Dashboard of the Test Plan Execution section, on selecting the ‘Timewise’ option from the drop-down list, a user can now specify the start time and end time. This displays all the executed graphs for the selected time period.


Sparkline for All the Metrics: A sparkline is used to display charts in the search results. It is designed to display time-based trends that help the user view variations within a particular time range. Sparklines are now created for all the metrics – count, count/sec, and average response time.

Sparkline Support in VIS Query: While doing a VIS query search, a user can view a sparkline along with the table chart by selecting the ‘Sparkline’ check box.

Custom Time Interval in VIS Query: A user can now set the custom time interval like other time intervals in Date Histogram Aggregation.

Example: VIS count() by @timestamp[interval=Custom customInterval=2h customLabel=time]

To use the custom interval, set the interval value to ‘Custom’ and provide the interval duration in ‘customInterval’.

Hyperlink on Extracted Fields: Some extracted fields contain data of URL type. The user can now configure hyperlinks on such extracted fields.

Auto Suggestion in Search Query While Typing: While typing a search query, all fields starting with the typed letter are auto-suggested. The user can select a field using the arrow keys while pressing the ‘SHIFT’ key.

Support of Superuser: A superuser is now created when the UI loads and has the option to delete other admin users.

Token Meter: The token bucket algorithm is based on a fixed-capacity bucket into which tokens, normally representing a unit of bytes or a single log line, are added at a fixed rate. The token bucket algorithm can be conceptually understood as follows:

  • A token is added to the bucket at every interval.
  • The bucket can hold at most ‘n’ tokens. If a token arrives when the bucket is full, it is discarded.
  • When a packet of ‘n’ bytes arrives:
    • If at least ‘n’ tokens are in the bucket, ‘n’ tokens are removed from the bucket, and the packet is sent to the network.
    • If fewer than ‘n’ tokens are available, no tokens are removed from the bucket, and the packet is discarded.

Token meter can be handled using these settings:

  • bucket.size – for bucket size.
  • token.size – for token size.
  • bucket.refresh – for adding tokens into bucket at some interval.
  • token.bucket.enable – to enable token meter.
  • action.on.bucket.full – to decide whether to delay or drop events.
  • delay.in.millis – the sleep duration in milliseconds.
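The refill-and-consume cycle behind these settings can be tied together in a short sketch. This is an illustrative Java model of the token bucket algorithm only, not NFAgent internals; the class and method names are assumptions, with bucket.size and bucket.refresh mapped onto constructor parameters.

```java
// Minimal token-bucket sketch mirroring the settings above
// (bucket.size, bucket.refresh). Names are illustrative only.
public class TokenMeter {
    private final long bucketSize;    // bucket.size: max tokens the bucket can hold
    private final long refreshMillis; // bucket.refresh: one token added per interval
    private long tokens;
    private long lastRefill;

    public TokenMeter(long bucketSize, long refreshMillis) {
        this.bucketSize = bucketSize;
        this.refreshMillis = refreshMillis;
        this.tokens = bucketSize; // start with a full bucket
        this.lastRefill = System.currentTimeMillis();
    }

    private void refill() {
        long now = System.currentTimeMillis();
        long intervals = (now - lastRefill) / refreshMillis;
        if (intervals > 0) {
            // One token per elapsed interval; tokens beyond capacity are discarded.
            tokens = Math.min(bucketSize, tokens + intervals);
            lastRefill = now;
        }
    }

    // Returns true if n tokens were available and consumed; false means the
    // event must be delayed or dropped (the action.on.bucket.full decision).
    public synchronized boolean tryConsume(long n) {
        refill();
        if (tokens >= n) {
            tokens -= n;
            return true;
        }
        return false;
    }
}
```

A caller would invoke tryConsume() once per log event (or once per byte unit) and apply the configured delay or drop action when it returns false.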

Coloring in Charts for Specific Values and Columns: Color-coding for individual table columns is provided to add context or focus to the visualization. The following color modes are available for column coloring:

  • Values
  • Ranges

To design the Data-table charts using the query, include ‘enableColor=true’ in the query. To color either or both of the metrics and bucket aggregations, the query syntax would be:

…|VIS [Aggregation](enableColor=true) by <fieldName>(enableColor=true)

The rest of the coloring options can be chosen via the paintbrush icon of the table chart.

NF Agent

Delay between Timestamp of Logs and Filter at NFAgent Pipeline: With this enhancement, the user can see the delay between the timestamps of incoming logs and the filter stage at NFAgent, exposed as a field named delay_in_sec in the pipeline metrics.

Implementation of NF Agent’s Monitoring API to Monitor Basic Pipeline and Hot Threads Metrics: For monitoring NFAgent without JMX, the user can call NFAgent monitoring APIs to get information about NFAgent, pipeline information, and hot threads information. The user needs to set the monitoring value to true in the NFAgent configuration file.

  • Pipeline information includes event in, event out, event filter, duration in milliseconds, delay in seconds, and so on.
  • A hot thread is a Java thread that has high CPU usage and executes for a longer than normal period of time.

NF Alert

Display of Environment List in Index Pattern Field: A user can now view the environment list in the Index Pattern field and read the environment list from the UI.

Clone / Copy of ‘NFAlert’ Rule from Existing ‘NFAlert’ Rule: The user can now create a clone / copy of an ‘NFAlert’ rule from an existing rule via the UI. When users need a similar type of rule file with some changes, it is easier to create it by copying / cloning instead of making a new rule file from scratch.


Fetchlog Construct Implementation: Fetchlog construct is implemented for fetching logs corresponding to the selected metrics for the given time range.

Support of Full Time Stamp Format in Search Query: A user can search a query with full timestamp (that is, YYYY-MM-DD HH:mm:ss.SSS) in the startTime and endTime parameters of fetchlog.

‘Top’ Construct: The top command finds the most common values of a field, in terms of count and percentage of occurrence.

Example: *|top 3 loglevel

The result contains the top 3 log level values, in terms of occurrence count and percentage.

Log Generation for Deletion of Index: If any index is deleted from the database, a log is generated for the same.

‘Merge’ Construct: It merges the other fields based on one or more fieldnames.

Syntax: *|merge [include|exclude]={values} [first|last] <field_name> by <field_name>

Example: *|merge exclude={infolog} type by server

This construct should be used when there is a high level of redundancy in the data, that is, when the majority of field values are something like ‘NA’ or null. To remove such redundant values (e.g. NA, null), ‘merge’ can be applied to one or more fields, and the remaining fields are merged row-wise based on the given values. After the merge, the given fields contain only unique values, and the rows of the remaining fields are merged.

‘As’ and ‘By’ Options in Streamstats Query Search: While doing a streamstats query search, a user can now use the ‘as’ and ‘by’ options. If the ‘as’ option is used in the query, the alias name is used in place of the cumulative stats. If the ‘by’ option is used, events are first grouped according to the field list, and then the streamstats function is performed.

‘now()’ Function in Search Query: The function ‘now()’ is now supported individually. The now() function takes no arguments and returns the time the search was started. It is often used with other date and time functions. The time returned by the now() function is represented in UNIX time, that is, seconds since the Epoch.


Migration to Angular 7: The User Interface is migrated from JSP to Angular 7. Apart from all the existing features, two new features are introduced in the migrated application:

  1. Network Interface Configuration: The user can add, delete, and update the Interface Name, Interface IP, and Interface NetBits configured through Network Interface configuration.
  2. NetChannel Setting Configuration: The user can define the Maximum number of Tunnels, Maximum size of Debug File and Error file, and Debug level from NetChannel setting configuration.

Bug Fixes

The following issues are fixed in 4.2.0.

  • Test was running on three generators after ramp down.
  • Issues with Page Dump.
  • Script user distribution to generators is not correct.
  • Not able to apply RTC both linearly and simultaneously.
  • Transaction details link disabled in web dashboard.
  • Pagination drop-down menu is not user friendly in 4.1.15 #66.
  • Test is getting failed due to shared memory issue with 12 NVMs.
  • Elapsed time is not being increased for the first 10 samples (5 minutes) in a NetCloud test.
  • Logs regarding “Discarding unhealthy generator” showing in Progress Report.
  • Unified Dashboard: getting an error message on reloading the DDR_Flowpath page.
  • Pie graph is not working properly.
  • Compare with Baseline trend graph lags behind the currently running graph.
  • UI usability issue: full flowpath instance name is not shown as horizontal space is less.
  • Application End to End View is blank even if load is present, due to a GDF change of the dmesg monitor.
  • Not able to add profile in NetDiagnostics Settings from Scenario UI.
  • Not getting data for Kubernetes monitors owing to region and zone check.
  • BCI Exception: WebSocketException: Flushing frames to the server failed.
  • Not getting the complete BT transaction list when going back to click Tier status from Application End to End View.
  • On opening the exception report from Application End to End View, the output of the query gets saved, which modifies the output of Drill Down from the BT trend report.
  • In case of the graphical query builder (Widget Settings > Advance Filter > Enable Graph Sample Filter), drill down to BT trends gives a “No Record Found” output.
  • BT Category is passed in the query on drilling down from one tier after traversing back from the normal transaction scorecard.
  • Dashboard not loading.
  • Unusual behavior in Alert Maintenance Configuration from UI and REST API.
  • Slowness in ND boxes (SNB, CNC, ACC) due to high load average.
  • Dummy machines are shown in the KPI.
  • Issue of Application End to End View in 4.1.15 #74 build (UAT).
  • Performance issue: probable performance issue caused in the newDataPacket call (e.g. getTestDuration() method on top of the stack for multiple threads).
  • [For 4.2.0] Handle multiple rendering of Highcharts on a panel on operations like show/hide graphs from the lower panel.
  • Performance issue in Show Related Members.
  • Transaction UI issues.
  • Disk space is getting full due to ndp core.
  • Control connection for snb-ui tiers is not establishing and agent logs are not updating.
  • Showing no data found in BT Trend window (MultiDC).
  • ndLogger threads observed in Hotspots (BigData, Production).
  • NDC giving start_nd response in 15 minutes / getting stuck.
  • DDR from NFUI to the Unified Dashboard is not redirecting.
  • Getting “java.lang.IllegalAccessException” exception in agent error logs.
  • No data is coming for View Thread Dump for MultiDC.
  • Heap dump analyser is not working with ALL; need to provide extension while taking dump.
  • Alert counts are not updating in Application End to End View.
  • Total server count is showing inaccurately in Application End to End View.
  • Tooltip for total HotSpot duration should show a meaningful message.
  • “Metadata not available” pop-up appears while drilling down to BT by IP from Application End to End View, in case of ALL DC selection.
  • Monitors: getting garbage node in application tree metrics.
  • NDC restarted automatically and without a core.
  • Method monitors are not reflected in the GUI.
  • Multiple scheduler reports generated at the same time.
  • Getting dip in request and response for ACC UI, due to multiple heap dumps being taken as configured in Alert Policy.
  • Not able to disable putDelayInMethod; delay is still present even after successfully disabling the keyword.
  • Getting blank screen after logging in to the machine.
  • Not able to add task from scheduler.
  • Method is not changed after applying RTC.
  • Method Monitor: Total Method Invocation Count cumulative graph is going down.
  • Order and Revenue data is not updating in nde.cavisson.com due to “SCS_kpi_data.fav” file content mismatch after build upgrade.
  • Incorrect plotting of graphs as some extra VMs are shown for the cnc service, because NDC did not delete the server from topology.
  • Test got restarted on CNC and Solr machines (MOSAIC).
  • MS SQL Monitor: DB monitoring option not getting enabled in case of outbound connection.
  • Not able to apply RTC as control connection is getting closed continuously in the OverOps feature.
  • Sparkline is not getting generated on selecting multiple metrics aggregations.
  • Service names starting with an underscore (__) are not displayed in the GUI.
  • Transaction error window taking a huge time to open.
  • Slowness in ACC because the GDF is updating frequently, due to HAProxy monitor vectors going up and down frequently.
  • Thread mode: users became negative in ramp-down phase.
  • Error while opening search parameter in Script Manager.
  • Not able to launch Script Manager if project/subproject directories are not available in the scenarios directory.
  • Slowness in opening Page Dump (online test).
  • Page dump look and feel issues from GUI in web dashboard.
  • MultiDC: wrong start time and end time passed while applying custom time in HTML report.
  • Users not ramping down in group-based duration/session test.
  • While running a test from GUI, parsing error is showing an incomplete error message.
  • Scheduler reports showing blank.
  • Not able to drill down for selected metric in tabular widget (vector based).
  • Getting zero data in report after graph type changes from derived to normal.
  • In Page Performance Detail, the values of Avg Onload Time and Avg TTDI are the same for all pages.
  • Web Dashboard: getting less data in NV metrics.
  • No records show under the “connection type” column in the session table; while applying a filter for connection type, the filter shows the index value instead of the connection type name.
  • In case of drill down by page, the event name or particular error message name is not shown after selecting a particular error message.
  • Brand names on the Catalog page are not displayed in NV Replay.
  • [4.2.0] Metric baseline compare feature is not enabled for the NV product.
  • Store and terminal filter is not working in Active Session.
  • Observing unexpected fluctuations in graphs on master and slave machines.
  • Page dumps missing in NV Ecomm.
  • Page Detail is getting cached.
  • Page Performance Overview: operating system shows only Windows, but there are a lot of sessions from Linux also.
  • Cavisson DevOps: need to configure network metrics graph.
  • User ID is not masked on the Login page.
  • Unable to add test case.

Performance Issue Fixes in NetDiagnostics

General Slowness Issues





Unresponsive UI on applying a time period of more than the last 30 days

We have made some changes in the configuration file. Aggregation and transpose settings are now set by default in the application.


Slowness in Multi DC Environment

Optimized the listing of templates and generated reports when a large number of files are available.


Observing dashboard slowness in UI tree drag/drop of graphs and open/merge operations

The server became unresponsive and sometimes very slow due to updating the UI metrics/new metrics. We provided an extra configuration that controls these updates after a specified interval.


Performance issues in dashboard operations:

  • Taking time in applying a time period (affected features: preset/time period).
  • Straight line in graphs due to some samples missed in the cache.
  • Cache takes time to build on server start/restart.
  • Server unresponsive due to a large number of API calls.

Fixed the issue of a straight line sometimes appearing in graphs due to packet loss in the cache.

Added a limit on the BT Transactions data-fetch API, as it was hogging the server due to many concurrent API calls. This limit controls the flooding of API requests on the server, which was sometimes the main reason for slow UI operations.
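A limit of this kind can be sketched with a semaphore that rejects calls beyond a configured concurrency. This is an illustrative sketch under assumed names, not the actual server code; the class name, limit, and method shape are assumptions.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch: cap the number of concurrent calls into a heavy data-fetch API.
// Requests beyond the limit are rejected instead of flooding the server.
public class BoundedFetcher {
    private final Semaphore permits;

    public BoundedFetcher(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Runs the call only if a permit is free; returns null when over the limit
    // (a real server would respond with an error or a retry hint instead).
    public String fetch(Supplier<String> call) {
        if (!permits.tryAcquire()) {
            return null; // over the concurrency limit: reject the request
        }
        try {
            return call.get();
        } finally {
            permits.release(); // always free the permit for the next caller
        }
    }
}
```

tryAcquire() makes the rejection non-blocking, so excess requests fail fast rather than queuing up and slowing UI operations further.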


Slowness on applying ‘Time Period’ and ‘View By’ operations

Optimized the time by reading only the last Graph Definition File (GDF) per partition. This gives a huge performance improvement in operations like applying a time period and changing the view-by.
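A minimal sketch of the idea, assuming each partition's GDF files can be ordered by a sequence number; the map shape and names are illustrative, not the actual implementation:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: instead of reading every GDF file in every partition, keep only
// the latest one per partition. "Latest" is assumed to be the file with the
// highest sequence number found on disk.
public class GdfSelector {
    // files: partition name -> GDF sequence numbers found for that partition
    public static Map<String, Long> latestPerPartition(Map<String, List<Long>> files) {
        Map<String, Long> latest = new HashMap<>();
        for (Map.Entry<String, List<Long>> e : files.entrySet()) {
            // Only the maximum sequence number survives; all older GDFs are skipped.
            latest.put(e.getKey(), Collections.max(e.getValue()));
        }
        return latest;
    }
}
```

The win comes from the reader opening one file per partition instead of all of them, which matters most when partitions accumulate many historical GDFs.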


Favorite loading taking much time due to slow generation of percentile data

  • Generating past data and metadata using the cache with the ‘show discontinued data’ option in the time period UI, if available.
  • Making the memory-mapped setting the default for reading data files.
  • Controlling logs using debug levels while generating percentile data.
  • Removing redundant work from percentile data generation using normal data files.


Slowness due to high load average and high CPU usage in the NDP process

Controlled the maximum CPU usage by NDP.


Slowness while fetching metrics data that was deleted in the past

  • Discontinued data creation now uses the currently cached data (normal samples).
  • Used a linked-list structure instead of an array, due to dynamic memory allocations, to avoid the time spent growing the array.


Probable performance issue caused by new sample updates in the UI

  • Fetching discontinued data using the normal cache (sample-wise cache).
  • Avoided calculating the test run duration in multiple places, as it is expensive.
  • Used a linked-list structure instead of an array due to dynamic memory allocations.


ND responding very slowly due to multiple users concurrently using the system

  • An internal map data structure was not initialized with a capacity, due to which it allocated new memory on every overflow; it is now pre-sized.
  • String splitting was taking time due to usage in multiple places; the calls are now combined and done only once.
  • Alert logging was enabled; we disabled it.
  • DeepCopy was using a serialization-based call instead of in-memory cloning of objects, because the keyword was set to off by default.
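The first two fixes can be illustrated with a small sketch: pre-sizing a HashMap so it never rehashes for a known entry count, and splitting each input string once instead of once per use. The class name, delimiter, and data shape are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of two common fixes for this class of slowness:
// 1. Pre-size the map so it never rehashes for a known entry count.
// 2. Split each input line once and reuse both halves.
public class SampleIndex {
    public static Map<String, String> index(String[] lines, int expectedEntries) {
        // Capacity chosen so expectedEntries fit under the default 0.75 load factor,
        // avoiding a new allocation and rehash on every overflow.
        Map<String, String> map = new HashMap<>((int) (expectedEntries / 0.75f) + 1);
        for (String line : lines) {
            String[] parts = line.split("\\|", 2); // split once, reuse both halves
            if (parts.length == 2) {
                map.put(parts[0], parts[1]);
            }
        }
        return map;
    }
}
```

Without the capacity argument, a HashMap growing past its threshold reallocates and rehashes every entry, which is exactly the per-overflow allocation cost described above.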


Favorites with discontinued data enabled taking time to load

On every favorite load, discontinued data was created by default. Now we create discontinued data only if it is enabled/active in the current favorite.

In Specific User Operations





Performance issue while doing Advance open/merge operations (Merge with 74419)

  • Removed redundant calls to the split method in the loop while creating data on open/merge operations.
  • Checked the alert overlay setting in the favorite before fetching alert overlay data; this has improved favorite load/sample update as well.


Tree filter options to show only limited metrics/groups in MultiDC

We have implemented functionality that allows users to configure only those groups that need to be displayed in the UI graph tree.


Dashboard page becomes unresponsive while showing/hiding graphs from the lower panel

On selecting/hiding graphs from the lower panel of the dashboard, the chart was rendered multiple times on the selected panel; it now renders only once during lower panel operations.

Due to High Memory Consumption and GC





Dashboard becomes unresponsive due to high memory utilization by the server, and the server is frequently stuck in GC

This was due to a very large number of entries being added to a cache map by the alert thread, consuming all of Tomcat’s memory.

Due to Large Favorites (Target 4.2.1)

A large number of panels (>50), each with a large number of graphs

  • Issues
    • A lot of data points need to be picked from the metric DB and sent to the GUI, which takes time.
    • Large data transfers (>30 MB) over the network take time.
    • All panels are displayed together after the data is received; rendering takes time.
  • Solution – target is to show all panels in the viewport in less than 5 seconds
    • Panel-wise data request and rendering.
    • Lazy loading of data for panels that are not visible.