Release Note 4.11.0
This section contains enhancements and new features added to the common product modules.
The end-to-end widget now shows the health of connection links based on error %, calls/second (calls made to the integration point), and average response time. The color of a connection link changes based on these metrics. In large deployments, this categorization helps users identify problematic components. A connection's color changes to red when the change detected in these three metrics across the last two successive 10-minute cycles is greater than 20%. For example, if the error % causes the connection's health to be classified as erroneous at 4 PM, the two previous ten-minute cycles, 3:40 to 3:50 and 3:50 to 4:00, are analyzed to check whether the error % value at 4 PM increased by more than 20%.
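The rule described above can be sketched as follows. This is an illustrative approximation, not the product's actual code; the metric names and the two-cycle comparison are assumptions drawn from the description.

```python
# Hypothetical sketch of the link-health rule: a link turns red when any of
# error %, calls/second, or average response time has changed by more than
# 20% across the last two successive 10-minute cycles.

THRESHOLD = 0.20  # 20% change

def pct_change(old, new):
    """Relative change from old to new; treats a zero baseline as no change."""
    if old == 0:
        return 0.0
    return (new - old) / old

def link_color(cycles):
    """cycles: list of dicts with 'error_pct', 'calls_per_sec', 'avg_resp_ms',
    ordered oldest -> newest; needs at least two 10-minute cycles."""
    if len(cycles) < 2:
        return "green"
    prev, curr = cycles[-2], cycles[-1]
    for metric in ("error_pct", "calls_per_sec", "avg_resp_ms"):
        if pct_change(prev[metric], curr[metric]) > THRESHOLD:
            return "red"
    return "green"
```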
The end-to-end widget now supports drill-down for both links and integration points. A link denotes the connection between a node and an integration point. When a link has been categorized as erroneous by the Link Health enhancement described above, users can now check the metrics affecting, and affected by, that link. With this drill-down, you can see all the metrics causing the issue as well as all the metrics that are affected by the link's health.
An integration point is the end-point of the tier hierarchy that stores and provides information on the requests sent to the server, such as DB calls, coherence calls, and so on. Integration points store information on, and service, the requests sent by the application. Users can now see all the flowpaths/business transactions being served by a selected integration point. Furthermore, an option is provided to filter flowpaths/business transactions based on response time. This is highly useful for identifying all, or only slow/anomalous, transactions served via a specific integration point.
Users can now perform the following operations on integration points:
- Rename an integration point
- Map an integration point to a tier
- Change the icon of the integration point
- Show/hide integration point
- Rename/reset multiple integration point names
Users can now go directly to the alerts generated on a particular node via the end-to-end widget. This helps users get tier/server-specific alerts and reduces the mean time to detect (MTTD), an important metric for identifying and removing issues.
A new feature has been added to denote a node’s health at the end-to-end widget. A node can be a collection of one or more servers. The health will be determined based on the following:
- The type of alert that is currently active over a particular node
- The number of servers on which the alert is active.
- In case of multiple levels of alerts on a server, health is categorized by the highest severity; for example, if both a critical and a major alert are active, the critical alert determines the node's health.
The node’s health will be denoted in the same color as the alert level.
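A minimal sketch of the severity rule above (the severity ranking and color mapping are illustrative assumptions, not the product's actual values):

```python
# Illustrative sketch: node health takes the color of the highest-severity
# alert active on any of the node's servers.
SEVERITY_RANK = {"critical": 3, "major": 2, "minor": 1}
SEVERITY_COLOR = {"critical": "red", "major": "orange", "minor": "yellow"}

def node_health(active_alerts):
    """active_alerts: severities of alerts active across the node's servers."""
    if not active_alerts:
        return "green"
    worst = max(active_alerts, key=lambda s: SEVERITY_RANK[s])
    return SEVERITY_COLOR[worst]
```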
Users can view metrics of all business transactions processed on a particular node, via the end-to-end widget. With this feature, users will be able to instantly analyze the performance of all business transactions of the selected node and identify anomalous transactions easily. All supported business transaction metrics will be shown in the resultant dashboard.
Users can now view a custom Excel report with several time formats, including preset, custom, and past time periods. Earlier, users could only view data by specifying an explicit time range. With this enhancement, users can provide relative values like "2 Hours Back" or "24 Hours Back", and the report is generated by computing the time range with respect to the current time.
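Resolving such a relative time phrase against the current time could look like the following sketch; the accepted phrase formats are assumptions, not the product's actual grammar.

```python
# Hedged sketch: turn a relative phrase like "2 Hours Back" into a concrete
# (start, end) time range ending at the current time.
import re
from datetime import datetime, timedelta

UNITS = {"minute": "minutes", "hour": "hours", "day": "days"}

def resolve_range(spec, now=None):
    """Turn e.g. '2 Hours Back' into a (start, end) pair ending at `now`."""
    now = now or datetime.now()
    m = re.fullmatch(r"(\d+)\s+(minute|hour|day)s?\s+back", spec.strip(), re.I)
    if not m:
        raise ValueError(f"unsupported time spec: {spec!r}")
    amount, unit = int(m.group(1)), m.group(2).lower()
    start = now - timedelta(**{UNITS[unit]: amount})
    return start, now
```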
Users will be able to apply RCA when the alert overlay is applied on the widget. RCA can be applied on individual plot bands of the overlay.
The following technologies are now available in the dashboard:
- MongoDB
- MySQL
- MS SQL
- HAProxy
- Apache Tomcat
- IBM DB2
- AWS EC2
- AWS Kafka MSK
- AWS Logs
- AWS RDS
- AWS SQS
- AWS DynamoDB
- AWS API Gateway
- AWS CloudFront
- AWS Lambda
These technologies benefit users because they don't have to start from scratch; all the metrics and data are available by default in the dashboard.
In the auto-monitor feature, important system, network, and JMX monitors are automatically configured. The following monitors are configured with the products:
- Windows System
- Windows Network
Earlier, users had to configure these monitors manually for each product.
The Log Pattern and Log Data screens have been revamped. The user interface for configuring monitors is now more user friendly, and online help is provided to help users understand it better.
Users can configure a generic REST-based monitor through a generic REST UI. Earlier, users had to configure the monitor by manually editing a file.
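The kind of behavior such a monitor encapsulates might be sketched as below. This is an illustrative assumption of what "generic REST-based" means (poll a URL, extract one numeric value from JSON); the endpoint, path syntax, and field names are hypothetical.

```python
# Hedged sketch of a generic REST-based monitor check: fetch a JSON document
# and pull one numeric metric out of it by a dot-separated path.
import json
from urllib.request import urlopen

def poll_rest_metric(url, json_path, fetch=None):
    """`json_path` is a dot-separated path, e.g. "stats.active_sessions".
    `fetch` can be injected for testing; by default it performs an HTTP GET."""
    fetch = fetch or (lambda u: urlopen(u, timeout=5).read())
    doc = json.loads(fetch(url))
    for key in json_path.split("."):
        doc = doc[key]
    return float(doc)
```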
Load testing now supports HTTP/3 for performance testing. HTTP/3, the third major version of the Hypertext Transfer Protocol, is used to transmit data across the World Wide Web. HTTP/3 uses request methods, status codes, and message fields similar to earlier versions of the protocol, but it encodes them and maintains session state differently. Compared to earlier versions, HTTP/3 has lower latency and loads more quickly in real-world usage; in some circumstances it is over three times faster than HTTP/1.1, partly because the protocol adopted QUIC.
Support for uploading/downloading projects and subprojects has been added to the Project Administration UI. Through this, users can download or upload a complete project or subproject along with test assets like scripts, scenarios, test suites, check profiles, etc.
When users run a JMeter test from the Run JMeter Script feature, they can download/view the HTML report of that JMeter test from the log viewer.
Users now have the following options in Advanced Settings in DataDir:
- Version Commit
- Version Logs
The Replay Access Log screen has been revamped. Replay Access Log is a unique type of scenario that lets users replay, or re-run, a scenario using the logs recorded in an access log file. Access log files keep the logs of all requested URLs, including requests, responses, method types (GET/POST), HTTP headers, client IP, remote IP address, and JSESSIONID. The Replay Access Log scenario lets you recreate the production scenario, including the complete layer of faults that are not reproducible in the test environment, by replaying the access logs produced during live traffic.
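Parsing one line of such an access log might look like the following sketch. It assumes the common combined log format; the exact fields the product records may differ.

```python
# Hedged sketch: extract the replayable fields (client IP, method, URL,
# status) from one combined-format access log line.
import re

LOG_RE = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+'
)

def parse_access_log_line(line):
    """Return a dict of fields, or None if the line does not match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None
```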
The following parameters are now supported in the Scenario UI as well:
- Index dataset using file
- Random number parameter
- Unique number
- Scratch array
- Unique range
- Dataset using query
You can now add up to 1024 generators in a load test. Previously, the limit was 255.
The controller collects test metrics and percentile digest data from all load generators, aggregates them, and saves them in a time-series database (TSDB). With generator-wise test metrics, users can select how many generators' data to save generator-wise. This helps in analysis when specific locations have higher response times or errors. Generator-wise health metrics like CPU and memory are always saved in the TSDB. The number of generators is limited to 50; that is, generator-wise metrics can be saved for 100% of generators from each location, up to 50 generators.
Record WebSocket script through Script Manager
The Script Recorder can now record WebSocket APIs as well. WebSocket enables fast, secure, two-way communication between a client and a remote host without relying on multiple HTTP connections. It supports full-duplex, bi-directional messaging, which is great for real-time, low-latency messaging scenarios. The recorded WebSocket API is added to your script.
Note: Users can also add these APIs manually from the Add API screen.
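For context on what a recorder sees on the wire, the following sketch builds and decodes a masked client text frame per RFC 6455. It is simplified (payloads under 126 bytes only) and is not the product's recording code.

```python
# Illustrative sketch of the WebSocket wire format (RFC 6455): client-to-server
# text frames are masked with a 4-byte key XORed over the payload.
import os
import struct

def encode_text_frame(message, mask_key=None):
    payload = message.encode("utf-8")
    assert len(payload) < 126, "extended payload lengths omitted in this sketch"
    mask_key = mask_key or os.urandom(4)
    header = struct.pack("!BB", 0x81, 0x80 | len(payload))  # FIN+text, masked
    masked = bytes(b ^ mask_key[i % 4] for i, b in enumerate(payload))
    return header + mask_key + masked

def decode_text_frame(frame):
    length = frame[1] & 0x7F
    mask_key, masked = frame[2:6], frame[6:6 + length]
    return bytes(b ^ mask_key[i % 4] for i, b in enumerate(masked)).decode()
```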
Support for outbound connections has been added. An outbound connection is required to create a connection between NDC and CMON. It is available under Additional Settings -> NetDiagnostics Settings in the Scenario UI.
Puppeteer is an agent used for headless browser testing. With headless browser testing, users can run lightweight tests that consume fewer resources (CPU, memory, etc.). Using a headless browser, users can run more virtual-user tests than with normal RBU tests.
Users can now set drill-down settings for failed and normal sessions in the load test scenario.
Previously, the service resided in the correlation path /home/cavisson/Controller_4/hpd/correlation/default/default/services.
Now, if a project/subproject is shared with other machines, services are moved along with scripts and scenarios.
Monitoring and search based on tags in business transactions. Tags organize data coming from a large number of sources and/or to a large number of accounts. They identify teams, roles, environments, or regions, so you know who is responsible for what, and they organize and search dashboards and workloads to query and chart APM data. Tags are useful for organizing data at a high level and for adding more fine-grained detail, like capturing user names or other high-cardinality values.
Support for the auto-discovery feature has been added to the PHP agent. Users can get all application methods that can be instrumented. The auto-discovery service minimizes user configuration and deployment steps.
Support of Request Method-Based Business Transaction in Pattern-based BT
Support for request-method and URL-parameter based business transactions has been added to the PHP agent. With this feature, users can categorize requests according to their requirements, whether method-based or URL-parameter based.
Couchbase DB issues with PHP Agent
Support for the latest version of Couchbase DB is now provided. Couchbase is a distributed NoSQL cloud database. It started as an in-memory cache and was re-architected into a persistent storage system. It brings the power of NoSQL to the edge and provides fast, efficient bidirectional synchronization of data between the edge and the cloud.
ND-NF integration support is provided. The server logs, such as access logs and error logs, captured in NetForest can be drilled down further to get additional important information about every request. This gives users a better understanding of how the application works. Users can also find errors and check whether services are running properly.
ND-NV integration based on HTTP cookies is now supported in the Python agent. ND is now integrated with NV, and the system helps users navigate to detailed reports on NetVision pages. It also helps users navigate to NetVision session replay based on HTTP cookies and headers.
Support for third-party Go libraries on the gRPC server, i.e., new frameworks, has been added. Some Go applications use these frameworks. Supporting a framework means the agent can capture the entry point to generate records; gRPC, a modern open-source, high-performance Remote Procedure Call (RPC) framework, can run in any environment.
Users can capture transactions made via Akka, a set of libraries for designing scalable, resilient systems. For details on Akka, refer to Introduction to Actors in the Akka documentation.
Adding Visitor Support to NV
We are adding visitor support to NV. You can now see the number of visitors, the pages they navigated in a certain period of time, and other information captured by our agent. You can generate a report of the overall visitor journey.
DDR support for frustrating/satisfying/tolerating user metrics dashboard based on UX score
We previously added support in UX for segmenting users into three types based on their journey in the application. Users can now drill down reports in the metrics dashboard by user type. We currently support the frustrating-user, satisfying-user, and tolerating-user DDR reports on the metrics dashboard. This enhancement covers the requirement for DDR and filters based on the user's application experience score in NetVision. The UX score provides a single metric that combines performance analysis and error detection. Users should be able to:
- Filter the sessions based on the user experience score.
- DDR from the dashboard to Sessions based on experience score.
Filter For User Sessions
- NV filters out sessions based on UX score, i.e., Satisfying User, Tolerating User, and Frustrating User.
- The NV session screen had the filters All, Struggling, and Healthy; it now also has Frustrating, Satisfying, and Tolerating.
- On clicking any of these, NV sessions are filtered according to the selected UX score.
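The filtering behavior above can be sketched as a simple Apdex-style classification. The 0.75/0.5 thresholds and the 0..1 score range are illustrative assumptions, not the product's actual cutoffs.

```python
# Hedged sketch: bucket sessions by UX score into the three filter categories.
def classify_session(ux_score):
    """Map an assumed 0..1 UX score onto the filter buckets shown in the UI."""
    if ux_score >= 0.75:
        return "Satisfying"
    if ux_score >= 0.5:
        return "Tolerating"
    return "Frustrating"

def filter_sessions(sessions, bucket):
    """sessions: list of (session_id, ux_score) pairs."""
    return [sid for sid, score in sessions if classify_session(score) == bucket]
```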
Drill down to User Sessions from Dashboard
- In the dashboard, with respect to Channel, we have metrics for frustrating, tolerating, and satisfying sessions, for which data can be seen in graphs.
- For these graphs, the following DDR options are available:
- All sessions
- Session with Events
- Session with struggling Event
- Session with high duration
- Session with high page duration
- Bounce Session
- Session with Order value
The following is now added:
- Satisfying Session
- Frustrating Session
- Tolerating Session
Removal of NV Monitors from Netstorm Source
Previously, NV monitors had to be configured from the NetStorm source. The code migration is now done, and you can configure NV monitors from the NV source.
Support for New Alert in Netvision
Many new alerts have been introduced in NetVision, specific to sessions and funnels.
Domain Device Query Optimization
Earlier in NetVision, the server ran two queries, based on time duration and frequency count, for domains and devices. Whenever the server came up, these queries ran and sometimes took a long time for multiple reasons. To avoid running the queries again, we now keep this information in a cache, so that the next time,
- if the difference between the current time and end time is within the duration range or
- if the current frequency is less than the previous frequency saved in the file, data is taken from the cache rather than running the query.
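The two cache conditions above can be sketched as follows; the field names and the decision to combine the conditions with "or" are assumptions drawn from the description.

```python
# Hedged sketch of the cache rule: reuse the cached domain/device result
# instead of re-running the query when either condition holds.
def should_use_cache(cache, current_time, current_frequency):
    """cache: dict with 'end_time', 'duration', 'frequency' (or None)."""
    if cache is None:
        return False
    within_duration = (current_time - cache["end_time"]) <= cache["duration"]
    within_frequency = current_frequency < cache["frequency"]
    return within_duration or within_frequency
```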
The session parser now supports store ID and terminal ID per session when drilling down. In this enhancement, two new dimensions, store IDs and terminal IDs, were introduced in the session aggregate table in addition to other basic dimensions like browser IDs. These dimensions have also been introduced for the custom report enhancement.
Support for DDR on Derived Graph in NV Metrics
For DDR, NV expects the vector name of a graph to be specific, but in derived graphs vector names are not fixed and may even be "NA". Handling has therefore been added for DDR when no filters are available for drill-down.
User Segment comparison in page performance
In Page Performance, users can now differentiate page performance based on user segment. Page performance can be compared for different user segments over the same time period.
The open causal metric gives the cause and effect metrics for any metric. Previously, it applied to custom metrics only; with this enhancement, the feature also applies to derived metrics.
Using this feature, users can find cause metrics for root cause analysis using the classification of metrics. Because the classification is stored with metadata in the TSDB, users can also perform integration-point root cause analysis in case of a spike at integration points.
The curator needs the following configuration before it can be used with NFDB for retention management of indices:
- Setup of host and port to connect with NFDB.
- Setup of a cronjob to execute the curator action at scheduled intervals.
- Setup of the paths to curator.yml and delete_indices.yml for curator command execution in the cronjob, which depend on the NFDB installation path on the target machine.
This feature automates all the configurations needed for the curator to work with NFDB, making the process simpler for the end user.
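The configuration being automated might look like the following hedged sketch, using standard Elasticsearch Curator file formats. The host, port, retention period, and file layout are illustrative placeholders, not the product's actual values.

```yaml
# curator.yml -- client connection settings (host/port are placeholders)
client:
  hosts:
    - 127.0.0.1
  port: 9200
logging:
  loglevel: INFO

# delete_indices.yml -- delete indices older than an assumed 30-day retention
actions:
  1:
    action: delete_indices
    description: Delete indices older than 30 days (illustrative retention)
    options:
      ignore_empty_list: True
    filters:
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 30
```

A cron entry such as `0 1 * * * curator --config /path/to/curator.yml /path/to/delete_indices.yml` (paths depending on where NFDB is installed) would then run the retention action daily.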
Automation of AIOPS installation. With this enhancement, AIOPS does not require any manual configuration and is installed along with ND. If the user installs Cavbin with ND, AIOPS is installed and starts working automatically. If the user upgrades the build version, AIOPS is updated with the new code and RCA auto-starts.
Support for root cause analysis of derived metrics. The derived-metrics API is now called to get the metrics used by the derived metric, and RCA starts with those metrics.
The query builder helps users create the query they need with a few clicks, by selecting the required fields.
Loki integration support allows NFDB to integrate with Loki, a lightweight, performance-improved search engine that executes search queries over millions of records. The required enhancements have been implemented in the following NFDB constructs: Search, Rex, Where, and Makemv. The purpose of the Loki integration is lightweight, better-performing execution of NFDB queries.
The auto-discovery log parser feature provides users with zero-configuration support, which means NFAgent automatically determines parser configurations such as input, filter, and output (zero-configuration support for cmon_nfagent). It also automatically picks up log files and directories for well-known processes such as Tomcat, PostgreSQL, IIS, etc.
Whenever the user wants to debug a particular widget panel, there will be an option to debug in the dropdown menu. This option visibility can be turned on/off based on the global configuration section setting. The debug menu will have the following sub-options:
- Start: On clicking Start, the UI starts sending requests with an additional debug parameter set to true in the request header.
- Stop: The UI will stop sending the debug flag in the header.
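What the toggle changes on the wire can be sketched as below. The header name "debug" is taken from the description above; the rest (URL, helper name) is illustrative.

```python
# Hedged sketch: the same widget request, with or without the debug header.
from urllib.request import Request

def build_request(url, debug_enabled):
    """Build the widget-data request; add the debug header only when enabled."""
    headers = {"debug": "true"} if debug_enabled else {}
    return Request(url, headers=headers)
```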
This feature helps set a retention policy for the NF Curator from the UI.
- Status: Resolved, Built, Verified, Closed
- Target Milestone: 11.0
- Severity: Blocker, Critical, Major
- Business Unit: Client/Presales
Scenario UI - RBU | Able to add alphanumeric or special characters in all Video Clip Settings fields and in the Timeout for Capturing Page Stats field.
Getting error "cat: /proc/old/sessionid: No such file or directory" after build upgrade
Test initialization takes almost 10 minutes from the NC machine; install.sh should consider machine configuration while tuning
Data is not shown in the test status report
BCIAgent | CPU time does not appear in method stats if the particular method is not instrumented using an instrumentation profile
Getting an issue while opening Dashboard to Default.
In Dashboard, Open Causal Metrics does not open
Getting a URL error after parameterizing the URL.
Presales | While clicking the Show Logs option, "Telecom" appears in the subject.
Presales | Data is coming but the database is shown as DOWN
NDE Default Dashboard | While clicking Drill-Down Report, the sub-option does not appear.
UAT || 4.10.0#31 | Getting a 4xx error while executing a PAS test from the Script UI.
Wrong icon is shown while collapsing/expanding in the account usage report.
Column name mismatch between the Choose Column option and the table in the account usage report.
For elapsed time format, when a graph is zoomed for a particular time range, the time starts again from 0.
Getting issues while renaming a flow file
PreSales | P2 | Taking time to show data from NV to ND
Unable to delete a file in a script.