Query Language Introduction

NetForest is used to search, view, and interact with data stored in NFDB. You can visualize your data in different ways and analyze it for better decision making.
You can quickly create visualizations, dashboards, and searches, and share them.

Discovering Your Data (Search)

To search through your data, click Search in the side navigation.
In the query bar at the top, you can enter a query to search your data. The current index pattern is displayed beneath the search bar. Different index patterns are provided so that a search runs only against the relevant indexes.
New index patterns can be created from the Settings menu on the left. The list of fields can also be seen by selecting a particular index pattern.
Searches can be constructed by using the field names and the values you’re interested in. Comparison operators such as greater than (>), less than (<), or equals (=) and range queries can be used with numeric fields.
Multiple queries can be linked with the logical operators AND, OR, and NOT, all in uppercase, to create more complex queries.
httpstatuscode:>500 AND resptime:>10
This query returns all the logs where a server error occurred and the response time was greater than 10.
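
The AND of two comparisons above is just a conjunction of per-field predicates. As a rough illustration in plain Python (the sample log entries are invented; this is not how NetForest evaluates queries internally):

```python
# Hypothetical sample log entries; field names mirror the query above.
logs = [
    {"httpstatuscode": 502, "resptime": 12.4, "msg": "upstream timeout"},
    {"httpstatuscode": 200, "resptime": 15.0, "msg": "slow but ok"},
    {"httpstatuscode": 503, "resptime": 3.2,  "msg": "fast failure"},
]

# httpstatuscode:>500 AND resptime:>10 -- both predicates must hold.
matches = [log for log in logs
           if log["httpstatuscode"] > 500 and log["resptime"] > 10]
```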

In the top right corner you can select the time ranges you want to search for.
It has multiple options:

  • Quick: Most frequent time ranges used e.g. Today, Yesterday, etc.
  • Relative: You can specify ranges relative to the current time, e.g. in seconds/minutes/hours.
  • Absolute: You can specify absolute date time ranges.

Clicking the time option also reveals the Auto-refresh setting, where you can set an auto-refresh interval. It is off by default for performance reasons.

By default, all fields are visible in the result. To see field-wise values, click the twistie beside the time in any log entry.

You can change the sample size for the bar visualization at the top of the search results (the default value is Auto).

You can customize the order of the fields and the line on which they appear by changing _sourceformat in Settings > Advanced Settings > sourceformat.
For example: tier,server,instance,type,<br>,message
The above format displays tier, server, instance, and type in the specified order, and message on a new line; <br> stands for a line break.
By clicking any field under Available Fields in the left pane, you can see its distinct values with their counts and percentage usage in the result set (see the above screen-shot).
Beside every log entry you can see a link to NetDiagnostics (if the entry has flow path data) and a search icon (see the above screen-shot); clicking the icon returns all types of logs with the same Flow Path ID.

Quick Searches

Clicking any field value in the result adds that field and its value to the search bar and returns all the logs having the same value for that field. Clicking multiple fields keeps appending the fields and their values, joined with the AND logical operator.
To search for a specific string that appears in a log entry, select it and click the search icon that appears beside the text after selection; the string is automatically appended to the search bar query.
Note: Click the clear button beside the search bar to clear the query.

Save and Share Your Searches

  • Clicking the Save button at the top right saves the search for future reference.
  • Clicking the Share button at the top right provides a link to share the search with someone else.
  • Clicking the Open button at the top right provides the list of saved searches.
  • Clicking New allows you to create and save different searches.
  • You can also add these searches to dashboards.

Customize and Export Search View

If the result contains too much information and you only want to see certain fields' data, you can create a custom view and set it as the default view.
To create a custom view, hover over a field under Available Fields in the left pane and click the add button; this adds that column to the view. After adding all the desired columns, click the Save button; this saves the view along with the search. Tick 'set as default' if you want to make it the default view.
Clicking the Export button exports the current view and search data to an Excel sheet.

 

NetForest Queries

NetForest supports the following types of queries:

  • String query
  • Field-Based Queries
  • Logical Operators
  • Regular Expressions
  • Range Queries
  • JSON Based Queries
  • Other Queries

String Query

A query may consist of one or more words or a phrase. A phrase is a group of words surrounded by double quotation marks, such as “test search”.
Syntax: "Search string"
To search for all the transactions that contain the following message: “Data audited successfully”
Note: To search for an exact string, you need to wrap the string in double quotation marks. Without quotation marks, the search in the example would match any documents containing one of the following words: “Data” OR “audited” OR “successfully”

The asterisk "*" can be used as a wildcard that matches zero or more characters.

Field-Based Queries

NFUI allows you to search within specific fields:
Syntax: field:value
To view accesslog transactions only:
Ex: type:accesslog

Logical Operators

Logical operators (AND, OR, NOT) allow combining multiple sub-queries.
AND Operator: The AND operator matches documents where both terms exist anywhere in the text of a single document. This is equivalent to an intersection using sets. The symbol && can be used in place of the word AND.

OR Operator: The OR operator is the default conjunction operator. This means that if there is no Boolean operator between two terms, the OR operator is used. The OR operator links two terms and finds a matching document if either of the terms exist in a document. This is equivalent to a union using sets.

NOT Operator: The NOT operator excludes documents that contain the term after NOT. This is equivalent to a difference using sets. The symbol ! can be used in place of the word NOT.

Regular Expressions

NetForest also supports searching with regular expressions by wrapping the search string in forward slashes.
Example: Env:/stress[123]/

  • The period “.” can be used to represent any character.
  • The plus sign “+” can be used to repeat the preceding shortest pattern once or more times.
  • The asterisk "*" can be used to match the preceding shortest pattern zero or more times.
  • The question mark “?” makes the preceding shortest pattern optional. It matches zero or one times.
  • Parentheses “()” can be used to form sub-patterns. Ex:- (ab)+
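
These operators follow standard regex semantics and can be sanity-checked with any ordinary regex engine; a quick illustration in Python (the sample strings are arbitrary):

```python
import re

s = "aaabbb"
plus_ok  = bool(re.fullmatch(r"a+b+", s))          # "+" : one or more repetitions
star_ok  = bool(re.fullmatch(r"a*b*", s))          # "*" : zero or more repetitions
opt_ok   = bool(re.fullmatch(r"aaab?bbb", s))      # "?" : optional preceding pattern
dot_ok   = bool(re.fullmatch(r"a.{2}bbb", s))      # "." : any single character
group_ok = bool(re.fullmatch(r"(ab)+", "ababab"))  # "()" forms a sub-pattern
```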

Range Queries

Range queries allow a field to have values between the lower and upper bounds. The interval can include or exclude the bounds depending on the type of brackets that you use.
To search for slow transactions with a response time greater than or equal to 10ms:

resptime:[10 TO *]
OR
resptime:>=10
To search for all failure transactions:
httpstatuscode:[201 TO *]
OR
httpstatuscode:>200
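
Both spellings select the same interval on a numeric field; in plain Python terms (sample values only):

```python
# resptime:[10 TO *]  is equivalent to  resptime:>=10
resptimes = [3, 10, 25]
slow = [t for t in resptimes if t >= 10]

# httpstatuscode:[201 TO *]  is equivalent to  httpstatuscode:>200 for integer codes
codes = [200, 201, 404]
failures = [c for c in codes if c > 200]
```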

NetForest Query Language (Detailed)

Field Based Queries


All the fields available for search are listed in the left panel of the search view, or under Settings => Index Patterns.
To search for a value of a specific field, the syntax is:
key:value
For example:
server:h121618vaps2307
server:h*vaps*   

Logical Operator Based Queries


Logical operators such as AND, OR, and NOT can also be used in queries.
For example:
server:h121618vaps2307 OR server:h121618vaps2307 OR host:h121618vaps2307 AND instance:kls-api-07
Both server and instance must match:
server:h121618vaps2307 AND instance:kls-api-07
To exclude logs that match specific criteria, you can prefix the term with a minus (-), an exclamation mark (!), or the word NOT. Example:
server:h121618vaps230 AND -instance:kls-api-07

Range Queries


On number fields:
Greater than or equal to 10 ms:
resptime:[10 TO *]
resptime:>=10
Between 10 and 20:
responsetime:[10 TO 20]

Regex Queries


To write regex-based queries, wrap the pattern between forward slashes (/).
server:/server[13]/
Above query gives logs having server as server1 or server3.
type:/a{3}b{3}/
The above query searches for type values matching aaabbb.
type:/a{3,4}b{3,5}/
The above query returns logs whose type field has 3 or 4 a's followed by 3 to 5 b's.
type:/[abc]+/
The above query returns logs in which the type field starts with one or more a's, b's, or c's.
type:/[a-e]+/
The above query returns logs in which the type field starts with one or more characters in the range a to e, both inclusive.
type:/[abc\-]+/
The above query returns logs in which the type field starts with one or more a's, b's, c's, or hyphens (-). Since - is used as a range character in regex, it needs escaping.
type:/[^abc]+/
The above query returns logs in which the type field does not start with one or more a's, b's, or c's.
type:/[^a-e]+/
The above query returns logs in which the type field does not start with one or more characters in the range a to e, both inclusive.
name:/john~athon<1-3>/
The above query contains two operators: COMPLEMENT, i.e. ~, and range, i.e. <>. This query returns logs where name starts with john, followed by some character other than a, and ends with thon1, thon2, or thon3.
type:/a~(cb)def/
Above query returns logs where type field starts with a followed by any character other than cb and ends with def.
type:/aaa.+&.+bbb/
The ampersand "&" denotes intersection. The above query returns logs where the type field starts with aaa followed by one or more characters, and also ends with bbb preceded by one or more characters.
type:/@&~(foo.+)/
The at sign "@" matches any string in its entirety. The above query returns logs where the type field contains anything except strings beginning with "foo".
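
The interval and character-class operators above follow standard regex semantics and can be checked with an ordinary regex engine. A quick Python sketch (note that the COMPLEMENT ~, intersection &, and any-string @ operators are specific to the Lucene-style regex syntax NetForest uses and have no direct Python equivalent):

```python
import re

interval_ok = bool(re.fullmatch(r"a{3}b{3}", "aaabbb"))         # exact repetition counts
range_ok    = bool(re.fullmatch(r"a{3,4}b{3,5}", "aaaabbbbb"))  # 3-4 a's, then 3-5 b's
klass_ok    = bool(re.fullmatch(r"[abc]+", "cabba"))            # one or more of a, b, c
escaped_ok  = bool(re.fullmatch(r"[abc\-]+", "a-b-c"))          # literal '-' needs escaping
negated_ok  = bool(re.fullmatch(r"[^abc]+", "xyz"))             # anything except a, b, c
```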

Removing Duplicity


Users can remove duplicates from the indices using inbuilt NetForest functionality. Below is the syntax along with an example to illustrate this feature.
Syntax: *|dedup <Field Name>
Example: *|dedup tier
Output: If the query executes successfully and there are duplicate records (duplicate tier values in this case), the result is displayed as follows:


The number of unique records (hits) is displayed at the top. In this case, it is 1.

Dedup Query

A dedup query is used to remove duplicates based on the value of a field in the document.
NOT("error") | dedup 1 server
The above query returns logs where the "error" string is not present and keeps unique logs based on the value of the server field.
The number after the dedup keyword is the number of duplicates allowed; 1 means unique values.
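
A rough sketch of the dedup semantics in plain Python, assuming it keeps at most N documents per distinct value of the dedup field (the sample documents are invented):

```python
def dedup(docs, field, keep=1):
    """Keep at most `keep` documents per distinct value of `field`."""
    seen = {}
    result = []
    for doc in docs:
        key = doc.get(field)
        if seen.get(key, 0) < keep:
            seen[key] = seen.get(key, 0) + 1
            result.append(doc)
    return result

# mirrors: ... | dedup 1 server  (sample documents)
logs = [
    {"server": "web1", "msg": "a"},
    {"server": "web1", "msg": "b"},
    {"server": "web2", "msg": "c"},
]
unique = dedup(logs, "server", keep=1)
```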

Custom/Dynamic Field Extraction using Rex


Rex queries do not work on analyzed fields.
"/product?id=10" | rex api = "^\/([a-zA-Z]+)\/.*$" path
The above query extracts an api field from the path field (already extracted).
ciscreensize|rex screen=".*\"(\d{2,6}\sx\s\d{2,6}).*" message.keyword
The above query extracts a screen field from the message field. It searches the message for a pattern with an integer of length 2 to 6, followed by x, followed by an integer of length 2 to 6, e.g. 1024 x 768.
uripath:\/search*|rex keyword="^/.*?keyword=([a-zA-Z0-9+%]+)&.*$" uripath|rex corrid="^/.*&corrid=([a-zA-Z0-9\:\-\.]+)&.*$" uripath | table uripath,keyword,corrid
The above query filters logs where uripath starts with /search, extracts multiple fields (e.g. keyword, corrid) from uripath, and plots a table of these fields.
_exists_:type|rex abh="(cloud_alert)" type
The above query is used to match strings containing the "cloud_alert" text.
Output

_exists_:type|rex abh="(cloud.*)" type
The above query matches strings that start with 'cloud' followed by any characters.
_exists_:type|rex abh="(.*alert)" type
The above query matches strings that start with anything but end with 'alert'.
_exists_:type|rex abh="(.*oud_ale.*)" type
The above query matches strings that start and end with anything but contain 'oud_ale'.
_exists_:type|rex abh="([a-z]+)" format
The above query matches one or more lowercase letters (a to z).
_exists_:type|rex abh="([a-z]*)" format
The above query matches zero or more lowercase letters (a to z).
_exists_:format|rex abh="(def{1}ault)" format
To match the string 'default', where the character 'f' occurs exactly once.
_exists_:format|rex abh="^([a-z]+)$" format
To match one or more lowercase letters, where ^ and $ indicate the start and end of the regex pattern.
*|rex abh="([^0-9]+)" format
To match one or more characters that are not digits (0 to 9).
*|rex abh="([^0]+)" format
To match one or more characters other than '0'.
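
Conceptually, rex applies a regex with one capturing group to an existing field and stores group 1 under the new field name. A minimal sketch in Python (the document contents and field names are illustrative, not NetForest internals):

```python
import re

def rex(doc, new_field, pattern, source_field):
    """Store capture group 1 of `pattern` (matched against `source_field`) in `new_field`."""
    m = re.search(pattern, str(doc.get(source_field, "")))
    if m:
        doc[new_field] = m.group(1)
    return doc

# mirrors: | rex api = "^\/([a-zA-Z]+)\/.*$" path  (sample document)
doc = rex({"path": "/product/list?id=10"}, "api", r"^/([a-zA-Z]+)/.*$", "path")
```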

Custom/Dynamic Field Extraction using Eval


Eval queries do not work on analyzed fields.
OrderNo:3410 | eval status=case(doc.httpstatuscode.value==200, "Ok",doc.httpstatuscode.value>=400,"Error","Success")
The above query returns logs where the OrderNo field equals 3410 and creates a custom status field based on the value of the httpstatuscode field: if the value equals 200, status is "Ok"; if it is 400 or greater, status is "Error"; otherwise status is "Success".
uripath:"/product?id=10" | rex api = "^\/([a-zA-Z]+)\/.*$" path | eval apisubstr=substr(api,1,4)
The above query filters logs where uripath is /product?id=10, extracts a custom field api from the path field, and further extracts an apisubstr field from the extracted api field.
*|eval num=$$return doc.amountInWallet.value$$ | eval x = abs(num)
The above query returns all the logs and creates custom fields num and x, where x is the absolute value of num.
type:accesslog | eval channel=case(KEY like "1c9d5%","Kiosk",KEY like "%8c718%","Mobile Apps", KEY like "%9694d", "Old Mcom", KEY like "KXnAJ%", "Tablet", KEY like "d1c2a%", "Ipad App",KEY like "oAEItD%","New MCOM")
The above query filters logs where type is accesslog and creates a custom channel field based on the value of the KEY field.
type:accesslog | eval ceilRespTime=ceiling(doc.resptime.value)
The above query filters logs where type is accesslog and creates a custom field ceilRespTime, the ceiling value of the resptime field.
type:accesslog | eval floorRespTime=floor(doc.resptime.value)
The above query filters logs where type is accesslog and creates a custom field floorRespTime, the floor value of the resptime field.
type:accesslog | eval responseStatus = if(doc.resptime.value>4,"Slow","Fast")
The above query filters logs where type is accesslog and creates a responseStatus field based on the value of the resptime field.

*|eval status = exec("if (doc['thread'].value.isNull()) {return 'stuck_thread'} else if (doc['FP'].value.isEmpty()) {return 'With Flow Path Info'} else if (doc['TOPO'].value.isEmpty()) {return 'With Topology Info'} else {return 'Without Topology and Flowpath Info'}")

The above query returns all logs and creates a custom status field based on the values of the thread, FP, and TOPO fields.
OrderNo:3410 | eval indexvalue=exec("return doc['path'].value.indexOf('log')")
The above query filters logs where OrderNo is 3410 and creates an indexvalue field based on the index of the "log" string in the path field.
uripath:"/product?id=10" | eval squareAmount =pow(amount,2)
The above query filters logs where uripath is /product?id=10 and creates a custom field squareAmount, the square of the amount field.
"ERROR" | eval randVal = random()
The above query filters logs containing the ERROR string and creates a custom field randVal, which is a random number.
OrderNo:3410 | eval pathSubstr=substr(doc.path.value,1,4)
The above query filters logs where OrderNo is 3410 and creates a custom field pathSubstr from the path field.
"log" | eval roundedPrice = round(doc.price.value, 3)
The above query filters logs containing the log string and creates a custom field roundedPrice from the price field by rounding to 3 decimal places.
OrderNo:3410 | eval pathLength = len(doc.path.value)
The above query filters logs where OrderNo is 3410 and creates a custom field pathLength containing the length of the path field.
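
The case() construct used above evaluates its condition/value pairs in order and falls back to the trailing default. The status example rendered in plain Python (thresholds taken from the query shown earlier in this section):

```python
def status_case(httpstatuscode):
    # mirrors: case(code == 200, "Ok", code >= 400, "Error", "Success")
    if httpstatuscode == 200:
        return "Ok"
    if httpstatuscode >= 400:
        return "Error"
    return "Success"
```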

Eval almost Anything


OrderNo:3410 | eval trans=$$doc.SessionID.value + ' ' + doc.OrderNo.value$$
The above query filters logs where OrderNo is 3410 and creates a custom field by concatenating the value of the SessionID field followed by the OrderNo field.
Note: Any NetForest script can be entered inside $$.
* | eval respInMillis=$$doc.resptime.value * 1000$$
The above query returns all the logs and creates a custom field respInMillis, which converts resptime from seconds to milliseconds.

Eval


The eval command is used to evaluate an expression on a field and save the resulting value into another field.
Syntax: eval <evaluated field name> = exec(“<script query>”)
where,
<evaluated field name>: The field that stores the data after evaluation of the script query.
<script query>: A Painless query. Whatever query is written here is added as-is as an inline query.
Example: * | eval nf = exec("return 'NetForest'")
Output: This generates a field with the given variable name, and the value is assigned to that variable.

When a user runs a query, its result is stored in a cache. If the user runs the same request within a minute, the response is served from the cache rather than the server.

Eval Construct


This query is used to evaluate an expression on a field and save the resulting value into another field.
exec(): This function takes one parameter as an argument and treats it as the script query to be evaluated.

  • Syntax: eval <evaluated field name> = exec("<script query>")

where,
<evaluated field name>: The field where we want to store the data after evaluation of the script query.
<script query>: A Painless query. Whatever query is written here is added as-is as an inline query.
Note: To evaluate a field, define the field name within $<field>$, whether it is an extracted field (in the case of rex) or a document field. NetForest internally decides whether the field is an extracted or a document field and converts the query accordingly.
Examples

  • Add the value in a field.

NFUI Query: * | eval nf = exec("return 'NetForest'")
The above query creates a custom nf field containing the value NetForest.

  • Evaluate the document field

NFUI Query: * | eval nf = exec("return $tier$")
The above query creates a custom nf field based on the value of the tier field.

  • Evaluate use of length function.

NFUI Query: * | eval nd = exec("return $tier$.length()")
The above query creates a custom nd field based on the length of the value of the tier field.

  • Evaluate combination of extracted and document field and use of replace function.

NFUI Query: * | rex nf = "([a-z]+)" tier| eval nd = exec("return $nf$.replace($tier$, 'atg')")
A new field nf will be created containing only the alphabetic characters from the tier field, and another new field nd will be created containing the value of nf with the value of tier replaced by 'atg'.

  • Evaluate combination of extracted and document field and use of substring function.

NFUI Query: * | rex nf = "([a-z]+.*)" tier| eval nd = exec("return $nf$.substring(0,3)")
A new field nf will be created containing the value extracted from the tier field, and another new field nd will be created containing the substring (0,3) of nf.
Example: if the nf field contains mosaic, then the nd field will contain mos.

  • Evaluate combination of extracted and document field and use of Uppercase function.

NFUI Query: * | rex nf = "([a-z]+)" Env| eval nd = exec("return $nf$.toUpperCase()")
A new field nf will be created containing only the alphabetic characters from the Env field, and another new field nd will be created containing the value of nf in upper case.
Ex: if the nf field contains prod, then the nd field will contain PROD.

  • Evaluate use of concatenate function.

NFUI Query: * | eval nf = exec("return $tier$.concat('condata')")
A new field nf will be created with the value of the tier field concatenated with the string "condata".
Ex: if the tier field contains mosaic, then the nf field will contain mosaiccondata.

  • Evaluate Date format conversion

NFUI Query: * | eval nf = exec("def sf = new SimpleDateFormat('HH:mm:ss MM-dd-yyyy'); return sf.format(new Date(doc['@timestamp'].value));")
A new field nf will be created containing the value of the @timestamp field in the changed date format.

  • To find timestamp in specific zone

* | eval nf = exec("def sf = new SimpleDateFormat('HH:mm:ss MM-dd-yyyy'); sf.setTimeZone(TimeZone.getTimeZone('GMT')); return sf.format(new Date(doc['@timestamp'].value));")

A new field nf will be created containing the timestamp in the changed format ('HH:mm:ss MM-dd-yyyy') in the 'GMT' timezone.

  • Extraction from numeric field.

* | eval nf = exec("if (doc['httpstatuscode'].empty) {return null} def m = /([0-9]+.*)/.matcher(String.valueOf(doc['httpstatuscode'].value)); if ( m.matches() ) { return m.group(1) }")
A new field nf will be created that type-casts the value of the numeric httpstatuscode field to a string.

Using Evaluation (eval) Feature


This feature is used to evaluate a formula or a regular expression. Below is the syntax (to add two numbers) along with an example to illustrate this feature.
Syntax: *|eval <variable_name1>= exec("return $<existing_field_name1>$.<function_name>()")|eval <variable_name2>= exec("return $<existing_field_name2>$.<function_name>()")|eval <variable_name>= exec("return $variable_name1$+$variable_name2$")
Example: *|eval a = exec("return $tier$.length()")|eval b= exec("return $server$.length()")|eval c = exec("return $a$+$b$")
Output: After successful execution of this query, results are displayed in the format below.

Combined Query, Rex and Eval


We can write complex queries by combining queries with dynamic field extraction.
OrderNo:3410 | rex api1 = “^/([a-zA-Z0-9]+)/.*$” path | eval api1SubStr = substr(api1, 1, 3) | rex api2 = “^/([a-zA-Z]+)/.*$” path | eval api2Length = len(api2)
The above query extracts logs where OrderNo is 3410, creates custom fields api1 and api2 from the path field, and creates an api1SubStr field containing a substring of api1 and an api2Length field containing the length of api2.

Creating Table from Query


Users can create a table from the query itself. Below is the syntax along with an example to illustrate this feature.
Syntax: *|table <field1>, <field2>, <field3>…..
Example: *|table server,tier,Env
Output: If query is executed successfully, then the result is displayed in tabular format with the fields specified as headers.

JOIN Query


Join queries are used to combine results across different types, e.g. joining logs based on flowpath ID.
_exists_:FPI AND FPI:4774974341223289035|dedup type|join FPI[type:errorlog AND _exists_:FPI]
The above query searches for a specific flowpath ID, removes duplicates based on type, and joins on FPI.

VIS Query


This query is used to draw any visualization (chart). A user can use * to apply a VIS query to all the logs; to use it on specific logs, filter them first and then apply the VIS query.
Note: VIS queries work only on charts, not on search.
Syntax: *|VIS <Function name(parameters)> by <Field name>[sortBy order=Ascending/Descending size=<value> customLabel="name"]
Example 1: tier:updated|VIS Count() by @timestamp[interval=Auto]
Description:
Here, the tier is filtered and then the count is calculated on that particular tier with respect to the timestamp.
Output:

Example 2:
tier:updated|VIS Count() by @timestamp[interval=Auto],resptime[]
OR
tier:updated|VIS Count() by @timestamp[interval=Auto],resptime[sortBy=1 order=Ascending Size=5]
Description: Here, user can use multiple buckets as per the usage.
Output

Syntax: *|VIS <aggregation_name>(fieldname) by @timestamp[interval=Auto]
Different Supported Aggregations along with keywords:

  • Count(): Provides the count of documents
  • Countrate(): Provides the count/sec value with respect to time
  • Average(): Provides the average value with respect to time
  • Sum(): Provides the sum of documents
  • Min(): Provides the minimum value of documents
  • Max(): Provides the maximum value of documents

Example 3: tier:updated|VIS Average(resptime) by @timestamp[interval=Auto]
Description: To perform an operation on a particular field.
Output
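
Each aggregation reduces the documents in a bucket to a single number. In plain Python terms (the sample response times and bucket width are invented):

```python
# Hypothetical values for one time bucket of a chart.
resptimes = [2.0, 4.0, 6.0]   # sample resptime values in the bucket
bucket_seconds = 10           # sample bucket width in seconds

count     = len(resptimes)              # Count()
countrate = count / bucket_seconds      # Countrate(): count per second
average   = sum(resptimes) / count      # Average()
total     = sum(resptimes)              # Sum()
minimum   = min(resptimes)              # Min()
maximum   = max(resptimes)              # Max()
```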

A user can also provide custom label in VIS query to change/modify the label of columns and rows as per requirement. It is applicable for charts.

For count/countrate: *|VIS count(customLabel=query_count) by server[size=5 order=Descending orderBy=1 customLabel=server]

For others: *|VIS Average(resp_time customLabel=query_count) by server[size=5 order=Descending orderBy=1 customLabel=server]

Note: ‘customLabel’ does not support space in label name.

Range Query in Visualization


Users can also run a range query from the query bar. Below is the syntax along with an example to illustrate this feature.
Syntax: *|VIS Aggregation() by range(field_name[])
Example: *|VIS Count() by range(log_message.reqLen[(0-100)])
Description: After successful execution of the query, results are displayed in the below format.

If the user leaves the range empty, the default ranges 0-1000 and 1000-2000 are used to display the results.

Stats Query


This query is used to find statistics/aggregations for a field, such as sum, count, avg, min, and max. It can be grouped by another field.
Different Supported Aggregations along with keywords:

  • distinct_count(Y): Returns the count of distinct values of the field Y.
  • perc(Y): Percentiles show the point at which a certain percentage of observed values occur. For example, the 95th percentile is the value that is greater than 95% of the data.
  • sumsq(Y): Returns the sum of the squares of the values of the field Y.
  • var(Y): Returns the sample variance of the field Y.
  • stdev(Y): Returns the sample standard deviation of the field Y.
  • earliest(Y): Returns the earliest seen occurrence of the value of field Y based on timestamp.
  • latest(Y): Returns the latest seen occurrence of the value of field Y based on timestamp.
  • min(Y): Returns the minimum value of the numeric field Y.
  • max(Y): Returns the maximum value of the numeric field Y.
  • count(Y): Returns the total number of occurrences of the field Y.
  • avg(Y): Returns the average value of the field Y.

Simple Stats

If the stats command is used without a BY clause, only one row is returned, which is the aggregation over the entire incoming result set.
Click Search on the left panel and enter the query in the search field.
Syntax: ..| stats <fieldname>
Example: *|stats resptime
Output: This generates all the possible statistics of a field, that is, count, max, min, avg, and sum.

Syntax: ..|stats <aggregate_function>(<fieldname>)
Example: *|stats max(resptime)
Output: It calculates the maximum value of response time. The output field is highlighted.

Note: A user can use any other aggregation function in the query in the same way as 'Max'.
Syntax: ..|stats <aggregate_function> (<fieldname>) AS <aliasname>
Example: *|stats max(resptime) AS rtime
Output: This generates the statistics of a field based on the aggregation function, such as, count, max, min, avg, or sum and the resultant field is based on <aliasname>.

Stats with BY Clause

If the stats command is used with a BY clause, one row is returned for each distinct value of the fields specified in the BY clause.
Syntax: BY <field-list>
Description: The name of one or more fields to group by. The user cannot use a wildcard character to specify multiple fields with similar names. Specify each field separately. The BY clause returns one row for each distinct value in the BY clause fields. If no BY clause is specified, the stats command returns only one row, which is the aggregation over the entire incoming result set.
Syntax: ..|stats <aggregate_function>(<fieldname1>) BY <fieldname2>
Example 1: *|stats max(resptime) BY server
Output: This generates the statistics of a field based on the aggregation function, such as, count, max, min, avg or sum, and is grouped on the field2 basis.  The resultant field is based on the aggregation function and the field name, such as <aggregate_function>_< fieldname>.

Note: A user can use more than one comma-separated field name in the group-by clause.
Syntax: ..|stats <aggregate_function>(<fieldname1>) BY <fieldname2>,<fieldname3>
Example 2: *|stats max(resptime) BY user,query
Output: This generates the statistics of a field based on the aggregation function, grouped by multiple field names.

Syntax: ..|stats <aggregate_function>(<fieldname1>) AS <aliasname> BY <fieldname2>
Example 3: *|stats max(resptime) AS rtime BY server
Output: This generates the statistics of a field based on the aggregation function, such as count, max, min, avg, or sum, and is grouped on the field2 basis. The resultant field is based on <aliasname>. Here, <fieldname> is the field name for which a user wants to aggregate data. <aggregate_function> is the aggregate function to be applied. Currently, count, max, min, avg, and sum functions are applicable.
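
Conceptually, stats ... BY groups the result set on the BY fields and applies the aggregate within each group. A sketch of `stats max(resptime) BY server` in plain Python (the sample data is made up):

```python
from collections import defaultdict

# Hypothetical documents.
logs = [
    {"server": "web1", "resptime": 4.2},
    {"server": "web1", "resptime": 9.7},
    {"server": "web2", "resptime": 1.3},
]

# mirrors: *|stats max(resptime) BY server
groups = defaultdict(list)
for log in logs:
    groups[log["server"]].append(log["resptime"])
max_resptime = {server: max(values) for server, values in groups.items()}
```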

Stats Query with Span Clause

This command aggregates and makes buckets based on a field and the span mentioned. A user can group the result based on an interval such as year, quarter, month, week, day, hour, minute, or second.
Syntax: ..|stats <aggregationFunction>(<fieldName>) span=<timeInterval>
Example: * | stats max(offset) AS maximum SPAN=1y
Output: This generates the log results grouped based on the time intervals. The different time intervals that can be used are milliseconds, seconds, minutes, hours, days, and so on.

Stats Query with Span and BY Clause

This displays the result based on the provided span and field mentioned in BY clause.
Different Time-Intervals

  • Milliseconds (ms): Fixed-length interval; supports multiples.
  • Seconds (s): 1000 milliseconds; fixed-length interval (except for the last second of a minute that contains a leap second, which is 2000 ms long); supports multiples.
  • Minutes (m): One minute (1m) is the interval between 00 seconds of the first minute and 00 seconds of the following minute in the specified timezone. Multiple minutes (nm) are intervals of exactly 60x1000=60,000 milliseconds each.
  • Hours (h): One hour (1h) is the interval between 00:00 minutes of the first hour and 00:00 minutes of the following hour in the specified timezone. Multiple hours (nh) are intervals of exactly 60x60x1000=3,600,000 milliseconds each.
  • Days (d): One day (1d) is the interval between the start of the day and the start of the following day in the specified timezone. Multiple days (nd) are intervals of exactly 24x60x60x1000=86,400,000 milliseconds each.
  • Weeks (w): Multiple weeks (nw) are not supported.
  • Months (M): Multiple months (nM) are not supported.
  • Quarters (q): Multiple quarters (nq) are not supported.
  • Years (y): Multiple years (ny) are not supported.

Syntax: ..|stats <aggregation>(fieldName) span=<DatetimeInterval>
Example : * | stats count(offset) max(offset) SPAN=1y BY server,type
Output: This generates the log result based on the given aggregated fields for a provided span of time grouped by multiple fields’ name.
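
Span bucketing rounds each timestamp down to the start of its interval and aggregates within each bucket. A rough sketch in Python using epoch milliseconds and a 1-minute span (the event data is invented):

```python
# 1m span = 60x1000 = 60,000 ms, as listed above.
SPAN_MS = 60_000

# Hypothetical (timestamp_ms, offset) events.
events = [(5_000, 1), (30_000, 2), (65_000, 4)]

buckets = {}
for ts, offset in events:
    bucket_start = (ts // SPAN_MS) * SPAN_MS  # floor timestamp to interval start
    buckets.setdefault(bucket_start, []).append(offset)

# per-bucket aggregate, in the spirit of: * | stats max(offset) SPAN=1m
max_per_bucket = {start: max(vals) for start, vals in buckets.items()}
```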

Stats latest along with BY Clause

This provides the result with the latest (most recent) occurrence of the field specified, grouped by the BY clause.
Syntax: ..|stats latest(<fieldName1>) AS <aliasName> BY <fieldName2>,<fieldName3>..
Example: * | stats latest(@timestamp) BY server,type
Output: This results in the latest occurrence of the @timestamp field, grouped by the provided fields.

Stats earliest along with BY Clause

This provides the result with the earliest (first) occurrence of the field specified, grouped by the BY clause.
Syntax: ..|stats earliest(<fieldName1>) AS <aliasName> BY <fieldName2>,<fieldName3>..
Example: * | stats earliest(@timestamp) BY server,type
Output: This results in the earliest occurrence of the @timestamp field, grouped by the provided fields.

Docstats Query


This query finds statistics based on the aggregation function, with the data grouped by another field. Based on the group-by field, it appends the generated field (key:value) to the document source.
Syntax: docstats <aggregation>(<fieldname>) AS <aliasname> by <fieldname>
Example: *|docstats max(resptime) AS rtime BY server
Output: This generates the statistics of a field based on the aggregation function, that is, count, max, min, avg or sum and will be grouped on field2 basis.  The resultant field is based on <aliasname>. The resultant field value (key:value pair) is appended in the source based on the <fielname2>. Here, <fieldname> is the field name for which a user wants to aggregate data. <aggregate_function> is the aggregate function to be applied. Currently count, max, min, avg, and sum functions are applicable.
In this case, first server wise max response time is calculated then the resultant value (key:value) is added in the source based on the matching server.
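This write-back behaviour can be sketched in Python (a hypothetical illustration of the max() case; field and alias names follow the example above):

```python
# Hypothetical sketch of docstats max(...) AS ... BY ...: compute the
# group-wise maximum, then append the alias field to every matching
# document's source.
from collections import defaultdict

def docstats_max(docs, field, alias, by):
    groups = defaultdict(list)
    for d in docs:
        groups[d[by]].append(d[field])
    agg = {k: max(v) for k, v in groups.items()}
    for d in docs:
        d[alias] = agg[d[by]]  # key:value appended to the source
    return docs

docs = [
    {"server": "s1", "resptime": 4},
    {"server": "s1", "resptime": 9},
    {"server": "s2", "resptime": 2},
]
docstats_max(docs, "resptime", "rtime", "server")
# both s1 documents now carry rtime=9; the s2 document carries rtime=2
```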

Example: *|eval a=exec("return $tier$")|rex b="^([a-z]+)$" server|docstats max(resptime) AS rtime BY server
Description: The docstats query can be used along with Rex and Eval.
Output

Streamstats Query


This query finds the cumulative sum of the field based on the timestamp. Currently, the default timestamp interval is 5 seconds.
Syntax: streamstats <numeric fieldname>
Here, <numeric fieldname> is the numeric field name.
Example: streamstats resptime
Output: This generates the cumulative sum of the resptime that is calculated based on the timestamp.
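The cumulative-sum idea can be sketched in Python (a hypothetical illustration; it assumes documents carry an @timestamp used only for ordering, and the output field name is illustrative):

```python
# Hypothetical sketch of streamstats: order documents by timestamp,
# then accumulate a running sum of the numeric field.
def streamstats(docs, field):
    total = 0
    out = []
    for d in sorted(docs, key=lambda d: d["@timestamp"]):
        total += d[field]
        out.append({**d, "cumulative_" + field: total})  # field name is illustrative
    return out

docs = [
    {"@timestamp": 3, "resptime": 3},
    {"@timestamp": 1, "resptime": 1},
    {"@timestamp": 2, "resptime": 2},
]
result = streamstats(docs, "resptime")
# cumulative_resptime values in timestamp order: 1, 3, 6
```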

Example: *|eval a=exec("return $tier$")|rex b="^([a-z]+)$" server|streamstats resptime
Description: Streamstats query used with Rex and Eval.
Output

Multistats Query


This query is used to find the statistics for more than one field, such as sum, count, avg, min, and max, at the same time. It can be grouped by another field.
Syntax: ..|stats <aggregation>(<fieldname>) <aggregation>(<fieldname>) <aggregation>(<fieldname>)
Example: *|stats max(log_message.status) min(cp) max(cp)
Output: This finds the maximum value of field log_message.status as max_log_message.status, the minimum of cp as min_cp, and the maximum of cp as max_cp.

Syntax: ..|stats <aggregation>(<fieldname>) AS <aliasname> <aggregation>(<fieldname>) AS <aliasname>
Example: *|stats max(log_message.status) AS higher min(cp) AS lower
Output: This generates the result as shown below.

Syntax: ..|stats <aggregation>(<fieldname>) AS <aliasname> <aggregation>(<fieldname>) AS <aliasname> BY <fieldname>
Example: *|stats max(cp) AS higher min(cp) AS lower BY log_message.status
Output: This generates the result as shown below.
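Running several aggregations at once with aliases and a BY field can be sketched in Python (a hypothetical illustration; field names are simplified from the examples above):

```python
# Hypothetical sketch of multistats: apply several aggregation functions
# at once, grouped by a BY field, with optional aliases.
from collections import defaultdict

FUNCS = {"max": max, "min": min, "sum": sum,
         "avg": lambda v: sum(v) / len(v), "count": len}

def multistats(docs, specs, by):
    # specs: list of (function_name, field, alias) tuples
    groups = defaultdict(list)
    for d in docs:
        groups[d[by]].append(d)
    return {key: {alias: FUNCS[fn]([r[f] for r in rows])
                  for fn, f, alias in specs}
            for key, rows in groups.items()}

docs = [
    {"status": 200, "cp": 5},
    {"status": 200, "cp": 9},
    {"status": 500, "cp": 2},
]
result = multistats(docs, [("max", "cp", "higher"), ("min", "cp", "lower")], "status")
# result[200] is {"higher": 9, "lower": 5}
```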

Lookup


Lookup enriches the document data by adding field-value combinations from lookup tables. NetForest uses lookup to match field-value combinations in document data with field-value combinations in external resources. If it finds those field-value combinations in the resources, it appends the corresponding field-value combinations from the resources to the document data in the search result. A user can use it along with REX and EVAL queries.
Syntax: lookup <lookup-filename> <field-name> output <destfield-name>
where,
<lookup-filename> refers to the lookup file name.
<field-name> refers to a field in the index to match against the events.
<destfield-name> refers to a field in the lookup file to be copied into the events.
Example: * | lookup file cp output sname
Output: This generates the result as shown below.
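The enrichment step can be sketched in Python (a hypothetical illustration; the table contents are made-up sample data):

```python
# Hypothetical sketch of lookup enrichment: if a document's field value
# appears in the lookup table, append the destination field to the document.
def lookup(docs, table, field, dest):
    for d in docs:
        if d.get(field) in table:
            d[dest] = table[d[field]]
    return docs

# table maps cp values to server names (illustrative data)
table = {"cp1": "web-01", "cp2": "web-02"}
docs = [{"cp": "cp1"}, {"cp": "cp3"}]
lookup(docs, table, "cp", "sname")
# docs[0] gains "sname": "web-01"; docs[1] has no match and is unchanged
```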

Where


The Where command is used to filter search results by comparing fields and values; logical expressions (AND / OR / NOT) can be used for multi-field operations.
Note: It runs after a pipe (|).
Syntax: *|where <expression>
where,
<expression> can be any combination of field name, comparison operator, and logical operation.
Example: *|where resp_size<11000
Output: This generates the results where the response size is less than 11000.

Example: *|eval a=exec("return $tier$")|rex b="^([a-z]+)$" server|where resptime<=1
Description: Using ‘Where’ along with ‘Rex’ and ‘Eval’.
Output

Append


Append query appends the results of a sub-search to the current results. The append command runs only over historical data.
Note: It does not produce correct results if used in a real-time search.
Syntax: <search>|append [<subsearch>]
Here, <search> and <subsearch> refer to a <string>, <query>, or <queryWithPipe>.
Example: *|append[type:accesslog]
Output: This generates the results by adding documents where type:accesslog to the output of ‘*’.

Example: Append query with multi rex and multi eval.
*|append[*|rex a="([a-z]+)" server|eval b=exec("return $a$")|rex c="([0-9A-z]+)" message|eval d=exec("return $c$")]
Output: This generates the result by adding the rex/eval evaluated query to the output of ‘*’.

Multi-Value Constructs


Multi-value constructs are used to perform operations on multi-value fields. NetForest supports the following multi-value constructs:

  • Nomv
  • Makemv
  • Mvexpand
  • Mvcombine

Below is the description of each construct in detail along with the syntax and examples.

Nomv Construct

This construct is used to convert a multi-value field into a single-value field.
Syntax: …|nomv <field_name>
Here, <field_name> is the field to be converted into a single value.
Example:
A multi-value field is converted into a single-value field using nomv.
server:"1.2.3.11"|nomv users
Output: This generates the result by converting the array into a single string-value field,
i.e. "users":["a","b","c","d","e"] to "users":"a,b,c,d,e"
Before applying nomv construct

After applying nomv construct
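The conversion can be sketched in Python (a hypothetical illustration of the semantics, joining on the comma shown in the example above):

```python
# Hypothetical sketch of nomv: join an array field into one
# comma-separated string value.
def nomv(doc, field):
    if isinstance(doc.get(field), list):
        doc[field] = ",".join(doc[field])
    return doc

doc = {"server": "1.2.3.11", "users": ["a", "b", "c", "d", "e"]}
nomv(doc, "users")
# doc["users"] is now "a,b,c,d,e"
```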

Makemv Construct

This construct is used to convert a single-value field into a multi-value field. When the makemv operation is performed on a single-value field separated by a delimiter, it converts the values into array format.
Syntax: …|makemv [delim=<string> / tokenizer=<string>] <field_name>
Here,
delim=<string> is a delimiting string used to split the value of a field. Its value is provided within double quotes. This is an optional argument and its default value is a comma.
tokenizer=<string> is a regex used to capture groups within the field value. The value of the field is split on each occurrence of this regex. This is an optional argument.
Note:
Either delim or tokenizer can be used in a query; both cannot be used simultaneously.
Example – 1:
A single-value field is converted into a multi-value field using makemv.
Output: This generates the result by converting a single string into an array-value field,
i.e. "users":"a,c,d" to "users":["a","c","d"].
Before applying makemv construct

After applying makemv construct

Example – 2:
Using makemv with delim.
Example: *|makemv delim="," users
Output: This generates the result by converting a delimiter-separated single string into an array-value field,
i.e. "users":"a,c,d" to "users":["a","c","d"].

Before applying makemv construct

After applying makemv construct
 
Example – 3:
Using makemv with tokenizer.
Example: "users":"a,c,d" to "users":["a"]
Query: *|makemv tokenizer="([a]+)" users
Output: This generates the result by matching the tokenizer value (regex) against the given field and converting that single string into an array-value field,
i.e. "users":"a,c,d" to "users":["a"]
Before applying makemv construct

After applying makemv construct
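Both variants can be sketched in Python (a hypothetical illustration; the delimiter and tokenizer values match the examples above):

```python
# Hypothetical sketch of makemv with delim/tokenizer: split a single string
# into an array, either on a delimiter or by collecting regex matches.
import re

def makemv(doc, field, delim=",", tokenizer=None):
    value = doc.get(field)
    if isinstance(value, str):
        doc[field] = re.findall(tokenizer, value) if tokenizer else value.split(delim)
    return doc

doc1 = makemv({"users": "a,c,d"}, "users", delim=",")
# doc1["users"] is ["a", "c", "d"]
doc2 = makemv({"users": "a,c,d"}, "users", tokenizer=r"([a]+)")
# doc2["users"] is ["a"]
```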

Mvexpand Construct

This construct is used to expand values of a multi-value field into separate events, such that the result contains one event for each value in the multi-value field.
Syntax: …|mvexpand <field_name>
Example – 1:
A multi-value field is converted into separate events using mvexpand.
{"server":"1.2.3.9","httpmethod":"GET","sizeinbytes":150,
"resptime":1,"httpstatuscode":202,"users":["a","b","c","d"]}

To

{"server":"1.2.3.9","httpmethod":"GET","sizeinbytes":150,
"resptime":1,"httpstatuscode":202,"users":"a"}

{"server":"1.2.3.9","httpmethod":"GET","sizeinbytes":150,
"resptime":1,"httpstatuscode":202,"users":"b"}

{"server":"1.2.3.9","httpmethod":"GET","sizeinbytes":150,
"resptime":1,"httpstatuscode":202,"users":"c"}

{"server":"1.2.3.9","httpmethod":"GET","sizeinbytes":150,
"resptime":1,"httpstatuscode":202,"users":"d"}

Query: server: 1.2.3.9|mvexpand users
Output: This generates the result by expanding the array values of the field users into individual documents (i.e. every array value becomes an individual document).
Before applying mvexpand construct

 After applying mvexpand construct
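The expansion can be sketched in Python (a hypothetical illustration of the semantics, on a trimmed-down version of the document above):

```python
# Hypothetical sketch of mvexpand: emit one event per value of the
# multi-value field, copying all other fields unchanged.
def mvexpand(doc, field):
    return [{**doc, field: v} for v in doc[field]]

doc = {"server": "1.2.3.9", "httpstatuscode": 202, "users": ["a", "b", "c", "d"]}
events = mvexpand(doc, "users")
# four events, each with a single "users" value: "a", "b", "c", "d"
```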

Mvcombine Construct

This construct is used to group multiple events that are identical except for the specified field, which contains a single value in each event. The result of this operation is a single event in which the specified field becomes a multi-value field.
Syntax: …|mvcombine <field_name>
Example:
Multiple events, identical except for the specified single-value field, are grouped using mvcombine.
{"server":"1.2.3.21","httpmethod":"PUT","sizeinbytes":150,"resptime":5,"httpstatuscode":202}
{"server":"1.2.3.21","httpmethod":"POST","sizeinbytes":150,"resptime":5,"httpstatuscode":202}
{"server":"1.2.3.21","httpmethod":"PUT","sizeinbytes":150,"resptime":5,"httpstatuscode":202}
{"server":"1.2.3.21","httpmethod":"GET","sizeinbytes":150,"resptime":5,"httpstatuscode":202}
To
{"server":"1.2.3.21","httpmethod":["PUT","POST","GET"],"sizeinbytes":150,"resptime":5,"httpstatuscode":202}
Query: *|mvcombine httpmethod
Output: This generates the result by combining all the different values of the field httpmethod into a single document (i.e. the differing values of the documents are combined into one document).
Before applying mvcombine construct

 After applying mvcombine construct
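The grouping can be sketched in Python (a hypothetical illustration of the semantics, on trimmed-down versions of the events above):

```python
# Hypothetical sketch of mvcombine: group events that are identical except
# for the given field, collecting that field's distinct values into a list.
def mvcombine(docs, field):
    groups = {}
    for d in docs:
        key = tuple(sorted((k, v) for k, v in d.items() if k != field))
        combined = groups.setdefault(key, {**d, field: []})
        if d[field] not in combined[field]:
            combined[field].append(d[field])
    return list(groups.values())

docs = [
    {"server": "1.2.3.21", "httpmethod": "PUT", "resptime": 5},
    {"server": "1.2.3.21", "httpmethod": "POST", "resptime": 5},
    {"server": "1.2.3.21", "httpmethod": "PUT", "resptime": 5},
    {"server": "1.2.3.21", "httpmethod": "GET", "resptime": 5},
]
combined = mvcombine(docs, "httpmethod")
# one event with httpmethod == ["PUT", "POST", "GET"]
```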

Merge Query


This feature merges the remaining fields based on one or more field names.
Syntax: *|merge <[VALUE|value]=[String|number]> <fieldNames>
<fieldNames>: Multiple field names are separated by commas (,)
Example:

  • *|merge value="NA" field_name
  • *|merge value=0 field_name

Use Case
This construct is used when there is a high level of redundancy in the data, for example, when the majority of fields' values are something like "NA" or null.
To remove such redundant values (e.g. NA, null), merge can be applied to one or more fields; the remaining fields are merged row-wise on the basis of the given <[VALUE|value]>. After the merge, the given fields contain only unique values and the rows of the remaining fields are merged.
Limitation
If the value given in value="" does not exist in any of the rows, rows are overwritten consecutively. Therefore, this construct is meaningful only when the data has sufficient null or undefined values.
Example: * | merge value="0" kpi
Output: This generates the result by removing duplicates from the field 'kpi' and replacing the corresponding values of the other fields with the overwritten value (in this case, '0').
Before Merge:

After Merge:

Fetch Query

This query is used to fetch data directly from other products’ databases and create visualizations using that data.

Syntax: fetch [<MetricHost>] [Fields] WHERE <logical-expression>

Fetchlog Query

This query is used to retrieve logs from indexes, either with filters specified in the fetchlog command itself or filtered based on previous results in the pipeline. The fetchlog command retrieves records from NFDB.

You can use Fetchlog command in two ways:

  1. When Fetchlog is the first command: To retrieve logs from indexes with the specified filters.
  2. When Fetchlog is not the first command: To fetch logs filtered based on the results of the previous command in the pipeline.

Syntax: fetchlog INDEX:"indexpattern" <logical-expression>

Example: fetchlog index:"tier1-env2-20180503" where httpmethod.keyword=="GET" AND dc.keyword=="test"

Sort Query

This query sorts the results of a search in ascending or descending order based on the order specified by the user.
Syntax: ..|sort <Orderoption> <fieldName>
Example : * | sort asc offset | table offset.message
Output: This generates the result by sorting the values of field ‘offset’ in ascending order.

Time Function


This function returns the current time, with microseconds resolution. The result changes with every search.
Syntax: ..| eval <fieldName>=time()
Example: * | eval sample=time()|table sample,@timestamp
Output: This generates the result by adding a field 'sample', which contains the fetched document time in milliseconds.

strftime(X,Y)

This function takes a field in epoch-millis format as input and returns the date according to the Y specifier format.
Syntax: ..| eval <fieldName>=strftime(doc.<fieldName>.value,<"datetimeFormat">)
Example: * | eval sample=strftime(doc.@timestamp.value,"%H:%m")|table @timestamp,sample
Output: This generates the result by adding a field 'sample', which contains the converted date/time value of the field @timestamp in the given datetimeFormat.

strptime() function

This function takes a string X as its first argument and a date format Y, representing String X, as its second argument. The strptime function returns the correct UNIX time of String X. The String X value must be January 1, 1971 or later.
Syntax: ..| eval <fieldName>=strptime(doc.<fieldName>.value,<"datetimeFormat">)
Example: * | eval sample=strptime(doc.@timestamp.value,"%yyyy-%MM-%dd")|table @timestamp,sample
Output: This generates the result by adding a field 'sample', which contains the converted date/time value of the field @timestamp in epoch-milliseconds format.
Note: The <datetimeFormat> must be the same as the given field's format.
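The strftime/strptime pair can be sketched with Python's datetime module (a hypothetical illustration; Python's format specifiers differ from NetForest's, and UTC is assumed):

```python
# Hypothetical sketch of strftime/strptime over epoch-milliseconds values.
from datetime import datetime, timezone

def nf_strftime(epoch_millis, fmt):
    # epoch-millis -> formatted date/time string (UTC assumed)
    return datetime.fromtimestamp(epoch_millis / 1000, tz=timezone.utc).strftime(fmt)

def nf_strptime(text, fmt):
    # formatted date/time string -> epoch-millis (UTC assumed)
    dt = datetime.strptime(text, fmt).replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

ms = nf_strptime("1971-01-01", "%Y-%m-%d")  # 365 days after the epoch, in ms
text = nf_strftime(ms, "%Y-%m-%d")          # round-trips back to "1971-01-01"
```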

Monitoring Server Health

It is used to monitor NFDB server health, such as CPU, memory, and disk usage. It also monitors the number of users logged in, the number of queries run by each user, query response times, and the most-used queries.

Monitoring number of users along with query and response time