Logs Management Scenarios

The Logs management scenarios present use cases for the 12 functions typically handled by Logs management in an organization, grouped into three main aspects (build configuration, monitoring, and search). These scenario demonstrations can help an enterprise apply the scenarios in its own environment.

Case Study:

XX Information is a technology company that provides “information system integration and build services”. It also develops enterprise information systems and terminal equipment integration information platforms. Through IT system monitoring and analysis, it maintains the company’s regular business and services and keeps its systems running stably.
Manager Wang is the manager of XX Information’s information technology department. He is in charge of the technology development and MIS departments, which rely on multiple software systems and sets of hardware equipment to meet the company’s maintenance and development needs.
Manager Wang discovers that IT is often the last to be notified when system anomalies occur, which is too passive. Because the system contains a large number of devices and Log data is stored in a scattered manner, much time is wasted locating problems manually. On top of that, the sheer volume of data, inconsistent field formats, and weak correlation among the settings make discovering problems even more time-consuming and labor-intensive.
To locate the sources of system anomalies quickly and accurately, Manager Wang decided to introduce the digiLogs centralized management platform as the company’s Logs management center, so that he can manage system data records of various software and hardware through a single interface, reduce the time the team spends locating problems, and improve management efficiency. The MIS department, the development team, and management are the members of the organization who use the centralized Log management platform most often.
digiLogs supports flexible deployment architecture

Aspect 1: Build Configuration

Scenario 1: Platform Access Permissions

In this scenario, you can find out how to use digiLogs to construct an organizational chart and grant different levels of users the appropriate access permissions and functional modules on the platform.

Use Case

The IT development and MIS departments hope to use digiLogs as a centralized Logs management platform for their software systems and hardware equipment. Both departments hope to have their own organizational access regulations and each needs different functional modules for members of different job responsibilities (Manager, Developer) to execute corresponding tasks.
Organization & Roles

The authorized permissions for the BU-01 Manager include:

Permissions Authorization:

  • Permission Management: user maintenance/personal information maintenance/logout
  • System Function Management: function maintenance
  • Monitoring Management: All
  • Log Query: All
  • Transaction Monitoring: All
  • Index Management: All
  • Transaction Path: All

Role Authorization (as BU_Manager, the designated person may grant the following roles to subordinates):

  • BU_Manager
  • BU_Developer

Function Description

Integrated System Authentication Login: It supports OAuth2, LDAP, AD, and integrated SSO mechanisms. The process mainly uses an AD Server for identity authentication; login succeeds once the AD Server returns a successful authentication result. In digiLogs, if the management environment is partitioned and you want to switch environments quickly, you can click “Redirect Page” on the login screen or the “User Icon” on the upper right-hand side of the platform to select the intended environment.
"Integrated System Authentication Login" diagram

Steps

Step 1: Organization Maintenance
Click on “Access Management” > “Organization Maintenance” to create a new organization.
Enter the information regarding the organization to be built (BU-01, bu01_mgr) and click [Create] to create a new organization node.
Step 2: Create New Roles

Click on “Access Management” > “Role Maintenance.” Click “Create” to create a new role.

Enter the “Role Code” and “Role Name” (BU_Manager) and click on the “Function List” to choose functions for this role. Then, click [Create].
Step 3: Set Up Role Lists
Click on “Access Management” > “Set Up Role List” to grant assignable roles to “BU_Manager”. List of Assignable Roles: BU_Manager, BU_Developer
Step 4: Create New Users
Click on “Access Management” > “User Maintenance” to create a new user (bu01_mgr) under the BU-01 organization, and grant the newly added role to the new user so that they can use the functions assigned to the role.
*With LDAP authentication, no password is required.

Scenario 2: Platform Functions Set Up

In this scenario, you can find out how to perform a quick search on the “associated users” from the “function bar” in digiLogs, and how to adjust and update the “function description” according to the industry’s idioms or preferred terms.

Use Case

As the director of the MIS department, Alex needs to compile a report on the users provisioned on the digiLogs management platform and their permissions to the corresponding functions for the company’s information security audit. He also finds that the “transaction monitoring” function names on the platform do not intuitively convey their functional meanings, so he wants to adjust them to match the company’s internal terminology. That way, all the functions listed in digiLogs, and the users assigned to each, can be understood at a glance through “Function Search.” If he needs to modify a function’s name or description, he can do so himself on the platform interface.

Steps

Step 1: Function Search
Log in as bu02_mgr. Click on “System Function Management” > “Function Maintenance” and enter the information to be searched (Function Code or Function Name, EA0007). Click on “Action” > [Associated Role] to get a list of qualified roles.
Step 2: Update

Click on “System Function Management” > “Function Maintenance” and enter the information you want to search (Function Code or Function Name, EA01). Click “Action” > [Update].

Enter the new name in the “Function Name” field. Enter the new description (Monitoring Report, Monitoring System Report) in the “Function Description” tab and click [Update] to complete.

Aspect 2: Monitoring

Scenario 1: Platform System Operation

In this scenario, you can find out how to monitor the operation (health) status of digiLogs through the search interface provided by the platform.

Use Case

IT developer Joe has recently noticed longer search response times while working on the platform, so he wants to check the system’s operational performance. With the “digiLogs Server Dashboard”, he can quickly review the system health status and other indexes, including Heart Beat, Heap, CPU, and Thread Pool.
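The Heap, CPU, and Thread Pool indexes resemble the node statistics an Elasticsearch-style backend exposes. Purely as a hedged illustration (the guide does not document the dashboard’s data source), here is how such metrics could be pulled with the official `elasticsearch` Python client against a hypothetical cluster address:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster address

# Cluster-level health (green/yellow/red) is comparable to a heartbeat check.
print("status:", es.cluster.health()["status"])

# Per-node JVM heap, CPU, and thread-pool statistics.
stats = es.nodes.stats(metric=["jvm", "os", "thread_pool"])
for node_id, node in stats["nodes"].items():
    print(node_id,
          f'heap={node["jvm"]["mem"]["heap_used_percent"]}%',
          f'cpu={node["os"]["cpu"]["percent"]}%',
          f'search_queue={node["thread_pool"]["search"]["queue"]}')
```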

Steps

Step 1: General Search

After logging in as bu01_dev, click on “Monitoring Management” > “digiLogs Server Dashboard” to select the time interval (Week) to be searched, and click [Search].

Step 2: Advanced Search (custom search range)
Click on “Monitoring Management” > “digiLogs Server Dashboard” to select the time interval (Month) to be searched. Select “Start Time” and “End Time” (using Absolute) in the function box on the right-hand side of the “Calendar” icon and click [Update].

Scenario 2: User Path

In this scenario, you can find out how to use the search interface to search the operation behavior records of all users on the platform.

Use Case

For management purposes, MIS director Alex needs to monitor the usage status of the digiLogs platform and randomly spot-check the detailed operation behaviors of specific users. It is now time for a routine spot check. With a list prepared in advance, he can search either all users or specific users from the “Audit Log” on the platform. This time, he wants to review the overall usage status in December and spot-check the usage tracks of IT developers such as Joe.

Steps

Step 1: General Search (without criteria)
After logging in as bu02_mgr, click on “Monitoring Management” > “Audit Log” to select the “Start Time” (2021-12-01) and “End Time” (2021-12-31) as the time interval to be searched on, and click [Search] to obtain all behavior details.
Step 2: Quick Search

Perform a quick search with the shortcut “Log Search, Login and Index Management” and compile a report after exporting the search results to a file.

Step 3: Advanced Search (specific users or criteria)

Click on “Monitoring Management” > “Audit Log” to select the starting and ending time (2021-12-01~2021-12-31) as the time interval to be searched. You can also search with additional criteria such as “User, Return Code, Transaction Code” (user=johnmanager). Click [Search] to obtain the behavior details of the person searched for.

Scenario 3: Alert Notification

In the following scenarios, you can find out how to set Alerts in digiLogs so that IT can shift from passive to active, finding and handling problems as soon as an anomaly occurs and eliminating the issue of passive notification.

Use Case

MIS director Alex is often notified of system anomalies unexpectedly and has to urgently dispatch developer Tony from his team to handle them. Alex wants to improve this process. After digiLogs was introduced, he found that in addition to the platform’s default Server Node alert, he could set up keyword-mode alerts with custom criteria and designated contacts. The designated contacts now receive a notification automatically whenever the same anomaly occurs, and they can then work on the issue in digiLogs.

(If you already have alert settings to create, you can follow the steps on this page to set them up.)

The following is a brief explanation of the fields on the “Alert Settings” page:

The following situations are anomalies reported by users. After some investigation, unstable system operation is suspected of causing unexplained operation interruptions and failures, preventing end users from completing their operations smoothly. Alert mechanisms need to be set for the reported anomalies so that they can be tracked and worked on in the future.

- Scheduling

MIS developer Tony received anomaly reports from users. After some investigation, he identified unstable “scheduling” operations as the main cause of the failures that prevented end users from completing their operations smoothly. Because the problem has occurred frequently of late, he asked the Logs strategy team to add it to the alert notification list to monitor the situation.

Steps

Step 1: Add A New Alert
Click on “Monitoring Management” > “Alert Settings”. Click “Create”.
Step 2: Set Up Alert Criteria

Scenario Description: When “failure” occurs twice within a 30-second interval during scheduling, all group members with the roles BU_Developer and BU_Manager receive an alert notification.

Please follow the “Example Data” below to fill in the fields on the page, and then click [Create] to complete.

Verify that it is set up successfully.

 Anomaly Notification Letter
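Conceptually, the rule above is a sliding-window count: fire when the keyword appears twice within 30 seconds. A minimal sketch of that logic in plain Python, with a hypothetical `notify` hook standing in for the platform’s notification mail (digiLogs’ internal alert engine is not documented here):

```python
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=30)  # interval from the scenario
THRESHOLD = 2                   # "failure" must occur twice
KEYWORD = "failure"

recent_hits: deque = deque()    # timestamps of keyword hits inside the window

def notify(roles):
    # Hypothetical hook; digiLogs mails the designated contacts instead.
    print("ALERT ->", ", ".join(roles))

def on_log_line(ts: datetime, line: str) -> None:
    """Feed each scheduling log line; fire once the window fills up."""
    if KEYWORD not in line:
        return
    recent_hits.append(ts)
    while recent_hits and ts - recent_hits[0] > WINDOW:
        recent_hits.popleft()   # drop hits older than 30 seconds
    if len(recent_hits) >= THRESHOLD:
        notify(["BU_Developer", "BU_Manager"])
        recent_hits.clear()     # avoid re-alerting on the same burst
```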

- Sending Process

IT developer Tony found recent anomaly events in the “Sending Process” that caused mail delivery failures. To prevent similar issues from recurring, he asked the Log strategy team to proactively add it to the alert notification list to monitor the situation.
Steps
Step 1: Add A New Alert

Click on “Monitoring Management” > “Alert Settings”. Click “Create”.

Step 2: Set Alert Criteria

Scenario Description: When “failure” occurs once during the sending process, all group members with the role BU_Developer receive an alert notification.

Please follow the “Example Data” below to fill in the fields on the page, and then click [Create] to complete.

Verify that it is set up successfully.

Anomaly Notification Letter

- Event Viewer

IT developer Joe had to manually monitor the operation status of individual modules on the “API management platform (digiRunner)” every day, mainly because module anomalies have a major impact and his manager takes them very seriously. When digiLogs was introduced, Joe wanted to shift from passive to active, so he asked the Logs strategy team to add “Event Viewer” (the results of all execution events) to the alert settings, which lets him check on the system only when an anomaly notification is received.

Steps
Step 1: Add A New Alert

Click on “Monitoring Management” > “Alert Settings”. Click “Create”.

Step 2: Set Alert Criteria
Scenario Description: When the event viewer records one “unknown error”, all group members with the role BU_Developer receive an alert notification.

Please follow the “Example Data” below to fill in the fields on the page, and then click [Create] to complete.

Verify that it is set up successfully.

Anomaly Notification Letter

Scenario 4: Transaction Analysis Chart

In this scenario, you can find out how digiLogs helps develop customized monitoring reports according to enterprise requirements, using graphic reports to show operating status at a glance and support simple analyses.

Use Case

In the past, IT director Bill had to report the operation and usage status of each system to manager Wang on a regular basis; this data is considered a key item of concern, especially after major systems go live. Given limited meeting time, Bill wants to present the report content in a clear, easy-to-understand fashion. He also wants to monitor the indexes on the report in real-time to spot problems, such as timeouts or anomalies, that need improvement. To this end, digiLogs provides a service for enterprises to develop customized, graphically presented reports according to task requirements, so that the information is easy to understand and quick to analyze.

- Cash Flow (Online Banking) System Scenario

Bill believes the dimensions of concern for the “Cash Flow (Online Banking) System” are system actions and hostnames, and their correlation with time and counts. He therefore proposed a customization requirement: present each system action with its average usage count and average time, and each hostname logged into the system with its average duration, in tabular format, bar graph, or line graph.
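On an Elasticsearch-style backend (an assumption, not something this guide confirms), “average time and count per action” is a terms aggregation with a nested average. A hedged sketch with the `elasticsearch` Python client; the field names `action`, `logtime`, and `elapsed_ms` are illustrative stand-ins for the report’s real schema:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical address

resp = es.search(
    index="pb-*",   # online-banking data source used later in this guide
    size=0,         # aggregations only, no raw hits
    query={"range": {"logtime": {"gte": "2021-12-01", "lte": "2021-12-15"}}},
    aggs={
        "by_action": {
            "terms": {"field": "action.keyword"},
            "aggs": {"avg_time": {"avg": {"field": "elapsed_ms"}}},  # assumed field
        }
    },
)
for bucket in resp["aggregations"]["by_action"]["buckets"]:
    print(bucket["key"], bucket["doc_count"], bucket["avg_time"]["value"])
```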

Steps

Step 1: General Search (overview of reports)

After logging in as bu01_mgr, click on “Transaction Monitoring” > “Average Online Banking Transaction Time Analysis” to select the time interval (Month) to be searched, and click [Search].

Step 2: Advanced Search (customized date range)

After logging in as bu01_mgr, click on “Transaction Monitoring” > “Average Online Banking Transaction Time Analysis” to select the time interval (Month) to be searched. Select the “Start Time (2021-12-01)” and “End Time (2021-12-15)” (using Absolute) in the function box on the right-hand side of the “Calendar” icon and click [Update].

Step 3: Real-time Monitoring (evaluate indexes for improvement)

Scroll down in “Search Result” to locate the target result. In the “Average Online Banking Transaction Time Analysis Table”, the first item has the longest average action time (6 sec), which clearly exceeds the average. This information can be used as a reference to evaluate whether the data is reasonable and whether corresponding improvements are needed.

- App System Scenario

Bill sets the dimensions of concern for the “App” to system actions and hostnames, and their correlation with time and counts. The customization requirement is therefore to present each app action with its average usage count and time, and each hostname logged into the app with its average duration, in tabular format, bar graph, or line graph.

Steps

Step 1: General Search (overview of reports)

After logging in as bu01_mgr, click on “Transaction Monitoring” > “Average APP Transaction Time Analysis” to select the time interval (Month) to be searched, and click [Search].

Step 2: Advanced Search (customized date range)

Click on “Transaction Monitoring” > “Average APP Transaction Time Analysis” to select the time interval (Month) to be searched. Select the “Start Time (2021-12-01)” and “End Time (2021-12-15)” (using Absolute) in the function box on the right-hand side of the “Calendar” icon and click [Update].

Step 3: Real-time Monitoring (evaluate indexes for improvement)

Scroll down in “Search Result” to locate the target result. In the “Average APP Transaction Time Analysis Table”, the first item has the longest average action time (2 sec). This information can be used as a reference to evaluate whether the data is reasonable and whether corresponding improvements are needed.

- API Management System Scenario

With the recently launched “API Management System”, Bill hopes to present the complete information of Traffic Analysis, Response Time (Max/Min), Usage Counts (Success/Failure respectively), API Counts – Time Analysis (Success only), Average API Time (Success only), Client-API Usage Counts, and Bad Attempt connection reports in tabular format, bar graph, or line graph.

Steps

Step 1: General Search (overview of reports)

After logging in as bu01_mgr, click on “Transaction Monitoring” > “Average API Time Calculation Analysis” to select the time interval (Month) to be searched, and click [Search].

Step 2: Advanced Search (customized date range)

Click on “Transaction Monitoring” > “Average API Time Calculation Analysis” to select the “Start Time (2021-12-01)” and “End Time (2021-12-15)” (using Absolute) in the function box on the right-hand side of the “Calendar” icon and click [Update].

Step 3: Real-time Monitoring (evaluate indexes for improvement)

In table “7. TSMP API traffic analysis”, API traffic in this time interval peaks at 15:46 with 50 hits. This information can be used as a reference to evaluate whether the data is reasonable and whether corresponding improvements are needed.

Scenario 5: Transaction Path

In this scenario, you can find out how digiLogs helps enterprises accomplish Log path mapping, customize the connections between monitored systems, and present them in a one-page graphic so that you can quickly identify system anomalies and how to contact the responsible party.

Use Case

IT director Bill finds that the subsystems of the online banking service are correlated. If the Logs of the subsystems can be integrated and connected, issues can be discovered faster and more clearly when system anomalies occur, before appropriate handling is taken. To this end, digiLogs helped his department develop path mapping so that the subsystems are connected and integrated according to their correlation and presented as a “one-page web page”, letting monitoring personnel quickly check the operation status between systems in real-time.

Steps

Step 1: General Search (overview of the system)

Click on “Transaction Path Mapping” to check whether each system is operating normally.

Step 2: Anomaly Inspection And Search For Contact Information (specific systems)
When anomalies occur, the “average time” of the system in question (personal online banking) appears in red.

Click on the “average time” of the system in question to confirm its “System Type” and “Transaction Type” status.

Click on the “icon” of the system in question to see the contact information of the responsible person.

Aspect 3: Search

Scenario 1: Dynamic Query Field

In this scenario, you can find out how digiLogs uses the “Dynamic Query Field” to find the target Log data accurately.

Use Case

One day, an anomaly occurred in the “centralized API system”. After receiving the task request, MIS member Tony initially believed the anomaly was caused by one of the APIs, so he needed to perform a “Dynamic Search” based on the predicted condition to verify the source of the anomaly and complete the subsequent handling. There were few clues when he first received the report. Since he was told the target data was in Log files, all he could do in the past was search the full text from the command line and gradually narrow the scope to collect the critical information, a very complicated process. After digiLogs was introduced, that inconvenience disappeared: in addition to using the platform’s “click mode” to perform a full-text index search, he can customize keywords or dynamic criteria to search the content, which makes the operation much easier and yields more accurate results.
Function Description
  • Full-text Index: Search the entire Log file
  • Keyword Query: Find clues to lock the target
  • Dynamic Query: Find out the log anomalies quickly
  • Shield Protection: Add security protection to sensitive data. You can set shield criteria so that content involving personal information and the like is automatically masked with hidden codes (see the sketch after this list).
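As a rough sketch of the shield idea: masking is pattern-based substitution applied before results are displayed. The rule below, which hides the middle of a Taiwan-style national ID, is a hypothetical example; real shield criteria are configured on the platform, not in code.

```python
import re

# Hypothetical rule: one uppercase letter + nine digits (Taiwan national ID shape).
# Keep the first and last characters visible; mask the middle.
ID_PATTERN = re.compile(r"\b([A-Z])(\d{7})(\d{2})\b")

def apply_shield(text: str) -> str:
    """Mask the sensitive middle section with hidden-code characters."""
    return ID_PATTERN.sub(lambda m: m.group(1) + "*******" + m.group(3), text)

print(apply_shield("cid=A123456789 completed transfer"))
# -> cid=A*******89 completed transfer
```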

Steps

(If you already have the criteria for log query, you can also follow the steps on this page to set them up)

Step 1: Full-Text Index Query

After logging in as bu02_dev, click on “Log Query” > “Log Query” to select the starting and ending time (2021/12/27~2021/12/31). Select the data source to be searched (Index=dgr_sit_api_log_*) and click [Search].

Step 2: Keyword Query (results of this example are presented as cards)

Following the previous step, enter “rtnCode” in “Keyword Query” and click [Search]. Check “No., cid, ResHeader.rtnCode, ResHeader.rtnMsg, mbody” in the field.

(Note: When entering a keyword, double quotation marks must be added before and after the string. For example: “test”.)

Step 3: Dynamic Query (results of this example are presented in tabular format)

Following the previous step, click “Add” in “Add A New Query By Field Criteria”.

Pull down and select “Field” = ‘ResHeader.rtnCode’, “Operator” = ‘is not’, “Value” = ‘1100’ and “Field” = ‘ResHeader.rtnCode’, “Operator” = ‘is not’, “Value” = blank. Click [Search] to complete the data search.
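On an Elasticsearch-style backend (again an assumption), the two “is not” criteria above become `must_not` clauses of a boolean query. A hedged sketch with the Python client; the `logtime` timestamp field is assumed:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical address

resp = es.search(
    index="dgr_sit_api_log_*",
    query={
        "bool": {
            "filter": [  # the step's start/end time window
                {"range": {"logtime": {"gte": "2021-12-27", "lte": "2021-12-31"}}}
            ],
            "must_not": [  # "is not 1100" and "is not blank"
                {"term": {"ResHeader.rtnCode": "1100"}},
                {"term": {"ResHeader.rtnCode": ""}},
            ],
        }
    },
)
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("ResHeader", {}).get("rtnCode"), src.get("mbody"))
```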

Step 4: Select A Display Info Field

Following the search result from the previous step, select the field content to be displayed in the info field (located between “Table” and “Export”). You can also export the data (.xlsx). Check “No., cid, ResHeader.rtnCode, ResHeader.rtnMsg, mbody” in the field.

(* Data protected by shield is presented as cid in this picture)

Display switch (tabular/card format)

Scenario 2: Associate Query

In this scenario, you can find out how digiLogs finds clues correlating with other data sources through “Associate Query”, and how to trace Log transaction information across systems according to its context.

Use Case

One day, MIS member Tony came to the platform to look for the cause of anomalies in the “cash flow (online banking) system” after receiving a search assignment. Because the system in question involves other systems, he needs to analyze specific keywords in every Log column in the process and index other sources according to those clues in order to find the associated Log data and its transaction information passing through the systems. In the past, a designated person had to go to the system in question to search its Logs for the cause of anomalies; if several systems were involved, he had to switch between them to fully grasp the associated information and its correlations, a troublesome and lengthy process. digiLogs greatly improves this situation: it integrates multiple sources, and once the target criteria are set, the search results include clues to associated data from other sources.

Steps

(If you already have the criteria for log query, you can also follow the steps on this page to set them up)
Step 1: Set Up Query Criteria

After logging in as bu02_dev, click on “Log Query” > “Log Query” to select the starting and ending time (2021/12/29~2021/12/31). Select the data source to be searched (Index=pb-*) and click [Query]. Check “No., PmtID, logtime, action, source, message” in the field.

Step 2: Detailed Information Query

Go to the “Action” tab at the far right of the search results and click [Message Detailed Data].

Look through “Message Detailed Data” for the associated information in each system. In this example, clues about a Call API are found in the fifth data field. Finally, depending on the requirement, refine the search with “Keyword Query” and “Dynamic Query” for more accurate results.
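In essence, an associate query is a two-step correlated lookup: extract a shared key (here PmtID) from the first source, then query the other sources with it. A minimal sketch under the same Elasticsearch-backend assumption; treating PmtID as a field shared across indexes is itself an assumption made for illustration:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical address

# Step 1: locate the anomalous record in the online-banking source.
first = es.search(index="pb-*", size=1, query={"match": {"message": "failure"}})
pmt_id = first["hits"]["hits"][0]["_source"]["PmtID"]

# Step 2: follow the clue into the API source using the shared key.
related = es.search(index="dgr_sit_api_log_*", query={"term": {"PmtID": pmt_id}})
for hit in related["hits"]["hits"]:
    print(hit["_index"], hit["_source"].get("message"))
```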

Scenario 3: Continuous Log Query

In this scenario, you can find out how digiLogs can retrieve the continuous logs surrounding a record in a Log file by customizing the time range through “More Query”, so that you can discover the root cause of system anomalies.

Use Case

One day, after receiving an assignment, MIS member Tony came to the platform to look for the cause of anomalies in the “cash flow (online banking) system”. Because he wanted to understand the root cause of the anomalies, he used the platform’s “Time Range” function to obtain the complete continuous logs of the Log file so that he could propose an appropriate solution. In the past, a designated person would use time as the central axis to search the entire continuous log file to better understand it. digiLogs provides the same capability: simply click or customize the time frame to find the root cause of anomalies.

Steps

(If you already have the criteria for log query, you can also follow the steps on this page to set them up)
Step 1: Set Up Query Criteria

After logging in as bu02_dev, click on “Log Query” > “Log Query” to select the starting and ending time (2021/12/29~2021/12/31). Select the data source to be searched (Index=pb-*) and click [Query]. Check “No., PmtID, logtime, action, source, message” in the field.

Step 2: More Query

Select PmtID=2020122531279601 from “Actions” and click [More Query]. Drag and drop to set the time range (Time Range(s)=200, File Size=1) and click [Query].

Drag and drop to set the time range (Time Range(s)=200, File Size=50) and click [Query].
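Conceptually, a More Query with Time Range(s)=200 fetches the log lines within ±200 seconds of the selected record’s timestamp. A hedged sketch under the same Elasticsearch-backend assumption, with an illustrative anchor timestamp and the assumed `logtime` field:

```python
from datetime import datetime, timedelta
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical address

anchor = datetime(2021, 12, 29, 10, 15, 0)  # timestamp of the selected record (illustrative)
window = timedelta(seconds=200)

resp = es.search(
    index="pb-*",
    size=500,
    sort=[{"logtime": "asc"}],              # return the surrounding lines in order
    query={"range": {"logtime": {
        "gte": (anchor - window).isoformat(),
        "lte": (anchor + window).isoformat(),
    }}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["logtime"], hit["_source"].get("message"))
```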

Scenario 4: Historical Data Query

In this scenario, you can learn how to do real-time queries directly on the platform through “Log Query” after reactivating the archived “Historical Data” in digiLogs.

Use Case

In the past, the MIS team would package and compress historical data. When past data was needed, the entire package had to be decompressed and the data files searched one at a time by target date. Not only did the search take a long time, but the process was also troublesome. digiLogs now provides an easier solution: the platform divides data into cold and hot data according to how long it has been stored. When you need to query cold data, you can reactivate it through “Query Index” so that it can be queried temporarily.

Function Description
  • Hot data: Recent data that can be queried directly in digiLogs.
  • Cold data: Historical data with a longer elapsed time. digiLogs archives unused data; you can convert it into temporary “hot data” for direct query through operation settings.
MIS member Tony received an assignment in January 2022: his supervisor asked him to verify the data because the API usage rate for that month was suspiciously high. He therefore needed to query the API Log data for December 2021 and January 2022, compare the results, and compile a comparative report. The December 2021 data, however, had become “cold data”, so he used the platform’s cold-data reactivation function in “Query Index” to query and export the Log data. Afterward, he summarized and compared the data to complete the report.
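If the backend is Elasticsearch (an assumption this guide does not confirm), reactivating cold data resembles reopening a closed index so it becomes searchable again; the platform’s [Open] action in “Query Index” is the equivalent. A hedged sketch with the Python client and an illustrative index name:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical address

cold_index = "dgr_sit_api_log_2021.12"  # illustrative archived December index

es.indices.open(index=cold_index)       # reactivate: cold -> temporary hot data

resp = es.search(index=cold_index, size=10, query={"match_all": {}})
print(resp["hits"]["total"])            # the December data is now queryable

es.indices.close(index=cold_index)      # archive it again when finished
```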

Steps

Step 1: Set Up Query Criteria

After logging in as bu02_dev, click on “Index Management” > “Query Index” to select the starting and ending time (2021-12-01~2021-12-31). Select the data source to be queried (index=dgr_sit_api_log_*) and click [Search].

Step 2: Reactivate Cold Data Setting

Select the date to be searched (2021-12-01~2021-12-31). Select the data source to be queried (Index=dgr_sit_api_log_*) and click [Search]. Select the date you want to open and click [Open].

Step 3: Use The Reactivated Cold Data

Click on “Log Query” > “Log Query” to select the starting and ending time (2021-12-01~2021-12-31). Select the data source to be queried (Index=dgr_sit_api_log_*) and click [Search].

You can use the above method to quickly reactivate historical data for a short period in order to obtain files for comparison. After exporting, you can use them for month-over-month (MoM) data comparison and chart drawing.

Scenario 5: Read File Hosting

In this scenario, you can find out how digiLogs assists enterprises with “Read File” hosting. Once set up, it works out of the box on the platform and lets you monitor enterprise log files in real-time.

Use Case

Manager Wang introduced the digiLogs centralized management platform to manage system data records of various hardware and software easily through a single interface, reducing the team’s query time and improving management efficiency.
digiLogs provides the hosting function “Read File” to help enterprises fulfill the expectations of a “Logs Management Center”. After a one-time setup, it eliminates the traditional process of entering the full configuration (IP, Port, ID, PWD, etc.) each time, so you can quickly switch between and monitor real-time Log file data in a single interface.

Function Description

cat mode: A Linux command for viewing the content of a file. It is often used to display an entire file or to merge multiple files.
tail mode: Also a Linux command for viewing file content, mainly used to display the last few lines of a file. Whenever the file content is updated, the display refreshes automatically so that it always reflects the latest data.
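tail mode behaves like the Linux `tail -f` command: print the last lines, then keep streaming as the file grows. A minimal Python sketch of that follow loop, with a hypothetical log path; digiLogs performs this for you through the hosted Read File interface:

```python
import time

def tail_follow(path: str, last_n: int = 10) -> None:
    """Print the last lines of a log file, then stream new lines (tail -f style)."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f.readlines()[-last_n:]:  # the "last few lines" tail mode shows
            print(line, end="")
        while True:                           # follow: emit lines as they appear
            line = f.readline()
            if line:
                print(line, end="")
            else:
                time.sleep(1)                 # wait for the file to grow

# Hypothetical path on the monitored host:
# tail_follow("/var/log/digirunner/dgr-cus-etb_cg.log")
```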

Steps

(If you already have a host you want to set up, you can also follow the steps on this page to set it up)
Step 1: Set Up The Host

Click on “Read File Management” > “Host Maintenance” and click [Add A New Host].

Enter the “hostname, IP, and directory path to store the Log file” to be used in the corresponding field and click [Create].
Step 2: Set Up The Categorized Log

Click on “Read File Management” > “Categorized Log Maintenance”. Click [Create].

Enter the “main system, business type, hostname, path, and other data” to be used in the corresponding field and click [Create].

Step 3: Search past files that have occurred - cat mode

(To see the difference clearly, please choose an earlier date as the starting and ending dates for the search.)

Click on “Log Query” > “Read File” to select the target “system” and “business”, and then select the target “Directory (host)” (host=apiModule(digirunner)). Select the starting and ending time for the search, or enter the content to be searched in “Keyword Search” (dgr-cus-etb_cg-v3.8.4.24.log) and click [Search].

Select cat mode. After locating the target file name (select a file with the “.log” extension), click [Preview] in the “action” tab. This performs a complete content search in a single window.

Step 4: Search files in real-time - tail mode

(To see the difference clearly, please use the current date as the start and end dates.)

Click on “Log Query” > “Read File” to select the target “system” and “business”, and then select the target “Directory (host)” (host=apiModule (digirunner)). Select the starting and ending time to be searched, or enter the content to be searched in “Keyword Search” (dtsmpc-v3.10.0) and click [Search].

Select tail mode. After locating the target file name (select a file with the “.log” extension), click [Preview] in the “action” tab. This performs a content search of the last few lines in a single window; whenever the file content is updated, the window refreshes automatically.

In the search results, you can switch between cat and tail modes as needed; multiple character encodings are supported (Big5, UTF-8, ASCII).