In the rapidly evolving landscape of cybersecurity, organisations face increasing challenges in protecting their networks from sophisticated cyber threats. As the frequency and complexity of cyber attacks continue to rise, investing in robust network security monitoring (NSM) tools has become essential to detect and respond promptly to potential intrusions.
This blog post explores the hurdles that Security Operations Centre (SOC) analysts encounter when seeking actionable insights from diverse data sources, and how to address these challenges by harnessing the power of Elasticsearch.
Network Security Monitoring (NSM) Using Elasticsearch
Cyber attacks in Australia rose sharply in 2022, with various incidents taking place, including the IPH and Hafele cyber attacks. This makes choosing the right way to search and analyse the data generated by your various toolsets vital to reducing blind spots. The challenges are acute when Security Operations Centre (SOC) analysts want to gain actionable insights from events generated by diverse sources. This blog post explores how to overcome these challenges and boost the effectiveness of your network security monitoring (NSM) using the powerful Elastic Stack.
Network Security Monitoring Tools Landscape and the Challenges They Create
As cyberattacks grow in both number and sophistication, companies are implementing a broad set of network security monitoring tools to detect and prevent threats from both outside and within their networks. While the ever-growing number of safeguards provides more complete security coverage, SOC teams also face challenges when trying to make the best use of those NSM tools. The top two challenges are:
Complexity in creating monitoring points
One undeniable truth is that the sophistication of network architecture has increased the complexity of network monitoring. Companies may use a variety of tools to detect malicious activities in the early stages, such as Arkime (formerly known as Moloch) for packet capturing and Zeek (formerly known as Bro) for network analysis, intrusion detection and prevention. Security teams require full visibility of the indicators generated by these various tools, which in turn requires a scalable solution to collect and visualise relevant monitoring data from each of them.
Difficulty in generating insightful information
Network Security Monitoring data can be generated from diverse sources. Even if all the data is gathered and stored in a central repository, a security analyst will need to find correlations between events to search, analyse and investigate network traffic for potential attacks. Additionally, with the high volume of data collected, SOC teams can be overwhelmed by too many alerts generated by different security tools.
A centralised SIEM platform that is customisable, scalable and able to work with a variety of tools, such as Elastic SIEM, can provide the capability to search, analyse and manage network security data and triggered alerts. Elasticsearch also ships with built-in security features, such as authentication, role-based access control and the Elasticsearch keystore, that help protect sensitive data.
Centralised Visibility to Scrutinise Network Security
Elastic SIEM, with its powerful log management capability, out-of-box detections and scalable Machine Learning, is one of the best options to fulfil the requirements to address the above issues. The Elastic SIEM can process data from various sources and support in-depth analysis of the processed data. Some of the key components required for your network security monitoring capability are:
- Beats – Lightweight data shippers that collect data and send it to Elasticsearch. Packetbeat is Elastic's real-time network packet analyser. Filebeat, another member of the Beats family, is used to forward log data from other network security monitoring tools and provides a variety of modules for processing logs.
- Logstash or ingestion pipelines – Used to parse and enrich the log data.
- Elasticsearch – Data is centrally stored here and becomes ready for search and analysis.
- Kibana – A frontend application for users to query and visualise data indexed in Elasticsearch. In particular, it provides the UI for the SIEM features and enables the SOC team to view alerts generated from network security data.
With the integrated solution provided by Elastic, you can streamline the process to monitor network traffic and have a complete picture to detect anomalous behaviours. You can respond quickly when malicious parties try to gain unauthorised access to the network or when your IDS/IPS detects suspicious activity. Using Elasticsearch, you can collect relevant information to investigate security incidents and reduce the impact of cybersecurity attacks.
Building a Centralised Monitoring Environment
In a real-world environment, the network traffic will be forwarded to an endpoint via a Switch Port Analyzer (SPAN), mirror port or TAP, where a network security monitoring tool is installed. A Beats agent will be installed on the same endpoint to ship the events generated by the network security monitoring tool to Elastic.
For simplification, we will demonstrate below how to configure and use the centralised environment in a test environment where Zeek, Suricata, Snort and Arkime are installed on one host and shipping the logs to an Elastic Cloud instance using Filebeat. We will also show how to enable the community ID that is used to correlate events between different tools.
Configuring Network Security Monitoring Tools in the Test Environment
After you install Suricata on a host, you need to edit the configuration file /etc/suricata/suricata.yaml to select which interface and IP addresses you want Suricata to listen on and which IPs/networks are not local. Also, enable the community flow hash ID by setting community-id to true.
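The relevant settings look roughly like the excerpt below; the interface name and address ranges are placeholders for your environment:

```shell
# Illustrative excerpt of /etc/suricata/suricata.yaml (not a command to run):
#
#   vars:
#     address-groups:
#       HOME_NET: "[192.168.0.0/16]"   # networks local to your environment
#       EXTERNAL_NET: "!$HOME_NET"     # everything else is non-local
#   af-packet:
#     - interface: eth0                # interface Suricata listens on
#   outputs:
#     - eve-log:
#         community-id: true           # add the community flow hash to EVE records
```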
You can run Suricata with the help of the following command:
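One common invocation, assuming the monitored interface is eth0, is:

```shell
# Run Suricata with the configuration edited above (eth0 is a placeholder)
sudo suricata -c /etc/suricata/suricata.yaml -i eth0
```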
Suricata logs can be viewed in: /var/log/suricata/eve.json
After you install Zeek on a host, you need to edit the file /usr/local/zeek/etc/node.cfg to select which interface to monitor and /usr/local/zeek/etc/networks.cfg to identify the networks that are local to the environment being monitored.
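The edits look roughly like the following; the interface and network range are placeholders:

```shell
# Illustrative excerpts (not commands to run):
#
# /usr/local/zeek/etc/node.cfg – a minimal standalone node
#   [zeek]
#   type=standalone
#   host=localhost
#   interface=eth0                 # interface Zeek monitors
#
# /usr/local/zeek/etc/networks.cfg – networks local to the monitored environment
#   192.168.0.0/16    Private local network
```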
You can run Zeek with the help of the following command:
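A typical way to start Zeek is via zeekctl, which lives under /usr/local/zeek/bin/:

```shell
# "deploy" installs the current configuration and (re)starts Zeek
cd /usr/local/zeek/bin/
sudo ./zeekctl deploy
```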
Zeek logs can be viewed in /usr/local/zeek/logs/current/, where logs for different event types such as capture_loss, dns and http can be found.
To enable community ID flow hashing, the bro-community-id plugin needs to be downloaded and installed. Once done, Zeek needs to be configured to load the plugin by adding the following line in /usr/local/zeek/share/zeek/site/local.zeek:
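One common way to do this, assuming the Zeek package manager (zkg) is available, is to install the Corelight community-ID package and load installed packages from local.zeek; the exact load line can differ depending on how the plugin was installed:

```shell
# Install the community ID plugin via the Zeek package manager
zkg install corelight/zeek-community-id

# Load all zkg-installed packages from local.zeek
echo '@load packages' >> /usr/local/zeek/share/zeek/site/local.zeek
```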
Restart Zeek with the new configuration to load the plugin:
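Redeploying via zeekctl picks up the new configuration:

```shell
# Re-install the configuration and restart Zeek so the plugin is loaded
sudo /usr/local/zeek/bin/zeekctl deploy
```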
After you install Snort on a host, you need to edit the configuration file snort.conf to identify which network address you want Snort to listen on and which IP/Network is not local. Also, uncomment the syslog output in step 6.
Configure rsyslog to listen on 514 (TCP) and forward the logs to Filebeat on port 10514 (TCP). You can run Snort with the help of the following command:
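A minimal sketch of both steps is below; the rsyslog rule and the Snort interface are placeholders matching the ports described above:

```shell
# Illustrative rsyslog drop-in, e.g. /etc/rsyslog.d/snort.conf (not a command to run):
#   module(load="imtcp")
#   input(type="imtcp" port="514")
#   *.* @@127.0.0.1:10514        # forward over TCP to Filebeat's syslog input
sudo systemctl restart rsyslog

# Run Snort as a daemon on the monitored interface (eth0 is a placeholder)
sudo snort -c /etc/snort/snort.conf -i eth0 -D
```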
Snort logs can be viewed at: /var/log/snort/snort.log
Arkime (formerly known as Moloch)
After you install and configure Arkime on a host, you can start Arkime service with the help of the following command:
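Service names vary between Arkime versions; on older Moloch-era packages, starting the viewer looks like:

```shell
# Start the Arkime/Moloch viewer service (name may differ on newer installs)
sudo systemctl start molochviewer.service
```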
Packet capture can be started using the following command:
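On the same older packages, whose binaries live under /data/moloch/bin/, capture is started as a service:

```shell
# Start the capture service (name may differ on newer Arkime installs)
sudo systemctl start molochcapture.service
```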
Enable Filebeat Modules
After you install and configure Filebeat on a host, you can enable the built-in modules to simplify the collection, parsing and visualisation of common log formats. The Zeek, Snort and Suricata modules can be enabled by running the command:
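```shell
# Enable the three modules in one go
filebeat modules enable zeek snort suricata
```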
Then you need to edit the configuration file of each module – zeek.yml, snort.yml and suricata.yml – to point each module at the logs you want to collect, specifying var.paths for each event type.
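The entries look roughly like the following, using the log paths from the sections above (file locations under modules.d/ may vary by install):

```shell
# Illustrative excerpt of /etc/filebeat/modules.d/suricata.yml (not a command to run):
#   - module: suricata
#     eve:
#       enabled: true
#       var.paths: ["/var/log/suricata/eve.json"]
#
# Illustrative entry in /etc/filebeat/modules.d/zeek.yml (one block per event type):
#   - module: zeek
#     dns:
#       enabled: true
#       var.paths: ["/usr/local/zeek/logs/current/dns.log"]
```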
Now you can start Filebeat using:
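```shell
# Load the index templates and built-in dashboards, then start shipping
sudo filebeat setup
sudo systemctl start filebeat
```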
View logs in the Discover page
Once you have the Elasticsearch cluster in place, Filebeat transforms the network logs and stores them in Elasticsearch as Filebeat indices. On the Discover page, you can explore the network data with simple queries.
View built-in dashboards for Filebeat
Filebeat Zeek overview:
Filebeat Suricata Events Overview:
Filebeat Suricata Alert Overview:
View Elastic Security and Create Detection Rules
Now you can view your network security data in the Elastic Security UI and have a look at how many events are collected by Filebeat.
Moreover, you can enable some detection rules or build your own to generate alerts for malicious activities. You can also use machine learning to identify anomalous network behaviours.
Correlating Suricata, Zeek and Arkime events
Suricata and Zeek events can be correlated with each other, and with the pcap in Arkime, using the 'Community Id' field in Arkime and the 'network.community_id' field in Elastic for Zeek and Suricata. As you can see in the screenshot below, the pcap in Arkime and the Zeek and Suricata events in Elastic share the same community ID.
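As a hypothetical example of pivoting on that field, a term query against the Filebeat indices returns every event – whichever tool produced it – that shares one flow's community ID (the host, credentials and hash value below are placeholders):

```shell
# Fetch all Zeek and Suricata events for a single flow by its community ID
curl -s -u elastic:$ELASTIC_PASSWORD "https://localhost:9200/filebeat-*/_search" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"term": {"network.community_id": "1:LQU9qZlK+B5F3KDmev6m5PMibrg="}}}'
```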
To conclude, network security monitoring is crucial to your security detection capability, and you must build an effective solution to gain centralised visibility into your network. Leveraging Elasticsearch within the Elastic Stack offers a powerful answer to the challenges faced in network security monitoring (NSM). By adopting Elasticsearch and implementing the practices above, organisations can reduce blind spots, boost response times and fortify their defences against cyber threats, turning raw network data into actionable insights and proactive threat detection.
Skillfield is a Melbourne-based Cyber Security and Data Services consultancy and professional services company. We provide solutions that help our customers discover, protect and optimise big data in a way that works for them.