Elastic Security

How to Set Up Elastic Cloud Security – A Detailed Guide

Identity compromise is a primary concern for any business operating in the cloud. The problem has grown more acute over time as organisations continue to adopt cloud resources as part of their infrastructure without appropriately implementing and maintaining an accompanying capability to monitor and react to incidents of identity compromise.

The following post discusses some previous cases of cloud data breaches, specifically those relating to identity compromise, and the impact such breaches have on an organisation.

The associated impacts of these compromises vary widely. Attackers might use a compromised identity for something as straightforward as stealing CPU time on your instances to mine Bitcoin, but the consequences can be far worse: compromised credentials can lead to deleted infrastructure and stolen data.

In this blog post, we will discuss Elasticsearch, specifically focusing on Elastic Security and its pre-built rules for anomaly detection within your cloud environment. Consider your cloud environment to be “all those resources associated with your identity-managed accounts”.

To address these risks, Elastic Security offers a collection of pre-configured rules designed to identify anomalies within your cloud environment, flagging even subtle deviations from established usage patterns across the resources tied to your identity-managed accounts.

The steps below explain how to leverage these rules to monitor your cloud environment for anomalies.

Setting up your Elastic cluster

Visit the Elastic website and sign up for a 14-day trial to get access to all the features of the Elastic Stack, then assign a name to your deployment.

It should only take a few minutes to set up your cluster in the background. You will be notified once it is ready.

Choose the integration of your choice

The built-in rules can be found on the “Integrations” page, where the options are categorised by use case or environment. Since we are seeking to monitor our cloud resources, you can simply filter the options to only display “cloud” integrations, or perhaps those that are specific to your provider.

A selection of integrations is on display

 

Installing your Elastic agent

Elastic provides a user-friendly interface for setting up these integrations. The first step is to deploy an Elastic Agent to ship the logs from your cloud environment to Elastic.

Installing your elastic agent

 

In this step, our aim is to monitor Azure logs so that a resource-specific pre-built rule can trigger an alert. The method by which you install your agents will depend on the host type you choose.

 

Elastic will provide the installation steps for each compatible operating system. In this example, I used an Ubuntu virtual machine.

Ubuntu virtual machine
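On Ubuntu, the commands Elastic generates look similar to the following sketch (the version number, Fleet Server URL, and enrolment token below are placeholders; copy the exact commands from your own set-up wizard):

```shell
# Download and unpack the Elastic Agent (version shown is an example)
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.10.4-linux-x86_64.tar.gz
tar xzvf elastic-agent-8.10.4-linux-x86_64.tar.gz
cd elastic-agent-8.10.4-linux-x86_64

# Install and enrol the agent into Fleet; the URL and token
# are specific to your deployment and appear in the wizard
sudo ./elastic-agent install \
  --url=https://<fleet-server-host>:443 \
  --enrollment-token=<your-enrolment-token>
```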

Once you’ve successfully installed your agent, the set-up wizard will automatically detect it and take you to the next step.

set-up wizard

enrolment confirmed

Configuring the integration to Azure

The next step is to configure the environment-specific integration components. Each integration has a preferred way of collecting logs from your environment; Azure Logs requires an Event Hub and a Storage account. The wizard then lists the requirements for enabling seamless communication with your cloud environment.

Azure logs integration

Listed below are some of the required fields to be completed for the integration to work.

Integration settings

To satisfy the minimum configuration requirements, you need to make a few changes in your cloud environment. The first step is to set up a diagnostic setting for streaming activity logs. Navigate to the “Activity log” of the resource group and click on “Export Activity Logs”.

Navigate to the activity log

Add a diagnostic setting.

Diagnostic Setting

 

Create an event hub that will be used as the log collector.

create an event hub

 

Create a namespace for your Event Hubs.

create a namespace

 

Create a new Event Hub instance that will collect diagnostic logs within your environment.

create a new event hub

 

Update your diagnostic setting to enable the streaming of logs using the Event Hub that you created.

 

Using the same settings, you can select the types of logs you want to capture (e.g. Administrative, Security). For this tutorial, make sure to check the Administrative category.

select the types of logs you want to capture
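If you prefer the Azure CLI over the portal, the steps above can be sketched as follows. Resource names are placeholders, and the exact parameters, particularly for the subscription-level diagnostic setting, can vary between CLI versions, so verify against the `az` reference docs:

```shell
# Create an Event Hubs namespace and an Event Hub to receive activity logs
az eventhubs namespace create \
  --resource-group my-rg --name my-logs-ns --location australiaeast
az eventhubs eventhub create \
  --resource-group my-rg --namespace-name my-logs-ns --name insights-activity-logs

# Stream subscription activity logs (Administrative category) to the Event Hub
az monitor diagnostic-settings subscription create \
  --name export-activity-logs \
  --location australiaeast \
  --event-hub-name insights-activity-logs \
  --event-hub-auth-rule "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-logs-ns/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"category": "Administrative", "enabled": true}]'
```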

 

Add an access policy to your Event Hub to generate a connection key. This is one of the required inputs in the integration form shown earlier.

generate a connection key

 

Set the relevant access. For this tutorial, make sure to check “Manage”, which includes both Send and Listen rights. Don’t forget to copy the key (as highlighted below) and enter it in the configuration settings.

set the relevant access
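The same access policy and key can be created from the Azure CLI; this is an indicative sketch using the placeholder names from earlier:

```shell
# Create a shared access policy on the Event Hub ("Manage" implies Send and Listen)
az eventhubs eventhub authorization-rule create \
  --resource-group my-rg --namespace-name my-logs-ns \
  --eventhub-name insights-activity-logs \
  --name elastic-reader --rights Manage Send Listen

# Retrieve the connection string to paste into the integration settings
az eventhubs eventhub authorization-rule keys list \
  --resource-group my-rg --namespace-name my-logs-ns \
  --eventhub-name insights-activity-logs \
  --name elastic-reader --query primaryConnectionString -o tsv
```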

 

Check the status of your agent. A Healthy state indicates that your connection is successful. 

check agent status

 

Creating rules to detect anomalies in your cloud environment

Elastic Security provides pre-built rules, derived from a set of curated queries, that can be used to search large volumes of data periodically. You can enable these rules by navigating to Elastic Security > Detect > Rules.

enable pre-built rules

 

You can search or filter these rules using tags and easily find what fits your use case. Since we are dealing with the cloud, filter the list to show built-in rules for Azure.

filter the rules

 

The scenario being evaluated is an attacker deleting resources (in this case, an Event Hub) in our cloud environment in an attempt to evade detection. A pre-built rule queries Azure Activity logs for resource deletions. Details of a rule can be viewed by clicking its row on the listing page.

details of the rules
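As an illustration of what such a rule looks like under the hood, the query behind an Event Hub deletion rule is similar to the following KQL; check the rule’s detail page for the exact query shipped with your version:

```
event.dataset: azure.activitylogs and
azure.activitylogs.operation_name: "MICROSOFT.EVENTHUB/NAMESPACES/EVENTHUBS/DELETE" and
event.outcome: (Success or success)
```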

 

Enable the rule by toggling the status switch located in the upper right corner of the detail page.

On the lower right section of the detail page, you can also see the pre-configured frequency, or runtime schedule, for this rule. The enabled rule runs every 5 minutes with an additional look-back time of 20 minutes. The look-back acts as a buffer in case previous runs failed or the logs were delayed.

set rule run-times
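The relationship between the interval and the look-back can be sketched with a quick calculation (values assumed from the rule above):

```shell
interval=5    # minutes between rule executions
lookback=20   # additional look-back applied to each execution

# Each run effectively queries the last (interval + lookback) minutes of logs,
# so one successful run re-covers up to (lookback / interval) missed runs.
window=$((interval + lookback))
covered=$((lookback / interval))
echo "query window: ${window} minutes"      # query window: 25 minutes
echo "missed runs re-covered: ${covered}"   # missed runs re-covered: 4
```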

 

Rule Actions can also be configured so that every rule execution triggers another event. On the rule’s detail page, a three-dot icon in the upper right section provides the option to edit the rule settings.

edit the rules

 

In this tutorial, we will send a notification via a Slack message. The action form provides the fields required to establish a connection via a Webhook URL.

Establish Slack integration
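Before wiring the webhook into the rule action, you can sanity-check it from a terminal (the URL path below is a placeholder for your own incoming-webhook URL):

```shell
# Post a test message to the Slack incoming webhook
curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Test alert from Elastic Security"}' \
  https://hooks.slack.com/services/<your-webhook-path>
```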

 

Triggering the Detection

We will replicate a delete action in our Azure environment in order to test the rule that we’ve configured. To conduct an accurate test, ensure your agent is running and the rule is enabled. At this stage, create 4 new Event Hubs within the subscription, then delete them one by one, either manually or via the Azure CLI, to trigger the detection rule.

triggering the detection
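Deleting the test hubs via the Azure CLI might look like this (names are the placeholders used earlier; adjust to your own resources):

```shell
# Delete the test Event Hubs one by one to generate deletion events
for hub in test-hub-1 test-hub-2 test-hub-3 test-hub-4; do
  az eventhubs eventhub delete \
    --resource-group my-rg --namespace-name my-logs-ns --name "$hub"
done
```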

 

Viewing and Triaging Alerts

Pre-built rules generate alerts and Elastic Security provides a convenient way for security analysts to capture, investigate and respond to them using a unified view pane.

triggering alerts

 

To conduct further analysis or investigation, click on the action icon provided for each alert instance.

open every alert instance

 

You will be directed to a new screen where additional queries can be run against the log data. You can also conduct correlation analysis against other instances that have overlapping information.

investigating unauthorised deletion

 

These alerts will generate Slack messages.

Slack messages will be generated

Conclusion

Detecting attackers in your cloud environment can be challenging, not only because part of the infrastructure is no longer within your direct control, but also because the number of resources you create keeps growing. Manually monitoring resources and ensuring that actions stay within authorised boundaries is time-consuming. The bigger the scope of your environment, the more it makes sense to automate repetitive tasks.

Elastic Security is a one-stop-shop solution for monitoring, detecting and investigating anomalous activity within your cloud environment. The effort required to monitor these resources can be reduced using pre-built rules, letting security teams focus on the critical aspects of an alert rather than spending time auditing each activity log in the subscription. Historical data is stored in one place and can be easily retrieved using the search power that Elasticsearch brings.

FAQ

What is an Elasticsearch index?

An index, in Elasticsearch, is a collection of documents that share similar data structures and characteristics.
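For example, against a local cluster you could create an index and add a document to it with the REST API (the endpoint and index name are illustrative):

```shell
# Create an index, then index a single document into it
curl -X PUT "localhost:9200/activity-logs-000001"
curl -X POST "localhost:9200/activity-logs-000001/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"message": "event hub deleted", "event": {"outcome": "Success"}}'
```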

What is a Nested Field Type?
A nested field is a data structure that allows you to store arrays of objects as separate, hidden documents within a parent document. It is particularly useful when dealing with arrays of complex objects, where you wish to maintain the relationships and structure of those objects for querying and indexing purposes.
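A minimal mapping sketch using a nested field (the index and field names are illustrative):

```shell
# Map "accounts" as a nested field so each object is indexed
# as its own hidden document under the parent
curl -X PUT "localhost:9200/users" -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "name":     { "type": "keyword" },
      "accounts": {
        "type": "nested",
        "properties": {
          "provider": { "type": "keyword" },
          "role":     { "type": "keyword" }
        }
      }
    }
  }
}'
```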

What are Beats?
Beats are a family of lightweight data shippers designed to collect, ship, and centralise various types of operational data from different sources. They’re an integral part of the Elastic Stack, which also includes Elasticsearch, Kibana, and Logstash. Beats play a key role in ingesting and processing data for analysis, visualisation, and monitoring.

Further Reading

  1. Network Security Monitoring (NSM) using Elasticsearch
  2. Using Elasticsearch to Trigger Alerts in TheHive
  3. Elasticsearch and IoT: Match Made in Heaven?
About Skillfield

Skillfield is an Australian-based IT services consultancy company empowering businesses to excel in the digital era. Across our two main practices of Cyber Security & Data Services, our talented and committed professionals provide smart and simplified solutions to complex cyber security and big data challenges.
