Creating a Single-Node ELK Stack
Building off my previous post, Introduction to ELK, I figured it would be great to begin to discuss how to create a “stack.” I have created multiple different stacks in the past couple months, each with their own specific purpose. While the services within an ELK stack are meant to be spread across different nodes, building a “single-node” stack can be a great and easy way to familiarize yourself first-hand with the functionality of Elasticsearch, Logstash, and Kibana.
The following steps are heavily inspired by, and adapted from, the work of Roberto Rodriguez, @Cyb3rWard0g. Roberto’s dedication to DFIR and Threat Hunting, as well as his generously detailed GitHub page, have taught me almost all of the fundamentals I needed to learn when starting with ELK. If you have time, I would highly recommend checking out his work.
Requirements
- ESXi Server OR VMware Workstation
- Ubuntu Server 18.04.1 LTS – ISO Download
- OR Ubuntu Server 16.04.5 LTS – ISO Download
- Minimum of 3GB storage
- Minimum 4GB RAM
Setting up Elasticsearch
The base installation of Ubuntu does not come with Java, which is necessary for both Elasticsearch and Logstash to run, so you are going to have to install it. At the time of this post, both Java 9 AND Java 10 are unsupported (yeah – I have no idea why) so we will be installing the Java 8 package.
If you are using a previously created VM, you should first check to see if Java is installed.
$ java -version
If Oracle’s Java is not installed, or is not version 8, install it by first adding its repository:
$ sudo add-apt-repository ppa:webupd8team/java
Grab and Install the Java 8 package using this new repository:
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
Check your Java version again. You should now see that you are running Java 8.
With Java installed, we can now turn our attention to installing Elasticsearch. To install any of Elastic’s products, we first need to add their PGP signing key to our VM.
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo apt-get install apt-transport-https
Make sure your system is fully up-to-date and install Elasticsearch’s Debian package. At the time of this post, the most recent version of Elastic’s services is 6.3.2.
$ echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
$ sudo apt-get update && sudo apt-get install elasticsearch
When Elasticsearch is done installing, you will need to navigate to its directory to modify its configuration file, elasticsearch.yml. Occasionally when I attempt to navigate to this directory, I am denied permission. We will also need elevated privileges later in the post, so from here on I am going to work as the superuser, root.
$ sudo su
# cd /etc/elasticsearch
# nano /etc/elasticsearch/elasticsearch.yml
You should now be presented with the following window. Since I am a big fan of Mac, there may be some GUI differences; I am using SSH in the Mac Terminal while running my Ubuntu 18.04.1 virtual environment in VMware Fusion.
From here you have a couple of steps, some of which are optional while others are not:
- If you wish to name your cluster, as I have already done above, remove the ”#” from the beginning of the line with cluster.name and add your own custom name
- Take note of the path.logs variable; this is the path where all the log files associated with Elasticsearch will reside
- Navigate to the Network section, and look for network.host
  - Remove the ”#” from the beginning of the line
  - If your IP is static, you can put your current IP here. However, for the purpose of this practice stack, type “localhost”
- Exit your text editor (for nano, press Ctrl+X then Y to save)
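Taken together, the edited lines of elasticsearch.yml might look something like the fragment below. The cluster name is just an example; pick your own, and note that path.logs should already be set to the package default.

```yaml
# ---------------------------------- Cluster -----------------------------------
cluster.name: practice-elk

# ----------------------------------- Paths ------------------------------------
path.logs: /var/log/elasticsearch

# ---------------------------------- Network -----------------------------------
network.host: localhost
```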
Now that Elasticsearch is configured to how you want, start the service and confirm that it is running!
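On Ubuntu 16.04/18.04 the service is managed by systemd, so starting and verifying Elasticsearch looks roughly like the sketch below (the curl check assumes the default port, 9200; these commands are environment-dependent and won’t run outside the VM):

```shell
# Reload systemd units, start Elasticsearch, and enable it at boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service

# Confirm the service is active
sudo systemctl status elasticsearch.service

# Elasticsearch can take a little while to come up; once it has, it
# should answer on port 9200 with a JSON blob of cluster information
curl -X GET "http://localhost:9200"
```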
Setting up Kibana
Kibana is one of the quickest services to set up:
# apt-get update && sudo apt-get install kibana
# nano /etc/kibana/kibana.yml
From here, find server.host and remove the ”#” at the beginning of the line. Then, add localhost as your address. Your configuration file should look like mine below.
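With that edit in place, the relevant line of kibana.yml should look something like this (in the 6.x packages, Kibana’s connection to Elasticsearch defaults to http://localhost:9200, so nothing else needs to change for this single-node setup):

```yaml
# The address this Kibana server will bind to
server.host: "localhost"
```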
Start the Kibana service and check to ensure it is running. It’s as simple as that!
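As with Elasticsearch, a minimal sketch using systemd (Kibana listens on port 5601 by default; these commands only make sense on the VM itself):

```shell
# Start Kibana and enable it at boot
sudo systemctl enable kibana.service
sudo systemctl start kibana.service

# Confirm the service is active
sudo systemctl status kibana.service

# Once it is up, browse to http://localhost:5601
# or poke it from the shell
curl -I "http://localhost:5601"
```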
Setting up Logstash
Logstash is, in my opinion, one of the more complex of Elastic’s services. This is due to how much the service does, and the delicate role it plays in filtering the logs it receives from endpoints.
# apt-get update && sudo apt-get install logstash
Logstash’s main purpose is to filter the logs that you are ingesting. That may be a little hard without rules for these filters though, right? So let’s add a quick one right now.
Navigate to Logstash’s directory for filters, “conf.d,” and create a new file called “02-beats-input.conf” (thanks to @Cyb3rWard0g for the naming scheme). This will be your input filter for all logs that Logstash sees from Beats data shippers.
# cd /etc/logstash/conf.d
# nano 02-beats-input.conf
Enter exactly what you see in my window below in your new input filter. Make sure SSL is set to “false” as we have not implemented it, and be sure to save the file as you exit.
input {
beats {
port => 5044
ssl => false
}
}
Next, we need to create an output filter to correspond to our new input filter. Create a new file called 50-beats-output.conf in the same directory as your input filter.
# nano 50-beats-output.conf
Ensure that your output filter matches my window below, and that you save the file when you exit.
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "winlogbeat-%{+YYYY.MM}"
}
}
Now it is time to start up Logstash. If Logstash is already running, restart the service to ensure that the filters kick into place.
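A sketch of validating and (re)starting Logstash. The --config.test_and_exit flag asks Logstash to parse the pipeline configuration and report any errors without actually starting; the binary path below is the default location for the Debian package. These commands are environment-dependent and only run on the VM:

```shell
# Optional sanity check: parse the pipeline configs without starting
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

# Restart the service so the new filters kick in, and enable it at boot
sudo systemctl enable logstash.service
sudo systemctl restart logstash.service
sudo systemctl status logstash.service

# Once Beats agents begin shipping logs, winlogbeat-* indices
# should start appearing in Elasticsearch
curl "http://localhost:9200/_cat/indices?v"
```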
Congrats!
You now have a fully functional, basic ELK Stack waiting for logs to be sent over! You may have noticed that I mentioned Beats data shippers, but we never configured any. Beats are configured on the endpoints you wish to monitor, and I hope to cover that setup process in a later post.
Enough of listening to me though, go have fun! Be adventurous, look around at all your newly installed directories and play around to see what does what; that is what helps me to better understand how services interact and function on my servers.