This is a multi-part Elasticsearch tutorial where we will cover all the related topics on the ELK Stack using Elasticsearch 7.5:
- Install and Configure Elasticsearch Cluster 7.5 with 3 Nodes
- Enable HTTPS and Configure SSL/TLS to secure Elasticsearch Cluster
- Install and Configure Kibana 7.5 with SSL/TLS for Elasticsearch Cluster
- Configure Metricbeat 7.5 to monitor Elasticsearch Cluster Setup over HTTPS
- Install and Configure Logstash 7.5 with Elasticsearch
Configure SSL/TLS encryption
When Elasticsearch security is enabled for a cluster that is running with a basic or production license, the use of TLS/SSL for transport communications is obligatory, so you must configure SSL/TLS encryption. Additionally, once security has been enabled to secure Elasticsearch, all communications to the Elasticsearch cluster must be authenticated, including communications from Kibana and/or application servers.
Elasticsearch has two levels of communications:
- Transport Communications: The transport protocol is used for internal communications between Elasticsearch nodes.
- HTTP Communications: The HTTP protocol is used for communications from clients to the Elasticsearch cluster.
Elasticsearch comes with a utility called elasticsearch-certutil
that can be used to generate self-signed certificates to secure Elasticsearch by encrypting internal communications within an Elasticsearch cluster.
Create input yml file
We will use a YML file as input to generate the self-signed certificates needed to enable the HTTPS configuration and secure Elasticsearch. You can add more nodes based on your environment.
[root@server1 ~]# cat /tmp/instance.yml
instances:
  - name: 'server1'
    dns: [ 'server1.example.com' ]
    ip: [ '192.168.0.11' ]
  - name: "server2"
    dns: [ 'server2.example.com' ]
    ip: [ '192.168.0.12' ]
  - name: 'server3'
    dns: [ 'server3.example.com' ]
    ip: [ '192.168.0.13' ]
  - name: 'centos-8'
    dns: [ 'centos-8.example.com' ]
    ip: [ '192.168.0.14' ]
Generate self-signed certificates
The elasticsearch-certutil command simplifies the process of generating self-signed certificates for the Elastic Stack to enable the HTTPS configuration and secure Elasticsearch. It takes care of generating a CA and signing certificates with that CA.
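For reference, the same certificates can also be produced in two separate steps instead of the single command used below: first create a CA, then sign the node certificates against it. This is only a hedged sketch of that alternative workflow; the CA file name elastic-stack-ca.p12 is the tool's default and the other paths simply mirror the ones used in this article.

# step 1: create a CA (written to elastic-stack-ca.p12 by default)
bin/elasticsearch-certutil ca

# step 2: sign one certificate per instance listed in /tmp/instance.yml using that CA
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem --in /tmp/instance.yml --out /tmp/certs/certs.zip

In this article we instead use cert with --keep-ca-key so that the CA and the node certificates are generated in one go and the CA key is retained for signing additional certificates later.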
Navigate to /usr/share/elasticsearch/
where all the Elasticsearch tools are located:
[root@server3 ~]# cd /usr/share/elasticsearch/
Here we will use elasticsearch-certutil
to generate our own self-signed certificates to secure Elasticsearch. We will store these certificates under /tmp/certs
. If the output directory does not exist, the elasticsearch-certutil
tool will create it.
[root@server3 elasticsearch]# bin/elasticsearch-certutil cert --keep-ca-key ca --pem --in /tmp/instance.yml --out /tmp/certs/certs.zip
<Output trimmed>

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Directory /tmp/certs does not exist. Do you want to create it? [Y/n]Y

Certificates written to /tmp/certs/certs.zip
<Output trimmed>
Next navigate inside the output directory /tmp/certs
[root@server3 elasticsearch]# cd /tmp/certs/

[root@server3 certs]# ls
certs.zip
Extract the certificates. You will need the unzip utility to extract the certificate files.
[root@server1 certs]# unzip certs.zip
Archive:  certs.zip
   creating: ca/
  inflating: ca/ca.crt
  inflating: ca/ca.key
   creating: server1/
  inflating: server1/server1.crt
  inflating: server1/server1.key
   creating: server2/
  inflating: server2/server2.crt
  inflating: server2/server2.key
   creating: server3/
  inflating: server3/server3.crt
  inflating: server3/server3.key
   creating: centos-8/
  inflating: centos-8/centos-8.crt
  inflating: centos-8/centos-8.key
Place the certificates
Next, to enable the HTTPS configuration, we will create a certs
directory inside /etc/elasticsearch/
on all the cluster nodes (and inside /etc/kibana/ on the Kibana node) to store the self-signed certificates:
[root@server1 ~]# mkdir -p /etc/elasticsearch/certs
[root@server2 ~]# mkdir -p /etc/elasticsearch/certs
[root@server3 ~]# mkdir -p /etc/elasticsearch/certs
[root@centos-8 ~]# mkdir -p /etc/kibana/certs
Copy the applicable certificate files to the /etc/elasticsearch/certs
directory on the local host, which in our case is server1:
[root@server1 ~]# cp /tmp/certs/ca/ca.crt /tmp/certs/server1/* /etc/elasticsearch/certs
Verify the list of files and permissions on these certificate files
[root@server1 certs]# ls -l /etc/elasticsearch/certs
total 20
-rw-r--r--. 1 root elasticsearch 1200 Dec 24 22:25 ca.crt
-rw-r--r--. 1 root elasticsearch 1196 Dec 24 22:24 server1.crt
-rw-r--r--. 1 root elasticsearch 1675 Dec 24 22:24 server1.key
Next, copy these certificates to all the Elasticsearch cluster nodes into the same location under /etc/elasticsearch/certs
, and to /etc/kibana/certs
on centos-8:
[root@server1 ~]# scp -r /tmp/certs/ca/ca.crt /tmp/certs/server2/* server2:/etc/elasticsearch/certs/
[root@server1 ~]# scp -r /tmp/certs/ca/ca.crt /tmp/certs/server3/* server3:/etc/elasticsearch/certs/
[root@server1 ~]# scp -r /tmp/certs/ca/ca.crt /tmp/certs/centos-8/centos-8.* centos-8:/etc/kibana/certs/
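After copying, make sure the certificate files on each node are readable by the service that will use them. This is only a hedged example, assuming the elasticsearch and kibana system users/groups created by the RPM packages; adjust ownership and modes to match your own security policy.

[root@server2 ~]# chown -R root:elasticsearch /etc/elasticsearch/certs
[root@server2 ~]# chmod 640 /etc/elasticsearch/certs/*

[root@centos-8 ~]# chown -R root:kibana /etc/kibana/certs
[root@centos-8 ~]# chmod 640 /etc/kibana/certs/*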
Enable authentication to secure Elasticsearch
Set xpack.security.enabled
to true
in elasticsearch.yml
on all the Elasticsearch cluster nodes to secure Elasticsearch and enforce user authentication for every request:
xpack.security.enabled: true
With this in place, to gain access to restricted resources a user must prove their identity via passwords, credentials, or some other means (typically referred to as authentication tokens). The Elastic Stack authenticates users by identifying the users behind the requests that hit the cluster and verifying that they are who they claim to be. The authentication process is handled by one or more authentication services called realms.
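You do not need to define any realm explicitly for this tutorial, because the native realm (which backs the built-in users we configure later) and the file realm are available by default. Purely as an illustrative, hedged sketch of what an explicit realm definition looks like in elasticsearch.yml on 7.x, where the realm names native1 and file1 are arbitrary examples of my own:

xpack.security.authc.realms.native.native1.order: 0
xpack.security.authc.realms.file.file1.order: 1

Realms with lower order values are consulted first when Elasticsearch authenticates a request.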
Enable SSL/TLS to encrypt communication between cluster nodes
The transport protocol is used for communication between the nodes of the Elasticsearch cluster. Because each node in an Elasticsearch cluster is both a client and a server to other nodes in the cluster, all transport certificates must be both client and server certificates.
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/server1.key
xpack.security.transport.ssl.certificate: certs/server1.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/ca.crt" ]
Add these settings to elasticsearch.yml
on all the cluster nodes, changing the certificate path and file names accordingly.
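For example, on server2 the same block would reference server2's own certificate and key from the archive we extracted earlier (shown here only to make the per-node change explicit):

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/server2.key
xpack.security.transport.ssl.certificate: certs/server2.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/ca.crt" ]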
Enable HTTPS configuration to encrypt HTTP Client Communications
When security features are enabled, you can optionally use TLS to enable HTTPS configuration and to ensure that communication between HTTP clients and the cluster is encrypted.
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/server1.key
xpack.security.http.ssl.certificate: certs/server1.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
Again, add these settings to elasticsearch.yml
on all the cluster nodes, changing the certificate path and file names accordingly.
Restart Elasticsearch Cluster services
You must perform a full cluster restart to enable the HTTPS configuration and secure the Elasticsearch cluster. Nodes which are configured to use TLS cannot communicate with nodes that are using unencrypted networking (and vice versa), so after enabling TLS you must restart all nodes in order to maintain communication across the cluster.
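A minimal sketch of the restart, assuming Elasticsearch was installed from the RPM packages and is managed by systemd as in the earlier parts of this tutorial; stop the service on every node before starting them again so that no TLS-enabled node tries to talk to a plain-text node:

[root@server1 ~]# systemctl stop elasticsearch     # repeat on server2 and server3
[root@server1 ~]# systemctl start elasticsearch    # repeat on server2 and server3
[root@server1 ~]# systemctl status elasticsearch   # verify the service is active on each node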
Check Cluster Status
Now let us try to check the cluster status using an API request over HTTPS:
[root@server1 ~]# curl --cacert /etc/elasticsearch/certs/ca.crt -XGET https://server1.example.com:9200/_cat/nodes?pretty
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication credentials for REST request [/_cat/nodes?pretty]",
        "header" : {
          "WWW-Authenticate" : [
            "Bearer realm=\"security\"",
            "ApiKey",
            "Basic realm=\"security\" charset=\"UTF-8\""
          ]
        }
      }
    ],
<Output trimmed>
Here I have used --cacert
to avoid any certificate-related warning, as we are using a self-signed certificate. You can omit the --cacert
option if you are using a certificate from a registered CA. As you can see, the cluster API request fails due to missing authentication. Since we have enabled the security features to secure Elasticsearch, we will use the username and password of either a built-in user or a file-based user to authorise the API request. There are other methods we could use to authenticate the request, but those are beyond the scope of this article.
Change built-in user's password
By default we do not know the passwords of the built-in users, so we will change the password of all the built-in users using elasticsearch-setup-passwords
. This tool is available under /usr/share/elasticsearch
. Navigate to this path and execute the command bin/elasticsearch-setup-passwords auto
. This will generate random passwords for the various internal stack users.
Alternatively, you can replace the auto
parameter with the interactive
parameter and define the passwords manually. Keep track of these passwords; we will need them again soon.
[root@server1 ~]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Here I have assigned custom passwords to all the built-in users.
Check elasticsearch cluster health status
Now I can use the elastic
user to check the cluster health:
[root@server1 ~]# curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic -XGET https://server1.example.com:9200/_cat/nodes?pretty
Enter host password for user 'elastic':
192.168.0.12  8 96 0 1.00 1.01 1.05 dim - server2
192.168.0.13 21 93 2 1.02 1.03 1.05 dim * server3
192.168.0.11 29 83 1 1.00 1.01 1.05 dim - server1
As you can see, we were able to enable the HTTPS configuration using a self-signed certificate to secure the Elasticsearch cluster.
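As an additional, optional verification step (any authenticated user with sufficient privileges would work; here I simply reuse the elastic user), the cluster health endpoint can be queried over HTTPS in the same way:

[root@server1 ~]# curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic -XGET https://server1.example.com:9200/_cluster/health?pretty

For this three-node setup you would expect the JSON response to report "number_of_nodes" : 3, and "status" : "green" once all shards are allocated.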
Troubleshoot error messages
Below are some of the error scenarios I faced while trying to configure the ELK Stack.
Error: failed to initialize SSL TrustManager
Caused by: org.elasticsearch.ElasticsearchException: failed to initialize SSL TrustManager - access to read truststore file [/some/path/certs/elastic-stack-ca.p12] is blocked; SSL resources should be placed in the [/etc/elasticsearch] directory
Explanation:
As per the official Elastic guide we can place the certificates in any location to enable the HTTPS configuration and secure Elasticsearch, but for some reason with Elasticsearch 7.5 I was getting the above error when I placed the certificates under my home folder.
Solution:
The recommendation here is to place the certificates under /etc/elasticsearch
. You can create a certs
directory there and place the certificates in it.
Error: failed to decrypt safe contents entry
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
Explanation:
If you have provided a password for the CA and the other certificates, then those passwords need to be added to the respective keystore. Since we are configuring SSL/TLS for the Elasticsearch cluster, we must add those passwords to the Elasticsearch keystore.
How to Fix:
You can follow this guide for more details on adding the password to the Elasticsearch keystore for the PEM and PKCS#12 formats.
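As a hedged example for Elasticsearch 7.x with PEM certificates (the format used in this article), the private-key passphrases are stored as secure settings in the Elasticsearch keystore; for PKCS#12 certificates the corresponding setting would instead be xpack.security.transport.ssl.keystore.secure_password:

[root@server1 ~]# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase
[root@server1 ~]# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.secure_key_passphrase

Each command prompts for the passphrase; repeat this on every node that uses a password-protected key.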
Lastly, I hope the steps from this article to enable SSL and the HTTPS configuration using a self-signed certificate for encrypted communication to secure an Elasticsearch cluster on Linux were helpful. Let me know your suggestions and feedback using the comment section.
Hi, I'm using ES 7.12. When I set
xpack.security.enabled: true
, I can't start Elasticsearch, and then I can't run
./bin/elasticsearch-setup-passwords auto/interactive
Are you getting any error on the console or in the log files?
Hi,
I am using all OSS licenses for Metricbeat, Logstash and Elasticsearch 7.10.
I want to set up SSL-based communication between 1) Metricbeat -> Logstash, 2) Logstash -> Elasticsearch
Can you please let me know if this is possible to set up with the OSS licenses?
Regards,
Arpit
Sorry, but I am also not familiar with this; you can ask in the official forum.
Can you please share your elasticsearch.yml file?
For transport we also need to mention the port.
It is provided in the first part of this tutorial
Dear Admin,
With pkcs8 it works well. Many thanks again.
Best Regards,
Dan
I am glad it worked.
Dear Admin,
You are right, thank you.
When I created the certs for Filebeat:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem --ca-cert /etc/elasticsearch/certs/ca.crt --ca-key /etc/elasticsearch/certs/ca.key --in /tmp/filebeats.yml --out /tmp/certs/filebeats.zip
Then filebeat cannot connect to Logstash:
ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://elk-node-01:5044)): dial tcp 10.0.10.11:5044: connect: connection refused
filebeat.yml:
output.logstash:
  hosts: ["elk-node-01:5044", "elk-node-02:5044", "elk-node-03:5044"]
  loadbalance: true
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/client.crt"
  ssl.key: "/etc/filebeat/certs/client.key"
In Logstash configuration there are certs used for communication between ELK nodes.
Logstash conf looks like:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/elk-node-01.crt"
    ssl_key => "/etc/logstash/certs/elk-node-01.key"
    ssl_verify_mode => "force_peer"
  }
}
Logstash shows this error:
elk-node-01 logstash: [2020-07-17T12:26:27,320][ERROR][logstash.javapipeline ][main] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>java.lang.IllegalArgumentException:
File does not contain valid private key: /etc/logstash/certs/elk-node-01.key, :backtrace=>["io.netty.handler.ssl.SslContextBuilder.keyManager(io/netty/handler/ssl/SslContextBuilder.java:270)", "io.netty.handler.ssl.SslContextBuilder.forServer(io/netty/handler/ssl/SslContextBuilder.java:90)", "org.logstash.netty.SslContextBuilder.buildContext(org/logstash/netty/SslContextBuilder.java:104)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
Do you have any idea what is wrong?
Many Thanks.
Regards,
Dan
As per the official documentation, Filebeat accepts the key in PKCS8 format. Can you try converting your key to PKCS8 format using the command below?
openssl pkcs8 -in elk-node-01.key -topk8 -out elk-node-01-pkcs8.key -nocrypt
Dear Admin,
I would like to generate the certs for the filebeats service, and when I try to do this I get an error message. There is no password for
ca.crt
. Do you have any idea what is wrong? The CA was generated with the
--keep-ca-key
option.
Best Regards,
Daniel
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca
only accepts a certificate in PKCS#12 format; I assume you are providing the certificate in PEM format. In that case, to generate the certificates you must use elasticsearch-certutil cert --ca-cert <your ca.crt in PEM format> --ca-key <your ca.key>
Or you can convert your certificate to PKCS#12 format and then use the existing command.
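As a rough, hedged sketch of that conversion (assuming you have both the ca.crt and ca.key files in PEM format; openssl will prompt for an export password, which you would then provide to elasticsearch-certutil when it asks for the CA password):

openssl pkcs12 -export -in ca.crt -inkey ca.key -out elastic-stack-ca.p12 -name ca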
Dear Admin,
You are right. It’s works. Thank you.
Regards,
Dan
Dear Admin,
Can we use an external CA to generate the keys to secure ELK? Is Elasticsearch able to communicate with an external CA?
Best Regards,
Dan
You can use
elasticsearch-certutil cert --ca <your_CA>
to generate the node certificates and private keys.
Dear Admin,
First of all, a big thank you for the amazing article.
I set up a 12-node cluster with security. I followed the steps to generate self-signed certificates. Now I have: 1 ca.crt, 12 node certs (server1.crt, server2.crt, etc.) and 12 node keys (server1.key, server2.key, etc.).
Now I want to add a 13th node to my cluster. How do I generate a certificate and key for the new node (server13) using the CA certificate?
I tried generating the certificate and key using the following command but it seems I need to provide the ca-key which I did not generate in the initial step.
$ ./bin/elasticsearch-certutil cert --ca-cert /tmp/certs/ca.crt -pem
ERROR: Missing required option(s) [ca-key]
Sorry, I am really not a security expert and I don't understand much about certificates, but is it possible to generate a cert/key pair using only the ca.crt?
Thanks Aveek for your kind words.
Unfortunately I missed using
--keep-ca-key
while generating the server certificates, so now you can't get the CA key which was used to sign the server certificates. We always need both the CA key and the CA cert to sign a certificate request, so you will have to create another CA certificate. The same is also explained here:
https://discuss.elastic.co/t/add-new-node-in-es-cluster/217529/3
I have taken the learning from your comment and have updated the article to also use
--keep-ca-key
while generating certificates. You can learn more about CA certificates and generating server certificates here, where I have tried to explain it in very simple terms:
Steps to create your own CA certificate to issue server certificates using openssl in Linux
Apologies from my side.
Thank you Admin for your response. I also generated the passwords for the built-in users and set up other components like Logstash, Kibana, Beats and APM Server to communicate with Elasticsearch using those users.
If I now generate a new set of certificates for the Elasticsearch cluster, will I also need to set up the passwords for the built-in users again, or can I continue to use the old ones? Basically, I am trying to preserve the data on the cluster without having to tear down the whole cluster and build it again.
Thanks in advance.
I don't think so; you can create a new set of server and CA certificates and use them for the new host in the cluster.
In the end the local server will always authenticate against the CA certificate you have provided locally, so it should be ok.
You can also raise a question on the Elasticsearch team's forum to be sure about it.
Thank you so much for your help. I generated a new set of server and CA certificates and replaced the old ones. It worked after a full cluster restart.
I am glad it worked 🙂
Hi,
I'm looking for a free-of-charge solution for authorization in ELK 7.6.2. Is the solution described in this article really free? I'm asking because the configuration uses options like xpack.security.enabled, xpack.security.transport.ssl.*, and xpack.security.http.ssl.*.
Where can I find an exact description of X-Pack which explains what is paid and what is free of charge?
BTW Your articles are super.
Thanks
Best Regards,
Dan
Thank you Dan for the feedback. I was using the basic license, which is the free version, while writing these articles. Now even with the free license, Elasticsearch explicitly requires you to configure SSL.
You can always check your license using
https://www.elastic.co/guide/en/elasticsearch/reference/current/get-license.html
and the different types of subscriptions available
https://www.elastic.co/subscriptions
Dear Admin,
One more question, if I may. Is the HTTPS communication for HTTP clients also free of charge?
Thank you a lot.
Best Regards,
Daniel
Sorry, I did not understand your question. In server-client HTTPS communication, the client needs access to the CA certificate and the client certificates, but those should be created along with the server certificates, so I don't think the client needs to pay anything extra.
Dear Admin,
Sorry for my stupid question, I think that now everything it’s clear for me. Thank you.
Best Regards,
Dan
Hi, are the above configurations valid for the ELK basic license as well? I was trying the same in this version but got issues with licensing, so I just want a quick confirmation please.
Hello, yes, I was also using the basic license while writing these articles.