We are going to set up a VM running Debian 9 and install Elasticsearch, Kibana and Logstash on it. We will then add the Beats templates to get ready-to-use dashboards.
Installing Elasticsearch [VM ELK]
Install Java:
sudo apt install default-jdk
Install Elasticsearch:
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.1.deb
sudo dpkg -i elasticsearch-6.4.1.deb
sudo nano /etc/elasticsearch/elasticsearch.yml
Uncomment the following lines and configure them as follows (IP_SERVER is your VM's address):
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: IP_SERVER
#
# Set a custom port for HTTP:
#
http.port: 9200
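These two edits can also be scripted with sed; a minimal sketch, shown here against a throwaway copy of the file (on the VM the target would be /etc/elasticsearch/elasticsearch.yml, and IP_SERVER is a placeholder for your server's address):

```shell
# Work on a temporary copy so the snippet is self-contained;
# on the VM, point sed at /etc/elasticsearch/elasticsearch.yml instead.
conf=$(mktemp)
printf '%s\n' '#network.host: 192.168.0.1' '#http.port: 9200' > "$conf"

# Uncomment and set the bind address and the HTTP port.
sed -i 's|^#network.host:.*|network.host: IP_SERVER|' "$conf"
sed -i 's|^#http.port:.*|http.port: 9200|' "$conf"

# Show the two effective settings.
grep -E '^(network\.host|http\.port):' "$conf"
```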
Start the service:
sudo /etc/init.d/elasticsearch start
Enable start at boot:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
Set the JVM heap size (Xms and Xmx should be equal, and at most half of the available RAM):
sudo nano /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g
Reboot:
sudo reboot
Test the connection to Elasticsearch:
curl http://IP_SERVER:9200
Start / stop:
sudo -i service elasticsearch start
sudo -i service elasticsearch stop
Debugging:
sudo journalctl -f
sudo journalctl --unit elasticsearch
Check that the service is running by browsing to http://192.168.0.34:9200
You should get a JSON response of this form (the exact values will differ):
{
"name" : "dHm71Q1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Nkkmbc3vQpajkKD4eQhZ8Q",
"version" : {
"number" : "6.4.1",
"build_hash" : "bd92e7f",
"build_date" : "2017-12-17T20:23:25.338Z",
"build_snapshot" : false,
"lucene_version" : "7.1.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
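The version number can be pulled out of that response in a script; a sketch that greps a stored copy of the reply (on the VM you would pipe `curl -s http://IP_SERVER:9200` instead of using the variable):

```shell
# Sample reply stored in a variable so the snippet is self-contained.
response='{ "version" : { "number" : "6.4.1" } }'

# Extract the "number" field without a JSON parser.
version=$(printf '%s' "$response" | grep -o '"number" *: *"[^"]*"' | cut -d'"' -f4)
echo "$version"
```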
Installing Kibana [VM ELK]
Install Kibana:
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-6.4.1-linux-x86_64.tar.gz
tar xzvf kibana-6.4.1-linux-x86_64.tar.gz
cd kibana-6.4.1-linux-x86_64/
Configuration:
sudo nano config/kibana.yml
Uncomment and configure as follows:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
server.host: "192.168.0.34"
elasticsearch.url: "http://192.168.0.34:9200"
kibana.index: ".kibana"
Start Kibana:
./bin/kibana
Enable start at boot (note: the tar.gz archive does not ship a kibana.service unit, so a unit file has to exist for this to work):
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
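A minimal unit file sketch, saved as /etc/systemd/system/kibana.service; the User and ExecStart values are assumptions based on the archive unpacked to /opt, adjust them to your layout:

```ini
[Unit]
Description=Kibana
After=network.target elasticsearch.service

[Service]
User=kibana
ExecStart=/opt/kibana-6.4.1-linux-x86_64/bin/kibana
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Run `sudo systemctl daemon-reload` after creating or editing the file.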
Installing Metricbeat [VM ELK]
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.4.1-amd64.deb
sudo dpkg -i metricbeat-6.4.1-amd64.deb
Configuration:
sudo nano /etc/metricbeat/metricbeat.yml
output.elasticsearch:
hosts: ["IP_SERV_ELASTICSEARCH:9200"]
# ...
setup.kibana:
host: "IP_SERV_KIBANA:5601"
Enable the system module, run the setup and start:
sudo metricbeat modules enable system
sudo metricbeat setup -e
sudo service metricbeat start
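After `metricbeat modules enable system`, the module configuration lives in /etc/metricbeat/modules.d/system.yml. A sketch of what the 6.x defaults look like (the exact metricsets and period may differ in your version):

```yaml
- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
```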
Add the System metrics dashboard in Kibana:
From the Kibana home screen > Add metric data > System metrics.
Installing Logstash [VM ELK]
curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-6.4.2.deb
sudo dpkg -i logstash-6.4.2.deb
Configuration:
sudo nano /etc/logstash/logstash.yml
http.host: "IP_SRV_ELK"
http.port: 9600-9700
Configure the pipeline:
sudo nano /etc/logstash/conf.d/logstash-poller.conf
Grok filter configuration for the system and Apache logs:
input {
beats {
port => 5044
host => "0.0.0.0"
}
}
filter {
if [fileset][module] == "system" {
if [fileset][name] == "auth" {
grok {
match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
pattern_definitions => {
"GREEDYMULTILINE"=> "(.|\n)*"
}
remove_field => "message"
}
date {
match => [ "[system][auth][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
geoip {
source => "[system][auth][ssh][ip]"
target => "[system][auth][ssh][geoip]"
}
}
else if [fileset][name] == "syslog" {
grok {
match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
remove_field => "message"
}
date {
match => [ "[system][syslog][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
if [fileset][module] == "apache2" {
if [fileset][name] == "access" {
grok {
match => { "message" => ["%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \[%{HTTPDATE:[apache2][access][time]}\] \"%{WORD:[apache2][access][method]} %{DATA:[apache2][access][url]} HTTP/%{NUMBER:[apache2][access][http_version]}\" %{NUMBER:[apache2][access][response_code]} %{NUMBER:[apache2][access][body_sent][bytes]}( \"%{DATA:[apache2][access][referrer]}\")?( \"%{DATA:[apache2][access][agent]}\")?",
"%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \\[%{HTTPDATE:[apache2][access][time]}\\] \"-\" %{NUMBER:[apache2][access][response_code]} -" ] }
remove_field => "message"
}
mutate {
add_field => { "read_timestamp" => "%{@timestamp}" }
}
date {
match => [ "[apache2][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
remove_field => "[apache2][access][time]"
}
useragent {
source => "[apache2][access][agent]"
target => "[apache2][access][user_agent]"
remove_field => "[apache2][access][agent]"
}
geoip {
source => "[apache2][access][remote_ip]"
target => "[apache2][access][geoip]"
}
}
else if [fileset][name] == "error" {
grok {
match => { "message" => ["\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{LOGLEVEL:[apache2][error][level]}\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message]}",
"\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{DATA:[apache2][error][module]}:%{LOGLEVEL:[apache2][error][level]}\] \[pid %{NUMBER:[apache2][error][pid]}(:tid %{NUMBER:[apache2][error][tid]})?\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message1]}" ] }
pattern_definitions => {
"APACHE_TIME" => "%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}"
}
remove_field => "message"
}
mutate {
rename => { "[apache2][error][message1]" => "[apache2][error][message]" }
}
date {
match => [ "[apache2][error][timestamp]", "EEE MMM dd H:m:s YYYY", "EEE MMM dd H:m:s.SSSSSS YYYY" ]
remove_field => "[apache2][error][timestamp]"
}
}
}
}
output {
elasticsearch {
hosts => "192.168.0.34:9200"
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
}
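The index option above yields one index per Beat, Beat version and day. A sketch of the resulting name, computed with the shell's date command (filebeat and 6.4.2 are example values standing in for the event metadata):

```shell
# Example metadata as Logstash would receive it from a Beat.
beat=filebeat
version=6.4.2

# %{+YYYY.MM.dd} formats the event date; date +%Y.%m.%d mirrors it for "today".
index="${beat}-${version}-$(date +%Y.%m.%d)"
echo "$index"
```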
Start / stop:
sudo service logstash start
sudo service logstash stop
Install Filebeat:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-amd64.deb
sudo dpkg -i filebeat-6.4.2-amd64.deb
sudo nano /etc/filebeat/filebeat.yml
Apache2 and system logs
Set enabled to true and add the system logs:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/*.log
output.elasticsearch:
hosts: ["IP_SRV_ELK:9200"]
# ...
setup.kibana:
host: "IP_SERV_KIBANA:5601"
Then in the modules section:
filebeat.config.modules:
# Glob pattern for configuration loading
path: /etc/filebeat/modules.d/*.yml
Enable the system module:
cd /etc/filebeat/modules.d
sudo mv system.yml.disabled system.yml
Start:
sudo service filebeat start
Load the dashboard templates:
sudo filebeat setup --dashboards
Add the Filebeat index in Kibana:
Management > Index Patterns > Create Index Pattern
Index pattern: filebeat-*
Time filter field: @timestamp
Add the Filebeat System logs dashboard:
Home > Add log data > System logs
Follow the installation steps.
On the ELK VM, install the ingest-user-agent plugin (then restart Elasticsearch so the plugin is loaded):
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
Restart Filebeat:
sudo service filebeat restart
Installing the agents on the hosts to monitor
Installing Metricbeat
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.4.1-amd64.deb
sudo dpkg -i metricbeat-6.4.1-amd64.deb
Configuration:
sudo nano /etc/metricbeat/metricbeat.yml
Comment out the elasticsearch and kibana output configuration lines:
#output.elasticsearch:
#hosts: ["IP_SERV_ELASTICSEARCH:9200"]
# ...
#setup.kibana:
#host: "IP_SERV_KIBANA:5601"
Uncomment and set the Logstash server IP:
output.logstash:
hosts: ["IP_SERV_ELK:5044"]
Start:
sudo service metricbeat start
For Apache monitoring with Metricbeat, enable the server-status page provided by mod_status:
/etc/apache2/mods-enabled/status.conf
<Location /server-status>
SetHandler server-status
Require local
Require ip IP_VM_ELK/24
</Location>
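On the monitored host, the matching Metricbeat side is the apache module (enable it with `sudo metricbeat modules enable apache`). A sketch of /etc/metricbeat/modules.d/apache.yml polling the status page:

```yaml
- module: apache
  metricsets: ["status"]
  period: 10s
  # The server-status page enabled above; adjust if Apache listens elsewhere.
  hosts: ["http://127.0.0.1"]
```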
Installing Filebeat on a monitored VM
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-amd64.deb
sudo dpkg -i filebeat-6.4.2-amd64.deb
sudo nano /etc/filebeat/filebeat.yml
Configuration (filebeat.prospectors was renamed filebeat.inputs in 6.3):
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/*.log
Comment out:
#output.elasticsearch:
#hosts: ["IP_SRV_ELK:9200"]
Uncomment and set the Logstash server IP:
output.logstash:
hosts: ["IP_SRV_ELK:5044"]
In the modules section:
# Glob pattern for configuration loading
path: /etc/filebeat/modules.d/*.yml
Enable the system and apache2 modules:
cd /etc/filebeat/modules.d/
sudo mv apache2.yml.disabled apache2.yml
sudo mv system.yml.disabled system.yml
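Each module can be tuned in its own file. A sketch of /etc/filebeat/modules.d/apache2.yml with explicit log paths (the var.paths lines are optional; when omitted, the module looks in the default locations):

```yaml
- module: apache2
  access:
    enabled: true
    var.paths: ["/var/log/apache2/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/apache2/error.log*"]
```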
Start:
sudo service filebeat start
Test the configuration (the .deb package puts filebeat on the PATH):
sudo filebeat -c /etc/filebeat/filebeat.yml test config