

Overview

Build and operate a log management system based on the ELK Stack.

Elasticsearch is a Java-based open-source distributed search engine built on Apache Lucene. Elasticsearch makes the Lucene library usable on its own, and can store, search, and analyze huge volumes of data quickly and in near real time (NRT).

[Diagram: www.hongsnet.net LOG Management (ELK stack architecture)]

Components

  • Virtual Machine (CentOS 7, Clustered Env)
  • ELK Stack (VM, Dockerized)
  • Elasticsearch
  • Logstash
    • Filebeat
  • Kibana

ELK Stack Installation

  • Elasticsearch

Searches and aggregates the data received from Logstash so that the desired information can be obtained.

# rpm -ivh http://pds.hongsnet.net:8888/packages/elasticsearch-7.9.3-x86_64.rpm
warning: elasticsearch-7.7.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                       ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:7.7.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore

The default port is 9200/tcp. If you need to change it, edit the following settings:

# cat /etc/elasticsearch/elasticsearch.yml
...(snip)
network.host: ["localhost"]
...(snip)
http.port: 9999

To finish, start the elasticsearch.service daemon.

# systemctl start elasticsearch.service; systemctl enable elasticsearch.service

To verify that it is running properly, do the following:

# curl http://localhost:9200
{
  "name" : "elk.hongsnet.net",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "SenwWrKLQ9--JBCJz6eolA",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
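For scripting, the version number can be pulled out of that JSON response; a small sketch with GNU grep (the sample JSON is inlined here, so against a live node you would pipe `curl -s localhost:9200` instead):

```shell
# Extract version.number from the root-endpoint response
# (sample JSON inlined; requires GNU grep with PCRE support, i.e. grep -P)
json='{"name":"elk.hongsnet.net","version":{"number":"7.7.0"}}'
ver=$(printf '%s' "$json" | grep -oP '"number"\s*:\s*"\K[^"]+')
echo "elasticsearch version: $ver"   # prints: elasticsearch version: 7.7.0
```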
  • Kibana

Visualizes and monitors data through Elasticsearch's fast search.

# rpm -ivh http://pds.hongsnet.net:8888/packages/kibana-7.9.3-x86_64.rpm
warning: kibana-7.7.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                       ################################# [100%]
Updating / installing...
   1:kibana-7.7.0-1                   ################################# [100%]

If you want to change the listen address and port, do the following:

# cat /etc/kibana/kibana.yml |grep 61.100
server.host: "61.100.0.145"
elasticsearch.hosts: ["http://61.100.0.145:9999"]

# cat /etc/kibana/kibana.yml |grep 5666
server.port: 5666    # default: 5601

To finish, start and enable the kibana.service daemon.

# systemctl restart kibana.service; systemctl enable kibana.service
  • Logstash

Collects, aggregates, and parses log or transaction data from various sources (DB, syslog, CSV, etc.) and forwards it to Elasticsearch.

First, a JDK must be installed on the system, as shown below.

# java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
# rpm -ivh http://pds.hongsnet.net:8888/packages/logstash-7.9.3.rpm

The following is the Logstash configuration file used by Hongsnet.

# cat /etc/logstash/conf.d/first-pipeline.conf
input {
  beats {
    port => 5444
    host => "0.0.0.0"
    client_inactivity_timeout => "1200"
    type => "filebeats"
  }

  udp {
    port => 514
    host => "0.0.0.0"
    type => "syslog"
  }
}

filter {

  if [type] == "messages" {

    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }

  } else if [type] == "syslog" {

    if "Received disconnect" in [message] {
       drop { }
    } else if "Disconnected from" in [message] {
       drop { }
    } else if "Removed slice User Slice of" in [message] {
       drop { }
    } else if "NetworkDB stats" in [message] {
       drop { }
    } else if "Created slice User Slice" in [message] {
       drop { }
    } else if "Started Session" in [message] {
       drop { }
    } else if "Removed slice User Slice" in [message] {
       drop { }
    } else if "authentication failure" in [message] {
       drop { }
    } else if "reverse mapping checking getaddrinfo for" in [message] {
       drop { }
    } else if "not met by user" in [message] {
       drop { }
    } else if "Removed session" in [message] {
       drop { }
    } else if "Invalid user" in [message] {
       drop { }
    } else if "invalid user" in [message] {
       drop { }
    }

    grok {
      add_tag => [ "sshd_fail" ]
      match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IPORHOST:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
    }

    grok {
      add_tag => [ "sshd_fail2" ]
      match => { "message" => "Failed %{WORD:sshd_auth_type} for invalid user %{USERNAME:sshd_invalid_user} from %{IPORHOST:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
    }

    grok {
      add_tag => [ "sshd_accept" ]
      match => { "message" => "%{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: Accepted password for %{USERNAME:sshd_invalid_user} from %{IPORHOST:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
    }

    mutate {
      convert => {"geoip.city_name" => "string"}
    }
    geoip {
      source => "sshd_client_ip"
    }


  } else if [type] == "filebeats" {

    if "ZABBIXDB" in [message] {
       drop { }
    } else if "ZABBIX_DEMO" in [message] {
       drop { }
    } else if "zabbix" in [message] {
       drop { }
    }

    grok {
      add_tag => [ "db_conn" ]
      match => { "message" => "%{YEAR:year}%{MONTHNUM:month}%{MONTHDAY:day} %{TIME:time},%{GREEDYDATA:host},%{GREEDYDATA:username},%{GREEDYDATA:client_hostname},%{INT:connection_id},%{INT:query_id},%{GREEDYDATA:operation},%{GREEDYDATA:schema},%{GREEDYDATA:object},%{INT:return_code}" }
    }

    grok {
        match => [ "message", "^# User@Host: %{USER:query_user}(?:\[[^\]]+\])?\s+@\s+%{HOSTNAME:query_host}?\s+\[%{IP:query_ip}?\]" ]
    }

    grok {
        match => [ "message", "^# Thread_id: %{NUMBER:thread_id:int}\s+Schema: %{USER:schema}\s+Last_errno: %{NUMBER:last_errno:int}\s+Killed: %{NUMBER:killed:int}"]
    }

    grok {
        match => [ "message", "^# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time}\s+ Rows_sent: %{NUMBER:rows_sent:int} \s+Rows_examined: %{NUMBER:rows_examined:int}\s+Rows_affected: %{NUMBER:rows_affected:int}\s+Rows_read: %{NUMBER:rows_read:int}"]
    }

    grok {  match => [ "message", "^# Bytes_sent: %{NUMBER:bytes_sent:float}"]   }
    grok {  match => [ "message", "^SET timestamp=%{NUMBER:timestamp}" ]      }
    grok {  match => [ "message", "^SET timestamp=%{NUMBER};\s+%{GREEDYDATA:query}" ]   }
    date {  match => [ "timestamp", "UNIX" ]  }

    mutate {
        remove_field => "timestamp"
    }

  }

}

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => "172.16.0.228:9200"
      manage_template => false
      index => "rsyslog-%{+YYYY.MM.dd}"
    }
  } else if [type] == "filebeats" {
    elasticsearch {
      hosts => "172.16.0.228:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
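The sshd_fail grok pattern above can be sanity-checked outside Logstash; a rough shell equivalent (the log line below is a made-up example, and for "invalid user" lines the sshd_fail2 pattern applies instead) extracts the same client IP and user:

```shell
# Hypothetical sshd log line in the shape the sshd_fail grok pattern expects
msg='Failed password for root from 203.0.113.7 port 51422 ssh2'

# Rough equivalents of %{USERNAME:sshd_invalid_user} and %{IPORHOST:sshd_client_ip}
# (GNU grep -P; \K discards the part of the match before it)
user=$(printf '%s\n' "$msg" | grep -oP 'Failed \w+ for \K\S+')
ip=$(printf '%s\n' "$msg" | grep -oP ' from \K[\d.]+')
echo "user=$user ip=$ip"   # prints: user=root ip=203.0.113.7
```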

Note: For the Hongsnet WEB log status, refer to the logstash file in the WEB/ directory.

To finish, start and enable the logstash.service daemon.

# systemctl restart logstash.service; systemctl enable logstash.service
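Since the output section writes syslog events to daily indices (rsyslog-%{+YYYY.MM.dd}), the current index name can be derived in shell for ad-hoc checks; the curl line is an example, not part of the pipeline:

```shell
# Daily index name as produced by the rsyslog-%{+YYYY.MM.dd} pattern
# (Logstash formats the event @timestamp in UTC, so date -u mirrors that)
idx="rsyslog-$(date -u +%Y.%m.%d)"
echo "$idx"
# e.g. ad-hoc document count for today's index:
#   curl "localhost:9200/${idx}/_count"
```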
  • Final verification

You can verify the result with the curl command, as follows.

# curl localhost:9200/_cat/indices?v
health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link          MUv1k_7kQd25iCFSpR8aEw   1   0          0            0       208b           208b
yellow open   filebeat-7.7.0-2020.05.14 Xb6sMlYSRwCTpV8PBUEt9g   1   1       6640            0      3.7mb          3.7mb
green  open   .kibana_task_manager_1    _DlHKSSXSsmdb254H4UTyg   1   0          5            0       38kb           38kb
green  open   .apm-agent-configuration  1bTCtA0wQEOgDkGPxhm3Lw   1   0          0            0       208b           208b
green  open   .kibana_1                 HFt3nVQhRaWq8bPkBJcZFg   1   0          9            0     46.3kb         46.3kb
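The _cat/indices output is whitespace-delimited (health in column 1, index name in column 3), so it is easy to filter with awk; a sketch against a sample of the output above (against a live node you would pipe `curl -s localhost:9200/_cat/indices` instead):

```shell
# Print the names of all non-green indices from a captured _cat/indices listing
awk '$1 != "green" { print $3 }' <<'EOF'
green  open   .kibana_1                 HFt3nVQhRaWq8bPkBJcZFg   1   0     9 0 46.3kb 46.3kb
yellow open   filebeat-7.7.0-2020.05.14 Xb6sMlYSRwCTpV8PBUEt9g   1   1  6640 0  3.7mb  3.7mb
EOF
# prints: filebeat-7.7.0-2020.05.14
```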
  • Accessing Kibana

http://<server-address>:5601

Extras

For a standalone Docker-based deployment rather than the VM environment above, refer to the following Dockerfile.

# cat Dockerfile
FROM registry.hongsnet.net/joohan.hong/docker/centos:7.6.1810
MAINTAINER Hongs <master@hongsnet.net>

ENV TZ=Asia/Seoul
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone &&\
    yum -y install epel-release

ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

RUN yum -y install glibc glibc-common openssl-devel net-tools vi vim iproute wget postfix cronie crontabs java supervisor rsyslog httpd-tools

RUN rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

RUN yum install -y nginx
RUN chown -R nobody:nobody /usr/share/nginx

COPY config/rsyslog.conf /etc/rsyslog.conf
COPY config/nginx.conf /etc/nginx/nginx.conf
COPY config/default.conf /etc/nginx/conf.d/default.conf
COPY config/supervisord.conf /etc/supervisor/supervisord.conf
COPY config/supervisord_elk.conf /etc/supervisor/conf.d/supervisord_elk.conf
COPY config/htpasswd /etc/nginx/conf.d/.htpasswd

RUN rpm -ivh http://pds.hongsnet.net:8888/packages/elasticsearch-7.9.3-x86_64.rpm
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/kibana-7.9.3-x86_64.rpm
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/logstash-7.9.3.rpm

COPY config/logstash_test.conf /etc/logstash/conf.d/logstash_test.conf
COPY config/kibana.yml /usr/share/kibana/config/kibana.yml

RUN chown kibana.kibana /usr/share/kibana/config/kibana.yml
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/filebeat-7.9.3-x86_64.rpm

EXPOSE 443
EXPOSE 514/udp

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
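A typical build-and-run sequence for this image might look like the following; the image tag and container name are examples, not from the repo, and the published ports mirror the EXPOSE lines above (443/tcp for the nginx front end, 514/udp for syslog input):

```shell
# Build the image from the Dockerfile above (tag is an example)
docker build -t hongsnet/elk:latest .

# Run it, publishing the exposed ports
docker run -d --name elk \
  -p 443:443 \
  -p 514:514/udp \
  hongsnet/elk:latest
```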