[![logo](https://www.hongsnet.net/images/logo.gif)](https://www.hongsnet.net)
# Overview
This document describes how to build and operate a log management system based on the ELK Stack.

**Elasticsearch** is an open-source, Java-based distributed search engine built on Apache Lucene. It exposes the Lucene library as a ready-to-use service and can store, search, and analyze very large volumes of data quickly and in near real time (NRT).
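As a quick illustration of this near-real-time behavior, a document can be indexed and immediately searched back with curl. This is only a hedged sketch assuming an Elasticsearch node is already listening on localhost:9200; the index name `test-index` is an arbitrary example.
```bash
# Index a sample document (refresh=true makes it searchable immediately)
curl -s -X POST 'http://localhost:9200/test-index/_doc?refresh=true' \
     -H 'Content-Type: application/json' \
     -d '{"message": "hello elk", "@timestamp": "2020-05-14T00:00:00Z"}'

# Search for it right away
curl -s 'http://localhost:9200/test-index/_search?q=message:hello&pretty'
```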
# www.hongsnet.net LOG Management
![elk_stack](./images/elk-stack.png)
# Components
* Virtual Machine (CentOS 7, Clustered Env)
* ELK Stack (VM, Dockerized)
  - **E**lasticsearch
  - **L**ogstash
  - Filebeat
  - **K**ibana
## ELK Stack Installation
- **Elasticsearch**
> Searches and aggregates the data received from Logstash, so that the desired information can be extracted.
```bash
# rpm -ivh http://pds.hongsnet.net:8888/packages/elasticsearch-7.9.3-x86_64.rpm
warning: elasticsearch-7.7.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:7.7.0-1 ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore
```
The default port is 9200/tcp; if it needs to be changed, edit the configuration as follows.
```bash
# cat /etc/elasticsearch/elasticsearch.yml
... (snip)
network.host: ["localhost"]
... (snip)
http.port: 9999
```
Finally, start and enable the elasticsearch.service daemon.
```bash
# systemctl start elasticsearch.service; systemctl enable elasticsearch.service
```
To verify that it is working correctly, run the following.
```bash
# curl http://localhost:9200
{
  "name" : "elk.hongsnet.net",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "SenwWrKLQ9--JBCJz6eolA",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
- **Kibana**
> Visualizes and monitors data by leveraging Elasticsearch's fast search.
```bash
# rpm -ivh http://pds.hongsnet.net:8888/packages/kibana-7.9.3-x86_64.rpm
warning: kibana-7.7.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
1:kibana-7.7.0-1 ################################# [100%]
```
To change the listen address and port, edit the configuration as follows.
```bash
# cat /etc/kibana/kibana.yml |grep 61.100
server.host: "61.100.0.145"
elasticsearch.hosts: ["http://61.100.0.145:9999"]
# cat /etc/kibana/kibana.yml |grep 5666
server.port: 5666        # default: 5601
```
Finally, start and enable the kibana.service daemon.
```bash
# systemctl restart kibana.service; systemctl enable kibana.service
```
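To confirm that Kibana itself is responding, its status API can be queried. This is a minimal check assuming the listen address and port configured above:
```bash
# curl -s http://61.100.0.145:5666/api/status | head -c 300
```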
- **Logstash**
> Collects, aggregates, and parses log or transaction data from various sources (DB, syslog, CSV, etc.) and forwards it to Elasticsearch.

First, a JDK must be installed on the system:
```bash
# java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
```
```bash
# rpm -ivh http://pds.hongsnet.net:8888/packages/logstash-7.9.3.rpm
```
The following is the Logstash configuration file used by Hongsnet.
```bash
# cat /etc/logstash/conf.d/first-pipeline.conf
input {
  beats {
    port => 5444
    host => "0.0.0.0"
    client_inactivity_timeout => "1200"
    type => "filebeats"
  }
  udp {
    port => 514
    host => "0.0.0.0"
    type => "syslog"
  }
}
filter {
  if [type] == "messages" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  } else if [type] == "syslog" {
    if "Received disconnect" in [message] {
      drop { }
    } else if "Disconnected from" in [message] {
      drop { }
    } else if "Removed slice User Slice of" in [message] {
      drop { }
    } else if "NetworkDB stats" in [message] {
      drop { }
    } else if "Created slice User Slice" in [message] {
      drop { }
    } else if "Started Session" in [message] {
      drop { }
    } else if "Removed slice User Slice" in [message] {
      drop { }
    } else if "authentication failure" in [message] {
      drop { }
    } else if "reverse mapping checking getaddrinfo for" in [message] {
      drop { }
    } else if "not met by user" in [message] {
      drop { }
    } else if "Removed session" in [message] {
      drop { }
    } else if "Invalid user" in [message] {
      drop { }
    } else if "invalid user" in [message] {
      drop { }
    }
    grok {
      add_tag => [ "sshd_fail" ]
      match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IPORHOST:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
    }
    grok {
      add_tag => [ "sshd_fail2" ]
      match => { "message" => "Failed %{WORD:sshd_auth_type} for invalid user %{USERNAME:sshd_invalid_user} from %{IPORHOST:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
    }
    grok {
      add_tag => [ "sshd_accept" ]
      match => { "message" => "%{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: Accepted password for %{USERNAME:sshd_invalid_user} from %{IPORHOST:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
    }
    mutate {
      convert => {"geoip.city_name" => "string"}
    }
    geoip {
      source => "sshd_client_ip"
    }
  } else if [type] == "filebeats" {
    if "ZABBIXDB" in [message] {
      drop { }
    } else if "ZABBIX_DEMO" in [message] {
      drop { }
    } else if "zabbix" in [message] {
      drop { }
    }
    grok {
      add_tag => [ "db_conn" ]
      match => { "message" => "%{YEAR:year}%{MONTHNUM:month}%{MONTHDAY:day} %{TIME:time},%{GREEDYDATA:host},%{GREEDYDATA:username},%{GREEDYDATA:client_hostname},%{INT:connection_id},%{INT:query_id},%{GREEDYDATA:operation},%{GREEDYDATA:schema},%{GREEDYDATA:object},%{INT:return_code}" }
    }
    grok {
      match => [ "message", "^# User@Host: %{USER:query_user}(?:\[[^\]]+\])?\s+@\s+%{HOSTNAME:query_host}?\s+\[%{IP:query_ip}?\]" ]
    }
    grok {
      match => [ "message", "^# Thread_id: %{NUMBER:thread_id:int}\s+Schema: %{USER:schema}\s+Last_errno: %{NUMBER:last_errno:int}\s+Killed: %{NUMBER:killed:int}"]
    }
    grok {
      match => [ "message", "^# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time}\s+ Rows_sent: %{NUMBER:rows_sent:int} \s+Rows_examined: %{NUMBER:rows_examined:int}\s+Rows_affected: %{NUMBER:rows_affected:int}\s+Rows_read: %{NUMBER:rows_read:int}"]
    }
    grok { match => [ "message", "^# Bytes_sent: %{NUMBER:bytes_sent:float}"] }
    grok { match => [ "message", "^SET timestamp=%{NUMBER:timestamp}" ] }
    grok { match => [ "message", "^SET timestamp=%{NUMBER};\s+%{GREEDYDATA:query}" ] }
    date { match => [ "timestamp", "UNIX" ] }
    mutate {
      remove_field => "timestamp"
    }
  }
}
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => "172.16.0.228:9200"
      manage_template => false
      index => "rsyslog-%{+YYYY.MM.dd}"
    }
  } else if [type] == "filebeats" {
    elasticsearch {
      hosts => "172.16.0.228:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```
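Before starting the service, the pipeline syntax can be validated with Logstash's built-in config test. A hedged example, using the file path shown above:
```bash
# /usr/share/logstash/bin/logstash --path.settings /etc/logstash \
    --config.test_and_exit -f /etc/logstash/conf.d/first-pipeline.conf
```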
> Note: For the Hongsnet WEB log setup, refer to the logstash file in the WEB/ directory.
Finally, start and enable the logstash.service daemon.
```bash
# systemctl restart logstash.service; systemctl enable logstash.service
```
- **Final Verification**
The result can be verified with the curl command as follows:
```bash
# curl localhost:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .apm-custom-link MUv1k_7kQd25iCFSpR8aEw 1 0 0 0 208b 208b
yellow open filebeat-7.7.0-2020.05.14 Xb6sMlYSRwCTpV8PBUEt9g 1 1 6640 0 3.7mb 3.7mb
green open .kibana_task_manager_1 _DlHKSSXSsmdb254H4UTyg 1 0 5 0 38kb 38kb
green open .apm-agent-configuration 1bTCtA0wQEOgDkGPxhm3Lw 1 0 0 0 208b 208b
green open .kibana_1 HFt3nVQhRaWq8bPkBJcZFg 1 0 9 0 46.3kb 46.3kb
```
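To check that syslog events are actually searchable, the `rsyslog-*` index written by the output section above can be queried directly. A simple sketch, assuming at least one syslog event has already been shipped; the query string is only an example:
```bash
# curl -s 'http://localhost:9200/rsyslog-*/_search?q=sshd&size=1&pretty'
```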
- **Kibana Access**
> http://<server-address>:5601
# Extras
For a **standalone Docker-based** deployment, rather than the VM environment above, refer to the following Dockerfile.
```bash
# cat Dockerfile
FROM registry.hongsnet.net/joohan.hong/docker/centos:7.6.1810
MAINTAINER Hongs <master@hongsnet.net>
ENV TZ=Asia/Seoul
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone &&\
yum -y install epel-release
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
RUN yum -y install glibc glibc-common openssl-devel net-tools vi vim iproute wget postfix cronie crontabs java supervisor rsyslog httpd-tools
RUN rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
RUN yum install -y nginx
RUN chown -R nobody:nobody /usr/share/nginx
COPY config/rsyslog.conf /etc/rsyslog.conf
COPY config/nginx.conf /etc/nginx/nginx.conf
COPY config/default.conf /etc/nginx/conf.d/default.conf
COPY config/supervisord.conf /etc/supervisor/supervisord.conf
COPY config/supervisord_elk.conf /etc/supervisor/conf.d/supervisord_elk.conf
COPY config/htpasswd /etc/nginx/conf.d/.htpasswd
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/elasticsearch-7.9.3-x86_64.rpm
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/kibana-7.9.3-x86_64.rpm
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/logstash-7.9.3.rpm
COPY config/logstash_test.conf /etc/logstash/conf.d/logstash_test.conf
COPY config/kibana.yml /usr/share/kibana/config/kibana.yml
RUN chown kibana.kibana /usr/share/kibana/config/kibana.yml
RUN rpm -ivh http://pds.hongsnet.net:8888/packages/filebeat-7.9.3-x86_64.rpm
EXPOSE 443
EXPOSE 514/udp
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
```
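A hedged example of building and running this image; the image and container names are placeholders, not taken from the repository:
```bash
# docker build -t elk-stack .
# docker run -d --name elk \
    -p 443:443 \
    -p 514:514/udp \
    elk-stack
```
The configuration files referenced by the COPY instructions above are listed in the sections that follow.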

## Nginx Reverse Proxy for Kibana (config/default.conf)

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=elk_cache:10m max_size=3g inactive=120m use_temp_path=off;

upstream elk_backend {
    server 127.0.0.1:5601;
}

server {
    listen 443 ssl http2;
    server_name elk.hongsnet.net;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

    ssl_certificate /etc/letsencrypt/live/hongsnet.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hongsnet.net/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/hongsnet.net/chain.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 61.100.0.136 8.8.4.4 valid=300s;
    resolver_timeout 30s;

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    access_log /var/log/nginx/elk.hongsnet.net-access.log;
    error_log /var/log/nginx/elk.hongsnet.net-error.log;

    location ~ /api/v[0-9]+/(users/)?websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 50M;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_pass http://elk_backend;
    }

    location / {
        proxy_http_version 1.1;
        client_max_body_size 50M;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache elk_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_pass http://elk_backend;
    }
}
```
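With the basic-auth file in place, access through the proxy can be checked from the command line. A sketch only; replace `PASSWORD` with the real credentials:
```bash
# curl -u admin:PASSWORD -o /dev/null -w '%{http_code}\n' https://elk.hongsnet.net/api/status
```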

## Elasticsearch Configuration (elasticsearch.yml)

```yaml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
```

## Filebeat Configuration (filebeat.yml)

```yaml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
```
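Filebeat provides built-in checks that can validate this file and the output connection before the service is started; a hedged example:
```bash
# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml
```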

## Nginx Basic Auth File (config/htpasswd)

```
admin:XXXXXXXXX
```
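This file can be (re)generated with the `htpasswd` utility from httpd-tools, which the Dockerfile already installs. A sketch; the username matches the entry above, and `-c` creates or overwrites the file:
```bash
# htpasswd -c config/htpasswd admin
New password:
Re-type new password:
Adding password for user admin
```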

## Kibana Configuration (kibana.yml)

```yaml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
```

## Logstash Test Pipeline (config/logstash_test.conf)

```
input {
  beats {
    port => 5444
    host => "0.0.0.0"
  }
}
filter {
  if [system][process] {
    if [system][process][cmdline] {
      grok {
        match => {
          "[system][process][cmdline]" => "^%{PATH:[system][process][cmdline_path]}"
        }
        remove_field => "[system][process][cmdline]"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

## Nginx Main Configuration (config/nginx.conf)

```nginx
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;
    autoindex off;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
```

## Rsyslog Configuration (config/rsyslog.conf)

```
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###
```
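Because this configuration (and the Logstash udp input above) listens on 514/udp, a test message can be injected with `logger`. A hedged example using util-linux logger options:
```bash
# logger -n 127.0.0.1 -P 514 -d "elk rsyslog test message"
```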

## Supervisor Main Configuration (config/supervisord.conf)

```ini
[supervisord]
logfile = /dev/stdout ; (main log file;default $CWD/supervisord.log)
pidfile = /tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir = /tmp ; ('AUTO' child log dir, default $TEMP)
critical = critical
logfile_maxbytes = 0
logfile_backupcount = 0
loglevel = info
user=root
nodaemon=true
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl = unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
```

## Supervisor Programs for the ELK Stack (config/supervisord_elk.conf)

```ini
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
user=root
[program:rsyslog]
command=/usr/sbin/rsyslogd -n
user=root
autorestart=true
autostart=true
stdout_logfile=/var/log/rsyslog.log
stderr_logfile=/var/log/rsyslog.log
[program:elasticsearch]
command=/usr/share/elasticsearch/bin/systemd-entrypoint --quiet
user=elasticsearch
[program:kibana]
command=/usr/share/kibana/bin/kibana
user=kibana
stdout_logfile=/var/log/kibana.log
[program:logstash]
command=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
user=logstash
stdout_logfile=/var/log/logstash.log
[program:filebeat]
command=/usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat.log
user=root
stdout_logfile=/var/log/filebeat.log
```
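Inside the container, the state of all supervised processes can be checked with supervisorctl; a minimal sketch using the configuration path from the Dockerfile's CMD:
```bash
# supervisorctl -c /etc/supervisor/supervisord.conf status
```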