Redundant WEB Proxy Configuration Using Pacemaker

This document covers a MASTER / SLAVE setup on KVM virtual machines.

Environment

  • KVM Virtual Machine
  • CentOS 7
  • Pacemaker / Corosync
  • Heartbeat Network

Installing and Configuring Pacemaker

Pacemaker communicates over a heartbeat network, so register the node IPs in the /etc/hosts file as follows.

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.242 adminproxy-master
192.168.200.243 adminproxy-slave
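
Since every cluster command below references these names, it is worth checking that both entries are present on each node before going further. A minimal sketch, run here against an embedded copy of the entries above (on a real node you would read /etc/hosts itself):

```shell
# Check that each cluster node name appears exactly once in the hosts
# table (embedded copy here; on a node, read /etc/hosts instead).
hosts='192.168.200.242 adminproxy-master
192.168.200.243 adminproxy-slave'

for node in adminproxy-master adminproxy-slave; do
    count=$(printf '%s\n' "$hosts" | grep -c -w "$node")
    [ "$count" -eq 1 ] && echo "$node: ok" || echo "$node: missing or duplicated"
done
```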

Install the Pacemaker packages on all nodes as follows.

[ALL]# yum install pcs fence-agents-all

These packages are provided by the CentOS Base repository.

Finish the installation: start pcsd, set the hacluster password, and authenticate the nodes.

[ALL]# systemctl start pcsd; systemctl enable pcsd

[ALL]# echo <PASSWORD> | passwd --stdin hacluster
[MASTER]# pcs cluster auth adminproxy-master adminproxy-slave -u hacluster

[MASTER]# pcs cluster setup --start --name ADMIN-PROXY adminproxy-master adminproxy-slave
[MASTER]# pcs cluster enable --all
[MASTER]# pcs property set stonith-enabled=false
[MASTER]# pcs property set no-quorum-policy=ignore

!Important : cluster Quorum and Fencing settings

Because this cluster consists of only 2 nodes, quorum enforcement must be disabled (no-quorum-policy=ignore). Likewise, when no fence device (IPMI, etc.) is configured, STONITH must be disabled (stonith-enabled=false). If these settings are skipped, the cluster will not come up.
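
To confirm the two properties actually took effect, they can be read back from `pcs property list`. A sketch of the check, run here against an embedded sample of that command's output (on the MASTER you would pipe the live command instead):

```shell
# Filter the two safety-critical properties out of `pcs property list`
# output (sample embedded here; live, pipe the real command in instead).
props='Cluster Properties:
 cluster-infrastructure: corosync
 no-quorum-policy: ignore
 stonith-enabled: false'
printf '%s\n' "$props" | awk -F': ' '/no-quorum-policy|stonith-enabled/ {gsub(/^ /,"",$1); print $1 "=" $2}'
```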

Cluster Resource Configuration and Purpose

  • Service / Private VIP resources => used as the service VIPs
  • NGINX / PHP-FPM installation and resources => the WEB Proxy service
  • Cluster constraint settings

Service / Private VIP configuration

[MASTER]# pcs resource create PU-VIP IPaddr2 ip=61.100.0.XX cidr_netmask=32 nic=eth0 op monitor interval=10s timeout=30s on-fail=standby
[MASTER]# pcs resource create PR-VIP IPaddr2 ip=172.24.0.XX cidr_netmask=32 nic=eth1 op monitor interval=10s timeout=30s on-fail=standby

This assigns the service VIP (61.100.0.XX) to nic eth0, with a 10-second monitor interval and a 30-second timeout. If monitoring fails on the current owner, on-fail=standby places that node in standby and the role fails over to the other node.

NGINX / PHP-FPM installation and resource configuration

Nginx cannot be installed from the CentOS Base repository, so configure an internal mirror repository as follows.

[ALL]# cat /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://yum.hongsnet.net:8889/nginx/centos7/$basearch
gpgcheck=0
enabled=1
[ALL]# yum install nginx

Likewise, the latest PHP-FPM cannot be installed from the CentOS Base repository, so configure the internal mirror repositories as follows.

[ALL]# cat /etc/yum.repos.d/hongs.repo
[remi]
name = Remi Safe
baseurl = http://yum.hongsnet.net:8889/centos/7/remi/$basearch
gpgcheck = 0
enabled = 1

[centos7-extra]
name = CentOS7 Extras(EPEL)
baseurl = http://yum.hongsnet.net:8889/centos/7/extra/$basearch
gpgcheck = 0
enabled = 1

Install PHP-FPM together with the common extension libraries, so that a typical WEB environment is fully supported:

[ALL]# yum install php74-php-fpm php74 php74-php-pdo php74-php-xml php74-php-cli php74-php-pdo-firebird \
php74-php-mysqlnd php74-php-process php74-php-pear php74-php-pecl-mysql \
php74-php-pecl-interbase php74-php-xmlrpc php74-php-odbc php74-php-pspell php74-php-gmp \
php74-php-ldap php74-php-pecl-zip php74-php-intl php74-php-dbg php74-php-enchant \
php74-php-tidy php74-php-pecl-mcrypt php74-php-opcache php74-php-bcmath \
php74-php-dba php74-php-soap php74-php-imap php74-php-gd php74-php-mbstring \
php74-php-pecl-recode

!Important : after installation, the nginx / php-fpm daemons must be controlled by Pacemaker, so they must be disabled from starting on their own after a reboot.

[ALL]# systemctl disable nginx; systemctl disable php74-php-fpm

php-fpm configuration

When installed, php-fpm listens on TCP by default; to switch it to a UNIX socket instead, change the listen directive as follows.

[ALL]# cat /etc/opt/remi/php74/php-fpm.d/www.conf
...(snip)
;listen = 127.0.0.1:9000
listen = /var/opt/remi/php74/run/php-fpm/php-fpm.socket
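
The same change can be made non-interactively with `sed`; a sketch that edits a temporary copy for a safe dry run (on a node the target would be the www.conf file above, followed by a php-fpm restart through the cluster):

```shell
# Switch php-fpm from TCP to a UNIX socket with sed. A temp copy is
# edited here; on a node the file is /etc/opt/remi/php74/php-fpm.d/www.conf.
conf=$(mktemp)
printf 'listen = 127.0.0.1:9000\n' > "$conf"
sed -i 's#^listen = 127.0.0.1:9000$#listen = /var/opt/remi/php74/run/php-fpm/php-fpm.socket#' "$conf"
cat "$conf"
```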

nginx configuration

To connect Nginx with php-fpm, configure it as follows.

# cat /etc/nginx/conf.d/default.conf
server {
    listen       80;
    server_name  61.100.0.185;
    root /var/www/html;
    ...(snip)
    
    # pass PHP scripts to php-fpm (TCP 127.0.0.1:9000, or the UNIX socket)
    #
    location ~ \.php$ {
        #fastcgi_pass unix:/var/opt/remi/php74/run/php-fpm/php-fpm.socket;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param   SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
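
The effect of the `location ~ \.php$` block can be illustrated with a small routing sketch: anything ending in `.php` is handed to php-fpm over FastCGI, and every other URI is served as a static file from the document root.

```shell
# Mimic nginx's location split: .php URIs go to FastCGI (php-fpm),
# everything else is served from the document root.
route() {
    case "$1" in
        *.php) echo "$1 -> fastcgi (php-fpm)" ;;
        *)     echo "$1 -> static from /var/www/html" ;;
    esac
}
route /index.php
route /assets/app.css
```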

nginx and php-fpm resource configuration

Register the nginx / php-fpm daemons as cluster resources:

[MASTER]# pcs resource create NGINX systemd:nginx
[MASTER]# pcs resource create PHP-FPM systemd:php74-php-fpm

Cluster constraint settings

Once the services are registered, both nodes, MASTER and SLAVE alike, will try to run the resources on their own, so constraints are needed.

  • Cluster resource start-order settings
  • [ STEP 1 ] : start the service Public IP / Private IP
[MASTER]# pcs constraint order start PU-VIP then start PR-VIP

Start the PU-VIP (Public IP) resource, then the PR-VIP resource.

  • [ STEP 2 ] : start the Private IP, then NGINX
[MASTER]# pcs constraint order start PR-VIP then start NGINX

Start the PR-VIP (Private IP) resource, then the NGINX resource.

  • [ STEP 3 ] : start php-fpm
[MASTER]# pcs constraint order start NGINX then start PHP-FPM
  • [ STEP 4 ] : keep the key cluster resources together (colocation)
[MASTER]# pcs constraint colocation add PU-VIP with PR-VIP
[MASTER]# pcs constraint colocation add PR-VIP with NGINX
[MASTER]# pcs constraint colocation add NGINX with PHP-FPM

These colocation rules mean that all cluster resources run together on the owner node.
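
The three order constraints form a single dependency chain. As an illustration, feeding the same pairs to `tsort` prints the effective start sequence (Pacemaker stops the resources in the reverse order):

```shell
# The order constraints as "before after" pairs; tsort emits a valid
# start order for the chain PU-VIP -> PR-VIP -> NGINX -> PHP-FPM.
printf '%s\n' \
    'PU-VIP PR-VIP' \
    'PR-VIP NGINX' \
    'NGINX PHP-FPM' | tsort
```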

Cluster configuration summary

# pcs status
Cluster name: ADMIN-PROXY
Stack: corosync
Current DC: adminproxy-master (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Tue Mar  2 11:31:35 2021
Last change: Mon Mar  1 01:45:26 2021 by root via crm_resource on adminproxy-master

2 nodes configured
4 resource instances configured

Online: [ adminproxy-master adminproxy-slave ]

Full list of resources:

 PU-VIP (ocf::heartbeat:IPaddr2):       Started adminproxy-master
 PR-VIP (ocf::heartbeat:IPaddr2):       Started adminproxy-master
 NGINX  (systemd:nginx):        Started adminproxy-master
 PHP-FPM        (systemd:php74-php-fpm):        Started adminproxy-master

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
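
For a quick health check it can help to reduce this output to a resource-to-node map; a sketch run against an embedded sample of the lines above (live, you would pipe `pcs status resources` in instead):

```shell
# Reduce `pcs status` resource lines to "resource -> node". Sample
# lines are embedded; on a node, pipe `pcs status resources` instead.
status='PU-VIP (ocf::heartbeat:IPaddr2): Started adminproxy-master
PR-VIP (ocf::heartbeat:IPaddr2): Started adminproxy-master
NGINX (systemd:nginx): Started adminproxy-master
PHP-FPM (systemd:php74-php-fpm): Started adminproxy-master'
printf '%s\n' "$status" | awk '$3 == "Started" {print $1 " -> " $4}'
```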

The following shows each resource's attributes and operations in detail.

# pcs resource --full
 Resource: PU-VIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: cidr_netmask=32 ip=61.100.0.231 nic=eth0
  Operations: monitor interval=10s on-fail=standby timeout=30s (PU-VIP-monitor-interval-10s)
              start interval=0s timeout=20s (PU-VIP-start-interval-0s)
              stop interval=0s timeout=20s (PU-VIP-stop-interval-0s)
 Resource: PR-VIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: cidr_netmask=32 ip=192.168.200.231 nic=eth1
  Operations: monitor interval=10s on-fail=standby timeout=30s (PR-VIP-monitor-interval-10s)
              start interval=0s timeout=20s (PR-VIP-start-interval-0s)
              stop interval=0s timeout=20s (PR-VIP-stop-interval-0s)
 Resource: NGINX (class=systemd type=nginx)
  Operations: monitor interval=60 timeout=100 (NGINX-monitor-interval-60)
              start interval=0s timeout=100 (NGINX-start-interval-0s)
              stop interval=0s timeout=100 (NGINX-stop-interval-0s)
 Resource: PHP-FPM (class=systemd type=php74-php-fpm)
  Operations: monitor interval=60 timeout=100 (PHP-FPM-monitor-interval-60)
              start interval=0s timeout=100 (PHP-FPM-start-interval-0s)
              stop interval=0s timeout=100 (PHP-FPM-stop-interval-0s)

Cluster management commands

  • Running a resource CleanUP

When a cluster error occurs, it can be cleared by running a CleanUP as follows.

[OWNER]# pcs resource cleanup <RESOURCE_NAME>

Note, however, that if the system configuration or a daemon is actually broken, the error will not be fixed and will keep recurring; CleanUP only clears errors that have already been recorded.

  • Running a resource Relocate

For maintenance on the running cluster, an on-demand failover (relocate) to the Slave node can be performed.

[OWNER]# pcs resource relocate run <RESOURCE_NAME>

Extras

The following are nginx configurations in use on this WEB Proxy. After changing the Back-End IP address and port, each can be applied to a service as-is.
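
For example, retargeting one of these configs to a new Back-End is a one-line `sed` edit; a sketch on a temporary copy (10.0.0.50:5601 is a made-up example address, and on a node you would finish with `nginx -t` and a reload through the cluster):

```shell
# Swap the upstream Back-End address in a config copy. The new address
# 10.0.0.50:5601 is a made-up example; a temp copy is edited for a dry run.
conf=$(mktemp)
printf 'upstream elk_demo_backend {\n  server 172.16.0.228:5601;\n}\n' > "$conf"
sed -i 's/server 172\.16\.0\.228:5601;/server 10.0.0.50:5601;/' "$conf"
grep 'server ' "$conf"
```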

elk

# cat /etc/nginx/conf.d/elk-demo.conf
upstream elk_demo_backend {
  server 172.16.0.228:5601;
}

server {

    listen 443 ssl http2;
    server_name elk-demo.hongsnet.net;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd_demo;

    ssl_certificate /etc/letsencrypt/live/hongsnet.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hongsnet.net/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/hongsnet.net/chain.pem;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 61.100.0.136 8.8.4.4 valid=300s;
    resolver_timeout 30s;

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    access_log /var/log/nginx/elk-demo.hongsnet.net-access.log;
    error_log /var/log/nginx/elk-demo.hongsnet.net-error.log;

    location ~ /api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_pass http://elk_demo_backend;
   }

   location / {
       proxy_http_version 1.1;
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_pass http://elk_demo_backend;
   }

}

jenkins

# cat /etc/nginx/conf.d/jenkins.conf
server {
  listen  80;
  server_name jenkins.freehongs.net;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  location / {
    return 301 https://$server_name$request_uri;
  }
}

server {
  listen  443 ssl http2;
  server_name jenkins.freehongs.net;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  ssl_certificate /etc/letsencrypt_freehongs/live/freehongs.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt_freehongs/live/freehongs.net/privkey.pem;
  ssl_trusted_certificate /etc/letsencrypt_freehongs/live/freehongs.net/chain.pem;

  location / {

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Required for new HTTP-based CLI
    proxy_http_version 1.1;
    proxy_request_buffering off;
    proxy_buffering off; # Required for HTTP-based CLI to work over SSL

    proxy_pass http://172.16.0.158:8888;
    proxy_read_timeout 90;
    proxy_redirect http://172.16.0.158:8888 jenkins.freehongs.net;

    # workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
    add_header 'X-SSH-Endpoint' 'jenkins.freehongs.tld:50022' always;
  }

  location /asynchPeople {
    return 401;
  }

}

mattermost

# cat /etc/nginx/conf.d/mattermost.conf
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

upstream mattermost_backend {
  server 172.24.0.151:8888;
  keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name chat.hongsnet.net;

    ssl_certificate /etc/letsencrypt/live/hongsnet.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hongsnet.net/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/hongsnet.net/chain.pem;

    http2_push_preload on; # Enable HTTP/2 Server Push

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 61.100.0.136 8.8.4.4 valid=300s;
    resolver_timeout 30s;

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    access_log /var/log/nginx/chat.hongsnet.net-access.log;
    error_log /var/log/nginx/chat.hongsnet.net-error.log;

   location ~ /api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_pass http://mattermost_backend;
   }

   location / {
       proxy_http_version 1.1;
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_pass http://mattermost_backend;
   }

}

portainer

# cat /etc/nginx/conf.d/port135.conf
server {
  listen  80;
  server_name port135.hongsnet.net;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  location / {
    return 301 https://$server_name$request_uri;
  }
}

server {
  listen  443 ssl http2;
  server_name port135.hongsnet.net;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  ssl_certificate /etc/letsencrypt/live/hongsnet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/hongsnet.net/privkey.pem;
  ssl_trusted_certificate /etc/letsencrypt/live/hongsnet.net/chain.pem;

  #include ssl-params.conf;

  location / {

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Required for new HTTP-based CLI
    proxy_http_version 1.1;
    proxy_request_buffering off;
    proxy_buffering off; # Required for HTTP-based CLI to work over SSL

    proxy_pass http://192.168.200.62:9000;
    proxy_read_timeout 90;
    proxy_redirect http://192.168.200.62:9000 port135.hongsnet.net;

  }

  location /api/websocket/ {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_pass http://192.168.200.62:9000/api/websocket/;
  }

}

grafana

# cat /etc/nginx/conf.d/grafana.conf
server {
  listen  80;
  server_name grafana.hongsnet.net;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  location / {
    return 301 https://$server_name$request_uri;
  }
}

server {
  listen  443 ssl http2;
  server_name grafana.hongsnet.net;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  ssl_certificate /etc/letsencrypt/live/hongsnet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/hongsnet.net/privkey.pem;
  ssl_trusted_certificate /etc/letsencrypt/live/hongsnet.net/chain.pem;

  location / {

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Required for new HTTP-based CLI
    proxy_http_version 1.1;
    proxy_request_buffering off;
    proxy_buffering off; # Required for HTTP-based CLI to work over SSL

    proxy_pass http://172.24.0.245:3000;
    proxy_read_timeout 90;
    proxy_redirect http://172.24.0.245:3000 grafana.hongsnet.net;

  }

}