www.hongsnet.net Service Configuration
The following is the finally reviewed and verified K8s configuration; only the results are documented.
k8s Version Status
# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Cluster Node Status
# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
tb2          Ready    <none>                 17h   v1.20.4
tb2-docker   Ready    control-plane,master   23h   v1.20.4
tb3          Ready    <none>                 17h   v1.20.4
tb3-docker   Ready    <none>                 17h   v1.20.4
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.24.0.245 TB2-DOCKER
172.24.0.151 TB2
172.16.0.158 TB3
172.16.0.251 TB3-DOCKER
Cluster Status
# kubectl cluster-info
Kubernetes control plane is running at https://172.24.0.245:6443
KubeDNS is running at https://172.24.0.245:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Config Status
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.24.0.245:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Object Details
- Deployment
# cat hongsnet.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hongsnet
  name: hongsnet-web
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: hongsnet
  template:
    metadata:
      labels:
        app: hongsnet
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: key
                operator: In
                values:
                - worker
      volumes:
      - name: webeditor-hongsnet-volume
        persistentVolumeClaim:
          claimName: webeditor-hongsnet-volume-pvc
      - name: webdata-hongsnet-volume
        persistentVolumeClaim:
          claimName: webdata-hongsnet-volume-pvc
      - name: webeditor-edu-volume
        persistentVolumeClaim:
          claimName: webeditor-edu-volume-pvc
      - name: webdata-edu-volume
        persistentVolumeClaim:
          claimName: webdata-edu-volume-pvc
      - name: webeditorfile-edu-volume
        persistentVolumeClaim:
          claimName: webeditorfile-edu-volume-pvc
      - name: webeditor-newsystem-volume
        persistentVolumeClaim:
          claimName: webeditor-newsystem-volume-pvc
      - name: webdata-newsystem-volume
        persistentVolumeClaim:
          claimName: webdata-newsystem-volume-pvc
      - name: websrc-home-volume
        persistentVolumeClaim:
          claimName: websrc-home-volume-pvc
      containers:
      - image: registry.hongsnet.net/joohan.hong/docker/hongsnet:latest
        #imagePullPolicy: Always
        imagePullPolicy: IfNotPresent
        name: hongsnet-web
        volumeMounts:
        - mountPath: /home
          name: websrc-home-volume
        - mountPath: /home/edu/public_html/HongsBoard/Data
          name: webdata-edu-volume
        - mountPath: /home/edu/public_html/HongsBoard/Web_editor/EDU
          name: webeditor-edu-volume
        - mountPath: /home/edu/public_html/HongsBoard/Web_editor/FILE
          name: webeditorfile-edu-volume
        - mountPath: /home/hongsnet/public_html/Data
          name: webdata-hongsnet-volume
        - mountPath: /home/hongsnet/public_html/Web_editor/FILE
          name: webeditor-hongsnet-volume
        - mountPath: /home/newhongsystem/public_html/Data
          name: webdata-newsystem-volume
        - mountPath: /home/newhongsystem/public_html/Web_editor/FILE
          name: webeditor-newsystem-volume
        resources:
          requests:
            cpu: "500m"
          limits:
            cpu: "500m"
      nodeSelector:
        key: worker
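Each claimName above refers to a PersistentVolumeClaim defined outside this document. For reference, a minimal sketch of one such PVC follows; the access mode, storage class, and size are assumptions, since the actual PV/PVC manifests are not shown here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: websrc-home-volume-pvc
spec:
  accessModes:
  - ReadWriteMany          # assumed: replicas on multiple nodes share the same data
  storageClassName: nfs    # assumed storage class
  resources:
    requests:
      storage: 10Gi        # assumed size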
- Scale
# cat hongsnet_scale.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hongsnet-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hongsnet-web
  minReplicas: 6
  maxReplicas: 12
  targetCPUUtilizationPercentage: 80
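Given the container's CPU request of 500m, the 80% target means the HPA scales out once average usage exceeds roughly 400m (0.8 × 500m) per Pod. A functionally equivalent HPA could also be created imperatively; a sketch (note that kubectl names the generated object after the Deployment rather than hongsnet-hpa):
# kubectl autoscale deployment hongsnet-web --cpu-percent=80 --min=6 --max=12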
- Service
# cat hongsnet_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hongsnet-web
spec:
  type: NodePort
  selector:
    app: hongsnet
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
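Because the Service type is NodePort, tcp/30000 is opened on every node, so the deployment can be smoke-tested against any node IP from /etc/hosts above, for example (an assumed check, not from the original):
# curl -I http://172.24.0.151:30000/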
Service Status
# kubectl get svc
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hongsnet-web   NodePort    10.110.34.12   <none>        80:30000/TCP   7h47m
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        23h
Pod Status
# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
hongsnet-web-66cbcd57c7-fj9vq   1/1     Running   0          4m16s   10.244.1.53   tb2          <none>           <none>
hongsnet-web-66cbcd57c7-hr698   1/1     Running   0          8m49s   10.244.2.46   tb3          <none>           <none>
hongsnet-web-66cbcd57c7-nt7vs   1/1     Running   0          9m2s    10.244.3.36   tb3-docker   <none>           <none>
hongsnet-web-66cbcd57c7-qmxwn   1/1     Running   0          8m57s   10.244.2.45   tb3          <none>           <none>
hongsnet-web-66cbcd57c7-v97sn   1/1     Running   0          8m45s   10.244.1.49   tb2          <none>           <none>
hongsnet-web-66cbcd57c7-wnmwm   1/1     Running   0          8m53s   10.244.3.37   tb3-docker   <none>           <none>
Applying Node Affinity
Because of the default taint on the master node, Pods are effectively scheduled on only 3 nodes. Node Affinity is therefore applied so that Pods are distributed evenly across all of those nodes.
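The key=worker / key=master values that the nodeAffinity rule matches are ordinary node labels; they were presumably applied with commands along these lines (a sketch consistent with the Label Status output further below):
# kubectl label nodes tb2 key=worker
# kubectl label nodes tb3 key=worker
# kubectl label nodes tb3-docker key=worker
# kubectl label nodes tb2-docker key=master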
ReplicaSet Status
# ./kube-replicaget.sh
NAME                      DESIRED   CURRENT   READY   AGE
hongsnet-web-66cbcd57c7   6         6         6       9m27s
Deployment Status
# ./kube-getdeploy.sh
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hongsnet-web   6/6     6            6           3d9h
All Namespaces Status
# ./kube-allnamespace.sh
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                hongsnet-web-54b64478f9-6ff9x                1/1     Running   0          7h49m
default                hongsnet-web-54b64478f9-7bzjw                1/1     Running   0          7h49m
default                hongsnet-web-54b64478f9-gtsmt                1/1     Running   0          7h49m
default                hongsnet-web-54b64478f9-sqdrm                1/1     Running   0          7h49m
default                hongsnet-web-54b64478f9-vh7lf                1/1     Running   0          7h49m
kube-system            coredns-74ff55c5b-5zwvp                      1/1     Running   0          23h
kube-system            coredns-74ff55c5b-qpjpx                      1/1     Running   0          23h
kube-system            etcd-tb2-docker                              1/1     Running   0          23h
kube-system            kube-apiserver-tb2-docker                    1/1     Running   0          23h
kube-system            kube-controller-manager-tb2-docker           1/1     Running   0          23h
kube-system            kube-flannel-ds-2xrfm                        1/1     Running   0          23h
kube-system            kube-flannel-ds-4pqw4                        1/1     Running   0          17h
kube-system            kube-flannel-ds-dqv8s                        1/1     Running   0          17h
kube-system            kube-flannel-ds-v8d5h                        1/1     Running   0          17h
kube-system            kube-proxy-7tjgj                             1/1     Running   0          23h
kube-system            kube-proxy-cdh6g                             1/1     Running   0          17h
kube-system            kube-proxy-hqc94                             1/1     Running   0          17h
kube-system            kube-proxy-vbsnr                             1/1     Running   0          17h
kube-system            kube-scheduler-tb2-docker                    1/1     Running   0          23h
kube-system            metrics-server-6b4d4457cd-6mxcg              1/1     Running   0          9h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-r5pn9   1/1     Running   0          9h
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-q9m4x        1/1     Running   0          9h
Label Status
# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
tb2          Ready    <none>                 17h   v1.20.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,key=worker,kubernetes.io/arch=amd64,kubernetes.io/hostname=tb2,kubernetes.io/os=linux
tb2-docker   Ready    control-plane,master   23h   v1.20.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,key=master,kubernetes.io/arch=amd64,kubernetes.io/hostname=tb2-docker,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
tb3          Ready    <none>                 17h   v1.20.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,key=worker,kubernetes.io/arch=amd64,kubernetes.io/hostname=tb3,kubernetes.io/os=linux
tb3-docker   Ready    <none>                 17h   v1.20.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,key=worker,kubernetes.io/arch=amd64,kubernetes.io/hostname=tb3-docker,kubernetes.io/os=linux
Auto Scaling (HPA) Configuration Status
# kubectl get hpa
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hongsnet-hpa   Deployment/hongsnet-web   0%/80%    6         12        6          3d9h
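TARGETS reads 0%/80% because the site is idle at capture time. One common way to exercise the HPA is a throwaway load generator such as the following sketch (Pod name and approach are assumptions, not part of the original setup):
# kubectl run load-generator --rm -i --tty --image=busybox --restart=Never \
    -- /bin/sh -c "while true; do wget -q -O- http://hongsnet-web; done"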
HA-Proxy Back-end Status
backend www_hongsnet_net
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HONGSNET_LVS
    option httpclose
    cookie SVID insert indirect nocache maxlife 10m
    redirect scheme https code 301 if !{ ssl_fc }
    # error file configuration
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    http-request cache-use web_cache
    http-response cache-store web_cache
    http-request set-src src
    server tb2.hongsnet.net 172.24.0.151:30000 cookie tb2 check fall 3 rise 2
    server tb3.hongsnet.net 172.16.0.158:30000 cookie tb3 check fall 3 rise 2
    server tb3-docker.hongsnet.net 172.16.0.251:30000 cookie tb3-docker check fall 3 rise 2
The back-end listens on port tcp/30000 via the Service's NodePort.
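The backend above depends on an SSL-terminating frontend (for ssl_fc) and a web_cache cache section defined elsewhere in haproxy.cfg; neither is reproduced in the original. A minimal sketch of what they might look like, with assumed names, sizes, and certificate path:
cache web_cache
    total-max-size 64        # MB, assumed
    max-object-size 100000   # bytes, assumed
    max-age 240              # seconds, assumed

frontend www_hongsnet_net_front
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/hongsnet.pem   # assumed certificate path
    default_backend www_hongsnet_net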