Deploy a highly available Kubernetes 1.10.4 cluster with one small script. For more discussion, visit my blog: www.eryajf.net

eryajf/magic-of-kubernetes-scripts

1. Overview.

This script owes its existence entirely to the project shared at https://github.com/opsnull/follow-me-install-kubernetes-cluster. I had thought about writing a deployment script once I was familiar with k8s, but at the time I had no good approach; then last week I came across the deployment workflow of that open-source project, and it felt like the clouds had parted. While following the project to learn, I already planned to write this little deployment script.

So this script is essentially a consolidation of that project's workflow; all I did was a little tidying and debugging. Once again, my sincere thanks to its author!

Of course, when I actually sat down to put the script together, it turned out not to be so simple after all; but the effort of writing it is what makes every future deployment simpler.

A brief description of the servers I used:

All servers run CentOS 7.3; the script has not been tested on other OS versions.

Host             Hostname      Components
192.168.111.3    kube-node1    Kubernetes 1.10.4, Docker 18.03.1-ce, Etcd 3.3.7, Flanneld 0.10.0, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy
192.168.111.4    kube-node2    Same as above
192.168.111.5    kube-node3    Same as above

2. Preparation.

First, upload the entire deployment package to the deployment server and unpack it, then complete the following preparations.

I have packaged the full installation bundle and uploaded it to Baidu Cloud; feel free to download it.

Link: https://pan.baidu.com/s/1JbICafwEdIwHnsDlGvPIMw Extraction code: 4iaq

1. Modify the following files (a batch-replace sketch follows the list).

config/environment.sh                                    # change the IPs to those of the machines you will deploy to
config/Kcsh/hosts                                        # change the IPs to those of the machines you will deploy to
config/Ketcd/etcd-csr.json                               # change the IPs to those of the machines you will deploy to
config/Kmaster/Kha/haproxy.cfg                           # change the IPs to those of the machines you will deploy to
config/Kmaster/Kapi/kubernetes-csr.json                  # change the IPs to those of the machines you will deploy to
config/Kmaster/Kmanage/kube-controller-manager-csr.json  # change the IPs to those of the machines you will deploy to
config/Kmaster/Kscheduler/kube-scheduler-csr.json        # change the IPs to those of the machines you will deploy to
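
If your nodes do not use the default addresses from the table above (192.168.111.3-5), a minimal batch-replace sketch; the new addresses 10.0.0.3-5 are purely illustrative, and each file should still be reviewed by hand afterwards:

#!/bin/bash
# Sketch: batch-replace the default node IPs in config/ with your own.
# The OLD=NEW pairs below are hypothetical; adjust them to your environment.
declare -A IP_MAP=(
    [192.168.111.3]=10.0.0.3
    [192.168.111.4]=10.0.0.4
    [192.168.111.5]=10.0.0.5
)
for old in "${!IP_MAP[@]}"
do
    # grep -rl limits sed to files that actually contain the old IP;
    # the unescaped dots act as regex wildcards, which is harmless here.
    grep -rl "${old}" config/ | xargs -r sed -i "s/${old}/${IP_MAP[$old]}/g"
done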

2. Basic configuration.

All of the following operations are performed on the kube-node1 host.

Note: follow these steps exactly as written, otherwise the deployment script below may not run to completion.

# Generate a key pair, then distribute the public key so kube-node1
# can reach every node over SSH without a password.
ssh-keygen
ssh-copy-id 192.168.111.3
ssh-copy-id 192.168.111.4
ssh-copy-id 192.168.111.5

# Push the prepared hosts file to every node.
scp config/Kcsh/hosts root@192.168.111.3:/etc/hosts
scp config/Kcsh/hosts root@192.168.111.4:/etc/hosts
scp config/Kcsh/hosts root@192.168.111.5:/etc/hosts

# Set each node's hostname to match the hosts file.
ssh root@kube-node1 "hostnamectl set-hostname kube-node1"
ssh root@kube-node2 "hostnamectl set-hostname kube-node2"
ssh root@kube-node3 "hostnamectl set-hostname kube-node3"
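
Before moving on, it is worth confirming that passwordless SSH really works from kube-node1 to every node; a minimal sketch (BatchMode makes ssh fail instead of prompting):

#!/bin/bash
# Sketch: every node should print its hostname without asking for a password.
for node_ip in 192.168.111.3 192.168.111.4 192.168.111.5
do
    echo ">>> ${node_ip}"
    ssh -o BatchMode=yes root@${node_ip} "hostname"
done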

3. Deployment.

Deployment is very simple: just run the magic.sh script.

A few points deserve a brief note:

  • 1. Before starting the deployment, double-check that every configuration matches your environment; adjust anything that does not.
  • 2. If the deployment stalls, or exits before finishing, troubleshoot the corresponding deployment stage and re-run the script; it will resume from where it left off (see the logging sketch after this list).
  • 3. If you have suggestions about the script's shortcomings, feel free to raise them; let's maintain it together and improve together!
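
Since a failed stage may need a second look, it helps to keep a complete log of every run; a minimal sketch, assuming you launch it from the unpacked package directory:

# Sketch: run the deployment while keeping a timestamped log for troubleshooting.
bash magic.sh 2>&1 | tee "deploy-$(date +%Y%m%d-%H%M%S).log"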

4. Basic verification.

After deployment finishes, you can run the following initial checks of cluster availability:

1. Check that all services started correctly.

#!/bin/bash
#
#author:eryajf
#blog:www.eryajf.net
#time:2018-11
#
set -e
source /opt/k8s/bin/environment.sh

#
##set color##
echoRed() { echo $'\e[0;31m'"$1"$'\e[0m'; }
echoGreen() { echo $'\e[0;32m'"$1"$'\e[0m'; }
echoYellow() { echo $'\e[0;33m'"$1"$'\e[0m'; }
##set color##
#

for node_ip in ${NODE_IPS[@]}
do
    echoGreen ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd | grep Active"
    ssh root@${node_ip} "systemctl status flanneld | grep Active"
    ssh root@${node_ip} "systemctl status haproxy | grep Active"
    ssh root@${node_ip} "systemctl status keepalived | grep Active"
    ssh root@${node_ip} "systemctl status kube-apiserver | grep Active"
    ssh root@${node_ip} "systemctl status kube-controller-manager | grep Active"
    ssh root@${node_ip} "systemctl status kube-scheduler | grep Active"
    ssh root@${node_ip} "systemctl status docker | grep Active"
    ssh root@${node_ip} "systemctl status kubelet | grep Active"
    ssh root@${node_ip} "systemctl status kube-proxy | grep Active"
done
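
Every unit should report Active: active (running). If one does not, a minimal troubleshooting sketch on the affected node (the unit name etcd here is only an example):

# Sketch: inspect a unit that failed to start; "etcd" is an example unit name.
systemctl status etcd -l
journalctl -u etcd --no-pager | tail -n 50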

2. Check that the related services are usable.

1. Verify etcd cluster health.

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
done
EOF
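
Run it with bash magic.sh; every endpoint should report healthy. To additionally confirm that all three members have joined the cluster, a minimal sketch using the same certificates as above:

# Sketch: list the etcd members; all three nodes should show up as started.
source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${NODE_IPS[0]}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem member list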

2. Verify the flannel network.

List the Pod subnets that have been allocated:

source /opt/k8s/bin/environment.sh

etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets

Output:

/kubernetes/network/subnets/172.30.84.0-24
/kubernetes/network/subnets/172.30.8.0-24
/kubernetes/network/subnets/172.30.29.0-24
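
To see which node owns one of these subnets, you can read the key directly; a minimal sketch, with the key 172.30.84.0-24 taken from the listing above (flannel stores its data via the etcd v2 API, hence the v2-style flags):

source /opt/k8s/bin/environment.sh

# Sketch: show the owner (PublicIP) and backend config of one allocated subnet.
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.84.0-24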

Verify that all nodes can reach one another over the Pod network:

Note: replace the subnet IPs below with your own, taken from the listing above.

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh ${node_ip} "ping -c 1 172.30.8.0"
    ssh ${node_ip} "ping -c 1 172.30.29.0"
    ssh ${node_ip} "ping -c 1 172.30.84.0"
done
EOF

3. Verify the HA components.

Check which node holds the VIP, and confirm that the VIP responds to ping:

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF}"
    ssh ${node_ip} "ping -c 1 ${MASTER_VIP}"
done
EOF
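
Exactly one node should show ${MASTER_VIP} bound to ${VIP_IF}; a minimal sketch that prints only the current holder:

#!/bin/bash
# Sketch: print just the node that currently holds the VIP.
source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    ssh ${node_ip} "/usr/sbin/ip addr show ${VIP_IF} | grep -q ${MASTER_VIP}" \
        && echo "VIP ${MASTER_VIP} is on ${node_ip}"
done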

4. HA failover test.

Check the current leader:

$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node1_444fbc06-f3d8-11e8-8ca8-0050568f514f","leaseDurationSeconds":15,"acquireTime":"2018-11-29T13:11:21Z","renewTime":"2018-11-29T13:48:10Z","leaderTransitions":0}'
  creationTimestamp: 2018-11-29T13:11:21Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3134"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 4452bff1-f3d8-11e8-a5a6-0050568fef9b

As shown in the holderIdentity annotation, the current leader is the kube-node1 node.

Now stop kube-controller-manager on kube-node1:

$ systemctl stop kube-controller-manager
$ systemctl status kube-controller-manager | grep Active
   Active: inactive (dead) since Sat 2018-11-24 00:52:53 CST; 44s ago

About a minute later, check the leader again:

$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node3_45525ae6-f3d8-11e8-a2b8-0050568fbcaa","leaseDurationSeconds":15,"acquireTime":"2018-11-29T13:49:28Z","renewTime":"2018-11-29T13:49:28Z","leaderTransitions":1}'
  creationTimestamp: 2018-11-29T13:11:21Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3227"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 4452bff1-f3d8-11e8-a5a6-0050568fef9b

As you can see, leadership has automatically failed over to kube-node3.
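
After the failover test, remember to bring the stopped component back up on kube-node1:

$ systemctl start kube-controller-manager
$ systemctl status kube-controller-manager | grep Active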

5. Verify kube-proxy.

Check the ipvs routing rules

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
EOF

Output (this sample was captured on a cluster whose node IPs were 192.168.111.120-122):

$ bash magic.sh
>>> 192.168.111.120
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.111.120:6443         Masq    1      0          0
  -> 192.168.111.121:6443         Masq    1      0          0
  -> 192.168.111.122:6443         Masq    1      0          0
>>> 192.168.111.121
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.111.120:6443         Masq    1      0          0
  -> 192.168.111.121:6443         Masq    1      0          0
  -> 192.168.111.122:6443         Masq    1      0          0
>>> 192.168.111.122
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.111.120:6443         Masq    1      0          0
  -> 192.168.111.121:6443         Masq    1      0          0
  -> 192.168.111.122:6443         Masq    1      0          0

6. Create a test application.

Check the cluster nodes:

$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
kube-node1   Ready     <none>    45m       v1.10.4
kube-node2   Ready     <none>    45m       v1.10.4
kube-node3   Ready     <none>    45m       v1.10.4

Create the test application:

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

Apply the definition file; before doing so, you can pre-pull the image defined above on every node, as sketched below.
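
A minimal pre-pull sketch, assuming docker is already running on every node:

#!/bin/bash
# Sketch: pre-pull the test image on every node so the DaemonSet starts quickly.
source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "docker pull nginx:1.7.9"
done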

$ kubectl create -f nginx-ds.yml
service "nginx-ds" created
daemonset.extensions "nginx-ds" created

Check Pod IP connectivity across the Nodes

$ kubectl get pods -o wide | grep nginx-ds
nginx-ds-78lqz   1/1       Running   0          2m        172.30.87.2   kube-node3
nginx-ds-j45zf   1/1       Running   0          2m        172.30.99.2   kube-node2
nginx-ds-xhttt   1/1       Running   0          2m        172.30.55.2   kube-node1

As shown, the nginx-ds Pod IPs are 172.30.87.2, 172.30.99.2, and 172.30.55.2. Ping these three IPs from every Node to confirm connectivity:

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 172.30.87.2"
    ssh ${node_ip} "ping -c 1 172.30.99.2"
    ssh ${node_ip} "ping -c 1 172.30.55.2"
done
EOF

Check Service IP and port reachability

$ kubectl get svc | grep nginx-ds
nginx-ds           NodePort    10.254.110.153   <none>        80:8781/TCP      6h

curl the Service IP on every Node:

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh ${node_ip} "curl 10.254.128.98"
done
EOF

Check the Service's NodePort reachability

cat > magic.sh << "EOF"
#!/bin/bash

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}
do
    echo ">>> ${node_ip}" 
    ssh ${node_ip} "curl ${node_ip}:8996"
done
EOF
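
As a final sanity check, you can ask the cluster itself; both commands are standard kubectl:

$ kubectl get componentstatuses
$ kubectl cluster-info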
