k8s Installation

[TOC]

I. Operations on the master node

All of the packages that need to be downloaded and installed have already been fetched and placed on the cluster, so the steps below simply copy them from the cluster's backup directory and use them directly.

The services configured on the master node are: etcd, flannel, docker, kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubedns (add-on), and kube-dashboard (add-on).

1. Set up the environment variables

cat /atlas/backup/kubernetes-1.7.6/environment.sh
#!/usr/bin/bash

BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"
SERVICE_CIDR="10.254.0.0/16"
CLUSTER_CIDR="172.30.0.0/16"
export NODE_PORT_RANGE="8400-9000"
export ETCD_ENDPOINTS="http://172.16.10.18:2379,http://172.16.10.10:2379"
export FLANNEL_ETCD_PREFIX="/flannel/network"
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
export CLUSTER_DNS_SVC_IP="10.254.0.2"
export CLUSTER_DNS_DOMAIN="cluster.local."
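
The BOOTSTRAP_TOKEN above is just a random hex string that is later used for kubelet TLS bootstrapping. If it ever needs to be regenerated, something along these lines works (a sketch; any 16-byte random hex value will do):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# prints a token such as 41f7e4ba8b7be874fcff18bf5cf41a7c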

2. Download cfssl

cfssl will be used later to generate the certificate authority.

cd /atlas/backup/kubernetes-1.7.6/cfssl
cp cfssl_linux-amd64 /usr/local/bin/cfssl
cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
cp cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3. Create the CA config file ca-config.json and the signing request ca-csr.json

mkdir /atlas/backup/kubernetes-1.7.6/ssl 
cd /atlas/backup/kubernetes-1.7.6/ssl/
# vim /atlas/backup/kubernetes-1.7.6/ssl/ca-config.json
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
EOF

# vim /atlas/backup/kubernetes-1.7.6/ssl/ca-csr.json
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

4. Generate the certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
mkdir -pv /etc/kubernetes/ssl
cp /atlas/backup/kubernetes-1.7.6/ssl/ca* /etc/kubernetes/ssl

5. Load the environment variables

export NODE_NAME=etcd-host0
export NODE_IP=172.16.10.10
export NODE_IPS="172.16.10.10 172.16.10.18"
export ETCD_NODES=etcd-host0=http://172.16.10.10:2380,etcd-host1=http://172.16.10.18:2380
source /atlas/backup/kubernetes-1.7.6/environment.sh

6. Configure the etcd service

Kubernetes stores all of its data in etcd, and the flannel network service stores its data in etcd as well. The etcd cluster is set up only on the login node and the master node. Some tutorials add CA authentication to the etcd and flannel setup; for simplicity, no encryption is used here.

yum install -y /atlas/backup/kubernetes-1.7.6/etcd3-3.0.14-2.el7.x86_64.rpm 

cat > /etc/etcd/etcd.conf <<EOF
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.10.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.10.10:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.10.10:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://172.16.10.10:2380,etcd2=http://172.16.10.18:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.10.10:2379"
EOF

sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service

7. Start and check the service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

The start command may block for a while (it waits to communicate with the etcd service on the other member); if the login node's etcd is brought up at the same time, startup completes quickly.
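
Besides the health check in the next step, the cluster membership and the elected leader can also be inspected once both members are up; a sketch (the output layout varies between etcd versions):

ETCDCTL_API=3 etcdctl --endpoints=http://172.16.10.10:2379 member list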

8. Check the health of the etcd cluster

for ip in ${NODE_IPS}; do
ETCDCTL_API=3 etcdctl \
--endpoints=http://${ip}:2379 \
endpoint health; done

# expected output
http://172.16.10.10:2379 is healthy: successfully committed proposal: took = 15.749146ms
http://172.16.10.18:2379 is healthy: successfully committed proposal: took = 11.567679ms

9. Configure the flannel service

Kubernetes requires that all nodes in the cluster can reach one another over the Pod network, so flannel is configured to take care of the networking.

yum -y install /atlas/backup/kubernetes-1.7.6/flannel-0.7.1-2.el7.x86_64.rpm
# Delete the docker0 bridge that docker created by default; it was assigned a subnet that cannot reach the other nodes, so remove it. Docker recreates it automatically when it is restarted.
ip link delete docker0

# Configure flannel to store its data in etcd, and set the interface flannel uses to enp4s0f0 (change this to match the actual NIC name)
cat > /etc/sysconfig/flanneld <<EOF
FLANNEL_ETCD_ENDPOINTS="http://172.16.10.10:2379,http://172.16.10.18:2379"
FLANNEL_ETCD_PREFIX="/flannel/network"
FLANNEL_OPTIONS="--iface=enp4s0f0"
EOF
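
flanneld expects the network configuration to already exist at ${FLANNEL_ETCD_PREFIX}/config in etcd (step 11 reads it back). That write is not shown in this guide, so presumably it was done beforehand with something like the following sketch, using the v2 etcdctl API that flannel 0.7 talks to:

etcdctl --endpoints=${ETCD_ENDPOINTS} \
set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'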

10. Start the service and check its status

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
journalctl -u flanneld | grep 'Lease acquired'

Oct 25 15:01:20 master flanneld-start[19831]: I1025 15:01:20.304394 19831 manager.go:250] Lease acquired: 172.30.40.0/24

ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.30.40.0 netmask 255.255.255.255 broadcast 0.0.0.0
ether d2:d6:8b:be:2f:70 txqueuelen 0 (Ethernet)
RX packets 1989 bytes 862786 (842.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2560 bytes 207435 (202.5 KiB)
TX errors 0 dropped 11 overruns 0 carrier 0 collisions 0

11. Verify the flannel network

[root@master ~]#etcdctl \
> --endpoints=${ETCD_ENDPOINTS} \
> get ${FLANNEL_ETCD_PREFIX}/config
{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}
[root@master ~]#etcdctl \
> --endpoints=${ETCD_ENDPOINTS} \
> ls ${FLANNEL_ETCD_PREFIX}/subnets
/flannel/network/subnets/172.30.50.0-24
/flannel/network/subnets/172.30.68.0-24
/flannel/network/subnets/172.30.89.0-24
/flannel/network/subnets/172.30.7.0-24
/flannel/network/subnets/172.30.40.0-24
/flannel/network/subnets/172.30.34.0-24

# This shows the flannel subnets leased to all of the nodes

Install and configure docker; the master node also acts as a worker node. The docker installation itself is not described here. Remember to delete the docker0 bridge when setting up flannel; if you forget, it can still be deleted before restarting docker.

12. Modify the docker configuration

Once docker is installed, only its systemd unit needs to be adjusted: load the parameters from the /run/flannel/docker file when starting docker, and disable docker's iptables management.

vim /usr/lib/systemd/system/docker.service

[Service]
...
EnvironmentFile=-/run/flannel/docker # add this line
ExecStart=/usr/bin/dockerd --log-level=error --iptables=false $DOCKER_NETWORK_OPTIONS # modify this line
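
For reference, /run/flannel/docker is generated by flanneld when it starts, and it roughly contains the bridge options matching the subnet leased to this node; an illustrative example (values here are assumptions based on the 172.30.40.0/24 lease shown earlier):

DOCKER_OPT_BIP="--bip=172.30.40.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.40.1/24 --ip-masq=true --mtu=1450"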

13. Relax the firewall

Flush the iptables rules; the CentOS 7 firewall rejects forwarding by default, so set the FORWARD policy to ACCEPT.

iptables -F 
iptables -P FORWARD ACCEPT
systemctl daemon-reload
systemctl enable docker
systemctl start docker

14. Configure kubectl

kubectl is the client tool; by default it reads the kube-apiserver address, certificates, user name, and other information from the ~/.kube/config file.

First set the kube-apiserver address and install the kubectl client binaries.

export MASTER_IP=172.16.10.10
export KUBE_APISERVER="https://${MASTER_IP}:6443"
cp /atlas/backup/kubernetes-1.7.6/kubernetes/client/bin/kube* /usr/local/bin/
chmod a+x /usr/local/bin/kube*
chmod a+r /usr/local/bin/kube*

a. Create the kubectl certificate and key

kubectl talks to kube-apiserver over the secure port, so a TLS certificate and key are required for the secure connection.

# vim admin-csr.json
cd /atlas/backup/kubernetes-1.7.6/ssl
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin

cp admin*.pem /etc/kubernetes/ssl/

Create the kubectl kubeconfig file

# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER}
# set the client authentication parameters
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem
# set the context parameters
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
# set the default context
kubectl config use-context kubernetes

The generated kubeconfig is saved to ~/.kube/config; copy it to the shared directory so the other nodes can use it later.

cp -r /root/.kube/config /atlas/backup/kubernetes-1.7.6/.kube

15. Configure the server components

First copy the binaries.

cp -r /atlas/backup/kubernetes-1.7.6/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

Certificates and private key

Create the signing request, generate the certificate and private key, and place them in /etc/kubernetes/ssl/.

cd /atlas/backup/kubernetes-1.7.6/ssl 
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"${MASTER_IP}",
"${CLUSTER_KUBERNETES_SVC_IP}",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes


mkdir -p /etc/kubernetes/ssl/
cp kubernetes*.pem /etc/kubernetes/ssl/

When kubelet starts for the first time, it sends a TLS bootstrapping request to kube-apiserver; kube-apiserver checks whether the token in the kubelet's request matches its configured token.csv, and if it does, it automatically issues a certificate and key for the kubelet.

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
mv token.csv /etc/kubernetes/

kube-apiserver

Create the systemd unit file for kube-apiserver; it specifies the etcd endpoints, authentication settings, RBAC, and so on. kube-apiserver exposes the RESTful API and is the entry point for managing and controlling the entire cluster. The apiserver wraps the CRUD operations on resource objects and persists them to etcd; its REST API is called by both external clients and internal components.

cat  > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${MASTER_IP} \\
--bind-address=${MASTER_IP} \\
--insecure-bind-address=${MASTER_IP} \\
--authorization-mode=RBAC \\
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
--kubelet-https=true \\
--experimental-bootstrap-token-auth \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--service-node-port-range=${NODE_PORT_RANGE} \\
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
--etcd-servers=${ETCD_ENDPOINTS} \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/lib/audit.log \\
--event-ttl=1h \\
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start the kube-apiserver service

cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

kube-controller-manager

Configure and start the kube-controller-manager service; the controller-manager is the automated control center for all resources in the Kubernetes cluster.

cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=127.0.0.1 \\
--master=http://${MASTER_IP}:8080 \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--cluster-cidr=${CLUSTER_CIDR} \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

kube-scheduler

Configure and start the kube-scheduler service; kube-scheduler is the scheduler, mainly responsible for Pod scheduling, and it decides which server each Pod is ultimately placed on.

cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--address=127.0.0.1 \\
--master=http://${MASTER_IP}:8080 \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
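
With kube-apiserver, kube-controller-manager, and kube-scheduler all running, the control plane can be sanity-checked through kubectl; a sketch (the commented output is what a healthy cluster typically reports; formatting differs by version):

kubectl get componentstatuses
# NAME                 STATUS    MESSAGE              ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   {"health": "true"}
# etcd-1               Healthy   {"health": "true"}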

16. Configure the node

Because the master node is also used as a worker node, it needs the node-side configuration too; the other nodes, however, are configured inside the OpenHPC image, so their setup differs from what is done here.

First copy the binaries.

cp -r /atlas/backup/kubernetes-1.7.6/kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the kubelet bootstrapping kubeconfig file and copy it into /etc/kubernetes/.

# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
cp bootstrap.kubeconfig /etc/kubernetes/

This file is needed on every node, so copy it to the shared directory.

cp bootstrap.kubeconfig /atlas/backup/kubernetes-1.7.6/node 

Create the systemd unit file for kubelet. kubelet manages the lifecycle of the Pods on its node and periodically reports basic information about the node and its Pods to the master; it receives Pod creation requests from the apiserver and starts and stops Pods. Every node uses the same unit file except for the NODE_IP value.

cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
--address=${NODE_IP} \\
--hostname-override=${NODE_IP} \\
--pod-infra-container-image=172.16.10.10:5000/pod-infrastructure:latest \\
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--require-kubeconfig \\
--cert-dir=/etc/kubernetes/ssl \\
--cluster-dns=${CLUSTER_DNS_SVC_IP} \\
--cluster-domain=${CLUSTER_DNS_DOMAIN} \\
--hairpin-mode promiscuous-bridge \\
--allow-privileged=true \\
--serialize-image-pulls=false \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

When a node starts for the first time it pulls the pod-infrastructure image; downloading that image is very slow on this cluster, so it was mirrored locally and the unit file was changed accordingly.

[root@node7 dashboard]#docker images 
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.access.redhat.com/rhel7/pod-infrastructure latest 99965fb98423 7 weeks ago 209MB
172.16.10.10:5000/pod-infrastructure latest 1158bd68df6d 3 months ago 209MB
# pulling the pod-infrastructure image from the internet is slow on this cluster, so it was pushed into the local repository
--pod-infra-container-image=172.16.10.10:5000/pod-infrastructure:latest \\
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \\
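
The local copy was presumably produced by mirroring the upstream image into the private registry; a sketch, assuming a plain Docker registry is already listening on 172.16.10.10:5000:

# pull the upstream image once, retag it for the private registry, and push it
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
docker tag registry.access.redhat.com/rhel7/pod-infrastructure:latest 172.16.10.10:5000/pod-infrastructure:latest
docker push 172.16.10.10:5000/pod-infrastructure:latest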

17. Start the kubelet service

cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Verification

kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-n62mb 18s kubelet-bootstrap Pending
# Pending means not yet approved; Approved,Issued means the request has been accepted
kubectl certificate approve csr-n62mb
certificatesigningrequest "csr-n62mb" approved
# the newly joined node is now visible
kubectl get nodes
NAME STATUS AGE VERSION
172.16.10.17 Ready 20s v1.6.10

kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-2JEdaLskngS2xSbyM0vihsxKe0gliuXbtVgu5WSTxcY 2d kubelet-bootstrap Approved,Issued
node-csr-AXt9Sy1PgX9zOze6V6PxXwTCrxXNKBlKEdsgrN4I_k0 1d kubelet-bootstrap Approved,Issued

After the approval above, the node automatically generated its kubelet kubeconfig file and key pair.

ls -l /etc/kubernetes/kubelet.kubeconfig
-rw-------. 1 root root 2279 Oct 12 16:35 /etc/kubernetes/kubelet.kubeconfig

ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r--. 1 root root 1046 Oct 12 16:35 /etc/kubernetes/ssl/kubelet-client.crt
-rw-------. 1 root root 227 Oct 12 16:34 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r--. 1 root root 1111 Oct 12 16:35 /etc/kubernetes/ssl/kubelet.crt
-rw-------. 1 root root 1675 Oct 12 16:35 /etc/kubernetes/ssl/kubelet.key

18. The kube-proxy service

kube-proxy implements Service communication and load balancing in Kubernetes. Configure the kube-proxy service: create the certificate request, certificate, and private key, create the kube-proxy kubeconfig file, create the kube-proxy systemd unit file, and start the service. All of these steps are written out together below.

cd /atlas/backup/kubernetes-1.7.6/ssl/node/
cat > /atlas/backup/kubernetes-1.7.6/ssl/node/kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/etc/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

cp kube-proxy*.pem /etc/kubernetes/ssl/
cp kube-proxy*.pem /atlas/backup/kubernetes-1.7.6/ssl/node

# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
# set the client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

cp kube-proxy.kubeconfig /etc/kubernetes/
cp /atlas/backup/kubernetes-1.7.6/ssl/node/kube-proxy.kubeconfig /etc/kubernetes/

mkdir -p /var/lib/kube-proxy
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
--bind-address=${NODE_IP} \\
--hostname-override=${NODE_IP} \\
--cluster-cidr=${SERVICE_CIDR} \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cp kube-proxy.service /etc/systemd/system/

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
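
As a quick check, kube-proxy (in its default iptables mode) should now have programmed NAT chains for the cluster Services; a sketch (chain names assume the iptables proxy mode):

iptables -t nat -nL KUBE-SERVICES | head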

19. kubedns

Deploy the kubedns add-on; the official manifests live in kubernetes/cluster/addons/dns.

mkdir /atlas/backup/kubernetes-1.7.6/dns 
cd /atlas/backup/kubernetes-1.7.6/kubernetes/cluster/addons/dns
cp *.yaml *.base /root/dns
cp kubedns-controller.yaml.base kubedns-controller.yaml
cp kubedns-svc.yaml.base kubedns-svc.yaml


diff kubedns-svc.yaml.base kubedns-svc.yaml
30c30
< clusterIP: __PILLAR__DNS__SERVER__
---
> clusterIP: 10.254.0.2




diff kubedns-controller.yaml.base kubedns-controller.yaml
58c58
< image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
---
> image: 172.16.10.10:5000/k8s-dns-kube-dns-amd64:v1.14.4
88c88
< - --domain=__PILLAR__DNS__DOMAIN__.
---
> - --domain=cluster.local.
92c92
< __PILLAR__FEDERATIONS__DOMAIN__MAP__
---
> #__PILLAR__FEDERATIONS__DOMAIN__MAP__
110c110
< image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
---
> image: 172.16.10.10:5000/k8s-dns-dnsmasq-nanny-amd64:v1.14.4
129c129
< - --server=/__PILLAR__DNS__DOMAIN__/127.0.0.1#10053
---
> - --server=/cluster.local./127.0.0.1#10053
148c148
< image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
---
> image: 172.16.10.10:5000/k8s-dns-sidecar-amd64:v1.14.4
161,162c161,162
< - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,A
< - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,A
---
> - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
> - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A


ls /root/dns/*.yaml
kubedns-cm.yaml kubedns-controller.yaml kubedns-sa.yaml kubedns-svc.yaml
kubectl create -f .

The changes are mainly the domain name and the images. The Google images cannot be downloaded from inside the cluster, so I downloaded them in advance and placed them in the private registry; the manifests here therefore pull the images directly from the private registry.
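
One way the images might have been mirrored into the private registry beforehand (a sketch run on a machine that can reach gcr.io; note the tag is renamed from 1.14.4 to v1.14.4 to match the manifests above):

for img in k8s-dns-kube-dns-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-sidecar-amd64; do
docker pull gcr.io/google_containers/${img}:1.14.4
docker tag gcr.io/google_containers/${img}:1.14.4 172.16.10.10:5000/${img}:v1.14.4
docker push 172.16.10.10:5000/${img}:v1.14.4
done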

Verify kubedns

cat > my-nginx.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: 172.16.10.10:5000/nginx:1.7.9
ports:
- containerPort: 80
EOF
kubectl create -f my-nginx.yaml
kubectl expose deploy my-nginx
kubectl get services --all-namespaces |grep my-nginx
default my-nginx 10.254.189.4 <none> 80/TCP 14m


cat > pod-nginx.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: 172.16.10.10:5000/nginx:1.7.9
ports:
- containerPort: 80
EOF

kubectl create -f pod-nginx.yaml
kubectl exec nginx -i -t -- /bin/bash
root@nginx:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local tjwq01.ksyun.com
options ndots:5

root@nginx:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.86.48): 48 data bytes

root@nginx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes

root@nginx:/# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 48 data bytes

20. dashboard

Deploy the dashboard add-on; the official manifests live in kubernetes/cluster/addons/dashboard.

mkdir /atlas/backup/kubernetes-1.7.6/kubernetes/dashboard
cd /atlas/backup/kubernetes-1.7.6/kubernetes/cluster/addons/dashboard/
ls *.yaml
dashboard-controller.yaml dashboard-service.yaml
cp /atlas/backup/kubernetes-1.7.6/kubernetes/cluster/addons/dashboard/*.yaml /atlas/backup/kubernetes-1.7.6/dashboard

cat > dashboard-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: dashboard
namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: dashboard
subjects:
- kind: ServiceAccount
name: dashboard
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
EOF



diff dashboard-service.yaml.orig dashboard-service.yaml
10a11
> type: NodePort

diff dashboard-controller.orig dashboard-controller
20a21
> serviceAccountName: dashboard
23c24
< image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
---
> image: 172.16.10.10:5000/kubernetes-dashboard-amd64:v1.6.1


ls *.yaml
dashboard-controller.yaml dashboard-rbac.yaml dashboard-service.yaml
kubectl create -f .

Verification

kubectl get services kubernetes-dashboard -n kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.254.229.169 <nodes> 80:8895/TCP 2d

kubectl get deployment kubernetes-dashboard -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1 1 1 1 2d

kubectl get pods -n kube-system | grep dashboard
kubernetes-dashboard-3677875397-1zwhg 1/1 Running 0 2d

There are three ways to access the dashboard:

  1. The kubernetes-dashboard service exposes a NodePort, so the dashboard can be reached at http://NodeIP:nodePort (a sketch follows this list);

  2. Through kube-apiserver;

  3. Through kubectl proxy:
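
For method 1, the service listing above shows the dashboard exposed on NodePort 8895, so it can be reached directly at any node's IP; a sketch:

curl -I http://172.16.10.10:8895/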

Accessing the dashboard through kubectl proxy

kubectl proxy --address='172.16.10.10' --port=8086 --accept-hosts='^*$'
Starting to serve on 172.16.10.10:8086

Open http://172.16.10.10:8086/ui in a browser; it redirects automatically to: http://172.16.10.10:8086/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default

Accessing the dashboard through kube-apiserver

kubectl cluster-info
Kubernetes master is running at https://172.16.10.10:6443
KubeDNS is running at https://172.16.10.10:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://172.16.10.10:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Because RBAC authorization is enabled on kube-apiserver, and a browser reaches kube-apiserver with an anonymous certificate, requests to the secure port fail authorization. The insecure port has to be used to access kube-apiserver here:

Browser URL: http://172.16.10.10:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard


Because the Heapster add-on is missing, the dashboard currently cannot display CPU, memory, and other metric graphs for Pods and Nodes.

II. Operations on the login node

The login node only needs the etcd cluster service and the kubectl client.

First load the environment variables.

export NODE_NAME=etcd-host1
export NODE_IP=172.16.10.18
export NODE_IPS="172.16.10.10 172.16.10.18"
export ETCD_NODES=etcd-host0=http://172.16.10.10:2380,etcd-host1=http://172.16.10.18:2380
source /atlas/backup/kubernetes-1.7.6/environment.sh
mkdir -pv /etc/kubernetes/ssl
cp /atlas/backup/kubernetes-1.7.6/ssl/ca* /etc/kubernetes/ssl

Set up the etcd service

yum install -y /atlas/backup/kubernetes-1.7.6/etcd3-3.0.14-2.el7.x86_64.rpm 

cat > /etc/etcd/etcd.conf <<EOF
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.10.18:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.10.18:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.10.18:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://172.16.10.10:2380,etcd2=http://172.16.10.18:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.10.18:2379"
EOF

sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service


systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

kubectl

cp /atlas/backup/kubernetes-1.7.6/kubernetes/client/bin/kube* /usr/local/bin/
chmod a+x /usr/local/bin/kube*
chmod a+r /usr/local/bin/kube*
mkdir /root/.kube
cp -r /atlas/backup/kubernetes-1.7.6/.kube/config /root/.kube

III. Operations on the compute nodes

Node setup is a little special: all of the cluster's nodes run from OpenHPC, and Kubernetes configuration done directly on a node does not survive a reboot, so it has to be baked into the OpenHPC image.

First copy all of the required packages and binaries into the OpenHPC image.

cp /atlas/backup/kubernetes-1.7.6/environment.sh /atlas/os_images/compute_node_v0.2.0/usr/local/bin
cp /atlas/backup/kubernetes-1.7.6/ssl/ca* /atlas/os_images/compute_node_v0.2.0/etc/kubernetes/ssl
cp /atlas/backup/kubernetes-1.7.6/flannel-0.7.1-2.el7.x86_64.rpm /atlas/os_images/compute_node_v0.2.0/tmp

cat > /atlas/os_images/compute_node_v0.2.0/etc/sysconfig/flanneld <<EOF
FLANNEL_ETCD_ENDPOINTS="http://172.16.10.10:2379,http://172.16.10.18:2379"
FLANNEL_ETCD_PREFIX="/flannel/network"
FLANNEL_OPTIONS="--iface=eth0"
EOF

cp /atlas/backup/kubernetes-1.7.6/kubernetes/client/bin/kube* /atlas/os_images/compute_node_v0.2.0/usr/local/bin/
chmod a+x /atlas/os_images/compute_node_v0.2.0/usr/local/bin/kube*
chmod a+r /atlas/os_images/compute_node_v0.2.0/usr/local/bin/kube*

mkdir /atlas/os_images/compute_node_v0.2.0/root/.kube -pv
cp /atlas/backup/kubernetes-1.7.6/.kube/config /atlas/os_images/compute_node_v0.2.0/root/.kube

cp -r /atlas/backup/kubernetes-1.7.6/kubernetes/server/bin/{kube-proxy,kubelet} /atlas/os_images/compute_node_v0.2.0/usr/local/bin/

cp /atlas/backup/kubernetes-1.7.6/ssl/node/bootstrap.kubeconfig /atlas/os_images/compute_node_v0.2.0/etc/kubernetes/

mkdir /atlas/os_images/compute_node_v0.2.0/var/lib/kubelet

cp /atlas/backup/kubernetes-1.7.6/ssl/node/kube-proxy.kubeconfig /atlas/os_images/compute_node_v0.2.0/etc/kubernetes/

mkdir -p /atlas/os_images/compute_node_v0.2.0/var/lib/kube-proxy

chroot into the OpenHPC image, install flannel, modify the docker configuration, and create the symlinks so the services start at boot.

chroot /atlas/os_images/compute_node_v0.2.0
yum install -y -q vim bash-completion tree /tmp/flannel-0.7.1-2.el7.x86_64.rpm

vim /usr/lib/systemd/system/docker.service

[Service]
...
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=error --iptables=false $DOCKER_NETWORK_OPTIONS

cd /etc/systemd/system/multi-user.target.wants
#systemctl enable flanneld
ln -sv /usr/lib/systemd/system/flanneld.service flanneld.service
#systemctl enable docker
ln -sv /usr/lib/systemd/system/docker.service docker.service

ln -sv /etc/systemd/system/kubelet.service kubelet.service
ln -sv /etc/systemd/system/kube-proxy.service kube-proxy.service
exit

Because each node's kubelet and kube-proxy unit files are different, this is handled through OpenHPC's per-node provisioning. First write out each node's unit files and copy them to the designated directory.

cd /opt/ohpc/pub/examples/network/centos

cp /atlas/backup/kubernetes-1.7.6/ssl/node1/kubelet.service node1-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node1/kube-proxy.service node1-kube-proxy.service

cp /atlas/backup/kubernetes-1.7.6/ssl/node2/kubelet.service node2-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node2/kube-proxy.service node2-kube-proxy.service

cp /atlas/backup/kubernetes-1.7.6/ssl/node3/kubelet.service node3-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node3/kube-proxy.service node3-kube-proxy.service

cp /atlas/backup/kubernetes-1.7.6/ssl/node4/kubelet.service node4-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node4/kube-proxy.service node4-kube-proxy.service

cp /atlas/backup/kubernetes-1.7.6/ssl/node5/kubelet.service node5-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node5/kube-proxy.service node5-kube-proxy.service

cp /atlas/backup/kubernetes-1.7.6/ssl/node6/kubelet.service node6-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node6/kube-proxy.service node6-kube-proxy.service

cp /atlas/backup/kubernetes-1.7.6/ssl/node7/kubelet.service node7-kubelet.service
cp /atlas/backup/kubernetes-1.7.6/ssl/node7/kube-proxy.service node7-kube-proxy.service
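
Since these per-node files differ from the master's kubelet.service (step 16) only in the NODE_IP value, they could be generated from a template instead of being written by hand; a hypothetical sketch (the template file name, placeholder, and node IPs are assumptions):

for i in 1 2 3 4 5 6 7; do
NODE_IP=172.16.10.1${i}   # assumed addressing, adjust to the real node IPs
sed "s/NODE_IP_PLACEHOLDER/${NODE_IP}/g" kubelet.service.template > /atlas/backup/kubernetes-1.7.6/ssl/node${i}/kubelet.service
done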

Use OpenHPC's wwsh command to distribute them to each node.

wwsh
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node1-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node2-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node3-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node4-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node5-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node6-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node7-kubelet.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node1-kube-proxy.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node2-kube-proxy.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node3-kube-proxy.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node4-kube-proxy.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node5-kube-proxy.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node6-kube-proxy.service
Warewulf> file import /opt/ohpc/pub/examples/network/centos/node7-kube-proxy.service

# the "Warewulf>" prompt is omitted below
file set node1-kube-proxy.service node2-kube-proxy.service node3-kube-proxy.service node4-kube-proxy.service node5-kube-proxy.service node6-kube-proxy.service node7-kube-proxy.service --path=/etc/systemd/system/kube-proxy.service

file set node1-kubelet.service node2-kubelet.service node3-kubelet.service node4-kubelet.service node5-kubelet.service node6-kubelet.service node7-kubelet.service --path=/etc/systemd/system/kubelet.service

provision set node1 --fileadd=node1-kube-proxy.service --fileadd=node1-kubelet.service
provision set node2 --fileadd=node2-kube-proxy.service --fileadd=node2-kubelet.service
provision set node3 --fileadd=node3-kube-proxy.service --fileadd=node3-kubelet.service
provision set node4 --fileadd=node4-kube-proxy.service --fileadd=node4-kubelet.service
provision set node5 --fileadd=node5-kube-proxy.service --fileadd=node5-kubelet.service
provision set node6 --fileadd=node6-kube-proxy.service --fileadd=node6-kubelet.service
provision set node7 --fileadd=node7-kube-proxy.service --fileadd=node7-kubelet.service
exit

Finally, bake the configured unit files into the container image and reboot all of the nodes.

After a node's CSR is approved, it automatically generates its public and private keys, but if a node server crashes and reboots it would have to be approved again. To solve this, I put the generated keys into the OpenHPC image so they are loaded automatically when the system reboots. The steps are as follows.

# copy node#'s keys and kubeconfig off of the node
cp kubelet.kubeconfig ssl/* /atlas/backup/kubernetes-1.7.6/ssl/node#/

# on the master node, rename all of the keys, adding a node-specific prefix
rename kube node1-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

rename kube node2-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

rename kube node3-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

rename kube node4-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

rename kube node5-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

rename kube node6-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

rename kube node7-kube ./*
cp -n ./* /opt/ohpc/pub/examples/network/centos/

# import every node's keys into OpenHPC
wwsh
file import node1-kubelet-client.crt node1-kubelet-client.key node1-kubelet.crt node1-kubelet.key node1-kubelet.kubeconfig node2-kubelet-client.crt node2-kubelet-client.key node2-kubelet.crt node2-kubelet.key node2-kubelet.kubeconfig node3-kubelet-client.crt node3-kubelet-client.key node3-kubelet.crt node3-kubelet.key node3-kubelet.kubeconfig node4-kubelet-client.crt node4-kubelet-client.key node4-kubelet.crt node4-kubelet.key node4-kubelet.kubeconfig node5-kubelet-client.crt node5-kubelet-client.key node5-kubelet.crt node5-kubelet.key node5-kubelet.kubeconfig node6-kubelet-client.crt node6-kubelet-client.key node6-kubelet.crt node6-kubelet.key node6-kubelet.kubeconfig node7-kubelet-client.crt node7-kubelet-client.key node7-kubelet.crt node7-kubelet.key node7-kubelet.kubeconfig

# set the destination path for the keys on the nodes
file set node1-kubelet-client.crt node2-kubelet-client.crt node3-kubelet-client.crt node4-kubelet-client.crt node5-kubelet-client.crt node6-kubelet-client.crt node7-kubelet-client.crt --path=/etc/kubernetes/ssl/kubelet-client.crt
file set node1-kubelet-client.key node2-kubelet-client.key node3-kubelet-client.key node4-kubelet-client.key node5-kubelet-client.key node6-kubelet-client.key node7-kubelet-client.key --path=/etc/kubernetes/ssl/kubelet-client.key
file set node1-kubelet.crt node2-kubelet.crt node3-kubelet.crt node4-kubelet.crt node5-kubelet.crt node6-kubelet.crt node7-kubelet.crt --path=/etc/kubernetes/ssl/kubelet.crt
file set node1-kubelet.key node2-kubelet.key node3-kubelet.key node4-kubelet.key node5-kubelet.key node6-kubelet.key node7-kubelet.key --path=/etc/kubernetes/ssl/kubelet.key
file set node1-kubelet.kubeconfig node2-kubelet.kubeconfig node3-kubelet.kubeconfig node4-kubelet.kubeconfig node5-kubelet.kubeconfig node6-kubelet.kubeconfig node7-kubelet.kubeconfig --path=/etc/kubernetes/kubelet.kubeconfig

# each node gets its own keys added
provision set node1 --fileadd=node1-kubelet-client.crt --fileadd=node1-kubelet-client.key --fileadd=node1-kubelet.crt --fileadd=node1-kubelet.key --fileadd=node1-kubelet.kubeconfig
provision set node2 --fileadd=node2-kubelet-client.crt --fileadd=node2-kubelet-client.key --fileadd=node2-kubelet.crt --fileadd=node2-kubelet.key --fileadd=node2-kubelet.kubeconfig
provision set node3 --fileadd=node3-kubelet-client.crt --fileadd=node3-kubelet-client.key --fileadd=node3-kubelet.crt --fileadd=node3-kubelet.key --fileadd=node3-kubelet.kubeconfig
provision set node4 --fileadd=node4-kubelet-client.crt --fileadd=node4-kubelet-client.key --fileadd=node4-kubelet.crt --fileadd=node4-kubelet.key --fileadd=node4-kubelet.kubeconfig
provision set node5 --fileadd=node5-kubelet-client.crt --fileadd=node5-kubelet-client.key --fileadd=node5-kubelet.crt --fileadd=node5-kubelet.key --fileadd=node5-kubelet.kubeconfig
provision set node6 --fileadd=node6-kubelet-client.crt --fileadd=node6-kubelet-client.key --fileadd=node6-kubelet.crt --fileadd=node6-kubelet.key --fileadd=node6-kubelet.kubeconfig
provision set node7 --fileadd=node7-kubelet-client.crt --fileadd=node7-kubelet-client.key --fileadd=node7-kubelet.crt --fileadd=node7-kubelet.key --fileadd=node7-kubelet.kubeconfig

file sync
wwsh file sync