[TOC]
Official kubeasz documentation: https://github.com/easzlab/kubeasz
## 1. Environment

| IP | Hostname | Roles | VM spec |
| --- | --- | --- | --- |
| 192.168.56.11 | k8s-master | deploy, master1, lb1, etcd | 4c4g |
| 192.168.56.12 | k8s-master2 | master2, lb2 | 4c4g |
| 192.168.56.13 | k8s-node01 | etcd, node | 2c2g |
| 192.168.56.14 | k8s-node02 | etcd, node | 2c2g |
| 192.168.56.110 | vip | | |

Software versions:

- System kernel: 3.10
- Docker: 18.09
- Kubernetes: 1.13
- etcd: 3.0
## 2. Preparation

Install the base packages (on all nodes, since ansible needs python on each target):

```bash
yum install -y epel-release
yum update -y
yum install python -y
```

Install ansible on the deploy node and distribute SSH keys to every node:

```bash
yum install -y ansible
ssh-keygen
for ip in 11 12 13 14; do ssh-copy-id 192.168.56.$ip; done
```
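The key-distribution loop above hard-codes the last octets of the node IPs. A small sketch of a helper that keeps the node list in one place (the names `NET_PREFIX`, `NODE_OCTETS`, and `nodes` are mine, not part of kubeasz):

```shell
# Hypothetical helper, not part of kubeasz: expand the last-octet list
# into full IPs so the node list is defined exactly once.
NET_PREFIX="192.168.56"
NODE_OCTETS="11 12 13 14"

nodes() {
  for ip in $NODE_OCTETS; do
    echo "${NET_PREFIX}.${ip}"
  done
}

# On the deploy node you would then run:
#   for host in $(nodes); do ssh-copy-id "root@${host}"; done
nodes
```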
Fetch kubeasz onto the deploy node:

```bash
[root@k8s-master ~]# git clone https://github.com/gjmzj/kubeasz.git
[root@k8s-master ~]# mv kubeasz/* /etc/ansible/
```

Download the binary tarball matching the version you need; here I chose 1.13. After some fiddling around, I ended up with k8s.1-13-5.tar.gz on the deploy node:

```bash
[root@k8s-master ~]# tar -zxf k8s.1-13-5.tar.gz
[root@k8s-master ~]# mv bin/* /etc/ansible/bin/
```
Edit the ansible inventory:

```bash
[root@k8s-master ~]# cd /etc/ansible/
[root@k8s-master ansible]# cp example/hosts.m-masters.example hosts
cp: overwrite 'hosts'? y
[root@k8s-master ansible]# vim hosts
```

```ini
[deploy]
192.168.56.11 NTP_ENABLED=no

[etcd]
192.168.56.11 NODE_NAME=etcd1
192.168.56.13 NODE_NAME=etcd2
192.168.56.14 NODE_NAME=etcd3

[kube-master]
192.168.56.11
192.168.56.12

[kube-node]
192.168.56.13
192.168.56.14

[lb]
192.168.56.12 LB_ROLE=backup
192.168.56.11 LB_ROLE=master

[all:vars]
DEPLOY_MODE=multi-master
MASTER_IP="192.168.56.110"
KUBE_APISERVER="https://{{ MASTER_IP }}:8443"
CLUSTER_NETWORK="flannel"
SERVICE_CIDR="10.68.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="20000-40000"
CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"
CLUSTER_DNS_SVC_IP="10.68.0.2"
CLUSTER_DNS_DOMAIN="cluster.local."
bin_dir="/opt/kube/bin"
ca_dir="/etc/kubernetes/ssl"
base_dir="/etc/ansible"
```

Verify that ansible can reach every node:

```bash
[root@k8s-master ansible]# ansible all -m ping
192.168.56.12 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.56.13 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.56.14 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.56.11 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
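One thing worth checking in the hosts file: SERVICE_CIDR (10.68.0.0/16) and CLUSTER_CIDR (172.20.0.0/16) must not overlap, or service and pod routing will clash. A minimal sketch of that check (the `ip2int` and `cidr_overlap` helpers are illustrative, not part of kubeasz):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "overlap" if the two CIDRs share any addresses, "ok" otherwise.
cidr_overlap() {  # usage: cidr_overlap 10.68.0.0/16 172.20.0.0/16
  local n1="${1%/*}" p1="${1#*/}" n2="${2%/*}" p2="${2#*/}"
  local m1=$(( 0xFFFFFFFF << (32 - p1) & 0xFFFFFFFF ))
  local m2=$(( 0xFFFFFFFF << (32 - p2) & 0xFFFFFFFF ))
  local a=$(ip2int "$n1") b=$(ip2int "$n2")
  # The ranges overlap iff one network's base falls inside the other.
  if [ $(( a & m2 )) -eq $(( b & m2 )) ] || [ $(( b & m1 )) -eq $(( a & m1 )) ]; then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 10.68.0.0/16 172.20.0.0/16   # prints "ok"
```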
## 3. Step-by-step installation

### 3.1 Create certificates and prepare for installation

```bash
[root@k8s-master ansible]# ansible-playbook 01.prepare.yml
```
### 3.2 Install the etcd cluster

```bash
[root@k8s-master ansible]# ansible-playbook 02.etcd.yml
[root@k8s-master ansible]# bash
[root@k8s-master ansible]# systemctl status etcd
[root@k8s-master ansible]# for ip in 11 13 14; do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.56.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done
https://192.168.56.11:2379 is healthy: successfully committed proposal: took = 7.967375ms
https://192.168.56.13:2379 is healthy: successfully committed proposal: took = 12.557643ms
https://192.168.56.14:2379 is healthy: successfully committed proposal: took = 9.70078ms
```
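If you script this health check, it helps to fail loudly when any endpoint is unhealthy. A hedged sketch (the `check_health` helper is my own name, not an etcdctl feature) that parses the `endpoint health` output:

```shell
# Illustrative helper: read `etcdctl endpoint health` lines on stdin and
# succeed only if every endpoint reported healthy.
check_health() {
  local unhealthy=0
  while read -r line; do
    case "$line" in
      *"is healthy"*) ;;                    # healthy endpoint, nothing to do
      *) unhealthy=$((unhealthy + 1)) ;;    # anything else counts as a failure
    esac
  done
  [ "$unhealthy" -eq 0 ]
}

# Against the real cluster you would pipe the loop from above into it:
#   for ip in 11 13 14; do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.56.$ip:2379 \
#     --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem \
#     --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done | check_health
printf '%s\n' "https://192.168.56.11:2379 is healthy" | check_health && echo "all healthy"
```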
### 3.3 Install docker

```bash
[root@k8s-master ansible]# ansible-playbook 03.docker.yml
```
### 3.4 Install the master nodes

```bash
[root@k8s-master ansible]# ansible-playbook 04.kube-master.yml
[root@k8s-master ansible]# systemctl status kube-apiserver
[root@k8s-master ansible]# systemctl status kube-controller-manager
[root@k8s-master ansible]# systemctl status kube-scheduler
[root@k8s-master ansible]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
```
### 3.5 Install the worker nodes

```bash
[root@k8s-master ansible]# ansible-playbook 05.kube-node.yml
[root@k8s-master ansible]# systemctl status kubelet
[root@k8s-master ansible]# systemctl status kube-proxy
[root@k8s-master ansible]# kubectl get nodes
NAME            STATUS                     ROLES    AGE     VERSION
192.168.56.11   Ready,SchedulingDisabled   master   6m56s   v1.13.5
192.168.56.12   Ready,SchedulingDisabled   master   6m57s   v1.13.5
192.168.56.13   Ready                      node     40s     v1.13.5
192.168.56.14   Ready                      node     40s     v1.13.5
```
### 3.6 Deploy the cluster network

```bash
[root@k8s-master ansible]# ansible-playbook 06.network.yml
[root@k8s-master ansible]# kubectl get pod -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-856rg   1/1     Running   0          115s
kube-flannel-ds-amd64-j4542   1/1     Running   0          115s
kube-flannel-ds-amd64-q9cmh   1/1     Running   0          115s
kube-flannel-ds-amd64-rhg66   1/1     Running   0          115s
```
### 3.7 Deploy the cluster add-ons (dns, dashboard)

```bash
[root@k8s-master ansible]# ansible-playbook 07.cluster-addon.yml
[root@k8s-master ansible]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
heapster               ClusterIP   10.68.29.48    <none>        80/TCP                   64s
kube-dns               ClusterIP   10.68.0.2      <none>        53/UDP,53/TCP,9153/TCP   71s
kubernetes-dashboard   NodePort    10.68.117.7    <none>        443:24190/TCP            64s
metrics-server         ClusterIP   10.68.107.56   <none>        443/TCP                  69s
[root@k8s-master ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.110:8443
CoreDNS is running at https://192.168.56.110:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.56.110:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master ansible]# kubectl top node
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.56.11   523m         13%    2345Mi          76%
192.168.56.12   582m         15%    1355Mi          44%
192.168.56.13   182m         10%    791Mi           70%
192.168.56.14   205m         11%    804Mi           71%
```
Alternatively, install the whole k8s cluster with a single ansible command:

```bash
ansible-playbook 90.setup.yml
```
### 3.8 Test DNS resolution

```bash
[root@k8s-master ansible]# kubectl run nginx --image=nginx --expose --port=80
[root@k8s-master ansible]# kubectl run busybox --rm -it --image=busybox /bin/sh
/ # nslookup nginx
Server:    10.68.0.2
Address:   10.68.0.2:53

Name:      nginx.default.svc.cluster.local
Address:   10.68.149.79
```
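The short name `nginx` resolves because the pod's resolv.conf search list expands it; the full name follows the `<service>.<namespace>.svc.<cluster-domain>` pattern. A sketch of that expansion (values taken from the defaults in the hosts file above):

```shell
# How the short service name expands to the FQDN that kube-dns answers for
# (cluster domain comes from CLUSTER_DNS_DOMAIN in the inventory).
svc="nginx"
ns="default"
domain="cluster.local"
fqdn="${svc}.${ns}.svc.${domain}"
echo "$fqdn"   # prints nginx.default.svc.cluster.local
```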