[root@k8s-master ~]# kubectl explain svc
KIND:     Service
VERSION:  v1

DESCRIPTION:
     Service is a named abstraction of software service (for example, mysql)
     consisting of local port (for example 3306) that the proxy listens on, and
     the selector that determines which pods will answer requests sent through
     the proxy.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec	<Object>
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status	<Object>
     Most recently observed status of the service. Populated by the system.
     Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
[root@k8s-master mainfests]# kubectl get pod redis-5b5d6fbbbd-v82pw -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
redis-5b5d6fbbbd-v82pw   1/1       Running   0          20d       10.244.1.16   k8s-node01
[root@k8s-master mainfests]# while true;do curl http://192.168.56.11:30080/;sleep 1;done
Hello MyApp | Version: v1 | Pod Name
Hello MyApp | Version: v1 | Pod Name
Hello MyApp | Version: v1 | Pod Name
Hello MyApp | Version: v1 | Pod Name
Hello MyApp | Version: v1 | Pod Name
Hello MyApp | Version: v1 | Pod Name
[root@k8s-master mainfests]# kubectl explain svc.spec.sessionAffinity
KIND:     Service
VERSION:  v1
FIELD: sessionAffinity <string>
DESCRIPTION:
     Supports "ClientIP" and "None". Used to maintain session affinity. Enable
     client IP based session affinity. Must be ClientIP or None. Defaults to
     None. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
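To see the effect, you can switch an existing Service over to client-IP affinity. A minimal sketch using kubectl patch (the Service name myapp matches the one used in this walkthrough; the JSON path is the standard Service spec):

    kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'

After this, repeating the curl loop above should keep returning the same Pod for a given client instead of round-robining across all replicas.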
(2) Create the headless Service
[root@k8s-master mainfests]# kubectl apply -f myapp-svc-headless.yaml
service/myapp-headless created
[root@k8s-master mainfests]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        36d
myapp            NodePort    10.101.245.119   <none>        80:30080/TCP   1h
myapp-headless   ClusterIP   None             <none>        80/TCP         5s
redis            ClusterIP   10.107.238.182   <none>        6379/TCP       2h
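For reference, a minimal sketch of what myapp-svc-headless.yaml might contain (the selector labels are assumptions; clusterIP: None is what makes the Service headless):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-headless
      namespace: default
    spec:
      selector:
        app: myapp          # assumed to match the myapp Pods
      clusterIP: None       # headless: no VIP is allocated; DNS returns the Pod IPs directly
      ports:
      - port: 80
        targetPort: 80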
(3) Verify resolution through CoreDNS
[root@k8s-master mainfests]# dig -t A myapp-headless.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> -t A myapp-headless.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62028
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A
;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 5 IN A 10.244.1.18
myapp-headless.default.svc.cluster.local. 5 IN A 10.244.1.19
myapp-headless.default.svc.cluster.local. 5 IN A 10.244.2.15
myapp-headless.default.svc.cluster.local. 5 IN A 10.244.2.16
myapp-headless.default.svc.cluster.local. 5 IN A 10.244.2.17
[root@k8s-master mainfests]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   36d
DESCRIPTION:
     DEPRECATED - This group version of Deployment is deprecated by
     apps/v1beta2/Deployment. See the release notes for more information.
     Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object metadata.

   spec	<Object>
     Specification of the desired behavior of the Deployment.

   status	<Object>
     Most recently observed status of the Deployment.
DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.
FIELDS:
   minReadySeconds	<integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused	<boolean>
     Indicates that the deployment is paused and will not be processed by the
     deployment controller.

   progressDeadlineSeconds	<integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to
     process failed deployments and a condition with a
     ProgressDeadlineExceeded reason will be surfaced in the deployment status.
     Note that progress will not be estimated during the time a deployment is
     paused. This is not set by default.

   replicas	<integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit	<integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified.

   rollbackTo	<Object>
     DEPRECATED. The config this deployment is rolling back to. Will be
     cleared after rollback is done.

   selector	<Object>
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment.

   strategy	<Object>
     The deployment strategy to use to replace existing pods with new ones.

   template	<Object> -required-
     Template describes the pods that will be created.
.spec.revisionHistoryLimit is an optional field that specifies how many old ReplicaSets to retain for rollback. The ideal value depends on how frequently and how reliably you roll out new Deployments. If it is unset, all old ReplicaSets are kept by default, consuming storage in etcd and cluttering the output of kubectl get rs. Each Deployment's revision history is stored in its ReplicaSets; once an old ReplicaSet is deleted, the Deployment can no longer roll back to that revision.
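As a sketch, the deploy-demo.yaml used below might set this field explicitly (the image and labels are assumptions; only the two-replica count is confirmed by the output that follows):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deploy
      namespace: default
    spec:
      replicas: 2
      revisionHistoryLimit: 10        # keep at most 10 old ReplicaSets for rollback
      selector:
        matchLabels:
          app: myapp                  # assumed labels
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v1   # assumed image
            ports:
            - containerPort: 80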
[root@k8s-master ~]# kubectl apply -f deploy-demo.yaml
[root@k8s-master ~]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   2         0         0            0           1s
[root@k8s-master ~]# kubectl get rs
[root@k8s-master ~]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   2         2         2            2           10s
We can see that the Deployment has created 2 replicas, all of them up to date (built from the latest Pod template) and available (a Pod counts as available once it has been ready for at least .spec.minReadySeconds). Running kubectl get rs and kubectl get pods shows that the ReplicaSet (RS) and the Pods have been created.
[root@k8s-master ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY     AGE
myapp-deploy-2035384211   2         2         0         18s
[root@k8s-node01 ~]# docker ps |grep pause
0cbf85d4af9e   k8s.gcr.io/pause:3.1   "/pause"   7 days ago   Up 7 days   k8s_POD_myapp-848b5b879b-ksgnv_default_0af41a40-a771-11e8-84d2-000c2972dc1f_0
d6e4d77960a7   k8s.gcr.io/pause:3.1   "/pause"   7 days ago   Up 7 days   k8s_POD_myapp-848b5b879b-5f69p_default_09bc0ba1-a771-11e8-84d2-000c2972dc1f_0
5f7777c55d2a   k8s.gcr.io/pause:3.1   "/pause"   7 days ago   Up 7 days   k8s_POD_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_1
8e56ef2564c2   k8s.gcr.io/pause:3.1   "/pause"   7 days ago   Up 7 days   k8s_POD_client2_default_17dad486-a769-11e8-84d2-000c2972dc1f_1
7815c0d69e99   k8s.gcr.io/pause:3.1   "/pause"   7 days ago   Up 7 days   k8s_POD_nginx-deploy-5b595999-872c7_default_7e9df9f3-a6b6-11e8-84d2-000c2972dc1f_2
b4e806fa7083   k8s.gcr.io/pause:3.1   "/pause"   7 days ago   Up 7 days   k8s_POD_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_2
[root@k8s-master ~]# kubectl apply -f memleak-pod.yaml
pod/memleak-pod created
[root@k8s-master ~]# kubectl get pods -l app=memleak
NAME          READY     STATUS      RESTARTS   AGE
memleak-pod   0/1       OOMKilled   2          12s
[root@k8s-master ~]# kubectl get pods -l app=memleak
NAME          READY     STATUS             RESTARTS   AGE
memleak-pod   0/1       CrashLoopBackOff   2          28s
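The OOMKilled, then CrashLoopBackOff, progression above is what happens when a container keeps exceeding its memory limit. A sketch of what a manifest like memleak-pod.yaml could look like (the image and sizes are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: memleak-pod
      labels:
        app: memleak
    spec:
      containers:
      - name: simmemleak
        image: saadali/simmemleak     # assumed: any image that allocates memory without bound
        resources:
          requests:
            memory: "64Mi"
          limits:
            memory: "64Mi"            # the container is OOM-killed once usage crosses this limit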
[root@k8s-master ~]# kubectl explain pods
KIND:     Pod
VERSION:  v1
DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.
FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec	<Object>
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status	<Object>
     Most recently observed status of the pod. This data may not be up to
     date. Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
[root@k8s-master mainfests]# kubectl get pods --show-labels    # show pod labels
NAME       READY     STATUS    RESTARTS   AGE       LABELS
pod-demo   2/2       Running   0          25s       app=myapp,tier=frontend
[root@k8s-master mainfests]# kubectl get pods -l app    # filter pods that carry the app label
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   2/2       Running   0          1m
[root@k8s-master mainfests]# kubectl get pods -L app
NAME       READY     STATUS    RESTARTS   AGE       APP
pod-demo   2/2       Running   0          1m        myapp
[root@k8s-master mainfests]# kubectl label pods pod-demo release=canary    # add a label to pod-demo
pod/pod-demo labeled
[root@k8s-master mainfests]# kubectl get pods -l app --show-labels
NAME       READY     STATUS    RESTARTS   AGE       LABELS
pod-demo   2/2       Running   0          1m        app=myapp,release=canary,tier=frontend
[root@k8s-master mainfests]# kubectl label pods pod-demo release=stable --overwrite    # modify an existing label
pod/pod-demo labeled
[root@k8s-master mainfests]# kubectl get pods -l release
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   2/2       Running   0          2m
[root@k8s-master mainfests]# kubectl get pods -l release,app
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   2/2       Running   0          2m
2. Label selectors
Equality-based label selectors: =, ==, != (e.g. kubectl get pods -l app=test,app=dev)
Set-based label selectors: KEY in (v1,v2,v3), KEY notin (v1,v2,v3), !KEY (e.g. kubectl get pods -l "app in (test,dev)")
Operators: In, NotIn (the values field must be a non-empty list); Exists, DoesNotExist (the values field must be empty).
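In a resource spec these operators appear under matchExpressions. A fragment of a Deployment/ReplicaSet selector as a sketch (the keys and values are assumptions):

    selector:
      matchLabels:
        app: myapp                    # equality-based match
      matchExpressions:
      - key: release
        operator: In                  # values must be a non-empty list
        values: ["canary", "stable"]
      - key: tier
        operator: Exists              # values must be empty (omitted)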
3. Node label selectors
[root@k8s-master mainfests]# kubectl explain pod.spec
   nodeName	<string>
     NodeName is a request to schedule this pod onto a specific node. If it is
     non-empty, the scheduler simply schedules this pod onto that node,
     assuming that it fits resource requirements.
   nodeSelector	<map[string]string>
     NodeSelector is a selector which must be true for the pod to fit on a
     node. Selector which must match a node's labels for the pod to be
     scheduled on that node. More info:
     https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
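To pin pod-demo to k8s-node01 as in step (3) below, you would label the node and add a matching nodeSelector to the Pod spec. A sketch (the disktype key/value and the image are assumptions):

    kubectl label nodes k8s-node01 disktype=ssd

and in pod-demo.yaml:

    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1   # assumed image
      nodeSelector:
        disktype: ssd                 # only nodes carrying this label are eligible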
(3) Recreate pod-demo; it is now pinned to node k8s-node01
[root@k8s-master mainfests]# kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@k8s-master mainfests]# kubectl create -f pod-demo.yaml
pod/pod-demo created
[root@k8s-master mainfests]# kubectl get pods -o wide
NAME       READY     STATUS    RESTARTS   AGE       IP            NODE
pod-demo   2/2       Running   0          20s       10.244.1.13   k8s-node01
[root@k8s-master mainfests]# kubectl describe pod pod-demo
......
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  42s   default-scheduler   Successfully assigned default/pod-demo to k8s-node01
......
Syntax:
kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] -- [COMMAND] [args...] [options]
Practical examples
[root@k8s-master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1    # create an nginx application with 1 replica
deployment.apps/nginx-deploy created
[root@k8s-master ~]# kubectl get deployment    # check whether the application matches its desired state
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            1           40s
[root@k8s-master ~]# kubectl get pods    # list the pods
NAME                          READY     STATUS    RESTARTS   AGE
nginx-deploy-5b595999-44zwq   1/1       Running   0          1m
[root@k8s-master ~]# kubectl get pods -o wide    # see which node the pod runs on
NAME                          READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deploy-5b595999-44zwq   1/1       Running   0          1m        10.244.2.2   k8s-node02
[root@k8s-master ~]# kubectl get pods -o wide    # node01 has no local copy of the image, so it has to be pulled first
NAME                          READY     STATUS              RESTARTS   AGE       IP        NODE
nginx-deploy-5b595999-872c7   0/1       ContainerCreating   0          24s       <none>    k8s-node01
[root@k8s-master ~]# kubectl get pods -o wide
NAME                          READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deploy-5b595999-872c7   1/1       Running   0          56s       10.244.1.2   k8s-node01
/ # wget -O - -q http://nginx:80    # resolve the nginx Service name and request it
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
(2) Watch the rollout
[root@k8s-master ~]# kubectl rollout status deployment myapp    # follow the update as it progresses
Waiting for deployment "myapp" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
deployment "myapp" successfully rolled out
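The step that triggered this rollout is not captured above; a typical trigger is an image change, for example (the image tag is an assumption):

    kubectl set image deployment/myapp myapp=ikubernetes/myapp:v2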
[root@k8s-master ~]# vim /etc/sysconfig/kubelet    # configure kubelet to tolerate enabled swap
KUBELET_EXTRA_ARGS="--fail-swap-on=false"    # without this, kubelet fails to start while swap is on
This setting only suppresses the swap warning; ideally, disable swap altogether:
[root@k8s-master ~]# swapoff -a    # turn off swap
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap    # initialize the cluster
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0821 18:14:22.223765   18053 kernel_validator.go:81] Validating kernel version
I0821 18:14:22.223894   18053 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 51.033696 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: dx7mko.j2ug1lqjra5bf6p2
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
This token is used for mutual authentication between the master and joining nodes. Keep it secret and safe: anyone holding it can add authenticated nodes to the cluster. Tokens can be listed, created, and deleted with the kubeadm token command.
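For example (standard kubeadm subcommands):

    kubeadm token list      # show existing bootstrap tokens
    kubeadm token create    # mint a new token for joining additional nodes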
At this point the cluster initialization is complete; check the health of the cluster components with kubectl get cs:
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    43m       v1.11.2
The output above shows that the master components controller-manager, scheduler, and etcd are all healthy. So where is the apiserver? kubectl talks to the apiserver to read cluster state from etcd, so the fact that we can retrieve cluster state at all means the apiserver is running normally. kubectl get node shows the master in the NotReady state because the Pod network has not been deployed yet.
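The Pod network used here is flannel (the container listing below shows kube-flannel running). Deploying it is typically a single apply of the manifest from the flannel repository; the URL below is the commonly used one and may need pinning to a specific release for your cluster version:

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml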
[root@k8s-node01 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
5502c29b43df        f0fad859c909           "/opt/bin/flanneld -…"   3 minutes ago       Up 3 minutes                            k8s_kube-flannel_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_1
db1cc0a6fec4        d5c25579d0ff           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
bc54ad3399e8        k8s.gcr.io/pause:3.1   "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
cbfca066b71d        k8s.gcr.io/pause:3.1   "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_0
[root@k8s-master ~]# kubectl get node    # the node status has now changed to Ready
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node01   Ready     <none>    1d        v1.11.2
[root@k8s-master ~]# kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node01   Ready     <none>    1d        v1.11.2
k8s-node02   Ready     <none>    2h        v1.11.2
If you run into other problems while installing the cluster, you can reset it with the following commands:
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
[root@linux-node1 ~]# kubectl create -f coredns.yaml
serviceaccount "coredns" created
clusterrole.rbac.authorization.k8s.io "system:coredns" created
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" created
configmap "coredns" created
deployment.extensions "coredns" created
service "coredns" created
(3) Check the CoreDNS service
[root@linux-node1 ~]# kubectl get deployment -n kube-system
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns   2         2         2            0           1m

[root@linux-node1 ~]# kubectl get svc -n kube-system
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
coredns   ClusterIP   10.1.0.2     <none>        53/UDP,53/TCP   1m

[root@linux-node1 ~]# kubectl get pod -n kube-system
NAME                       READY     STATUS    RESTARTS   AGE
coredns-77c989547b-d84n8   1/1       Running   0          2m
coredns-77c989547b-j4ms2   1/1       Running   0          2m
(4) Test DNS resolution from inside a Pod
[root@linux-node1 ~]# kubectl run alpine --rm -ti --image=alpine -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup httpd-svc
nslookup: can't resolve '(null)': Name does not resolve

Name:      httpd-svc
Address 1: 10.1.230.129
/ # wget httpd-svc:8080
Connecting to httpd-svc:8080 (10.1.230.129:8080)
index.html           100% |********************************************************************************************************************************************|    45   0:00:00 ETA
[root@linux-node1 dashboard]# ll
total 20
-rw-r--r-- 1 root root  357 Aug 22 09:26 admin-user-sa-rbac.yaml
-rw-r--r-- 1 root root 4901 Aug 22 09:26 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  458 Aug 22 09:26 ui-admin-rbac.yaml
-rw-r--r-- 1 root root  477 Aug 22 09:26 ui-read-rbac.yaml
[root@linux-node1 dashboard]# kubectl create -f .
serviceaccount "admin-user" created
clusterrolebinding.rbac.authorization.k8s.io "admin-user" created
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
clusterrole.rbac.authorization.k8s.io "ui-admin" created
rolebinding.rbac.authorization.k8s.io "ui-admin-binding" created
clusterrole.rbac.authorization.k8s.io "ui-read" created
rolebinding.rbac.authorization.k8s.io "ui-read-binding" created
[root@linux-node1 dashboard]# kubectl get pods -o wide -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE       IP           NODE
coredns-77c989547b-d84n8                1/1       Running   0          55m       10.2.99.7    192.168.56.13
coredns-77c989547b-j4ms2                1/1       Running   0          55m       10.2.76.6    192.168.56.12
kubernetes-dashboard-66c9d98865-mps22   1/1       Running   0          4m        10.2.76.12   192.168.56.12

[root@linux-node1 dashboard]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
coredns                ClusterIP   10.1.0.2       <none>        53/UDP,53/TCP   56m
kubernetes-dashboard   NodePort    10.1.234.201   <none>        443:38974/TCP   5m
[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 36000    # create an application named net-test from the alpine image, with 2 replicas
deployment.apps "net-test" created
[root@linux-node1 ~]# kubectl get pod -o wide    # check pod status; the API Server reads this data from etcd
NAME                        READY     STATUS              RESTARTS   AGE       IP          NODE
net-test-7b949fc785-2v2qz   1/1       Running             0          56s       10.2.87.2   192.168.56.120
net-test-7b949fc785-6nrhm   0/1       ContainerCreating   0          56s       <none>      192.168.56.130
[root@linux-node1 ~]# kubectl get deployment net-test
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
net-test   2         2         2            2           22h
The kubectl get deployment command shows the state of net-test; the output above indicates that both replicas are running normally. While the Pods are being created, you can also run kubectl describe deployment net-test for more detail.
[root@linux-node1 ~]# kubectl get pod    # both replicas are in the Running state
NAME                        READY     STATUS    RESTARTS   AGE
net-test-5767cb94df-djt98   1/1       Running   0          22h
net-test-5767cb94df-zb8m4   1/1       Running   0          23h
[root@linux-node1 ~]# kubectl create -f nginx-service.yaml    # create the Service
service "nginx-service" created
[root@linux-node1 ~]# kubectl get service
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.1.0.1       <none>        443/TCP   4h
nginx-service   ClusterIP   10.1.213.126   <none>        80/TCP    15s    # this ClusterIP is the VIP
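For reference, a sketch of what nginx-service.yaml might contain (the selector and port mapping are assumptions; only the Service name and ClusterIP port are confirmed by the output above):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx          # assumed to match the nginx Pods' labels
      ports:
      - port: 80            # the VIP port shown above
        targetPort: 80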
When you run kubectl get node, a node's status should be Ready. If it shows NotReady, check whether kubelet is running on that node, and start it if it is not. If kubelet refuses to start, use systemctl status kubelet or journalctl -xe to find the cause. One case we encountered was a dependency on Docker: Docker itself would not start, so the next step was to dig into why Docker failed.
[root@linux-node2 system]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
   Active: active (running) since 四 2018-05-31 16:33:17 CST; 16h ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 53223 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─53223 /opt/kubernetes/bin/kubelet --address=192.168.56.120 --hostname-override=192.168.56.120 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experiment...
6月 01 08:51:09 linux-node2.example.com kubelet[53223]: E0601 08:51:09.355765   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:51:19 linux-node2.example.com kubelet[53223]: E0601 08:51:19.363906   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:51:29 linux-node2.example.com kubelet[53223]: E0601 08:51:29.385439   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:51:39 linux-node2.example.com kubelet[53223]: E0601 08:51:39.393790   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:51:49 linux-node2.example.com kubelet[53223]: E0601 08:51:49.401081   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:51:59 linux-node2.example.com kubelet[53223]: E0601 08:51:59.407863   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:52:09 linux-node2.example.com kubelet[53223]: E0601 08:52:09.415552   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:52:19 linux-node2.example.com kubelet[53223]: E0601 08:52:19.425998   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:52:29 linux-node2.example.com kubelet[53223]: E0601 08:52:29.443804   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
6月 01 08:52:39 linux-node2.example.com kubelet[53223]: E0601 08:52:39.450814   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
Hint: Some lines were ellipsized, use -l to show in full.
(5) Check the CSR requests (note: run this on linux-node1)
[root@linux-node1 ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   1m        kubelet-bootstrap   Pending
node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   1m        kubelet-bootstrap   Pending
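The approval step between the two listings was not captured; the standard way to approve pending CSRs is kubectl certificate approve, for example:

    kubectl get csr -o name | xargs kubectl certificate approve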
[root@linux-node1 ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   2m        kubelet-bootstrap   Approved,Issued
node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   2m        kubelet-bootstrap   Approved,Issued
Once this is done, the nodes show up in the Ready state:
[root@linux-node1 ssl]# kubectl get node
NAME             STATUS    ROLES     AGE       VERSION
192.168.56.120   Ready     <none>    50m       v1.10.1
192.168.56.130   Ready     <none>    46m       v1.10.1
[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target