k8s learning outline — basics:
- cluster deployment and imperative command management
- resource types and configuration manifests
- Pod resources
- Pod controllers
- Service resources
- storage volumes
- ConfigMap and Secret resources
- StatefulSet controller
- authentication, authorization and admission control
- network model and network policies
- Pod resource scheduling
- CRD, custom resources, custom controllers and custom API servers (CNI = Container Network Interface, CRD = CustomResourceDefinition, CRI = Container Runtime Interface)
- resource metrics and the HPA controller
- the Helm package manager
- highly available Kubernetes
Cloud Native Apps: applications developed to run on a cloud platform from the start, rather than on a single machine. Serverless: combined with cloud-native apps, an emerging trend; FaaS (Function as a Service). Knative.
Monolithic application: also called a "monolith" — a change to any one part affects the whole. Layered architecture: each team maintains one tier (e.g. users, products, payments). Microservices: the service is split into many small functions, each running independently.
- Service registration and service discovery: problems inherent to distributed systems and microservices — a service once backed by three to five programs becomes thirty to fifty micro-services, and calls between services form a mesh.
- Non-static configuration, dynamic service discovery, a service bus
- Service orchestration systems: solve the difficulty of operations and deployment
- Container orchestration systems: solve system heterogeneity — each service depends on a different environment.
Service orchestration systems --> container orchestration systems
Container orchestration systems (mainstream: k8s, Borg, Docker Swarm, Apache Mesos + Marathon, DC/OS). What is container orchestration?
- Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
- Software teams use container orchestration to control and automate many tasks:
  - provisioning and deployment of containers
  - redundancy and availability of containers
  - scaling up or removing containers to spread application load evenly across host infrastructure
  - movement of containers from one host to another if there is a shortage of resources on a host, or if a host dies
  - allocation of resources between containers
  - external exposure of services running in a container to the outside world
  - load balancing and service discovery between containers
  - health monitoring of containers and hosts
  - configuration of an application in relation to the containers running it
In short, container orchestration is the automatic placement, coordination and management of containerized applications. It mainly covers: service discovery, load balancing, secrets/configuration/storage management, health checks, auto-[scaling/restart/healing] of containers and nodes, zero-downtime deploys.
- Pod resources
- Pod controllers: two ways to create pods: - standalone (unmanaged) pods: create the pod directly - controller-managed pods: create a Deployment, Service, etc.
ReplicationController:
ReplicaSet/rs: - replica count - label selector - pod template
- kubectl explain rs - kubectl explain rs.spec - replicas - selector - template
Deployment: stateless workloads; cares about group behavior, not individuals — only the replica count matters.
- kubectl explain deploy.spec
  - replicas
  - selector
  - template
  - strategy: update strategy
    - rollingUpdate
      - maxSurge: maximum number of pods above the desired count (an absolute number or a percentage, e.g. 1 or 20%)
      - maxUnavailable: maximum number of unavailable pods (an absolute number or a percentage)
    - type
      - Recreate
      - RollingUpdate
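The strategy fields above can be sketched in a manifest like this (a minimal sketch; the name myapp-deploy and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy        # hypothetical name
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most 1 pod above the desired count during an update
      maxUnavailable: 0     # never drop below the desired count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
```

With maxSurge=1 and maxUnavailable=0, an update proceeds one pod at a time and never reduces serving capacity.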
- minReadySeconds - revisionHistoryLimit: how many old revisions to keep
DaemonSet/ds: runs exactly one pod on each node, e.g. log shippers for ELK. - kubectl explain ds.spec - selector - template
- minReadySeconds - revisionHistoryLimit: how many old revisions to keep - updateStrategy
Service: proxy modes (implemented by kube-proxy): userspace, iptables, ipvs. userspace: default up to v1.1; iptables: default from v1.2 through v1.10; ipvs: available from v1.11.
- NodePort/ClusterIP: client --> NodeIP:NodePort --> ClusterIP:ServicePort --> PodIP:containerPort
- LBaaS (Load Balancer as a Service): in public cloud environments
  - LoadBalancer
- ExternalName
  - FQDN
  - CNAME -> FQDN
- No ClusterIP: headless Service
  - ServiceName -> PodIP
- kubectl explain svc
  - type: ExternalName, ClusterIP, NodePort, LoadBalancer
    - NodePort  # usable when type=NodePort: client --> NodeIP:NodePort --> ClusterIP:ServicePort --> PodIP:containerPort
    - LoadBalancer: LBaaS (Load Balancer as a Service), in public cloud environments
    - ExternalName: FQDN; CNAME -> FQDN
  - ports
    - port
    - targetPort
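A NodePort Service tying these fields together might look like this (a sketch; the name, selector and nodePort value are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp               # hypothetical name
spec:
  type: NodePort
  selector:
    app: myapp              # selects the backend pods
  ports:
  - port: 80                # ServicePort on the ClusterIP
    targetPort: 80          # containerPort inside the pod
    nodePort: 30080         # optional; auto-assigned from 30000-32767 if omitted
```

This realizes the client --> NodeIP:30080 --> ClusterIP:80 --> PodIP:80 path described above.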
Ingress:
Service: ingress-nginx (NodePort, or DaemonSet with hostNetwork)
IngressController: ingress-nginx
Ingress rules:
- site1.ikubernetes.io (virtual host)
- site2.ikubernetes.io (virtual host)
- example.com/path1
- example.com/path2
- Service: site1 - pod1 - pod2
- Service: site2 - pod3 - pod4
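The two-virtual-host layout above can be sketched as a single Ingress (assuming the networking.k8s.io/v1 API, available since k8s 1.19, and an nginx IngressClass):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sites               # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: site1.ikubernetes.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site1     # the Service fronting pod1/pod2
            port:
              number: 80
  - host: site2.ikubernetes.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site2     # the Service fronting pod3/pod4
            port:
              number: 80
```

The ingress controller watches this object and renders it into its own (e.g. nginx) virtual-host configuration.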
- storage volumes - emptyDir - temporary directory (can be memory-backed); no persistence - gitRepo
- hostPath
- shared storage
  - SAN: iSCSI
  - NAS: nfs, cifs, http
  - distributed storage:
    - glusterfs
    - ceph: rbd
    - cephfs
  - cloud storage
    - EBS
    - Azure Disk
- pvc: PersistentVolumeClaim
  - pv
  - pvc
  - pod
    - volumes
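The PV --> PVC --> pod chain can be sketched like this (a minimal sketch assuming an NFS backend; the server address, paths and names are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-01           # hypothetical
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.137.200 # hypothetical NFS server
    path: /data/v1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 3Gi          # binds to any PV that satisfies the request
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mypvc      # the pod references the claim, not the PV
```

The pod only names the PVC; which PV backs it is decided at bind time, which is what decouples application manifests from storage details.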
- Secret: essentially a ConfigMap whose values are base64-encoded (encoding, not encryption)
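Since Secret values are only base64-encoded, they can be produced and read back with plain base64 (the value `admin` here is just an illustration):

```shell
# Encode a value the way it appears in a Secret's data field
printf '%s' admin | base64            # YWRtaW4=
# Decode it back -- anyone who can read the Secret can do this
printf '%s' 'YWRtaW4=' | base64 -d    # admin
```

This is why access to Secrets should be restricted via RBAC rather than treated as confidentiality by itself.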
- ConfigMap: a configuration center. Ways to configure a containerized application:
  1. custom command-line arguments: args: []
  2. copy the configuration file directly into the image
  3. environment variables
     1. Cloud Native applications can usually load configuration directly from environment variables
     2. use an entrypoint.sh script to pre-process the variables into settings in a configuration file
  4. storage volumes
- variable injection --> pod reads the variables - mounted volume --> pod reads the configuration - custom command-line arguments
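Both consumption styles can be sketched against one ConfigMap (names, keys and the nginx snippet are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config        # hypothetical
data:
  nginx_port: "8080"
  www.conf: |
    server {
      listen 8080;
      server_name www.example.com;
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-pod
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1
    env:                        # style 1: variable injection
    - name: NGINX_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    volumeMounts:               # style 2: mounted volume
    - name: conf
      mountPath: /etc/nginx/conf.d/
  volumes:
  - name: conf
    configMap:
      name: nginx-config
```

Note the practical difference: env vars are resolved once at container start, while volume-mounted keys are updated in the running pod when the ConfigMap changes.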
- StatefulSet (formerly PetSet) — for services that require:
  1. stable, unique network identifiers
  2. stable, persistent storage
  3. ordered, graceful deployment and scaling
  4. ordered, graceful deletion and termination
  5. ordered rolling updates
Three components: - headless service: clusterIP: None - StatefulSet - volumeClaimTemplates
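The three components fit together like this (a sketch; names and sizes are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc           # the headless service
spec:
  clusterIP: None
  selector:
    app: myapp
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc    # must reference the headless service
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # one PVC per pod: data-myapp-0, data-myapp-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
```

Each pod gets a stable DNS name of the form myapp-0.myapp-svc.default.svc.cluster.local, satisfying requirement 1, and a dedicated PVC, satisfying requirement 2.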
- authentication, authorization and admission control - Authentication - RESTful API: token - TLS: mutual (two-way) authentication - user/password
- Authorization
  - RBAC
    - Role
    - RoleBinding
    - ClusterRole
    - ClusterRoleBinding
  - Webhook
  - ABAC
- Admission Control
client --> API server
pod --> API server
- ServiceAccount?
  - secret
  - serviceAccountName
kubectl get secret
kubectl get sa   # sa = serviceaccount
kubectl create serviceaccount my-sa -o yaml --dry-run
kubectl get pods myapp -o yaml --export   # --export was removed in newer versions
API request path: https://IP:port/apis/apps/v1/namespaces/default/deployments/myapp-deploy/
HTTP request verbs: get, post, put, delete
HTTP requests --> API request verbs: get, list, create, update, patch, watch, proxy, redirect, delete, deletecollection
Resource / Subresource / Namespace / API group
- User account via X.509 client certificate: alex.crt
(umask 077; openssl genrsa -out alex.key 2048)
openssl req -new -key alex.key -out alex.csr -subj "/CN=alex"
openssl x509 -req -in alex.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out alex.crt -days 3650
openssl x509 -in alex.crt -text -noout
kubectl config set-cluster alex-cluster --server="https://192.168.137.131:6443" \
  --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --kubeconfig=/tmp/alex.conf
kubectl config set-credentials admin \
  --client-certificate=datasets/work/admin/admin.pem \
  --client-key=datasets/work/admin/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/tmp/alex.conf
kubectl config set-context alex@kubernetes --cluster=kubernetes --user=admin --kubeconfig=/tmp/alex.conf
kubectl config use-context kubernetes --kubeconfig=/tmp/alex.conf
kubectl get pods --kubeconfig=/tmp/alex.conf   # Error from server (Forbidden)
- RBAC
  - Role, ClusterRole
    object:
    - resource group
    - resource
    - non-resource URL
action: get, list, watch, patch, delete, deletecollection, ...
- RoleBinding, ClusterRoleBinding
  subject:
  - user
  - group
  - serviceaccount
- Role:
  - operations
  - objects
- RoleBinding:
  - user account or service account
  - role
- ClusterRole
- ClusterRoleBinding
# kubectl create role --help
# kubectl create rolebinding --help
# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run=client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: pods-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
# kubectl create rolebinding alex-read-pods --role=pods-reader --user=alex --dry-run=client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: alex-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alex
# continuing the alex user-certificate example above
# kubectl config use-context kubernetes --kubeconfig=/tmp/alex.conf
# kubectl get pods --kubeconfig=/tmp/alex.conf
# kubectl get pods -n kube-system --kubeconfig=/tmp/alex.conf   # Error from server (Forbidden): the Role only grants access in the default namespace
# kubectl create clusterrole --help # kubectl create clusterrolebinding --help
- Kubernetes Dashboard
  - helm install kubernetes-dashboard/kubernetes-dashboard --version 2.3.0 --name=k8s-dashboard [--namespace=dashboard]
  - helm fetch kubernetes-dashboard/kubernetes-dashboard --version 2.3.0
  - tar xf kubernetes-dashboard-2.3.0.tgz
  - vim kubernetes-dashboard/values.yaml
  - helm install kubernetes-dashboard/kubernetes-dashboard --version 2.3.0 --name=k8s-dashboard -f kubernetes-dashboard/values.yaml [--namespace=dashboard]
  - kubectl create clusterrolebinding k8s-dashboard-admin --clusterrole=cluster-admin --serviceaccount=default:k8s-dashboard [or --serviceaccount=dashboard:k8s-dashboard]
  - kubectl describe sa k8s-dashboard [-n dashboard]
  - kubectl describe secret k8s-dashboard-token-xxxx [-n dashboard]
- network model and network policies
flannel
kubectl get daemonset -n kube-system
kubectl get pods -o wide -n kube-system |grep -i kube-flannel
kubectl get configmap -n kube-system
kubectl get configmap kube-flannel-cfg -o json -n kube-system
From 10.244.1.59, ping 10.244.2.76 and capture the traffic at each hop:
tcpdump -i cni0 -nn icmp
tcpdump -i flannel.1 -nn
tcpdump -i ens32 -nn host 192.168.137.131
overlay network (OTV-style encapsulation)
calico
- Pod resource scheduling
scheduler:
- predicate (filtering) policies
- priority (scoring) functions
Node selectors: nodeSelector, nodeName
Node affinity scheduling: nodeAffinity
A taint's effect defines how pods are repelled:
- NoSchedule: affects only the scheduling process; existing pods are untouched
- NoExecute: affects both scheduling and existing pods; pods that do not tolerate the taint are evicted
- PreferNoSchedule: a soft version of NoSchedule
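Affinity and tolerations can be combined in one pod spec, for instance (a sketch; the `zone` label key, taint key `node-type` and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo          # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["east-1", "east-2"]
  tolerations:              # tolerate a NoExecute taint for up to 60s before eviction
  - key: node-type
    operator: Equal
    value: production
    effect: NoExecute
    tolerationSeconds: 60
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
```

The affinity restricts which nodes pass the predicate phase; the toleration only permits scheduling onto tainted nodes, it does not require it.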
- CRD, custom resources, custom controllers and custom API servers
- HeapSter (data collection; deprecated)
- cAdvisor (metric collection on each node)
- InfluxDB (historical data: a time-series database)
- Grafana (data visualization)
- RBAC
Resource metrics: metrics-server: the k8s resource metrics aggregator
Custom metrics: - prometheus - k8s-prometheus-adapter
- MetricsServer
- PrometheusOperator
- NodeExporter
- kube-state-metrics
- Prometheus
- Grafana
New-generation architecture:
- Core metrics pipeline: composed of the kubelet, metrics-server and the APIs exposed by the API server; covers cumulative CPU usage, real-time memory usage, pod resource usage and container disk usage.
- Monitoring pipeline: collects all kinds of metrics from the system and serves end users, storage systems and the HPA; includes the core metrics plus many non-core metrics. Non-core metrics cannot be parsed by k8s itself.
metrics-server: an extension API server, aggregated behind the main API server
- Helm package manager:
chart repository
Tiller (the Helm v2 server-side component)
chart:
- configuration manifests
- template files
- helm install mem1 stable/memcached
# kubectl explain ingress.spec FIELDS: backend <Object> resource <Object> serviceName <string> servicePort <string>
ingressClassName <string>
rules <[]Object> host <string> http <Object> paths <[]Object> -required- backend <Object> -required- path <string> pathType <string>
tls <[]Object>
Ingress Controller: - Nginx - Traefik - Envoy
namespace: ingress-nginx
Job: one-off tasks. CronJob: periodic tasks.
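A CronJob wraps a Job template behind a cron schedule; a minimal sketch (the name, schedule and command are illustrative; the batch/v1 API for CronJob requires k8s 1.21+, older clusters use batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup             # hypothetical name
spec:
  schedule: "*/5 * * * *"   # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # Job pods may not use Always
          containers:
          - name: cleanup
            image: busybox
            command: ["/bin/sh", "-c", "echo cleaning up"]
```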
StatefulSet: cares about individual pods (in contrast to Deployment).
CRD: CustomResourceDefinitions, 1.8+
Operator: e.g. the etcd operator
Example:
- # ReplicaSet
- vim rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environments: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
- kubectl create -f rs-demo.yaml
- kubectl edit rs myapp --> replicas: 5
- kubectl get pods
- kubectl edit rs myapp --> image: ikubernetes/myapp:v2
- kubectl get rs -o wide
- curl xx.xx.xxx.xx   # only recreated pods use the new image
- # Deployment
- vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environments: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
- kubectl apply -f deploy-demo.yaml
- kubectl get deploy
- kubectl get rs -o wide
  myapp-deploy-69b47bc96d --> hash of the pod template
- kubectl get pods
- kubectl get pods -l app=myapp -w   # in a new terminal
- vim deploy-demo.yaml --> image: ikubernetes/myapp:v2
- kubectl apply -f deploy-demo.yaml
- kubectl get rs -o wide
- kubectl rollout history deployment myapp-deploy
# add pods by patching
- kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
- kubectl get pods
- kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
- kubectl describe deployment myapp-deploy
- kubectl get pods -l app=myapp -w   # in a new terminal (1)
- kubectl set image deployment myapp-deploy myapp-container=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy   # pause the rollout: canary release
- kubectl rollout status deployment myapp-deploy   # in a new terminal (2)
- kubectl rollout resume deployment myapp-deploy   # then watch terminals 1 and 2
- kubectl rollout history deployment myapp-deploy   # view revision history
- kubectl get rs -o wide
# roll back to an earlier revision
- kubectl rollout undo --help
- kubectl rollout undo deployment myapp-deploy --to-revision=1
- kubectl rollout history deployment myapp-deploy   # view revision history
- # DaemonSet
- vim ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      name: filebeat-pod
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
- kubectl apply -f ds-demo.yaml
- kubectl get pods
- kubectl expose deployment redis --port=6379
- kubectl get svc
- kubectl exec -it redis-5bxxxx-xxx -- /bin/sh
  # netstat -tnl
  # nslookup redis.default.svc.cluster.local
  # redis-cli -h redis.default.svc.cluster.local
  # keys *
# DaemonSet rolling update
- kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
- kubectl get pods -w   # pods are updated one at a time
- # Service
- vim redis-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  # clusterIP: 10.96.96.96
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
- kubectl apply -f redis-svc.yaml
- kubectl get svc
- kubectl describe svc redis
- Service resources
Helm: like Linux yum/apt/apk ...
k8s Master
API Server
- port: 6443
- auth: mutual TLS, /etc/kubernetes/pki
- config: ~/.kube/config
  - kubectl config view
- RESTful API
  - kubectl api-versions
  - JSON
  - kubectl get pod == curl https://master:6443/api/v1/...
- communicates with all the other components: - kubectl - kube-controller-manager - kube-scheduler - etcd - kubelet - kube-proxy
- resources in the API are split into multiple logical groups: apiVersion
- reconciliation loop: continuously drives the observed status toward the desired spec
Scheduler
Controller - reconciliation loop: continuously drives the observed status toward the desired spec
Node: - pod - service -> iptables/ipvs -> kube-proxy - kube-proxy
Resources have two scopes:
- cluster-scoped
  - Node
  - Namespace
  - ClusterRole
  - ClusterRoleBinding
  - PersistentVolume
  (Role and RoleBinding, by contrast, are namespaced)
- namespace-scoped
  - pod
  - service
  - deploy
- metadata-type resources
  - HPA
  - PodTemplate
  - LimitRange
Parts of a resource object:
- apiVersion: group/version (kubectl api-versions)
- kind: resource type
- metadata
- spec: desired state of the resource
- labels/tags: kubectl label
- annotations: kubectl annotate
- initContainers: kubectl explain pods.spec.initContainers — container initialization; hooks before, after and during the run (liveness checks, readiness checks)
  - lifecycle
  - livenessProbe
  - readinessProbe
  - startupProbe
- containers: kubectl explain pods.spec.containers — same lifecycle phases and checks
  - lifecycle: postStart hook, preStop hook
  - livenessProbe
  - readinessProbe
  - startupProbe
- name:
- command: ["/bin/bash", "-c", "sleep 3600"]
- args:
- image:
- imagePullPolicy:
  - Never
  - Always
  - IfNotPresent
- ports:
  - name:
  - hostIP:
  - hostPort:
  - protocol:
  - containerPort:
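The probe and lifecycle fields listed above can be sketched in one pod manifest (a sketch; the name, paths and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # hypothetical name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]
    livenessProbe:          # failure => the kubelet restarts the container
      httpGet:
        path: /
        port: http
      initialDelaySeconds: 3
      periodSeconds: 5
    readinessProbe:         # failure => the pod is removed from Service endpoints
      httpGet:
        path: /
        port: http
      periodSeconds: 5
```

The key distinction: a liveness failure restarts the container, while a readiness failure only stops traffic to it.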
- status: current state of the resource
Resource object URL: /apis/<GROUP>/<VERSION>/namespaces/<NAMESPACE_NAME>/<KIND>[/OBJECT_ID]/ - /api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME - kubectl get pod/nginx-ds-s4hpn - selfLink: /api/v1/namespaces/default/pods/nginx-ds-s4hpn
DNS resource records: SVC_NAME.NS_NAME.DOMAIN.LTD.
redis.default.svc.cluster.local
- Pod - Pod controller - Deployment: type --> ngx-deploy --> nginx pod - Service - nginx-svc --> associated with the nginx pod
kubeadm
kubectl - kubectl explain pods.spec.initContainers - kubectl -h - basic commands beginner - create - expose - run - set
- basic commands intermediate - explain - get - edit - delete - deploy commands - rollout - scale - autoscale
- cluster management commands - certificate - cluster-info - top - cordon - uncordon - drain - taint
- troubleshooting and debugging commands - describe - logs - attach - exec: kubectl exec -it PodName -c ContainerName -- /bin/sh - port-forward
- kubectl config view [-o wide/json/yaml]
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.137.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- kubectl api-resources: supported resource types and their short names
- kubectl get [-o wide/json/yaml] [-n default/kube-system/...] [-L label-name] [-l label-name ==/!= label-value]
  - all
  - nodes
  - pods
  - ns/namespace
    - default
    - kube-public
    - kube-system
  - deploy
  - svc/service
- kubectl create [ -f file] - job - namespace - kubectl create namespace testing - kubectl create namespace prod - kubectl create namespace develop - kubectl get ns - kubectl delete namespace testing - kubectl delete ns/prod ns/develop
- deployment - kubectl create deployment nginx-deploy --image=nginx:1.14-alpine - kubectl get all -o wide - pod/nginx-deploy-xxxx (ip: 10.244.1.2) - deployment.apps/nginx-deploy - replicaset.apps/nginx-deploy-xxxx - curl 10.244.1.2 ( welcome nginx!) - kubectl delete pod/nginx-deploy-xxxx - kubectl get all -o wide - pod/nginx-deploy-xxxx (ip: 10.244.3.2) ... - curl 10.244.3.2 ( welcome nginx!)
- service - kubectl create service -h - ClusterIP - NodePort
- kubectl create service clusterip nginx-deploy --tcp=80:80   # same name as the deploy above, so the selector matches its pods and a ClusterIP is assigned automatically
- kubectl get svc/nginx-deploy -o yaml
  - clusterIP: 10.110.129.64
  - endpoints: 10.244.3.2
- kubectl describe svc/nginx-deploy
# test: delete the pod
- curl 10.110.129.64   (Welcome to nginx!)
- kubectl delete pod/nginx-deploy-xxxx
- kubectl get all -o wide
  - pod/nginx-deploy-xxxx (ip: 10.244.1.3) ...
- kubectl get svc/nginx-deploy -o yaml
  - clusterIP: 10.110.129.64
  - endpoints: 10.244.1.3 (automatically tracks the newest pod)
# test: delete the service
- curl nginx-deploy.default.svc.cluster.local.   (Welcome to nginx!)
- kubectl delete svc/nginx-deploy
- kubectl create service clusterip nginx-deploy --tcp=80:80   # same name as the deploy above; a new ClusterIP is assigned automatically
- kubectl get svc/nginx-deploy -o yaml
  - clusterIP: 10.111.215.249
  - endpoints: 10.244.3.2
- curl nginx-deploy.default.svc.cluster.local.   (Welcome to nginx!)
# test: scale pods on demand
- kubectl create deploy myapp --image=ikubernetes/myapp:v1
- kubectl get deploy
- kubectl get pods -o wide
  - ip: 10.244.3.3
- curl 10.244.3.3
- curl 10.244.3.3/hostname.html   (shows the pod name: myapp-xxxx-yyyy)
- kubectl create service clusterip myapp --tcp=80:80 - kubectl describe svc/myapp - IP 10.100.182.218 - Endpoints: 10.244.3.3
- curl myapp.default.svc.cluster.local.   (Welcome to myapp!)
- curl myapp.default.svc.cluster.local/hostname.html   (shows the pod name: myapp-xxxx-yyyy)
- kubectl scale deployment myapp --replicas=3
- kubectl describe svc/myapp
  - IP: 10.100.182.218
  - Endpoints: 10.244.3.3, 10.244.1.4, 10.244.2.2
- curl myapp.default.svc.cluster.local/hostname.html   # shows a different pod's name on each request; repeat several times to observe
- kubectl scale deployment myapp --replicas=2
- kubectl describe svc/myapp - IP 10.100.182.218 - Endpoints: 10.244.3.3, 10.244.1.4
# NodePort: access from outside the cluster
- kubectl delete svc/myapp
- kubectl create service nodeport -h
- kubectl create service nodeport myapp --tcp=80:80
- kubectl get svc
  - ports: 80:31996/TCP
- from outside the cluster, every node answers: http://NodeIP:31996/hostname.html
- kube-proxy automatically creates the iptables rules on each node
- ssh nodes "iptables -t nat -vnL"
e.g.:
- kubectl expose
- kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
- kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]
  kubectl label pods -n dev-namespaces apptag=my-app release=stable deltag-   # a trailing '-' removes that label
- kubectl api-versions
- kubectl describe
Network
- node network
- service network: service registration/discovery
- pod network
External access:
- Service: NodePort
- hostPort
- hostNetwork
ipvs/iptables: layer-4 load balancing; Ingress: layer-7 load balancing