Kubernetes in the Enterprise: A Complete Guide to Cluster Management and Application Deployment
In Kubernetes, everything is abstracted as a resource, and users manage Kubernetes by operating on those resources. Kubernetes is essentially a cluster system: deploying a service means running containers in the cluster with the specified programs inside them. The smallest unit Kubernetes manages is the Pod, not the container; containers must be placed inside Pods, and Kubernetes generally does not manage Pods directly but through Pod controllers.
Kubernetes

- While Docker was rapidly developing as a high-level container engine, container technology had already been in use inside Google for many years.
- The Borg system ran and managed enormous numbers of container applications.
- Kubernetes grew out of Borg; it distills the essence of Borg's design and absorbs the experience and lessons learned from it.
- Kubernetes raises the level of abstraction over compute resources, composing containers into finished application services delivered to users.
Kubernetes is essentially a server cluster that runs special programs on each node to manage that node's containers, with the goal of automating resource management. It provides the following main features:
- Self-healing: when a container crashes, a replacement is started within about a second.
- Elastic scaling: the number of running containers in the cluster is adjusted automatically as needed.
- Service discovery: a service can find the services it depends on automatically.
- Load balancing: when a service runs multiple containers, requests are automatically load-balanced across them.
- Version rollback: if a newly released version turns out to be faulty, you can roll back to the previous version immediately.
- Storage orchestration: storage volumes can be created automatically based on a container's own needs.
K8S Design Architecture
What Each K8S Component Does

A Kubernetes cluster consists of control-plane nodes (master) and worker nodes (node); different components are installed on each kind of node.
1 master: the cluster's control plane, responsible for cluster decisions
- ApiServer: the single entry point for resource operations; it receives user commands and provides authentication, authorization, API registration, and discovery.
- Scheduler: responsible for cluster resource scheduling, placing Pods onto the appropriate node according to the configured scheduling policy.
- ControllerManager: responsible for maintaining cluster state, e.g. deployment orchestration, failure detection, automatic scaling, and rolling updates.
- Etcd: stores the information for every resource object in the cluster.
2 node: the cluster's data plane, responsible for providing a runtime environment for containers
- kubelet: maintains the container lifecycle and also manages volumes (CVI) and networking (CNI).
- Container runtime: responsible for image management and for actually running Pods and containers (CRI).
- kube-proxy: provides in-cluster service discovery and load balancing for Services.
How the K8S Components Interact
When we want to run a web service:
- After the Kubernetes environment starts, both the master and the nodes store their own information in the etcd database.
- The installation request for the web service is first sent to the apiServer component on the master node.
- The apiServer calls the scheduler to decide which node the service should be placed on. The scheduler reads each node's information from etcd, selects a node according to its scheduling algorithm, and reports the result back to the apiServer.
- The apiServer calls the controller-manager to have the chosen node install the web service.
- The kubelet on that node receives the instruction and tells Docker to start a Pod for the web service.
- To access the web service, kube-proxy provides a proxy to the Pod.
Common K8S Concepts
- Master: the cluster control node; every cluster needs at least one master responsible for managing the cluster.
- Node: a workload node; the master assigns containers to these nodes, and the container runtime on each node actually runs them.
- Pod: the smallest unit Kubernetes controls; all containers run inside Pods, and a Pod can hold one or more containers.
- Controller: controllers implement Pod management, e.g. starting Pods, stopping Pods, and scaling the number of Pods.
- Service: the unified external entry point for a Pod-backed service; it fronts multiple Pods of the same class.
- Label: labels classify Pods; Pods of the same class carry the same labels.
- NameSpace: namespaces isolate the runtime environments of Pods.
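For example, labels and namespaces show up in everyday kubectl usage like this (the label key/value and pod name are illustrative, not from this deployment):

```shell
# list pods that carry the label app=nginx
kubectl get pods -l app=nginx
# list pods in a specific namespace
kubectl get pods -n kube-system
# attach a label to an existing pod
kubectl label pod mypod env=prod
```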
The K8S Layered Architecture

- Core layer: Kubernetes' core functionality — externally, an API for building higher-level applications; internally, a pluggable application execution environment.
- Application layer: deployment (stateless apps, stateful apps, batch jobs, clustered apps, etc.) and routing (service discovery, DNS resolution, etc.).
- Management layer: system metrics (infrastructure, container, and network metrics), automation (autoscaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.).
- Interface layer: the kubectl command-line tool, client SDKs, and cluster federation.
- Ecosystem: the large container-cluster management and scheduling ecosystem above the interface layer, in two areas:
- Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
- Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, and the cluster's own configuration and management.
1. Building a K8S Cluster
How containers are managed in k8s

There are three ways to create a K8S cluster:
containerd
The default runtime K8S uses when creating a cluster.
docker
Docker has the widest adoption. Although K8S removed kubelet's built-in Docker support in version 1.24, a cluster can still be created with Docker via cri-docker.
cri-o
CRI-O is the most direct way for Kubernetes to create containers; cluster creation relies on the cri-o plugin.
[!NOTE]
With the docker and cri-o approaches, the kubelet's startup parameters must be configured accordingly.
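As a rough sketch of what that configuration looks like on a kubeadm-managed node (the file path and flag values reflect common defaults and are illustrative):

```shell
# kubeadm records the CRI socket the kubelet should use in this env file;
# for cri-dockerd the endpoint is unix:///var/run/cri-dockerd.sock,
# for CRI-O it is unix:///var/run/crio/crio.sock
cat /var/lib/kubelet/kubeadm-flags.env
```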
K8S cluster deployment
Kubernetes documentation (Chinese site): Kubernetes
| Hostname | IP | Role |
|---|---|---|
| harbor.timinglee.org | 172.25.254.254 | harbor registry |
| k8s-master.timinglee.org | 172.25.254.100 | master, k8s cluster control-plane node |
| k8s-node1.timinglee.org | 172.25.254.10 | worker, k8s cluster worker node |
| k8s-node2.timinglee.org | 172.25.254.20 | worker, k8s cluster worker node |
- Disable SELinux and the firewall on all nodes
- Synchronize time and name resolution on all nodes
- Install docker-ce on all nodes
- Disable swap on all nodes, and remember to comment out its entry in /etc/fstab
Note: every host has 4 CPU cores and 4 GB RAM, with the firewall and SELinux already disabled.
Per-host IP configuration is not covered in detail here.
Install docker on all hosts
harbor
# unpack the docker packages
[root@harbor ~]# tar zxf docker.tar.gz
[root@harbor ~]# ls
公共 anaconda-ks.cfg
模板 containerd.io-1.7.20-3.1.el9.x86_64.rpm
视频 docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm
图片 docker-ce-27.1.2-1.el9.x86_64.rpm
文档 docker-ce-cli-27.1.2-1.el9.x86_64.rpm
下载 docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm
音乐 docker-compose-plugin-2.29.1-1.el9.x86_64.rpm
桌面 docker.tar.gz
[root@harbor ~]# dnf install *.rpm -y
[root@harbor ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true   # add this option
[root@harbor ~]# systemctl enable --now docker

Copy the Docker packages to the other nodes with scp
[root@harbor ~]# scp *.rpm root@172.25.254.100:/mnt
[root@harbor ~]# scp *.rpm root@172.25.254.10:/mnt
[root@harbor ~]# scp *.rpm root@172.25.254.20:/mnt
k8s-master
[root@k8s-master ~]# dnf install /mnt/*.rpm -y
[root@k8s-master ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true   # add this option

k8s-node1
[root@k8s-node1 ~]# dnf install /mnt/*.rpm -y
[root@k8s-node1 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true   # add this option

k8s-node2
[root@k8s-node2 ~]# dnf install /mnt/*.rpm -y
[root@k8s-node2 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true   # add this option

Set up the Harbor registry
# unpack harbor
[root@harbor ~]# tar zxf harbor-offline-installer-v2.5.4.tgz
# generate the certificate and key
[root@harbor ~]# mkdir -p /data/certs
[root@harbor ~]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /data/certs/timinglee.org.key -addext "subjectAltName = DNS:reg.timinglee.org" -x509 -days 365 -out /data/certs/timinglee.org.crt
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:shanxi
Locality Name (eg, city) [Default City]:xian
Organization Name (eg, company) [Default Company Ltd]:k8s
Organizational Unit Name (eg, section) []:harbor
Common Name (eg, your name or your server's hostname) []:reg.timinglee.org # do not get this wrong
Email Address []:admin@timinglee.org
[root@harbor ~]# cd harbor/
[root@harbor harbor]# ls
common.sh harbor.v2.5.4.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
# edit the configuration
[root@harbor harbor]# vim harbor.yml
hostname: reg.timinglee.org
certificate: /data/certs/timinglee.org.crt
private_key: /data/certs/timinglee.org.key
harbor_admin_password: 123456
[root@harbor harbor]# ./install.sh --with-chartmuseum

Open Harbor in a browser to confirm the registry was created successfully.

Disable swap and set up name resolution on all k8s cluster nodes
k8s-master
[root@k8s-master ~]# vim /etc/fstab
/dev/mapper/rhel_bogon-root / xfs defaults 0 0
UUID=ed271930-7438-4bef-a67b-f65ca1ce12ec /boot xfs defaults 0 0
/dev/mapper/rhel_bogon-home /home xfs defaults 0 0
#/dev/mapper/rhel_bogon-swap none swap defaults 0 0
~
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl mask swap.target
Created symlink /etc/systemd/system/swap.target → /dev/null.
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# swapon -s
[root@k8s-master ~]# reboot
[root@k8s-master docker]# vim /etc/hosts
172.25.254.100 k8s-master
172.25.254.10 k8s-node1
172.25.254.20 k8s-node2
172.25.254.254 reg.timinglee.org

k8s-node1
[root@k8s-node1 ~]# vim /etc/fstab
/dev/mapper/rhel_bogon-root / xfs defaults 0 0
UUID=ed271930-7438-4bef-a67b-f65ca1ce12ec /boot xfs defaults 0 0
/dev/mapper/rhel_bogon-home /home xfs defaults 0 0
#/dev/mapper/rhel_bogon-swap none swap defaults 0 0
~
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl mask swap.target
Created symlink /etc/systemd/system/swap.target → /dev/null.
[root@k8s-node1 ~]# swapoff -a
[root@k8s-node1 ~]# swapon -s
[root@k8s-node1 ~]# reboot
[root@k8s-node1 docker]# vim /etc/hosts
172.25.254.100 k8s-master
172.25.254.10 k8s-node1
172.25.254.20 k8s-node2
172.25.254.254 reg.timinglee.org

k8s-node2
[root@k8s-node2 ~]# vim /etc/fstab
/dev/mapper/rhel_bogon-root / xfs defaults 0 0
UUID=ed271930-7438-4bef-a67b-f65ca1ce12ec /boot xfs defaults 0 0
/dev/mapper/rhel_bogon-home /home xfs defaults 0 0
#/dev/mapper/rhel_bogon-swap none swap defaults 0 0
~
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl mask swap.target
Created symlink /etc/systemd/system/swap.target → /dev/null.
[root@k8s-node2 ~]# swapoff -a
[root@k8s-node2 ~]# swapon -s
[root@k8s-node2 ~]# reboot
[root@k8s-node2 ~]# vim /etc/hosts
172.25.254.100 k8s-master
172.25.254.10 k8s-node1
172.25.254.20 k8s-node2
172.25.254.254 reg.timinglee.org

harbor
[root@harbor ~]# vim /etc/hosts
172.25.254.100 k8s-master
172.25.254.10 k8s-node1
172.25.254.20 k8s-node2
172.25.254.254 reg.timinglee.org
Make this registry Docker's default mirror and copy the config to the other nodes with scp
[root@k8s-master docker]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://reg.timinglee.org"]
}
[root@k8s-master docker]# for i in 10 20 ; do scp daemon.json root@172.25.254.$i:/etc/docker/;done
Copy the Harbor registry's CA certificate to all k8s nodes
# k8s-master
[root@k8s-master ~]# cd /etc/docker/
[root@k8s-master docker]# ls
certs.d daemon.json
[root@k8s-master docker]# cd certs.d/
[root@k8s-master certs.d]# mkdir reg.timinglee.org
[root@k8s-master certs.d]# mv ca.crt reg.timinglee.org
[root@k8s-master certs.d]# systemctl restart docker.service
#k8s-node1
[root@k8s-node1 docker]# cd
[root@k8s-node1 ~]# cd /etc/docker/
[root@k8s-node1 docker]# ls
certs.d daemon.json
[root@k8s-node1 docker]# cd certs.d/
[root@k8s-node1 certs.d]# mkdir reg.timinglee.org
[root@k8s-node1 certs.d]# mv ca.crt reg.timinglee.org
[root@k8s-node1 certs.d]# systemctl restart docker.service
#k8s-node2
[root@k8s-node2 ~]# cd /etc/docker/
[root@k8s-node2 docker]# ls
certs.d daemon.json
[root@k8s-node2 docker]# cd certs.d/
[root@k8s-node2 certs.d]# mkdir reg.timinglee.org
[root@k8s-node2 certs.d]# mv ca.crt reg.timinglee.org
[root@k8s-node2 certs.d]# ls
reg.timinglee.org
[root@k8s-node2 certs.d]# systemctl restart docker.service
[root@k8s-master docker]# docker info
Client: Docker Engine - Community
Version: 27.1.2
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.1
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 27.1.2
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 5.14.0-284.11.1.el9_2.x86_64
Operating System: Red Hat Enterprise Linux 9.2 (Plow)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.55GiB
Name: k8s-master.timinglee.org
ID: 1c077be8-a068-48c6-8cb1-c0a86c877ad9
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://reg.timinglee.org/
Live Restore Enabled: false
[root@k8s-master docker]#
Log in to the Harbor registry from all k8s nodes
[root@k8s-master ~]# docker login reg.timinglee.org
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded

[root@k8s-node1 ~]# docker login reg.timinglee.org
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded

[root@k8s-node2 certs.d]# docker login reg.timinglee.org
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded

Install the K8S deployment tools
# install the packages
[root@k8s-master ~]# tar zxf k8s-1.30.tar.gz
[root@k8s-master ~]# dnf install *.rpm
[root@k8s-node1 ~]# tar zxf k8s-1.30.tar.gz
[root@k8s-node1 ~]# dnf install *.rpm
[root@k8s-node2 ~]# tar zxf k8s-1.30.tar.gz
[root@k8s-node2 ~]# dnf install *.rpm
Enable kubectl command completion
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc

Install cri-docker on all nodes
Kubernetes removed dockershim in version 1.24, so the cri-docker plugin is required to keep using Docker as the runtime.
Download: https://github.com/Mirantis/cri-dockerd
Copy the plugin packages to each node
[root@k8s-master mnt]# ls
cri-dockerd-0.3.14-3.el8.x86_64.rpm libcgroup-0.41-19.el8.x86_64.rpm
[root@k8s-master mnt]# for i in 10 20 ; do scp * root@172.25.254.$i:/mnt;done
# install the rpms
[root@k8s-master mnt]# dnf install *
[root@k8s-node1 mnt]# dnf install *
[root@k8s-node2 mnt]# dnf install *

# specify the network plugin and the pause (infra) container image
[root@k8s-master ~]# vim /lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=reg.timinglee.org/k8s/pause:3.9 # add this
# make the same change on the other nodes
[root@k8s-master ~]# for i in 10 20 ; do scp /lib/systemd/system/cri-docker.service root@172.25.254.$i:/lib/systemd/system/cri-docker.service;done
root@172.25.254.10's password:
cri-docker.service 100% 1400 97.3KB/s 00:00
root@172.25.254.20's password:
cri-docker.service 100% 1400 482.9KB/s 00:00
# run the same commands on the other nodes
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl start cri-docker
[root@k8s-master ~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 8月 24 12:53 /var/run/cri-dockerd.sock
[root@k8s-master ~]#


Pull the K8S images on the master node
# pull the images the k8s cluster needs
[root@k8s-master ~]# docker load -i k8s_docker_images-1.30.tar
# push the images to the harbor registry
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" reg.timinglee.org/k8s/"$3)}'
[root@k8s-master ~]# docker images | awk '/k8s/{system("docker push "$1":"$2)}'

Cluster initialization
# enable and start the kubelet service on all hosts
[root@k8s-master ~]# systemctl enable --now kubelet.service
[root@k8s-node1 ~]# systemctl enable --now kubelet.service
[root@k8s-node2 ~]# systemctl enable --now kubelet.service
# run the init command
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
--image-repository reg.timinglee.org/k8s \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
# point KUBECONFIG at the cluster config file
[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-master ~]# source ~/.bash_profile
# the node is not Ready yet because no network plugin is installed, so the network containers are not running
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master.timinglee.org NotReady control-plane 4m25s v1.30.0
Note: if the cluster join token generated at this stage gets lost, it can be regenerated:
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 172.25.254.100:6443 --token mus7t7.c0oo1qvac70sla8t --discovery-token-ca-cert-hash sha256:2d4da9944b1458dbf002935cceba4fe3e0c56be299eebdc3f89e1d3eb661386f
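The join step on the workers is not shown in this transcript; with cri-dockerd it would look roughly like this, using the token printed above (note that --cri-socket must be added by hand, since both the containerd and cri-dockerd sockets exist on the nodes):

```shell
# run on k8s-node1 and k8s-node2
kubeadm join 172.25.254.100:6443 \
  --token mus7t7.c0oo1qvac70sla8t \
  --discovery-token-ca-cert-hash sha256:2d4da9944b1458dbf002935cceba4fe3e0c56be299eebdc3f89e1d3eb661386f \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```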
Install the flannel network plugin
Official site: https://github.com/flannel-io/flannel
# load the images
[root@k8s-master ~]# docker load -i flannel-0.25.5.tag.gz
# push to the registry
[root@k8s-master ~]# docker tag flannel/flannel:v0.25.5 reg.timinglee.org/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker push reg.timinglee.org/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# docker push reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
# edit kube-flannel.yml and point the image references at the local registry
[root@k8s-master ~]# vim kube-flannel.yml
# these lines need to change
[root@k8s-master ~]# grep -n image kube-flannel.yml
146: image: reg.timinglee.org/flannel/flannel:v0.25.5
173: image: reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
184: image: reg.timinglee.org/flannel/flannel:v0.25.5
# install the flannel network plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# nodes showing Ready means it worked
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.timinglee.org Ready control-plane 26m v1.30.0
k8s-node1.timinglee.org Ready <none> 11m v1.30.0
k8s-node2.timinglee.org Ready <none> 10m v1.30.0
[root@k8s-master ~]# kubectl get pods
No resources found in default namespace.
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-q2r5m 1/1 Running 0 92s
kube-flannel kube-flannel-ds-smfvx 1/1 Running 0 92s
kube-flannel kube-flannel-ds-xw5dr 1/1 Running 0 92s
kube-system coredns-647dc95897-hf7qr 1/1 Running 0 27m
kube-system coredns-647dc95897-tlm2p 1/1 Running 0 27m
kube-system etcd-k8s-master.timinglee.org 1/1 Running 0 27m
kube-system kube-apiserver-k8s-master.timinglee.org 1/1 Running 0 27m
kube-system kube-controller-manager-k8s-master.timinglee.org 1/1 Running 0 27m
kube-system kube-proxy-5zsj6 1/1 Running 0 12m
kube-system kube-proxy-gt8ql 1/1 Running 0 27m
kube-system kube-proxy-jq8rq 1/1 Running 0 11m
kube-system kube-scheduler-k8s-master.timinglee.org 1/1 Running 0 27m
[root@k8s-master ~]#

Scaling out nodes
The worker nodes here have already joined the cluster.
Check the status of all nodes on the master:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.timinglee.org Ready control-plane 29m v1.30.0
k8s-node1.timinglee.org Ready <none> 14m v1.30.0
k8s-node2.timinglee.org Ready <none> 13m v1.30.0
If every node's STATUS shows Ready — congratulations, your Kubernetes cluster is installed!
K8S deployment is complete.
2. Managing and Optimizing Pods in Kubernetes
Resource management overview
- In Kubernetes, everything is abstracted as a resource, and users manage Kubernetes by operating on resources.
- Kubernetes is essentially a cluster system in which users deploy various services.
- Deploying a service means running containers in the Kubernetes cluster with the specified programs inside them.
- The smallest unit Kubernetes manages is the Pod, not the container; containers must be placed inside Pods.
- Kubernetes generally does not manage Pods directly, but through Pod controllers.
- Access to the services in a Pod is provided by Kubernetes' Service resource.
- Persistence of a Pod's program data is provided by Kubernetes' various storage systems.

Resource management styles
- Imperative commands: operate on Kubernetes resources directly with commands
kubectl run nginx-pod --image=nginx:latest --port=80
- Imperative object configuration: operate on Kubernetes resources with commands plus configuration files
kubectl create/patch -f nginx-pod.yaml
- Declarative object configuration: operate on Kubernetes resources with the apply command plus configuration files
kubectl apply -f nginx-pod.yaml
| Style | Suited for | Pros | Cons |
|---|---|---|---|
| Imperative commands | testing | simple | only operates on live objects; no audit trail or tracking |
| Imperative object configuration | development | auditable and trackable | with large projects, many config files make operations cumbersome |
| Declarative object configuration | development | supports operating on directories | hard to debug when something unexpected happens |
Imperative command management
kubectl is the command-line tool for a Kubernetes cluster; it manages the cluster itself and installs and deploys containerized applications on it.
kubectl syntax:
kubectl [command] [type] [name] [flags]
command: the operation to perform on the resource, e.g. create, get, delete
type: the resource type, e.g. deployment, pod, service
name: the resource name; names are case-sensitive
flags: optional extra flags
# list all pods
kubectl get pod
# show one pod
kubectl get pod pod_name
# show one pod in yaml format
kubectl get pod pod_name -o yaml
Resource types
Everything in Kubernetes is abstracted as a resource:
kubectl api-resources
Common resource types

Common kubectl operations

Basic command examples
Detailed kubectl documentation: Kubectl Reference Docs
# show the cluster version
[root@k8s-master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
# show cluster info
[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://172.25.254.100:6443
CoreDNS is running at https://172.25.254.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# create a webcluster deployment with 2 pods
[root@k8s-master ~]# kubectl create deployment webcluster --image nginx --replicas 2
# list deployments
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 3/3 3 3 69m
# get help for a resource
[root@k8s-master ~]# kubectl explain deployment
GROUP: apps
KIND: Deployment
VERSION: v1
# edit the deployment's configuration
[root@k8s-master ~]# kubectl edit deployments.apps web
@@@@ output omitted @@@@
spec:
progressDeadlineSeconds: 600
replicas: 2
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 2/2 2 2 73m
# change the deployment's configuration with a patch
[root@k8s-master ~]# kubectl patch deployments.apps web -p '{"spec":{"replicas":4}}'
deployment.apps/web patched
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 4/4 4 4 74m
# delete the resource
[root@k8s-master ~]# kubectl delete deployments.apps web
deployment.apps "web" deleted
[root@k8s-master ~]# kubectl get deployments.apps
No resources found in default namespace.
Run-and-debug command examples
# run a pod
[root@k8s-master ~]# kubectl run testpod --image nginx
pod/testpod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 6s
# expose a port
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 59m
[root@k8s-master ~]# kubectl expose pod testpod --port 80 --target-port 80
service/testpod exposed
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 60m
testpod ClusterIP 10.109.79.4 <none> 80/TCP 5s
[root@k8s-master ~]# curl 10.109.79.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master ~]#

# show detailed resource information
[root@k8s-master ~]# kubectl describe pods testpod
# show resource logs
[root@k8s-master ~]# kubectl logs pods/testpod
Advanced command examples
# generate a yaml template file from a command
[root@k8s-master ~]# kubectl create deployment --image nginx webcluster --dry-run=client -o yaml > webcluster.yml
# create resources from the yaml file
[root@k8s-master ~]# vim webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: nginx
        name: nginx
[root@k8s-master ~]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 2/2 2 2 17s
[root@k8s-master ~]# kubectl delete -f webcluster.yml
deployment.apps "webcluster" deleted

What is a Pod
- A Pod is the smallest deployable unit of computing you can create and manage in Kubernetes.
- A Pod represents a running process in the cluster; each Pod has a unique IP.
- A Pod is like a pea pod: it contains one or more containers (usually Docker containers).
- The containers in a Pod share the IPC, Network, and UTS namespaces.
Managing Pods with controllers (recommended)
High availability and reliability:
- Automatic failure recovery: if a Pod fails or is deleted, the controller automatically creates a new Pod to maintain the desired replica count, keeping the application available and reducing service interruptions caused by a single Pod failure.
- Health checks and self-healing: controllers can be configured with health checks on Pods (such as liveness and readiness probes). If a Pod is unhealthy, the controller takes appropriate action, such as restarting it or deleting and recreating it, to keep the application running normally.
Scalability:
- Easy scaling: the number of Pods can be increased or decreased with a simple command or configuration change to match the workload — scale out quickly during traffic peaks, scale in during quiet periods to save resources.
- Horizontal Pod Autoscaling (HPA): the Pod count can be adjusted automatically based on metrics (such as CPU utilization, memory usage, or application-specific metrics), enabling dynamic resource allocation and cost optimization.
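As a sketch, HPA can be attached to a Deployment with one command, assuming the metrics-server add-on is installed (it is not part of the setup described here, and the deployment name is illustrative):

```shell
# scale the web deployment between 2 and 10 replicas,
# targeting ~80% average CPU utilization
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
kubectl get hpa
```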
Version management and updates:
- Rolling updates: controllers such as Deployment can perform rolling updates, gradually replacing old-version Pods with new ones so the application stays available throughout; the update rate and strategy are configurable to minimize user impact.
- Rollback: if an update goes wrong, you can easily roll back to the previous stable version, preserving the application's stability and reliability.
Declarative configuration:
- Concise configuration: the application's deployment requirements are defined in declarative YAML or JSON files, which are easy to understand, maintain, version-control, and collaborate on.
- Desired-state management: you only declare the application's desired state (replica count, container image, etc.); the controller automatically reconciles the actual state to match. No manual Pod creation or deletion is needed, which improves management efficiency.
Service discovery and load balancing:
- Automatic registration and discovery: a Kubernetes Service automatically discovers the Pods managed by a controller and routes traffic to them, making service discovery and load balancing simple and reliable without a manually configured load balancer.
- Traffic distribution: requests can be distributed across Pods according to different policies (round-robin, random, etc.), improving the application's performance and availability.
Multi-environment consistency:
- Consistent deployment: the same controllers and configuration can be used to deploy the application across environments (development, test, production), ensuring consistent behavior, fewer deployment discrepancies and errors, and better development and operations efficiency.
Example:
# create a deployment, which runs pods automatically
[root@k8s-master ~]# kubectl create deployment timinglee --image nginx
deployment.apps/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee-859fbf84d6-ncw7m 1/1 Running 0 6s
# scale timinglee out
[root@k8s-master ~]# kubectl scale deployment timinglee --replicas 6
deployment.apps/timinglee scaled
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee-859fbf84d6-bcg9s 1/1 Running 0 5s
timinglee-859fbf84d6-lxlbb 1/1 Running 0 5s
timinglee-859fbf84d6-ncw7m 1/1 Running 0 23s
timinglee-859fbf84d6-pwcvq 1/1 Running 0 5s
timinglee-859fbf84d6-s44sh 1/1 Running 0 5s
timinglee-859fbf84d6-sqbg6 1/1 Running 0 5s
# scale timinglee in
[root@k8s-master ~]# kubectl scale deployment timinglee --replicas 2
deployment.apps/timinglee scaled
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee-859fbf84d6-ncw7m 1/1 Running 0 34s
timinglee-859fbf84d6-s44sh 1/1 Running 0 16s
[root@k8s-master ~]#

Updating application versions
# create pods via a deployment
[root@k8s-master ~]# kubectl create deployment timinglee --image myapp:v1 --replicas 2
deployment.apps/timinglee created
# expose the port
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80
service/timinglee exposed
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d17h
timinglee ClusterIP 10.110.195.120 <none> 80/TCP 8s
# access the service
[root@k8s-master ~]# curl 10.110.195.120
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.110.195.120
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.110.195.120
# view rollout history
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION CHANGE-CAUSE
1 <none>
# update the deployment's image version
[root@k8s-master ~]# kubectl set image deployments/timinglee myapp=myapp:v2
deployment.apps/timinglee image updated
# view rollout history
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION CHANGE-CAUSE
1 <none>
2 <none>
# test the served content
[root@k8s-master ~]# curl 10.110.195.120
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.110.195.120
# roll back to a previous version
[root@k8s-master ~]# kubectl rollout undo deployment timinglee --to-revision 1
deployment.apps/timinglee rolled back
[root@k8s-master ~]# curl 10.110.195.120
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Deploying applications with YAML files
Declarative configuration:
- Clearly expresses desired state: the application's deployment requirements — replica count, container configuration, network settings, and so on — are described declaratively, making the configuration easy to understand and maintain and the application's expected state easy to inspect.
- Repeatability and version control: configuration files can be version-controlled, ensuring consistent deployments across environments; you can easily roll back to earlier versions or reuse the same configuration elsewhere.
- Team collaboration: files are easy to share, review, and modify across a team, improving deployment reliability and stability.
Flexibility and extensibility:
- Rich configuration options: YAML files can configure all kinds of Kubernetes resources in detail — Deployment, Service, ConfigMap, Secret, etc. — and can be highly customized to the application's specific needs.
- Composition and extension: multiple resources can be combined in one or more YAML files to build complex deployment architectures, and new resources can easily be added or existing ones modified as requirements evolve.
Tool integration:
- CI/CD integration: YAML configuration files can be wired into continuous integration and continuous deployment (CI/CD) pipelines for automated deployment — for example, a code commit can automatically trigger a deployment of the application to different environments from the same files.
- Command-line support: Kubernetes' command-line tool kubectl has excellent support for YAML configuration files, making it easy to apply, update, and delete them; other tools can validate and analyze the files to ensure correctness and safety.
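For instance, a manifest can be sanity-checked before it is applied (the filename is illustrative):

```shell
# client-side validation only
kubectl apply -f nginx-pod.yaml --dry-run=client
# server-side validation, plus a preview of what would change
kubectl apply -f nginx-pod.yaml --dry-run=server
kubectl diff -f nginx-pod.yaml
```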
Resource manifest fields
| Field | Type | Description |
|---|---|---|
| version | String | The K8S API version, currently mostly v1; can be queried with kubectl api-versions |
| kind | String | The resource type/role the yaml file defines, e.g. Pod |
| metadata | Object | Metadata object; use the fixed key metadata |
| metadata.name | String | Name of the metadata object, chosen by us, e.g. the Pod's name |
| metadata.namespace | String | Namespace of the metadata object, defined by us |
| Spec | Object | Detailed object definition; use the fixed key spec |
| spec.containers[] | list | The Spec object's container definitions, as a list |
| spec.containers[].name | String | The container's name |
| spec.containers[].image | string | The image name to use |
| spec.containers[].imagePullPolicy | String | Image pull policy, one of: (1) Always: always try to re-pull the image (2) IfNotPresent: use the local image if present (3) Never: only ever use the local image |
| spec.containers[].command[] | list | Command to run when the container starts; if unset, the command baked into the image runs |
| spec.containers[].args[] | list | Arguments for the container's command; multiple allowed |
| spec.containers[].workingDir | String | The container's working directory |
| spec.containers[].volumeMounts[] | list | Storage volume configuration inside the container |
| spec.containers[].volumeMounts[].name | String | Name of a volume the container can mount |
| spec.containers[].volumeMounts[].mountPath | String | Path at which the volume is mounted |
| spec.containers[].volumeMounts[].readOnly | String | Read/write mode of the mount path, true or false; read-write by default |
| spec.containers[].ports[] | list | Ports the container needs |
| spec.containers[].ports[].name | String | Port name |
| spec.containers[].ports[].containerPort | String | Port the container listens on |
| spec.containers[].ports[].hostPort | String | Port the host listens on; defaults to containerPort. Note that with hostPort set, a host cannot run a second replica of the container (host ports would collide) |
| spec.containers[].ports[].protocol | String | Port protocol, TCP or UDP; defaults to TCP |
| spec.containers[].env[] | list | Environment variables to set before the container runs |
| spec.containers[].env[].name | String | Environment variable name |
| spec.containers[].env[].value | String | Environment variable value |
| spec.containers[].resources | Object | Resource limits and requests (upper bounds on the container's resources start here) |
| spec.containers[].resources.limits | Object | Upper limits on the container's resources at runtime |
| spec.containers[].resources.limits.cpu | String | CPU limit, in cores; 1 = 1000m |
| spec.containers[].resources.limits.memory | String | Memory limit, in MiB/GiB |
| spec.containers[].resources.requests | Object | Settings used when the container starts and is scheduled |
| spec.containers[].resources.requests.cpu | String | CPU request, in cores; the amount available when the container starts |
| spec.containers[].resources.requests.memory | String | Memory request, in MiB/GiB; the amount available when the container starts |
| spec.restartPolicy | string | The Pod's restart policy, default Always. (1) Always: once the Pod terminates, however the container exited, the kubelet restarts it (2) OnFailure: the kubelet restarts the container only if the Pod terminated with a non-zero exit code; on a clean exit (code 0) it is not restarted (3) Never: after the Pod terminates, the kubelet reports the exit code to the master and does not restart it |
| spec.nodeSelector | Object | Node label selector, as key:value pairs |
| spec.imagePullSecrets | Object | Secret to use when pulling images, as name:secretkey |
| spec.hostNetwork | Boolean | Whether to use host networking; default false. true means use the host's network without the docker bridge — with true set, a second replica cannot start on the same host |
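The fields above can be combined in a single manifest; a minimal sketch (names and values are illustrative, not from the deployment above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # metadata.name
  namespace: default       # metadata.namespace
spec:
  restartPolicy: Always    # spec.restartPolicy
  containers:
  - name: web                       # spec.containers[].name
    image: myapp:v1                 # spec.containers[].image
    imagePullPolicy: IfNotPresent   # use the local image if present
    ports:
    - containerPort: 80             # port the container listens on
      protocol: TCP
    env:
    - name: MODE                    # spec.containers[].env[]
      value: demo
    resources:
      requests:                     # used at scheduling time
        cpu: 100m
        memory: 64Mi
      limits:                       # runtime upper bounds
        cpu: 500m
        memory: 128Mi
```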
Getting help on resource fields
kubectl explain pod.spec.containers
Running a simple single-container pod
Generate a yaml template with a command
[root@k8s-master ~]# kubectl run timinglee --image myapp:v1 --dry-run=client -o yaml > pod.yml
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee    # pod label
  name: timinglee     # pod name
spec:
  containers:
  - image: myapp:v1   # pod image
    name: timinglee   # container name
Understanding how containers in a pod share networking
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: myapp:v1
    name: myapp1
  - image: busyboxplus:latest
    name: busyboxplus
    command: ["/bin/sh","-c","sleep 1000000"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 6m9s
[root@k8s-master ~]# kubectl exec test -c busyboxplus -- curl -s localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Port mapping
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: myapp:v1
    name: myapp1
    ports:
    - name: http
      containerPort: 80
      hostPort: 80
      protocol: TCP
# test
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 98s 10.244.2.21 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.2.21
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#

Setting environment variables
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","echo $NAME;sleep 3000000"]
    env:
    - name: NAME
      value: timinglee
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl logs pods/test busybox
timinglee

Container restart management
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  restartPolicy: Always
  containers:
  - image: myapp:v1
    name: myapp
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 20s 10.244.2.23 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]#

The Pod lifecycle
Init containers
Official docs: Pod | Kubernetes

- A Pod can contain multiple containers running applications, and it can also have one or more Init containers that run before the application containers start.
- Init containers are very similar to regular containers, with two differences:
- They always run to completion.
- Init containers do not support readiness probes, because they must finish before the Pod can be ready; each Init container must succeed before the next one can run.
- If a Pod's Init container fails, Kubernetes restarts the Pod repeatedly until the Init container succeeds — unless the Pod's restartPolicy is Never, in which case it is not restarted.
What Init containers are for
- Init containers can include utilities or custom setup code that the application containers do not contain.
- Init containers can run these tools safely, so the tools do not reduce the security of the application image.
- Image builders and deployers can work independently, with no need to jointly build a single combined application image.
- Init containers can run with a different filesystem view than the application containers in the same Pod; for example, they can be given access to Secrets that the application containers cannot access.
- Because Init containers must complete before the application containers start, they provide a mechanism to block or delay application startup until a set of preconditions is met. Once the preconditions are satisfied, all application containers in the Pod start in parallel.
Init container example
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: initpod
  name: initpod
spec:
  containers:
  - image: reg.timinglee.org/timinglee/myapp:v1
    name: myapp
  initContainers:
  - name: init-myservice
    image: busybox
    command: ["sh","-c","until test -e /testfile;do echo wating for myservice; sleep 2;done"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/initpod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 0/1 Init:0/1 0 10s
[root@k8s-master ~]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
[root@k8s-master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 1/1 Running 0 33s

探针
探针是由 kubelet 对容器执行的定期诊断:
-
ExecAction:在容器内执行指定命令。如果命令退出时返回码为 0 则认为诊断成功。
-
TCPSocketAction:对指定端口上的容器的 IP 地址进行 TCP 检查。如果端口打开,则诊断被认为是成功的。
-
HTTPGetAction:对指定的端口和路径上的容器的 IP 地址执行 HTTP Get 请求。如果响应的状态码大于等于200 且小于 400,则诊断被认为是成功的。
每次探测都将获得以下三种结果之一:
-
成功:容器通过了诊断。
-
失败:容器未通过诊断。
-
未知:诊断失败,因此不会采取任何行动。
kubelet 可以选择性地对运行中的容器执行以下三种探针,并做出相应反应:
-
livenessProbe:指示容器是否正在运行。如果存活探测失败,则 kubelet 会杀死容器,并且容器将受到其重启策略的影响。如果容器不提供存活探针,则默认状态为 Success。
-
readinessProbe:指示容器是否准备好服务请求。如果就绪探测失败,端点控制器将从与 Pod 匹配的所有 Service 的端点中删除该 Pod 的 IP 地址。初始延迟之前的就绪状态默认为 Failure。如果容器不提供就绪探针,则默认状态为 Success。
-
startupProbe: 指示容器中的应用是否已经启动。如果提供了启动探测(startup probe),则禁用所有其他探测,直到它成功为止。如果启动探测失败,kubelet 将杀死容器,容器服从其重启策略进行重启。如果容器没有提供启动探测,则默认状态为成功Success。
ReadinessProbe 与 LivenessProbe 的区别
-
ReadinessProbe 当检测失败后,将 Pod 的 IP:Port 从对应的 EndPoint 列表中删除。
-
LivenessProbe 当检测失败后,将杀死容器并根据 Pod 的重启策略来决定作出对应的措施
StartupProbe 与 ReadinessProbe、LivenessProbe 的区别
-
如果三个探针同时存在,先执行 StartupProbe 探针,其他两个探针将会被暂时禁用,直到 pod 满足 StartupProbe 探针配置的条件,其他 2 个探针启动,如果不满足按照规则重启容器。
-
另外两种探针在容器启动后,会按照配置持续探测,直到容器消亡才停止;而 StartupProbe 探针只在容器启动阶段按照配置探测,成功一次后,不再进行后续的探测。
探针实例
存活探针示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: liveness
name: liveness
spec:
containers:
- image: reg.timinglee.org/timinglee/myapp:v1
name: myapp
livenessProbe:
tcpSocket: #检测端口存在性
port: 8080
initialDelaySeconds: 3 #容器启动后等待多少秒探针才开始工作,默认是 0
periodSeconds: 1 #执行探测的时间间隔,默认为 10s
timeoutSeconds: 1 #探针执行检测请求后,等待响应的超时时间,默认为 1s
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/liveness created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness 1/1 Running 1 (1s ago) 7s
[root@k8s-master ~]#

就绪探针示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: readiness
name: readiness
spec:
containers:
- image: reg.timinglee.org/timinglee/myapp:v1
name: myapp
readinessProbe:
httpGet:
path: /test.html
port: 80
initialDelaySeconds: 1
periodSeconds: 3
timeoutSeconds: 1
#测试:
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 9s
[root@k8s-master ~]# kubectl describe pods readiness
Warning Unhealthy 26s (x66 over 5m43s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
[root@k8s-master ~]# kubectl describe services readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.76.37
IPs: 10.96.76.37
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: #没有暴露端口,就绪探针探测不满足暴露条件
Session Affinity: None
Events: <none>
[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 1/1 Running 0 3m29s
[root@k8s-master ~]# kubectl describe services readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.76.37
IPs: 10.96.76.37
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.26:80 #满足条件,端口暴露
Session Affinity: None
Events: <none>
[root@k8s-master ~]#
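上面给出了存活探针和就绪探针的示例,这里再补充一个 startupProbe 的配置草图(镜像沿用文中的 myapp:v1,探测路径与阈值均为示意假设):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: startup
spec:
  containers:
  - image: reg.timinglee.org/timinglee/myapp:v1
    name: myapp
    startupProbe:               # 启动探针成功之前,其它探针全部被禁用
      httpGet:
        path: /
        port: 80
      failureThreshold: 30      # 最多允许失败30次
      periodSeconds: 2          # 每2秒探测一次,即最长给应用 30*2=60 秒完成启动
    livenessProbe:              # startupProbe 成功后才开始工作
      tcpSocket:
        port: 80
      periodSeconds: 1
```

慢启动应用可以用这种方式避免在启动阶段被 livenessProbe 误杀。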

3、k8s中的控制器应用
什么是控制器
控制器也是管理pod的一种手段
-
自主式pod:pod退出或意外关闭后不会被重新创建
-
控制器管理的 Pod:在控制器的生命周期里,始终要维持 Pod 的副本数目
Pod控制器是管理pod的中间层,使用Pod控制器之后,只需要告诉Pod控制器,想要多少个什么样的Pod就可以了,它会创建出满足条件的Pod并确保每一个Pod资源处于用户期望的目标状态。如果Pod资源在运行中出现故障,它会基于指定策略重新编排Pod
当建立控制器后,会把期望状态写入etcd,k8s中的apiserver检索etcd中保存的期望状态,并对比pod的当前状态,如果出现差异,控制器会自动驱动集群恢复到期望状态
控制器常用类型
| 控制器名称 | 控制器用途 |
|---|---|
| Replication Controller | 比较原始的pod控制器,已经被废弃,由ReplicaSet替代 |
| ReplicaSet | ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行 |
| Deployment | 一个 Deployment 为 Pod 和 ReplicaSet 提供声明式的更新能力 |
| DaemonSet | DaemonSet 确保全部(或指定)节点上运行一个 Pod 的副本 |
| StatefulSet | StatefulSet 是用来管理有状态应用的工作负载 API 对象。 |
| Job | 执行批处理任务,仅执行一次任务,保证任务的一个或多个Pod成功结束 |
| CronJob | Cron Job 创建基于时间调度的 Jobs。 |
| HPA(全称Horizontal Pod Autoscaler) | 根据资源利用率自动调整Deployment/ReplicaSet中Pod的数量,实现Pod水平自动缩放 |
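表中的 HPA 在本文没有单独示例,下面给出一个假设性的配置草图(目标 Deployment 名称、副本上下限与阈值均为示意,实际使用需要集群中安装 metrics-server):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:               # 指定要伸缩的目标控制器
    apiVersion: apps/v1
    kind: Deployment
    name: deployment
  minReplicas: 2                # 副本数下限
  maxReplicas: 10               # 副本数上限
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80  # CPU平均利用率超过80%时触发扩容
```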
replicaset控制器

replicaset功能
-
ReplicaSet 是下一代的 Replication Controller,官方推荐使用ReplicaSet
-
ReplicaSet和Replication Controller的唯一区别是选择器的支持,ReplicaSet支持新的基于集合的选择器需求
-
ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行
-
虽然 ReplicaSets 可以独立使用,但今天它主要被Deployments 用作协调 Pod 创建、删除和更新的机制
replicaset参数说明
| 参数名称 | 字段类型 | 参数说明 |
|---|---|---|
| spec | Object | 详细定义对象,固定值就写Spec |
| spec.replicas | integer | 指定维护pod数量 |
| spec.selector | Object | Selector是对pod的标签查询,与pod数量匹配 |
| spec.selector.matchLabels | string | 指定Selector查询标签的名称和值,以key:value方式指定 |
| spec.template | Object | 指定对pod的描述信息,比如lab标签,运行容器的信息等 |
| spec.template.metadata | Object | 指定pod属性 |
| spec.template.metadata.labels | string | 指定pod标签 |
| spec.template.spec | Object | 详细定义对象 |
| spec.template.spec.containers | list | Spec对象的容器列表定义 |
| spec.template.spec.containers.name | string | 指定容器名称 |
| spec.template.spec.containers.image | string | 指定容器镜像 |
replicaset 示例
#生成yml文件
[root@k8s-master ~]# kubectl create deployment replicaset --image myapp:v1 --dry-run=client -o yaml > replicaset.yml
[root@k8s-master ~]# vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: replicaset #指定pod名称,一定小写,如果出现大写报错
spec:
replicas: 2 #指定维护pod数量为2
selector: #指定检测匹配方式
matchLabels: #指定匹配方式为匹配标签
app: myapp #指定匹配的标签为app=myapp
template: #模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
app: myapp
spec:
containers:
- image: reg.timinglee.org/timinglee/myapp:v1
name: myapp
[root@k8s-master ~]# kubectl apply -f replicaset.yml
replicaset.apps/replicaset created
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-f8d4q 1/1 Running 0 2m2s app=myapp
replicaset-fb4cm 1/1 Running 0 2m2s app=myapp
[root@k8s-master ~]# kubectl label pod replicaset-f8d4q app=timinglee --overwrite
pod/replicaset-f8d4q labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-f8d4q 1/1 Running 0 2m32s app=timinglee
replicaset-fb4cm 1/1 Running 0 2m32s app=myapp
replicaset-jcn6q 1/1 Running 0 6s app=myapp

deployment 控制器
deployment控制器的功能

-
为了更好的解决服务编排的问题,kubernetes在V1.2版本开始,引入了Deployment控制器。
-
Deployment控制器并不直接管理pod,而是通过管理ReplicaSet来间接管理Pod
-
Deployment管理ReplicaSet,ReplicaSet管理Pod
-
Deployment 为 Pod 和 ReplicaSet 提供了一个申明式的定义方法
-
在Deployment中ReplicaSet相当于一个版本
典型的应用场景:
-
用来创建Pod和ReplicaSet
-
滚动更新和回滚
-
扩容和缩容
-
暂停与恢复
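滚动更新的节奏可以通过 spec.strategy 字段控制,下面是一个假设性的片段(maxSurge/maxUnavailable 的数值仅作示意):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate         # 默认策略,另一种是Recreate(先删后建)
    rollingUpdate:
      maxSurge: 1               # 更新时最多比期望副本数多出1个pod
      maxUnavailable: 0         # 更新过程中不允许出现不可用的pod
```

除了像后文那样修改yml后重新apply,也可以用 kubectl rollout undo deployment <名称> 回滚到上一个版本。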
deployment控制器示例
#生成yaml文件
[root@k8s-master ~]# kubectl create deployment deployment --image myapp:v1 --dry-run=client -o yaml > deployment.yml
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: reg.timinglee.org/timinglee/myapp:v1
name: myapp
#建立pod
root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
#查看pod信息
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-6cb7db6dfb-jj544 1/1 Running 0 5s app=myapp,pod-template-hash=6cb7db6dfb
deployment-6cb7db6dfb-k847n 1/1 Running 0 5s app=myapp,pod-template-hash=6cb7db6dfb
deployment-6cb7db6dfb-qn6pg 1/1 Running 0 5s app=myapp,pod-template-hash=6cb7db6dfb
deployment-6cb7db6dfb-vkd8b 1/1 Running 0 5s app=myapp,pod-template-hash=6cb7db6dfb

版本迭代
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-6cb7db6dfb-jj544 1/1 Running 0 2m29s 10.244.2.30 k8s-node2.timinglee.org <none> <none>
deployment-6cb7db6dfb-k847n 1/1 Running 0 2m29s 10.244.1.21 k8s-node1.timinglee.org <none> <none>
deployment-6cb7db6dfb-qn6pg 1/1 Running 0 2m29s 10.244.1.20 k8s-node1.timinglee.org <none> <none>
deployment-6cb7db6dfb-vkd8b 1/1 Running 0 2m29s 10.244.2.29 k8s-node2.timinglee.org <none> <none>
#pod运行容器版本为v1
[root@k8s-master ~]# curl 10.244.2.30
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
#更新容器运行版本
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
minReadySeconds: 5 #最小就绪时间5秒
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: reg.timinglee.org/timinglee/myapp:v2
#更新为版本2
name: myapp
[root@k8s2 pod]# kubectl apply -f deployment.yml
#测试更新效果
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-54485c5c7c-9j5m9 1/1 Running 0 2m14s 10.244.2.31 k8s-node2.timinglee.org <none> <none>
deployment-54485c5c7c-bzjdt 1/1 Running 0 2m14s 10.244.1.22 k8s-node1.timinglee.org <none> <none>
deployment-54485c5c7c-vxxdq 1/1 Running 0 2m8s 10.244.1.23 k8s-node1.timinglee.org <none> <none>
deployment-54485c5c7c-z4wm7 1/1 Running 0 2m8s 10.244.2.32 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.2.31
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

版本回滚
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: reg.timinglee.org/timinglee/myapp:v1 #回滚到之前版本
name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured
#测试回滚效果
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-6cb7db6dfb-28rg9 1/1 Running 0 5s 10.244.1.25 k8s-node1.timinglee.org <none> <none>
deployment-6cb7db6dfb-hjn4c 1/1 Running 0 7s 10.244.1.24 k8s-node1.timinglee.org <none> <none>
deployment-6cb7db6dfb-ncmhc 1/1 Running 0 5s 10.244.2.34 k8s-node2.timinglee.org <none> <none>
deployment-6cb7db6dfb-rd52z 1/1 Running 0 7s 10.244.2.33 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.1.25
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#

daemonset控制器
daemonset功能

DaemonSet 确保全部(或者某些)节点上运行一个 Pod 的副本。当有节点加入集群时, 也会为他们新增一个 Pod ,当有节点从集群移除时,这些 Pod 也会被回收。删除 DaemonSet 将会删除它创建的所有 Pod
DaemonSet 的典型用法:
-
在每个节点上运行集群存储 DaemonSet,例如 glusterd、ceph。
-
在每个节点上运行日志收集 DaemonSet,例如 fluentd、logstash。
-
在每个节点上运行监控 DaemonSet,例如 Prometheus Node Exporter、zabbix agent等
-
一种简单的用法是为每种类型的 daemon 都启动一个覆盖所有节点的 DaemonSet
-
一个稍微复杂的用法是单独对每种 daemon 类型使用多个 DaemonSet,但具有不同的标志, 并且对不同硬件类型具有不同的内存、CPU 要求
daemonset 示例
[root@k8s2 pod]# cat daemonset-example.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemonset-example
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
tolerations: #对于污点节点的容忍
- effect: NoSchedule
operator: Exists
containers:
- name: nginx
image: nginx
[root@k8s-master ~]# kubectl apply -f daemonset-example.yml
daemonset.apps/daemonset-example created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-example-5s2xp 1/1 Running 0 104s 10.244.0.3 k8s-master.timinglee.org <none> <none>
daemonset-example-ks9tp 1/1 Running 0 104s 10.244.1.26 k8s-node1.timinglee.org <none> <none>
daemonset-example-mtgng 1/1 Running 0 104s 10.244.2.35 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]#
#回收
[root@k8s2 pod]# kubectl delete -f daemonset-example.yml

job 控制器
job控制器功能

Job,主要用于负责批量处理(一次要处理指定数量任务)短暂的一次性(每个任务仅运行一次就结束)任务
Job特点如下:
-
当Job创建的pod执行成功结束时,Job将记录成功结束的pod数量
-
当成功结束的pod达到指定的数量时,Job将完成执行
[root@k8s-master ~]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
completions: 6 #一共完成任务数为6
parallelism: 2 #每次并行完成2个
template:
spec:
containers:
- name: pi
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] #计算圆周率小数点后2000位
restartPolicy: Never #关闭后不自动重启
backoffLimit: 4 #运行失败后最多重试4次
[root@k8s-master ~]# kubectl apply -f job.yml
[!NOTE]
关于重启策略设置的说明:
如果指定为OnFailure,则job会在pod出现故障时重启容器
而不是创建pod,failed次数不变
如果指定为Never,则job会在pod出现故障时创建新的pod
并且故障pod不会消失,也不会重启,failed次数加1
如果指定为Always的话,就意味着一直重启,意味着job任务会重复去执行了
cronjob 控制器
cronjob 控制器功能

-
Cron Job 创建基于时间调度的 Jobs。
-
CronJob控制器以Job控制器资源为其管控对象,并借助它管理pod资源对象,
-
CronJob可以以类似于Linux操作系统的周期性任务作业计划的方式控制其运行时间点及重复运行的方式。
-
CronJob可以在特定的时间点(反复的)去运行job任务。
cronjob 控制器 示例
[root@k8s-master ~]# cat cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-29267088-kpq89 0/1 Completed 0 53s
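schedule 字段使用与 Linux crontab 相同的 cron 语法;下面的片段补充几个常用的调度控制字段(数值均为示意假设):

```yaml
spec:
  schedule: "*/5 * * * *"          # 每5分钟执行一次
  concurrencyPolicy: Forbid        # 上一次Job还没结束时,跳过本次调度
  startingDeadlineSeconds: 60      # 错过调度点60秒内仍可补跑
  successfulJobsHistoryLimit: 3    # 保留最近3个成功的Job
  failedJobsHistoryLimit: 1        # 保留最近1个失败的Job
```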

4、kubernetes中的微服务
什么是微服务
用控制器来完成集群的工作负载,那么应用如何暴露出去?需要通过微服务暴露出去后才能被访问
-
Service是一组提供相同服务的Pod对外开放的接口。
-
借助Service,应用可以实现服务发现和负载均衡。
-
service默认只支持4层负载均衡能力,没有7层功能。(可以通过Ingress实现)

微服务的类型
| 微服务类型 | 作用描述 |
|---|---|
| ClusterIP | 默认值,k8s系统给service自动分配的虚拟IP,只能在集群内部访问 |
| NodePort | 将Service通过指定的Node上的端口暴露给外部,访问任意一个NodeIP:nodePort都将路由到ClusterIP |
| LoadBalancer | 在NodePort的基础上,借助cloud provider创建一个外部的负载均衡器,并将请求转发到 NodeIP:NodePort,此模式只能在云服务器上使用 |
| ExternalName | 将服务通过 DNS CNAME 记录方式转发到指定的域名(通过 spec.externalName 设定) |
示例:
#生成控制器文件并建立控制器
[root@k8s-master ~]# kubectl create deployment timinglee --image myapp:v1 --replicas 2 --dry-run=client -o yaml > timinglee.yaml
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: timinglee
name: timinglee
spec:
replicas: 2
selector:
matchLabels:
app: timinglee
template:
metadata:
creationTimestamp: null
labels:
app: timinglee
spec:
containers:
- image: myapp:v1
name: myapp
--- #不同资源间用---隔开
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee
name: timinglee
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee created
[root@k8s-master ~]# kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h9m <none>
timinglee ClusterIP 10.96.113.169 <none> 80/TCP 105m app=timinglee

微服务默认使用iptables调度
[root@k8s-master ~]# kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h10m <none>
timinglee ClusterIP 10.96.113.169 <none> 80/TCP 105m app=timinglee
[root@k8s-master ~]#

ipvs模式
-
Service 是由 kube-proxy 组件,加上 iptables 来共同实现的
-
kube-proxy 通过 iptables 处理 Service 的过程,需要在宿主机上设置相当多的 iptables 规则,如果宿主机有大量的Pod,不断刷新iptables规则,会消耗大量的CPU资源
-
IPVS模式的service,可以使K8s集群支持更多量级的Pod
ipvs模式配置方式
1 在所有节点中安装ipvsadm
[root@k8s-所有节点 pod]yum install ipvsadm -y
2 修改master节点的代理配置
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
metricsBindAddress: ""
mode: "ipvs" #设置kube-proxy使用ipvs模式
nftables:
3 重启pod:pod读取的是其启动时的配置,修改配置文件后已经运行的pod不会自动生效,所以要重启kube-proxy的pod
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-5zsj6" deleted
pod "kube-proxy-gt8ql" deleted
pod "kube-proxy-jq8rq" deleted
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.254.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.1.2:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.2:9153 Masq 1 0 0
-> 10.244.1.2:9153 Masq 1 0 0
TCP 10.96.113.169:80 rr
-> 10.244.1.29:80 Masq 1 0 0
-> 10.244.2.41:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.1.2:53 Masq 1 0 0
[root@k8s-master ~]#

注:切换ipvs模式后,kube-proxy会在宿主机上添加一个虚拟网卡:kube-ipvs0,并分配所有service IP
微服务类型详解
clusterip
特点:
clusterip模式只能在集群内访问,并对集群内的pod提供健康检测和自动发现功能
示例:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee
name: timinglee
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: ClusterIP
service创建后集群DNS提供解析
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27827
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 057d9ff344fe9a3a (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A
;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 30 IN A 10.97.59.25
;; Query time: 8 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:44:30 CST 2024
;; MSG SIZE rcvd: 127
ClusterIP中的特殊模式headless
headless(无头服务)
对于无头 Service 并不会分配 Cluster IP,kube-proxy不会处理它们,而且平台也不会为它们进行负载均衡和路由,集群访问通过dns解析直接指向业务pod的IP,所有的调度由dns单独完成
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: timinglee
name: timinglee
spec:
replicas: 2
selector:
matchLabels:
app: timinglee
template:
metadata:
creationTimestamp: null
labels:
app: timinglee
spec:
containers:
- image: myapp:v1
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee
name: timinglee
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: ClusterIP
clusterIP: None
[root@k8s-master ~]# kubectl delete -f timinglee.yaml
deployment.apps "timinglee" deleted
service "timinglee" deleted
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee created
#测试
[root@k8s-master ~]# kubectl get services timinglee
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee ClusterIP None <none> 80/TCP 7s
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49466
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 3f2b9caf6783f9bc (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A
;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 30 IN A 10.244.1.30
timinglee.default.svc.cluster.local. 30 IN A 10.244.2.42
;; Query time: 18 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Aug 24 17:12:45 CST 2025
;; MSG SIZE rcvd: 178
#开启一个busyboxplus的pod测试
[root@k8s-master ~]# kubectl run test --image busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # nslookup timinglee-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'timinglee-service'
/ # nslookup timinglee
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: timinglee
Address 1: 10.244.2.42 10-244-2-42.timinglee.default.svc.cluster.local
Address 2: 10.244.1.30 10-244-1-30.timinglee.default.svc.cluster.local
/ # curl timinglee
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee/hostname.html
timinglee-c56f584cf-grmj7

nodeport
通过在节点上暴露端口,使外部主机可以通过 节点IP:<nodePort> 访问pod业务
其访问过程为:

示例:
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: NodePort
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee unchanged
service/timinglee-service created
[root@k8s-master ~]# kubectl get services timinglee-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee-service NodePort 10.105.174.55 <none> 80:31709/TCP 5s
[root@k8s-master ~]# for i in {1..5} ; do curl 172.25.254.100:31709/hostname.html ;done
timinglee-c56f584cf-grmj7
timinglee-c56f584cf-dq8z4
timinglee-c56f584cf-grmj7
timinglee-c56f584cf-dq8z4
timinglee-c56f584cf-grmj7
[root@k8s-master ~]#

注:nodeport默认端口范围是30000-32767,超出范围会报错
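如果希望对外端口固定而不是随机分配,可以在 service 中显式指定 nodePort(下面的端口号为示意,必须落在默认范围内):

```yaml
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31771    # 必须在30000-32767范围内,否则apply时报错
```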
loadbalancer
云平台会为我们分配vip并实现访问,如果是裸金属主机那么需要metallb来实现ip的分配

[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: timinglee
type: LoadBalancer
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee unchanged
service/timinglee-service configured
默认无法分配外部访问IP
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h32m
timinglee ClusterIP None <none> 80/TCP 13m
timinglee-service LoadBalancer 10.105.174.55 <pending> 80:31709/TCP 3m38s
LoadBalancer模式适用云平台,裸金属环境需要安装metallb提供支持

metalLB
官网:Installation :: MetalLB, bare metal load-balancer for Kubernetes

metalLB功能:为LoadBalancer分配vip
部署方式
1.设置ipvs模式
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
strictARP: true
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
2.下载部署文件(我是上传的)
[root@k8s-master ~]# docker load -i metalLB.tag.gz
3.修改文件中镜像地址,与harbor仓库路径保持一致
[root@k8s-master ~]# vim metallb-native.yaml
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8
4.上传镜像到harbor
[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 reg.timinglee.org/metallb/controller:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/controller:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/speaker:v0.14.8
部署服务
[root@k8s-master ~]# kubectl apply -f metallb-native.yaml
[root@k8s-master ~]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-b8tzc 1/1 Running 0 7m9s
speaker-kx6rt 1/1 Running 0 7m9s
speaker-ltd6q 1/1 Running 0 7m9s
speaker-stl5r 1/1 Running 0 7m9s
配置分配地址段
[root@k8s-master ~]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool #地址池名称
namespace: metallb-system
spec:
addresses:
- 172.25.254.50-172.25.254.99 #修改为自己本地地址段
--- #两个不同的kind中间必须加分割
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: example
namespace: metallb-system
spec:
ipAddressPools:
- first-pool #使用地址池
[root@k8s-master ~]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h44m
timinglee ClusterIP None <none> 80/TCP 25m
timinglee-service LoadBalancer 10.105.174.55 172.25.254.50 80:31709/TCP 15m
#通过分配地址从集群外访问服务
[root@k8s-master ~]# curl 172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

externalname
-
开启services后,不会被分配IP,而是用dns解析CNAME固定域名来解决ip变化问题
-
一般应用于外部业务和pod沟通或外部业务迁移到pod内时
-
在应用向集群迁移过程中,externalname在过渡阶段就可以起作用了。
-
集群外的资源迁移到集群时,在迁移的过程中ip可能会变化,但是域名+dns解析能完美解决此问题
示例:
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
labels:
app: timinglee-service
name: timinglee-service
spec:
selector:
app: timinglee
type: ExternalName
externalName: www.timinglee.org
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
[root@k8s-master ~]# kubectl get services timinglee-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee-service ExternalName <none> www.timinglee.org <none> 21m
[root@k8s-master ~]#

Ingress-nginx
官网:
Installation Guide - Ingress-Nginx Controller
ingress-nginx功能

-
一种全局的、为了代理不同后端 Service 而设置的负载均衡服务,支持7层
-
Ingress由两部分组成:Ingress controller和Ingress资源对象
-
Ingress Controller 会根据你定义的 Ingress 对象,提供对应的代理能力。
-
业界常用的各种反向代理项目,比如 Nginx、HAProxy、Envoy、Traefik 等,都已经为Kubernetes 专门维护了对应的 Ingress Controller。
部署ingress
下载部署文件
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml
[root@k8s-master ingress-1.11.2]# docker load -i ingress-nginx-1.11.2.tag.gz
[root@k8s-master ingress-1.11.2]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2 reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ingress-1.11.2]# docker push reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ingress-1.11.2]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3 reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ingress-1.11.2]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
安装ingress
[root@k8s-master ~]# vim deploy.yaml
445 image: ingress-nginx/controller:v1.11.2
546 image: ingress-nginx/kube-webhook-certgen:v1.4.3
599 image: ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# kubectl apply -f deploy.yaml
[root@k8s-master ~]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-ng5kq 0/1 Completed 0 53s
ingress-nginx-admission-patch-dzp2k 0/1 Completed 1 53s
ingress-nginx-controller-bb7d8f97c-xv2tx 1/1 Running 0 53s
[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.111.141.120 <none> 80:32293/TCP,443:30329/TCP 3m30s
ingress-nginx-controller-admission ClusterIP 10.110.232.236 <none> 443/TCP 3m30s
#修改微服务为loadbalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49 type: LoadBalancer
[root@k8s-master ~]# kubectl -n ingress-nginx get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.111.141.120 172.25.254.50 80:32293/TCP,443:30329/TCP 4m14s
ingress-nginx-controller-admission ClusterIP 10.110.232.236 <none> 443/TCP 4m14s
[root@k8s-master ~]#
#在ingress-nginx-controller中看到的对外IP就是ingress最终对外开放的ip

测试ingress
#生成yaml文件
[root@k8s-master ~]# kubectl create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml > timinglee-ingress.yml
[root@k8s-master ~]# vim timinglee-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: timinglee-service
port:
number: 80
#Exact(精确匹配),ImplementationSpecific(特定实现),Prefix(前缀匹配),Regular expression(正则表达式匹配)
#建立ingress控制器
[root@k8s-master ~]# kubectl apply -f timinglee-ingress.yml
ingress.networking.k8s.io/webserver created
[root@k8s-master ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress <none> * 172.25.254.20 80 2m21s
[root@k8s-master ~]# for n in {1..5}; do curl 172.25.254.50/hostname.html; done
timinglee-d798ddbc8-d4jbj
timinglee-d798ddbc8-d4jbj
timinglee-d798ddbc8-d4jbj
timinglee-d798ddbc8-fflt4
timinglee-d798ddbc8-fflt4
注意:ingress必须和其转发的service资源处于同一namespace

ingress 的高级用法
基于路径的访问
1.建立用于测试的控制器myapp
[root@k8s-master ~]# kubectl create deployment myapp-v1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yaml
[root@k8s-master ~]# kubectl create deployment myapp-v2 --image myapp:v2 --dry-run=client -o yaml > myapp-v2.yaml
[root@k8s-master ~]# vim myapp-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp-v1
name: myapp-v1
spec:
replicas: 1
selector:
matchLabels:
app: myapp-v1
strategy: {}
template:
metadata:
labels:
app: myapp-v1
spec:
containers:
- image: myapp:v1
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myapp-v1
name: myapp-v1
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: myapp-v1
[root@k8s-master ~]# kubectl apply -f myapp-v1.yaml
deployment.apps/myapp-v1 created
service/myapp-v1 created
[root@k8s-master ~]# vim myapp-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp-v2
name: myapp-v2
spec:
replicas: 1
selector:
matchLabels:
app: myapp-v2
template:
metadata:
labels:
app: myapp-v2
spec:
containers:
- image: myapp:v2
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myapp-v2
name: myapp-v2
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: myapp-v2
[root@k8s-master ~]# kubectl apply -f myapp-v2.yaml
deployment.apps/myapp-v2 created
service/myapp-v2 created
[root@k8s-master ~]# kubectl expose deployment myapp-v1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yaml
[root@k8s-master ~]# kubectl expose deployment myapp-v2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v2.yaml
[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h57m
myapp-v1 ClusterIP 10.103.108.227 <none> 80/TCP 52s
myapp-v2 ClusterIP 10.104.67.232 <none> 80/TCP 23s
timinglee ClusterIP None <none> 80/TCP 98m
timinglee-service LoadBalancer 10.111.111.52 172.25.254.51 80:31073/TCP 20m

2.建立ingress的yaml
[root@k8s-master ~]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: / #访问路径后加任何内容都被定向到/
name: ingress1
spec:
ingressClassName: nginx
rules:
- host: www.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /v1
pathType: Prefix
- backend:
service:
name: myapp-v2
port:
number: 80
path: /v2
pathType: Prefix
[root@k8s-master ~]# kubectl apply -f ingress1.yml
[root@k8s-master ~]# echo 172.25.254.50 www.timinglee.org >> /etc/hosts
[root@k8s-master ~]# curl www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
#nginx.ingress.kubernetes.io/rewrite-target: / 的功能实现
[root@k8s-master ~]# curl www.timinglee.org/v2/aaaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
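rewrite-target 还支持正则捕获组写法,可以在重写的同时保留路径的剩余部分(下面的 path 与 $2 写法来自 ingress-nginx 的标准用法,域名与 service 沿用文中示例):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # $2为第二个捕获组的内容
spec:
  ingressClassName: nginx
  rules:
  - host: www.timinglee.org
    http:
      paths:
      - path: /v1(/|$)(.*)             # /v1/abc.html 会被重写为 /abc.html
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
```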

基于域名的访问
#在测试主机中设定解析
[root@harbor ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org
[root@k8s-master ~]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress2
spec:
ingressClassName: nginx
rules:
- host: myappv1.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
- host: myappv2.timinglee.org
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
#利用文件建立ingress
[root@k8s-master ~]# kubectl apply -f ingress2.yml
ingress.networking.k8s.io/ingress2 created
[root@k8s-master ~]# kubectl describe ingress ingress2
Name: ingress2
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myappv1.timinglee.org
/ myapp-v1:80 (10.244.1.35:80)
myappv2.timinglee.org
/ myapp-v2:80 (10.244.2.49:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6s nginx-ingress-controller Scheduled for sync

#在测试主机中测试
[root@harbor ~]# curl www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@harbor ~]# curl www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@harbor ~]#

Setting up TLS encryption
[root@harbor ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100 k8s-master
172.25.254.10 k8s-node1
172.25.254.20 k8s-node2
172.25.254.254 reg.timinglee.org harbor
172.25.254.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org myapp-tls.timinglee.org
#generate a self-signed certificate
[root@k8s-master ~]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
#create a secret of type tls to hold the key pair
[root@k8s-master ~]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created
[root@k8s-master ~]# kubectl get secrets
NAME TYPE DATA AGE
web-tls-secret kubernetes.io/tls 2 6s

#create ingress3.yml with TLS enabled
[root@k8s-master ~]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress3
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
#test (-k skips certificate verification, needed for a self-signed cert)
[root@harbor ~]# curl -k https://myapp-tls.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
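To see what the Secret-backed certificate actually contains, `tls.crt` can be inspected with openssl. A sketch, recreating a certificate of the same form as above (the `/tmp` paths are illustrative only):

```shell
# Generate a throwaway key/cert pair exactly like the one used for web-tls-secret
openssl req -newkey rsa:2048 -nodes -keyout /tmp/tls.key -x509 -days 365 \
  -subj "/CN=nginxsvc/O=nginxsvc" -out /tmp/tls.crt 2>/dev/null
# Show the subject and expiry date of the certificate
openssl x509 -in /tmp/tls.crt -noout -subject -enddate
```

Because the CN (`nginxsvc`) does not match `myapp-tls.timinglee.org` and the cert is self-signed, curl needs `-k`.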

Setting up basic (auth) authentication
#create the htpasswd file
[root@k8s-master ~]# dnf install httpd-tools -y
[root@k8s-master ~]# htpasswd -cm auth lee
New password:
Re-type new password:
Adding password for user lee
[root@k8s-master ~]# cat auth
lee:$apr1$cY8oNP9X$L.YFCMvgLNUxOChU/n7RR0
#create a generic secret from the auth file
[root@k8s-master ~]# kubectl create secret generic auth-web --from-file auth
secret/auth-web created
[root@k8s-master ~]# kubectl describe secrets auth-web
Name: auth-web
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
auth: 42 bytes
#create ingress4.yml with basic authentication enabled
[root@k8s-master ~]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress4
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
#create ingress4
[root@k8s-master ~]# kubectl apply -f ingress4.yml
ingress.networking.k8s.io/ingress4 created
[root@k8s-master ~]# kubectl describe ingress ingress4
Name: ingress4
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.timinglee.org
Rules:
Host Path Backends
---- ---- --------
myapp-tls.timinglee.org
/ myapp-v1:80 (10.244.1.35:80)
Annotations: nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6s nginx-ingress-controller Scheduled for sync

#test:
[root@harbor ~]# curl -k https://myapp-tls.timinglee.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@harbor ~]# curl -k https://myapp-tls.timinglee.org -ulee:123456
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@harbor ~]#

Rewrite and redirection
[root@k8s-master ~]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /hostname.html
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress5
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ~]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/ingress5 created
[root@k8s-master ~]# kubectl describe ingress ingress5
Name: ingress5
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.timinglee.org
Rules:
Host Path Backends
---- ---- --------
myapp-tls.timinglee.org
/ myapp-v1:80 (10.244.1.35:80)
Annotations: nginx.ingress.kubernetes.io/app-root: /hostname.html
nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
#test:
[root@harbor ~]# curl -Lk https://myapp-tls.timinglee.org -ulee:123456
myapp-v1-7479d6c54d-tkfxz
[root@harbor ~]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:123456
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>
#fix the sub-path problem with a regex rewrite-target
[root@k8s-master ~]# vim ingress6.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress6
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /lee(/|$)(.*)    # regex matching /lee, /lee/ and /lee/abc
        pathType: ImplementationSpecific
[root@k8s-master ~]# kubectl apply -f ingress6.yml
#test
[root@harbor ~]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:123456
myapp-v1-7479d6c54d-tkfxz


Canary releases

What is a canary release
A canary release (also called a gray release) is a software release strategy.
Its main goal is to test and validate a new version on a small group of users or servers before rolling it out to the whole production environment, reducing the impact of any serious problem the new version may introduce.
As a Pod release method, a canary rollout adds new Pods before deleting old ones, so the total number of Pods never drops below the desired count. After part of the Pods are updated, the rollout pauses; only when the new Pods are confirmed to run correctly are the remaining Pods updated.
Canary release methods

Of these, header-based and weight-based canaries are the most commonly used.
Header-based (HTTP header) canary

-
Implemented through annotation extensions
-
Create a canary Ingress and configure the canary header key and value
-
Once the canary traffic is validated, switch the main Ingress to the new version
-
Previously we upgraded with the controller's rolling update (25% at a time by default). A header-based canary makes the upgrade smoother: only requests carrying the configured key and value are routed to the new version, so the new release can be verified before real traffic is shifted.
Example:
[root@k8s-master ~]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ~]# kubectl apply -f ingress7.yml
ingress.networking.k8s.io/myapp-v1-ingress created
[root@k8s-master ~]# kubectl describe ingress myapp-v1-ingress
Name: myapp-v1-ingress
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.timinglee.org
/ myapp-v1:80 (10.244.1.35:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 7s nginx-ingress-controller Scheduled for sync
#create the header-based canary ingress
[root@k8s-master ~]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ~]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created
[root@k8s-master ~]# kubectl describe ingress myapp-v2-ingress
Name: myapp-v2-ingress
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.timinglee.org
/ myapp-v2:80 (10.244.2.49:80)
Annotations: nginx.ingress.kubernetes.io/canary: true
nginx.ingress.kubernetes.io/canary-by-header: version
nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6s nginx-ingress-controller Scheduled for sync
#test:
[root@harbor ~]# curl myapp.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@harbor ~]# curl -H "version: 2" myapp.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
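Weight-based canaries, the other common method mentioned above, work the same way but shift a percentage of all traffic instead of matching a header. A minimal sketch (the Ingress name is hypothetical; it assumes the same myapp-v1/myapp-v2 Services) that sends roughly 10% of requests to v2:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # ~10% of requests go to the canary backend
  name: myapp-v2-ingress-weight                       # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
```

Raising canary-weight gradually (10 → 50 → 100) shifts more traffic to v2; once validated, the main Ingress is switched to the new version and the canary Ingress removed.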

5、Storage management in k8s
configmap
Functions of configmap
-
A configMap stores configuration data as key/value pairs.
-
The configMap resource provides a way to inject configuration data into Pods.
-
It decouples configuration files from images, making images portable and reusable.
-
etcd limits object size, so a configMap file cannot exceed 1MB.
Use cases for configmap
-
Populating the values of environment variables
-
Setting command-line arguments inside a container
-
Populating configuration files mounted in a volume
Ways to create a configmap
From literal values
[root@k8s-master ~]# kubectl create cm lee-config --from-literal fname=timing --from-literal name=lee
configmap/lee-config created
[root@k8s-master ~]# kubectl describe cm lee-config
Name: lee-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
fname:
----
timing
name:
----
lee
BinaryData
====
Events: <none>

From a file
[root@k8s-master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search timinglee.org
nameserver 8.8.8.8
[root@k8s-master ~]# kubectl create cm lee2-config --from-file /etc/resolv.conf
configmap/lee2-config created
[root@k8s-master ~]# kubectl describe cm lee2-config
Name: lee2-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
resolv.conf:
----
# Generated by NetworkManager
search timinglee.org
nameserver 8.8.8.8
BinaryData
====
Events: <none>

From a directory
[root@k8s-master ~]# mkdir leeconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local leeconfig/
[root@k8s-master ~]# kubectl create cm lee3-config --from-file leeconfig/
configmap/lee3-config created
[root@k8s-master ~]# kubectl describe cm lee3-config
Name: lee3-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
fstab:
----
#
# /etc/fstab
# Created by anaconda on Tue Jul 8 03:54:28 2025
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel_bogon-root / xfs defaults 0 0
UUID=ed271930-7438-4bef-a67b-f65ca1ce12ec /boot xfs defaults 0 0
/dev/mapper/rhel_bogon-home /home xfs defaults 0 0
#/dev/mapper/rhel_bogon-swap none swap defaults 0 0
rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
mount /dev/cdrom /rhel9.2
BinaryData
====
Events: <none>
[root@k8s-master ~]#

From a YAML file
[root@k8s-master ~]# kubectl create cm lee4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > lee-config.yaml
[root@k8s-master ~]# vim lee-config.yaml
[root@k8s-master ~]# kubectl apply -f lee-config.yaml
configmap/lee4-config created
[root@k8s-master ~]# kubectl describe cm lee4-config
Name: lee4-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db_host:
----
172.25.254.100
db_port:
----
3306
BinaryData
====
Events: <none>
[root@k8s-master ~]#
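For reference, the lee-config.yaml edited above would contain roughly the following; this is a sketch reconstructed from the dry-run command and the describe output, not shown verbatim in the original transcript:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lee4-config
data:
  db_host: 172.25.254.100
  db_port: "3306"    # quoted so YAML treats the value as a string
```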

How to use a configmap
-
Passed directly to the pod as environment variables
-
Used on the pod's command line
-
Mounted into the pod as a volume
Populating environment variables from a configmap
[root@k8s-master ~]# vim testpod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: key1
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_host
    - name: key2
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_port
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f testpod1.yml
pod/testpod unchanged
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.103.108.227
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.104.67.232
HOME=/
TIMINGLEE_SERVICE_SERVICE_HOST=10.111.111.52
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.103.108.227:80
MYAPP_V2_PORT=tcp://10.104.67.232:80
MYAPP_V2_SERVICE_PORT=80
TIMINGLEE_SERVICE_SERVICE_PORT=80
TIMINGLEE_SERVICE_PORT=tcp://10.111.111.52:80
MYAPP_V1_PORT_80_TCP_ADDR=10.103.108.227
MYAPP_V2_PORT_80_TCP_ADDR=10.104.67.232
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PROTO=tcp
TIMINGLEE_SERVICE_PORT_80_TCP_ADDR=10.111.111.52
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
TIMINGLEE_SERVICE_PORT_80_TCP_PORT=80
TIMINGLEE_SERVICE_PORT_80_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.103.108.227:80
MYAPP_V2_PORT_80_TCP=tcp://10.104.67.232:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
TIMINGLEE_SERVICE_PORT_80_TCP=tcp://10.111.111.52:80

#map all keys in the cm directly to environment variables
[root@k8s-master ~]# vim testpod2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f testpod2.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
MYAPP_V1_SERVICE_HOST=10.103.108.227
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.104.67.232
HOME=/
db_port=3306
TIMINGLEE_SERVICE_SERVICE_HOST=10.111.111.52
MYAPP_V1_PORT=tcp://10.103.108.227:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.104.67.232:80
MYAPP_V2_SERVICE_PORT=80
TIMINGLEE_SERVICE_PORT=tcp://10.111.111.52:80
TIMINGLEE_SERVICE_SERVICE_PORT=80
MYAPP_V1_PORT_80_TCP_ADDR=10.103.108.227
MYAPP_V2_PORT_80_TCP_ADDR=10.104.67.232
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
TIMINGLEE_SERVICE_PORT_80_TCP_ADDR=10.111.111.52
KUBERNETES_PORT_443_TCP_PROTO=tcp
TIMINGLEE_SERVICE_PORT_80_TCP_PORT=80
TIMINGLEE_SERVICE_PORT_80_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.103.108.227:80
MYAPP_V2_PORT_80_TCP=tcp://10.104.67.232:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
TIMINGLEE_SERVICE_PORT_80_TCP=tcp://10.111.111.52:80
db_host=172.25.254.100

#use the variables on the pod's command line
[root@k8s-master ~]# vim testpod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - echo ${db_host} ${db_port}    # variables must be referenced with ${}
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f testpod3.yml
pod/testpod created
#check the output
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.100 3306

Using a configmap through a volume
[root@k8s-master ~]# vim testpod4.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - cat /config/db_host
    volumeMounts:            # mount the volume
    - name: config-volume    # volume name (must match the declaration below)
      mountPath: /config
  volumes:                   # declare the volume
  - name: config-volume      # volume name
    configMap:
      name: lee4-config
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f testpod4.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs testpod
172.25.254.100

Populating a pod's configuration file from a configMap
#create the configuration file template
[root@k8s-master ~]# vim nginx.conf
server {
    listen 8000;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;
}
#generate the cm from the template
[root@k8s-master ~]# kubectl create cm nginx-conf --from-file nginx.conf
configmap/nginx-conf created
[root@k8s-master ~]# kubectl describe cm nginx-conf
Name: nginx-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
nginx.conf:
----
server {
    listen 8000;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;
}
BinaryData
====
Events: <none>
#create the nginx Deployment manifest
[root@k8s-master ~]# kubectl create deployment nginx --image nginx:latest --replicas 1 --dry-run=client -o yaml > nginx.yml
[root@k8s-master ~]# vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: nginx-conf
[root@k8s-master ~]# kubectl apply -f nginx.yml
#test
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-jkj6x 1/1 Running 0 7m54s 10.244.1.40 k8s-node1.timinglee.org <none> <none>
test 1/1 Running 1 (3h51m ago) 3h51m 10.244.2.44 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.1.40:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master ~]#

Changing configuration by hot-updating the cm
[root@k8s-master ~]# kubectl edit cm nginx-conf
apiVersion: v1
data:
  nginx.conf: |
    server {
        listen 8080;    # port changed to 8080
        server_name _;
        root /usr/share/nginx/html;
        index index.html;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2025-08-24T13:00:35Z"
  name: nginx-conf
  namespace: default
  resourceVersion: "44379"
  uid: 714af600-91b9-4844-984f-c1e2109e8f18
#check the configuration file inside the pod
[root@k8s-master ~]# kubectl exec pods/nginx-8487c65cfc-jkj6x -- cat /etc/nginx/conf.d/nginx.conf
server {
    listen 8080;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;
}
Editing the configuration alone does not take effect; after the pod is deleted, the controller recreates it and the new configuration is picked up
[root@k8s-master ~]# kubectl delete pods nginx-8487c65cfc-jkj6x
pod "nginx-8487c65cfc-jkj6x" deleted
[root@k8s-master ~]# curl 10.244.2.52:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Secrets configuration management
What secrets do
-
A Secret object stores sensitive information such as passwords, OAuth tokens, and ssh keys.
-
Putting sensitive data in a Secret is safer and more flexible than putting it in a Pod definition or a container image.
-
A Pod can use a Secret in two ways:
-
mounted as files into one or more of the Pod's containers through a volume;
-
used by the kubelet when pulling images for the Pod.
-
Secret types:
-
Service Account: Kubernetes automatically creates Secrets holding API credentials and automatically modifies Pods to use this type of Secret.
-
Opaque: stores data base64-encoded; it can be decoded with base64 --decode to recover the original data, so it offers only weak protection.
-
kubernetes.io/dockerconfigjson: stores authentication information for a docker registry.
Creating secrets
From files
[root@k8s-master ~]# echo -n timinglee > username.txt
[root@k8s-master ~]# echo -n lee > password.txt
[root@k8s-master ~]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@k8s-master ~]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
password.txt: bGVl
username.txt: dGltaW5nbGVl
kind: Secret
metadata:
creationTimestamp: "2025-08-24T13:16:14Z"
name: userlist
namespace: default
resourceVersion: "44931"
uid: dde2189a-01b1-4681-a574-6c9b9b0dffbb
type: Opaque
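As the Opaque type description says, the data fields above are only base64-encoded, not encrypted; anyone who can read the Secret can recover the plaintext:

```shell
# Decode the values stored in the userlist Secret above;
# base64 is an encoding, not encryption.
printf '%s' 'dGltaW5nbGVl' | base64 -d; echo    # username.txt
printf '%s' 'bGVl' | base64 -d; echo            # password.txt
```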

Writing the YAML file
[root@k8s-master ~]# echo -n timinglee | base64
dGltaW5nbGVl
[root@k8s-master ~]# echo -n lee | base64
bGVl
[root@k8s-master ~]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml
[root@k8s-master ~]# vim userlist.yml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: userlist
type: Opaque
data:
  username: dGltaW5nbGVl
  password: bGVl
[root@k8s-master ~]# kubectl apply -f userlist.yml
secret/userlist configured
[root@k8s-master ~]# kubectl describe secrets userlist
Name: userlist
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 3 bytes
password.txt: 3 bytes
username: 9 bytes
username.txt: 9 bytes
[root@k8s-master ~]#

How to use a Secret
Mounting a Secret into a volume
[root@k8s-master ~]# kubectl run nginx --image nginx --dry-run=client -o yaml > pod1.yaml
#map to a fixed path
[root@k8s-master ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: userlist
[root@k8s-master ~]# kubectl apply -f pod1.yaml
pod/nginx created
[root@k8s-master ~]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password password.txt username username.txt
root@nginx:/secret# cat password
leeroot@nginx:/secret# cat username
timingleeroot@nginx:/secret#

Mapping secret keys to a specific path
[root@k8s-master ~]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx1
  name: nginx1
spec:
  containers:
  - image: nginx
    name: nginx1
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: userlist
      items:
      - key: username
        path: my-users/username
[root@k8s-master ~]# kubectl apply -f pod2.yaml
pod/nginx1 created
[root@k8s-master ~]# kubectl exec pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username
timingleeroot@nginx1:/secret/my-users#

Setting a Secret as environment variables
[root@k8s-master ~]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: userlist
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: userlist
          key: password
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f pod3.yaml
pod/busybox created
[root@k8s-master ~]# kubectl logs pods/busybox
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=busybox
MYAPP_V1_SERVICE_HOST=10.103.108.227
MYAPP_V2_SERVICE_HOST=10.104.67.232
SHLVL=1
HOME=/root
TIMINGLEE_SERVICE_SERVICE_HOST=10.111.111.52
MYAPP_V1_PORT=tcp://10.103.108.227:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.104.67.232:80
MYAPP_V2_SERVICE_PORT=80
TIMINGLEE_SERVICE_SERVICE_PORT=80
TIMINGLEE_SERVICE_PORT=tcp://10.111.111.52:80
MYAPP_V1_PORT_80_TCP_ADDR=10.103.108.227
USERNAME=timinglee
MYAPP_V2_PORT_80_TCP_ADDR=10.104.67.232
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V1_PORT_80_TCP_PROTO=tcp
MYAPP_V2_PORT_80_TCP_PORT=80
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
TIMINGLEE_SERVICE_PORT_80_TCP_ADDR=10.111.111.52
KUBERNETES_PORT_443_TCP_PROTO=tcp
TIMINGLEE_SERVICE_PORT_80_TCP_PORT=80
TIMINGLEE_SERVICE_PORT_80_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.103.108.227:80
MYAPP_V2_PORT_80_TCP=tcp://10.104.67.232:80
PASS=lee
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
TIMINGLEE_SERVICE_PORT_80_TCP=tcp://10.111.111.52:80
[root@k8s-master ~]#

Storing docker registry credentials
Set up a private registry and push an image

#log in to the registry
[root@k8s-master ~]# docker login reg.timinglee.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded
[root@k8s-master ~]# cd packages/
[root@k8s-master packages]# ls
1panel-v1.10.13-lts-linux-amd64.tar.gz mario.tar.gz
busybox-latest.tar.gz myapp.tar.gz
busyboxplus.tar.gz mysql-5.7.tar.gz
centos-7.tar.gz nginx-1.23.tar.gz
debian11.tar.gz nginx-latest.tar.gz
docker-images.tar.gz phpmyadmin-latest.tar.gz
game2048.tar.gz registry.tag.gz
haproxy-2.3.tar.gz rpm
harbor-offline-installer-v2.5.4.tgz ubuntu-latest.tar.gz
[root@k8s-master packages]# docker load -i game2048.tar.gz
#push the image
[root@k8s-master ~]# docker tag timinglee/game2048:latest reg.timinglee.org/timinglee/game2048:latest
[root@k8s-master ~]# docker push reg.timinglee.org/timinglee/game2048:latest
#create the secret for docker registry authentication
[root@k8s-master ~]# kubectl create secret docker-registry docker-auth --docker-server reg.timinglee.org --docker-username admin --docker-password 123456 --docker-email timinglee@timinglee.org
secret/docker-auth created
[root@k8s-master ~]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: game2048
  name: game2048
spec:
  containers:
  - image: reg.timinglee.org/timinglee/game2048:latest
    name: game2048
  imagePullSecrets:        # without registry credentials the image pull fails
  - name: docker-auth
[root@k8s-master ~]# kubectl apply -f pod3.yml
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
game2048 1/1 Running 0 14s

The second method: attach the secret to the default ServiceAccount
[root@k8s-master ~]# kubectl edit sa default
apiVersion: v1
imagePullSecrets:
- name: docker-auth        # add this entry
kind: ServiceAccount
metadata:
  creationTimestamp: "2025-08-24T05:53:49Z"
  name: default
  namespace: default
  resourceVersion: "389"
  uid: d5105c84-2eaf-43a8-9f18-6631c1e1016d
[root@k8s-master ~]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: game2048
  name: game2048
spec:
  containers:
  - image: reg.timinglee.org/test/game2048:latest
    name: game2048
  imagePullSecrets:
  # the "- name: docker-auth" line is removed; the default ServiceAccount now supplies it
[root@k8s-master ~]# kubectl apply -f pod3.yml
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 0/1 Completed 0 12m
game2048 1/1 Running 0 39s

Volumes configuration management
-
Files in a container live on disk only temporarily, which causes problems for some applications:
-
when a container crashes, the kubelet restarts it, but the files are lost because the container is rebuilt in a clean state;
-
when several containers run in one Pod, they often need to share files between them.
-
A Kubernetes volume has an explicit lifecycle, the same as the Pod that uses it.
-
A volume outlives any individual container in the Pod, so data survives container restarts.
-
When a Pod ceases to exist, its volumes cease to exist as well.
-
Kubernetes supports many types of volumes, and a Pod can use any number of volumes at the same time.
-
Volumes cannot be mounted onto other volumes, nor hard-linked to them. Each container in a Pod must independently specify where each volume is mounted.
Volume types supported by kubernetes
k8s supports the following volume types:
-
awsElasticBlockStore 、azureDisk、azureFile、cephfs、cinder、configMap、csi
-
downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker
-
gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local、
-
nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd
-
scaleIO、secret、storageos、vsphereVolume
emptyDir volumes
Function:
When a Pod is assigned to a node, an emptyDir volume is created first, and it exists as long as the Pod runs on that node. The volume starts out empty. The containers in the Pod may mount the emptyDir volume at the same or different paths, but they all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.
Use cases for emptyDir:
-
Scratch space, e.g. for a disk-based merge sort.
-
Checkpoints for long-running computations, so a task can resume from its pre-crash state.
-
Holding files that a content-manager container fetches while a web-server container serves them.
Example:
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: busyboxplus:latest
    name: vm1
    command:
    - /bin/sh
    - -c
    - sleep 30000000
    volumeMounts:
    - mountPath: /cache
      name: cache-vol
  - image: nginx:latest
    name: vm2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/vol1 created
#inspect volume usage in the pod
[root@k8s-master ~]# kubectl describe pods vol1
#test
[root@k8s-master ~]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # ls
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo timinglee > index.html
/cache # curl localhost
timinglee
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
/cache #

hostPath volumes
Function:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod; its contents are not deleted when the pod is removed.
Some uses of hostPath
-
Running a container that needs access to Docker internals: mount /var/lib/docker.
-
Running cAdvisor (monitoring) in a container: mount /sys via hostPath.
-
Letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and in what form.
Security caveats of hostPath
-
Pods with identical configuration (e.g. created from a podTemplate) may behave differently on different nodes because the files on each node differ.
-
When Kubernetes adds resource-aware scheduling as planned, that scheduling will not be able to account for resources used by a hostPath.
-
Files or directories created on the underlying host are writable only by root. You must either run the process as root in a privileged container, or change the file permissions on the host so the container can write to the hostPath volume.
Example:
[root@k8s-master ~]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    hostPath:
      path: /data
      type: DirectoryOrCreate    # create /data automatically if it does not exist
[root@k8s-master ~]# kubectl apply -f pod2.yml
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-sskm5 1/1 Running 0 39s 10.244.1.46 k8s-node1.timinglee.org <none> <none>
vol1 1/1 Running 0 2m18s 10.244.1.45 k8s-node1.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.1.45
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
[root@k8s-node1 ~]# echo timinglee > /data/index.html
[root@k8s-master ~]# curl 10.244.1.45
timinglee
#the hostPath data is not cleaned up when the pod is deleted
[root@k8s-master ~]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-node1 ~]# ls /data/
index.html
nfs volumes
An NFS volume mounts a directory from an existing NFS server into a Pod in Kubernetes. This is very useful for sharing data between multiple Pods or persisting data to external storage.
For example, when several containers need access to the same dataset, or data from a container must be kept on external storage, an NFS volume offers a convenient solution.
Deploy an NFS server and install nfs-utils on all k8s nodes
#deploy the nfs server
[root@harbor ~]# dnf install nfs-utils -y
[root@harbor ~]# systemctl enable --now nfs-server.service
[root@harbor ~]# mkdir -p /nfsdata
[root@harbor ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@harbor ~]# exportfs -rv
exporting *:/nfsdata
#install nfs-utils on all k8s nodes
[root@k8s-master & node1 & node2 ~]# dnf install nfs-utils -y
Use the nfs volume
[root@k8s-master ~]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    nfs:
      server: 172.25.254.254
      path: /nfsdata
[root@k8s-master ~]# kubectl apply -f pod3.yml
pod/vol1 created
#test
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-sskm5 1/1 Running 0 12m 10.244.1.46 k8s-node1.timinglee.org <none> <none>
vol1 1/1 Running 0 6s 10.244.2.57 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.2.57
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
##on the nfs server
[root@harbor ~]# echo timinglee > /nfsdata/index.html
[root@k8s-master ~]# curl 10.244.2.57
timinglee

存储类storageclass
官网: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
StorageClass说明
-
StorageClass提供了一种描述存储类(class)的方法,不同的class可能会映射到不同的服务质量等级和备份策略或其他策略等。
-
每个 StorageClass 都包含 provisioner、parameters 和 reclaimPolicy 字段, 这些字段会在StorageClass需要动态分配 PersistentVolume 时会使用到
StorageClass的属性
属性说明参见官方文档:存储类 | Kubernetes
Provisioner(存储分配器):用来决定使用哪个卷插件分配 PV,该字段必须指定。可以指定内部分配器,也可以指定外部分配器。外部分配器的代码地址为: kubernetes-incubator/external-storage,其中包括NFS和Ceph等。
Reclaim Policy(回收策略):通过reclaimPolicy字段指定创建的Persistent Volume的回收策略,回收策略包括:Delete 或者 Retain,没有指定默认为Delete。
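例如,若希望删除 PVC 后仍保留对应 PV 及其数据,可以在 StorageClass 中显式指定 reclaimPolicy(下面是一个示意片段,存储类名称 nfs-client-retain 为示例假设):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-retain        #示例名称
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain            #不写时默认为 Delete
parameters:
  archiveOnDelete: "false"
```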
存储分配器NFS Client Provisioner
源码地址:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
-
NFS Client Provisioner是一个automatic provisioner,使用NFS作为存储,自动创建PV和对应的PVC,本身不提供NFS存储,需要外部先有一套NFS存储服务。
-
PV以 ${namespace}-${pvcName}-${pvName}的命名格式提供(在NFS服务器上)
-
PV回收的时候以 archived-${namespace}-${pvcName}-${pvName} 的命名格式(在NFS服务器上)
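上述命名格式可以用一段简单的 shell 推演出来(其中 namespace、pvcName、pvName 均取后文实验中的示例值):

```shell
# 按 provisioner 的命名规则拼出 NFS 服务器上的目录名(示例值)
ns=default
pvc=test-claim
pv=pvc-fc552783-357b-4e8d-a4c9-3bd8225cfa14

echo "${ns}-${pvc}-${pv}"            # 动态供应时在 NFS 上创建的目录
echo "archived-${ns}-${pvc}-${pv}"   # archiveOnDelete=true 时回收后归档的目录名
```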
部署NFS Client Provisioner
[root@k8s-master ~]# vim rbac.yml
apiVersion: v1
kind: Namespace
metadata:
name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-client-provisioner
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-client-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
#查看rbac信息
[root@k8s-master ~]# kubectl apply -f rbac.yml
[root@k8s-master ~]# kubectl -n nfs-client-provisioner get sa
NAME SECRETS AGE
default 0 6s
nfs-client-provisioner 0 6s
#上传镜像到私有仓库(镜像需已在本地存在)
[root@k8s-master ~]# docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 reg.timinglee.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@k8s-master ~]# docker push reg.timinglee.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2

部署应用
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
namespace: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: reg.timinglee.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2 #与上文推送到私有仓库的镜像地址保持一致
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 172.25.254.254
- name: NFS_PATH
value: /nfsdata
volumes:
- name: nfs-client-root
nfs:
server: 172.25.254.254
path: /nfsdata
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/nfs-client-provisioner configured
[root@k8s-master ~]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 5m5s

创建存储类
[root@k8s-master ~]# vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
archiveOnDelete: "false"
[root@k8s-master ~]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master ~]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 7s
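如果希望 PVC 在未指定 storageClassName 时也能使用该存储类,可以把它设为集群默认存储类。下面是一种写法的示例,注解键 storageclass.kubernetes.io/is-default-class 是 Kubernetes 约定的默认存储类标记:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   #标记为默认存储类
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```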

创建pvc
[root@k8s-master ~]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1G
[root@k8s-master ~]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-fc552783-357b-4e8d-a4c9-3bd8225cfa14 1G RWX nfs-client <unset> 5s
创建测试pod
[root@k8s-master ~]# vim pod.yml
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: busybox
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/SUCCESS && exit 0 || exit 1"
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
[root@k8s-master ~]# kubectl apply -f pod.yml
[root@harbor nfsdata]# ls /nfsdata/default-test-claim-pvc-fc552783-357b-4e8d-a4c9-3bd8225cfa14/
SUCCESS

statefulset控制器
功能特性
-
Statefulset是为了管理有状态服务的问题设计的
-
StatefulSet将应用状态抽象成了两种情况:
-
拓扑状态:应用实例必须按照某种顺序启动。新创建的Pod必须和原来Pod的网络标识一样
-
存储状态:应用的多个实例分别绑定了不同存储数据。
-
StatefulSet给所有的Pod进行了编号,编号规则是:$(statefulset名称)-$(序号),从0开始。
-
Pod被删除后重建,重建Pod的网络标识也不会改变,Pod的拓扑状态按照Pod的“名字+编号”的方式固定下来,并且为每个Pod提供了一个固定且唯一的访问入口,即Pod对应的DNS记录。
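按上述编号规则,可以推演出每个 Pod 在集群内的稳定 DNS 名称(示例中的 StatefulSet 名 web、headless service 名 nginx-svc 与后文实验一致,namespace 假定为 default):

```shell
# StatefulSet Pod 的 DNS 记录格式:
# $(pod名).$(headless service名).$(namespace).svc.cluster.local
sts=web; svc=nginx-svc; ns=default
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local"
done
```

这也是后文测试中可以直接 curl web-0.nginx-svc 的原因:在同一 namespace 内可省略后缀。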
StatefulSet的组成部分
-
Headless Service:用来定义pod网络标识,生成可解析的DNS记录
-
volumeClaimTemplates:创建pvc,指定pvc名称大小,自动创建pvc且pvc由存储类供应。
-
StatefulSet:控制器,按固定编号有序地创建和管理pod
构建方法
[root@k8s-master ~]# vim headless.yml
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
[root@k8s-master ~]# kubectl apply -f headless.yml
service/nginx-svc created
#建立statefulset
[root@k8s-master ~]# vim statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx-svc"
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
[root@k8s-master ~]# kubectl apply -f statefulset.yml
statefulset.apps/web created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-8487c65cfc-sskm5 1/1 Running 0 48m
test-pod 0/1 Completed 0 10m
vol1 1/1 Running 0 36m
web-0 1/1 Running 0 22s
web-1 1/1 Running 0 18s
web-2 1/1 Running 0 14s
[root@harbor ~]# ls /nfsdata/
default-test-claim-pvc-fc552783-357b-4e8d-a4c9-3bd8225cfa14
default-www-web-0-pvc-78f4ba2f-996a-43f9-8e7b-b189f53fb5f5
default-www-web-1-pvc-62074b3c-7b56-4fc3-9bea-ab4065982479
default-www-web-2-pvc-82e8296c-de78-487c-a5c6-6e4cd4b35cbc
index.html

测试:
#为每个pod建立index.html文件
[root@harbor ~]# cd /nfsdata/
[root@harbor nfsdata]# echo web-0 > default-www-web-0-pvc-78f4ba2f-996a-43f9-8e7b-b189f53fb5f5/index.html
[root@harbor nfsdata]# echo web-1 > default-www-web-1-pvc-62074b3c-7b56-4fc3-9bea-ab4065982479/index.html
[root@harbor nfsdata]# echo web-2 > default-www-web-2-pvc-82e8296c-de78-487c-a5c6-6e4cd4b35cbc/index.html
#建立测试pod访问web-0~2
[root@k8s-master ~]# kubectl run -it testpod --image busyboxplus
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
#删掉重新建立statefulset
[root@k8s-master ~]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master ~]# kubectl apply -f statefulset.yml
statefulset.apps/web created
[root@k8s-master ~]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
/ #

statefulset的弹缩
首先,在弹缩某个StatefulSet之前,需先确认该应用是否能够弹缩
用命令改变副本数
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
通过编辑配置改变副本数
$ kubectl edit statefulsets.apps <stateful-set-name>
statefulset有序回收
[root@k8s-master ~]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master ~]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master ~]# kubectl delete pvc --all
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
6、kubernetes中网络通信与调度
k8s通信整体架构
-
k8s通过CNI接口接入其他插件来实现网络通讯。目前比较流行的插件有flannel,calico等
-
CNI插件配置文件存放位置:# cat /etc/cni/net.d/10-flannel.conflist
-
插件使用的解决方案如下
-
虚拟网桥,虚拟网卡,多个容器共用一个虚拟网卡进行通信。
-
多路复用:MacVLAN,多个容器共用一个物理网卡进行通信。
-
硬件交换:SR-IOV,一个物理网卡可以虚拟出多个接口,这种性能最好。
-
-
容器间通信:
-
同一个pod内的多个容器间的通信,通过lo即可实现pod之间的通信
-
同一节点的pod之间通过cni网桥转发数据包。
-
不同节点的pod之间的通信需要网络插件支持
-
-
pod和service通信: 通过iptables或ipvs实现通信,ipvs取代不了iptables,因为ipvs只能做负载均衡,而做不了nat转换
-
pod和外网通信:iptables的MASQUERADE
-
Service与集群外部客户端的通信;(ingress、nodeport、loadbalancer)
flannel网络插件
插件组成:
| 插件 | 功能 |
|---|---|
| VXLAN | 即Virtual Extensible LAN(虚拟可扩展局域网),是Linux本身支持的一种网络虚拟化技术。VXLAN可以完全在内核态实现封装和解封装工作,从而通过“隧道”机制,构建出覆盖网络(Overlay Network) |
| VTEP | VXLAN Tunnel End Point(虚拟隧道端点),在Flannel中 VNI的默认值是1,这也是为什么宿主机的VTEP设备都叫flannel.1的原因 |
| Cni0 | 网桥设备,每创建一个pod都会创建一对 veth pair。其中一端是pod中的eth0,另一端是Cni0网桥中的端口(网卡) |
| Flannel.1 | VXLAN设备(虚拟网卡),用来进行 vxlan 报文的处理(封包和解包)。不同node之间的pod数据流量都从overlay设备以隧道的形式发送到对端 |
| Flanneld | flannel在每个主机中运行flanneld作为agent,它会为所在主机从集群的网络地址空间中,获取一个小的网段subnet,本主机内所有容器的IP地址都将从中分配。同时Flanneld监听K8s集群数据库,为flannel.1设备提供封装数据时必要的mac、ip等网络数据信息 |
flannel跨主机通信原理

-
当容器发送IP包,通过veth pair 发往cni网桥,再路由到本机的flannel.1设备进行处理。
-
VTEP设备之间通过二层数据帧进行通信,源VTEP设备收到原始IP包后,在上面加上一个目的MAC地址,封装成一个内部数据帧,发送给目的VTEP设备。
-
内部数据帧,并不能在宿主机的二层网络传输,Linux内核还需要把它进一步封装成为宿主机的一个普通的数据帧,承载着内部数据帧通过宿主机的eth0进行传输。
-
Linux会在内部数据帧前面,加上一个VXLAN头,VXLAN头里有一个重要的标志叫VNI,它是VTEP识别某个数据帧是不是应该归自己处理的重要标识。
-
flannel.1设备只知道另一端flannel.1设备的MAC地址,却不知道对应的宿主机地址是什么。在linux内核里面,网络设备进行转发的依据,来自FDB的转发数据库,这个flannel.1网桥对应的FDB信息,是由flanneld进程维护的。
-
linux内核在IP包前面再加上二层数据帧头,把目标节点的MAC地址填进去,MAC地址从宿主机的ARP表获取。
-
此时flannel.1设备就可以把这个数据帧从eth0发出去,再经过宿主机网络来到目标节点的eth0设备。目标主机内核网络栈会发现这个数据帧有VXLAN Header,并且VNI为1,Linux内核会对它进行拆包,拿到内部数据帧,根据VNI的值,交给本机flannel.1设备处理,flannel.1拆包,根据路由表发往cni网桥,最后到达目标容器。
flannel支持的后端模式
| 网络模式 | 功能 |
|---|---|
| vxlan | 报文封装,默认模式 |
| Directrouting | 直接路由,跨网段使用vxlan,同网段使用host-gw模式 |
| host-gw | 主机网关,性能好,但只能在二层网络中,不支持跨网络;如果有成千上万的Pod,容易产生广播风暴,不推荐 |
| UDP | 性能差,不推荐 |
更改flannel的默认模式
[root@k8s-master ~]# kubectl -n kube-flannel edit cm kube-flannel-cfg
apiVersion: v1
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "host-gw" #更改内容
}
}
[root@k8s-master ~]# kubectl -n kube-flannel delete pod --all
pod "kube-flannel-ds-q2r5m" deleted
pod "kube-flannel-ds-smfvx" deleted
pod "kube-flannel-ds-xw5dr" deleted
[root@k8s-master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 172.25.254.10 dev eth0
10.244.2.0/24 via 172.25.254.20 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100
[root@k8s-master ~]#

calico网络插件
官网:
Installing on on-premises deployments | Calico Documentation
calico简介:
-
纯三层的转发,中间没有任何的NAT和overlay,转发效率最好。
-
Calico 仅依赖三层路由可达。Calico 较少的依赖性使它能适配所有 VM、Container、白盒或者混合环境场景。
calico网络架构

-
Felix:监听etcd中心的存储获取事件,用户创建pod后,Felix负责将其网卡、IP、MAC都设置好,然后在内核的路由表里面写一条,注明这个IP应该到这张网卡。同样如果用户指定了隔离策略,Felix同样会将该策略创建到ACL中,以实现隔离。
-
BIRD:一个标准的路由程序,它会从内核里面获取哪一些IP的路由发生了变化,然后通过标准BGP的路由协议扩散到整个其他的宿主机上,让外界都知道这个IP在这里,路由的时候到这里
部署calico
删除flannel插件
[root@k8s-master ~]# kubectl delete -f kube-flannel.yml
删除所有节点上flannel配置文件,避免冲突
[root@k8s-master & node1-2 ~]# rm -rf /etc/cni/net.d/10-flannel.conflist
下载部署文件
[root@k8s-master ~]# docker load -i calico-3.28.1.tar
6b2e64a0b556: Loading layer 3.69MB/3.69MB
38ba74eb8103: Loading layer 205.4MB/205.4MB
5f70bf18a086: Loading layer 1.024kB/1.024kB
Loaded image: calico/cni:v3.28.1
3831744e3436: Loading layer 366.9MB/366.9MB
Loaded image: calico/node:v3.28.1
4f27db678727: Loading layer 75.59MB/75.59MB
Loaded image: calico/kube-controllers:v3.28.1
993f578a98d3: Loading layer 67.61MB/67.61MB
Loaded image: calico/typha:v3.28.1
下载镜像上传至仓库:
[root@k8s-master ~]# docker tag calico/cni:v3.28.1 reg.timinglee.org/calico/cni:v3.28.1
[root@k8s-master ~]# docker push reg.timinglee.org/calico/cni:v3.28.1
[root@k8s-master ~]# docker tag calico/node:v3.28.1 reg.timinglee.org/calico/node:v3.28.1
[root@k8s-master ~]# docker push reg.timinglee.org/calico/node:v3.28.1
[root@k8s-master ~]# docker tag calico/kube-controllers:v3.28.1 reg.timinglee.org/calico/kube-controllers:v3.28.1
[root@k8s-master ~]# docker push reg.timinglee.org/calico/kube-controllers:v3.28.1
[root@k8s-master ~]# docker tag calico/typha:v3.28.1 reg.timinglee.org/calico/typha:v3.28.1
[root@k8s-master ~]# docker push reg.timinglee.org/calico/typha:v3.28.1
更改yml设置
[root@k8s-master ~]# vim calico.yaml
4835 image: reg.timinglee.org/calico/cni:v3.28.1
4863 image: reg.timinglee.org/calico/cni:v3.28.1
4906 image: reg.timinglee.org/calico/node:v3.28.1
4932 image: reg.timinglee.org/calico/node:v3.28.1
5160 image: reg.timinglee.org/calico/kube-controllers:v3.28.1
5249 - image: reg.timinglee.org/calico/typha:v3.28.1
4970 - name: CALICO_IPV4POOL_IPIP
4971 value: "Never"
4999 - name: CALICO_IPV4POOL_CIDR
5000 value: "10.244.0.0/16"
5001 - name: CALICO_AUTODETECTION_METHOD
5002 value: "interface=eth0"
[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-b9c79b596-2xgmn 1/1 Running 0 103s
calico-node-4nhw5 1/1 Running 0 24s
calico-node-r9l6s 1/1 Running 0 35s
calico-node-vdz8t 1/1 Running 0 45s
calico-typha-55df74b8b4-x5642 1/1 Running 0 103s
coredns-647dc95897-hf7qr 1/1 Running 0 9h
coredns-647dc95897-tlm2p 1/1 Running 0 9h
etcd-k8s-master.timinglee.org 1/1 Running 0 9h
kube-apiserver-k8s-master.timinglee.org 1/1 Running 0 9h
kube-controller-manager-k8s-master.timinglee.org 1/1 Running 0 9h
kube-proxy-2l5bq 1/1 Running 0 5h38m
kube-proxy-45d2w 1/1 Running 0 5h38m
kube-proxy-pp94m 1/1 Running 0 5h38m
kube-scheduler-k8s-master.timinglee.org 1/1 Running 0 9h
[root@k8s-master ~]#
测试:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-sskm5 1/1 Running 0 91m 10.244.1.46 k8s-node1.timinglee.org <none> <none>
test-pod 0/1 Completed 0 53m 10.244.1.47 k8s-node1.timinglee.org <none> <none>
testpod 1/1 Running 2 (35m ago) 38m 10.244.1.49 k8s-node1.timinglee.org <none> <none>
vol1 1/1 Running 0 79m 10.244.2.57 k8s-node2.timinglee.org <none> <none>
web 1/1 Running 0 17s 10.244.15.64 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]# curl 10.244.15.64
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#


k8s调度(Scheduling)
调度在Kubernetes中的作用
-
调度是指将未调度的Pod自动分配到集群中的节点的过程
-
调度器通过 kubernetes 的 watch 机制来发现集群中新创建且尚未被调度到 Node 上的 Pod
-
调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行
调度原理:
-
创建Pod
-
用户通过Kubernetes API创建Pod对象,并在其中指定Pod的资源需求、容器镜像等信息。
-
-
调度器监视Pod
-
Kubernetes调度器监视集群中的未调度Pod对象,并为其选择最佳的节点。
-
-
选择节点
-
调度器通过算法选择最佳的节点,并将Pod绑定到该节点上。调度器选择节点的依据包括节点的资源使用情况、Pod的资源需求、亲和性和反亲和性等。
-
-
绑定Pod到节点
-
调度器将Pod和节点之间的绑定信息保存在etcd数据库中,以便节点可以获取Pod的调度信息。
-
-
节点启动Pod
-
节点定期检查etcd数据库中的Pod调度信息,并启动相应的Pod。如果节点故障或资源不足,调度器会重新调度Pod,并将其绑定到其他节点上运行。
-
调度器种类
-
默认调度器(Default Scheduler):
-
是Kubernetes中的默认调度器,负责对新创建的Pod进行调度,并将Pod调度到合适的节点上。
-
-
自定义调度器(Custom Scheduler):
-
是一种自定义的调度器实现,可以根据实际需求来定义调度策略和规则,以实现更灵活和多样化的调度功能。
-
-
扩展调度器(Extended Scheduler):
-
是一种支持调度器扩展器的调度器实现,可以通过调度器扩展器来添加自定义的调度规则和策略,以实现更灵活和多样化的调度功能。
-
-
kube-scheduler是kubernetes中的默认调度器,在kubernetes运行后会自动在控制节点运行
常用调度方法
nodename
-
nodeName 是节点选择约束的最简单方法,但一般不推荐
-
如果 nodeName 在 PodSpec 中指定了,则它优先于其他的节点选择方法
-
使用 nodeName 来选择节点的一些限制
-
如果指定的节点不存在,Pod 将无法运行,某些情况下可能会被自动删除。
-
如果指定的节点没有资源来容纳 pod,则pod 调度失败。
-
云环境中的节点名称并非总是可预测或稳定的
-
实例:
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
nodeName: k8s-node2 #集群中节点实际名称为k8s-node2.timinglee.org,名称不完全匹配时Pod会一直处于Pending状态
containers:
- image: myapp:v1
name: testpod
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-sskm5 1/1 Running 0 103m 10.244.1.46 k8s-node1.timinglee.org <none> <none>
test-pod 0/1 Completed 0 64m 10.244.1.47 k8s-node1.timinglee.org <none> <none>
testpod 0/1 Pending 0 22s <none> k8s-node2 <none> <none>
vol1 1/1 Running 0 90m 10.244.2.57 k8s-node2.timinglee.org <none> <none>
web 1/1 Running 0 11m 10.244.15.64 k8s-node2.timinglee.org <none> <none>
[root@k8s-master ~]#
affinity(亲和性)
官方文档 :
亲和与反亲和
-
nodeSelector 提供了一种非常简单的方法来将 pod 约束到具有特定标签的节点上。亲和/反亲和功能极大地扩展了你可以表达约束的类型。
-
使用节点上的 pod 的标签来约束,而不是使用节点本身的标签,来允许哪些 pod 可以或者不可以被放置在一起。
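作为对比,nodeSelector 的写法非常简单。下面是一个示意清单(Pod 名 nodeselector-demo 为示例假设;标签 disk=ssd 需事先用 kubectl label nodes 打在目标节点上,与后文 nodeAffinity 示例使用的标签一致):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    disk: ssd            #只调度到带有 disk=ssd 标签的节点
  containers:
  - name: nginx
    image: nginx
```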
nodeAffinity节点亲和
-
哪个节点满足指定条件,Pod就调度到哪个节点运行
-
requiredDuringSchedulingIgnoredDuringExecution 必须满足,但不会影响已经调度
-
preferredDuringSchedulingIgnoredDuringExecution 倾向满足,在无法满足情况下也会调度pod
-
IgnoreDuringExecution 表示如果在Pod运行期间Node的标签发生变化,导致亲和性策略不能满足,则继续运行当前的Pod。
-
-
nodeaffinity还支持多种规则匹配条件的配置如
| 匹配规则 | 功能 |
|---|---|
| In | label 的值在列表内 |
| NotIn | label 的值不在列表内 |
| Gt | label 的值大于设置的值,不支持Pod亲和性 |
| Lt | label 的值小于设置的值,不支持pod亲和性 |
| Exists | 设置的label 存在 |
| DoesNotExist | 设置的 label 不存在 |
nodeAffinity示例
[root@k8s-master ~]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
name: node-affinity
spec:
containers:
- name: nginx
image: nginx
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disk
operator: In #In | NotIn两个结果相反
values:
- ssd
[root@k8s-master ~]# kubectl apply -f pod3.yml
pod/node-affinity created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-8487c65cfc-sskm5 1/1 Running 0 106m
node-affinity 0/1 Pending 0 9s
[root@k8s-master ~]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
name: node-affinity
spec:
containers:
- name: nginx
image: nginx
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disk
operator: NotIn #两个结果相反
values:
- ssd
[root@k8s-master ~]# kubectl apply -f pod3.yml
pod/node-affinity created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-8487c65cfc-sskm5 1/1 Running 0 107m
node-affinity 1/1 Running 0 7s


Podaffinity(pod的亲和)
-
哪个节点上运行着符合条件的POD,新的POD就调度到哪个节点运行
-
podAffinity 主要解决POD可以和哪些POD部署在同一个节点中的问题
-
podAntiAffinity主要解决POD不能和哪些POD部署在同一个节点中的问题。它们处理的是Kubernetes集群内部POD和POD之间的关系。
-
Pod 间亲和与反亲和在与更高级别的集合(例如 ReplicaSets,StatefulSets,Deployments 等)一起使用时更有意义,可以将一组工作负载约束到相同的拓扑域中。
-
Pod 间亲和与反亲和需要大量的处理,这可能会显著减慢大规模集群中的调度。
Podaffinity示例
[root@k8s-master ~]# vim example4.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
affinity:
podAffinity: #亲和
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nginx
topologyKey: "kubernetes.io/hostname"
[root@k8s-master ~]# kubectl apply -f example4.yml
deployment.apps/nginx-deployment created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-sskm5 1/1 Running 0 111m 10.244.1.46 k8s-node1.timinglee.org <none> <none>
nginx-deployment-658496fff-czr92 1/1 Running 0 23s 10.244.18.67 k8s-node1.timinglee.org <none> <none>
nginx-deployment-658496fff-pqjxn 1/1 Running 0 23s 10.244.18.65 k8s-node1.timinglee.org <none> <none>
nginx-deployment-658496fff-sz7xw 1/1 Running 0 23s 10.244.18.66 k8s-node1.timinglee.org <none> <none>

Podantiaffinity(pod反亲和)
Podantiaffinity示例
[root@k8s-master ~]# vim example5.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
affinity:
podAntiAffinity: #反亲和
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nginx
topologyKey: "kubernetes.io/hostname"
[root@k8s-master ~]# kubectl apply -f example5.yml
deployment.apps/nginx-deployment configured
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-sskm5 1/1 Running 0 114m 10.244.1.46 k8s-node1.timinglee.org <none> <none>
nginx-deployment-5f5fc7b8b9-rwgc9 1/1 Running 0 36s 10.244.15.65 k8s-node2.timinglee.org <none> <none>
nginx-deployment-5f5fc7b8b9-zdm2w 0/1 Pending 0 34s <none> <none> <none> <none>
nginx-deployment-658496fff-czr92 1/1 Running 0 3m19s 10.244.18.67 k8s-node1.timinglee.org <none> <none>

Taints(污点模式,禁止调度)
-
Taints(污点)是Node的一个属性,设置了Taints后,默认Kubernetes是不会将Pod调度到这个Node上
-
Kubernetes如果为Pod设置Tolerations(容忍),只要Pod能够容忍Node上的污点,那么Kubernetes就会忽略Node上的污点,就能够(不是必须)把Pod调度过去
-
可以使用命令 kubectl taint 给节点增加一个 taint:
$ kubectl taint nodes <nodename> key=string:effect #命令执行方法
$ kubectl taint nodes node1 key=value:NoSchedule #创建
$ kubectl describe nodes server1 | grep Taints #查询
$ kubectl taint nodes node1 key- #删除
其中[effect] 可取值:
| effect值 | 解释 |
|---|---|
| NoSchedule | POD 不会被调度到标记为 taints 节点 |
| PreferNoSchedule | NoSchedule 的软策略版本,尽量不调度到此节点 |
| NoExecute | 如该节点内正在运行的 POD 没有对应 Tolerate 设置,会直接被逐出 |
Taints示例
[root@k8s-master ~]# vim example6.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web
spec:
replicas: 2
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- image: nginx
name: nginx
[root@k8s-master ~]# kubectl apply -f example6.yml
[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-2vgjt 1/1 Running 0 33s 10.244.15.67 k8s-node2.timinglee.org <none> <none>
web-7c56dcdb9b-xm4xr 1/1 Running 0 33s 10.244.18.72 k8s-node1.timinglee.org <none> <none>
#设定污点为NoSchedule
[root@k8s-master ~]# kubectl taint node k8s-node1.timinglee.org name=lee:NoSchedule
node/k8s-node1.timinglee.org tainted
[root@k8s-master ~]# kubectl describe nodes k8s-node1 | grep Tain
Taints: name=lee:NoSchedule
#删除污点
[root@k8s-master ~]# kubectl taint node k8s-node1.timinglee.org name-
node/k8s-node1.timinglee.org untainted
[root@k8s-master ~]# kubectl describe nodes k8s-node1 | grep Tain
Taints: <none>
[root@k8s-master ~]#
#设定污点为NoExecute
[root@k8s-master ~]# kubectl taint node k8s-node1.timinglee.org name=lee:NoExecute
node/k8s-node1.timinglee.org tainted
[root@k8s-master ~]# kubectl describe nodes k8s-node1 | grep Tain
Taints: name=lee:NoExecute
[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-2vgjt 1/1 Running 0 4m4s 10.244.15.67 k8s-node2.timinglee.org <none> <none>
web-7c56dcdb9b-wh4zc 1/1 Running 0 16s 10.244.15.70 k8s-node2.timinglee.org <none> <none>
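与污点配套的机制是容忍(Tolerations)。下面是一个 pod 模板 spec 片段的示例,它可以容忍上文设置的 name=lee:NoExecute 污点,从而仍能被调度(或保留)在 k8s-node1 上:

```yaml
spec:
  tolerations:              #容忍 name=lee:NoExecute 污点
  - key: "name"
    operator: "Equal"       #也可用 Exists,此时不需要 value
    value: "lee"
    effect: "NoExecute"
  containers:
  - image: nginx
    name: nginx
```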

7、kubernetes中的认证授权

Authentication(认证)
-
认证方式现共有8种,可以启用一种或多种认证方式,只要有一种认证方式通过,就不再进行其它方式的认证。通常启用X509 Client Certs和Service Account Tokens两种认证方式。
-
Kubernetes集群有两类用户:由Kubernetes管理的Service Accounts(服务账户)和Users Accounts(普通账户)。k8s中账号的概念不是我们理解的账号,它并不真的存在,它只是形式上存在。
Authorization(授权)
-
必须经过认证阶段,才到授权请求,根据所有授权策略匹配请求资源属性,决定允许或拒绝请求。授权方式现共有6种,AlwaysDeny、AlwaysAllow、ABAC、RBAC、Webhook、Node。默认集群强制开启RBAC。
Admission Control(准入控制)
-
用于拦截请求的一种方式,运行在认证、授权之后,是权限认证链上的最后一环,对请求API资源对象进行修改和校验。

UserAccount与ServiceAccount
-
用户账户是针对人而言的。 服务账户是针对运行在 pod 中的进程而言的。
-
用户账户是全局性的。 其名称在集群各 namespace 中都是全局唯一的,未来的用户资源不会做 namespace 隔离, 服务账户是 namespace 隔离的。
-
集群的用户账户可能会从企业数据库进行同步,其创建需要特殊权限,并且涉及到复杂的业务流程。 服务账户创建的目的是为了更轻量,允许集群用户为了具体的任务创建服务账户 ( 即权限最小化原则 )。
-
ServiceAccount
-
服务账户控制器(Service account controller)
-
服务账户管理器管理各命名空间下的服务账户
-
每个活跃的命名空间下存在一个名为 “default” 的服务账户
-
-
服务账户准入控制器(Service account admission controller)
-
若pod未指定ServiceAccount,则默认将其设为 default。
-
保证 pod 所关联的 ServiceAccount 存在,否则拒绝该 pod。
-
如果pod不包含ImagePullSecrets设置那么ServiceAccount中的ImagePullSecrets 被添加到pod中
-
将挂载于 /var/run/secrets/kubernetes.io/serviceaccount 的 volumeSource 添加到 pod 下的每个容器中
-
将一个包含用于 API 访问的 token 的 volume 添加到 pod 中
-
ServiceAccount示例:
建立名字为timinglee的ServiceAccount
[root@k8s-master ~]# kubectl create sa timinglee
serviceaccount/timinglee created
[root@k8s-master ~]# kubectl describe sa timinglee
Name: timinglee
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
#建立secrets
[root@k8s-master ~]# kubectl create secret docker-registry docker-login --docker-username admin --docker-password 123456 --docker-server reg.timinglee.org --docker-email lee@timinglee.org
[root@k8s-master ~]# kubectl describe secrets docker-login
Name: docker-login
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 126 bytes
将secrets注入到sa中
[root@k8s-master ~]# kubectl edit sa timinglee
apiVersion: v1
imagePullSecrets:
- name: docker-login #写这个内容
kind: ServiceAccount
metadata:
creationTimestamp: "2024-09-08T15:44:04Z"
name: timinglee
namespace: default
resourceVersion: "262259"
uid: 7645a831-9ad1-4ae8-a8a1-aca7b267ea2d
[root@k8s-master ~]# kubectl describe sa timinglee
Name: timinglee
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: docker-login
Mountable secrets: <none>
Tokens: <none>
Events: <none>
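除了 kubectl edit,也可以用清单文件以声明式方式完成同样的配置(示意写法,ServiceAccount 的 imagePullSecrets 是顶层字段,与 metadata 平级):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: timinglee
imagePullSecrets:
- name: docker-login      #引用上文创建的 docker-registry 类型 secret
```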
建立私有仓库并且利用pod访问私有仓库
pod绑定sa
[root@k8s-master ~]# vim example1.yml
apiVersion: v1
kind: Pod
metadata:
name: testpod
spec:
serviceAccountName: timinglee
containers:
- image: reg.timinglee.org/library/nginx:latest
name: testpod
[root@k8s-master ~]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 72s
[root@k8s-master ~]#

认证(在k8s中建立认证用户)
创建UserAccount
#建立证书
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# openssl genrsa -out timinglee.key 2048
[root@k8s-master pki]# openssl req -new -key timinglee.key -out timinglee.csr -subj "/CN=timinglee"
[root@k8s-master pki]# openssl x509 -req -in timinglee.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out timinglee.crt -days 365
Certificate request self-signature ok
subject=CN = timinglee
[root@k8s-master pki]# openssl x509 -in timinglee.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number:
5b:ad:d5:5f:0b:d1:c6:de:c0:7c:b2:cf:f4:c0:78:fa:63:eb:c0:33
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Aug 24 16:48:44 2025 GMT
Not After : Aug 24 16:48:44 2026 GMT
Subject: CN = timinglee
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:c7:bd:5d:8a:a6:01:91:48:16:98:25:9b:e1:00:
fe:b4:95:50:c7:84:fa:1e:77:5a:c7:36:e3:35:91:
86:be:6a:68:9b:af:56:09:92:3e:92:3f:d7:30:54:
db:3b:cc:72:c7:37:be:41:d9:ff:b2:fc:58:a3:ae:
bd:47:2b:71:d8:87:89:b3:3a:7c:95:93:8e:81:ec:
fc:e6:29:5b:1b:66:f6:a2:4b:f2:77:7b:65:a5:c9:
8c:2f:3a:3c:05:f9:2c:95:30:d7:65:89:3e:5f:cb:
d7:80:28:65:ee:08:da:85:b1:5d:ee:d1:4a:fa:b9:
e4:68:b6:87:70:d8:76:65:32:ce:83:63:24:ae:b5:
8b:d0:23:61:56:4b:a7:98:39:b7:d9:c9:d2:c5:4b:
bc:60:3a:81:02:1e:85:93:1e:30:12:3a:84:04:0e:
69:6d:fa:1c:e9:71:67:f2:aa:f5:72:48:05:64:ca:
3e:7d:a4:a8:50:2c:10:e4:6e:dc:08:ed:f2:c6:f6:
66:19:6e:af:95:af:ea:8b:67:c1:9c:3c:c4:f8:da:
32:8f:13:2c:da:c3:c9:52:0e:f8:8d:6d:13:e6:53:
4d:77:80:36:51:b2:42:21:3b:b6:8f:2d:32:35:52:
9a:74:ff:42:ea:06:8f:5c:cd:62:4a:95:fe:ea:47:
b8:a9
Exponent: 65537 (0x10001)
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
b7:f0:e0:17:30:80:9a:14:f7:a0:43:bb:7b:ac:0f:28:85:00:
00:0d:65:b7:6c:22:ba:27:95:2a:1f:e4:1c:70:b0:9a:77:40:
1a:11:f4:81:00:04:16:d2:df:9f:34:11:81:68:fe:a3:19:54:
78:88:ca:35:6e:64:ec:29:0d:c2:7e:4b:72:0b:f4:a8:36:f3:
db:f5:ff:78:d0:ed:13:1e:94:ab:79:94:57:30:2c:01:6c:96:
7d:61:57:5b:f9:50:a0:b8:66:48:cb:58:2b:00:f2:b4:77:8e:
eb:7a:28:94:d3:df:52:40:4a:40:0a:59:1a:61:bb:ce:1a:4f:
39:96:2c:f0:b9:4b:a1:cc:38:20:21:ac:c5:b5:9c:5e:05:4e:
42:92:6d:de:92:75:e5:e5:63:8f:32:23:89:cc:ce:e5:cb:16:
1f:c1:5d:b2:d7:f5:c0:99:30:2b:a5:c0:85:06:be:d0:97:88:
a9:9a:d0:06:43:09:cb:42:ae:3f:23:90:69:a6:0e:28:00:01:
1e:5e:f1:9f:a8:ff:58:d8:a0:5a:30:4d:68:f9:01:5f:0b:29:
d3:d2:49:71:76:25:ff:d8:35:e8:a0:70:06:d9:1e:0d:5e:a2:
31:4b:5e:aa:48:b5:12:fd:4d:d4:8f:81:71:23:da:90:cb:93:
58:92:d3:56
#建立k8s中的用户
[root@k8s-master pki]# kubectl config set-credentials timinglee --client-certificate /etc/kubernetes/pki/timinglee.crt --client-key /etc/kubernetes/pki/timinglee.key --embed-certs=true
User "timinglee" set.
[root@k8s-master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://172.25.254.100:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
- name: timinglee
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
#为用户创建集群的安全上下文
[root@k8s-master pki]# kubectl config set-context timinglee@kubernetes --cluster kubernetes --user timinglee
Context "timinglee@kubernetes" created.
#切换用户,用户在集群中只有用户身份没有授权
[root@k8s-master pki]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@k8s-master pki]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "timinglee" cannot list resource "pods" in API group "" in the namespace "default"
#切换会集群管理
[root@k8s-master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
#如果需要删除用户
[root@k8s-master pki]# kubectl config delete-user timinglee
deleted user timinglee from /etc/kubernetes/admin.conf


RBAC(Role Based Access Control)
基于角色访问控制授权:

-
允许管理员通过Kubernetes API动态配置授权策略。RBAC就是用户通过角色与权限进行关联。
-
RBAC只有授权,没有拒绝授权,所以只需要定义允许该用户做什么即可
-
RBAC的三个基本概念
-
Subject:被作用者,它表示k8s中的三类主体, user, group, serviceAccount
-
-
Role:角色,它其实是一组规则,定义了一组对 Kubernetes API 对象的操作权限。
-
RoleBinding:定义了“被作用者”和“角色”的绑定关系
-
RBAC包括四种类型:Role、ClusterRole、RoleBinding、ClusterRoleBinding
-
Role 和 ClusterRole
-
Role是一系列的权限的集合,Role只能授予单个namespace 中资源的访问权限。
-
-
ClusterRole 跟 Role 类似,但是可以在集群中全局使用。
-
Kubernetes 还提供了四个预先定义好的 ClusterRole 来供用户直接使用
-
cluster-admin、admin、edit、view
role授权实施
#生成role的yaml文件
[root@k8s-master ~]# kubectl create role myrole --dry-run=client --verb=get --resource pods -o yaml > myrole.yml
#更改文件内容
[root@k8s-master ~]# vim myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: myrole
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- watch
- list
- create
- update
- patch
- delete
[root@k8s-master ~]# kubectl apply -f myrole.yml
#创建role
[root@k8s-master ~]# kubectl describe role myrole
Name: myrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list create update patch delete]
[root@k8s-master ~]#

#建立角色绑定
[root@k8s-master ~]# kubectl create rolebinding timinglee --role myrole --namespace default --user timinglee --dry-run=client -o yaml > rolebinding-myrole.yml
[root@k8s-master ~]# vim rolebinding-myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: timinglee
  namespace: default        # a RoleBinding must specify a namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
[root@k8s-master ~]# kubectl apply -f rolebinding-myrole.yml
[root@k8s-master ~]# kubectl get rolebindings.rbac.authorization.k8s.io timinglee
NAME ROLE AGE
timinglee Role/myrole 6s
# Switch users to test the authorization
[root@k8s-master ~]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@k8s-master ~]# kubectl get svc  # only pods were authorized, so svc still cannot be accessed
Error from server (Forbidden): services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" in the namespace "default"
# Switch back to the admin
[root@k8s-master ~]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
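A Role can also be narrowed to individual objects by name using the optional resourceNames field. A minimal sketch (hypothetical role name, assuming the default namespace and the testpod pod from this example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myrole-readonly   # hypothetical name
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  resourceNames:          # restricts the rule to these named objects
  - testpod
  verbs:
  - get
```

Note that resourceNames cannot restrict list, watch, or create, since those requests are not addressed to a single named object.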
ClusterRole authorization in practice
# Create a cluster role
[root@k8s-master ~]# kubectl create clusterrole myclusterrole --resource=deployment --verb get --dry-run=client -o yaml > myclusterrole.yml
[root@k8s-master ~]# vim myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myclusterrole
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
[root@k8s-master ~]# kubectl apply -f myclusterrole.yml
[root@k8s-master ~]# kubectl describe clusterrole myclusterrole
Name: myclusterrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get list watch create update patch delete]
deployments.apps [] [] [get list watch create update patch delete]
# Create the cluster role binding
[root@k8s-master ~]# kubectl create clusterrolebinding clusterrolebind-myclusterrole --clusterrole myclusterrole --user timinglee --dry-run=client -o yaml > clusterrolebind-myclusterrole.yml
[root@k8s-master ~]# vim clusterrolebind-myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrolebind-myclusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
[root@k8s-master ~]# kubectl apply -f clusterrolebind-myclusterrole.yml
[root@k8s-master ~]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io clusterrolebind-myclusterrole
Name: clusterrolebind-myclusterrole
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: myclusterrole
Subjects:
Kind Name Namespace
---- ---- ---------
User timinglee
# Test:
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default testpod 1/1 Running 0 22m
ingress-nginx ingress-nginx-admission-patch-dzp2k 0/1 Completed 1 6h50m
ingress-nginx ingress-nginx-controller-bb7d8f97c-xv2tx 1/1 Running 0 6h50m
kube-system calico-kube-controllers-b9c79b596-bpj2p 1/1 Running 0 76m
kube-system calico-node-4nhw5 1/1 Running 0 116m
kube-system calico-node-r9l6s 1/1 Running 0 116m
kube-system calico-node-vdz8t 1/1 Running 0 117m
kube-system calico-typha-55df74b8b4-x5642 1/1 Running 0 118m
kube-system coredns-647dc95897-7fcdz 1/1 Running 0 76m
kube-system coredns-647dc95897-hf7qr 1/1 Running 0 11h
kube-system etcd-k8s-master.timinglee.org 1/1 Running 0 11h
kube-system kube-apiserver-k8s-master.timinglee.org 1/1 Running 0 11h
kube-system kube-controller-manager-k8s-master.timinglee.org 1/1 Running 0 11h
kube-system kube-proxy-2l5bq 1/1 Running 0 7h35m
kube-system kube-proxy-45d2w 1/1 Running 0 7h35m
kube-system kube-proxy-pp94m 1/1 Running 0 7h35m
kube-system kube-scheduler-k8s-master.timinglee.org 1/1 Running 0 11h
metallb-system controller-65957f77c8-d5bkp 1/1 Running 0 76m
metallb-system speaker-kx6rt 1/1 Running 0 7h30m
metallb-system speaker-ppvpc 1/1 Running 0 74m
metallb-system speaker-stl5r 1/1 Running 0 7h30m
nfs-client-provisioner nfs-client-provisioner-5cf69474f9-k667p 1/1 Running 0 171m
[root@k8s-master ~]# kubectl get deployments.apps -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx ingress-nginx-controller 1/1 1 1 6h51m
kube-system calico-kube-controllers 1/1 1 1 118m
kube-system calico-typha 1/1 1 1 118m
kube-system coredns 2/2 2 2 11h
metallb-system controller 1/1 1 1 7h30m
nfs-client-provisioner nfs-client-provisioner 1/1 1 1 172m
[root@k8s-master ~]# kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11h
default nginx-svc ClusterIP None <none> 80/TCP 151m
ingress-nginx ingress-nginx-controller LoadBalancer 10.111.141.120 172.25.254.50 80:32293/TCP,443:30329/TCP 6h51m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.110.232.236 <none> 443/TCP 6h51m
kube-system calico-typha ClusterIP 10.104.149.154 <none> 5473/TCP 120m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 11h
metallb-system metallb-webhook-service ClusterIP 10.96.199.135 <none> 443/TCP 7h30m
[root@k8s-master ~]#
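A ClusterRoleBinding grants the role in every namespace. To reuse the same ClusterRole while limiting the grant to a single namespace, bind it with a RoleBinding instead; a sketch reusing myclusterrole from above (hypothetical binding name):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myclusterrole-default-only   # hypothetical name
  namespace: default                 # the grant applies only in this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole                  # a RoleBinding may reference a ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
```

This pattern is also how the predefined ClusterRoles (admin, edit, view) are typically granted per namespace.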

Service account automation
Service account admission controller:
- If the pod has no ServiceAccount set, sets its ServiceAccount to default.
- Ensures the ServiceAccount referenced by the pod exists, otherwise rejects the pod.
- If the pod does not set ImagePullSecrets, adds the ImagePullSecrets from the ServiceAccount to the pod.
- Adds a volume to the pod containing a token for API access.
- Adds a volumeSource mounted at /var/run/secrets/kubernetes.io/serviceaccount to every container in the pod.
Service account controller:
The service account controller manages the service accounts in each namespace and ensures that every active namespace contains a service account named "default".
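The defaults that the admission controller applies can also be written out explicitly in a pod spec; a minimal sketch (hypothetical pod name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo                        # hypothetical name
spec:
  serviceAccountName: default          # what the admission controller would set anyway
  automountServiceAccountToken: true   # token volume is mounted automatically
  containers:
  - name: web
    image: nginx
```

Inside the container, the API token then appears under /var/run/secrets/kubernetes.io/serviceaccount without any explicit volume or volumeMount in the spec.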