Deploying a Kubernetes Cluster with kubeadm


I. Environment Setup

3 × CentOS 7 servers (2 CPU cores, 4 GB RAM each)

Update system packages from the YUM mirrors

yum update
yum upgrade

1. Install and run Docker on every node


2. Install kubeadm, kubectl, and kubelet on every node

Add the Aliyun Kubernetes repository

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
#Aliyun mirror site
#refresh packages from the new repo
yum update

Disable SELinux

#temporarily disable
setenforce 0
#permanent: edit the config file
#in /etc/selinux/config, change SELINUX=enforcing to SELINUX=disabled (or SELINUX=permissive)
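
The permanent edit can also be scripted with sed; a minimal sketch, demonstrated here on a scratch file for safety (on a real node, set CONF=/etc/selinux/config instead):

```shell
# scratch copy for illustration; use CONF=/etc/selinux/config on a real node
CONF=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONF"

# flip SELINUX=enforcing to SELINUX=disabled in place
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$CONF"
cat "$CONF"
```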

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable the swap partition

The kubelet will not run reliably with swap enabled, so the swap partition must be disabled.

#temporarily disable
swapoff -a
#permanent: comment out the swap entry in /etc/fstab
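
The fstab edit can likewise be scripted; a sed sketch shown on a scratch copy (on a real node, set FSTAB=/etc/fstab; the device names below are only sample content):

```shell
# scratch copy for illustration; use FSTAB=/etc/fstab on a real node
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# comment out every active swap entry
sed -ri 's|^([^#].*\bswap\b.*)$|#\1|' "$FSTAB"
cat "$FSTAB"
```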

Install and start kubelet

yum install -y kubelet kubeadm kubectl 

#enable on boot and start now
systemctl enable --now kubelet

3. Deploy the control-plane (master) node

Print kubeadm's default init configuration

# kubeadm config print init-defaults
...
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
...

Write the kubeadm.yaml file

Here, apiVersion and kubernetesVersion must match the values reported by kubeadm above.

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
imageRepository: registry.aliyuncs.com/google_containers
#imageRepository: k8s.gcr.io
controllerManager: {}
apiServer:
  extraArgs: # the extraArgs field is made up of key: value pairs
    runtime-config: "api/all=true"
etcd:
  local:
    dataDir: /data/k8s/etcd
scheduler: {}
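
One thing worth adding to the same kubeadm.yaml: if Calico (section 6) will be the CNI, declaring the pod CIDR up front saves trouble later. The subnet below is an assumption, not from the original setup; pick any range that does not overlap your node network and make sure the CNI manifest agrees with it:

```yaml
networking:
  # pod CIDR handed to the CNI plugin; must not overlap the host network
  podSubnet: "10.244.0.0/16"
```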

Enable kubelet.service

systemctl enable kubelet.service

Configure and start the container runtime, containerd

rm -rf /etc/containerd/config.toml
 
#Adjust the containerd config to add registry mirrors:
#  1) regenerate the default config
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
 
#  2) edit /etc/containerd/config.toml:
  ...
  [plugins."io.containerd.grpc.v1.cri"]
    # change: 1 line
    # sandbox_image = "registry.k8s.io/pause:3.6"
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
 
    [plugins."io.containerd.grpc.v1.cri".registry]
      ...
      # add: 3+2 lines (not counting comments or blank lines)
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          # get an Aliyun accelerator address from https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
          endpoint = ["https://xxx.mirror.aliyuncs.com", "https://registry-1.docker.io"]
 
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
          endpoint = ["https://registry.aliyuncs.com/google_containers"]
    ...

systemctl restart containerd
systemctl status containerd

Allow bridged traffic to pass through iptables/ip6tables

cd /etc/sysctl.d

vim k8s.conf
#add the following lines:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
#apply immediately
sysctl --system
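
One step this section skips: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so load it (plus overlay, which containerd uses) and persist it before relying on sysctl --system. A sketch, written against a scratch path for safety; on a real node set MODCONF=/etc/modules-load.d/k8s.conf and run the modprobe lines:

```shell
# scratch path for illustration; use MODCONF=/etc/modules-load.d/k8s.conf on a real node
MODCONF=$(mktemp)
cat > "$MODCONF" <<'EOF'
overlay
br_netfilter
EOF
cat "$MODCONF"

# then load the modules immediately (requires root):
#   modprobe overlay
#   modprobe br_netfilter
```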

Time synchronization

yum install ntpdate -y

ntpdate time.windows.com

Initialize the control-plane node

# kubeadm init --config ~/k8s-deployments/kubeadm.yaml
[init] Using Kubernetes version: v1.28.0
...
...
...
[preflight] Running pre-flight checks

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
  ###run on the master node
  mkdir -p $HOME/.kube  ##kubectl stores and reads the cluster config here by default
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  ##copy the admin file into the current user's .kube directory; admin.conf carries the cluster's highest privileges
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 192.168.xx.211:6443 --token qi82d7.glltv3hltpe4aq08 \
	--discovery-token-ca-cert-hash sha256:6054b8402053e9eb8f6cb134c066f3e28ae80aa5fd28cec002af1f4199383890
###If you didn't save the join command, you can regenerate it on the master node: kubeadm token create --print-join-command
#view the ClusterIP of the kubernetes service
# kubectl get svc kubernetes -n default
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   10h
 
#view the service CIDR allocation across all namespaces
# kubectl get svc --all-namespaces -o wide
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  10h   <none>
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   10h   k8s-app=kube-dns

4. Start using the cluster on the control-plane node

Run on the master node:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, as the root user, run: export KUBECONFIG=/etc/kubernetes/admin.conf

List the cluster nodes

# kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
vm-a   NotReady   control-plane   9h    v1.28.2

5. Join worker nodes to the cluster

Run on each worker node

Install kubelet / kubectl / kubeadm

kubeadm: the command to bootstrap and initialize the cluster.

kubectl: the command-line tool for communicating with the cluster.

kubelet: the agent that runs on every node and starts Pods and containers.

Join the cluster

kubeadm join 192.168.xx.211:6443 --token qi82d7.glltv3hltpe4aq08 \
	--discovery-token-ca-cert-hash sha256:6054b8402053e9eb8f6cb134c066f3e28ae80aa5fd28cec002af1f4199383890

6. Install the Calico CNI network plugin (master and worker nodes)

Load the Calico images into containerd

#1. Pull the images
docker pull calico/cni:v3.26.1
docker pull calico/node:v3.26.1
docker pull calico/kube-controllers:v3.26.1

#If docker.io is not directly reachable, pull through a proxy mirror instead
docker pull m.daocloud.io/docker.io/calico/cni:v3.26.1
docker pull m.daocloud.io/docker.io/calico/node:v3.26.1
docker pull m.daocloud.io/docker.io/calico/kube-controllers:v3.26.1

#Tag the images back to their original names!
docker tag m.daocloud.io/docker.io/calico/cni:v3.26.1 docker.io/calico/cni:v3.26.1
docker tag m.daocloud.io/docker.io/calico/node:v3.26.1 docker.io/calico/node:v3.26.1
docker tag m.daocloud.io/docker.io/calico/kube-controllers:v3.26.1 docker.io/calico/kube-controllers:v3.26.1

#2. Import the docker images into containerd
ctr -n k8s.io images import <(docker save calico/cni:v3.26.1)
ctr -n k8s.io images import <(docker save calico/node:v3.26.1)
ctr -n k8s.io images import <(docker save calico/kube-controllers:v3.26.1)
touch ~/k8s-deployments/calico.yaml
 
#Manually download calico.yaml and copy it to ~/k8s-deployments/calico.yaml
#  https://github.com/projectcalico/calico/blob/v3.26.1/manifests/calico.yaml (GitHub)
#If the network is blocked, configure a Docker registry mirror first
#(for Docker client versions above 1.10.0)

You can use an accelerator by editing the daemon config file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://7sn9lxja.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

[root@master ~]# kubectl apply -f ~/k8s-deployments/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
serviceaccount/calico-node unchanged
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
...
...
...
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
deployment.apps/calico-kube-controllers configured
#once Calico is ready, check CoreDNS
kubectl get pod -n kube-system -o wide

If CoreDNS pods are still stuck, restart them:
kubectl rollout restart deployment coredns -n kube-system

kubectl get nodes -o wide

7. Verify the deployment on the control-plane node

#check kubelet status
systemctl status kubelet
#verify the pause image was pulled successfully
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock images | grep pause
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB


Summary:

Failures during kubeadm init:

Usually the kubelet is not running or the container runtime is misconfigured; make sure both kubelet and containerd are up and healthy.

kubelet misconfiguration:

Check the kubelet config file /var/lib/kubelet/config.yaml, in particular whether its cgroupDriver matches containerd's (both should be systemd).
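
For the cgroup driver specifically, the kubelet side looks roughly like the fragment below (with kubeadm ≥ 1.22 this defaults to systemd, so it is usually already correct); on the containerd side, /etc/containerd/config.toml must set SystemdCgroup = true under the runc runtime options, then restart both services:

```yaml
# fragment of /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```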

Node stuck in NotReady:

Check whether the CNI network plugin is installed correctly, and check the kubelet status on that node.
