Deploying Kubernetes 1.35 on Ubuntu 24.04

2026-01-20 16:37

Kubernetes package mirror

(For example, to install version 1.36, replace v1.35 in the configuration below with v1.36.)

# Install prerequisites (curl and gpg are used by the commands below)
apt-get update && apt-get install -y apt-transport-https ca-certificates curl gpg
# Download the public signing key for the Kubernetes package repository
mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/deb/Release.key |
   gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Point apt at the mirror
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/deb/ /" |
   tee /etc/apt/sources.list.d/kubernetes.list
# Refresh the package cache
apt-get update
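
A quick way to confirm the mirror is usable before going further is to list the package versions apt can now see:

# List the kubeadm versions available from the new repository
apt-cache madison kubeadm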

Add hosts entries

cat >> /etc/hosts << EOF
172.30.11.99 vip
172.30.11.0 node00
172.30.11.1 node01
172.30.11.2 node02
172.30.11.3 node03
172.30.11.4 node04
172.30.11.5 node05
172.30.11.6 node06
172.30.11.7 node07
172.30.11.8 node08
172.30.11.9 node09
EOF
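
To confirm the entries resolve as expected (node00 and vip are the names defined above):

# Check a node name and the VIP against /etc/hosts
getent hosts node00 vip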

Time synchronization

# Set the timezone
timedatectl set-timezone Asia/Shanghai
# Install the NTP client
apt install ntpdate -y
# Sync the clock once, manually
ntpdate time1.aliyun.com
# Schedule a daily sync: run crontab -e and add the line below
crontab -e
0 0 * * * ntpdate time1.aliyun.com
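
If you prefer not to rely on cron, Ubuntu 24.04 also ships systemd-timesyncd, which keeps the clock in sync continuously; a minimal sketch, assuming the stock timesyncd install:

# Point systemd-timesyncd at the same NTP server and enable it
sed -i 's/^#\?NTP=.*/NTP=time1.aliyun.com/' /etc/systemd/timesyncd.conf
systemctl restart systemd-timesyncd
timedatectl set-ntp true
# Check sync status
timedatectl timesync-status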

System tuning

# Install ipset and ipvsadm
apt install ipset ipvsadm -y
# Configure kernel modules to load at boot
cat << EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
cat << EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Reboot so the module configuration takes effect
reboot
# Verify the modules are loaded
lsmod | grep ip_vs
lsmod | grep nf_conntrack
lsmod | grep overlay
lsmod | grep br_netfilter
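
The modules can also be loaded immediately, without waiting for the reboot (the reboot then only confirms they persist):

# Load the modules in place
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack overlay br_netfilter; do
    modprobe "$m"
done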

Kernel forwarding parameters

# Add kernel parameters
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Apply the settings and verify
sysctl -p /etc/sysctl.d/k8s.conf
sysctl --system
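
Note that vm.swappiness = 0 only discourages swapping; kubeadm's preflight checks still fail while swap is enabled. Assuming the stock Ubuntu swap entry in /etc/fstab, disable it like this:

# Turn swap off now and keep it off across reboots
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab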

Install Containerd

# Download the containerd release
wget https://github.com/containerd/containerd/releases/download/v2.2.1/containerd-2.2.1-linux-amd64.tar.gz
tar xf containerd-2.2.1-linux-amd64.tar.gz
mv bin/* /usr/local/bin/
# Download the systemd unit file
wget -O /etc/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
# Generate the default configuration and point the sandbox (pause) image at the Aliyun mirror
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/registry.k8s.io/registry.aliyuncs.com\/google_containers/' /etc/containerd/config.toml
# Enable and start on boot
systemctl daemon-reload
systemctl enable --now containerd
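
This guide later sets the kubelet cgroup driver to systemd, so containerd's runc runtime should use the same driver. Whether the generated default already does depends on the containerd version, so check and, if needed, flip the flag (a sketch assuming the stock config layout):

# Ensure runc uses the systemd cgroup driver, matching kubelet
grep SystemdCgroup /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd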

Install Runc

# Download, install, and verify runc
wget https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64
chmod +x runc.amd64
mv runc.amd64 /usr/local/bin/runc
runc --version

Install the CNI plugins

wget https://github.com/containernetworking/plugins/releases/download/v1.9.0/cni-plugins-linux-amd64-v1.9.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin/ cni-plugins-linux-amd64-v1.9.0.tgz
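
The plugins land as standalone binaries; a quick listing confirms the unpack worked:

# Verify the CNI plugin binaries are in place
ls /opt/cni/bin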

Install the Kubernetes components

# Check the available package versions
apt-cache policy kubeadm kubectl kubelet
# Install a specific version
apt-get install -y kubelet=1.35.0-1.1 kubeadm=1.35.0-1.1 kubectl=1.35.0-1.1
# Pin the versions so routine upgrades don't move them
apt-mark hold kubelet kubeadm kubectl
# (Unpin later when you intend to upgrade)
apt-mark unhold kubelet kubeadm kubectl
# Make kubelet use the systemd cgroup driver: edit /etc/default/kubelet and add the line below
vi /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
# Enable kubelet on boot
systemctl enable --now kubelet
# Configure crictl to talk to containerd
cat << EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 30
debug: false
pull-image-on-create: false
disable-pull-on-run: false
EOF
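
With the config in place, crictl should be able to reach containerd over the CRI socket:

# Verify CRI connectivity to containerd
crictl version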

Generate and edit the kubeadm configuration (run on the first control-plane node only)

# Create the initialization configuration file
kubeadm config print init-defaults > /etc/kubernetes/init-default.yaml
# Switch the image repository to the Aliyun mirror
sed -i 's/registry.k8s.io/registry.aliyuncs.com\/google_containers/' /etc/kubernetes/init-default.yaml
# Set the API server advertise address (replace 172.30.11.0 with your own host's IP)
sed -i 's/1.2.3.4/172.30.11.0/' /etc/kubernetes/init-default.yaml
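
Note that the flag-based kubeadm init in the next section does not read this file. If you would rather drive the initialization from it, add controlPlaneEndpoint: "172.30.11.99:6443" to its ClusterConfiguration section and pass the file explicitly:

# Alternative: initialize from the edited configuration file
kubeadm init --config /etc/kubernetes/init-default.yaml --upload-certs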

Initialize the cluster

# Pre-pull the control-plane images
kubeadm config images pull --kubernetes-version=v1.35.0 --image-repository registry.aliyuncs.com/google_containers
# Create the cluster (HA-ready: the VIP from /etc/hosts is the control-plane endpoint)
kubeadm init --image-repository registry.aliyuncs.com/google_containers --control-plane-endpoint "172.30.11.99:6443" --upload-certs
[init] Using Kubernetes version: v1.35.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node00] and IPs [10.96.0.1 172.30.11.0 172.30.11.99]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node00] and IPs [172.30.11.0 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node00] and IPs [172.30.11.0 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001997335s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://172.30.11.0:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.507151155s
[control-plane-check] kube-scheduler is healthy after 3.446758151s
[control-plane-check] kube-apiserver is healthy after 5.002678983s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3de23fd54db568ec167d4378415d6b8850698c2683beb5020a22614cbd38ccd4
[mark-control-plane] Marking the node node00 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node00 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 6xs2ak.vz8makzbnq00oi8p
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

 export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes running the following command on each as root:

 kubeadm join 172.30.11.99:6443 --token 6xs2ak.vz8makzbnq00oi8p \
       --discovery-token-ca-cert-hash sha256:bef71f22a3c03db9c2dd993f0bc438c814e974e850e744541a67f153595e6d37 \
       --control-plane --certificate-key 3de23fd54db568ec167d4378415d6b8850698c2683beb5020a22614cbd38ccd4

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.30.11.99:6443 --token 6xs2ak.vz8makzbnq00oi8p \
       --discovery-token-ca-cert-hash sha256:bef71f22a3c03db9c2dd993f0bc438c814e974e850e744541a67f153595e6d37
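
Before moving on, copy the kubeconfig as instructed above and confirm the control plane has registered; the node will report NotReady until a network plugin is installed:

# Verify the API server answers and the node is registered
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes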

Deploy the Calico network plugin

wget -O /etc/kubernetes/calico.yaml https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f /etc/kubernetes/calico.yaml
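
Calico runs as a DaemonSet in kube-system; the node flips to Ready once calico-node is up:

# Watch Calico come up, then re-check node status
kubectl -n kube-system get pods -l k8s-app=calico-node
kubectl get nodes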

Deploy the metrics-server add-on

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml
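
On kubeadm clusters the kubelet serving certificates are usually self-signed, so metrics-server often fails TLS verification against the kubelets. A common (less secure) workaround is the --kubelet-insecure-tls flag; a sketch, assuming the manifest's default deployment name:

# Allow metrics-server to skip kubelet cert verification, then test
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
kubectl top nodes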

Deploy the ingress-nginx controller

wget https://github.com/kubernetes/ingress-nginx/archive/refs/tags/controller-v1.14.1.zip
unzip controller-v1.14.1.zip
kubectl apply -f ingress-nginx-controller-v1.14.1/deploy/static/provider/baremetal/deploy.yaml
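
The baremetal manifest exposes the controller through a NodePort service; check that it is running and note the assigned ports:

# Verify the controller pod and its NodePort service
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx get svc ingress-nginx-controller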

Deploy the NFS CSI driver

wget https://github.com/kubernetes-csi/csi-driver-nfs/archive/refs/tags/v4.12.1.zip
unzip v4.12.1.zip
cd csi-driver-nfs-4.12.1
./deploy/install-driver.sh v4.12.1 local
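
With the driver installed, check its pods and create a StorageClass for dynamic provisioning. The server and share below are hypothetical placeholders; point them at your actual NFS export:

# Verify the driver pods
kubectl -n kube-system get pods -l app=csi-nfs-controller
# Create a StorageClass backed by the NFS export (server/share are examples)
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 172.30.11.100   # hypothetical NFS server
  share: /data/k8s        # hypothetical export path
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF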