This guide sets up a Kubernetes (K8s) 1.27.0 cluster with kubeadm in one pass. The steps are as follows:
- Cluster planning and architecture
- System initialization preparation (synchronized operation on all nodes)
- Install and configure the cri-dockerd plugin
- Install kubeadm (synchronized operation on all nodes)
- Initialize the cluster
- Add Node nodes to the cluster
- Install the Calico network component
- Test CoreDNS resolution availability
Cluster planning and architecture
Official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/
Environment planning
- Pod network segment: 10.244.0.0/16
- Service network segment: 10.10.0.0/16

Note: the Pod and Service network segments must not overlap; if they do, the Kubernetes cluster installation will fail.

System initialization preparation (synchronized operation on all nodes)
Disable firewall
Configure domain name resolution
Modify the hostname on the specified host.
Configure server time to stay consistent
Add a cron job to automatically sync time at 1 AM every day
Disable the swap partition (Kubernetes requires swap to be disabled)
Prevent the swap partition from being mounted automatically on boot
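The initialization steps above can be sketched as the following shell commands. The master IP (16.32.15.200) and hostname (master-1) come from this guide; the worker IPs/hostnames and the NTP server are illustrative assumptions, so adjust them to your environment:

```shell
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Configure domain name resolution (worker IPs/hostnames here are assumptions)
cat >> /etc/hosts <<'EOF'
16.32.15.200 master-1
16.32.15.201 node-1
16.32.15.202 node-2
EOF

# Set the hostname on the matching host (run the appropriate line on each machine)
hostnamectl set-hostname master-1   # on the master only

# Synchronize server time, then add a daily cron job at 1 AM
yum install -y ntpdate
ntpdate ntp.aliyun.com
echo "0 1 * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root

# Disable swap now, and comment out the swap entry so it stays off after reboot
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```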
Modify Linux kernel parameters to enable bridge filtering and IP forwarding
Load the bridge filtering module
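A minimal sketch of the two steps above (the sysctl file name is a common convention, not mandated):

```shell
# Enable bridge filtering and IP forwarding via sysctl
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the bridge filtering module and apply the settings
modprobe br_netfilter
sysctl --system

# Verify the module is loaded
lsmod | grep br_netfilter
```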
Configure ipvs functionality
In Kubernetes, a Service has two proxy modes: one based on iptables and one based on ipvs. ipvs offers higher performance than iptables, but to use ipvs mode you must load the ipvs kernel modules manually.
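Loading the ipvs modules can be done as follows (a common pattern; the module list is the usual minimal set for kube-proxy in ipvs mode):

```shell
# Install ipvs admin tools
yum install -y ipset ipvsadm

# Write a script that loads the ipvs kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
EOF

# Run it now and verify the modules are loaded
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
```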
Install Docker container component
Docker configuration acceleration source:
Restart the server (can be skipped)
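A sketch of the Docker installation and accelerator configuration (the Aliyun repo and the mirror URL are assumed examples of a "domestic acceleration source"; substitute your own mirror):

```shell
# Add a Docker yum repo and install Docker CE
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce

# Configure a registry mirror (accelerator) and the systemd cgroup driver
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start Docker and enable it on boot
systemctl daemon-reload
systemctl enable --now docker
```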
Install and configure the cri-dockerd plugin
Download from the official website: https://github.com/Mirantis/cri-dockerd/releases
Note: perform this on all three servers.
Install the cri-dockerd plugin
Backup and update cri-docker.service file
Start cri-dockerd
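The cri-dockerd steps above can be sketched as follows. The release version and pause image are assumptions (pick the latest RPM from the GitHub releases page linked above, and a pause image reachable from your network):

```shell
# Download and install the cri-dockerd RPM (version 0.3.1 is an example)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm

# Back up the unit file, then point cri-dockerd at a reachable pause image
cp /usr/lib/systemd/system/cri-docker.service{,.bak}
sed -i 's#^ExecStart=.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9#' /usr/lib/systemd/system/cri-docker.service

# Start cri-dockerd and enable it on boot
systemctl daemon-reload
systemctl enable --now cri-docker
```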
Install kubeadm (synchronized operation on all nodes)
Configure domestic yum source
Install kubeadm, kubelet, and kubectl in one step.
kubeadm uses the kubelet service to deploy the core Kubernetes components as containers, so enable the kubelet service first.
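A sketch of the repo configuration and installation (the Aliyun repo is an assumed example of a "domestic yum source"):

```shell
# Configure a domestic (Aliyun mirror) Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# Install kubeadm, kubelet, kubectl pinned to 1.27.0, then enable kubelet
yum install -y kubeadm-1.27.0 kubelet-1.27.0 kubectl-1.27.0
systemctl enable --now kubelet
```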
Initialize the cluster
Operate on the master-1 host
Generate default configuration file for initialization
Modify the default configuration file according to your needs. The main changes are:
- advertiseAddress: changed to the master's IP address
- criSocket: specify the container runtime
- imageRepository: configure the domestic acceleration source address
- podSubnet: Pod network segment address
- serviceSubnet: Service network segment address
- Appended configuration at the end to use ipvs mode and the systemd cgroup driver
- nodeRegistration.name: changed to the current hostname
The final initialization configuration file is as follows:
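A sketch of such a configuration, reconstructed from the changes listed above (generated with `kubeadm config print init-defaults`, then edited; verify each field against your environment):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 16.32.15.200   # the master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock   # cri-dockerd runtime
  name: master-1
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
imageRepository: registry.aliyuncs.com/google_containers   # domestic mirror
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
# Appended: use the ipvs proxy mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
# Appended: use the systemd cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```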
Perform initialization
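Assuming the edited configuration was saved as `kubeadm.yaml`, initialization is a single command:

```shell
# Initialize the control plane with the customized config; keep the log for review
kubeadm init --config kubeadm.yaml | tee kubeadm-init.log
```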
Initialization was successful, and the output is as follows:
```
[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0504 22:24:16.508649 4725 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-1] and IPs [10.96.0.1 16.32.15.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0504 22:24:34.897353 4725 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.002479 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 16.32.15.200:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:afef55c724c1713edb7926d98f8c4063fbae928fc4eb11282589d6485029b9a6
```
Configure the kubectl kubeconfig file. This authorizes kubectl, so the kubectl command can manage the cluster using the admin certificate.
Verify that the kubectl command works
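These are the commands from the init output above, followed by a simple verification:

```shell
# Authorize kubectl with the admin kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify kubectl can reach the cluster
kubectl get nodes
```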
Add Node nodes to the cluster
Perform these operations on the two worker nodes.
Use the following command to create and view the token
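On the master, a token can be created and inspected like this:

```shell
# Create a join token and print the full kubeadm join command
kubeadm token create --print-join-command

# View existing tokens
kubeadm token list
```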
Execute the join command on the two worker nodes. Note: add `--cri-socket=` to specify cri-dockerd.sock.
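Using the token and hash from the init output above, the join command looks like this:

```shell
# On each worker node: join the cluster, pointing kubeadm at cri-dockerd
kubeadm join 16.32.15.200:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:afef55c724c1713edb7926d98f8c4063fbae928fc4eb11282589d6485029b9a6 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```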
Both nodes report that they have successfully joined the cluster.
Label the two worker nodes
Execute on the master-1 host
View cluster nodes
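A sketch of the labeling and verification (the node names `node-1`/`node-2` are assumptions based on this guide's planning; substitute your actual node names):

```shell
# On master-1: give the workers a role label, then view all cluster nodes
kubectl label node node-1 node-role.kubernetes.io/worker=worker
kubectl label node node-2 node-role.kubernetes.io/worker=worker
kubectl get nodes
```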

Install the Calico network component
Calico online documentation address: https://docs.projectcalico.org/manifests/calico.yaml
Upload the calico.yaml file to the server and apply it on the master host.
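A minimal sketch of installing Calico from the manifest linked above:

```shell
# Download the Calico manifest on the master.
# If CALICO_IPV4POOL_CIDR is uncommented in the manifest, set it to the
# pod subnet used at init time (10.244.0.0/16) before applying.
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
```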
Check cluster status && check built-in Pod status
Check that all components are in Running status.
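The status checks above amount to:

```shell
# View node readiness and the system Pods (CoreDNS, Calico, kube-proxy, etc.)
kubectl get nodes
kubectl get pods -n kube-system -o wide
```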

Test CoreDNS resolution availability
Download busybox:1.28 image
Test coredns
Note: use busybox version 1.28 specifically; the latest busybox images have a broken nslookup and cannot resolve DNS names or IPs.
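The DNS test can be performed like this (querying the built-in `kubernetes` Service is a common smoke test):

```shell
# Pull busybox 1.28 (newer versions have a broken nslookup)
docker pull busybox:1.28

# Run a throwaway pod and resolve the kubernetes Service via CoreDNS
kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
```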
Source: https://blog.csdn.net/weixin_45310323/article/details/130494823