We use kubectl commands and YAML manifests to deploy Pods. When K8s receives a request to create a Pod resource object, its Scheduler component decides which node the Pod should run on based on its scheduling policies. The kubelet component on that node then works with the container runtime to start the Pod's containers. In this article, we will take a deeper look at K8s node management and scheduling policies.
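As a quick illustration of that workflow (a minimal sketch; the Pod name and the nginx image are arbitrary examples, not taken from this article), a Pod is described in a YAML manifest and submitted with kubectl:

```yaml
# nginx-pod.yaml -- a minimal Pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```

```bash
# Submit the manifest; the Scheduler then picks a node for the Pod
kubectl apply -f nginx-pod.yaml

# Check which node the Pod was scheduled to
kubectl get pod nginx-pod -o wide
```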
Managing Nodes
Nodes are the physical machines, virtual machines, or cloud servers that join a K8s cluster. They form the hardware infrastructure of the cluster and are where the containers in Pods actually run.
Nodes are divided into master nodes and worker nodes, each running different K8s components. Typically, a K8s cluster needs at least two nodes to be considered a true cluster, so that business applications can run with some degree of availability. K8s also treats each node as a resource object, bringing it under the management of the Controller Manager component.
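Because nodes are themselves resource objects, they can be inspected with the same kubectl verbs used for any other object (standard kubectl commands; the node name below is a placeholder):

```bash
# List all nodes in the cluster with their roles and status
kubectl get nodes -o wide

# Inspect one node's labels, capacity, conditions, and the Pods running on it
kubectl describe node <node-name>
```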
Adding a New Node
To add a new node, first ensure that the new node and the existing nodes in the K8s cluster are on the same internal network and can reach each other.
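The exact join procedure depends on how the cluster was installed. If it was bootstrapped with kubeadm (an assumption, not stated in this article), a new worker node is usually joined roughly like this (the IP, token, and hash below are placeholders):

```bash
# On an existing master node: print a join command with a fresh token
kubeadm token create --print-join-command

# On the new node: run the printed command
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```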