Nodes

Kubernetes runs your workload by placing containers into Pods to run on Nodes.

A Node is a worker machine in Kubernetes, and may be either a virtual or a physical machine, depending on the cluster. A Pod always runs on a Node. Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node – every cluster has at least one worker node.

The components on a node include the kubelet, a container runtime, and the kube-proxy.

Stuff you wanna know:

    1. The name of a Node object must be a valid DNS subdomain name.
    2. Two Nodes cannot have the same name at the same time.
    3. Each Node is managed by the control plane.
    4. The node controller is a Kubernetes control plane component that manages various aspects of nodes.
    5. A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.
    6. The control plane’s automatic scheduling takes into account the available resources on each Node.
    7. The node controller does not force delete pods until it is confirmed that they have stopped running in the cluster.
    8. When problems occur on nodes, the Kubernetes control plane automatically creates taints that match the conditions affecting the node. (The scheduler takes the Node’s taints into consideration when assigning a Pod to a Node).
    9. Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
    10. Every Kubernetes Node runs the kubelet and a container runtime such as containerd or CRI-O.
    11. You can add Nodes to the API server in two ways — 1. The kubelet on a node self-registers to the control plane, or 2. You manually add a Node object.
    12. After you create a Node object, or the kubelet on a node self-registers, the control plane checks whether the new Node object is valid.
    13. If the node is healthy (i.e. all necessary services are running), then it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity until it becomes healthy.
    14. Kubernetes keeps the object for the invalid Node and continues checking to see whether it becomes healthy.
    15. You, or a controller, must explicitly delete the Node object to stop that health checking.
    16. If the Node needs to be replaced or updated significantly, the existing Node object needs to be removed from the API server first and re-added after the update.
    17. When the Node authorization mode and NodeRestriction admission plugin are enabled, kubelets are only authorized to create/modify their own Node resource.
    18. When Node configuration needs to be updated, Kubernetes recommends re-registering the node with the API server.
    19. Pods already scheduled on the Node may misbehave or cause issues if the Node’s configuration changes when the kubelet restarts.
    20. You can create and modify Node objects using kubectl.
    21. When you want to create Node objects manually, set the kubelet flag --register-node=false.
    22. You can modify Node objects regardless of the setting of --register-node. For example, you can set labels on an existing Node or mark it unschedulable.
    23. You can use labels on Nodes in conjunction with node selectors on Pods to control scheduling. For example, you can constrain a Pod to only be eligible to run on a subset of the available nodes.
    24. Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node but does not affect existing Pods on the Node. (This is useful as a preparatory step before a node reboot or other maintenance.)
    25. To mark a Node unschedulable, run:
      kubectl cordon $NODENAME
    26. A Node’s status contains — Addresses, Conditions, Capacity and Allocatable, and Info.
    27. Node addresses include HostName, ExternalIP, and InternalIP.
    28. The conditions field describes the status of all Running nodes.
    29. Node Capacity and Allocatable describe the resources available on the node: CPU, memory, and the maximum number of pods that can be scheduled onto the node.
    30. The fields in the capacity block indicate the total amount of resources that a Node has.
    31. The Node’s Info field describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), container runtime details, and which operating system the node uses.
    32. You can use kubectl to view a Node’s status and other details.
    33. Heartbeats, sent by Kubernetes nodes, help your cluster determine the availability of each node and take action when failures are detected.
    34. Node objects track information about the Node’s resource capacity: for example, the amount of memory available and the number of CPUs. If you manually add a Node, then you need to set the node’s capacity information when you add it.
    35. Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local services that should run on the Node even if it is being drained of workload applications.
    36. In the Kubernetes API, a node’s condition is represented as part of the .status of the Node resource.
    37. A key reason for spreading your nodes across availability zones is so that the workload can be shifted to healthy zones when one entire zone goes down.
    38. The kubelet attempts to detect node system shutdown and terminates pods running on the node.
    39. A node shutdown action may not be detected by the kubelet’s Node Shutdown Manager, either because the command does not trigger the inhibitor locks mechanism used by the kubelet or because of a user error, i.e., the ShutdownGracePeriod and ShutdownGracePeriodCriticalPods are not configured properly.
    40. When a node is shut down but not detected by the kubelet’s Node Shutdown Manager, the pods that are part of a StatefulSet will be stuck in terminating status on the shutdown node and cannot move to a new running node.
    41. To enable swap on a node, the NodeSwap feature gate must be enabled on the kubelet, and the --fail-swap-on command line flag or failSwapOn configuration setting must be set to false.
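When self-registration is turned off with --register-node=false, you create the Node object yourself. A minimal sketch of such a manifest (the node name and label here are hypothetical placeholders; metadata.name must match the name the kubelet uses for itself):

```yaml
# Sketch of a manually created Node object.
# "node01" and the label are example values, not defaults.
apiVersion: v1
kind: Node
metadata:
  name: node01
  labels:
    disktype: ssd
```

You would apply this with `kubectl apply -f node.yaml`; the control plane then validates the new Node object just as it does for a self-registered one.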
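The note about node labels plus node selectors can be sketched as a Pod spec. Here the label key/value disktype: ssd is an assumed example you would first attach with `kubectl label nodes <node-name> disktype=ssd`:

```yaml
# Sketch: constrain a Pod to nodes carrying a specific label.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    disktype: ssd   # only nodes labeled disktype=ssd are eligible
```

The scheduler will leave this Pod Pending if no node carries the label, which makes the constraint easy to verify with `kubectl describe pod nginx`.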
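The status fields above (Addresses, Conditions, Capacity and Allocatable, Info) all live under .status of the Node resource. A trimmed, illustrative excerpt — every value below is made up for the sketch:

```yaml
# Illustrative excerpt of a Node's .status (all values are examples).
status:
  addresses:
    - type: InternalIP
      address: 10.240.0.4
    - type: Hostname
      address: node01
  conditions:
    - type: Ready              # kubelet is healthy and ready to accept Pods
      status: "True"
  capacity:                    # total resources on the node
    cpu: "2"
    memory: 4037808Ki
    pods: "110"
  allocatable:                 # portion available to ordinary Pods
    cpu: 1930m
    memory: 3534512Ki
    pods: "110"
  nodeInfo:
    kernelVersion: 5.15.0
    kubeletVersion: v1.29.0
    containerRuntimeVersion: containerd://1.7.0
    operatingSystem: linux
```

You can see the real values for your cluster with `kubectl describe node <node-name>` or `kubectl get node <node-name> -o yaml`.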
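The shutdown-detection behavior is configured on the kubelet. A sketch of the relevant KubeletConfiguration fragment, with example durations — both values must be non-zero for graceful node shutdown to be active:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node shutdown is delayed while pods terminate (example value).
shutdownGracePeriod: 30s
# Portion of that time reserved for critical pods (example value).
shutdownGracePeriodCriticalPods: 10s
```

On Linux this relies on systemd inhibitor locks, which is why shutdown commands that bypass that mechanism go undetected, as noted above.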
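The swap requirement in the last item can be expressed in the kubelet configuration file rather than via command-line flags. A minimal sketch, assuming the config-file route:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false       # let the kubelet start on a node with swap enabled
featureGates:
  NodeSwap: true        # feature gate required for swap support
```

How workloads are actually allowed to use swap is governed separately by the kubelet's memorySwap.swapBehavior setting; check the documentation for the behaviors your Kubernetes version supports.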

More stuff: