Assign Memory Resources to Containers and Pods


This page shows how to assign a memory request and a memory limit to a Container. A Container is guaranteed to have as much memory as it requests, but is not allowed to use more memory than its limit.

You need a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. To check the version, enter kubectl version. Each node in your cluster must have at least 300 MiB of memory. A few of the steps on this page require you to run the metrics-server service in your cluster. If you already have metrics-server running, you can skip those steps.

Create a namespace so that the resources you create in this exercise are isolated from the rest of your cluster. To specify a memory request for a Container, include the resources:requests field in the Container's resource manifest.
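For example, a minimal sketch of the setup commands and of the relevant fragment of a Container spec (the mem-example namespace name follows the upstream Kubernetes tutorial; adjust it to your environment):

```shell
# Verify your kubectl/cluster versions, then create an isolated namespace
kubectl version
kubectl create namespace mem-example
```

```yaml
# Fragment of a Container spec: a memory request under resources.requests
resources:
  requests:
    memory: "100Mi"
```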


To specify a memory limit, include resources:limits. In this exercise, you create a Pod that has one Container. The Container has a memory request of 100 MiB and a memory limit of 200 MiB. The args section in the configuration file provides arguments for the Container when it starts. The "--vm-bytes", "150M" arguments tell the Container to attempt to allocate 150 MiB of memory. The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB. The output also shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. That is greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit. A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination.
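A sketch of such a Pod manifest, closely modeled on the official tutorial (the memory-demo names and the polinux/stress image are the tutorial's; substitute your own as needed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"   # guaranteed minimum
      limits:
        memory: "200Mi"   # hard ceiling
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
```

Apply the manifest and inspect the Pod; kubectl top requires metrics-server to be running:

```shell
kubectl apply -f memory-request-limit.yaml
kubectl get pod memory-demo --namespace=mem-example --output=yaml   # shows requests and limits
kubectl top pod memory-demo --namespace=mem-example                 # shows actual memory usage
```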


If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure. In this exercise, you create a Pod that attempts to allocate more memory than its limit. In the args section of the configuration file, you can see that the Container will try to allocate 250 MiB of memory, which is well above the 100 MiB limit. At this point, the Container might be running or killed. The Container in this exercise can be restarted, so the kubelet restarts it. Memory requests and limits are associated with Containers, but it is useful to think of a Pod as having a memory request and limit. The memory request for the Pod is the sum of the memory requests for all the Containers in the Pod. Likewise, the memory limit for the Pod is the sum of the limits of all the Containers in the Pod.
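A sketch of the second Pod's manifest, again modeled on the official tutorial (the memory-demo-2 name and the 50 MiB request are the tutorial's values; the parts that matter here are the 100 MiB limit and the 250M allocation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"   # the Container will try to allocate well beyond this
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
```

Watching the Pod, you would typically see it cycle through OOMKilled and back to Running (or CrashLoopBackOff) as the kubelet keeps restarting it:

```shell
kubectl get pod memory-demo-2 --namespace=mem-example --watch
```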


Pod scheduling is based on requests. A Pod is scheduled to run on a Node only if the Node has enough available memory to satisfy the Pod's memory request. In this exercise, you create a Pod that has a memory request so big that it exceeds the capacity of any Node in your cluster. Here is the configuration file for a Pod that has one Container with a request for 1000 GiB of memory, which likely exceeds the capacity of any Node in your cluster. The output shows that the Pod status is PENDING. The memory resource is measured in bytes. You can express memory as a plain integer or as a fixed-point number with one of these suffixes: E, P, T, G, M, k, Ei, Pi, Ti, Gi, Mi, Ki. If you do not specify a memory limit, the Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running, which in turn may invoke the OOM Killer.
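A sketch of such a deliberately unschedulable Pod, following the tutorial's memory-demo-3 example (the 1000Gi request is what makes it unschedulable on typical Nodes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "1000Gi"   # far more than any typical Node can offer
      limits:
        memory: "1000Gi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
```

Running kubectl get pod memory-demo-3 --namespace=mem-example shows the Pod stuck in Pending, and kubectl describe pod reports an Insufficient memory event from the scheduler.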