
Kubernetes v1.36: Pod-Level Resource Managers (Alpha)

Kubernetes v1.36 introduces Pod-Level Resource Managers as an alpha feature, marking a significant shift in how the platform handles resource allocation for performance-sensitive workloads. Previously, resource management policies in Kubernetes were set at the node level through kubelet configuration—a one-size-fits-all approach that often forced teams into uncomfortable compromises. The new feature allows you to define resource management strategies at the pod level, giving you granular control over CPU pinning, memory management, and topology-aware scheduling without requiring node-level changes or multiple kubelet configurations.

Under the hood, this feature extends three existing kubelet managers—the Topology Manager, CPU Manager, and Memory Manager—to accept pod-level directives. When you deploy a pod with specific resource manager hints, the kubelet interprets these preferences and applies them to just that pod’s containers, rather than applying a single policy to all workloads on the node. For example, you can now specify that a particular pod requires CPU affinity and strict socket-level memory binding, while another pod on the same node uses the default resource allocation strategy. This is implemented through pod specification annotations and new fields in the PodSpec, allowing the kubelet to make finer-grained decisions about CPU assignment, NUMA topology awareness, and memory allocation patterns.
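As a concrete illustration, a pod requesting pinned CPUs and NUMA-aligned memory might look like the sketch below. The annotation keys (`resource-managers.alpha.kubernetes.io/...`) and their values are illustrative assumptions, not the finalized alpha API; check the v1.36 release notes for the actual names. The `Guaranteed` QoS shape (integer CPU count, requests equal to limits) mirrors what the node-level static CPU Manager policy already requires for exclusive core assignment.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: trading-engine
  annotations:
    # Hypothetical annotation keys -- the alpha API may name these differently.
    resource-managers.alpha.kubernetes.io/cpu-manager-policy: "static"
    resource-managers.alpha.kubernetes.io/memory-manager-policy: "Static"
    resource-managers.alpha.kubernetes.io/topology-manager-policy: "single-numa-node"
spec:
  containers:
  - name: engine
    image: example.com/trading-engine:latest   # placeholder image
    resources:
      requests:
        cpu: "4"        # integer CPU count, needed for exclusive pinning
        memory: 8Gi
      limits:
        cpu: "4"        # requests == limits -> Guaranteed QoS class
        memory: 8Gi
```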

The practical impact becomes clear in real-world scenarios. A financial services firm running latency-sensitive trading algorithms needs CPU pinning and predictable memory locality, but its logging sidecars don’t. A machine learning platform might deploy both inference pods (requiring memory bandwidth guarantees) and batch processing pods (optimized for throughput) on identical hardware. Previously, you’d either compromise both workload types with a middle-ground node configuration or maintain separate node pools entirely, paying either in operational complexity or in infrastructure cost. With pod-level resource managers, you configure each workload’s needs independently while preserving scheduling flexibility and high utilization on shared nodes.
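The mixed-workload scenario could be expressed as two pods scheduled onto the same node: one carrying pod-level hints, the other taking the node defaults. Again, the annotation key below is an assumed placeholder for the alpha API, and the images are stand-ins.

```yaml
# Latency-sensitive inference pod: asks for NUMA-aligned resource placement.
apiVersion: v1
kind: Pod
metadata:
  name: inference
  annotations:
    # Hypothetical annotation key; the real alpha API may differ.
    resource-managers.alpha.kubernetes.io/topology-manager-policy: "restricted"
spec:
  containers:
  - name: model-server
    image: example.com/inference:latest
    resources:
      requests: {cpu: "4", memory: 16Gi}
      limits:   {cpu: "4", memory: 16Gi}
---
# Throughput-oriented batch pod on the same node: no annotations,
# so it falls back to the default shared-pool allocation strategy.
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  containers:
  - name: worker
    image: example.com/batch:latest
    resources:
      requests: {cpu: "2", memory: 4Gi}
```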

For teams already running performance-critical applications on Kubernetes, this feature removes a painful constraint: you no longer need to choose between standardized node configurations and infrastructure sprawl. If you’re running HPC workloads, real-time processing, or any application where CPU cache locality or memory latency matters, Pod-Level Resource Managers deserve attention as you plan your 2025 upgrades. Keep in mind that the feature is currently alpha, so expect API changes and test thoroughly in non-production environments first. As it matures toward beta and stable, it will likely become a standard tool in the performance optimization toolkit for organizations running demanding workloads on Kubernetes.
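As with any alpha feature, it must be enabled explicitly via a feature gate on each node’s kubelet. The mechanism below (the `featureGates` field of `KubeletConfiguration`) is the standard way to do this; the gate name itself is an assumption, so verify it against the v1.36 feature-gate table before use.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Assumed gate name for illustration -- confirm in the v1.36 release notes.
  PodLevelResourceManagers: true
```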

Source: Kubernetes Blog