Roshanboss

Kubernetes 1.36 and Beyond: SELinux Volume Mount Optimization Becomes Stable

Last updated: 2026-05-01 14:33:05

Introduction

If your Kubernetes cluster runs on Linux with SELinux in enforcing mode, now is the time to pay attention. The upcoming release, anticipated to be v1.37, is expected to enable the SELinuxMount feature gate by default. This change significantly speeds up volume setup for most workloads, but it can also break applications that rely on the older recursive relabeling model—especially those sharing a single volume between privileged and unprivileged Pods on the same node. Kubernetes v1.36 provides the ideal window to audit your cluster and either adapt or opt out of this optimization. If your nodes do not use SELinux, you can skip this article entirely: the kubelet simply bypasses the entire SELinux logic when SELinux is unavailable or disabled in the Linux kernel.


This article builds on earlier work described in the Kubernetes 1.27: Efficient SELinux Relabeling (Beta) post, which introduced the SELinuxMountReadWriteOncePod feature gate. The core problem remains the same, but the solution now extends to all volume types.

The Challenge of SELinux Volume Relabeling

On Linux systems with SELinux enabled, every object—files, directories, network sockets—carries a security label. Access control decisions are made based on these labels. Historically, Kubernetes worked with the container runtime to apply the Pod's SELinux label to all its volumes. The process went like this:

  • The container runtime received the SELinux label from the Pod's securityContext field.
  • It then recursively changed the label on every file visible to the Pod's containers—a potentially slow operation, especially on remote filesystems with many files.

This recursive relabeling introduced significant latency during Pod startup, particularly when volumes contained thousands of inodes. A notable exception: if a container used a subPath of a volume, only that subpath was relabeled. This allowed two Pods with different SELinux labels to share the same volume, provided they each used distinct subpaths.

When a Pod did not specify an SELinux label in the Kubernetes API, the container runtime assigned a unique random label. This prevented a process that escaped the container boundary from accessing data of other containers on the host. However, the runtime still recursively relabeled all Pod volumes with that random label, adding to the startup cost.

How the Recursive Model Worked

  1. Pod YAML defined securityContext.seLinuxOptions (or none).
  2. Container runtime received the volume mount information.
  3. For each mounted volume (excluding subPath variations), it called chcon or similar to recursively change SELinux labels.
  4. All files under the mount point were updated—a linear scan that could take seconds or even minutes for large volumes.
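The steps above start from the Pod's `securityContext`. As a concrete illustration, here is a minimal sketch of a Pod that pins an explicit SELinux label (the Pod name, image, and PVC name are hypothetical; the `seLinuxOptions` fields are the standard Kubernetes API shape):

```yaml
# Sketch: a Pod that pins an explicit SELinux label. Under the
# recursive model, every file on the "data" volume would be
# relabeled to system_u:object_r:container_file_t:s0:c123,c456
# before the containers start.
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo          # hypothetical name
spec:
  securityContext:
    seLinuxOptions:
      user: system_u
      role: system_r
      type: container_t
      level: "s0:c123,c456"
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc   # hypothetical PVC
```

If `seLinuxOptions` is omitted, the container runtime picks a unique random label, but the recursive relabeling cost is paid either way.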

This model was simple but inefficient: its cost scaled linearly with the number of files in the volume, so Pod startup latency grew without bound as volumes accumulated data.

The New Approach: Mount-Level Labeling

Kubernetes now introduces a smarter approach: instead of relabeling every file after mounting, the kubelet can mount the volume with the -o context=<label> option. The kernel then applies the correct SELinux label to all inodes on that mount automatically, without any recursive traversal. This eliminates the startup delay and reduces filesystem wear.

For this to work, several conditions must be met:

  • The node’s filesystem and mount utilities must support the context mount option.
  • The Pod must expose a sufficient SELinux label (at minimum, the level field in seLinuxOptions).
  • The volume driver must opt in. For CSI drivers, that means setting spec.seLinuxMount: true in the CSIDriver object.
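The CSI opt-in from the last bullet is a one-line change on the `CSIDriver` object. A sketch, with a hypothetical driver name (the other `spec` fields shown are ordinary CSIDriver settings, not requirements of this feature):

```yaml
# Sketch: a CSIDriver object opting in to mount-level SELinux
# labeling. seLinuxMount: true tells the kubelet that this driver's
# volumes can be mounted with -o context=<label> instead of being
# recursively relabeled.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io   # hypothetical driver name
spec:
  seLinuxMount: true
  attachRequired: true
  podInfoOnMount: false
```

Drivers that leave `seLinuxMount` unset (or set it to `false`) keep the recursive relabeling behavior regardless of the feature gate.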

Phased Rollout of the Feature

The project carefully rolled out this optimization in two phases:

  • Phase 1: ReadWriteOncePod volumes — Controlled by the SELinuxMountReadWriteOncePod feature gate, this became enabled by default in v1.28 and reached General Availability (GA) in v1.36. Only single-access volumes benefited.
  • Phase 2: Broader coverage — The SELinuxMount feature gate (GA in v1.36) extends the optimization to all volume types. It is paired with a new Pod-level field: spec.securityContext.seLinuxChangePolicy, which lets administrators control how SELinux labels are applied. The default policy is MountOption (use mount-level labeling when possible) with a fallback to recursive relabeling when the mount option is unsupported. Users can also set the policy to Recursive to force the old behavior.
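The new Pod-level field from Phase 2 looks like this in practice (a sketch, assuming the v1.36 API shape described above; the Pod name and image are placeholders):

```yaml
# Sketch: forcing the legacy behavior with the Phase 2 field.
# Recursive makes the runtime walk and relabel every file, exactly
# as before; omitting the field yields the default MountOption
# policy, which uses -o context=<label> when the volume supports it.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-relabel-demo   # hypothetical name
spec:
  securityContext:
    seLinuxChangePolicy: Recursive
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```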

Implications for v1.37 and Migration Advice

With v1.37 expected to enable SELinuxMount by default, clusters that still rely on recursive relabeling may encounter subtle issues. The most common scenario: sharing a volume between a privileged Pod and an unprivileged Pod on the same node. Under the new model, the mount-level label applies uniformly to the entire volume; the kernel will not allow two different contexts on the same mount. If your workloads depend on that coexistence (e.g., a privileged sidecar and an unprivileged application reading/writing different parts of a single volume), you must take action.

Recommended steps:

  1. Audit your workloads: Identify any Pods that share volumes with different SELinux labels. Check for subPath usage—the old behavior is preserved for subPaths.
  2. Test in v1.36: If needed, disable the optimization by setting the --feature-gates=SELinuxMount=false flag on the kubelet and kube-apiserver. But ideally, adjust your volume sharing strategy instead.
  3. Use the seLinuxChangePolicy field: Set spec.securityContext.seLinuxChangePolicy: Recursive on Pods that require the legacy behavior. This overrides the default.
  4. Update CSI drivers: Ensure your CSI driver sets seLinuxMount: true to benefit from the optimization.
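For step 2, the gate can also be disabled through the kubelet's configuration file rather than a command-line flag. A sketch (the gate must be disabled consistently on every component that honors it for the opt-out to be complete):

```yaml
# Sketch: opting out of mount-level labeling via a
# KubeletConfiguration file (e.g. the file passed to --config).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SELinuxMount: false
```

Treat this as a temporary escape hatch for testing: once the gate graduates and locks to its default, the per-Pod `seLinuxChangePolicy: Recursive` field is the supported way to keep the legacy behavior.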

What This Means for Different Users

For clusters without SELinux—nothing changes. The kubelet skips the entire SELinux logic, so performance remains identical.

For clusters with SELinux but no volume sharing:

  • Expect faster Pod startup times, especially for large volumes.
  • No action required; the default policy (MountOption) works seamlessly.

For clusters sharing volumes between Pods with different labels:

  • If you use subPath, you are safe—recursive relabeling applies per subpath.
  • If you rely on full-volume sharing, you must either refactor to use subPaths or explicitly set seLinuxChangePolicy: Recursive on the Pods.
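The subPath refactor can be sketched as two Pods with different labels sharing one PVC through distinct subpaths (all names and labels here are hypothetical):

```yaml
# Sketch: two Pods with different SELinux levels share one PVC.
# Because each mounts a distinct subPath, the preserved
# per-subpath recursive relabeling keeps their labels separate.
apiVersion: v1
kind: Pod
metadata:
  name: writer                 # hypothetical name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c100,c200"
  containers:
    - name: writer
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: shared
          mountPath: /out
          subPath: writer-data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pvc  # hypothetical PVC
---
apiVersion: v1
kind: Pod
metadata:
  name: reader                 # hypothetical name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c300,c400"
  containers:
    - name: reader
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: shared
          mountPath: /in
          subPath: reader-data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pvc
```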

In conclusion, the GA of SELinuxMount in v1.36 is a significant performance improvement for the vast majority of workloads. The v1.37 default-enable is a natural progression but requires awareness and, for some, a migration plan. Audit your cluster now to ensure a smooth transition.