On-disk files in a Container are ephemeral, which presents some problems for
non-trivial applications when running in Containers. First, when a Container
crashes, kubelet will restart it, but the files will be lost - the
Container starts with a clean state. Second, when running Containers together
in a Pod
it is often necessary to share files between those Containers. The
Kubernetes Volume
abstraction solves both of these problems.
Familiarity with Pods is suggested.
Docker also has a concept of volumes, though it is somewhat looser and less managed. In Docker, a volume is simply a directory on disk or in another Container. Lifetimes are not managed and until very recently there were only local-disk-backed volumes. Docker now provides volume drivers, but the functionality is very limited for now (e.g. as of Docker 1.7 only one volume driver is allowed per Container and there is no way to pass parameters to volumes).
A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it. Consequently, a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly than this, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
At its core, a volume is just a directory, possibly with some data in it, which is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.
To use a volume, a Pod specifies what volumes to provide for the Pod (the
.spec.volumes
field) and where to mount those into Containers (the
.spec.containers.volumeMounts
field).
A process in a Container sees a filesystem view composed of its Docker image and volumes. The Docker image is at the root of the filesystem hierarchy, and any volumes are mounted at the specified paths within the image. Volumes cannot mount onto other volumes or have hard links to other volumes. Each Container in the Pod must independently specify where to mount each volume.
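As an illustration of these two fields, here is a minimal sketch of a Pod that declares one emptyDir volume and mounts it into a single Container; the names, image, and paths are placeholders chosen for this example:

apiVersion: v1
kind: Pod
metadata:
  name: volume-fields-example
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    # Where the volume appears inside this Container.
    - name: scratch
      mountPath: /scratch
  volumes:
  # The volume itself, declared once per Pod.
  - name: scratch
    emptyDir: {}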
Kubernetes supports several types of Volumes:
awsElasticBlockStore
azureDisk
azureFile
cephfs
configMap
csi
downwardAPI
emptyDir
fc (fibre channel)
flocker
gcePersistentDisk
gitRepo
glusterfs
hostPath
iscsi
local
nfs
persistentVolumeClaim
projected
portworxVolume
quobyte
rbd
scaleIO
secret
storageos
vsphereVolume
We welcome additional contributions.
An awsElasticBlockStore
volume mounts an Amazon Web Services (AWS) EBS
Volume into your Pod. Unlike
emptyDir
, which is erased when a Pod is removed, the contents of an EBS
volume are preserved and the volume is merely unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be “handed off”
between Pods.
Important: You must create an EBS volume using aws ec2 create-volume or the AWS API before you can use it.
There are some restrictions when using an awsElasticBlockStore volume:
- the nodes on which Pods are running must be AWS EC2 instances
- those instances need to be in the same region and availability zone as the EBS volume
- EBS only supports a single EC2 instance mounting a volume

Before you can use an EBS volume with a Pod, you need to create it.
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume type are suitable for your use!)
apiVersion: v1
kind: Pod
metadata:
name: test-ebs
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-ebs
name: test-volume
volumes:
- name: test-volume
# This AWS EBS volume must already exist.
awsElasticBlockStore:
volumeID: <volume-id>
fsType: ext4
An azureDisk
is used to mount a Microsoft Azure Data Disk into a Pod.
More details can be found here.
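For orientation, a Pod using an azureDisk volume might look roughly like the sketch below; the disk name, URI, and storage account are placeholders and must refer to a data disk you have already created in Azure:

apiVersion: v1
kind: Pod
metadata:
  name: test-azure-disk
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-azure
      name: azure-volume
  volumes:
  - name: azure-volume
    azureDisk:
      # Placeholder values: the data disk must already exist in Azure.
      diskName: myAzureDataDisk
      diskURI: https://<storage-account>.blob.core.windows.net/vhds/myAzureDataDisk.vhd
      cachingMode: ReadWrite
      fsType: ext4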
An azureFile
is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0)
into a Pod.
More details can be found here.
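A Pod using an azureFile volume might be sketched as follows; azure-secret is a placeholder Secret holding the storage account name and key, and the share name is a placeholder for a share that already exists:

apiVersion: v1
kind: Pod
metadata:
  name: test-azure-file
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-azurefile
      name: azure-file-volume
  volumes:
  - name: azure-file-volume
    azureFile:
      # Placeholder Secret containing the storage account name and key.
      secretName: azure-secret
      shareName: myshare
      readOnly: false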
A cephfs
volume allows an existing CephFS volume to be
mounted into your Pod. Unlike emptyDir
, which is erased when a Pod is
removed, the contents of a cephfs
volume are preserved and the volume is merely
unmounted. This means that a CephFS volume can be pre-populated with data, and
that data can be “handed off” between Pods. CephFS can be mounted by multiple
writers simultaneously.
Important: You must have your own Ceph server running with the share exported before you can use it.
See the CephFS example for more details.
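As a rough sketch (the monitor addresses, path, user, and Secret name are placeholders for your own Ceph setup), a cephfs volume can be declared like this:

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt/cephfs
      name: cephfs-volume
  volumes:
  - name: cephfs-volume
    cephfs:
      # Placeholder Ceph monitor addresses and credentials.
      monitors:
      - 10.16.154.78:6789
      - 10.16.154.82:6789
      path: /
      user: admin
      secretRef:
        name: ceph-secret
      readOnly: true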
The configMap
resource
provides a way to inject configuration data into Pods.
The data stored in a ConfigMap
object can be referenced in a volume of type
configMap
and then consumed by containerized applications running in a Pod.
When referencing a configMap
object, you can simply provide its name in the
volume to reference it. You can also customize the path to use for a specific
entry in the ConfigMap.
For example, to mount the log-config
ConfigMap onto a Pod called configmap-pod
,
you might use the YAML below:
apiVersion: v1
kind: Pod
metadata:
name: configmap-pod
spec:
containers:
- name: test
image: busybox
volumeMounts:
- name: config-vol
mountPath: /etc/config
volumes:
- name: config-vol
configMap:
name: log-config
items:
- key: log_level
path: log_level
The log-config
ConfigMap is mounted as a volume, and all contents stored in
its log_level
entry are mounted into the Pod at path “/etc/config/log_level
”.
Note that this path is derived from the volume’s mountPath
and the path
keyed with log_level
.
Important: You must create a ConfigMap before you can use it.
Note: A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
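For reference, the log-config ConfigMap used above could be created with kubectl; the log_level value shown here is only an illustrative placeholder:

kubectl create configmap log-config --from-literal=log_level=DEBUG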
A downwardAPI
volume is used to make downward API data available to applications.
It mounts a directory and writes the requested data in plain text files.
Note: A Container using Downward API as a subPath volume mount will not receive Downward API updates.
See the downwardAPI
volume example for more details.
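A minimal sketch of a downwardAPI volume that exposes the Pod's labels as a file; the names, label, and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-pod
  labels:
    zone: us-east-coast
spec:
  containers:
  - name: client-container
    image: busybox
    command: [ "sh", "-c", "cat /etc/podinfo/labels; sleep 3600" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # Writes the Pod's labels to /etc/podinfo/labels.
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels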
An emptyDir
volume is first created when a Pod is assigned to a Node, and
exists as long as that Pod is running on that node. As the name says, it is
initially empty. Containers in the Pod can all read and write the same
files in the emptyDir
volume, though that volume can be mounted at the same
or different paths in each Container. When a Pod is removed from a node for
any reason, the data in the emptyDir
is deleted forever.
Note: a Container crashing does NOT remove a Pod from a node, so the data in an emptyDir
volume is safe across Container crashes.
Some uses for an emptyDir are:
- scratch space, such as for a disk-based merge sort
- checkpointing a long computation for recovery from crashes
- holding files that a content-manager Container fetches while a webserver Container serves the data
By default, emptyDir
volumes are stored on whatever medium is backing the
node - that might be disk or SSD or network storage, depending on your
environment. However, you can set the emptyDir.medium
field to "Memory"
to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead.
While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
node reboot and any files you write will count against your Container’s
memory limit.
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
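If you want the tmpfs-backed variant described above, the same example can be adjusted by setting the medium field; this is a sketch, with the same placeholder names as the example above:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd-memory
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      # Backed by tmpfs; contents count against the Container's memory limit.
      medium: Memory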
An fc
volume allows an existing fibre channel volume to be mounted in a Pod.
You can specify single or multiple target World Wide Names using the parameter
targetWWNs
in your volume configuration. If multiple WWNs are specified,
it is expected that those WWNs are from multi-path connections.
Important: You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
See the FC example for more details.
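A rough sketch of an fc volume follows; the WWNs and LUN are placeholders for targets already zoned and masked to your hosts:

apiVersion: v1
kind: Pod
metadata:
  name: fc-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /fc-disk
      name: fc-volume
  volumes:
  - name: fc-volume
    fc:
      # Placeholder WWNs and LUN; the LUN must already be visible to these targets.
      targetWWNs:
      - "500a0982991b8dc5"
      - "500a0982891b8dc5"
      lun: 2
      fsType: ext4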
Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
A flocker
volume allows a Flocker dataset to be mounted into a Pod. If the
dataset does not already exist in Flocker, it needs to be first created with the Flocker
CLI or by using the Flocker API. If the dataset already exists it will be
reattached by Flocker to the node that the Pod is scheduled to. This means data
can be “handed off” between Pods as required.
Important: You must have your own Flocker installation running before you can use it.
See the Flocker example for more details.
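A flocker volume can be referenced by dataset name, roughly as sketched below; my-flocker-dataset is a placeholder for a dataset known to your Flocker installation:

apiVersion: v1
kind: Pod
metadata:
  name: flocker-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /flocker-data
      name: flocker-volume
  volumes:
  - name: flocker-volume
    flocker:
      # Placeholder: the dataset must exist (or be created) in Flocker.
      datasetName: my-flocker-dataset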
A gcePersistentDisk
volume mounts a Google Compute Engine (GCE) Persistent
Disk into your Pod. Unlike
emptyDir
, which is erased when a Pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be “handed off” between Pods.
Important: You must create a PD using gcloud
or the GCE API or UI before you can use it.
There are some restrictions when using a gcePersistentDisk:
- the nodes on which Pods are running must be GCE VMs
- those VMs need to be in the same GCE project and zone as the PD
A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.
Before you can use a GCE PD with a Pod, you need to create it.
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
# This GCE PD must already exist.
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
Kubernetes v1.10
beta
The Regional Persistent Disks feature allows the creation of Persistent Disks that are available in two zones within the same region. In order to use this feature, the volume must be provisioned as a PersistentVolume; referencing the volume directly from a pod is not supported.
Dynamic provisioning is possible using a StorageClass for GCE PD. Before creating a PersistentVolume, you must create the PD:
gcloud beta compute disks create --size=500GB my-data-disk \
    --region us-central1 \
    --replica-zones us-central1-a,us-central1-b
Example PersistentVolume spec:
apiVersion: v1
kind: PersistentVolume
metadata:
name: test-volume
labels:
failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
capacity:
storage: 400Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
A gitRepo
volume is an example of what can be done as a volume plugin. It
mounts an empty directory and clones a git repository into it for your Pod to
use. In the future, such volumes may be moved to an even more decoupled model,
rather than extending the Kubernetes API for every such use case.
Here is an example of a gitRepo volume:
apiVersion: v1
kind: Pod
metadata:
name: server
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /mypath
name: git-volume
volumes:
- name: git-volume
gitRepo:
repository: "git@somewhere:me/my-git-repository.git"
revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
A glusterfs
volume allows a Glusterfs (an open
source networked filesystem) volume to be mounted into your Pod. Unlike
emptyDir
, which is erased when a Pod is removed, the contents of a
glusterfs
volume are preserved and the volume is merely unmounted. This
means that a glusterfs volume can be pre-populated with data, and that data can
be “handed off” between Pods. GlusterFS can be mounted by multiple writers
simultaneously.
Important: You must have your own GlusterFS installation running before you can use it.
See the GlusterFS example for more details.
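A rough glusterfs example; glusterfs-cluster is a placeholder Endpoints object listing your Gluster servers, and kube_vol a placeholder GlusterFS volume name:

apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt/glusterfs
      name: glusterfs-volume
  volumes:
  - name: glusterfs-volume
    glusterfs:
      # Placeholder Endpoints object and GlusterFS volume name.
      endpoints: glusterfs-cluster
      path: kube_vol
      readOnly: true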
A hostPath
volume mounts a file or directory from the host node’s filesystem
into your Pod. This is not something that most Pods will need, but it offers a
powerful escape hatch for some applications.
For example, some uses for a hostPath are:
- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a Container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as

In addition to the required path property, the user can optionally specify a type for a hostPath volume.

The supported values for field type are:
Value | Behavior
---|---
 | Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume.
DirectoryOrCreate | If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet.
Directory | A directory must exist at the given path
FileOrCreate | If nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with Kubelet.
File | A file must exist at the given path
Socket | A UNIX socket must exist at the given path
CharDevice | A character device must exist at the given path
BlockDevice | A block device must exist at the given path
Watch out when using this type of volume, because:
- Pods with identical configuration (such as created from a podTemplate) may behave differently on different nodes due to different files on the nodes
- when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a hostPath
- the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a hostPath volume

Example Pod:

apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
An iscsi
volume allows an existing iSCSI (SCSI over IP) volume to be mounted
into your Pod. Unlike emptyDir
, which is erased when a Pod is removed, the
contents of an iscsi
volume are preserved and the volume is merely
unmounted. This means that an iscsi volume can be pre-populated with data, and
that data can be “handed off” between Pods.
Important: You must have your own iSCSI server running with the volume created before you can use it.
A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
See the iSCSI example for more details.
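A sketch of an iscsi volume; the target portal, IQN, and LUN are placeholders for an existing iSCSI target:

apiVersion: v1
kind: Pod
metadata:
  name: iscsi-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt/iscsi
      name: iscsi-volume
  volumes:
  - name: iscsi-volume
    iscsi:
      # Placeholder iSCSI target details; the volume must already exist on the target.
      targetPortal: 10.0.2.15:3260
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
      lun: 0
      fsType: ext4
      readOnly: true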
Kubernetes v1.10
beta
Note: The alpha PersistentVolume NodeAffinity annotation has been deprecated and will be removed in a future release. Existing PersistentVolumes using this annotation must be updated by the user to use the new PersistentVolumeNodeAffinity
field.
A local
volume represents a mounted local storage device such as a disk,
partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath
volumes, local volumes can be used in a durable and
portable manner without manually scheduling Pods to nodes, as the system is aware
of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.
The following is an example PersistentVolume spec using a local
volume and
nodeAffinity
:
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 100Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node
PersistentVolume nodeAffinity
is required when using local volumes. It enables
the Kubernetes scheduler to correctly schedule Pods using local volumes to the
correct node.
PersistentVolume volumeMode
can now be set to “Block” (instead of the default
value “Filesystem”) to expose the local volume as a raw block device. The
volumeMode
field requires BlockVolume
Alpha feature gate to be enabled.
When using local volumes, it is recommended to create a StorageClass with
volumeBindingMode
set to WaitForFirstConsumer
. See the
example. Delaying volume binding ensures
that the PersistentVolumeClaim binding decision will also be evaluated with any
other node constraints the Pod may have, such as node resource requirements, node
selectors, Pod affinity, and Pod anti-affinity.
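For illustration, the local-storage class referenced in the PersistentVolume above could be defined with delayed binding like this, a sketch using the static no-provisioner:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
# No dynamic provisioning; PersistentVolumes are created statically.
provisioner: kubernetes.io/no-provisioner
# Delay binding until a Pod using the claim is scheduled.
volumeBindingMode: WaitForFirstConsumer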
An external static provisioner can be run separately for improved management of the local volume lifecycle. Note that this provisioner does not support dynamic provisioning yet. For an example on how to run an external local provisioner, see the local volume provisioner user guide.
Note: The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle.
An nfs
volume allows an existing NFS (Network File System) share to be
mounted into your Pod. Unlike emptyDir
, which is erased when a Pod is
removed, the contents of an nfs
volume are preserved and the volume is merely
unmounted. This means that an NFS volume can be pre-populated with data, and
that data can be “handed off” between Pods. NFS can be mounted by multiple
writers simultaneously.
Important: You must have your own NFS server running with the share exported before you can use it.
See the NFS example for more details.
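A sketch of an nfs volume; the server address and exported path are placeholders for your own NFS export:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt/nfs
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      # Placeholder NFS server and export path.
      server: nfs-server.example.com
      path: /exported/path
      readOnly: false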
A persistentVolumeClaim
volume is used to mount a
PersistentVolume into a Pod. PersistentVolumes are a
way for users to “claim” durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
See the PersistentVolumes example for more details.
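In a Pod spec, a persistentVolumeClaim volume simply names an existing claim; my-pvc below is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      # Placeholder: the PersistentVolumeClaim must already exist in the Pod's namespace.
      claimName: my-pvc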
A projected
volume maps several existing volume sources into the same directory.
Currently, the following types of volume sources can be projected:
secret
downwardAPI
configMap
serviceAccountToken
All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
The projection of service account tokens is a feature introduced in Kubernetes
1.11. To enable this feature, you need to explicitly set the TokenRequestProjection
feature gate to
True.

Below are two example Pods: the first projects a Secret, downward API fields, and a ConfigMap into one directory; the second projects two Secrets and sets a non-default file mode for one entry.
apiVersion: v1
kind: Pod
metadata:
name: volume-test
spec:
containers:
- name: container-test
image: busybox
volumeMounts:
- name: all-in-one
mountPath: "/projected-volume"
readOnly: true
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: mysecret
items:
- key: username
path: my-group/my-username
- downwardAPI:
items:
- path: "labels"
fieldRef:
fieldPath: metadata.labels
- path: "cpu_limit"
resourceFieldRef:
containerName: container-test
resource: limits.cpu
- configMap:
name: myconfigmap
items:
- key: config
path: my-group/my-config
apiVersion: v1
kind: Pod
metadata:
name: volume-test
spec:
containers:
- name: container-test
image: busybox
volumeMounts:
- name: all-in-one
mountPath: "/projected-volume"
readOnly: true
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: mysecret
items:
- key: username
path: my-group/my-username
- secret:
name: mysecret2
items:
- key: password
path: my-group/my-password
mode: 511
Each projected volume source is listed in the spec under sources. The parameters are nearly the same with two exceptions:
- For secrets, the secretName field has been changed to name to be consistent with ConfigMap naming.
- defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection.

When the TokenRequestProjection feature is enabled, you can inject the token for the current service account into a Pod at a specified path. Below is an example:
apiVersion: v1
kind: Pod
metadata:
name: sa-token-test
spec:
containers:
- name: container-test
image: busybox
volumeMounts:
- name: token-vol
mountPath: "/service-account"
readOnly: true
volumes:
- name: token-vol
projected:
sources:
- serviceAccountToken:
audience: api
expirationSeconds: 3600
path: token
The example Pod has a projected volume containing the injected service account
token. This token can be used by Pod containers to access the Kubernetes API
server, for example. The audience
field contains the intended audience of the
token. A recipient of the token must identify itself with an identifier specified
in the audience of the token, and otherwise should reject the token. This field
is optional and it defaults to the identifier of the API server.
The expirationSeconds
is the expected duration of validity of the service account
token. It defaults to 1 hour and must be at least 10 minutes (600 seconds).
The path
field specifies a relative path to the mount point of the projected
volume.
Note: A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
A portworxVolume
is an elastic block storage layer that runs hyperconverged with
Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities,
and aggregates capacity across multiple servers. Portworx runs in-guest in virtual
machines or on bare metal Linux nodes.
A portworxVolume
can be dynamically created through Kubernetes or it can also
be pre-provisioned and referenced inside a Kubernetes Pod.
Here is an example Pod referencing a pre-provisioned PortworxVolume:
apiVersion: v1
kind: Pod
metadata:
name: test-portworx-volume-pod
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /mnt
name: pxvol
volumes:
- name: pxvol
# This Portworx volume must already exist.
portworxVolume:
volumeID: "pxvol"
fsType: "<fs-type>"
Important: Make sure you have an existing PortworxVolume with name pxvol
before using it in the Pod.
More details and examples can be found here.
A quobyte
volume allows an existing Quobyte volume to
be mounted into your Pod.
Important: You must have your own Quobyte setup running with the volumes created before you can use it.
See the Quobyte example for more details.
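A rough sketch of a quobyte volume; the registry address and volume name are placeholders for your Quobyte deployment:

apiVersion: v1
kind: Pod
metadata:
  name: quobyte-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt/quobyte
      name: quobyte-volume
  volumes:
  - name: quobyte-volume
    quobyte:
      # Placeholder registry (host:port) and pre-created Quobyte volume.
      registry: registry.example.com:7861
      volume: testVolume
      readOnly: false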
An rbd
volume allows a Rados Block
Device volume to be mounted into your
Pod. Unlike emptyDir
, which is erased when a Pod is removed, the contents of
an rbd
volume are preserved and the volume is merely unmounted. This
means that an RBD volume can be pre-populated with data, and that data can
be “handed off” between Pods.
Important: You must have your own Ceph installation running before you can use RBD.
A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
See the RBD example for more details.
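A sketch of an rbd volume; the monitors, pool, image, and Secret are placeholders for your own Ceph cluster:

apiVersion: v1
kind: Pod
metadata:
  name: rbd-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt/rbd
      name: rbd-volume
  volumes:
  - name: rbd-volume
    rbd:
      # Placeholder Ceph monitors, pool, and image; the image must already exist.
      monitors:
      - 10.16.154.78:6789
      pool: kube
      image: foo
      user: admin
      secretRef:
        name: ceph-secret
      fsType: ext4
      readOnly: true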
ScaleIO is a software-based storage platform that can use existing hardware to
create clusters of scalable shared block networked storage. The scaleIO
volume
plugin allows deployed Pods to access existing ScaleIO
volumes (or it can dynamically provision new volumes for persistent volume claims, see
ScaleIO Persistent Volumes).
Important: You must have an existing ScaleIO cluster already setup and running with the volumes created before you can use them.
The following is an example Pod configuration with ScaleIO:
apiVersion: v1
kind: Pod
metadata:
name: pod-0
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: pod-0
volumeMounts:
- mountPath: /test-pd
name: vol-0
volumes:
- name: vol-0
scaleIO:
gateway: https://localhost:443/api
system: scaleio
protectionDomain: sd0
storagePool: sp1
volumeName: vol-0
secretRef:
name: sio-secret
fsType: xfs
For further detail, please see the ScaleIO examples.
A secret
volume is used to pass sensitive information, such as passwords, to
Pods. You can store secrets in the Kubernetes API and mount them as files for
use by Pods without coupling to Kubernetes directly. secret
volumes are
backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.
Important: You must create a secret in the Kubernetes API before you can use it.
Note: A Container using a Secret as a subPath volume mount will not receive Secret updates.
Secrets are described in more detail here.
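A minimal sketch of a secret volume; mysecret is a placeholder for a Secret that already exists in the Pod's namespace:

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /etc/secret-volume
      name: secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      # Placeholder: the Secret must be created in the API before the Pod can use it.
      secretName: mysecret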
A storageos
volume allows an existing StorageOS
volume to be mounted into your Pod.
StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
At its core, StorageOS provides block storage to Containers, accessible via a file system.
The StorageOS Container requires 64-bit Linux and has no additional dependencies. A free developer license is available.
Important: You must run the StorageOS Container on each node that wants to access StorageOS volumes or that will contribute storage capacity to the pool. For installation instructions, consult the StorageOS documentation.
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
role: master
name: test-storageos-redis
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
volumeMounts:
- mountPath: /redis-master-data
name: redis-data
volumes:
- name: redis-data
storageos:
# The `redis-vol01` volume must already exist within StorageOS in the `default` namespace.
volumeName: redis-vol01
fsType: ext4
For more information including Dynamic Provisioning and Persistent Volume Claims, please see the StorageOS examples.
Prerequisite: Kubernetes with the vSphere Cloud Provider configured. For cloud provider configuration, please refer to the vSphere getting started guide.
A vsphereVolume
is used to mount a vSphere VMDK Volume into your Pod. The contents
of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastore.
Important: You must create a VMDK using one of the following methods before using it with a Pod.
Choose one of the following methods to create a VMDK.
First ssh into ESX, then use the following command to create a VMDK:
vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
Use the following command to create a VMDK:
vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
apiVersion: v1
kind: Pod
metadata:
name: test-vmdk
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-vmdk
name: test-volume
volumes:
- name: test-volume
# This VMDK volume must already exist.
vsphereVolume:
volumePath: "[DatastoreName] volumes/myDisk"
fsType: ext4
More examples can be found here.
Sometimes, it is useful to share one volume for multiple uses in a single Pod. The volumeMounts.subPath
property can be used to specify a sub-path inside the referenced volume instead of its root.
Here is an example of a Pod with a LAMP stack (Linux, Apache, MySQL, PHP) using a single, shared volume.
The HTML contents are mapped to its html
folder, and the databases will be stored in its mysql
folder:
apiVersion: v1
kind: Pod
metadata:
name: my-lamp-site
spec:
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "rootpasswd"
volumeMounts:
- mountPath: /var/lib/mysql
name: site-data
subPath: mysql
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html
name: site-data
subPath: html
volumes:
- name: site-data
persistentVolumeClaim:
claimName: my-lamp-site-data
Kubernetes v1.11
alpha
subPath
directory names can also be constructed from Downward API environment variables.
Before you use this feature, you must enable the VolumeSubpathEnvExpansion
feature gate.
In this example, a Pod uses subPath
to create a directory pod1
within the hostPath volume /var/log/pods
, using the pod name from the Downward API. The host directory /var/log/pods/pod1
is mounted at /logs
in the container.
apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: container1
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
image: busybox
command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
volumeMounts:
- name: workdir1
mountPath: /logs
subPath: $(POD_NAME)
restartPolicy: Never
volumes:
- name: workdir1
hostPath:
path: /var/log/pods
The storage media (Disk, SSD, etc.) of an emptyDir
volume is determined by the
medium of the filesystem holding the kubelet root dir (typically
/var/lib/kubelet
). There is no limit on how much space an emptyDir
or
hostPath
volume can consume, and no isolation between Containers or between
Pods.
In the future, we expect that emptyDir
and hostPath
volumes will be able to
request a certain amount of space using a resource
specification, and to select the type of media to use, for clusters that have
several media types.
The Out-of-tree volume plugins include the Container Storage Interface (CSI
)
and FlexVolume
. They enable storage vendors to create custom storage plugins
without adding them to the Kubernetes repository.
Before the introduction of CSI
and FlexVolume
, all volume plugins (like
volume types listed above) were “in-tree” meaning they were built, linked,
compiled, and shipped with the core Kubernetes binaries and extend the core
Kubernetes API. This meant that adding a new storage system to Kubernetes (a
volume plugin) required checking code into the core Kubernetes code repository.
Both CSI
and FlexVolume
allow volume plugins to be developed independent of
the Kubernetes code base, and deployed (installed) on Kubernetes clusters as
extensions.
For storage vendors looking to create an out-of-tree volume plugin, please refer to this FAQ.
Kubernetes v1.10
beta
Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
Please read the CSI design proposal for more information.
CSI support was introduced as alpha in Kubernetes v1.9 and moved to beta in Kubernetes v1.10.
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users
may use the csi
volume type to attach, mount, etc. the volumes exposed by the
CSI driver.
The csi
volume type does not support direct reference from a Pod and may only be
referenced in a Pod via a PersistentVolumeClaim
object.
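To make the shape of such a PersistentVolume concrete, here is a rough sketch using the fields described below; the driver name and volume handle are placeholders that depend entirely on the CSI driver you have deployed:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    # Placeholder driver name and volume handle reported by your CSI driver.
    driver: csi-driver.example.com
    volumeHandle: existing-volume-id
    fsType: ext4
    readOnly: false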
The following fields are available to storage administrators to configure a CSI persistent volume:
- driver: A string value that specifies the name of the volume driver to use. This value must correspond to the value returned in the GetPluginInfoResponse by the CSI driver as defined in the CSI spec. It is used by Kubernetes to identify which CSI driver to call out to, and by CSI driver components to identify which PV objects belong to the CSI driver.
- volumeHandle: A string value that uniquely identifies the volume. This value must correspond to the value returned in the volume.id field of the CreateVolumeResponse by the CSI driver as defined in the CSI spec. The value is passed as volume_id on all calls to the CSI volume driver when referencing the volume.
- readOnly: An optional boolean value indicating whether the volume is to be "ControllerPublished" (attached) as read only. Default is false. This value is passed to the CSI driver via the readonly field in the ControllerPublishVolumeRequest.
- fsType: If the PV's VolumeMode is Filesystem, then this field may be used to specify the filesystem that should be used to mount the volume. If the volume has not been formatted and formatting is supported, this value will be used to format the volume. If a value is not specified, ext4 is assumed. This value is passed to the CSI driver via the VolumeCapability field of ControllerPublishVolumeRequest, NodeStageVolumeRequest, and NodePublishVolumeRequest.
- volumeAttributes: A map of string to string that specifies static properties of a volume. This map must correspond to the map returned in the volume.attributes field of the CreateVolumeResponse by the CSI driver as defined in the CSI spec. The map is passed to the CSI driver via the volume_attributes field in the ControllerPublishVolumeRequest, NodeStageVolumeRequest, and NodePublishVolumeRequest.
- controllerPublishSecretRef: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.
- nodeStageSecretRef: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume call. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.
- nodePublishSecretRef: A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.

Kubernetes v1.11
alpha
Starting with version 1.11, CSI introduced support for raw block volumes, which relies on the raw block volume feature that was introduced in a previous version of Kubernetes. This feature will make it possible for vendors with external CSI drivers to implement raw block volumes support in Kubernetes workloads.
CSI block volume support is feature-gated and turned off by default. To run CSI with block volume support enabled, a cluster administrator must enable the feature for each Kubernetes component using the following feature gate flags:
--feature-gates=BlockVolume=true,CSIBlockVolume=true
Learn how to setup your PV/PVC with raw block volume support.
FlexVolume
is an out-of-tree plugin interface that has existed in Kubernetes
since version 1.2 (before CSI). It uses an exec-based model to interface with
drivers. FlexVolume driver binaries must be installed in a pre-defined volume
plugin path on each node (and in some cases master).
Pods interact with FlexVolume drivers through the flexVolume
in-tree plugin.
More details can be found here.
Kubernetes v1.10
beta
Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
If the “MountPropagation
” feature is disabled or a Pod does not explicitly
specify specific mount propagation, volume mounts in the Pod’s Containers are
not propagated. That is, Containers run with private
mount propagation as
described in the Linux kernel documentation.
Mount propagation of a volume is controlled by the mountPropagation
field in Container.volumeMounts.
Its values are:
- None - This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the Container will be visible on the host. This is the default mode. This mode is equal to private mount propagation as described in the Linux kernel documentation.
- HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories. In other words, if the host mounts anything inside the volume mount, the Container will see it mounted there. Similarly, if any Pod with Bidirectional mount propagation to the same volume mounts anything there, the Container with HostToContainer mount propagation will see it. This mode is equal to rslave mount propagation as described in the Linux kernel documentation.
- Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume. A typical use case for this mode is a Pod with a FlexVolume or CSI driver, or a Pod that needs to mount something on the host using a hostPath volume. This mode is equal to rshared mount propagation as described in the Linux kernel documentation.
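For example, a volumeMounts entry requesting HostToContainer propagation for a hostPath volume might be sketched as follows; the names and paths are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-example
spec:
  containers:
  - name: test
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt/host
      # Receive mounts made by the host under /mnt after the Pod starts.
      mountPropagation: HostToContainer
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt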
Caution: Bidirectional
mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged Containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by Containers in Pods must be destroyed (unmounted) by the Containers on termination.
Before mount propagation can work properly on some deployments (CoreOS, RedHat/CentOS, Ubuntu), mount share must be configured correctly in Docker, as shown below.
Edit your Docker’s systemd
service file. Set MountFlags
as follows:
MountFlags=shared
Or, remove MountFlags=slave
if present. Then restart the Docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker