Container Hosts, Images, and Kubernetes Control

Containers and Kubernetes split responsibility across more layers than a simple VM model. The provider may operate the infrastructure, and in some offerings it may also operate the managed control plane or worker-node substrate. The customer still owns important security decisions around image hygiene, cluster configuration, workload policy, secrets handling, service exposure, and runtime privileges inside the cluster.

This makes container environments powerful but easy to misread. Teams often hear “managed Kubernetes” and conclude that cluster security is mostly a provider concern. That is too broad. A provider-managed control plane does not automatically secure customer images, namespaces, admission rules, pod privileges, network policies, or service accounts.

The layer model helps:

    flowchart TD
        A["Provider layer"] --> B["Cluster infrastructure and possibly control plane"]
        C["Customer layer"] --> D["Container images"]
        C --> E["Kubernetes policies and namespaces"]
        C --> F["Service accounts and workload exposure"]
        C --> G["Application code and runtime behavior"]

What to notice:

  • the provider may own more of the cluster foundation than in IaaS
  • the customer still owns many of the controls closest to workload behavior
  • container security is distributed across image, cluster, and workload layers

What Customers Still Own in Containers

Customer-owned control areas often include:

  • base image selection and scanning
  • image provenance and update cadence
  • pod privilege model and security context
  • namespace and network policy design
  • service-account scope
  • ingress and service exposure

Even in managed clusters, those controls remain customer-governed because they describe how the workloads behave inside the orchestrated environment.
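
Namespace and network-policy design is one of those customer-owned areas. As a hedged sketch (the `orders` namespace name is illustrative, not from the example above), a default-deny ingress policy might look like:

```yaml
# Illustrative default-deny policy; namespace and resource names are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orders
spec:
  podSelector: {}     # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress         # with no ingress rules listed, all inbound pod traffic is denied
```

Workloads that legitimately need traffic then get explicit allow policies layered on top, which keeps pod-to-pod trust deliberate rather than default.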

A Practical Kubernetes Baseline

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-api
    spec:
      selector:
        matchLabels:
          app: orders-api
      template:
        metadata:
          labels:
            app: orders-api
        spec:
          serviceAccountName: orders-api
          containers:
            - name: app
              image: registry.example.com/orders-api:1.8.4
              securityContext:
                runAsNonRoot: true
                readOnlyRootFilesystem: true

What this demonstrates:

  • workload security settings sit in customer-controlled configuration
  • image choice and pod privilege are not provider defaults
  • strong container security depends on customer policy inside the orchestrated layer
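
The `serviceAccountName` in the baseline points at an identity the customer also defines. One hedged sketch of a tightly scoped service account (assuming the same `orders-api` name) disables automatic token mounting so pods only receive API credentials when a workload explicitly opts in:

```yaml
# Illustrative sketch: a dedicated service account with token
# automounting disabled by default.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders-api
automountServiceAccountToken: false
```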

Why Managed Kubernetes Can Still Be Weak

Managed cluster infrastructure reduces some operational burden, especially around control-plane uptime and lower-level host maintenance. It does not automatically solve workload sprawl, risky images, cluster-admin overuse, or unrestricted pod-to-pod trust. Those remain customer-side architecture and governance problems.
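
Admission discipline is a concrete example of that customer-side governance. As an illustrative sketch (the `orders` namespace is hypothetical), the built-in Pod Security admission controller can be enabled per namespace with a label, rejecting privileged pods regardless of who manages the control plane:

```yaml
# Illustrative namespace enforcing the "restricted" Pod Security Standard.
apiVersion: v1
kind: Namespace
metadata:
  name: orders
  labels:
    pod-security.kubernetes.io/enforce: restricted
```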

Common Mistakes

  • assuming managed control-plane ownership means cluster security is mostly provider-owned
  • letting images age or drift without update and provenance review
  • using overly privileged pods or broad service accounts
  • running clusters with little namespace, network, or admission control discipline
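
The cluster-admin overuse in the list above is typically fixed with narrowly scoped RBAC. A hedged sketch, with hypothetical names, binds a workload's service account to only the permissions it needs:

```yaml
# Illustrative sketch: a namespace-scoped Role and binding instead of
# cluster-admin. All names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: orders-api-config-reader
  namespace: orders
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orders-api-config-reader
  namespace: orders
subjects:
  - kind: ServiceAccount
    name: orders-api
    namespace: orders
roleRef:
  kind: Role
  name: orders-api-config-reader
  apiGroup: rbac.authorization.k8s.io
```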

Design Review Question

A company uses a managed Kubernetes service and says most container security responsibility sits with the provider because the provider operates the cluster control plane. The workloads still use broad service accounts, permissive pod security settings, and inconsistent image hygiene. Is that a strong conclusion?

No. The stronger answer is that provider-managed cluster infrastructure helps at one layer, but the customer still owns image quality, cluster policy, workload privilege, and exposure design.

Revised on Thursday, April 23, 2026