If you’re new to container orchestration, this Kubernetes tutorial will get you from confusion to confidence. Kubernetes (often called k8s) can feel like a steep climb at first — I’ve helped teams through that fog dozens of times — but once you grasp clusters, pods, and deployments, things click. This guide covers practical steps, real-world examples, and the commands you’ll use day-to-day so you can run apps reliably in production.
Why Kubernetes? Quick overview
Kubernetes solves the messy parts of running containers at scale: service discovery, scaling, self-healing, and rolling updates. In plain terms, it makes containerized apps behave like a reliable service, not a fragile experiment.
Core concepts at a glance
- Cluster: A group of machines running Kubernetes.
- Node: A worker machine (VM or physical).
- Pod: The smallest deployable unit — one or more containers.
- Deployment: Manages pods and updates.
- Service: Stable network endpoint for pods.
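To make the pod concept concrete, here is about the smallest manifest you can write: a single-container pod. Names and labels are illustrative, not from any standard setup.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-nginx        # illustrative name
  labels:
    app: hello
spec:
  containers:
    - name: nginx
      image: nginx:stable  # one container; a pod may hold more
```

In practice you rarely create bare pods like this; you let a deployment manage them, as the next sections show.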
Get set up: local environment
Start small. I recommend using a local cluster for learning: minikube, kind, or Docker Desktop’s Kubernetes. They all let you practice without cloud costs.
Install kubectl (client)
kubectl is your primary tool. Install it from the official docs and verify with kubectl version --client. See the official guide: Kubernetes tools documentation.
Create a local cluster (example with kind)
- Install Docker and kind.
- Run: kind create cluster.
- Confirm nodes: kubectl get nodes.
First app: Deploying a sample service
Let’s deploy a simple nginx app. Here’s the minimal flow I use in tutorials and workshops.
- kubectl create deployment nginx --image=nginx
- kubectl expose deployment nginx --port=80 --type=NodePort
- Check pods: kubectl get pods and access via port-forward or NodePort.
This shows the Kubernetes loop: you declare desired state, Kubernetes makes reality match it, and it keeps correcting drift.
Key kubectl commands you’ll use
- kubectl get pods,svc,deploy — inspect resources
- kubectl describe pod <name> — debug pod issues
- kubectl logs <pod> — view container logs
- kubectl apply -f file.yaml — declarative config
- kubectl rollout status deployment/nginx — watch updates
YAML basics and declarative deployments
Pods and deployments are defined in YAML. Keep manifests small and version-controlled. Example snippet for a deployment manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
```
Scaling, updates, and self-healing
Scaling is simple: kubectl scale deployment nginx --replicas=5. For zero-downtime updates, use rolling updates with deployments. If a node fails, pods are rescheduled — that’s the self-healing part.
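If you want finer control over how a rolling update proceeds, the deployment’s update strategy can be tuned in the manifest. The values below are illustrative, not requirements:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most one extra pod during the update
      maxUnavailable: 0  # never drop below the desired replica count
```

With maxUnavailable set to 0, Kubernetes only removes an old pod once its replacement is ready, which is how zero-downtime updates are usually achieved.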
Service types and networking
Services expose pods. Common types:
- ClusterIP: internal-only (default)
- NodePort: simple external access through host ports
- LoadBalancer: cloud provider load balancer
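The NodePort service from the earlier kubectl expose step can also be written declaratively. This is a sketch; the nodePort value is illustrative and must fall in the cluster’s node port range (30000–32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx        # must match the pod labels from the deployment
  ports:
    - port: 80        # service port inside the cluster
      targetPort: 80  # container port the traffic is forwarded to
      nodePort: 30080 # illustrative; omit to let Kubernetes pick one
```

Changing type to ClusterIP (and dropping nodePort) gives you the internal-only default; changing it to LoadBalancer asks your cloud provider for an external load balancer.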
Storage and Config
Use ConfigMaps and Secrets for configuration. For persistent data, use PersistentVolumeClaims and StorageClasses. These let stateful apps survive pod restarts.
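As a sketch of both ideas, here is a ConfigMap holding one setting and a PersistentVolumeClaim requesting durable storage. The names, key, and storage class are illustrative and depend on your cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # illustrative name
data:
  LOG_LEVEL: "info"           # consumed by pods as env vars or mounted files
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # single-node read-write access
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard  # must match a StorageClass in your cluster
```

A pod references these by name, so the data outlives any individual pod restart.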
Helm: package manager for Kubernetes
Helm simplifies deploying complex apps via charts. I often install databases, ingress controllers, or observability stacks with Helm. Example:
- helm repo add bitnami https://charts.bitnami.com/bitnami
- helm install my-mariadb bitnami/mariadb
Helm speeds repeatable installs and upgrades.
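Charts are customized through values files. A minimal override for the MariaDB example might look like the snippet below; the exact keys come from the chart’s own values.yaml, so treat these as assumptions to verify against the Bitnami chart documentation:

```yaml
# values.yaml — passed with: helm install my-mariadb bitnami/mariadb -f values.yaml
auth:
  rootPassword: change-me  # illustrative; use a secrets manager in practice
primary:
  persistence:
    size: 2Gi              # illustrative storage request
```

Keeping values files in version control is what makes Helm installs repeatable across environments.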
Comparison: Kubernetes vs alternatives
| Feature | Kubernetes | Docker Swarm | HashiCorp Nomad |
|---|---|---|---|
| Scalability | High | Medium | High |
| Complexity | High | Low | Medium |
| Ecosystem | Very large | Smaller | Growing |
Observability and logging
Don’t skip metrics and logs. A common stack: Prometheus for metrics, Grafana for dashboards, and an ELK/EFK stack for logs. I’ve seen teams catch production issues fast once dashboards are in place.
Security basics
- Use RBAC to limit access.
- Run containers with least privilege.
- Scan images and sign them where possible.
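To make the RBAC point concrete, here is a namespaced read-only role and its binding. The user name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]                  # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                       # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting narrow, namespaced roles like this — instead of cluster-admin — is the least-privilege habit worth building early.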
CI/CD and GitOps patterns
Integrate Kubernetes into CI/CD pipelines. For GitOps, tools like Argo CD or Flux sync manifests from Git to clusters automatically — I tend to recommend GitOps for repeatability.
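As a sketch of the GitOps pattern, an Argo CD Application resource points the cluster at a Git repository and keeps them in sync. The repo URL, path, and names below are illustrative assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # illustrative
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/manifests  # illustrative repo
    targetRevision: main
    path: k8s                  # directory of manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true              # delete resources removed from Git
      selfHeal: true           # revert manual drift back to Git state
```

With automated sync enabled, a merged pull request is all it takes to change the cluster — the repeatability I recommend GitOps for.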
Troubleshooting checklist
- Check pod status: kubectl get pods
- Inspect events: kubectl get events --sort-by=.metadata.creationTimestamp
- Look at logs and describe resources for errors
Resources and further reading
Official documentation and background reading are essential. The official Kubernetes docs are the best reference: Kubernetes documentation. For concise historical context and broader project info, see the project page on Wikipedia. For Cloud Native governance and ecosystem details, the Cloud Native Computing Foundation provides useful resources.
Practical next steps (what I’d do if I were you)
- Set up a local cluster with kind or minikube.
- Deploy a sample app and practice scaling and rolling updates.
- Add monitoring (Prometheus/Grafana) and logging.
- Try Helm to package an app.
- Experiment with GitOps using Argo CD or Flux.
Links cited above:
Kubernetes documentation,
Kubernetes on Wikipedia, and
CNCF project page.
Frequently Asked Questions
What is Kubernetes used for?
Kubernetes is used to automate deploying, scaling, and managing containerized applications. It provides features like service discovery, load balancing, self-healing, and automated rollouts.
How do I start learning Kubernetes?
Begin with a local cluster (minikube or kind), install kubectl, deploy a simple app, and practice scaling and rolling updates. Use the official docs for reference.
What is the difference between a pod and a deployment?
A pod is the smallest running unit (one or more containers). A deployment manages pods and handles updates, scaling, and rollbacks declaratively.
When should I use Helm?
Use Helm to package and manage complex applications and repeatable installs. Helm charts simplify upgrades and sharing setups across environments.
Do I need to know Docker before learning Kubernetes?
You should know container basics (images, containers, Dockerfile) because Kubernetes orchestrates containers, but deep Docker expertise isn’t required to start learning Kubernetes.