- Q.51 What is the role of a Kubernetes admission controller, and how can you customize it?
- Q.52 Discuss the use of Kubernetes custom metrics for autoscaling.
- Q.53 How do you manage multi-tenancy in a large Kubernetes cluster?
- Q.54 What is the difference between a Kubernetes job and a Kubernetes CronJob?
- Q.55 How can you achieve high availability for etcd in a Kubernetes control plane?
Q.51 What is the role of a Kubernetes admission controller, and how can you customize it?
- Admission Controllers: Intercept API requests after authentication and authorization but before an object is persisted. They can validate or mutate objects according to configured rules.
- Built-in Controllers: Kubernetes ships with controllers such as LimitRanger, ResourceQuota, and NamespaceLifecycle. (PodSecurityPolicy, once a common example, was removed in v1.25 and replaced by the built-in PodSecurity admission controller.)
- Custom Admission Controllers: You can register validating or mutating admission webhooks that run your own logic to enforce policies, inject sidecars, or modify submitted objects.
Use Cases:
- Policy enforcement: Require labels, resource limits, or conform to specific standards.
- Configuration defaults: Automatically add sidecar containers or set annotations on incoming Pods.
- External integrations: Validate against external systems for security or compliance checks.
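As a minimal sketch of the webhook route, a ValidatingWebhookConfiguration that sends every Pod CREATE request to an external service for a policy check might look like this (the service name, namespace, path, and caBundle are placeholders you would replace with your own):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-labels
webhooks:
- name: require-labels.example.com      # placeholder webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                   # reject requests if the webhook is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      name: policy-webhook              # assumption: your webhook Service
      namespace: policy-system          # assumption: its namespace
      path: /validate
    caBundle: <base64-encoded-CA-cert>  # CA that signs the webhook's TLS cert
```

The webhook service itself is a small HTTPS server that receives an AdmissionReview object and responds with allowed: true or false.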
Q.52 Discuss the use of Kubernetes custom metrics for autoscaling.
- Horizontal Pod Autoscaler (HPA): Typically scales Pods based on built-in metrics like CPU and memory. Kubernetes’ Metrics Server collects these.
- Custom Metrics API: Deploy a metrics adapter (for example, the Prometheus Adapter) that implements the custom.metrics.k8s.io or external.metrics.k8s.io API and exposes your own metrics (e.g., requests per second, queue depth).
- Custom Metrics in HPA: The HPA can then base scaling decisions on your custom metrics for more fine-tuned, application-specific autoscaling behavior.
Use Cases:
- Application-specific metrics: Scale based on business-relevant data that CPU or memory don’t reflect.
- External systems: Scale pods based on metrics from message queues, databases, or other services outside Kubernetes.
Example:
- Custom metric tracking the number of messages in a Kafka queue. The HPA scales out if the queue depth exceeds a threshold.
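A hedged sketch of that HPA with the autoscaling/v2 API, assuming a metrics adapter (such as the Prometheus Adapter or KEDA) already exposes consumer-group lag as an External metric; the metric name, Deployment name, and topic label are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer                # assumption: the Deployment consuming the queue
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: kafka_consumergroup_lag   # assumption: exposed by your metrics adapter
        selector:
          matchLabels:
            topic: orders               # hypothetical topic label
      target:
        type: AverageValue
        averageValue: "100"             # scale out when average lag per Pod exceeds 100 messages
```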
Q.53 How do you manage multi-tenancy in a large Kubernetes cluster?
There are two primary approaches to multi-tenancy in Kubernetes:
- Namespace-based isolation (Soft Multi-tenancy): Separate tenants into different namespaces and enforce security and resource usage limits using:
  - Role-Based Access Control (RBAC): Control which users and service accounts can do what within each namespace.
  - Resource Quotas: Set limits on CPU, memory, and storage per namespace.
  - Network Policies: Restrict network traffic between namespaces.
- Cluster-based isolation (Hard Multi-tenancy): Each tenant gets a dedicated Kubernetes cluster. Pros: Strong isolation, avoids noisy neighbor problems. Cons: Increased management overhead.
Use Cases:
- SaaS Providers: Often use namespace-based isolation to run multiple clients on shared infrastructure.
- Highly Regulated Environments: Cluster-based isolation may be mandated if strong security boundaries are required.
Example:
- In namespace-based multi-tenancy, define two namespaces, “tenant-a” and “tenant-b,” and isolate resources and permissions using RBAC, Resource Quotas, and Network Policies.
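As an illustrative sketch for tenant-a (the quota values are arbitrary), a ResourceQuota plus a NetworkPolicy that blocks ingress from other namespaces could look like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "10"
---
# Allow ingress only from Pods in the same namespace; deny everything else.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-isolate
  namespace: tenant-a
spec:
  podSelector: {}          # applies to every Pod in tenant-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # matches only Pods in this same namespace
```

A matching Role/RoleBinding scoped to the tenant-a namespace would complete the picture on the RBAC side.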
Q.54 What is the difference between a Kubernetes job and a Kubernetes CronJob?
- Kubernetes Job: Designed to run a task or workload to completion once. The Job controller creates Pods and retries failed ones until the task completes successfully (or a retry limit is reached).
- Kubernetes CronJob: Designed for running recurring tasks on a defined schedule (similar to traditional cron). It creates Jobs according to the schedule you provide.
Use Cases:
- Jobs: Tasks with a definite start and end like batch processing, database cleanup, or report generation.
- CronJobs: Periodic tasks like scheduled backups, log rotation, or sending email notifications.
Example:
- Job: Processing a queue of data import tasks.
- CronJob: Running database clean-up at midnight daily.
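To make the contrast concrete, here is a hedged sketch of both objects side by side (image names and commands are placeholders):

```yaml
# One-shot Job: runs the import task to completion, retrying failed Pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: data-import
spec:
  backoffLimit: 3                           # retry up to 3 times on failure
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: importer
        image: example.com/importer:1.0     # placeholder image
        command: ["python", "import.py"]    # placeholder command
---
# Recurring CronJob: creates a new Job every night at midnight.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-cleanup
spec:
  schedule: "0 0 * * *"                     # midnight daily, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: example.com/db-cleanup:1.0   # placeholder image
            command: ["sh", "-c", "cleanup.sh"] # placeholder command
```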
Q.55 How can you achieve high availability for etcd in a Kubernetes control plane?
- Multiple etcd nodes: Deploy an odd number of etcd replicas (typically 3 or 5) to form a cluster. This tolerates individual node failures while maintaining quorum.
- Distributed across failure domains: Place etcd nodes across different availability zones or datacenters to increase resilience to wider outages.
- Load Balancing: etcd clients (including the kube-apiserver) can be given the full list of member endpoints; alternatively, consider a load balancer in front of the cluster to provide a single point of access.
- Regular Snapshots/Backups: Have a snapshot process or backup strategy (e.g., etcdctl snapshot save) to recover in case of data loss; see the sketch after the example below.
Use Cases:
- Any production Kubernetes cluster: etcd is the backbone of the Kubernetes control plane. Making it highly available is crucial for cluster stability.
Example:
- Three etcd nodes distributed across three availability zones in a cloud environment.
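For the snapshot/backup point above, one possible sketch is a CronJob pinned to a control-plane node that runs etcdctl snapshot save. The certificate and backup paths below assume a kubeadm layout, and the image tag should match your cluster's etcd version:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 */6 * * *"                   # every six hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          hostNetwork: true                 # reach etcd on the node's localhost
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
            effect: NoSchedule
          containers:
          - name: backup
            image: registry.k8s.io/etcd:3.5.9-0   # assumption: match your etcd version
            command:
            - /bin/sh
            - -c
            - >-
              etcdctl --endpoints=https://127.0.0.1:2379
              --cacert=/etc/kubernetes/pki/etcd/ca.crt
              --cert=/etc/kubernetes/pki/etcd/server.crt
              --key=/etc/kubernetes/pki/etcd/server.key
              snapshot save /backup/etcd-$(date +%Y%m%d%H%M).db
            volumeMounts:
            - name: etcd-certs
              mountPath: /etc/kubernetes/pki/etcd
              readOnly: true
            - name: backup
              mountPath: /backup
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd  # kubeadm default cert path
          - name: backup
            hostPath:
              path: /var/backups/etcd         # assumption: backup directory on the host
```

In practice you would also ship these snapshot files off the node (e.g., to object storage) so a backup survives the loss of the machine itself.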
Part 1 – Kubernetes Interview Questions & Answers (Q.1 to Q.5)
Part 2 – Kubernetes Interview Questions & Answers (Q.6 to Q.10)
Part 3 – Kubernetes Interview Questions & Answers (Q.11 to Q.15)
Part 4 – Kubernetes Interview Questions & Answers (Q.16 to Q.20)
Part 5 – Kubernetes Interview Questions & Answers (Q.21 to Q.25)
Part 6 – Kubernetes Interview Questions & Answers (Q.26 to Q.30)
Part 7 – Kubernetes Interview Questions & Answers (Q.31 to Q.35)
Part 8 – Kubernetes Interview Questions & Answers (Q.36 to Q.40)
Part 9 – Kubernetes Interview Questions & Answers (Q.41 to Q.45)
Part 10 – Kubernetes Interview Questions & Answers (Q.46 to Q.50)
Hope you find this post helpful.
Telegram: https://t.me/LearnDevOpsForFree
Twitter: https://twitter.com/techyoutbe
YouTube: https://www.youtube.com/@T3Ptech