Overview of Kubernetes, environment setup, and reference books

Kubernetes Overview

Kubernetes is an open-source orchestration tool for managing applications that run on Docker and other container runtimes. Kubernetes makes it easy to deploy, scale, and manage applications across multiple nodes, and to fail over when a node goes down.

The following are the main components of Kubernetes

  • Master Node: The central administrative node that controls the entire cluster.
  • Worker Node: The node that actually executes the application.
  • Pod: A unit of one or more containers, which is the unit of deployment.
  • Service: A way to group a set of Pods and expose them as a single network endpoint.
  • Volume: Provides persistent storage and is used to store container data.
  • ReplicaSet: An object that manages replicas of a Pod.
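To make these components concrete, the smallest deployable unit, a Pod, is defined in a manifest like the following (the name and image are illustrative):

```yaml
# pod.yaml -- a single-container Pod (name and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: example
    image: nginx:1.25
    ports:
    - containerPort: 80
```

In practice, Pods are rarely created directly; a Deployment or ReplicaSet manages them so that failed Pods are replaced automatically.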

Kubernetes runs on a variety of cloud platforms and, because it runs applications as containers, supports a wide range of programming languages and frameworks. Kubernetes provides features such as auto-scaling, load balancing, health checks, and rolling updates, giving applications high availability, reliability, and scalability.
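As an example of the auto-scaling mentioned above, a HorizontalPodAutoscaler can scale a Deployment based on CPU utilization; a minimal sketch (the target Deployment name is illustrative):

```yaml
# hpa.yaml -- scale the "example" Deployment between 2 and 5 replicas
# whenever average CPU utilization exceeds 80%
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```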

Main flows

The following steps are required to implement Kubernetes

  • Selecting a tool to set up a Kubernetes cluster: While it is possible to set up a Kubernetes cluster manually, in many cases it is more convenient to use an automated tool such as kubeadm, kops, or Rancher.
  • Node setup: A Kubernetes cluster requires a Master node and several Worker nodes. The Master node runs the control plane of the cluster, and the Worker nodes run the applications.
  • Building Docker containers: Kubernetes runs applications as containers, so the application must be packaged into a Docker image.
  • Creating Kubernetes resources: Kubernetes manages applications through resources such as Pods, Services, and Deployments. To create these resources, YAML files are written and applied to the cluster using kubectl.
  • Deploying applications: Once the resources are defined, the application can be deployed by using the kubectl command to create a Pod or Deployment on the cluster.
  • Monitoring and logging: Kubernetes lets you monitor application performance; tools such as Prometheus and Grafana can collect metrics across the cluster and aggregate application logs.
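As a rough illustration of the resource-creation step, manifests can also be generated programmatically rather than written by hand; kubectl apply accepts JSON as well as YAML. A minimal Python sketch (the function and all names are illustrative, not part of any Kubernetes library):

```python
import json

def deployment_manifest(name, image, replicas=1, port=8080):
    """Build a minimal apps/v1 Deployment manifest as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the Pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }
                    ]
                },
            },
        },
    }

# Write a JSON manifest that `kubectl apply -f deployment.json` can consume
manifest = deployment_manifest("hello-world-app", "hello-world-app:v1", replicas=3)
with open("deployment.json", "w") as f:
    json.dump(manifest, f, indent=2)
```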

Following these steps, Kubernetes can be put into use. However, Kubernetes is a complex tool with a steep learning curve, so it is important to set aside sufficient time to learn it when adopting it for the first time.

Environment Setting

Setting up a Kubernetes environment requires several steps. The basic steps are outlined below.

  • Installation of Kubernetes
    • To install in a local environment, Minikube can be used.
    • To install in a cloud environment, follow the documentation of the respective cloud provider.
  • Installation of kubectl
    • kubectl is a tool necessary to connect to a Kubernetes cluster and perform operations.
  • Connecting to a Kubernetes cluster
    • Connect to the Kubernetes cluster using the kubectl command.
    • The connection requires the API server address of the Kubernetes master, credentials, and the cluster name.
  • Creating Kubernetes Objects
    • Kubernetes uses various objects to manage applications.
    • For example, the Deployment object can be used to manage application deployment.
  • Checking Kubernetes Objects
    • Use the kubectl command to check the status of objects you have created.
    • For example, the kubectl get pods command can be used to check the status of pods.
  • Updating Kubernetes Objects
    • If necessary, you can update the objects you have created.
    • For example, the kubectl edit deployment command can be used to edit the Deployment object.
  • Deleting Kubernetes Objects
    • Objects that are no longer needed can be deleted.
    • For example, the kubectl delete deployment command can be used to delete a Deployment object.
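The connection information used by kubectl in the steps above (API server address, credentials, cluster and context names) is typically kept in a kubeconfig file, usually ~/.kube/config. A minimal sketch with placeholder values:

```yaml
# ~/.kube/config -- all names, addresses, and data fields are placeholders
apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    server: https://203.0.113.10:6443
    certificate-authority-data: <base64-encoded-CA-certificate>
users:
- name: example-user
  user:
    client-certificate-data: <base64-encoded-client-certificate>
    client-key-data: <base64-encoded-client-key>
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context
```

Managed services and tools such as Minikube or kubeadm generate this file for you; `kubectl config use-context` switches between clusters.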

The above is the basic environment setup procedure for Kubernetes. However, Kubernetes is a very flexible system with many options. For more information, please refer to the official documentation.

Hello World

As an example of Hello World in Kubernetes, we will discuss how to deploy a simple web application on a Kubernetes cluster.

  1. Creating a Docker image: First, we need to create a Docker image containing a simple web application. The following is an example of a Hello World application written in Node.js.
const http = require('http');
const port = 8080;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

To create a Docker image containing the above application, create a Dockerfile as follows

FROM node:14
WORKDIR /app
# Copy the dependency manifests first so the npm install layer is cached
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
# Assumes package.json defines a "start" script that runs the server
CMD ["npm", "start"]

To create a Docker image using the above Dockerfile, execute the following command

docker build -t hello-world-app:v1 .
  2. Creating Kubernetes Resources: Next, Kubernetes resources need to be created. Below is an example YAML file for creating a Deployment and a Service.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-app
  labels:
    app: hello-world-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-app
  template:
    metadata:
      labels:
        app: hello-world-app
    spec:
      containers:
      - name: hello-world-app
        image: hello-world-app:v1
        ports:
        - containerPort: 8080
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app
spec:
  type: ClusterIP
  selector:
    app: hello-world-app
  ports:
    - name: http
      port: 80
      targetPort: 8080

To create Deployment and Service using the above YAML file, execute the following command

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
  3. Testing the application: Finally, to test the web application you have created, obtain the Service’s ClusterIP and send a request with the curl command (a ClusterIP Service is reachable from inside the cluster, for example from a node or another Pod).
kubectl get svc hello-world-app
curl <CLUSTER-IP>:80

With the above, a simple Hello World application can be deployed on a Kubernetes cluster and verified to work.

In addition, “Deploying and Operating Microservices – Docker and Kubernetes” shows an example implementation using Kubernetes for microservices.

Reference Book

A good reference book on Kubernetes is the Kubernetes Practical Guide.

This book contains detailed information on putting Kubernetes into practice; its table of contents is shown below.

Introduction
Target of this Manual
Precautions
Structure of this manual
Execution Environment
Code used in this manual
Notation of this Manual
Part 1 Kubernetes Basics
Chapter 1 The World Aimed for by Cloud Native
1-1 What is Cloud Native?
1-1-1 Business Value Provided by Cloud Native
1-1-2 Components Supporting Cloud-native
1-1-3 Cloud-native Application Methodology
1-2 Organizations Promoting Cloud-Native
1-2-1 Role of CNCF
1-2-2 CNCF Projects
1-3 Organizational Structure Required for Cloud Native
1-3-1 Process and Organizational Change
1-3-2 Continuous Improvement of the Organization
1-4 Summary
Chapter 2 Technologies Supporting Containers
2-1 Container Overview
2-1-1 Value provided by containers
2-1-2 Elemental Technologies of Containers
2-1-3 Container Technology Standardization through OCI
2-1-4 Layers of Container Runtime
2-2 Kubernetes Overview
2-2-1 Role of Container Orchestration
2-2-2 What is Kubernetes?
2-2-3 Kubernetes Supported Products and Their Forms
2-2-4 The Kubernetes Ecosystem
2-3 Summary
Chapter 3 Kubernetes Architecture
3-1 Overview of Kubernetes Cluster
3-1-1 Object Overview
3-1-2 Overview of Control Plane
3-2 Master Node Components
3-2-1 kube-apiserver
3-2-2 kube-scheduler
3-2-3 kube-controller-manager
3-2-4 etcd
3-3 Worker Node Components
3-3-1 kubelet
3-3-2 kube-proxy
3-4 Extension components
3-4-1 Service Discovery
3-4-2 Visualization & Control
3-5 Summary
Chapter 4 Building a Kubernetes Cluster
4-1 kubeadm
4-1-1 Overview of Cluster Using kubeadm
4-1-2 Preparation of Cluster Nodes
4-1-3 Building a Master Node
4-1-4 Building a Worker Node
4-2 Azure Kubernetes Service
4-2-1 Overview of Clusters Using AKS
4-2-2 Building AKS
4-3 Deploying Sample Containers
4-3-1 Hello World
4-4 Summary
Chapter 5 Overview of Kubernetes Objects
5-1 Main Objects
5-1-1 Pod
5-1-2 ReplicaSet
5-1-3 Deployment
5-1-4 Service
5-2 Application Deployment Using Objects
5-2-1 Sock Shop Architecture
5-2-2 Deployment of Sock Shop
5-2-3 Pod Field Details
5-2-4 Handling Environment Variables
5-3 Autoscaling with Objects
5-3-1 Pod Autoscale
5-3-2 Horizontal AutoScale (HPA)
5-3-3 Vertical AutoScale (VPA)
5-4 Summary
Part 2 Development and Operation of Cloud Native Applications
Chapter 6 Container Application Catalog
6-1 YAML Management
6-2 Package Management
6-2-1 Helm Overview
6-2-2 Deployment of Chart
6-2-3 Chart Customization
6-3 Custom Managed Services
6-3-1 Operator Overview
6-3-2 Operator Deployment
6-3-3 Utilizing Operator
6-4 Summary
Chapter 7 Continuous Integration
7-1 Cloud-Native Continuous Integration
7-1-1 Continuous Integration Phases
7-2 Container Build
7-2-1 Container Build Overview
7-2-2 Container Build with BuildKit
7-2-3 Container Build with Kaniko
7-3 Container Security
7-3-1 Container Application Security
7-3-2 Overview of Clair
7-3-3 Implementation of Clair
7-4 Continuous Development Workflow for Container Apps
7-4-1 Skaffold Overview
7-4-2 Skaffold Implementation
7-5 Summary
Chapter 8 Continuous Delivery
8-1 Cloud-Native Continuous Delivery
8-1-1 Deployment Strategies
8-1-2 Spinnaker Overview
8-1-3 Spinnaker Installation
8-2 Blue/Green Deployment
8-2-1 Spinnaker Sock Shop Deployment
8-2-2 Blue/Green Deployment Pipeline
8-3 Canary Deployment
8-3-1 Preparation for Automatic Canary Analysis
8-3-2 Automatic Canary Analysis Pipeline
8-4 Summary
Chapter 9 Microservices
9-1 Overview of Microservices Architecture
9-1-1 Challenges of Monolithic Architecture
9-1-2 Design with Microservices Architecture
9-1-3 Advantages of Microservices Architecture
9-1-4 Challenges of Microservices Architecture
9-2 Concept of Service Mesh
9-2-1 Advantages of Service Mesh
9-2-2 Overview of Istio
9-2-3 Istio Architecture
9-3 Deployment of Istio on Kubernetes
9-3-1 Istio on Kubernetes
9-3-2 Installation of Istio
9-3-3 Deployment of Sock Shop on Service Mesh
9-4 Improving Fault Tolerance
9-4-1 Retries
9-4-2 Dealing with Performance Degradation
9-4-3 Timeouts
9-4-4 Circuit Breaker
9-5 Safe Service Updates
9-5-1 Blue/Green Deployment
9-5-2 Canary Deployment
9-6 Chaos Engineering
9-6-1 Fault Injection with Istio
9-7 Distributed Tracing
9-7-1 Standard Technologies for Distributed Tracing
9-7-2 Distributed Tracing with Istio and Jaeger
9-7-3 Trace Analysis with Jaeger
9-8 Summary
Conclusion
Index
Author Profiles
