Kubernetes basic configuration

Installation

Prerequisite preparation

  • Disable swap: sudo swapoff -a
  • Prevent swap from being re-enabled at boot: comment out the swap entry in /etc/fstab
  • Turn off the firewall: ufw disable

View virtual memory:

# free
              total        used        free      shared  buff/cache   available
Mem:        2017296      178020     1636676        1180      202600     1685716
Swap:             0           0           0

One-click Docker installation script (recommended)

# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
# Executing docker install script, commit: 2f4ae48
+ sh -c 'apt-get update -qq >/dev/null'
+ sh -c 'apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
+ sh -c 'curl -fsSL "https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null'
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c 'echo "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list'
+ sh -c 'apt-get update -qq >/dev/null'
+ '[' -n '' ']'
+ sh -c 'apt-get install -y -qq --no-install-recommends docker-ce >/dev/null'
+ sh -c'docker version'
Client:
 Version: 18.09.7
 API version: 1.39
 Go version: go1.10.8
 Git commit: 2d0083d
 Built: Thu Jun 27 17:56:23 2019
 OS/Arch: linux/amd64
 Experimental: false

Server: Docker Engine - Community
 Engine:
  Version: 18.09.7
  API version: 1.39 (minimum version 1.12)
  Go version: go1.10.8
  Git commit: 2d0083d
  Built: Thu Jun 27 17:23:02 2019
  OS/Arch: linux/amd64
  Experimental: false
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

Install Docker manually

$ sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get -y update
$ sudo apt-get -y install docker-ce
$ docker version
Client:
 Version: 18.09.7
 API version: 1.39
 Go version: go1.10.8
 Git commit: 2d0083d
 Built: Thu Jun 27 17:56:23 2019
 OS/Arch: linux/amd64
 Experimental: false
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/version: dial unix /var/run/docker.sock: connect: permission denied

The server part of the output fails because the current user is not yet in the docker group; either run docker commands with sudo or add the user to the docker group as suggested by the install script.

Docker registry mirror acceleration: sudo vim /etc/docker/daemon.json

{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}
sudo systemctl restart docker
$ sudo docker info

Set the hostname

$ hostnamectl
$ sudo hostnamectl set-hostname kubernetes-master

Add the Kubernetes apt repository (Aliyun mirror)

$ su root
# apt-get update && apt-get install -y apt-transport-https
# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# cat << EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
# apt-get update

Install the three core Kubernetes components (kubelet, kubeadm, kubectl)

# apt-get install -y kubelet kubeadm kubectl
Setting up kubeadm (1.15.0-00) ...
# systemctl enable kubelet && systemctl start kubelet

Change setting

kubeadm config print init-defaults > kubeadm.yml

Configuration file content

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.237.129
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubernetes-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Pull mirror

# kubeadm config images pull --config kubeadm.yml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.3.1

View mirror

# docker images
REPOSITORY                                                         TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                 v1.15.0   d235b23c3570   3 weeks ago     82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.15.0   201c7a840312   3 weeks ago     207MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.15.0   8328bb49b652   3 weeks ago     159MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.15.0   2d3813851e87   3 weeks ago     81.1MB
registry.aliyuncs.com/google_containers/coredns                    1.3.1     eb516548c180   5 months ago    40.3MB
registry.aliyuncs.com/google_containers/etcd                       3.3.10    2c4adeb21b4f   7 months ago    258MB
registry.aliyuncs.com/google_containers/pause                      3.1       da86e6ba6ca1   18 months ago   742kB

Initialize the master node

kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version stable

or

kubeadm init --config=kubeadm.yml | tee kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.237.129:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6487845dbd51ddd8874dda2257ecf6157a0a6d7487317355ddc8a081c8525cc1

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

View node information

# kubectl get node
NAME                STATUS     ROLES    AGE     VERSION
kubernetes-master   NotReady   master   9m17s   v1.15.0

View pods in all namespaces

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-bccdc95cf-njhpw                     0/1     Pending   0          12m
kube-system   coredns-bccdc95cf-z4br9                     0/1     Pending   0          12m
kube-system   etcd-kubernetes-master                      1/1     Running   0          11m
kube-system   kube-apiserver-kubernetes-master            1/1     Running   0          12m
kube-system   kube-controller-manager-kubernetes-master   1/1     Running   0          12m
kube-system   kube-proxy-qw6bn                            1/1     Running   0          12m
kube-system   kube-scheduler-kubernetes-master            1/1     Running   0          12m

Node join command

kubeadm join 192.168.237.129:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6487845dbd51ddd8874dda2257ecf6157a0a6d7487317355ddc8a081c8525cc1

Online practice platform https://labs.play-with-k8s.com/

Cluster design


Kubernetes can manage large-scale clusters: it connects the nodes of a cluster to each other and lets you control the whole cluster as if it were a single computer. A cluster has two roles, master and Node (also called worker).

  • The master is the "brain" of the cluster and is responsible for managing it, e.g. scheduling, updating and scaling applications.
  • Nodes do the actual work. A Node is usually a virtual machine or a physical machine on which the docker service and the kubelet service (a Kubernetes component) are already running. After receiving a "task" from the master, the Node carries it out (for example, using docker to run a specified application).


Deployment - application manager

Once we have a Kubernetes cluster, we can run our applications on it, provided that each application can run in docker, i.e. we must prepare a docker image in advance.

With the image ready, we describe the application in a Kubernetes Deployment configuration file: the name of the application, the image to use, how many instances to run, how much memory and CPU it needs, and so on.

With that configuration file, the application can be managed through kubectl, the command-line client provided by Kubernetes. kubectl communicates with the Kubernetes master through its REST API and carries out the management operations. For example, if the Deployment configuration file is called app.yaml, we can create the application with "kubectl create -f app.yaml", and Kubernetes then makes sure it keeps running: if an instance fails, or the Node running it suddenly goes down, Kubernetes automatically detects this and schedules a new instance on another Node, so the application always matches the state we asked for.
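
As a concrete illustration, here is a minimal Deployment manifest of the kind described above. It is only a sketch: the name my-app, the nginx image and the resource figures are placeholder values, not something taken from this tutorial.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical application name
spec:
  replicas: 3                      # run three instances (Pods)
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17          # the docker image prepared in advance
        resources:
          requests:
            cpu: 100m              # example CPU request
            memory: 128Mi          # example memory request

Saved as app.yaml, it would be created with "kubectl create -f app.yaml" exactly as in the paragraph above.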

Pod - the smallest scheduling unit in Kubernetes

In fact, after the Deployment is created in the previous step, what a Kubernetes Node does is not simply "docker run" a container. For reasons of usability, flexibility and stability, Kubernetes introduces something called a Pod as its smallest scheduling unit. So our application actually runs as Pods on the Nodes, and a Pod can only run on a Node.


So what exactly is a Pod? A Pod is a group of containers (there may be only one). A container is itself a small box, and the Pod wraps one more thin box around it. What do the containers inside this box have in common?

  • They can share storage directly through volumes.
  • They share the same network namespace: in plain terms, the same IP address, the same network interface and the same network settings.
  • The containers can "see" each other, for example they know each other's images and the ports the others expose. (A small example Pod follows this list.)
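
A minimal sketch of such a Pod with two containers sharing a volume; the names and images here are illustrative choices, not taken from this tutorial.

apiVersion: v1
kind: Pod
metadata:
  name: shared-box                  # hypothetical Pod name
spec:
  volumes:
  - name: shared-data               # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.17
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper
    image: busybox:1.31
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data

Both containers see the same files through the shared volume and, because they share the Pod's network namespace, can reach each other on localhost.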

The benefits of this design become clearer as you gain hands-on experience, so we will come back to them later.


Service - service discovery, finding the Pods

The Deployment above has been created and its Pods are up and running. How do we actually access our application?

The most direct idea is to access a Pod by its IP and port. But what if there are many instances? Fine: collect all the Pod IPs, configure them in a load balancer and round-robin across them. However, as mentioned above, a Pod may die, and even the Node it runs on may go down; Kubernetes will automatically recreate a new Pod for us. Moreover, the Pods are rebuilt on every service update, and each new Pod gets its own IP. So Pod IPs are unstable and change frequently.

To cope with this churn we need another concept: the Service, which exists precisely to solve this problem. No matter how many Pods a Deployment has, and no matter whether they are updated, destroyed or rebuilt, the Service can always find them and maintain the list of their IPs. A Service also provides several kinds of entry points to the outside world (a small example manifest follows the list):

  1. ClusterIP: a virtual IP unique to the Service inside the cluster. Through this IP we can reach the backend Pods in a load-balanced way, without caring about any specific Pod.
  2. NodePort: the Service opens the same port on every Node of the cluster, and the Pods can be reached through that port on any Node.
  3. LoadBalancer: on top of NodePort, an external load balancer is created (with the help of a public cloud environment) that forwards requests to NodeIP:NodePort.
  4. ExternalName: forwards the service to a specified domain name via a DNS CNAME record (set with spec.externalName).
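
For illustration, a possible NodePort Service that selects the Pods of the hypothetical my-app Deployment sketched earlier; all names and port numbers are placeholder values.

apiVersion: v1
kind: Service
metadata:
  name: my-app-svc                 # hypothetical Service name
spec:
  type: NodePort                   # a ClusterIP is still allocated internally
  selector:
    app: my-app                    # matches Pods carrying this label
  ports:
  - port: 80                       # port exposed on the ClusterIP
    targetPort: 80                 # container port on the Pods
    nodePort: 30080                # port opened on every Node (30000-32767 range)

With this, the application would be reachable at any NodeIP:30080, no matter which Pods are currently alive.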


That seems to solve the access problem. But have you ever wondered how a Service knows which Pods it is responsible for, and how it tracks those Pods as they change?

The easiest idea is to use the Deployment's name, so that one Service corresponds to one Deployment; that would indeed work. But Kubernetes uses a more flexible and general design: Labels. By attaching labels to Pods, a Service can be responsible for the Pods of a single Deployment or of several Deployments, and Deployment and Service remain decoupled through the label. (A short example follows.)
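
As a sketch of how the label ties things together, here are two hypothetical Deployments whose Pods carry the same app label, and one Service that selects all of them through that label; every name, label and image below is an illustrative assumption.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v1                     # hypothetical "old version" Deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
      version: v1
  template:
    metadata:
      labels:
        app: web                   # label shared with web-v2
        version: v1
    spec:
      containers:
      - name: web
        image: nginx:1.16
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v2                     # hypothetical "new version" Deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
      version: v2
  template:
    metadata:
      labels:
        app: web                   # same shared label
        version: v2
    spec:
      containers:
      - name: web
        image: nginx:1.17
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                       # selects the Pods of both Deployments
  ports:
  - port: 80
    targetPort: 80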


RollingUpdate - rolling update

A rolling upgrade is the most typical service upgrade strategy in Kubernetes. The main idea is to increase the number of new-version instances while reducing the number of old-version instances, until the new version reaches the desired count and the old version has shrunk to zero, at which point the rolling upgrade is complete. The service stays available during the entire upgrade, and you can roll back to the old version at any time. (A possible strategy configuration is sketched below.)
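
A minimal sketch of how such a strategy might be expressed in a Deployment; the maxSurge and maxUnavailable figures and all names are illustrative assumptions, not values from this tutorial.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical name, as in the earlier sketch
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                  # at most one extra new-version Pod during the update
      maxUnavailable: 1            # at most one Pod may be unavailable at any time
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17          # changing this image triggers the rolling update

Changing the image (for example with "kubectl set image deployment/my-app my-app=nginx:1.18") would then replace the Pods gradually, and "kubectl rollout undo deployment/my-app" would roll back to the previous version.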


Reference: https://cloud.tencent.com/developer/article/1463206 - "Kubernetes basic configuration", Tencent Cloud developer community.