
Deploying a Kubernetes 1.13.1 Cluster with kubeadm: A Hands-On Record

IntMain



Overview

There are several ways to build a Kubernetes cluster. In my earlier article 《利用K8S技术栈打造个人私有云(连载之:K8S集群搭建)》, for example, I used the binary installation method. Although that approach helps you understand how a k8s cluster fits together, it is rather tedious. kubeadm, on the other hand, is the tool officially provided by Kubernetes for quickly deploying a cluster. It has matured considerably over time, is easy to pick up, and makes the whole process much simpler, so this article walks through it in detail.

Note: This article was first published on My Personal Blog: CodeSheep·程序羊. You are welcome to visit.

Node Planning

This article sets up a three-node Kubernetes cluster with one master and two workers. The overall node plan is shown in the table below:

Hostname      IP              Role
k8s-master    192.168.39.79   k8s master node
k8s-node-1    192.168.39.77   k8s worker node
k8s-node-2    192.168.39.78   k8s worker node

The software versions used on each node are as follows:

Operating system: CentOS-7.4-64Bit

Docker version: 1.13.1

Kubernetes version: 1.13.1

The following components need to be installed on every node:

Docker: the container runtime; it needs no further introduction

kubelet: runs on every node and is responsible for starting containers and Pods

kubeadm: responsible for bootstrapping the cluster

kubectl: the k8s command-line tool, used to deploy and manage applications and to perform CRUD operations on all kinds of resources


Preparation

Disable the firewall on all nodes

systemctl disable firewalld.service 
systemctl stop firewalld.service

Disable SELinux

setenforce 0

vi /etc/selinux/config
SELINUX=disabled

Disable swap on all nodes

swapoff -a
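Note that swapoff -a only disables swap until the next reboot. To keep it disabled permanently, the swap entry in /etc/fstab should also be commented out; a minimal sketch, assuming the swap line contains the word "swap" and is not already commented:

sed -i '/ swap / s/^/#/' /etc/fstab    # comment out the swap mount so it stays off after reboot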

Set the hostname on each node

hostnamectl --static set-hostname k8s-master    # on the master node
hostnamectl --static set-hostname k8s-node-1    # on node 1
hostnamectl --static set-hostname k8s-node-2    # on node 2

Add hostname/IP mappings to hosts resolution on all nodes

Edit the /etc/hosts file and add the following entries:

192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
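One convenient way to append these entries and verify that resolution works, a convenience sketch rather than part of the original steps, to be run only on nodes where the entries are not yet present:

cat <<EOF >> /etc/hosts
192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
EOF
ping -c 1 k8s-node-1    # quick check that the name resolves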

Component Installation

0x01. Install Docker (all nodes)
Not covered in detail here; for reference, a minimal install sketch follows.
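A minimal sketch of one way to get the Docker 1.13.1 listed above onto CentOS 7: the docker package from the stock CentOS extras repo happens to be that version, and any other supported installation method works just as well.

yum install -y docker                                  # CentOS 7 extras repo ships Docker 1.13.1
systemctl enable docker && systemctl start docker
docker version                                         # should report 1.13.1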
0x02. Install kubelet, kubeadm, and kubectl (all nodes)

First, prepare the yum repo (the Aliyun Kubernetes mirror is used here, a common choice for networks inside mainland China):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Then run the following commands to install the packages:

setenforce 0
sed -i "s/^SELINUX=enforcing$/SELINUX=disabled/" /etc/selinux/config

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
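Note that the plain yum install above pulls whatever version is newest in the repo. To pin the packages to the 1.13.1 release this article targets, version-suffixed package names should work (the exact release suffixes available depend on your mirror):

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1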


Master Node Configuration

0x01. Initialize the k8s cluster

To cope with unreliable access to the upstream registries from within mainland China, we have to pull the required images from mirrors in advance and re-tag them:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1           
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.2.24                      
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
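The same pull/tag/rmi dance can be expressed more compactly as a loop over the image list; a convenience sketch equivalent to the commands above, except for coredns and the flannel image, which come from different mirrors and are handled individually:

for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/$img
  docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img
  docker rmi  mirrorgooglecontainers/$img
done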

Then run the following command on the master node to initialize the k8s cluster:

kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16

--kubernetes-version: specifies the k8s version

--apiserver-advertise-address: specifies which of the master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface with the default gateway

--pod-network-cidr: specifies the Pod network range. The value to use depends on the network add-on; this article uses the classic flannel add-on.
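As the log below shows, the actual run used kubeadm init --config kubeadm-config.yaml instead of the individual flags. The two forms are equivalent; here is a minimal sketch of such a config file, with field names from the kubeadm v1beta1 API and the same values as the flags above. (The warning at the top of the log was caused by a stray non-breaking space before podSubnet in the author's file, so make sure the YAML contains only plain spaces.)

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.79
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml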

After running the command, the console prints the detailed cluster initialization process shown below:

[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209   10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using "kubeadm config images pull"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

[root@localhost ~]#
0x02. Configure kubectl

On the master, run the following as the root user to configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile 
echo $KUBECONFIG
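Optionally, purely as a convenience and not required by any later step, kubectl's bash completion can be enabled as well:

echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc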
0x03. Install the Pod network

Installing a Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network add-ons; here we again go with the classic flannel.

First, set the required kernel parameter:

sysctl net.bridge.bridge-nf-call-iptables=1
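The sysctl command above only affects the running kernel. To make the setting persist across reboots, one common approach is to drop it into a file under /etc/sysctl.d (the file name below is arbitrary):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system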

Then run the following on the master node:

kubectl apply -f kube-flannel.yaml
The kube-flannel.yaml file is available here.

Once the Pod network is installed, run the following command to check whether the CoreDNS Pod is running; as soon as it is, you can move on to the next steps.

kubectl get pods --all-namespaces -o wide

We can also see that the master node is now Ready: kubectl get nodes


Adding the Worker Nodes

Run the following command on each of the two worker nodes to join them to the k8s cluster that is already up on the master:

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

If you have forgotten the token, you can retrieve it on the master with:

kubeadm token list
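Bootstrap tokens expire after 24 hours by default, so on an older master the list may be empty. In that case a fresh token, together with the complete join command, can be generated with:

kubeadm token create --print-join-command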

The output of the kubeadm join command above looks like this:

[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token ynffffdp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with "kubectl -n kube-system get cm kubeadm-config -oyaml"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run "kubectl get nodes" on the master to see this node join the cluster.

Verification

Check node status

kubectl get nodes

Check the status of all Pods

kubectl get pods --all-namespaces -o wide

The cluster is now up and running. Next, let's look at how to tear the cluster down cleanly.


Tearing Down the Cluster

First, drain and delete each node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Once the nodes have been removed, the cluster can be reset by running:

kubeadm reset
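kubeadm reset does not undo everything. If the machines will be reused, it is usually also worth flushing the iptables rules and removing the leftover CNI and kubeconfig files; a hedged cleanup sketch, to be adapted to your environment before running:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d $HOME/.kube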

Installing the Dashboard

Just as you would pair Elasticsearch with a visual management tool, it is a good idea to give the k8s cluster one as well, to make the cluster easier to manage.

So next we install kubernetes-dashboard v1.10.0 for visual cluster management.

First, manually pull the image and re-tag it (on all nodes):

docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0

Install the dashboard:

kubectl create -f dashboard.yaml

The dashboard.yaml file is available here.

Check whether the dashboard Pod has started properly; if it has, the installation succeeded:

 kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-4rds2                1/1     Running   0          81m
coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          80m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
kube-proxy-crr7k                        1/1     Running   0          81m
kube-proxy-gk5vf                        1/1     Running   0          78m
kube-proxy-ktr27                        1/1     Running   0          77m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s

Check the port on which the dashboard is exposed externally

kubectl get service --namespace=kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
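With the dashboard exposed as a NodePort (31234 in the output above), it should be reachable over HTTPS on any node's IP once the certificates below are in place; a quick reachability check from the command line, using the master IP used throughout this article:

curl -k https://192.168.39.79:31234/    # -k because the certificate is self-signed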

Generate a private key and a certificate signing request:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr    # just press Enter at every prompt

Generate the SSL certificate:

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Then place the generated dashboard.key and dashboard.crt under /home/share/certs; this path is referenced by the dashboard-user-role.yaml file used in the next step.
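A minimal way to put the files where the next step expects them, assuming the key and certificate were generated in the current directory:

mkdir -p /home/share/certs
cp dashboard.key dashboard.crt /home/share/certs/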

Create the dashboard user

 kubectl create -f dashboard-user-role.yaml

The dashboard-user-role.yaml file is available here.

Get the login token

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name:         admin-token-9d4vl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw

Now that the token has been generated, open a browser and enter the token to log in to the cluster management page:


Postscript

My abilities are limited, so if anything here is wrong or inappropriate, corrections and feedback are very welcome. Let's learn from each other!

My Personal Blog: CodeSheep

My Half-Year Journey of Technical Blogging


