Deploy Kubernetes on x86 and ARM with KubeKey, starting from building an offline package.
This article is based on the official Air-Gapped Installation of KubeSphere guide; treat the official documentation as authoritative.
Please do not copy my configuration files verbatim: they are certainly not the latest, and they will not necessarily match any environment other than my own.
I recommend reading the whole article first and then adapting the configuration to your own situation.
Environment
I prepared four hosts: three for the cluster deployment and one for building the offline package. Note that the machine used to build the offline package needs no direct connection to the cluster environment; the finished offline package can be transferred into the cluster environment by any means.
No. | IP Address | Operating System | Role | Notes |
---|---|---|---|---|
1 | 192.168.110.92 | openEuler 24.03 SP1 LTS | Offline package build | — |
2 | 192.168.50.71 | Ubuntu 20.04 LTS | Master node | Node1 |
3 | 192.168.50.72 | Ubuntu 20.04 LTS | Worker node | Node2 |
4 | 192.168.50.73 | Ubuntu 20.04 LTS | Worker node | Node3 |
Building the Offline Package
This step requires a host with Internet access, used to download and build the offline package.
```
[root@localhost ~]# uname -a
```
Getting the Image List and Dependency Packages
Open the Get KubeSphere Image List page, select the extensions you need (this is optional), and fill in your email address. Shortly afterwards you will receive three files by email: kubesphere-images.txt, kk-manifest.yaml, and kk-manifest-mirror.yaml. We only need the last one here.
Open Releases v3.1.6, find the dependency-package ISO file for your target operating system, and download it:
```
wget https://github.com/kubesphere/kubekey/releases/download/v3.1.6/ubuntu-20.04-debs-amd64.iso -O ubuntu-20.04-debs-amd64.iso
```
Downloading and Building the Offline Package
- Download kk and extract it:
```
wget https://github.com/kubesphere/kubekey/releases/download/v3.1.7/kubekey-v3.1.7-linux-amd64.tar.gz -O kubekey-v3.1.7-linux-amd64.tar.gz
tar -zxvf kubekey-v3.1.7-linux-amd64.tar.gz
```
Switch to the China download source:
```
export KKZONE=cn
```
- Create the manifest file, passing `--with-registry` so that the installation files for the image registry are packaged as well:
```
./kk create manifest --with-kubernetes v1.22.12 --with-registry
```
This command creates a `manifest-sample.yaml` file.
- Modify the `manifest-sample.yaml` file to suit your own situation:
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  # It is recommended to download the dependency ISO and add the operating system here; this can also be left empty
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    osImage: Ubuntu 20.04.3 LTS
    repository: # Define the operating system repository iso file that will be included in the artifact.
      iso:
        localPath: ./ubuntu-20.04-debs-amd64.iso # Define getting the iso file from the local path.
        url: # Define getting the iso file from the URL.
  kubernetesDistributions:
  - type: kubernetes
    version: v1.22.12
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    # I use docker registry as the local offline image registry, so harbor is commented out, which saves roughly 300 MB
    # harbor:
    #   version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  ## ks-core
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-apiserver:v4.1.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-console:v4.1.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-controller-manager:v4.1.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.16
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/redis:7.2.4-alpine
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/haproxy:2.9.6-alpine
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-extensions-museum:v1.1.2
  ## gateway
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/nginx-ingress-controller:v1.4.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/gateway-apiserver:v1.0.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/gateway-controller-manager:v1.0.2
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.16
  ## network
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/network-extension-apiserver:v1.1.0
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/network-extension-controller:v1.1.0
  ## storage-utils
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/storageclass-accessor:v0.2.5
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/snapshot-controller:v4.2.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/snapshotclass-controller:v0.0.1
  - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/pvc-autoresizer:v0.3.1
  registry:
    auths: {}
```
- Build the offline package:
```
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
```
Wait until `Pipeline[ArtifactExportPipeline] execute successfully` appears; the build is then complete.
- Install Helm:
```
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
Download the KubeSphere Core Helm Chart:
```
VERSION=1.1.3 # Chart version
helm fetch https://charts.kubesphere.io/main/ks-core-${VERSION}.tgz
```
At this point you should have the following files (a quick sanity check is sketched below the listing):
```
[root@localhost ~]# ls -lah
```
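Before shipping the artifact into the offline environment, an optional sanity check can save a round trip. This is a minimal sketch using standard tools only; none of it is required by KubeKey itself:

```bash
./kk version                 # confirm the KubeKey binary runs
sha256sum kubesphere.tar.gz  # record a checksum to compare after the transfer
ls -lh ks-core-1.1.3.tgz     # confirm the Helm chart was fetched
```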
Deploying the K8s Cluster
Preparing the Hosts
```
root@node1:~# uname -a
```
- Transfer the offline package you just built (mainly the three files below) to the Master node in the offline environment:
```
root@k8s-node:~# ls -la
total 3436896
drwx------  6 root root       4096 Feb 28 12:10 .
drwxr-xr-x 18 root root       4096 Feb 28 11:02 ..
-rwxr-xr-x  1 root root   82005163 Feb 28 12:08 kk
-rw-r--r--  1 root root      82422 Feb 28 12:10 ks-core-1.1.3.tgz
-rw-r--r--  1 root root 3437246299 Feb 28 12:49 kubesphere.tar.gz
```
- Install the dependencies:
From the `./ubuntu-20.04-debs-amd64.iso` dependency ISO, find the following packages, transfer them to every node, and install them (an installation sketch follows this list):
```
ubuntu-20.04-debs-amd64.iso/archive.ubuntu.com/ubuntu/pool/main/c/conntrack-tools/conntrack_1.4.5-2_amd64.deb
ubuntu-20.04-debs-amd64.iso/archive.ubuntu.com/ubuntu/pool/main/s/socat/socat_1.7.3.3-2_amd64.deb
```
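One way to do this, as referenced above, is to copy the ISO to each node, loop-mount it, and install the two .deb files with dpkg. A minimal sketch, assuming the ISO sits in the current directory on the node (paths and versions exactly as listed above):

```bash
mount -o loop ubuntu-20.04-debs-amd64.iso /mnt   # loop-mount the dependency ISO
dpkg -i /mnt/archive.ubuntu.com/ubuntu/pool/main/c/conntrack-tools/conntrack_1.4.5-2_amd64.deb \
        /mnt/archive.ubuntu.com/ubuntu/pool/main/s/socat/socat_1.7.3.3-2_amd64.deb
umount /mnt
```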
Deploying Kubernetes
- Create the offline cluster configuration file:
```
export KKZONE=cn
./kk create config --with-kubernetes v1.22.12
```
- Modify the cluster configuration file:
Two changes are needed. First, add a `registry` entry under `spec.roleGroups` to specify where the local offline image registry will be deployed; second, set `privateRegistry` and `namespaceOverride` (here `kubesphereio`) under `spec.registry`. Modify them as in the example below.
The example configuration and deployment in this article use docker registry rather than Harbor; the advantage is that it is simpler and easier to deploy.
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.50.71, internalAddress: 192.168.50.71, user: lxnchan, password: "i-hate-this-very-long-password-but-i-finally-do"}
  - {name: node2, address: 192.168.50.72, internalAddress: 192.168.50.72, user: lxnchan, password: "i-hate-this-very-long-password-but-i-finally-do"}
  - {name: node3, address: 192.168.50.73, internalAddress: 192.168.50.73, user: lxnchan, password: "i-hate-this-very-long-password-but-i-finally-do"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
    - node3
    # Add the registry role to choose where the local offline image registry is deployed
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    # Private registry address and namespace used during cluster deployment
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
```
- Deploy the local offline image registry:
```
./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
```
The deployment has succeeded once `Local image registry created successfully. Address: dockerhub.kubekey.local` appears.
- Apply the image registry CA certificate to the nodes:
Transfer `/etc/ssl/registry/ssl/ca.crt` to the trusted-certificate location on every node (including the Master node):
```
cp /etc/ssl/registry/ssl/ca.crt /usr/local/share/ca-certificates/
scp /etc/ssl/registry/ssl/ca.crt root@192.168.50.72:/usr/local/share/ca-certificates/
scp /etc/ssl/registry/ssl/ca.crt root@192.168.50.73:/usr/local/share/ca-certificates/
```
Then update the CA certificates on every node (a quick trust check is sketched after this list):
```
root@node2:~# update-ca-certificates
Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt, it does not contain exactly one certificate or CRL
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
```
- Deploy the Kubernetes cluster:
```
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-local-storage
```
The installation has succeeded when output like the following is returned at the end:
```
13:30:27 UTC Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl get pod -A
```
Check the node status:
```
root@k8s-node:~# kubectl get node -A
NAME    STATUS   ROLES                         AGE    VERSION
node1   Ready    control-plane,master,worker   111s   v1.22.12
node2   Ready    worker                        93s    v1.22.12
node3   Ready    worker                        92s    v1.22.12
```
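As mentioned in the CA-certificate step above, a quick way to confirm that each node trusts and can reach the local registry, and that the cluster itself came up cleanly, is sketched below. The `/v2/` endpoint is the standard Docker Registry HTTP API root and should return an empty JSON object without a certificate error; adjust the address if you changed `privateRegistry`:

```bash
# On each node: verify TLS trust and connectivity to the local registry
curl https://dockerhub.kubekey.local/v2/

# On the master: all system pods should eventually reach Running
kubectl get pod -A
```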
Deploying KubeSphere
Run the following command to install it:
```
helm upgrade --install -n kubesphere-system --create-namespace ks-core ks-core-1.1.3.tgz \
```
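The continuation of the command is truncated above. For reference, an air-gapped install typically points the chart at the private registry with flags along the following lines; treat the exact values as assumptions to adapt to your own registry layout (the address comes from `privateRegistry` in the cluster configuration):

```bash
# Example flags only; verify the values against the official air-gapped guide and your registry
helm upgrade --install -n kubesphere-system --create-namespace ks-core ks-core-1.1.3.tgz \
     --set global.imageRegistry=dockerhub.kubekey.local/ks \
     --set extension.imageRegistry=dockerhub.kubekey.local/ks \
     --debug \
     --wait
```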
The installation has succeeded if the following information is returned at the end and you can open the login page:
```
NOTES:
```
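The NOTES output is truncated above. To watch the installation finish and find the login page, a minimal check looks like the following; the 30880 NodePort is the KubeSphere console default and is an assumption here, so adjust it if you customized the service:

```bash
# watch the ks-core components come up
kubectl get pods -n kubesphere-system
# the console is then usually reachable on a NodePort, e.g.:
# http://192.168.50.71:30880
```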
Differences When Deploying on CentOS 7
Dependencies
On CentOS 7, the following dependencies should be installed in advance; they can be downloaded from the package repositories:
```
[root@node1 CentOS7-Initial]# ls -la
```
Installation command:
```
rpm -ivh *.rpm --nodeps --force
```
Installing the CA Certificate
Copy the `/etc/ssl/registry/ssl/ca.crt` certificate into the `/etc/pki/ca-trust/source/anchors` directory on every node, then run `/bin/update-ca-trust`.
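In command form (run on every node, assuming ca.crt has already been copied to that node, with the paths stated above):

```bash
cp /etc/ssl/registry/ssl/ca.crt /etc/pki/ca-trust/source/anchors/
/bin/update-ca-trust
```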
On a node where the CA certificate has been installed, you can test whether it works with the following command:
```
curl https://dockerhub.kubekey.local
```
If it is installed correctly, nothing is returned; otherwise curl reports a certificate verification failure.
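If you prefer an explicit result rather than "no output", the following variant prints only the HTTP status code (expect 200) and still fails loudly on a TLS error; this is a plain curl sketch, not something the installer requires:

```bash
curl -s -o /dev/null -w '%{http_code}\n' https://dockerhub.kubekey.local/v2/
```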