
Deploying Kubernetes on x86 and ARM architectures with KubeKey, starting from building an offline package.

This article is based on the official guide Offline Installation of KubeSphere; treat the official documentation as authoritative.

Please do not copy my configuration files wholesale: they are certainly not the latest, and they will not necessarily match anyone's environment but my own.
I recommend reading this article in full and then adapting the configuration to your own situation.

Environment

I prepared four hosts here: three for the cluster deployment and one for building the offline package. Note that the machine building the offline package needs no direct connection to the cluster environment; the finished offline package can be delivered into the cluster environment by any means.

No.   IP address        Operating system           Role                    Notes
1     192.168.110.92    openEuler 24.03 SP1 LTS    Offline package build
2     192.168.50.71     Ubuntu 20.04 LTS           Master node             Node1
3     192.168.50.72     Ubuntu 20.04 LTS           Worker node             Node2
4     192.168.50.73     Ubuntu 20.04 LTS           Worker node             Node3

Building the Offline Package

This step requires a host with Internet access, used to download the components and build the offline package.

[root@localhost ~]# uname -a
Linux localhost.localdomain 6.6.0-72.0.0.76.oe2403sp1.x86_64 #1 SMP Fri Dec 27 12:13:01 CST 2024 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/os-release
NAME="openEuler"
VERSION="24.03 (LTS-SP1)"
ID="openEuler"
VERSION_ID="24.03"
PRETTY_NAME="openEuler 24.03 (LTS-SP1)"
ANSI_COLOR="0;31"

Obtaining the Image List and Dependency Packages

Open the Get KubeSphere Image Lists page, select the extensions you need (selecting any is optional), and enter your email address. Shortly afterwards you will receive three files by email: kubesphere-images.txt, kk-manifest.yaml, and kk-manifest-mirror.yaml. We only need the last one here.
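If you later want to fold the emailed image list into your own manifest, a minimal sketch for printing just the image entries, assuming kk-manifest-mirror.yaml follows the standard manifest layout with the list under spec.images:

# Print the image entries between the images: and registry: keys
# (two-space indentation assumed; adjust if your file differs)
sed -n '/^  images:/,/^  registry:/p' kk-manifest-mirror.yaml | grep '^  - '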

Open Releases v3.1.6, find the dependency-package ISO for your target operating system, and download it:

wget https://github.com/kubesphere/kubekey/releases/download/v3.1.6/ubuntu-20.04-debs-amd64.iso -O ubuntu-20.04-debs-amd64.iso
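Since this ISO feeds everything that follows, it can be worth a quick integrity check after the download; compare the digest against the checksum published on the release page, if one is provided:

# Print the SHA-256 digest of the downloaded dependency ISO
sha256sum ubuntu-20.04-debs-amd64.iso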

Downloading and Building the Offline Package

  1. Download kk and extract it:
    wget https://github.com/kubesphere/kubekey/releases/download/v3.1.7/kubekey-v3.1.7-linux-amd64.tar.gz -O kubekey-v3.1.7-linux-amd64.tar.gz
    tar -zxvf kubekey-v3.1.7-linux-amd64.tar.gz
    Switch to the China download mirror:
    export KKZONE=cn
  2. Generate the manifest file, passing --with-registry so that the image registry's installation files are packaged into the artifact as well:
    ./kk create manifest --with-kubernetes v1.22.12 --with-registry
    This command creates a manifest-sample.yaml file.
  3. Edit manifest-sample.yaml to suit your own situation:
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Manifest
    metadata:
      name: sample
    spec:
      arches:
      - amd64
      operatingSystems:
      # Downloading the dependency ISO and declaring the operating system here is recommended; this may also be left empty
      - arch: amd64
        type: linux
        id: ubuntu
        version: "20.04"
        osImage: Ubuntu 20.04.3 LTS
        repository: # Define the operating system repository iso file that will be included in the artifact.
          iso:
            localPath: ./ubuntu-20.04-debs-amd64.iso # Define getting the iso file from the local path.
            url: # Define getting the iso file from the URL.
      kubernetesDistributions:
      - type: kubernetes
        version: v1.22.12
      components:
        helm:
          version: v3.14.3
        cni:
          version: v1.2.0
        etcd:
          version: v3.5.13
        containerRuntimes:
        - type: docker
          version: 24.0.9
        - type: containerd
          version: 1.7.13
        calicoctl:
          version: v3.27.4
        crictl:
          version: v1.29.0
        docker-registry:
          version: "2"
        # I use docker registry as the local offline image registry, so harbor is commented out; this saves roughly 300 MB
        # harbor:
        #   version: v2.10.1
        docker-compose:
          version: v2.26.1
      images:
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
      - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
      - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
      - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
      - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
      - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
      - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
      - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
      - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
      - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
      - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
      - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
      ## ks-core
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-apiserver:v4.1.2
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-console:v4.1.2
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-controller-manager:v4.1.2
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.16
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/redis:7.2.4-alpine
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/haproxy:2.9.6-alpine
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/ks-extensions-museum:v1.1.2
      ## gateway
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/nginx-ingress-controller:v1.4.0
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/gateway-apiserver:v1.0.2
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/gateway-controller-manager:v1.0.2
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/kubectl:v1.27.16
      ## network
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/network-extension-apiserver:v1.1.0
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/network-extension-controller:v1.1.0
      ## storage-utils
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/storageclass-accessor:v0.2.5
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/snapshot-controller:v4.2.1
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/snapshotclass-controller:v0.0.1
      - swr.cn-southwest-2.myhuaweicloud.com/ks/kubesphere/pvc-autoresizer:v0.3.1

      registry:
        auths: {}
  4. Build the offline package:
    ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
    The build is complete once Pipeline[ArtifactExportPipeline] execute successfully appears (a quick sanity check on the result is sketched after this list).
  5. Install Helm:
    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    Download the KubeSphere Core Helm chart:
    VERSION=1.1.3     # Chart version
    helm fetch https://charts.kubesphere.io/main/ks-core-${VERSION}.tgz
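Before shipping everything into the air-gapped environment, you can give both artifacts a quick read-only sanity check; a minimal sketch, with file names as produced above:

# The kk artifact is a regular gzipped tarball; peek at its first entries
tar -tzf kubesphere.tar.gz | head
# Print the chart metadata to confirm the download is intact
helm show chart ks-core-1.1.3.tgz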

At this point, you should have the following files:

[root@localhost ~]# ls -lah
total 3.4G
dr-xr-x---. 4 root root 4.0K Feb 28 19:55 .
dr-xr-xr-x. 19 root root 4.0K Feb 28 18:39 ..
-rwxr-xr-x. 1 root root 79M Oct 30 17:42 kk
-rw-r--r--. 1 root root 81K Feb 28 19:55 ks-core-1.1.3.tgz
drwxr-xr-x. 3 root root 4.0K Feb 28 19:52 kubekey
-rw-r--r--. 1 root root 3.3G Feb 28 19:52 kubesphere.tar.gz
-rw-r--r--. 1 root root 3.8K Feb 28 19:39 manifest-sample.yaml
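As noted in the environment introduction, any transfer method works. A minimal sketch using scp to the Master node (IP from the environment table; adjust the destination path to taste):

# Copy the three files the cluster side needs to the Master node
scp kk ks-core-1.1.3.tgz kubesphere.tar.gz root@192.168.50.71:/root/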

Deploying the Kubernetes Cluster

Preparing the Hosts

root@node1:~# uname -a
Linux node1 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
root@node1:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  1. Transfer the offline package just built (mainly the three files below) to the Master node of the air-gapped environment:
    root@k8s-node:~# ls -la
    total 3436896
    drwx------ 6 root root 4096 Feb 28 12:10 .
    drwxr-xr-x 18 root root 4096 Feb 28 11:02 ..
    -rwxr-xr-x 1 root root 82005163 Feb 28 12:08 kk
    -rw-r--r-- 1 root root 82422 Feb 28 12:10 ks-core-1.1.3.tgz
    -rw-r--r-- 1 root root 3437246299 Feb 28 12:49 kubesphere.tar.gz
  2. Install dependencies (an install sketch follows this list):
    In the ubuntu-20.04-debs-amd64.iso dependency package, locate the following packages, transfer them to every node, and install them:
    ubuntu-20.04-debs-amd64.iso/archive.ubuntu.com/ubuntu/pool/main/c/conntrack-tools/conntrack_1.4.5-2_amd64.deb
    ubuntu-20.04-debs-amd64.iso/archive.ubuntu.com/ubuntu/pool/main/s/socat/socat_1.7.3.3-2_amd64.deb
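A minimal install sketch for one node, assuming the ISO has been copied over and is loop-mounted at /mnt (package paths as listed above):

# Mount the dependency ISO and install the two packages
mount -o loop ubuntu-20.04-debs-amd64.iso /mnt
dpkg -i /mnt/archive.ubuntu.com/ubuntu/pool/main/c/conntrack-tools/conntrack_1.4.5-2_amd64.deb \
        /mnt/archive.ubuntu.com/ubuntu/pool/main/s/socat/socat_1.7.3.3-2_amd64.deb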

Deploying Kubernetes

  1. Create the offline cluster configuration file:
    export KKZONE=cn
    ./kk create config --with-kubernetes v1.22.12
  2. Edit the cluster configuration file:
    Two changes are needed: add a registry entry under spec.roleGroups to specify where the local offline image registry should be deployed, and set privateRegistry and namespaceOverride (kubesphereio) under spec.registry, as in the example below.
    The example configuration and deployment in this article use docker registry rather than Harbor; the advantage is that deployment stays simple and convenient.
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: node1, address: 192.168.50.71, internalAddress: 192.168.50.71, user: lxnchan, password: "i-hate-this-very-long-password-but-i-finally-do"}
      - {name: node2, address: 192.168.50.72, internalAddress: 192.168.50.72, user: lxnchan, password: "i-hate-this-very-long-password-but-i-finally-do"}
      - {name: node3, address: 192.168.50.73, internalAddress: 192.168.50.73, user: lxnchan, password: "i-hate-this-very-long-password-but-i-finally-do"}
      roleGroups:
        etcd:
        - node1
        control-plane:
        - node1
        worker:
        - node1
        - node2
        - node3
        # Add a registry role to choose where the local offline image registry is deployed
        registry:
        - node1
      controlPlaneEndpoint:
        ## Internal loadbalancer for apiservers
        # internalLoadbalancer: haproxy

        domain: lb.kubesphere.local
        address: ""
        port: 6443
      kubernetes:
        version: v1.22.12
        clusterName: cluster.local
        autoRenewCerts: true
        containerManager: docker
      etcd:
        type: kubekey
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
        ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
        multusCNI:
          enabled: false
      registry:
        # Private registry address and namespace override used during cluster deployment
        privateRegistry: "dockerhub.kubekey.local"
        namespaceOverride: "kubesphereio"
        registryMirrors: []
        insecureRegistries: []
      addons: []
  3. Deploy the local offline image registry:
    ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
    The registry is ready once Local image registry created successfully. Address: dockerhub.kubekey.local appears (a sketch for inspecting its contents follows this list).
  4. Install the image registry's CA certificate on the nodes:
    Copy /etc/ssl/registry/ssl/ca.crt to the trusted-certificate location on every node (including the Master node):
    cp /etc/ssl/registry/ssl/ca.crt /usr/local/share/ca-certificates/
    scp /etc/ssl/registry/ssl/ca.crt root@192.168.50.72:/usr/local/share/ca-certificates/
    scp /etc/ssl/registry/ssl/ca.crt root@192.168.50.73:/usr/local/share/ca-certificates/
    Then update the CA certificates on each node:
    root@node2:~# update-ca-certificates 
    Updating certificates in /etc/ssl/certs...
    rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL
    1 added, 0 removed; done.
    Running hooks in /etc/ca-certificates/update.d...
    done.
  5. Deploy the Kubernetes cluster:
    ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-local-storage
    The deployment succeeded if the run ends with the following output:
    13:30:27 UTC Pipeline[CreateClusterPipeline] execute successfully
    Installation is complete.

    Please check the result using the command:

    kubectl get pod -A
    Check the node status:
    root@k8s-node:~# kubectl get node -A
    NAME    STATUS   ROLES                         AGE    VERSION
    node1   Ready    control-plane,master,worker   111s   v1.22.12
    node2   Ready    worker                        93s    v1.22.12
    node3   Ready    worker                        92s    v1.22.12
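As mentioned in step 3, the registry deployed by kk init registry is a plain docker registry, so once a node trusts its CA certificate you can inspect what was pushed over the standard Registry V2 API, e.g.:

# List repositories in the local offline registry
curl -s https://dockerhub.kubekey.local/v2/_catalog
# List tags of one image under the configured kubesphereio namespace
curl -s https://dockerhub.kubekey.local/v2/kubesphereio/pause/tags/list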

Deploying KubeSphere

Run the following command to install it:

helm upgrade --install -n kubesphere-system --create-namespace ks-core ks-core-1.1.3.tgz \
--set global.imageRegistry=dockerhub.kubekey.local/ks \
--set extension.imageRegistry=dockerhub.kubekey.local/ks \
--set ksExtensionRepository.image.tag=v1.1.2 \
--debug \
--wait

The installation succeeded if the following message is returned at the end and the login page can be opened:

NOTES:
Thank you for choosing KubeSphere Helm Chart.

Please be patient and wait for several seconds for the KubeSphere deployment to complete.

1. Wait for Deployment Completion

Confirm that all KubeSphere components are running by executing the following command:

kubectl get pods -n kubesphere-system
2. Access the KubeSphere Console

Once the deployment is complete, you can access the KubeSphere console using the following URL:

http://192.168.50.71:30880

3. Login to KubeSphere Console

Use the following credentials to log in:

Account: admin
Password: P@88w0rd

NOTE: It is highly recommended to change the default password immediately after the first login.
For additional information and details, please visit https://kubesphere.io.
root@node1:~# exit
logout
Connection to 192.168.50.71 closed.
[root@localhost ~]# curl http://192.168.50.71:30880
Redirecting to <a href="/login">/login</a>.

Differences When Deploying on CentOS 7

Dependencies

On CentOS 7, the following dependencies should be installed in advance; they can be downloaded from the distribution repositories (a fetch sketch follows the install command):

[root@node1 CentOS7-Initial]# ls -la
total 9160
drwxr-xr-x 2 root root 4096 Feb 28 11:35 .
dr-xr-x---. 9 root root 4096 Feb 28 11:39 ..
-rw-r--r-- 1 root root 191000 Feb 28 11:31 conntrack-tools-1.4.4-7.el7.x86_64.rpm
-rw-r--r-- 1 root root 125448 Feb 28 11:31 ebtables-2.0.10-16.el7.x86_64.rpm
-rw-r--r-- 1 root root 39568 Feb 28 11:31 ipset-7.1-1.el7.x86_64.rpm
-rw-r--r-- 1 root root 65944 Feb 28 11:31 libmspack-0.5-0.8.alpha.el7.x86_64.rpm
-rw-r--r-- 1 root root 50076 Feb 28 11:31 libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm
-rw-r--r-- 1 root root 247560 Feb 28 11:31 libxslt-1.1.28-6.el7.x86_64.rpm
-rw-r--r-- 1 root root 506040 Feb 28 11:31 openssl-1.0.2k-26.el7_9.x86_64.rpm
-rw-r--r-- 1 root root 32468 Feb 28 11:31 perl-Error-0.17020-2.el7.noarch.rpm
-rw-r--r-- 1 root root 57432 Feb 28 11:31 perl-Git-1.8.3.1-25.el7_9.noarch.rpm
-rw-r--r-- 1 root root 31916 Feb 28 11:31 perl-TermReadKey-2.30-20.el7.x86_64.rpm
-rw-r--r-- 1 root root 296632 Feb 28 11:31 socat-1.7.3.2-2.el7.x86_64.rpm
-rw-r--r-- 1 root root 1106008 Feb 28 11:31 vim-enhanced-7.4.629-8.el7_9.x86_64.rpm
-rw-r--r-- 1 root root 560272 Feb 28 11:31 wget-1.14-18.el7_6.1.x86_64.rpm
-rw-r--r-- 1 root root 181628 Feb 28 11:31 xmlsec1-1.2.20-8.el7_9.x86_64.rpm
-rw-r--r-- 1 root root 77988 Feb 28 11:31 xmlsec1-openssl-1.2.20-8.el7_9.x86_64.rpm

Installation command:

rpm -ivh *.rpm --nodeps --force
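If you still need to collect these RPMs, one option is to fetch them on a connected CentOS 7 machine with yumdownloader from the yum-utils package; a sketch that names only a few of the packages listed above (exact versions will vary with the minor release):

yum install -y yum-utils
# Download the named packages plus their dependency closure into the current directory
yumdownloader --resolve conntrack-tools socat ipset ebtables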

Installing the CA Certificate

Copy the /etc/ssl/registry/ssl/ca.crt certificate into the /etc/pki/ca-trust/source/anchors folder on every node, then run /bin/update-ca-trust.
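A minimal distribution sketch, run from the Master node (worker IPs assumed from the environment table):

# Push the registry CA to each node and refresh its trust store
for ip in 192.168.50.72 192.168.50.73; do
  scp /etc/ssl/registry/ssl/ca.crt root@${ip}:/etc/pki/ca-trust/source/anchors/
  ssh root@${ip} /bin/update-ca-trust
done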

On a node where the CA certificate has been installed, you can test whether it took effect with:

curl https://dockerhub.kubekey.local

If the certificate is installed correctly, the command returns nothing; otherwise it reports a certificate verification failure.
