Deploying a Kubernetes Platform with the kk Tool on CentOS 7.9 in an Air-Gapped Environment (amd64)



Preface

A project had to be deployed at a client site with no internet access, so I prepared an offline deployment package in advance.

Update: CentOS 7.6 works the same way; only the offlinerpms.tar package differs, all the other files are identical.


I. Environment List

Server architecture: amd64
OS ISO: CentOS-7-x86_64-Minimal-2009.iso
Kubernetes version: v1.23.6
kk version: 3.1.10
Harbor: harbor-online-installer-v2.5.0.tgz
docker-compose: 1.23.2

II. Approach

Two steps: first, download all the required files and images on machine A, which has internet access; then test and verify the deployment on machine B, which does not.
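If the two machines can reach each other on the internal network, the finished files can simply be copied over with scp (a sketch; adjust the source paths and target IP to your setup):

scp offlinerpms.tar harbor-image.tar kk kubesphereio-image.tar kubesphere.tar.gz \
    harbor-online-installer-v2.5.0.tgz docker-compose create_project_harbor.sh \
    root@192.168.150.152:/data/install/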

III. Environment Preparation

Ask the network administrator to enable internet access for the IP range 192.168.150.140-149 and to keep it disabled for 192.168.150.150-159.

IV. Preparing Files on the Internet-Connected Machine

Server IP: 192.168.150.141
A fresh VM installed from the CentOS-7-x86_64-Minimal-2009.iso image.
Files to prepare:

  • RPM package archive: offlinerpms.tar
  • Harbor image archive: harbor-image.tar
  • kk binary: kk
  • Docker images needed by the k8s installation: kubesphereio-image.tar
  • k8s offline installation package: kubesphere.tar.gz
  • Harbor installer: harbor-online-installer-v2.5.0.tgz
  • docker-compose binary: docker-compose
  • Harbor project-creation script: create_project_harbor.sh

1. Download the required RPM packages

First, point the fresh VM at the Aliyun mirrors:

mkdir -p /etc/yum.repos.d/CentOS-Base.repo.backup;
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup;
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache;
yum install -y yum-utils;  # provides yum-config-manager, used on the next line
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo;

Download the RPM packages:

mkdir -p /root/offlinerpms
# Required tooling
yum install -y yum-utils
# Basic utilities
yum install --downloadonly --downloaddir=/root/offlinerpms wget ntp vim
# Base packages needed by k8s
yum install --downloadonly --downloaddir=/root/offlinerpms socat conntrack yum-utils epel-release
# Docker packages
yum install --downloadonly --downloaddir=/root/offlinerpms docker-ce docker-ce-cli

After downloading, pack them into a tar archive:

cd /root/
tar -cvf offlinerpms.tar offlinerpms/

2. Prepare the images needed by Harbor

# Pull the images
docker pull docker.m.daocloud.io/goharbor/prepare:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-log:v2.5.0
docker pull docker.m.daocloud.io/goharbor/registry-photon:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-registryctl:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-db:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-core:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-portal:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-jobservice:v2.5.0
docker pull docker.m.daocloud.io/goharbor/redis-photon:v2.5.0
docker pull docker.m.daocloud.io/goharbor/nginx-photon:v2.5.0
# Retag them
docker tag docker.m.daocloud.io/goharbor/prepare:v2.5.0  goharbor/prepare:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-log:v2.5.0 goharbor/harbor-log:v2.5.0
docker tag docker.m.daocloud.io/goharbor/registry-photon:v2.5.0 goharbor/registry-photon:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-registryctl:v2.5.0 goharbor/harbor-registryctl:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-db:v2.5.0 goharbor/harbor-db:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-core:v2.5.0 goharbor/harbor-core:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-portal:v2.5.0 goharbor/harbor-portal:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-jobservice:v2.5.0 goharbor/harbor-jobservice:v2.5.0
docker tag docker.m.daocloud.io/goharbor/redis-photon:v2.5.0 goharbor/redis-photon:v2.5.0
docker tag docker.m.daocloud.io/goharbor/nginx-photon:v2.5.0 goharbor/nginx-photon:v2.5.0
# Save the images
docker save -o harbor-image.tar \
  goharbor/prepare:v2.5.0 goharbor/harbor-log:v2.5.0 goharbor/registry-photon:v2.5.0 \
  goharbor/harbor-registryctl:v2.5.0 goharbor/harbor-db:v2.5.0 goharbor/harbor-core:v2.5.0 \
  goharbor/harbor-portal:v2.5.0 goharbor/harbor-jobservice:v2.5.0 goharbor/redis-photon:v2.5.0 \
  goharbor/nginx-photon:v2.5.0

3. Kubernetes image files

Generate the manifest-sample.yaml file to get the list of required Docker images. My approach is to prepare the images manually: pull every image listed in the file by hand, then delete all of the image entries from manifest-sample.yaml so the later artifact export skips them.

chmod a+x kk
export KKZONE=cn
./kk create manifest --with-kubernetes v1.23.6 --arch amd64 --with-registry "docker registry"

The generated manifest-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.6
  components:
    helm: 
      version: v3.14.3
    cni: 
      version: v1.2.0
    etcd: 
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl: 
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  registry:
    auths: {}

Pull the images:

docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0

Pack the images into a tar archive:

docker save -o kubesphereio-image.tar \
  registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2 \
  registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable \
  registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
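Instead of maintaining the pull and save lists by hand, the image list can be extracted from manifest-sample.yaml and processed in a loop; a minimal sketch, assuming the manifest layout shown above:

# Collect the lines under images: ("  - <image>") and pull/save everything in one pass
images=$(awk '/^  images:/{f=1;next} /^  registry:/{f=0} f && /^  - /{print $2}' manifest-sample.yaml)
for img in $images; do docker pull "$img"; done
docker save -o kubesphereio-image.tar $images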

4. Build the offline installation package

Edit manifest-sample.yaml, removing everything under images::

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.6
  components:
    helm: 
      version: v3.14.3
    cni: 
      version: v1.2.0
    etcd: 
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl: 
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  registry:
    auths: {}

Build the offline package:

export KKZONE=cn
chmod a+x kk
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz

5. Harbor project-creation script

create_project_harbor.sh:

#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="http://XX.XX.XX.XX"  # 或修改为实际镜像仓库地址
user="admin"
passwd="Harbor12345"

harbor_projects=(
        ks
        kubesphere
        kubesphereio
        coredns
        calico
        flannel
        cilium
        hybridnetdev
        kubeovn
        openebs
        library
        plndr
        jenkins
        argoproj
        dexidp
        openpolicyagent
        curlimages
        grafana
        kubeedge
        nginxinc
        prom
        kiwigrid
        minio
        opensearchproject
        istio
        jaegertracing
        timberio
        prometheus-operator
        jimmidyson
        elastic
        thanosio
        brancz
        prometheus
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # note the -k at the end of the curl command
done
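All eight files are now ready. Before shipping the package to the offline site, it can be worth recording checksums so the copy can be verified on the other side (optional):

sha256sum offlinerpms.tar harbor-image.tar kk kubesphereio-image.tar kubesphere.tar.gz \
    harbor-online-installer-v2.5.0.tgz docker-compose create_project_harbor.sh > SHA256SUMS
# On the offline machine, after uploading everything to /data/install/:
# cd /data/install && sha256sum -c SHA256SUMS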

V. Deploying a Single-Node Cluster in the Air-Gapped Environment

Server IP: 192.168.150.152
A fresh VM installed from the CentOS-7-x86_64-Minimal-2009.iso image.

Important: be sure to configure DNS, pointing either at an internal DNS server or at public resolvers such as 8.8.8.8 and 114.114.114.114; without it, nodelocaldns will report errors.
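A minimal sketch of the DNS setup via NetworkManager (assuming the connection is named ens33; substitute your interface name and your internal DNS server):

nmcli con mod ens33 ipv4.dns "114.114.114.114 8.8.8.8"
nmcli con up ens33
# Verify:
cat /etc/resolv.conf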

Upload the prepared files to /data/install/:

  • RPM package archive: offlinerpms.tar
  • Harbor image archive: harbor-image.tar
  • kk binary: kk
  • Docker images needed by the k8s installation: kubesphereio-image.tar
  • k8s offline installation package: kubesphere.tar.gz
  • Harbor installer: harbor-online-installer-v2.5.0.tgz
  • docker-compose binary: docker-compose
  • Harbor project-creation script: create_project_harbor.sh

1. Install the base environment

cd /data/install/
tar -xvf offlinerpms.tar
cd /data/install/offlinerpms
# Configure docker: insecure registry, systemd cgroup driver, log rotation
mkdir -p /etc/docker/;
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": [
    "http://192.168.150.152:80"
  ],
    "exec-opts":["native.cgroupdriver=systemd"],
    "log-driver":"json-file",
    "log-opts":{
        "max-size":"100m"
    }
}
EOF
yum localinstall -y *.rpm
# Point ntpd at the Aliyun time servers; on an internal network, use your internal NTP server instead
sudo sed -i 's/^server /#server /' /etc/ntp.conf;
sed -i '/3.centos.pool.ntp.org iburst/a server time1.aliyun.com prefer\nserver time2.aliyun.com\nserver time3.aliyun.com\nserver time4.aliyun.com\nserver time5.aliyun.com\nserver time6.aliyun.com\nserver time7.aliyun.com' /etc/ntp.conf;
# Restart ntpd and enable it at boot
systemctl enable ntpd;
systemctl restart ntpd;
timedatectl set-timezone "Asia/Shanghai";
ntpq -p;
hwclock;
# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config;
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config;
# Stop and disable the firewall
systemctl stop firewalld.service;
systemctl disable firewalld.service;
# Start docker and enable it at boot
systemctl restart docker;
systemctl enable  docker;
# Reboot the server
reboot
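After the reboot, it is worth confirming that docker picked up the systemd cgroup driver and that SELinux is really off (a quick sanity check):

docker info | grep -i cgroup   # should show: Cgroup Driver: systemd
getenforce                     # should show: Disabled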

2. Install Harbor

Copy docker-compose into place and load the Harbor images:

cd /data/install
\cp  /data/install/docker-compose /usr/local/bin/
chmod a+x /usr/local/bin/docker-compose

docker-compose --version

# Create the data directory
mkdir -p /data/harbor/data
# Load the images
docker load -i harbor-image.tar

cd /data/install/
tar -xvf harbor-online-installer-v2.5.0.tgz
cd /data/install/harbor/

Edit the Harbor configuration file harbor.yml:

hostname: 192.168.150.152
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
data_volume: /data/harbor/data

Install Harbor:

# Create the data directory
mkdir -p /data/harbor/data
cd /data/install/harbor/
./install.sh
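When install.sh finishes, all Harbor containers should be up and healthy; a quick check from the installer directory:

cd /data/install/harbor/
docker-compose ps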

Create the projects in Harbor:

cd /data/install
# Set the url in create_project_harbor.sh to this registry, e.g.:
# url="http://192.168.150.152"
chmod a+x create_project_harbor.sh
./create_project_harbor.sh
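The result can be verified against the Harbor v2 API (assuming the default admin credentials above):

curl -s -u admin:Harbor12345 "http://192.168.150.152/api/v2.0/projects?page_size=50" | grep '"name"'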

3. Prepare the k8s images

Load the images and log in to the registry:

cd /data/install
docker load -i kubesphereio-image.tar
docker login 192.168.150.152:80

Retag the images for the local registry:

docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6                           192.168.150.152:80/kubesphereio/pause:3.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6              192.168.150.152:80/kubesphereio/kube-apiserver:v1.23.6 
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6     192.168.150.152:80/kubesphereio/kube-controller-manager:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6              192.168.150.152:80/kubesphereio/kube-scheduler:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6                  192.168.150.152:80/kubesphereio/kube-proxy:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6                       192.168.150.152:80/kubesphereio/coredns:1.8.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20          192.168.150.152:80/kubesphereio/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4            192.168.150.152:80/kubesphereio/kube-controllers:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4                         192.168.150.152:80/kubesphereio/cni:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4                        192.168.150.152:80/kubesphereio/node:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4          192.168.150.152:80/kubesphereio/pod2daemon-flexvol:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4                       192.168.150.152:80/kubesphereio/typha:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3                     192.168.150.152:80/kubesphereio/flannel:v0.21.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2           192.168.150.152:80/kubesphereio/flannel-cni-plugin:v1.1.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3                      192.168.150.152:80/kubesphereio/cilium:v1.15.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3            192.168.150.152:80/kubesphereio/operator-generic:v1.15.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6                    192.168.150.152:80/kubesphereio/hybridnet:v0.8.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10                   192.168.150.152:80/kubesphereio/kube-ovn:v1.10.10
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8                     192.168.150.152:80/kubesphereio/multus-cni:v3.8
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0           192.168.150.152:80/kubesphereio/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0                   192.168.150.152:80/kubesphereio/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine                192.168.150.152:80/kubesphereio/haproxy:2.9.6-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2                     192.168.150.152:80/kubesphereio/kube-vip:v0.7.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable                  192.168.150.152:80/kubesphereio/kata-deploy:stable
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0      192.168.150.152:80/kubesphereio/node-feature-discovery:v0.10.0

Push them to the Harbor registry:

docker push    192.168.150.152:80/kubesphereio/pause:3.6
docker push    192.168.150.152:80/kubesphereio/kube-apiserver:v1.23.6 
docker push    192.168.150.152:80/kubesphereio/kube-controller-manager:v1.23.6
docker push    192.168.150.152:80/kubesphereio/kube-scheduler:v1.23.6
docker push    192.168.150.152:80/kubesphereio/kube-proxy:v1.23.6
docker push    192.168.150.152:80/kubesphereio/coredns:1.8.6
docker push    192.168.150.152:80/kubesphereio/k8s-dns-node-cache:1.22.20
docker push    192.168.150.152:80/kubesphereio/kube-controllers:v3.27.4
docker push    192.168.150.152:80/kubesphereio/cni:v3.27.4
docker push    192.168.150.152:80/kubesphereio/node:v3.27.4
docker push    192.168.150.152:80/kubesphereio/pod2daemon-flexvol:v3.27.4
docker push    192.168.150.152:80/kubesphereio/typha:v3.27.4
docker push    192.168.150.152:80/kubesphereio/flannel:v0.21.3
docker push    192.168.150.152:80/kubesphereio/flannel-cni-plugin:v1.1.2
docker push    192.168.150.152:80/kubesphereio/cilium:v1.15.3
docker push    192.168.150.152:80/kubesphereio/operator-generic:v1.15.3
docker push    192.168.150.152:80/kubesphereio/hybridnet:v0.8.6
docker push    192.168.150.152:80/kubesphereio/kube-ovn:v1.10.10
docker push    192.168.150.152:80/kubesphereio/multus-cni:v3.8
docker push    192.168.150.152:80/kubesphereio/provisioner-localpv:3.3.0
docker push    192.168.150.152:80/kubesphereio/linux-utils:3.3.0
docker push    192.168.150.152:80/kubesphereio/haproxy:2.9.6-alpine
docker push    192.168.150.152:80/kubesphereio/kube-vip:v0.7.2
docker push    192.168.150.152:80/kubesphereio/kata-deploy:stable
docker push    192.168.150.152:80/kubesphereio/node-feature-discovery:v0.10.0
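The tag-and-push list above can also be driven by a loop over the loaded images; a minimal sketch, assuming everything sits under the kubesphereio namespace as shown:

src="registry.cn-beijing.aliyuncs.com/kubesphereio"
dst="192.168.150.152:80/kubesphereio"
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep "^${src}/"); do
    docker tag "$img" "${dst}/${img##*/}"    # ${img##*/} keeps just "name:tag"
    docker push "${dst}/${img##*/}"
done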

4. Install Kubernetes

Generate the deployment configuration:

cd /data/install
export KKZONE=cn
./kk create config --with-kubernetes v1.23.6  

Edit the deployment configuration /data/install/config-sample.yaml:
adjust hosts and roleGroups to match your nodes, and the registry section to match your Harbor instance.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: demo, address: 192.168.150.152, internalAddress: 192.168.150.152, user: root, password: "smartcore"}
  roleGroups:
    etcd:
    - demo
    control-plane: 
    - demo
    worker:
    - demo
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.6
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "192.168.150.152:80":
        username: admin
        password: Harbor12345
        skipTLSVerify: true
    privateRegistry: "192.168.150.152:80"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

Create the cluster:

cd /data/install
export KKZONE=cn
# Add this host to /etc/hosts
echo 192.168.150.152 demo >> /etc/hosts
# Create the cluster
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --skip-push-images -y

Deployment succeeded; check the result:

kubectl get pod -A -o wide

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP                NODE   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-84f449dd8-lqn6w   1/1     Running   0          35s   10.233.93.2       demo   <none>           <none>
kube-system   calico-node-p29jj                         1/1     Running   0          35s   192.168.150.152   demo   <none>           <none>
kube-system   coredns-7fcdc7c747-5g4p6                  1/1     Running   0          35s   10.233.93.1       demo   <none>           <none>
kube-system   coredns-7fcdc7c747-92kgl                  1/1     Running   0          35s   10.233.93.3       demo   <none>           <none>
kube-system   kube-apiserver-demo                       1/1     Running   0          49s   192.168.150.152   demo   <none>           <none>
kube-system   kube-controller-manager-demo              1/1     Running   0          49s   192.168.150.152   demo   <none>           <none>
kube-system   kube-proxy-9zc2d                          1/1     Running   0          35s   192.168.150.152   demo   <none>           <none>
kube-system   kube-scheduler-demo                       1/1     Running   0          50s   192.168.150.152   demo   <none>           <none>
kube-system   nodelocaldns-xhgmv                        1/1     Running   0          35s   192.168.150.152   demo   <none>           <none>

VI. Deploying a Multi-Node Cluster in the Air-Gapped Environment

For a multi-node deployment, first bring up Harbor on the deployment node as above; then prepare the base environment on each additional node (as in step V.1), log in to Harbor from every node with docker login, and only then run kk against a config that lists all the nodes, as sketched below.
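A minimal sketch of the hosts/roleGroups section of config-sample.yaml for a three-node cluster (node names, addresses, and passwords are placeholders; adjust to your environment):

  hosts:
  - {name: master1, address: 192.168.150.152, internalAddress: 192.168.150.152, user: root, password: "YourPassword"}
  - {name: worker1, address: 192.168.150.153, internalAddress: 192.168.150.153, user: root, password: "YourPassword"}
  - {name: worker2, address: 192.168.150.154, internalAddress: 192.168.150.154, user: root, password: "YourPassword"}
  roleGroups:
    etcd:
    - master1
    control-plane:
    - master1
    worker:
    - worker1
    - worker2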

Summary

KubeSphere has gone closed source, so enjoy it while it lasts.

