Environment preparation. This guide uses three machines; adjust the count and sizing to your needs.
Hostname | IP | Role | Spec | OS | Components |
---|---|---|---|---|---|
k8s-master | 192.168.44.129 | master | 4C8G (higher in production) | CentOS 7.9 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubectl, kubeadm, kubelet, kube-proxy, flannel, docker |
k8s-node1 | 192.168.44.154 | node | 2C2G | CentOS 7.9 | kubeadm, kubectl, kubelet, kube-proxy, flannel, docker |
k8s-node2 | 192.168.44.155 | node | 2C2G | CentOS 7.9 | kubeadm, kubectl, kubelet, kube-proxy, flannel, docker |
Run the following on all three hosts.
1. Environment initialization
cat >>/etc/hosts <<'EOF'
192.168.44.129 k8s-master
192.168.44.154 k8s-node1
192.168.44.155 k8s-node2
EOF
ping -c 2 k8s-master
ping -c 2 k8s-node1
ping -c 2 k8s-node2
2. Firewall initialization
# Stop firewalld and NetworkManager
systemctl stop firewalld NetworkManager
# Disable them at boot
systemctl disable firewalld NetworkManager
# Disable SELinux
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
# Flush iptables rules
iptables -F
iptables -X
iptables -Z
# Set the FORWARD chain policy to ACCEPT
iptables -P FORWARD ACCEPT
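Optional check that the firewall and SELinux changes took effect (not part of the original steps):
getenforce                      # should print Permissive now, Disabled after the next reboot
systemctl is-active firewalld   # should print inactive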
3. Disable swap
Kubernetes refuses to run with swap enabled by default, so turn it off.
swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
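A quick optional check; after swapoff the Swap line should show all zeros:
free -m | grep -i swap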
4. Configure the yum repositories
# Back up the existing repo files so they do not interfere with the installation
cd /etc/yum.repos.d/
mkdir bak
mv *.repo bak/
# Configure the yum mirrors
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/*.repo
yum clean all && yum makecache fast
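Optional check that the Aliyun base and EPEL repos are now active:
yum repolist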
5. NTP configuration
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
# Edit the chrony config file and add ntp.aliyun.com as a time source (see the sketch after this step)
# Sync the host clock once
ntpdate -u ntp.aliyun.com
hwclock -w
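One way to do the config edit mentioned above; a minimal sketch, assuming the stock /etc/chrony.conf shipped with the chrony package:
sed -i 's/^server /#server /' /etc/chrony.conf          # comment out the default pool servers
echo 'server ntp.aliyun.com iburst' >> /etc/chrony.conf
systemctl restart chronyd
chronyc sources -v                                      # confirm ntp.aliyun.com is being used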
6. Tune Linux kernel parameters to enable packet forwarding
# Cross-host container traffic goes through iptables, i.e. kernel-level packet forwarding
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
# Load the kernel parameters from the config file
sysctl -p /etc/sysctl.d/k8s.conf
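The modprobe above does not survive a reboot; one common way to make it persistent (an extra step, not in the original guide) is a modules-load.d entry:
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF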
7. Install the Docker runtime
yum remove docker docker-common docker-selinux docker-engine -y
curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum list docker-ce --showduplicates
yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 -y
# Configure a Docker registry mirror and switch the cgroup driver to systemd (recommended by Kubernetes); otherwise kubeadm init will report errors.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
"registry-mirrors": ["https://docker.m.daocloud.io"],
"exec-opts":["native.cgroupdriver=systemd"]
}
EOF
# "exec-opts":["native.cgroupdriver=systemd"] 这个一定要配置,否则会跟k8s中kubelet 冲突,kubelet 无法启动
#启动
systemctl start docker && systemctl enable docker
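To confirm the systemd cgroup driver is actually in effect, an optional check:
docker info | grep -i 'cgroup driver'    # should print: Cgroup Driver: systemd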
# Check the version
docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:17:57 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:16:33 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.33
  GitCommit:        d2d58213f83a351ca8f528a95fbd145f5654e957
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
8. Install kubeadm, the cluster initialization tool (run on all nodes)
# Tools used to bootstrap the Kubernetes cluster:
# kubelet-1.19.3  # the node agent; creates, deletes, and manages pods on each machine (pods can run on the master or on worker nodes)
# kubeadm-1.19.3  # bootstraps a v1.19.3 cluster and automatically pulls the core component images
# kubectl-1.19.3  # the command-line client for managing the cluster and talking to the API server
Run on all machines:
cat init-k8s.sh
#!/bin/bash
# Set up the Aliyun mirrors
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache
#yum list kubeadm --showduplicates   # lists the Kubernetes versions available from this Aliyun repo
# Install the pinned version kubeadm-1.19.3; the kubeadm version determines which version of the cluster component images gets pulled
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 ipvsadm -y
## Check the kubeadm version; the cluster it will initialize is v1.19.3
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
## Enable kubelet and docker at boot
systemctl enable kubelet
systemctl enable docker
Run on k8s-master
9. Initialize the k8s-master control plane (master node only)
# kubeadm init with the parameters below
kubeadm init \
--apiserver-advertise-address=192.168.44.129 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.3 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.2.0.0/16 \
--service-dns-domain=cluster.local \
--ignore-preflight-errors=Swap \
--ignore-preflight-errors=NumCPU
# Parameter descriptions
kubeadm init \
--apiserver-advertise-address=192.168.44.129 \ # the address the API server advertises, i.e. the k8s-master IP
--image-repository registry.aliyuncs.com/google_containers \ # pull the Kubernetes images from Aliyun; the default upstream registry is often unreachable
--kubernetes-version v1.19.3 \ # must match the installed kubeadm version
--service-cidr=10.1.0.0/16 \ # Service network CIDR: the virtual IP range used for service discovery
--pod-network-cidr=10.2.0.0/16 \ # pod network CIDR; must match the flannel network configured later
--service-dns-domain=cluster.local \ # DNS domain suffix for Service resources
--ignore-preflight-errors=Swap \ # ignore the swap preflight error
--ignore-preflight-errors=NumCPU # ignore the CPU-count preflight error
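Optionally, the control-plane images can be pre-pulled before running init, reusing the same repository and version flags (not part of the original steps):
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3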
10. k8s-master initialization output
# The k8s-master control plane was installed successfully
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
# Create the kubeconfig for the cluster
# It records where the default TLS certificates live, the API server address, and so on
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
======================================
# Pods are spread across multiple machines and need to reach each other, so a cluster network must be deployed; we use the flannel network plugin.
# Install it and it is ready to use.
You should now deploy a pod network to the cluster.
==================================
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Use the command below to join the k8s-node machines to the cluster
Then you can join any number of worker nodes by running the following on each as root:
===================================================
# The join command for adding nodes to the cluster
kubeadm join 192.168.44.129:6443 --token 6uyfck.k8ztc3txxd719ans \
--discovery-token-ca-cert-hash sha256:7593583bab082c416306738303b75e079059c084a69eafd597ec63f6780920df
====================================
11. Join the k8s-node machines to the cluster
Run the following on k8s-node1 and k8s-node2:
# k8s-node1
kubeadm join 192.168.44.129:6443 --token 6uyfck.k8ztc3txxd719ans \
--discovery-token-ca-cert-hash sha256:7593583bab082c416306738303b75e079059c084a69eafd597ec63f6780920df
# k8s-node2
kubeadm join 192.168.44.129:6443 --token 6uyfck.k8ztc3txxd719ans \
--discovery-token-ca-cert-hash sha256:7593583bab082c416306738303b75e079059c084a69eafd597ec63f6780920df
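If the token above has expired (kubeadm join tokens are valid for 24 hours by default) or the command was lost, a fresh join command can be printed on the master:
kubeadm token create --print-join-command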
# k8s-master
# Check node status
kubectl get nodes
# The nodes stay in NotReady status until flannel is installed
# Show more detailed information
kubectl get nodes -owide
At this point the node machines can communicate with the master via the kubelet process.
# k8s-node1
netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 1985/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1152/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1155/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1683/master
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 51885/sshd: root@pt
tcp 0 0 127.0.0.1:38598 0.0.0.0:* LISTEN 51535/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 51535/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 52070/kube-proxy
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 718/rpcbind
tcp6 0 0 :::22 :::* LISTEN 1152/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1155/cupsd
tcp6 0 0 ::1:25 :::* LISTEN 1683/master
tcp6 0 0 ::1:6010 :::* LISTEN 51885/sshd: root@pt
tcp6 0 0 :::10250 :::* LISTEN 51535/kubelet
tcp6 0 0 :::111 :::* LISTEN 718/rpcbind
tcp6 0 0 :::10256 :::* LISTEN 52070/kube-proxy
udp 0 0 0.0.0.0:893 0.0.0.0:* 718/rpcbind
udp 0 0 192.168.122.1:53 0.0.0.0:* 1985/dnsmasq
udp 0 0 0.0.0.0:67 0.0.0.0:* 1985/dnsmasq
udp 0 0 0.0.0.0:68 0.0.0.0:* 883/dhclient
udp 0 0 0.0.0.0:68 0.0.0.0:* 879/dhclient
udp 0 0 0.0.0.0:111 0.0.0.0:* 718/rpcbind
udp 0 0 0.0.0.0:5353 0.0.0.0:* 730/avahi-daemon: r
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* 740/chronyd
udp 0 0 0.0.0.0:49891 0.0.0.0:* 730/avahi-daemon: r
udp6 0 0 :::893 :::* 718/rpcbind
udp6 0 0 :::111 :::* 718/rpcbind
udp6 0 0 ::1:323 :::* 740/chronyd
# Check kubelet status
systemctl status kubelet
12. Deploy the network plugin
Run on k8s-master
# 1. Download the network plugin manifest (YAML file)
https://github.com/coreos/flannel.git
https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml
# 2. Apply this YAML on the master node; it creates the flannel pods
# 3. If the pod network needs to be changed, edit the manifest first
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Kubernetes resources are all created from YAML files like this
grep 'Network' -A 5 kube-flannel.yml
# Change 1: set the Network CIDR to the --pod-network-cidr used during init, otherwise pod networking will not work (a sed sketch follows this snippet)
"Network": "10.2.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
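One way to make that change; a sketch, assuming the downloaded manifest still carries flannel's default CIDR of 10.244.0.0/16:
sed -i 's#10.244.0.0/16#10.2.0.0/16#g' kube-flannel.yml
grep '"Network"' kube-flannel.yml    # verify the new CIDR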
# Change 2: cross-host container traffic ultimately leaves through the host's physical NIC.
# Tell flannel which physical interface to use (replace ens33 with your actual NIC name; see the check after this snippet)
containers:
- name: kube-flannel
  image: ghcr.io/flannel-io/flannel:v0.27.2
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - -iface=ens33
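If you are not sure what the interface is called, one way to find the NIC that carries the node's IP (using this guide's 192.168.44.x addressing):
ip -o -4 addr show | grep 192.168.44    # the second column is the interface name, e.g. ens33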
# Apply this YAML with kubectl to create the resources
kubectl create -f ./kube-flannel.yml
# 安装的信息
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Check the flannel containers running on this machine
docker ps |grep flannel
cb51f56410cc f2b60fe541ef "/opt/bin/flanneld -…" 4 days ago Up 4 days k8s_kube-flannel_kube-flannel-ds-hgmdk_kube-flannel_0fa2b349-665d-49e5-b53b-61b3d3f3dbce_8
00000d330cc5 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 4 days ago Up 4 days k8s_POD_kube-flannel-ds-hgmdk_kube-flannel_0fa2b349-665d-49e5-b53b-61b3d3f3dbce_9
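The same thing can be checked with kubectl; the kube-flannel namespace is the one created by the manifest above:
kubectl get pods -n kube-flannel -o wide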
- The pod network CIDR must match the range that was set during kubeadm init.
13. Check node status; output like the following means the cluster is healthy
[root@k8s-master ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 5d2h v1.19.3 192.168.44.129 <none> CentOS Linux 7 (Core) 3.10.0-1160.71.1.el7.x86_64 docker://19.3.15
k8s-node1 Ready <none> 5d1h v1.19.3 192.168.44.154 <none> CentOS Linux 7 (Core) 3.10.0-1160.71.1.el7.x86_64 docker://19.3.15
k8s-node2 Ready <none> 5d1h v1.19.3 192.168.44.155 <none> CentOS Linux 7 (Core) 3.10.0-1160.71.1.el7.x86_64 docker://19.3.15
14. Configure kubectl command completion (optional)
There are a lot of kubectl subcommands, so set up bash completion.
Node to run on: k8s-master
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
15. Usage
Manually create a pod: run an nginx pod
# Command help
kubectl run --help
# Run an nginx:1.28 pod in the background and check the result
kubectl run nginx128 --image=nginx:1.28
# How to view pod information
kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx128 1/1 Running 0 4d 10.2.2.11 k8s-node2 <none> <none>
How to change the nginx index page inside the pod
# Modify it
kubectl exec nginx128 -- sh -c "echo 'k8s nginx pod' >/usr/share/nginx/html/index.html "
# Access it
curl http://10.2.2.11
k8s nginx pod
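A few more standard kubectl commands that are handy for inspecting and cleaning up this pod (not part of the original walkthrough):
kubectl describe pod nginx128   # scheduling details and events
kubectl logs nginx128           # nginx access log; the curl request above should appear here
kubectl delete pod nginx128     # remove the pod when you are done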