First, upgrade the CentOS 7 kernel, preferably to 4.4 or later: the default 3.10 kernel has known bugs when running Docker at scale.
| Software/System | Version | Notes |
|---|---|---|
| CentOS | 7.9 | Minimal install |
| k8s | 1.15.1 | |
| flannel | 0.11 | |
| etcd | 3.3.10 | |
| k8s role | Hostname | Node IP | Notes |
|---|---|---|---|
| master1 + etcd1 | master1.host.com | 10.0.0.70 | master node |
| master2 + etcd2 | master2.host.com | 10.0.0.71 | |
| master3 + etcd3 | master3.host.com | 10.0.0.72 | |
| node1 | node1.host.com | 10.0.0.73 | worker node |
| node2 | node2.host.com | 10.0.0.74 | |
| haproxy1 + keepalived | haproxy1.host.com | 10.0.0.75 | load balancer (VIP: 10.0.0.80) |
| haproxy2 + keepalived | haproxy2.host.com | 10.0.0.76 | |
Run on all nodes:

```bash
yum install net-tools vim wget -y
```
Disable the firewall and SELinux, then reboot (all nodes):

```bash
systemctl stop firewalld
systemctl disable firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
reboot
```
Set the timezone to Asia/Shanghai (all nodes):

```bash
\cp -rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
```
Disable swap now and comment it out of fstab (all nodes):

```bash
swapoff -a
sed -i "/ swap / s/^\(.*\)$/#\1/g" /etc/fstab
```
Install ntpdate, sync time against ntp.aliyun.com, and keep it synced via cron (all nodes):

```bash
yum install -y ntpdate
ntpdate -u ntp.aliyun.com
echo "*/5 * * * * ntpdate ntp.aliyun.com >/dev/null 2>&1" >> /etc/crontab
systemctl start crond.service
systemctl enable crond.service
```

Next, populate /etc/hosts on every node; the entries follow directly from the host table above (see the sketch below).
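The body of this heredoc was not preserved; a sketch reconstructed from the host table (the short-name aliases are assumptions):

```bash
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain
10.0.0.70   master1.host.com master1
10.0.0.71   master2.host.com master2
10.0.0.72   master3.host.com master3
10.0.0.73   node1.host.com node1
10.0.0.74   node2.host.com node2
10.0.0.75   haproxy1.host.com haproxy1
10.0.0.76   haproxy2.host.com haproxy2
EOF
```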
Distribute root's SSH key to every host. On master1, running the script below is enough (it assumes /root/.ssh/id_rsa.pub already exists and the root password is 123456):

```bash
yum install sshpass -y
# Quote the heredoc delimiter so ${IP} and ${node} are not expanded at write time
cat > scp.sh << 'EOF'
#!/bin/sh
IP="10.0.0.70 10.0.0.71 10.0.0.72 10.0.0.73 10.0.0.74 10.0.0.75 10.0.0.76"
for node in ${IP}
do
  sshpass -p123456 ssh-copy-id -i /root/.ssh/id_rsa.pub ${node} -o StrictHostKeyChecking=no &>/dev/null
  if [ $? -eq 0 ];then
    echo "${node} key copied"
  else
    echo "${node} key copy failed"
  fi
done
EOF
```

On the master and node machines, set the kernel parameters:
Write the Kubernetes kernel parameters to /etc/sysctl.d/kubernetes.conf and apply them (a sketch of the typical body follows).
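The heredoc body was not preserved; a minimal sketch with values that are standard for flannel-based clusters (assumed, not taken from the original):

```bash
modprobe br_netfilter   # required for the bridge-nf settings below
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system   # load every file under /etc/sysctl.d
```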
Install keepalived on both haproxy machines:

```bash
yum install -y keepalived
```

Configure haproxy1 as the keepalived MASTER by writing /etc/keepalived/keepalived.conf, then configure haproxy2 the same way but as BACKUP with a lower priority (a sketch follows).
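A sketch of the MASTER-side keepalived.conf carrying the VIP 10.0.0.80 (the interface name, router_id, virtual_router_id, and priorities are assumptions):

```bash
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id haproxy1
}
vrrp_instance VI_1 {
    state MASTER          # BACKUP on haproxy2
    interface eth0        # adjust to the actual NIC name
    virtual_router_id 51
    priority 100          # use a lower value, e.g. 90, on haproxy2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.80
    }
}
EOF
```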
Start keepalived on both machines:

```bash
systemctl start keepalived
systemctl enable keepalived
```

1. Upload the source packages:

```
root@haproxy1:/usr/local/src# ll
total 3792
drwxr-xr-x  4 root root      96 Jun  9 14:01 ./
drwxr-xr-x 10 root root     140 Jun  9 14:02 ../
drwxrwxr-x 13 root root    4096 Jun  9 14:16 haproxy-2.4.4/
-rw-r--r--  1 root root 3570069 May 24 10:25 haproxy-2.4.4.tar.gz
drwxr-xr-x  4 1026 ygw       58 Jun 27  2018 lua-5.3.5/
-rw-r--r--  1 root root  303543 Nov 16  2020 lua-5.3.5.tar.gz
```

2. Create symlinks:

```bash
ln -s /usr/local/src/lua-5.3.5 /usr/local/lua
ln -s /usr/local/src/haproxy-2.4.4 /usr/local/haproxy
```

3. Build Lua.

1) Install the build dependencies and compile:

```bash
yum install gcc gcc-c++ readline-devel glibc glibc-devel pcre pcre-devel openssl-devel zlib-devel systemd-devel -y
cd /usr/local/lua && make linux
```

Check the compiled version:

```
/usr/local/lua/src/lua -v
Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio
```

4. Build and install HAProxy:

```bash
cd /usr/local/src/haproxy-2.4.4 && make TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_LUA=1 LUA_INC=/usr/local/lua/src/ LUA_LIB=/usr/local/lua/src/ && make install PREFIX=/apps/haproxy
```

5. Symlink the binary so it is on PATH:

```bash
ln -s /apps/haproxy/sbin/haproxy /usr/sbin/
```

6. Check the version:

```bash
haproxy -v
```

Write /etc/haproxy/haproxy.cfg and /etc/systemd/system/haproxy.service (sketches of both follow), create the runtime user and directories, then start the service:

```bash
mkdir /etc/haproxy
mkdir /var/lib/haproxy
useradd -r -s /sbin/nologin -d /var/lib/haproxy haproxy
systemctl daemon-reload
systemctl start haproxy
systemctl enable haproxy
```
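The bodies of these two files were not preserved. A sketch, assuming the listener fronts the three apiservers on the VIP port 6443 (the timeouts and tuning values are assumptions):

```bash
# /etc/haproxy/haproxy.cfg
global
    maxconn 100000
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

listen k8s-apiserver
    # on the BACKUP node, set net.ipv4.ip_nonlocal_bind=1 so haproxy can bind the VIP
    bind 10.0.0.80:6443
    mode tcp
    balance roundrobin
    server master1 10.0.0.70:6443 check inter 2000 fall 3 rise 5
    server master2 10.0.0.71:6443 check inter 2000 fall 3 rise 5
    server master3 10.0.0.72:6443 check inter 2000 fall 3 rise 5
```

```bash
# /etc/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=network-online.target

[Service]
Type=notify
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
Restart=always

[Install]
WantedBy=multi-user.target
```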
With the load balancer ready, continue on master1. Install the cfssl toolchain:

```bash
mkdir /soft && cd /soft
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
mkdir /root/etcd && cd /root/etcd
```

On master1, create the etcd CA configuration:
```bash
cd /root/etcd
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "www": {
        "expiry": "876000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```

```bash
cd /root/etcd
cat << EOF | tee ca-csr.json
{
  "CA":{"expiry":"876000h"},
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```

Add all of the master hostnames and IPs to the hosts list of the CSR file (run on master1):
```bash
cd /root/etcd
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "master1.host.com",
    "master2.host.com",
    "master3.host.com",
    "10.0.0.70",
    "10.0.0.71",
    "10.0.0.72"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```

Generate the etcd CA:

```bash
cd /root/etcd/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

```
[root@master1 etcd]# ll
total 24
-rw-r--r-- 1 root root  289 Mar  5 07:40 ca-config.json   # CA config
-rw-r--r-- 1 root root  956 Mar  5 07:51 ca.csr           # CA signing request
-rw-r--r-- 1 root root  209 Mar  5 07:45 ca-csr.json      # CA CSR definition
-rw------- 1 root root 1679 Mar  5 07:51 ca-key.pem       # CA private key
-rw-r--r-- 1 root root 1265 Mar  5 07:51 ca.pem           # CA certificate
-rw-r--r-- 1 root root  350 Mar  5 07:48 server-csr.json
```

Sign the etcd server certificate:

```bash
cd /root/etcd/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```

```
[root@master1 etcd]# ll
total 36
-rw-r--r-- 1 root root  289 Mar  5 07:40 ca-config.json
-rw-r--r-- 1 root root  956 Mar  5 07:51 ca.csr
-rw-r--r-- 1 root root  209 Mar  5 07:45 ca-csr.json
-rw------- 1 root root 1679 Mar  5 07:51 ca-key.pem
-rw-r--r-- 1 root root 1265 Mar  5 07:51 ca.pem
-rw-r--r-- 1 root root 1086 Mar  5 07:54 server.csr
-rw-r--r-- 1 root root  350 Mar  5 07:48 server-csr.json
-rw------- 1 root root 1679 Mar  5 07:54 server-key.pem   # used by etcd clients
-rw-r--r-- 1 root root 1415 Mar  5 07:54 server.pem
```

The next set of certificates secures communication between Kubernetes components and is separate from the etcd certificates above. (master1)
Create the working directory and the Kubernetes CA config (master1):

```bash
mkdir /root/kubernetes/ && cd /root/kubernetes/
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```

The CA CSR:

```bash
cd /root/kubernetes/
cat << EOF | tee ca-csr.json
{
  "CA": { "expiry": "876000h" },
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```

The api-server CSR. Here 192.168.0.1 is the first address of the service CIDR and 192.168.0.2 is the address the cluster DNS will use later; the hosts list also carries every node IP, the VIP (10.0.0.80), all hostnames, and the standard kubernetes service names. Comments are not valid JSON, so keep annotations out of the file itself:

```bash
cd /root/kubernetes/
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "192.168.0.1",
    "127.0.0.1",
    "192.168.0.2",
    "10.0.0.70",
    "10.0.0.71",
    "10.0.0.72",
    "10.0.0.73",
    "10.0.0.74",
    "10.0.0.80",
    "master1.host.com",
    "master2.host.com",
    "master3.host.com",
    "node1.host.com",
    "node2.host.com",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```

The kube-proxy CSR:

```bash
cd /root/kubernetes/
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```

Generate the CA certificate (master1):
```bash
cd /root/kubernetes/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

```
[root@master1 kubernetes]# ll
total 28
-rw-r--r-- 1 root root  296 Mar  5 07:58 ca-config.json
-rw-r--r-- 1 root root 1001 Mar  5 08:23 ca.csr
-rw-r--r-- 1 root root  264 Mar  5 08:02 ca-csr.json
-rw------- 1 root root 1679 Mar  5 08:23 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar  5 08:23 ca.pem
-rw-r--r-- 1 root root  230 Mar  5 08:23 kube-proxy-csr.json
-rw-r--r-- 1 root root  681 Mar  5 08:21 server-csr.json
```

Generate the api-server certificate (master1):
```bash
cd /root/kubernetes/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
```

Generate the kube-proxy certificate:

```bash
cd /root/kubernetes/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```

```
[root@master1 kubernetes]# ll
total 52
-rw-r--r-- 1 root root  296 Mar  5 07:58 ca-config.json
-rw-r--r-- 1 root root 1001 Mar  5 08:23 ca.csr
-rw-r--r-- 1 root root  264 Mar  5 08:02 ca-csr.json
-rw------- 1 root root 1679 Mar  5 08:23 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar  5 08:23 ca.pem
-rw-r--r-- 1 root root 1009 Mar  5 08:27 kube-proxy.csr
-rw-r--r-- 1 root root  230 Mar  5 08:23 kube-proxy-csr.json
-rw------- 1 root root 1679 Mar  5 08:27 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Mar  5 08:27 kube-proxy.pem
-rw-r--r-- 1 root root 1419 Mar  5 08:25 server.csr
-rw-r--r-- 1 root root  681 Mar  5 08:21 server-csr.json
-rw------- 1 root root 1679 Mar  5 08:25 server-key.pem
-rw-r--r-- 1 root root 1785 Mar  5 08:25 server.pem
```

Download and install etcd on all three masters:

```bash
mkdir -p /soft && cd /soft
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /usr/local/bin/
```

Note: ETCD_NAME and the listen addresses differ on each node; a sketch of the configuration follows.
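The original heredoc bodies were not preserved. A sketch of the master1 configuration and unit file for etcd 3.3 (the member names etcd01/etcd02/etcd03 and the cluster token are assumptions):

```bash
# /etc/etcd/cfg/etcd.conf on master1
# (change ETCD_NAME and the 10.0.0.70 addresses on master2/master3)
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.70:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.70:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.70:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.70:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.0.70:2380,etcd02=https://10.0.0.71:2380,etcd03=https://10.0.0.72:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

```bash
# /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/server.pem \
  --key-file=/etc/etcd/ssl/server-key.pem \
  --peer-cert-file=/etc/etcd/ssl/server.pem \
  --peer-key-file=/etc/etcd/ssl/server-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```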
On each master, create the directories and write /etc/etcd/cfg/etcd.conf (per the sketch above, adjusting ETCD_NAME and the IPs) plus the unit file /usr/lib/systemd/system/etcd.service:

```bash
mkdir -p /etc/etcd/{cfg,ssl}
```

Distribute the etcd certificates from master1:

```bash
\cp /root/etcd/*pem /etc/etcd/ssl/
scp /etc/etcd/ssl/* 10.0.0.71:/etc/etcd/ssl/
scp /etc/etcd/ssl/* 10.0.0.72:/etc/etcd/ssl/
```

Also copy them to the node machines (create the directories on the nodes first):

```bash
mkdir -p /etc/etcd/{cfg,ssl}
scp /etc/etcd/ssl/* 10.0.0.73:/etc/etcd/ssl/
scp /etc/etcd/ssl/* 10.0.0.74:/etc/etcd/ssl/
```

Start etcd on every master (master1 will appear to hang until the other members join):

```bash
systemctl start etcd
systemctl enable etcd
systemctl status etcd
```

Check cluster health:

```
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://10.0.0.70:2379" cluster-health
member 55829f95b702c087 is healthy: got healthy result from https://10.0.0.71:2379
member b1f1be65c0a2eb31 is healthy: got healthy result from https://10.0.0.72:2379
member b5d8162db028bc4e is healthy: got healthy result from https://10.0.0.70:2379
cluster is healthy
```

Install Docker on every node (see the Docker installation document), then write the flannel network definition into etcd. The JSON must be single-quoted so the inner double quotes reach etcd intact:

```bash
etcdctl --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem \
  --endpoints="https://10.0.0.70:2379,https://10.0.0.71:2379,https://10.0.0.72:2379" \
  set /coreos.com/network/config \
  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
```

Check:
```bash
etcdctl \
  --endpoints=https://10.0.0.70:2379,https://10.0.0.71:2379,https://10.0.0.72:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/server.pem \
  --key-file=/etc/etcd/ssl/server-key.pem \
  get /coreos.com/network/config
```

Flannel must be installed on every node.
```bash
cd /soft
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /usr/local/bin/
```

Copy the binaries to the other nodes:

```bash
scp /usr/local/bin/flanneld 10.0.0.71:/usr/local/bin
scp /usr/local/bin/mk-docker-opts.sh 10.0.0.71:/usr/local/bin
scp /usr/local/bin/flanneld 10.0.0.72:/usr/local/bin
scp /usr/local/bin/mk-docker-opts.sh 10.0.0.72:/usr/local/bin
scp /usr/local/bin/flanneld 10.0.0.73:/usr/local/bin
scp /usr/local/bin/mk-docker-opts.sh 10.0.0.73:/usr/local/bin
scp /usr/local/bin/flanneld 10.0.0.74:/usr/local/bin
scp /usr/local/bin/mk-docker-opts.sh 10.0.0.74:/usr/local/bin
```
On every k8s node, create the flannel config directory and write /etc/flannel/flannel.cfg (a sketch follows):

```bash
mkdir -p /etc/flannel
```
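The heredoc body was not preserved; a sketch pointing flanneld at the etcd cluster with the certificates distributed earlier (an assumption consistent with this setup):

```bash
# /etc/flannel/flannel.cfg — options consumed by the flanneld unit below
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.70:2379,https://10.0.0.71:2379,https://10.0.0.72:2379 \
  -etcd-cafile=/etc/etcd/ssl/ca.pem \
  -etcd-certfile=/etc/etcd/ssl/server.pem \
  -etcd-keyfile=/etc/etcd/ssl/server-key.pem"
```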
On every k8s node, create the systemd unit /usr/lib/systemd/system/flanneld.service (a sketch follows).
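The unit body was not preserved; a typical unit for this flannel version, as an assumption:

```bash
# /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/flannel/flannel.cfg
ExecStart=/usr/local/bin/flanneld --ip-masq $FLANNEL_OPTIONS
# translate the allocated subnet into Docker options
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```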
On all nodes:

```bash
systemctl daemon-reload
systemctl start flanneld.service
systemctl enable flanneld.service
```

The next step puts Docker on the subnet that flannel allocated, so containers and flannel share one network segment; Docker must consume the options that mk-docker-opts.sh generated (a sketch follows).
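How Docker picks up the flannel subnet: mk-docker-opts.sh writes DOCKER_NETWORK_OPTIONS into /run/flannel/subnet.env, and the docker unit must load it. A sketch of the usual edit (the dockerd path is an assumption):

```bash
# /usr/lib/systemd/system/docker.service (excerpt — only the two changed lines)
[Service]
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
```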
```bash
systemctl daemon-reload
systemctl restart docker
```

Check that docker0 is now inside the flannel segment: from any master you should be able to ping 172.17.1.1, and each node should be able to reach the docker0 address of the other nodes.
The master components to install are: kube-apiserver, kube-scheduler, and kube-controller-manager.
Extract the package on master1, then copy the binaries to the other masters.
```bash
cd /soft
tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /usr/local/bin/
scp /usr/local/bin/kube* 10.0.0.71:/usr/local/bin
scp /usr/local/bin/kube* 10.0.0.72:/usr/local/bin
```

The Kubernetes components authenticate to each other with certificates, so the certificates must be copied to every master node (master1):
On master1:

```bash
mkdir -p /etc/kubernetes/{cfg,ssl}
cp /root/kubernetes/*.pem /etc/kubernetes/ssl/
```

On master2:

```bash
mkdir -p /etc/kubernetes/{cfg,ssl}
```

On master3:

```bash
mkdir -p /etc/kubernetes/{cfg,ssl}
```

Copy the certificates to the other masters:
```bash
scp /etc/kubernetes/ssl/* 10.0.0.71:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/* 10.0.0.72:/etc/kubernetes/ssl
```

TLS bootstrapping lets the kubelet first connect to the apiserver as a predefined low-privilege user and then request its own certificate, which the apiserver signs dynamically. The bootstrap token can be any string containing 128 bits of entropy; generate one with a secure random source:
```bash
head -c 16 /dev/urandom | od -An -t x | tr -d " "
```

In token.csv the fields are: f89a76f197526a0d4bc2bf9c86e871c3 is the random string generated above; kubelet-bootstrap is the user name; 10001 is the UID; system:kubelet-bootstrap is the group.
```bash
cat > /etc/kubernetes/cfg/token.csv << EOF
f89a76f197526a0d4bc2bf9c86e871c3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```

The apiserver configuration is essentially identical on every master; only the IP addresses change per node.
Write /etc/kubernetes/cfg/kube-apiserver.cfg and the unit file /usr/lib/systemd/system/kube-apiserver.service on every master, then start the service:

```bash
systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
```

A sketch of both files follows.
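The original file bodies were not preserved. A sketch for master1 of a typical 1.15 binary-install configuration; the service CIDR and NodePort range are inferred from later output in this guide, the remaining flag values are common defaults, all of them assumptions:

```bash
# /etc/kubernetes/cfg/kube-apiserver.cfg on master1
# (change --bind-address / --advertise-address per master)
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.0.70:2379,https://10.0.0.71:2379,https://10.0.0.72:2379 \
--bind-address=10.0.0.70 \
--secure-port=6443 \
--advertise-address=10.0.0.70 \
--allow-privileged=true \
--service-cluster-ip-range=192.168.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
```

```bash
# /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.cfg
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```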
Check that the TLS port is listening:

```
[root@master1 ssl]# netstat -anltup | grep 6443
tcp6  0  0 :::6443     :::*        LISTEN       49473/kube-apiserve
tcp6  0  0 ::1:43658   ::1:6443    ESTABLISHED  49473/kube-apiserve
tcp6  0  0 ::1:6443    ::1:43658   ESTABLISHED  49473/kube-apiserve
```

And confirm it is reachable from a node:
```
[root@node1 ~]# telnet 10.0.0.70 6443
Trying 10.0.0.70...
Connected to 10.0.0.70.
Escape character is '^]'.
```

Write /etc/kubernetes/cfg/kube-scheduler.cfg and /usr/lib/systemd/system/kube-scheduler.service, then start the scheduler:

```bash
systemctl daemon-reload
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
```

```
[root@master3 ssl]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Healthy     ok
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
```

controller-manager shows Unhealthy only because it has not been installed yet. Write /etc/kubernetes/cfg/kube-controller-manager.cfg and its unit file, then start it:

```bash
systemctl daemon-reload
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl status kube-controller-manager.service
```

Sketches of the scheduler and controller-manager configuration follow.
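The original file bodies were not preserved. A sketch of both configs, assuming the apiserver still serves the (deprecated, default-on in 1.15) insecure port 127.0.0.1:8080 locally; the signing duration matches the 876000h used for the certificates:

```bash
# /etc/kubernetes/cfg/kube-scheduler.cfg
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
```

```bash
# /etc/kubernetes/cfg/kube-controller-manager.cfg
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=192.168.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=876000h0m0s"
```

The matching systemd units follow the same pattern as kube-apiserver.service: an EnvironmentFile line pointing at the cfg file and an ExecStart expanding the OPTS variable.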
Deploy the Node components only after every master component reports healthy:

```
[root@master1 ssl]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```

Copy kubelet and kube-proxy to node1:

```bash
scp /soft/kubernetes/server/bin/kubelet 10.0.0.73:/usr/local/bin/
scp /soft/kubernetes/server/bin/kube-proxy 10.0.0.73:/usr/local/bin/
```

And to node2:

```bash
scp /soft/kubernetes/server/bin/kubelet 10.0.0.74:/usr/local/bin/
scp /soft/kubernetes/server/bin/kube-proxy 10.0.0.74:/usr/local/bin/
```

On master1:
Create the kubeconfig-generation scripts:

```bash
mkdir /root/config ; cd /root/config
```

environment.sh builds bootstrap.kubeconfig for the kubelet and env_proxy.sh builds kube-proxy.kubeconfig; afterwards copy both kubeconfig files to every node. A sketch of environment.sh follows.
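The script bodies were not preserved. A sketch, assuming the nodes reach the apiservers through the keepalived VIP and the token matches /etc/kubernetes/cfg/token.csv:

```bash
# environment.sh — run from /root/config
BOOTSTRAP_TOKEN=f89a76f197526a0d4bc2bf9c86e871c3
KUBE_APISERVER="https://10.0.0.80:6443"

# bootstrap.kubeconfig for the kubelet
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# env_proxy.sh builds kube-proxy.kubeconfig the same way, but authenticates
# with the kube-proxy client certificate instead of the token:
#   kubectl config set-credentials kube-proxy \
#     --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
#     --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
#     --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
```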
On the node machines, create the directories first:

```bash
mkdir -p /etc/kubernetes/{cfg,ssl}
```

Copy the ssl certificate files (from master1):
```bash
scp /etc/kubernetes/ssl/* 10.0.0.73:/etc/kubernetes/ssl/
scp /etc/kubernetes/ssl/* 10.0.0.74:/etc/kubernetes/ssl/
```

Copy the kubeconfig files (from master1):
To node1:

```bash
scp /root/config/bootstrap.kubeconfig 10.0.0.73:/etc/kubernetes/cfg
scp /root/config/kube-proxy.kubeconfig 10.0.0.73:/etc/kubernetes/cfg
```

To node2:

```bash
scp /root/config/bootstrap.kubeconfig 10.0.0.74:/etc/kubernetes/cfg
scp /root/config/kube-proxy.kubeconfig 10.0.0.74:/etc/kubernetes/cfg
```

Each node needs its own IP address in the files below (run on the node machines).
Write /etc/kubernetes/cfg/kubelet.config on node1 and node2, changing only the address field per node (a sketch follows).
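The heredoc body was not preserved. A sketch for node1; clusterDNS must match the address reserved for the cluster DNS in the certificates (192.168.0.2), and the remaining values are common defaults, assumed:

```yaml
# /etc/kubernetes/cfg/kubelet.config on node1 (use 10.0.0.74 on node2)
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.73
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["192.168.0.2"]
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: true
```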
Then write /etc/kubernetes/cfg/kubelet (again adjusting the IP per node) and the unit file /usr/lib/systemd/system/kubelet.service (both sketched below).
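The bodies were not preserved. A sketch for node1; the pause image URL is an assumption, and hostname-override uses the node IP, which matches the node names seen later in kubectl get nodes:

```bash
# /etc/kubernetes/cfg/kubelet on node1 (change --hostname-override on node2)
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.73 \
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \
--config=/etc/kubernetes/cfg/kubelet.config \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
```

```bash
# /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```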
On master1, bind the bootstrap user:

```bash
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
```

Then start the kubelet on the nodes:

```bash
systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl status kubelet.service
```

On master1, list the pending certificate signing requests:
```
[root@master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-I81WpTLkI1GcJL1RN_7AsH2gDtqkRuIGb9Cvkzktg00   23s   kubelet-bootstrap   Pending
node-csr-vaqrhpHnGVhoa6lFp3ADkZxKtcZLYFEbuXbJ1r9AtrM   13s   kubelet-bootstrap   Pending
```

Approve the requests (on a master):
```bash
kubectl certificate approve node-csr-I81WpTLkI1GcJL1RN_7AsH2gDtqkRuIGb9Cvkzktg00
kubectl certificate approve node-csr-vaqrhpHnGVhoa6lFp3ADkZxKtcZLYFEbuXbJ1r9AtrM
```

```
[root@master1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.0.0.73   Ready    <none>   5s    v1.15.1
10.0.0.74   Ready    <none>   32m   v1.15.1
```

kube-proxy runs on every node; it watches the apiserver for changes to Services and Endpoints and maintains the routing rules that load-balance service traffic.
Note: hostname-override must carry each node's own address, so it differs per node. Write /etc/kubernetes/cfg/kube-proxy and /usr/lib/systemd/system/kube-proxy.service on each node (a sketch follows).
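The file bodies were not preserved. A minimal sketch for node1, assuming kube-proxy authenticates with the kube-proxy.kubeconfig copied earlier:

```bash
# /etc/kubernetes/cfg/kube-proxy on node1 (use 10.0.0.74 on node2)
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.73 \
--kubeconfig=/etc/kubernetes/cfg/kube-proxy.kubeconfig"
```

```bash
# /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```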
Start kube-proxy on each node:

```bash
systemctl daemon-reload
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
systemctl status kube-proxy.service
```

Smoke-test the cluster with an nginx deployment:

```
[root@master1 ~]# kubectl run nginx --image=nginx --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
[root@master1 ~]# kubectl get pods -A
NAMESPACE   NAME                     READY   STATUS              RESTARTS   AGE
default     nginx-7bb7cd8db5-8b5gs   0/1     ContainerCreating   0          7s
default     nginx-7bb7cd8db5-dzc5q   0/1     ContainerCreating   0          7s
# get the pod IPs and the nodes they run on
[root@master1 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-7bb7cd8db5-8b5gs   1/1     Running   0          56s   172.17.1.2    10.0.0.74   <none>           <none>
nginx-7bb7cd8db5-dzc5q   1/1     Running   0          56s   172.17.46.2   10.0.0.73   <none>           <none>
# expose the deployment as a NodePort service
[root@master1 ~]# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# inspect the service
[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   192.168.0.1     <none>        443/TCP        125m
nginx        NodePort    192.168.0.147   <none>        88:39073/TCP   11s
# reach the pods through the NodePort on either node
[root@master1 ~]# curl http://10.0.0.73:39073
[root@master1 ~]# curl http://10.0.0.74:39073
```

Clean up the test resources:

```bash
kubectl delete deployment nginx
kubectl delete pods nginx
kubectl delete svc -l run=nginx
```

For CoreDNS, first copy the image archive to the node machines, then load it:
```bash
docker load -i coredns1.0.6.tar
```

Apply the yml file on master1:
```bash
kubectl apply -f coredns-1.0.6.yml
```

Next deploy the Kubernetes dashboard. After downloading its yml file, adjust the indicated places; this setup exposes it on NodePort 50000, so browse to https://<node-ip>:50000.
Run the following on any one of the masters:
```bash
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# single-quote the awk program so the shell does not expand $1
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```

CRD resources are cluster-scoped, not namespaced.
```bash
mkdir /root/ingress && cd /root/ingress
```

```yaml
# traefik-crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
```

Create the CRDs:

```bash
kubectl apply -f traefik-crd.yaml
```

Check:
```
[root@master1 ingress]# kubectl get crd
NAME                                   CREATED AT
ingressroutes.traefik.containo.us      2023-03-11T01:29:34Z
ingressroutetcps.traefik.containo.us   2023-03-11T01:29:34Z
middlewares.traefik.containo.us        2023-03-11T01:29:34Z
tlsoptions.traefik.containo.us         2023-03-11T01:29:34Z
```

The RBAC objects, by contrast, reference a namespace (the ServiceAccount lives in kube-system):
```yaml
# traefik-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: traefik-ingress-controller
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","secrets"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["middlewares"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutetcps"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["tlsoptions"]
    verbs: ["get","list","watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
```

```bash
kubectl apply -f traefik-rbac.yaml
```

Then the Traefik configuration:

```yaml
# traefik-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
  namespace: kube-system
data:
  traefik.yaml: |-
    serversTransport:
      insecureSkipVerify: true
    api:
      insecure: true
      dashboard: true
      debug: true
    metrics:
      prometheus: ""
    entryPoints:
      web:
        address: ":80"
      websecure:
        address: ":443"
    providers:
      kubernetesCRD: ""
    log:
      filePath: ""
      level: error
      format: json
    accessLog:
      filePath: ""
      format: json
      bufferingSize: 0
      filters:
        retryAttempts: true
        minDuration: 20
      fields:
        defaultMode: keep
        names:
          ClientUsername: drop
        headers:
          defaultMode: keep
          names:
            User-Agent: redact
            Authorization: drop
            Content-Type: keep
```

Create it:

```bash
kubectl apply -f traefik-config.yaml
```

Check:

```
[root@master1 ingress]# kubectl get configmap -n kube-system
NAME                                 DATA   AGE
coredns                              1      14h
extension-apiserver-authentication   1      15h
kubernetes-dashboard-settings        1      13h
traefik-config                       1      4s
```

Because Traefik is deployed as a Kubernetes DaemonSet, label the target node in advance so the Pod is scheduled onto it:
```bash
kubectl label nodes 192.168.3.203 IngressProxy=true
```

Here the labeled node is 192.168.3.203; substitute the node name shown by kubectl get nodes in your own cluster (e.g. 10.0.0.73).

Verify the label:
```
[root@master1 ingress]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE   VERSION   LABELS
192.168.3.203   Ready    <none>   13h   v1.15.1   IngressProxy=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.3.203,kubernetes.io/os=linux
192.168.3.204   Ready    <none>   13h   v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.3.204,kubernetes.io/os=linux
```

Note: ports 80 and 443 must not be in use on any node:
```bash
netstat -antupl | grep -E "80|443"
```

Because of the node label above, the pods land only on the labeled nodes, even though this is a DaemonSet (the nodeSelector restricts scheduling):
```yaml
# traefik-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443
    - name: admin
      port: 8080
  selector:
    app: traefik
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      name: traefik
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 1
      containers:
        - image: traefik:2.0.5
          name: traefik-ingress-lb
          ports:
            - name: web
              containerPort: 80
              hostPort: 80
            - name: websecure
              containerPort: 443
              hostPort: 443
            - name: admin
              containerPort: 8080
          resources:
            limits:
              cpu: 2000m
              memory: 1024Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --configfile=/config/traefik.yaml
          volumeMounts:
            - mountPath: "/config"
              name: "config"
      volumes:
        - name: config
          configMap:
            name: traefik-config
      tolerations:
        - operator: "Exists"
      nodeSelector:
        IngressProxy: "true"
```

Create it:

```bash
kubectl apply -f traefik-deploy.yaml
```

Check the rollout:
```
[root@master1 ingress]# kubectl get DaemonSet -A
NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR       AGE
kube-system   traefik-ingress-controller   1         1         0       1            0           IngressProxy=true   18s
```

Route the Traefik dashboard through an IngressRoute:

```yaml
# traefik-dashboard-route.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`ingress.abcd.com`)
      kind: Rule
      services:
        - name: traefik
          port: 8080
```

Create it:

```bash
kubectl apply -f traefik-dashboard-route.yaml
```

Check:

```
[root@master1 ingress]# kubectl get ingressroute.traefik.containo.us -A
NAMESPACE     NAME                      AGE
kube-system   traefik-dashboard-route   78s
```

Bind the host name on your workstation, via its hosts file or DNS:

```
# /etc/hosts
192.168.3.203 ingress.abcd.com
```

Then open http://ingress.abcd.com in a browser to reach the Traefik dashboard.