Deploying NFS-CSI, the Ingress Controller, and the Dashboard Web UI on a Highly Available Cluster
Three add-ons are deployed:
ingress controller: the traffic distributor
dashboard: the web-based control panel
CSI: dynamic PV provisioning (volumes)
Deploying NFS-CSI
Step 1:
Create a namespace
[root@k8s-master01 ~]#kubectl create namespace nfs
namespace/nfs created
Step 2: Deploy the NFS server
Create the nfs-server (paste the command from the docs and point it at the nfs namespace. Docs: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/nfs-provisioner/README.md)
[root@k8s-master01 ~]#kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nfs-server.yaml -n nfs
service/nfs-server created
deployment.apps/nfs-server created
Check the pod:
[root@k8s-master01 ~]#kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
nfs-server-5847b99d99-tx4df 1/1 Running 0 7s
Step 3: Deploy the NFS CSI driver
Install the desired version (installation guide: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/install-nfs-csi-driver.md)
[root@k8s-master01 ~]#curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v3.1.0/deploy/install-driver.sh | bash -s v3.1.0 --
Installing NFS CSI driver, version: v3.1.0 ...
serviceaccount/csi-nfs-controller-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
Check the status:
[root@k8s-master01 ~]#kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
[root@k8s-master01 ~]#kubectl -n kube-system get pod -o wide -l app=csi-nfs-node
Step 4:
Define and create the StorageClass (enables dynamic provisioning; see https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/README.md)
Define the StorageClass:
[root@k8s-master01 ~]#vim nfs-csi-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.nfs.svc.cluster.local  # change the namespace part to nfs (where nfs-server runs)
  share: /
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete  # set to Retain in production to guard against accidental data loss
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
Create the StorageClass:
[root@k8s-master01 ~]#kubectl apply -f nfs-csi-storageclass.yaml
storageclass.storage.k8s.io/nfs-csi created
Check the StorageClass:
[root@k8s-master01 ~]#kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Delete Immediate false 28s
Step 5:
Create a PVC (docs: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/README.md)
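For reference, the manifest fetched below defines roughly the following claim (reconstructed from the upstream example; verify against the repo):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany          # NFS allows shared read-write access from multiple nodes
  resources:
    requests:
      storage: 10Gi          # requested capacity
  storageClassName: nfs-csi  # binds the claim to the StorageClass created in step 4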
[root@k8s-master01 ~]#kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/pvc-nfs-csi-dynamic.yaml
persistentvolumeclaim/pvc-nfs-dynamic created
Check the PVC:
[root@k8s-Master-01 ~]#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs-dynamic Bound pvc-cc49ad43 nfs-csi 49s
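To confirm the volume works end to end, a throwaway pod can mount the claim; a minimal sketch (pod name, image, and mount path are illustrative; delete this pod again before the PVC deletion test below, or the pvc-protection finalizer will block it):
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ['/bin/sh', '-c', 'echo hello > /data/test.txt && sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /data              # the NFS share is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs-dynamic    # the PVC created above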
Because the reclaim policy is Delete, deleting the PVC now also deletes the PV automatically:
[root@k8s-Master-01 ~]#kubectl delete pvc pvc-nfs-dynamic
[root@k8s-Master-01 ~]#kubectl get pv
No resources found
Deploying the Ingress Controller
Deployment docs: https://github.com/kubernetes/ingress-nginx
From that page, follow the "See the Getting Started document" link.
Step 1: Deploy the ingress controller
[root@k8s-Master-01 ~]#kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
This automatically creates an ingress-nginx namespace:
[root@k8s-Master-01 ~]#kubectl get ns
NAME STATUS AGE
default Active 69m
ingress-nginx Active 42s
kube-flannel Active 66m
kube-node-lease Active 69m
kube-public Active 69m
kube-system Active 69m
nfs Active 49m
List all resources in that namespace:
[root@k8s-Master-01 ~]#kubectl get all -n ingress-nginx
Check the Services:
[root@k8s-Master-01 ~]#kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.68.244 <pending> 80:31873/TCP,443:30060/TCP 68m
ingress-nginx-controller-admission ClusterIP 10.109.27.143 <none> 443/TCP 68m
This Service's externalTrafficPolicy is Local; it needs to be changed to Cluster.
Add an extra IP address for the cluster to use:
[root@k8s-Master-01 ~]#vim /etc/netplan/01-netcfg.yaml
    addresses:
    - 10.0.0.100/24
    - 10.0.0.200/24
[root@k8s-Master-01 ~]#netplan apply
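Confirm that both addresses are now active (the interface name eth0 is illustrative):
[root@k8s-Master-01 ~]#ip -4 addr show eth0 | grep inet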
Edit the policy and the external IP in place:
[root@k8s-Master-01 ~]#kubectl edit svc ingress-nginx-controller -n ingress-nginx
  externalTrafficPolicy: Cluster  # costs some performance, but improves load-balancing spread
  externalIPs:                    # there is no LoadBalancer implementation, so add an externalIP manually
  - 10.0.0.200
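Equivalently, the same change can be made non-interactively; a sketch using kubectl patch:
[root@k8s-Master-01 ~]#kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Cluster","externalIPs":["10.0.0.200"]}}'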
Check the Service again:
[root@k8s-Master-01 ~]#kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.68.244 10.0.0.200 80:31873/TCP,443:30060/TCP 68m
ingress-nginx-controller-admission ClusterIP 10.109.27.143 <none> 443/TCP 68m
Deploying the Dashboard Web UI
Step 1:
Metrics Server must be deployed first (https://github.com/kubernetes-sigs/metrics-server/tree/v0.6.1):
[root@k8s-Master-01 ~]#kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
It is deployed into the kube-system namespace:
[root@k8s-Master-01 ~]#kubectl get pods -n kube-system
metrics-server-847d45fd4f-p5mx7 0/1 Running 0 72s
Its dedicated API group is now registered (core resource metrics become available):
[root@k8s-Master-01 ~]#kubectl api-versions
metrics.k8s.io/v1beta1
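Once the metrics-server pod is Ready, the aggregated API can be queried directly to verify it serves data, for example:
[root@k8s-Master-01 ~]#kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes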
Step 2: Deploy the dashboard (https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)
[root@k8s-Master-01 ~]#kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
It creates its own dedicated namespace, kubernetes-dashboard:
[root@k8s-Master-01 ~]#kubectl get ns
NAME STATUS AGE
default Active 118m
ingress-nginx Active 49m
kube-flannel Active 115m
kube-node-lease Active 118m
kube-public Active 118m
kube-system Active 118m
kubernetes-dashboard Active 86s
nfs Active 98m
Check the pods:
[root@k8s-Master-01 ~]#kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-64bcc67c9c-cbtrf 1/1 Running 0 15m
kubernetes-dashboard-5c8bd6b59-sfbsn 1/1 Running 0 15m
Check the Services:
[root@k8s-Master-01 ~]#kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.109.70.161 <none> 8000/TCP 15m
kubernetes-dashboard ClusterIP 10.102.29.74 <none> 443/TCP 15m
Expose the dashboard through an Ingress:
[root@k8s-master02 chapter13]#vim ingress-kubernetes-dashboard.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  namespace: kubernetes-dashboard
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /dashboard(/|$)(.*)
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
        pathType: Prefix
Create the Ingress:
[root@k8s-master02 chapter13]#kubectl apply -f ingress-kubernetes-dashboard.yaml
ingress.networking.k8s.io/dashboard created
Check that the dashboard is now proxied by the ingress controller at the external IP 10.0.0.200:
[root@k8s-master02 chapter13]#kubectl get ingress -n kubernetes-dashboard
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard nginx * 10.0.0.200 80 2m55s
It can now be reached in a browser at http://10.0.0.200/dashboard/.
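This can also be verified from the command line, e.g.:
[root@k8s-master02 ~]#curl -i http://10.0.0.200/dashboard/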
Logging in with a token:
Create a ServiceAccount:
[root@k8s-master02 ~]#kubectl create sa dashuser
serviceaccount/dashuser created
Bind the cluster-admin ClusterRole to the dashuser ServiceAccount (the binding is named dashuser-admin):
[root@k8s-master02 ~]#kubectl create clusterrolebinding dashuser-admin --clusterrole=cluster-admin --serviceaccount=default:dashuser
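On Kubernetes v1.24 and newer, a short-lived token can also be requested directly, without the helper pod used below:
[root@k8s-master02 ~]#kubectl create token dashuser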
To retrieve the ServiceAccount's token, first run a pod that uses it.
Generate the pod manifest:
[root@k8s-master02 ~]#kubectl run test --image ikubernetes/admin-box:v1.2 --restart=Never --dry-run=client -o yaml > test.yaml
Edit the manifest:
[root@k8s-master02 ~]#vim test.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  serviceAccountName: dashuser
  containers:
  - image: ikubernetes/admin-box:v1.2
    name: test
    command: ['/bin/sh','-c','sleep 9999']
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
[root@k8s-master02 ~]#kubectl apply -f test.yaml
pod/test created
Open an interactive shell:
kubectl exec -it test -- /bin/bash
Extract the token:
cd /var/run/secrets/kubernetes.io/serviceaccount/
cat token
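Equivalently, without an interactive shell:
kubectl exec test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token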
The token is everything in the output from the start up to the next shell prompt. Copy it, paste it into the token field on the login page, and you can sign in to the dashboard.
This opens the management UI with administrator privileges, i.e. cluster-level access.
To allow viewing only the resources in a particular namespace:
Step 1:
Create the namespace:
kubectl create namespace demo
Create a RoleBinding:
kubectl create rolebinding dashuser-demo-view --clusterrole=view --serviceaccount=default:dashuser -n demo
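For this restriction to take effect, the cluster-admin binding created earlier must also be removed:
kubectl delete clusterrolebinding dashuser-admin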
Metrics Server stuck in a not-Ready state
Metrics Server was deployed, but it never becomes ready. The causes:
[root@k8s-Master-01 ~]#kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
metrics-server-847d45fd4f-rw985 0/1 Running 0 67m
Cause 1: Metrics Server must connect to the kubelet on every node in the cluster. The kubelet exposes an API, usually listening on TCP ports 10250 and 10248.
Port 10250 requires authentication, and it is this kubelet API that Metrics Server scrapes for metrics.
To collect node- and pod-level resource usage from every node, Metrics Server must authenticate to each kubelet. It does this over TLS, using the Kubernetes CA (the CA that issued the kubelet certificates) to verify that each node presents a certificate signed by that same CA.
Cause 2: For name resolution, every pod is injected with an /etc/resolv.conf and assigned a DNS policy. Containers in a pod first query the in-cluster CoreDNS; names CoreDNS cannot answer fall through to the node-level nameservers. In this environment there is no node-level DNS service, only the in-cluster CoreDNS. Metrics Server, running as a pod, needs to resolve each node's hostname in order to reach it, and those hostnames cannot be resolved here.
As a result, when Metrics Server tries to talk to the nodes, either the TLS connection is established but certificate verification fails, or name resolution never yields a node IP in the first place, so communication never comes up.
Solutions:
First: have Metrics Server address the nodes by IP instead of by node name.
Second: when connecting by IP, certificate-subject verification will likely fail, because we connect to an IP address while the certificate was issued for a hostname; since the two do not match, the certificate would be rejected, so skip verifying the kubelet certificate altogether.
The more conventional fix is to run a DNS server that resolves the node names, rather than relying on hosts files.
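One way to do that inside the cluster is CoreDNS's hosts plugin; a sketch of the fragment to add to the Corefile in the coredns ConfigMap (kubectl edit cm coredns -n kube-system; the node IPs below are illustrative):
    hosts {
        10.0.0.101 k8s-node-01
        10.0.0.102 k8s-node-02
        10.0.0.103 k8s-node-03
        fallthrough
    }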
If name resolution instead relies on hosts files, the workaround is:
Step 1:
Download the Metrics Server YAML and modify it
[root@k8s-Master-01 ~]#cd /tmp/
[root@k8s-Master-01 tmp]#curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
[root@k8s-Master-01 tmp]#vim components.yaml
Modify the container args under the Deployment:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP  # address nodes by IP, not hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls  # skip verification of the kubelet serving certificate
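Apply the modified manifest:
[root@k8s-Master-01 tmp]#kubectl apply -f components.yaml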
Check the pod again:
[root@k8s-Master-01 ~]#kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
metrics-server-659496f9b5-kknn6 1/1 Running 0 87s
Now kubectl top can report resource usage:
[root@k8s-Master-01 ~]#kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master-01 279m 13% 853Mi 46%
k8s-master-02 347m 8% 1773Mi 22%
k8s-master-03 269m 6% 1646Mi 21%
k8s-node-01 156m 7% 885Mi 47%
k8s-node-02 105m 5% 615Mi 33%
k8s-node-03 187m 9% 850Mi 45%
Check pod resource usage:
[root@k8s-Master-01 ~]#kubectl top pods -n nfs
NAME CPU(cores) MEMORY(bytes)
nfs-server-5847b99d99-7nw9j 5m 59Mi