Service Governance
In the previous two articles ( and ) we walked through how to quickly build a three-node Kubernetes cluster. Along the way we hit a number of problems, consulted a large amount of the official Kubernetes documentation, formed hypotheses about what was going wrong, and dug into each issue until we understood it.
As is well known, the core function of Kubernetes is orchestrating and scheduling container workloads, together with service-governance capabilities for those containers. Docker also offers a similar product, Swarm, which likewise orchestrates and schedules containers (Docker containers in particular). Comparing the two, most developers and practitioners will probably reach for Kubernetes first, and not merely because it comes from Google or because it appeared in everyone's field of view earlier. More likely it is because Kubernetes is comparatively mature, its community is more active, and its open-source contribution volume is larger. This is only the author's view and certainly not universally true; the choice between Kubernetes and Swarm should weigh your actual environment, staffing, and the direction your organization is heading.
The figure below gives an overall picture of the Kubernetes open-source community:
(Figure: Kubernetes open-source community overview)
So far we have a working Kubernetes environment, so let's get a feel for what Kubernetes can do through a simple example. We will cover the following three parts:
(Figure: kubernetes demo overview)
Part One: Tracking down packet loss and network latency when accessing a ClusterIP from a Kubernetes cluster node
The previous article mentioned that curling a ClusterIP from a Kubernetes cluster node could fail or time out, and at the time we offered no explanation. After combing through the official Kubernetes documentation and a large amount of material online in my spare time, it turned out others had hit the same problem. The root cause is:
the kube-proxy Docker image ships a recent iptables whose --random-fully option triggers a kernel VXLAN bug.
The simplest workaround for this root cause is to disable checksum offload on the CNI's VXLAN interface.
Reference:
https://github.com/coreos/flannel/issues/1279
Let's try it. First deploy an nginx service with 2 pods in the Kubernetes environment (deployment steps: see ), then expose an internal Service to obtain a ClusterIP:
(Screenshot: kubernetes service)
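For reference, the nginx deployment and Service can also be created imperatively. A minimal hedged sketch (the service name "nginx" and port 8090 are our own choices to match the curl test below; the `--replicas` flag assumes kubectl 1.19+, and your ClusterIP will differ):

```shell
# Hedged sketch: create a 2-replica nginx Deployment and expose it inside
# the cluster on port 8090. Names and ports are assumptions, not from the article.
deploy_test_nginx() {
  kubectl create deployment nginx --image=nginx --replicas=2
  kubectl expose deployment nginx --port=8090 --target-port=80
  kubectl get service nginx   # the CLUSTER-IP column holds the address to curl
}
# On a cluster node, run: deploy_test_nginx
```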
[root@instance01 ~]# time curl 10.96.208.250:8090
(Screenshot: curl latency)
The latency is over a minute.
Now let's disable IP checksum offload on the flannel.1 interface and try again:
[root@instance01 ~]# ethtool -K flannel.1 tx-checksum-ip-generic off
Actual changes:
tx-checksumming: off
tx-checksum-ip-generic: off
tcp-segmentation-offload: off
tx-tcp-segmentation: off [requested on]
tx-tcp-ecn-segmentation: off [requested on]
tx-tcp6-segmentation: off [requested on]
tx-tcp-mangleid-segmentation: off [requested on]
udp-fragmentation-offload: off [requested on]
(Screenshot: checksum offload disabled)
The network latency is now very low. This is probably the simplest possible fix.
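One caveat: an ethtool setting does not survive a reboot or re-creation of the interface. A hedged way to re-apply it at boot, assuming systemd (the unit name and the ethtool path are our own choices; verify the path with `which ethtool`):

```shell
# Sketch: install a oneshot systemd unit that disables TX checksum offload
# on flannel.1 at boot. Unit name and paths are assumptions.
install_checksum_fix() {
  cat > /etc/systemd/system/flannel-tx-off.service <<'EOF'
[Unit]
Description=Disable TX checksum offload on flannel.1
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off

[Install]
WantedBy=multi-user.target
EOF
  systemctl daemon-reload
  systemctl enable --now flannel-tx-off.service
}
```

Note that flannel.1 only exists once flannel is running, so the unit may need an ordering dependency on your flannel service; treat this as a starting point rather than a finished solution.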
Part Two: Deploying a small demo application in the Kubernetes cluster
We build a simple Spring Boot project, package it as a Docker image, and push it to our private registry.
The Spring Boot code is as follows:
import java.net.InetAddress;
import java.net.UnknownHostException;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/provider/index")
public class IndexController {

    @RequestMapping(value = "/echo", method = RequestMethod.GET)
    @ResponseBody
    public String echo() throws UnknownHostException {
        // Return the pod's hostname and IP so we can see which replica answered
        InetAddress address = InetAddress.getLocalHost();
        return "[" + address.getHostName() + ":" + address.getHostAddress()
                + "] this is nacos-discovery-provider</br>\n";
    }
}
The code is trivial: it just returns the container's hostname and IP address.
Dockerfile:
FROM openjdk:8
MAINTAINER xiaobaoqiang <xiaobaoqiang@163.com>
ADD "./nacos-discovery-provider.jar" "/root/nacos-discovery-provider.jar"
ENTRYPOINT ["java", "-jar", "/root/nacos-discovery-provider.jar"]
EXPOSE 8081
Deploy a private Docker registry:
docker pull registry
docker run -d -p 5000:5000 --restart always --name registry registry
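Since this registry serves plain HTTP, the Docker daemon on every node that pushes to or pulls from it must trust it as an insecure registry, otherwise the push below fails with an HTTPS error. A hedged sketch (it overwrites /etc/docker/daemon.json, so merge by hand if you already have one):

```shell
# Sketch: mark the private registry as insecure so the Docker daemon talks HTTP to it.
allow_insecure_registry() {
  cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["10.0.0.10:5000"]
}
EOF
  systemctl restart docker
}
# Run on each node, then retry: docker push 10.0.0.10:5000/nacos-discovery-provider
```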
Build the Spring Boot Docker image, with the Maven-built jar and the Dockerfile in the same directory:
[root@instance01 demo]# docker build -t 10.0.0.10:5000/nacos-discovery-provider .
Sending build context to Docker daemon 19.38MB
Step 1/5 : FROM openjdk:8
---> 5684f3366a1f
Step 2/5 : MAINTAINER xiaobaoqiang <xiaobaoqiang@163.com>
---> Running in 921fad79b907
Removing intermediate container 921fad79b907
---> bcf4fbbe836c
Step 3/5 : ADD "./nacos-discovery-provider.jar" "/root/nacos-discovery-provider.jar"
---> f8d1bbd0df9b
Step 4/5 : ENTRYPOINT ["java", "-jar", "/root/nacos-discovery-provider.jar"]
---> Running in e869eb6667af
Removing intermediate container e869eb6667af
---> 2359f0866858
Step 5/5 : EXPOSE 8081
---> Running in 05d79e65a436
Removing intermediate container 05d79e65a436
---> da0ec4d595d6
Successfully built da0ec4d595d6
Successfully tagged 10.0.0.10:5000/nacos-discovery-provider:latest
[root@instance01 demo]# docker push 10.0.0.10:5000/nacos-discovery-provider
The push refers to repository [10.0.0.10:5000/nacos-discovery-provider]
34acdd6c5b37: Pushed
fd48b7313a1f: Pushed
16e8bdbf703d: Pushed
de9aadc6b492: Pushed
e5df62d9b33a: Pushed
7a9460d53218: Pushed
b2765ac0333a: Pushed
0ced13fcf944: Pushed
latest: digest: sha256:7d18ee8aca1e45f7969a17af755df8664a19bdf51c0e5955b6190998ef718774 size: 2006
Now deploy the image we just pushed via Kubernetes. deployment.yml is as follows:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: discovery-provider
  namespace: web-system
  selfLink: /apis/apps/v1/namespaces/web-system/deployments/discovery-provider
  uid: 4bb0fc7d-5fb0-4ecf-a3cf-a6556f1cf068
  resourceVersion: '161986'
  generation: 1
  creationTimestamp: '2020-08-21T02:21:12Z'
  labels:
    k8s-app: discovery-provider
  annotations:
    deployment.kubernetes.io/revision: '1'
  managedFields:
    - manager: dashboard
      operation: Update
      apiVersion: apps/v1
      time: '2020-08-21T02:21:12Z'
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2020-08-21T02:21:34Z'
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: discovery-provider
  template:
    metadata:
      name: discovery-provider
      creationTimestamp: null
      labels:
        k8s-app: discovery-provider
    spec:
      containers:
        - name: discovery-provider
          image: '10.0.0.10:5000/nacos-discovery-provider'
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
rc.yml (the ReplicaSet; strictly speaking the Deployment above creates and manages its own ReplicaSet, so applying this file separately is optional):
kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: discovery-provider-5bf4cc58c8
  namespace: web-system
  selfLink: >-
    /apis/apps/v1/namespaces/web-system/replicasets/discovery-provider-5bf4cc58c8
  uid: bded5f27-e508-4c43-b4e0-b0f94b0b6123
  resourceVersion: '161985'
  generation: 1
  creationTimestamp: '2020-08-21T02:21:12Z'
  labels:
    k8s-app: discovery-provider
    pod-template-hash: 5bf4cc58c8
  annotations:
    deployment.kubernetes.io/desired-replicas: '2'
    deployment.kubernetes.io/max-replicas: '3'
    deployment.kubernetes.io/revision: '1'
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: discovery-provider
      uid: 4bb0fc7d-5fb0-4ecf-a3cf-a6556f1cf068
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2020-08-21T02:21:34Z'
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: discovery-provider
      pod-template-hash: 5bf4cc58c8
  template:
    metadata:
      name: discovery-provider
      creationTimestamp: null
      labels:
        k8s-app: discovery-provider
        pod-template-hash: 5bf4cc58c8
    spec:
      containers:
        - name: discovery-provider
          image: '10.0.0.10:5000/nacos-discovery-provider'
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
service.yml:
kind: Service
apiVersion: v1
metadata:
  name: discovery-provider
  namespace: web-system
  selfLink: /api/v1/namespaces/web-system/services/discovery-provider
  uid: d5139fa7-15ae-4b33-86ab-679eaf6db812
  resourceVersion: '161895'
  creationTimestamp: '2020-08-21T02:21:12Z'
  labels:
    k8s-app: discovery-provider
  managedFields:
    - manager: dashboard
      operation: Update
      apiVersion: v1
      time: '2020-08-21T02:21:12Z'
spec:
  ports:
    - name: tcp-9002-8081-fwjvk
      protocol: TCP
      port: 9002
      targetPort: 8081
  selector:
    k8s-app: discovery-provider
  clusterIP: 10.96.79.63
  type: ClusterIP
  sessionAffinity: None
Run the following on the command line:
kubectl apply -f deployment.yml
kubectl apply -f rc.yml
kubectl apply -f service.yml
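Before curling the ClusterIP it is worth confirming the rollout actually finished. A hedged helper (the namespace, label, and resource names come from the manifests above):

```shell
# Check that the Deployment rolled out, then list the resulting pods and Service.
check_rollout() {
  kubectl -n web-system rollout status deployment/discovery-provider
  kubectl -n web-system get pods -o wide -l k8s-app=discovery-provider
  kubectl -n web-system get service discovery-provider
}
# On a cluster node, run: check_rollout
```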
(Screenshot: deployed resources)
Both pods are Running and the Service is up.
[root@instance01 ~]# curl 10.96.79.63:9002/provider/index/echo
[discovery-provider-5bf4cc58c8-4nrsg:192.168.2.15] this is nacos-discovery-provider</br>
So the URL of the freshly deployed application is reachable via curl.
kubectl --namespace web-system port-forward --address 0.0.0.0 svc/discovery-provider 9002
With the port-forward command above, we can also open the Spring Boot application we just deployed in a browser.
(Screenshot: port forwarding)
Part Three: ClusterIP as an internal load balancer within the cluster
A ClusterIP is a virtual IP address in Kubernetes: it is a single entry point in front of multiple pods and is reachable only from inside the cluster. It load-balances traffic across those pods internally, which we can verify with a short loop:
[root@instance01 ~]# for i in `seq 1 20`; do curl 10.96.79.63:9002/provider/index/echo; done
(Screenshot: load balancing across 2 pods)
As the screenshot shows, the curl requests are distributed across the 2 pods.
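Eyeballing the screenshot works, but the distribution is easier to read if the pod names are tallied. A small helper (the grep pattern assumes pod names keep the discovery-provider- prefix, as in the output above):

```shell
# Tally which pod answered each request; reads the echo responses on stdin.
count_pods() {
  grep -o 'discovery-provider-[a-z0-9-]*' | sort | uniq -c | sort -rn
}
# Against the live service:
#   for i in $(seq 1 20); do curl -s 10.96.79.63:9002/provider/index/echo; done | count_pods
```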
Scale the deployment to 5 replicas from the command line:
[root@instance01 ~]# kubectl scale deployment discovery-provider --replicas=5 -n web-system
(Screenshot: scaling the pod replicas)
Run the curl loop again:
[root@instance01 ~]# for i in `seq 1 20`; do curl 10.96.79.63:9002/provider/index/echo; done
(Screenshot: load balancing across 5 pods)
Now the curl requests are spread across all 5 pod replicas.
Deploying this simple Spring Boot application shows off Kubernetes' strengths in container orchestration and scheduling: making an application highly available takes just a few commands. The ClusterIP gives us load balancing inside the cluster; to serve traffic to the outside world, we can use the port-forward command, or deploy an nginx reverse proxy on any Kubernetes node. Kubernetes has many more strengths and features, which we will dig into over time.
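Besides port-forward and an nginx reverse proxy, Kubernetes itself offers a NodePort Service for external access. A hedged sketch (the service name and the nodePort value 30002 are our own choices; the port must fall in the cluster's node-port range, 30000-32767 by default):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: discovery-provider-np   # name is an assumption
  namespace: web-system
spec:
  type: NodePort
  selector:
    k8s-app: discovery-provider
  ports:
    - protocol: TCP
      port: 9002
      targetPort: 8081
      nodePort: 30002   # then reachable as http://<any-node-ip>:30002/provider/index/echo
```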
Without accumulating single steps, one cannot travel a thousand li; without accumulating small streams, there can be no rivers and seas!