Host configuration plan
Server name (hostname)    OS version    Spec    Internal IP    External IP (simulated)
k8s-master    CentOS7.7    2C/4G/20G    172.16.1.110    10.0.0.110
k8s-node01    CentOS7.7    2C/4G/20G    172.16.1.111    10.0.0.111
k8s-node02    CentOS7.7    2C/4G/20G    172.16.1.112    10.0.0.112
What is Helm
Before Helm, deploying an application on Kubernetes meant creating the Deployment, the Service, and the other objects one by one, which is tedious. And as more projects move to microservices, deploying and managing complex applications in containers becomes even harder.

By packaging applications, Helm supports release versioning and control, which greatly simplifies deploying and managing applications on Kubernetes.

In essence, Helm makes the Kubernetes objects that define an application (Deployment, Service, and so on) configurable and dynamically generated: it renders the resource manifests (deployment.yaml, service.yaml) from templates and then submits them to Kubernetes for deployment.
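
The idea can be sketched with plain shell substitution. This is a toy illustration only (Helm actually renders Go templates against a values.yaml, as the example later in this article shows); the file name and placeholders here are made up:

```shell
# Toy stand-in for template rendering: a manifest template plus a set
# of values produces a concrete manifest.
cat > /tmp/deploy.tpl <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: __NAME__
spec:
  replicas: __REPLICAS__
EOF

# the "values" for this render
name=my-test-app
replicas=2

# substitute the values into the template, like a very crude `helm install`
sed -e "s/__NAME__/$name/" -e "s/__REPLICAS__/$replicas/" /tmp/deploy.tpl
```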

Helm is the official package manager for Kubernetes, comparable to YUM; it packages the deployment workflow of an environment. Helm has three key concepts: chart, release, and repository.

A chart is the collection of information needed to create an application: configuration templates for the Kubernetes objects, parameter definitions, dependencies, documentation, and so on. Think of a chart as the equivalent of an apt or yum package.
A release is a running instance of a chart. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into the same cluster multiple times, each installation producing its own release (with different values, one chart can produce several distinct releases).
A repository is where charts are published and stored.
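
To make the chart concept concrete, the sketch below builds a minimal chart directory by hand (all names here are illustrative; `helm create` generates a fuller scaffold):

```shell
# A chart is just a directory with Chart.yaml, values.yaml and templates/.
chart=/tmp/demo-chart
mkdir -p "$chart/templates"
cat > "$chart/Chart.yaml" <<'EOF'
apiVersion: v1
name: demo-chart
version: 0.1.0
description: a minimal demo chart
EOF
cat > "$chart/values.yaml" <<'EOF'
replicaCount: 1
EOF
ls -R "$chart"
```
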
Helm consists of two components, the Helm client and the Tiller server:

The Helm client creates and manages charts and releases and talks to Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API server.

Deploying Helm
More and more companies and teams use Helm, the Kubernetes package manager, and we will use it to install common Kubernetes components. Helm consists of the helm client command and the tiller server.

Helm's GitHub repository:

https://github.com/helm/helm

Version deployed here: Helm v2.16.9

Installing Helm

[root@k8s-master software]# pwd
/root/software 
[root@k8s-master software]# wget https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz 
[root@k8s-master software]# 
[root@k8s-master software]# tar xf helm-v2.16.9-linux-amd64.tar.gz
[root@k8s-master software]# ll
total 12624
-rw-r--r-- 1 root root 12926032 Jun 16 06:55 helm-v2.16.9-linux-amd64.tar.gz
drwxr-xr-x 2 3434 3434       50 Jun 16 06:55 linux-amd64
[root@k8s-master software]# 
[root@k8s-master software]# cp -a linux-amd64/helm /usr/bin/helm
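
Before copying the binary into place it is also worth verifying the download. The hash comparison looks like the sketch below; it is demonstrated on a stand-in file so the commands run anywhere, and whether a published hash is available for your mirror is an assumption you should check:

```shell
# Compute the tarball's SHA-256 and compare it with the expected hash.
# /tmp/helm.tar.gz and $expected are stand-ins for the real download
# and the hash published alongside it.
echo "pretend-tarball" > /tmp/helm.tar.gz
expected=$(sha256sum /tmp/helm.tar.gz | awk '{print $1}')   # stand-in value
actual=$(sha256sum /tmp/helm.tar.gz | awk '{print $1}')
[ "$expected" = "$actual" ] && echo "checksum OK"
```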

Because the Kubernetes API server has RBAC enabled, we need to create a service account for Tiller (named tiller) and bind a suitable role to it. For simplicity, we bind it to the built-in cluster-admin ClusterRole.

[root@k8s-master helm]# pwd
/root/k8s_practice/helm
[root@k8s-master helm]# 
[root@k8s-master helm]# cat rbac-helm.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@k8s-master helm]# 
[root@k8s-master helm]# kubectl apply -f rbac-helm.yaml 
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created 
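
Binding cluster-admin keeps this walkthrough simple, but in production Tiller is usually given a narrower role. Below is a sketch of a namespace-scoped alternative; the role name, namespace, and rules are illustrative, not from this article, and the rules Tiller actually needs depend on what your charts deploy:

```yaml
# Illustrative only: a namespaced Role instead of cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager        # illustrative name
  namespace: dev              # the only namespace Tiller may manage
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```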

Initialize the Helm client and server

[root@k8s-master helm]# helm init --service-account tiller
………………
[root@k8s-master helm]# kubectl get pod -n kube-system -o wide | grep 'tiller'
tiller-deploy-8488d98b4c-j8txs       0/1     Pending   0          38m     <none>         <none>       <none>           <none>
[root@k8s-master helm]# 
##### The pod has not come up because pulling the tiller image fails; describe the pod to see which image it needs
[root@k8s-master helm]# kubectl describe pod tiller-deploy-8488d98b4c-j8txs -n kube-system
Name:           tiller-deploy-8488d98b4c-j8txs
Namespace:      kube-system
Priority:       0
Node:           <none>
Labels:         app=helm
                name=tiller
                pod-template-hash=8488d98b4c
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/tiller-deploy-8488d98b4c
Containers:
  tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.16.9
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-kjqb7 (ro)
Conditions:
………………

As shown above, the image pull fails: the image is hosted on gcr.io, which is unreachable from here, so we switch Tiller to a mirrored image:

[root@k8s-master helm]# helm init --upgrade --tiller-image registry.cn-beijing.aliyuncs.com/google_registry/tiller:v2.16.9
[root@k8s-master helm]# 
### After waiting a moment
[root@k8s-master helm]# kubectl get pod -o wide -A | grep 'till'
kube-system    tiller-deploy-7b7787d77-zln6t    1/1     Running   0    8m43s   10.244.4.123   k8s-node01   <none>    <none>

As shown above, the Tiller server has been deployed successfully.

Check the Helm version

[root@k8s-master helm]# helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"dirty"}

Using Helm

Chart repositories

The chart repositories Helm uses by default:

[root@k8s-master helm]# helm repo list
NAME  	URL 
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879/charts      

Where Helm stores downloaded chart archives:

/root/.helm/cache/archive

Common application operations

# List all applications available in the chart repositories
helm search
# Search for a specific application
helm search memcached
# Show detailed information about a specific application
helm inspect stable/memcached
# Install a package with helm; --name specifies the release name
helm install --name memcached1 stable/memcached
# List the installed packages (releases)
helm list
# Delete the specified release
helm delete memcached1

Common Helm commands

Chart management

create: create a new chart with the given name
fetch: download a chart from a repository and (optionally) unpack it into a local directory
inspect: show chart details
package: package a chart directory into a chart archive
lint: run a syntax check on a chart
verify: verify that the chart at the given path has been signed and is valid

Release management

get: download the information of a named release
delete: delete the release with the given name from Kubernetes
install: install a chart
list: list releases
upgrade: upgrade a release
rollback: roll a release back to a previous revision
status: show the status of a release
history: fetch the history of a release

Common Helm operations

# Add a repository
helm repo add REPO_INFO   # e.g.: helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
##### Examples
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm repo add elastic https://helm.elastic.co
# List the configured repositories
helm repo list
# Create a chart scaffold (useful as a reference; in practice charts are usually written by hand)
helm create CHART_PATH
# Deploy a release from the specified chart
helm install --name RELEASE_NAME CHART_PATH
# Simulate installing a release from the specified chart and print debug information
helm install --dry-run --debug --name RELEASE_NAME CHART_PATH
# List deployed releases
helm list
# List all releases, including deleted ones
helm list --all
# Show the status of the specified release
helm status RELEASE_NAME
# Roll back a release to the specified revision (a Helm release revision number)
helm rollback RELEASE_NAME REVISION_NUM
# Show the history of the specified release
helm history RELEASE_NAME
# Package the specified chart
helm package CHART_PATH    # e.g.: helm package my-test-app/
# Run a syntax check on the specified chart
helm lint CHART_PATH
# Show details of the specified chart
helm inspect CHART_PATH
# Delete the resources of the specified release from Kubernetes (the release record remains visible in helm list --all)
helm delete RELEASE_NAME
# Delete the resources of the specified release from Kubernetes and purge the release record as well
helm delete --purge RELEASE_NAME

The operations above are best read together with the example below, which shows more detail.

Helm example

Chart files

[root@k8s-master helm]# pwd
/root/k8s_practice/helm
[root@k8s-master helm]# 
[root@k8s-master helm]# mkdir my-test-app
[root@k8s-master helm]# cd my-test-app
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# ll
total 8
-rw-r--r-- 1 root root 158 Jul 16 17:53 Chart.yaml
drwxr-xr-x 2 root root  49 Jul 16 21:04 templates
-rw-r--r-- 1 root root 129 Jul 16 21:04 values.yaml
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat Chart.yaml 
apiVersion: v1
appVersion: v2.2
description: my test app
keywords:
- myapp
maintainers:
- email: zhang@test.com
  name: zhang
# This name must match the chart directory name
name: my-test-app
version: v1.0.0
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat values.yaml 
deployname: my-test-app02
replicaCount: 2
images:
  repository: registry.cn-beijing.aliyuncs.com/google_registry/myapp
  tag: v2
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# ll templates/
total 8
-rw-r--r-- 1 root root 544 Jul 16 21:04 deployment.yaml
-rw-r--r-- 1 root root 222 Jul 16 20:41 service.yaml
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployname }}
  labels:
    app: mytestapp-deploy
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: mytestapp
      env: test
  template:
    metadata:
      labels:
        app: mytestapp
        env: test
        description: mytest
    spec:
      containers:
      - name: myapp-pod
        image: {{ .Values.images.repository }}:{{ .Values.images.tag }}
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80

[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat templates/service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: my-test-app
  namespace: default
spec:
  type: NodePort
  selector:
    app: mytestapp
    env: test
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
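
One refinement worth knowing (not used in the files above, and purely illustrative): a template can guard against values that were never set by piping them through Helm's built-in `default` function:

```yaml
# Falls back to 1 replica when replicaCount is missing from values.yaml.
spec:
  replicas: {{ .Values.replicaCount | default 1 }}
```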

Create a release

[root@k8s-master my-test-app]# pwd
/root/k8s_practice/helm/my-test-app
[root@k8s-master my-test-app]# ll
total 8
-rw-r--r-- 1 root root 160 Jul 16 21:15 Chart.yaml
drwxr-xr-x 2 root root  49 Jul 16 21:04 templates
-rw-r--r-- 1 root root 129 Jul 16 21:04 values.yaml
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# helm install --name mytest-app01 .   ### from the parent directory: helm install --name mytest-app01 my-test-app/
NAME:   mytest-app01
LAST DEPLOYED: Thu Jul 16 21:18:08 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME           READY  UP-TO-DATE  AVAILABLE  AGE
my-test-app02  0/2    2           0          0s

==> v1/Pod(related)
NAME                            READY  STATUS             RESTARTS  AGE
my-test-app02-58cb6b67fc-4ss4v  0/1    ContainerCreating  0         0s
my-test-app02-58cb6b67fc-w2nhc  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
my-test-app  NodePort  10.110.82.62  <none>       80:30965/TCP  0s

[root@k8s-master my-test-app]# helm list
NAME        	REVISION	UPDATED                 	STATUS  	CHART             	APP VERSION	NAMESPACE
mytest-app01	1       	Thu Jul 16 21:18:08 2020	DEPLOYED	my-test-app-v1.0.0	v2.2       	default  

Access with curl

[root@k8s-master ~]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
my-test-app02-58cb6b67fc-4ss4v   1/1     Running   0          9m3s   10.244.2.187   k8s-node02   <none>           <none>
my-test-app02-58cb6b67fc-w2nhc   1/1     Running   0          9m3s   10.244.4.134   k8s-node01   <none>           <none>
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get svc -o wide
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP        65d    <none>
my-test-app   NodePort    10.110.82.62   <none>        80:30965/TCP   9m8s   app=mytestapp,env=test
[root@k8s-master ~]#
##### Access via the Service IP
[root@k8s-master ~]# curl 10.110.82.62
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 10.110.82.62/hostname.html
my-test-app02-58cb6b67fc-4ss4v
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 10.110.82.62/hostname.html
my-test-app02-58cb6b67fc-w2nhc
[root@k8s-master ~]# 
##### Access via the host IP
[root@k8s-master ~]# curl 172.16.1.110:30965/hostname.html
my-test-app02-58cb6b67fc-w2nhc
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 172.16.1.110:30965/hostname.html
my-test-app02-58cb6b67fc-4ss4v

Updating the chart

Modify values.yaml

[root@k8s-master my-test-app]# pwd
/root/k8s_practice/helm/my-test-app
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# cat values.yaml 
deployname: my-test-app02
replicaCount: 2
images:
  repository: registry.cn-beijing.aliyuncs.com/google_registry/myapp
  # tag changed from v2 to v3
  tag: v3

Upgrade the release

[root@k8s-master my-test-app]# helm list
NAME        	REVISION	UPDATED                 	STATUS  	CHART             	APP VERSION	NAMESPACE
mytest-app01	1       	Thu Jul 16 21:18:08 2020	DEPLOYED	my-test-app-v1.0.0	v2.2       	default  
[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# helm upgrade mytest-app01 .    ### from the parent directory: helm upgrade mytest-app01 my-test-app/
Release "mytest-app01" has been upgraded.
LAST DEPLOYED: Thu Jul 16 21:32:25 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME           READY  UP-TO-DATE  AVAILABLE  AGE
my-test-app02  2/2    1           2          14m

==> v1/Pod(related)
NAME                            READY  STATUS             RESTARTS  AGE
my-test-app02-58cb6b67fc-4ss4v  1/1    Running            0         14m
my-test-app02-58cb6b67fc-w2nhc  1/1    Running            0         14m
my-test-app02-6b84df49bb-lpww7  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
my-test-app  NodePort  10.110.82.62  <none>       80:30965/TCP  14m


[root@k8s-master my-test-app]# 
[root@k8s-master my-test-app]# helm list
NAME        	REVISION	UPDATED                 	STATUS  	CHART             	APP VERSION	NAMESPACE
mytest-app01	2       	Thu Jul 16 21:32:25 2020	DEPLOYED	my-test-app-v1.0.0	v2.2       	default  

Access with curl as shown above; the version reported by the application has changed from v2 to v3.
