The earlier article, Deploying Prometheus + Grafana Outside the Cluster to Monitor K8s, walked through the monitoring setup itself, but alerts cannot be configured from the Grafana dashboards in that setup. We therefore need to define alert rules separately in Prometheus and pair them with Alertmanager to deliver notifications. Prometheus collects a great many metrics, so deciding which alert rules to define is the next task.

In practice, operations teams usually run several monitoring systems side by side (zabbix, nagios, prometheus), each serving different monitoring needs; no single system can cover everything. With that in mind, we focus Prometheus on k8s resources and pod performance, and leave traditional server performance monitoring to the other systems.

Analysis

Based on my own usage, I group the k8s alert rules into the following categories:

  • JobDown
    Covers the four scrape jobs Prometheus collects from: kube-state-metrics, kube-node-exporter, kube-node-kubelet, and kube-node-cadvisor. If any of these jobs goes down, an alert fires.
  • PodDown
    Fires when a pod that was in the Running state stops.
  • PodReady
    After a pod is rescheduled it may show STATUS Running while it is still starting up, so Ready stays 0; only once the readiness probe passes does Ready become 1 and the pod start accepting requests. If Ready stays 0 for a long time, the pod is failing to start and should alert.
  • PodRestart
    When a pod's health checks fail, it restarts over and over; past a certain count this indicates a real problem, and an alert is needed.
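Before writing the rules themselves, it is worth confirming that all four jobs above are actually being scraped. A minimal check against the Prometheus HTTP API (the address matches the one used later in this article; adjust it to your deployment):

```shell
# Query the "up" series for every scrape job; a value of 1 means the
# last scrape succeeded. The address is an assumption -- replace it.
curl -s 'http://192.168.3.44:9090/api/v1/query?query=up' \
  | grep -o '"job":"[^"]*"'
```

Each of the four job names should appear once in the output; a missing job means its scrape target was never configured.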

Note:
When checking pod status, a pod is running normally only when READY is 1/1 and STATUS is Running.

[root@test ~]# kubectl get pod 
NAME                  READY   STATUS    RESTARTS   AGE
node-exporter-nch8w   1/1     Running   0          46h
node-exporter-smnxn   1/1     Running   0          46h
node-exporter-sqng2   1/1     Running   0          46h
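That READY/STATUS check can be scripted. A small sketch (assumes kubectl access to the cluster) that prints only the pods which are not fully ready or not Running:

```shell
# Print pods where the ready count doesn't match the container count,
# or whose status is not Running (the header row is skipped).
kubectl get pod | awk 'NR > 1 {
  split($2, r, "/")                      # READY column, e.g. "0/1"
  if (r[1] != r[2] || $3 != "Running")   # not fully ready, or not Running
    print
}'
```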

Prometheus

  1. Edit the configuration file
# Configure the Alertmanager endpoint
alerting:
  alertmanagers:
  - static_configs:
    - targets:
       - 192.168.3.44:9093
# Alert rule files
rule_files:
  - "/prometheus/etc/k8s.yml"

Note: since Prometheus here is deployed with Docker, the k8s.yml location must be the path inside the container, not the path on the host.
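For reference, a hypothetical docker run that produces this layout; the host directory and image tag are assumptions, not the article's actual command:

```shell
# Mount the host rules directory at /prometheus/etc inside the container,
# so the rule_files path in prometheus.yml resolves from the container's view.
docker run -d --name prometheus \
  -p 9090:9090 \
  -v /data/prometheus/etc:/prometheus/etc \
  prom/prometheus \
  --config.file=/prometheus/etc/prometheus.yml \
  --web.enable-lifecycle    # required for the /-/reload endpoint used later
```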

  2. Add alert rules
groups:
- name: node.rules
  rules:
  - alert: JobDown  # fires when a job's metrics endpoint has been unreachable for 5 minutes
    expr: up == 0  # 0 = down, 1 = up
    for: 5m  # the condition must hold for 5 minutes before the alert fires
    labels:
      severity: error
      cluster: k8s
    annotations:
      summary: "Job: {{ $labels.job }} down"
      description: "Instance: {{ $labels.instance }}, Job {{ $labels.job }} stopped"
  - alert: PodDown
    expr: kube_pod_container_status_running != 1
    for: 2s
    labels:
      severity: warning
      cluster: k8s
    annotations:
      summary: 'Container: {{ $labels.container }} down'
      description: 'Namespace: {{ $labels.namespace }}, Pod: {{ $labels.pod }} is not running'
  - alert: PodReady
    expr: kube_pod_container_status_ready != 1
    for: 5m  # not Ready for 5 minutes usually means the pod is failing to start
    labels:
      severity: warning
      cluster: k8s
    annotations:
      summary: 'Container: {{ $labels.container }} not ready'
      description: 'Namespace: {{ $labels.namespace }}, Pod: {{ $labels.pod }} not ready for 5 minutes'
  - alert: PodRestart
    expr: changes(kube_pod_container_status_restarts_total[30m]) > 0  # restarts within the last 30 minutes
    for: 2s
    labels:
      severity: warning
      cluster: k8s
    annotations:
      summary: 'Container: {{ $labels.container }} restart'
      description: 'Namespace: {{ $labels.namespace }}, Pod: {{ $labels.pod }} restarted {{ $value }} times'

Each rule carries two custom labels:

  • severity: warning
    The alert's severity level.
  • cluster: k8s
    The cluster label; Alertmanager routes on this label to select the right receivers and notify the right people.

Note: the for: hold durations above can be adjusted to fit your environment.
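Before relying on a rule, it can help to run its expression by hand against the Prometheus query API (address assumed, as above) and confirm it returns the series you expect:

```shell
# Run the PodReady expression ad hoc; each series returned here
# would become a pending alert once the rule is loaded.
curl -s 'http://192.168.3.44:9090/api/v1/query' \
  --data-urlencode 'query=kube_pod_container_status_ready != 1'
```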

  3. Check the config and reload
# 1. Enter the prometheus container
docker exec -it b9824084f26f /bin/sh
# 2. Check the rule file
# /bin/promtool check rules /prometheus/etc/k8s.yml
Checking /prometheus/etc/k8s.yml
  SUCCESS: 4 rules found
# 3. Reload Prometheus
curl -X POST http://127.0.0.1:9090/-/reload
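After the reload (note that /-/reload only works when Prometheus was started with --web.enable-lifecycle), one can confirm the rule group was actually picked up:

```shell
# List loaded rule group and rule names from the rules API.
curl -s http://127.0.0.1:9090/api/v1/rules \
  | grep -o '"name":"[^"]*"'
```

The output should include node.rules plus the four alert names defined above.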

 

Alertmanager

1. Alert settings

Alerts go to WeChat by default; based on labels such as department (rd), cluster, or product, they can instead be routed to email or other WeChat receivers.

global:
  resolve_timeout: 5m
  smtp_smarthost: xxx.xxx.cn:587
  smtp_from: xxxx@xxx.cn
  smtp_auth_username: xxx@xx.cn
  smtp_auth_password: xxxxxxxx

# Notification templates
templates:
- '/usr/local/alertmanager/wechat.tmpl'

route:
  # Group alerts by alertname
  group_by: ['alertname']
  # How long to wait before sending the first notification for a new group
  group_wait: 5m
  # Minimum wait before sending an updated notification for a group that already fired
  group_interval: 5m
  # Minimum wait before repeating a notification that is still firing
  repeat_interval: 5m
  # Default receiver: WeChat
  receiver: 'wechat'
  # Route to different receivers by department (rd), cluster, or product labels
  routes:
  - receiver: 'email_rd'
    match_re:
      department: 'rd'
  - receiver: 'email_k8s'
    match_re:
      cluster: 'k8s'
  - receiver: 'wechat_product'
    match_re:
      product: 'app'
receivers:
# WeChat receiver
- name: 'wechat'
  wechat_configs:
  - corp_id: 'wwxxxxfdd372e'
    agent_id: '1000005'
    api_secret: 'FxLzx7sdfasdAhPgoK9Dt-NWYOLuy-RuX3I'
    to_user: 'test1|test2'
    send_resolved: true
# Email receivers
- name: 'email_rd'  # must match the receiver name referenced by the rd route above
  email_configs:
  - to: muxq@cityhouse.cn
    headers: { Subject: '{{ template "email.test.header" . }}' }
    html: '{{ template "email.test.message" . }}'
    send_resolved: true
- name: 'email_k8s'
  email_configs:
  - to: 'xxx@xx.cn,xx@xx.cn'
    headers: { Subject: '{{ template "email.test.header" . }}' }
    html: '{{ template "email.test.message" . }}'
    send_resolved: true
# WeChat receiver for product alerts
- name: 'wechat_product'
  wechat_configs:
  - corp_id: 'wwxxxxfdd372e'
    agent_id: '1000005'
    api_secret: 'FxLzx7sdfasdAhPgoK9Dt-NWYOLuy-RuX3I'
    to_user: 'test1|test2'
    send_resolved: true
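Before restarting Alertmanager, the configuration and the routing tree can be checked with amtool (the config path here is an assumption):

```shell
# 1. Validate the configuration file
amtool check-config /usr/local/alertmanager/alertmanager.yml
# 2. Show which receiver a given label set would be routed to
amtool config routes test --config.file=/usr/local/alertmanager/alertmanager.yml cluster=k8s
# 3. Reload the running Alertmanager (the address configured in prometheus.yml above)
curl -X POST http://192.168.3.44:9093/-/reload
```

The routes test for cluster=k8s should print email_k8s, confirming the label-based routing works before any real alert fires.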

2. Alert templates

Note: .StartsAt.Add 28800e9 adds 28800e9 nanoseconds (8 hours) to convert the UTC timestamp to Beijing time.

# 1. WeChat alert template
{{ define "grafana.default.message" }}{{ range .Alerts }}
{{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
{{ range .Annotations.SortedPairs }}{{ .Name }} = {{ .Value }}
{{ end }}{{ end }}{{ end }}

{{ define "wechat.default.message" }}
{{ if eq .Status "firing"}}[Warning]:{{ template "grafana.default.message" . }}{{ end }}
{{ if eq .Status "resolved" }}[Resolved]:{{ template "grafana.default.message" . }}{{ end }}
{{ end }}

# 2. Email alert template
{{ define "email.to.html" }}
{{ range .Alerts }}
<br>
Fired at: {{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}<br>
Alert type: {{ .Labels.alertname }} <br>
Summary: {{ .Annotations.summary }} <br>
Details: {{ .Annotations.description }} <br>
{{ end }}
{{ end }}

{{ define "email.test.header" }}
{{ if eq .Status "firing"}}[Warning]:{{ range .Alerts }}{{ .Annotations.summary }} {{ end }}{{ end }}
{{ if eq .Status "resolved"}}[Resolved]:{{ range .Alerts }}{{ .Annotations.summary }} {{ end }}{{ end }}
{{ end }}

{{ define "email.test.message" }}
{{ if eq .Status "firing"}}[Warning]:{{ template "email.to.html" . }}{{ end }}
{{ if eq .Status "resolved" }}[Resolved]:{{ template "email.to.html" . }}{{ end }}
{{ end }}

3. Alert examples

(1) WeChat alerts

 

(2) Email alerts

# 1. JobDown

[Warning]:

Fired at: 2020-10-27 16:46:44

Alert type: JobDown

Summary: Job: kube-node-exporter down

Details: Instance: uvmsvr-3-217, Job kube-node-exporter stopped

[Resolved]:

Fired at: 2020-10-27 16:46:44

Alert type: JobDown

Summary: Job: kube-node-exporter down

Details: Instance: uvmsvr-3-217, Job kube-node-exporter stopped

# 2. PodDown

[Warning]:

Fired at: 2020-10-28 14:03:14

Alert type: PodDown

Summary: Container: sdk-back down

Details: Namespace: test, Pod: api-sdk-58497c4db9-hwgjl is not running

[Resolved]:

Fired at: 2020-10-28 14:03:14

Alert type: PodDown

Summary: Container: sdk-back down

Details: Namespace: test, Pod: api-sdk-58497c4db9-hwgjl is not running

# 3. PodReady

[Warning]:

Fired at: 2020-10-29 11:24:59

Alert type: PodReady

Summary: Container: sdk-back not ready

Details: Namespace: test, Pod: api-sdk-58497c4db9-k98g5 not ready for 5 minutes

[Resolved]:

Fired at: 2020-10-29 11:24:59

Alert type: PodReady

Summary: Container: sdk-back not ready

Details: Namespace: test, Pod: api-sdk-58497c4db9-k98g5 not ready for 5 minutes

# 4. PodRestart

[Warning]:

Fired at: 2020-10-29 11:20:29

Alert type: PodRestart

Summary: Container: sdk-back restart

Details: Namespace: test, Pod: api-sdk-58497c4db9-k98g5 restarted 3 times

[Resolved]:

Fired at: 2020-10-29 11:20:29

Alert type: PodRestart

Summary: Container: sdk-back restart

Details: Namespace: test, Pod: api-sdk-58497c4db9-k98g5 restarted 3 times

Summary

Beyond the rules above, we can derive further alerts from the panels in the Grafana dashboards, since they monitor things in much more detail. In short, alert rules should be tailored to actual business needs.