Deploying Prometheus + Grafana on K8s
I. Introduction
1. Prometheus
- An open-source combination of monitoring, alerting, and a time-series database, originally developed at SoundCloud.
- Its basic principle is to periodically scrape the state of monitored components over HTTP. The advantage is that any component can be brought into the monitoring system simply by exposing an HTTP endpoint, with no SDK or other integration work, which makes it a good fit for virtualized environments such as VMs or Docker.
- The HTTP endpoint that exposes a component's metrics is called an exporter. Most components commonly used in industry already have exporters that can be used directly, e.g. Varnish, HAProxy, Nginx, MySQL, and Linux system information (disk, memory, CPU, network, etc.). For the supported sources, see https://github.com/prometheus. (A sample of the exposition format such endpoints return is shown after this list.)
- Features:
  - A multi-dimensional data model (time series identified by metric name and key/value label pairs)
  - Very efficient storage: a sample takes roughly 3.5 bytes on average; 3.2 million time series scraped every 30 seconds and retained for 60 days consume about 228 GB of disk
  - A flexible query language
  - No reliance on distributed storage; single server nodes are autonomous
  - Time-series collection happens via a pull model over HTTP
  - Pushing time series is supported via an intermediary gateway
  - Targets are discovered via service discovery or static configuration
  - Multiple modes of graphing and dashboard support
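For illustration, this is what an exporter's /metrics endpoint returns: plain text in the Prometheus exposition format. The metric below is made up; real exporters expose hundreds of series in the same `# HELP` / `# TYPE` / sample layout.

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3
```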
2. Grafana
- A cross-platform, open-source metrics analysis and visualization tool: it queries the collected data, visualizes it, and sends timely notifications.
- Features:
  - Visualization: fast and flexible client-side graphs; panel plugins provide many ways to visualize metrics and logs, and the official library offers a rich set of panels such as heatmaps, line charts, and graphs
  - Data sources: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, KairosDB, and more
  - Alerting: visually define alert rules for your most important metrics; Grafana evaluates them continuously and sends notifications via Slack, PagerDuty, etc. when data crosses a threshold
  - Mixed data sources: mix different data sources in the same graph; the data source can be specified per query, and custom data sources are supported as well
  - Annotations: annotate graphs with rich events from different data sources; hovering over an event shows the full event metadata and tags
  - Filters: ad-hoc filters let you dynamically create new key/value filters, which are automatically applied to all queries that use that data source
3. Preview
II. Deployment
```
$ kubectl create ns ns-monitor
$ kubectl create -f ...
$ kubectl get all -n ns-monitor
NAME                              READY   STATUS    RESTARTS   AGE
pod/node-exporter-rcbss           1/1     Running   0          4h41m
pod/grafana-5567c66c9d-49b5w      1/1     Running   0          4h25m
pod/prometheus-5ccc8db98f-lkwf5   1/1     Running   0          3h12m

NAME                            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/node-exporter-service   NodePort   10.43.75.152    <none>        9100:31672/TCP   4h41m
service/grafana-service         NodePort   10.43.26.238    <none>        3000:32534/TCP   4h25m
service/prometheus-service      NodePort   10.43.174.110   <none>        9090:31396/TCP   3h12m
```
The grafana-service and prometheus-service Services do not pin a fixed `nodePort`, so their ports are assigned randomly.
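To reach the two UIs, use the NodePorts from the `kubectl get all` output above, or port-forward the Services; a quick sketch (`<node-ip>` is a placeholder for any node in the cluster):

```sh
# Via the randomly assigned NodePorts shown above
$ curl -I http://<node-ip>:31396/graph    # Prometheus web UI
$ curl -I http://<node-ip>:32534/login    # Grafana login page

# Or forward the Services to localhost without relying on NodePorts
$ kubectl port-forward -n ns-monitor svc/prometheus-service 9090:9090
$ kubectl port-forward -n ns-monitor svc/grafana-service 3000:3000
```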
1. node-exporter
- Collects physical metrics, such as memory and CPU, from every node in the K8s cluster. It can also be installed directly on each physical node.
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: ns-monitor
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.16.0
          ports:
            - containerPort: 9100
              protocol: TCP
              name: http
      hostNetwork: true   # needed to collect the node's physical metrics
      hostPID: true       # needed to collect the node's physical metrics
      # tolerations:      # allow scheduling onto master nodes
      #   - effect: NoSchedule
      #     operator: Exists
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter-service
  namespace: ns-monitor
spec:
  ports:
    - name: http
      port: 9100
      nodePort: 31672
      protocol: TCP
  type: NodePort
  selector:
    app: node-exporter
```
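A quick way to check that the DaemonSet is running on every node and actually exporting data (a sketch; `<node-ip>` is a placeholder):

```sh
# One node-exporter pod should be running per node
$ kubectl get pods -n ns-monitor -l app=node-exporter -o wide

# Because hostNetwork is enabled, metrics are served on port 9100 of each node
# (and on NodePort 31672 through node-exporter-service)
$ curl -s http://<node-ip>:9100/metrics | grep -c '^node_'
```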
2. Prometheus
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - nodes
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - watch
      - list
  - nonResourceURLs: ["/metrics"]
    verbs:
      - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: ns-monitor
  labels:
    app: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: ns-monitor
roleRef:
  kind: ClusterRole
  name: prometheus
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-conf
  namespace: ns-monitor
  labels:
    app: prometheus
data:
  prometheus.yml: |-
    # my global config
    global:
      scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"
      # Load the alert rules mounted from the prometheus-rules ConfigMap below;
      # without this entry the cpu-usage rule is never evaluated.
      - "/etc/prometheus/rules/*.rule"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
        static_configs:
          - targets: ['localhost:9090']

      - job_name: 'grafana'
        static_configs:
          - targets:
              - 'grafana-service.ns-monitor:3000'

      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        # Default to scraping over https. If required, just disable this or change to `http`.
        scheme: https
        # This TLS & bearer token file config is used to connect to the actual scrape
        # endpoints for cluster components. This is separate to discovery auth
        # configuration because discovery & scraping are two separate concerns in
        # Prometheus. The discovery auth config is automatic if Prometheus runs inside
        # the cluster. Otherwise, more config options have to be provided within the
        # <kubernetes_sd_config>.
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          # If your node certificates are self-signed or use a different CA to the
          # master CA, then disable certificate verification below. Note that
          # certificate verification is an integral part of a secure infrastructure
          # so this should only be disabled in a controlled environment. You can
          # disable certificate verification by uncommenting the line below.
          #
          # insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        # Keep only the default/kubernetes service endpoints for the https port. This
        # will add targets for each API server which Kubernetes adds an endpoint to
        # the default/kubernetes service.
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      # Scrape config for nodes (kubelet).
      #
      # Rather than connecting directly to the node, the scrape is proxied through the
      # Kubernetes apiserver. This means it will work if Prometheus is running out of
      # cluster, or can't connect to nodes for some other reason (e.g. because of
      # firewalling).
      - job_name: 'kubernetes-nodes'
        # Default to scraping over https. If required, just disable this or change to `http`.
        scheme: https
        # This TLS & bearer token file config is used to connect to the actual scrape
        # endpoints for cluster components (see the note on the apiservers job above).
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics

      # Scrape config for kubelet cAdvisor.
      #
      # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
      # (those whose names begin with 'container_') have been removed from the
      # kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
      # retrieve those metrics.
      #
      # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
      # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
      # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
      # the --cadvisor-port=0 kubelet flag).
      #
      # This job is not necessary and should be removed in Kubernetes 1.6 and
      # earlier versions, or it will cause the metrics to be scraped twice.
      - job_name: 'kubernetes-cadvisor'
        # Default to scraping over https. If required, just disable this or change to `http`.
        scheme: https
        # This TLS & bearer token file config is used to connect to the actual scrape
        # endpoints for cluster components (see the note on the apiservers job above).
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

      # Scrape config for service endpoints.
      #
      # The relabeling allows the actual service scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
      # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
      #   to set this to `https` & most likely set the `tls_config` of the scrape config.
      # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
      # * `prometheus.io/port`: If the metrics are exposed on a different port to the
      #   service then set this appropriately.
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name

      # Example scrape config for probing services via the Blackbox Exporter.
      #
      # The relabeling allows the actual service scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/probe`: Only probe services that have a value of `true`
      - job_name: 'kubernetes-services'
        metrics_path: /probe
        params:
          module: [http_2xx]
        kubernetes_sd_configs:
          - role: service
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
            action: keep
            regex: true
          - source_labels: [__address__]
            target_label: __param_target
          - target_label: __address__
            replacement: blackbox-exporter.example.com:9115
          - source_labels: [__param_target]
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name

      # Example scrape config for probing ingresses via the Blackbox Exporter.
      #
      # The relabeling allows the actual ingress scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/probe`: Only probe services that have a value of `true`
      - job_name: 'kubernetes-ingresses'
        metrics_path: /probe
        params:
          module: [http_2xx]
        kubernetes_sd_configs:
          - role: ingress
        relabel_configs:
          - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
            regex: (.+);(.+);(.+)
            replacement: ${1}://${2}${3}
            target_label: __param_target
          - target_label: __address__
            replacement: blackbox-exporter.example.com:9115
          - source_labels: [__param_target]
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_ingress_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_ingress_name]
            target_label: kubernetes_name

      # Example scrape config for pods
      #
      # The relabeling allows the actual pod scrape endpoint to be configured via the
      # following annotations:
      #
      # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
      # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
      # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
      #   pod's declared ports (default is a port-free target if none are declared).
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: ns-monitor
  labels:
    app: prometheus
data:
  cpu-usage.rule: |
    groups:
      - name: NodeCPUUsage
        rules:
          - alert: NodeCPUUsage
            expr: (100 - (avg by (instance) (irate(node_cpu{name="node-exporter",mode="idle"}[5m])) * 100)) > 75
            for: 2m
            labels:
              severity: "page"
            annotations:
              summary: "{{$labels.instance}}: High CPU usage detected"
              description: "{{$labels.instance}}: CPU usage is above 75% (current value is: {{ $value }})"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "prometheus-data-pv"
  labels:
    name: prometheus-data-pv
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/prometheus/data
    server: 192.168.11.210
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: prometheus-data-pv
      release: stable
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: ns-monitor
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      securityContext:
        runAsUser: 0
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /prometheus
              name: prometheus-data-volume
            - mountPath: /etc/prometheus/prometheus.yml
              name: prometheus-conf-volume
              subPath: prometheus.yml
            - mountPath: /etc/prometheus/rules
              name: prometheus-rules-volume
          ports:
            - containerPort: 9090
              protocol: TCP
      volumes:
        - name: prometheus-data-volume
          persistentVolumeClaim:
            claimName: prometheus-data-pvc
        - name: prometheus-conf-volume
          configMap:
            name: prometheus-conf
        - name: prometheus-rules-volume
          configMap:
            name: prometheus-rules
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: prometheus
  name: prometheus-service
  namespace: ns-monitor
spec:
  ports:
    - port: 9090
      targetPort: 9090
  selector:
    app: prometheus
  type: NodePort
```
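After the Deployment is ready, Status → Targets in the Prometheus UI should list the jobs defined in prometheus-conf. The HTTP API offers a quick check as well; a sketch using the NodePort 31396 from the output above (`<node-ip>` is any node in the cluster):

```sh
# Count target health states; everything should eventually be "up"
$ curl -s http://<node-ip>:31396/api/v1/targets | grep -o '"health":"[a-z]*"' | sort | uniq -c

# The built-in `up` metric is 1 for every target Prometheus can scrape successfully
$ curl -s 'http://<node-ip>:31396/api/v1/query?query=up'
```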
3. Grafana
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "grafana-data-pv"
  labels:
    name: grafana-data-pv
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/grafana/data
    server: 192.168.11.210
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: grafana-data-pv
      release: stable
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: ns-monitor
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        runAsUser: 0
      containers:
        - name: grafana
          image: grafana/grafana:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "false"
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-data-volume
          ports:
            - containerPort: 3000
              protocol: TCP
      volumes:
        - name: grafana-data-volume
          persistentVolumeClaim:
            claimName: grafana-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: grafana
  name: grafana-service
  namespace: ns-monitor
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: grafana
  type: NodePort
```
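A sketch of verifying that the Grafana rollout finished and the readiness probe (the /login check in the Deployment above) passes:

```sh
# Wait until the Deployment reports all replicas available
$ kubectl rollout status deployment/grafana -n ns-monitor

# The readiness probe hits /login; the same check can be done by hand through the Service
$ kubectl port-forward -n ns-monitor svc/grafana-service 3000:3000 &
$ curl -I http://localhost:3000/login
```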
Configure the data source
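Inside the cluster, Grafana can reach Prometheus through the Service DNS name `prometheus-service.ns-monitor:9090`. Instead of clicking through the UI, the data source can also be created via Grafana's HTTP API; a sketch assuming the default admin/admin credentials and the Grafana NodePort from above:

```sh
# Create a Prometheus data source pointing at the in-cluster Service
$ curl -s -u admin:admin -H 'Content-Type: application/json' \
    -X POST http://<node-ip>:32534/api/datasources \
    -d '{
          "name": "Prometheus",
          "type": "prometheus",
          "url": "http://prometheus-service.ns-monitor:9090",
          "access": "proxy",
          "isDefault": true
        }'
```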
Import dashboard from file (optional)
https://files.cnblogs.com/files/lb477/kubernetes-pod-resources.json