
Installing and Deploying Prometheus, a Powerful Tool for Microservice Monitoring


This article covers how to set up a Prometheus + Grafana monitoring environment on Kubernetes.

Basic concepts

Prometheus provides a complete solution for data collection, storage, processing, visualization, and alerting in the container and cloud-native space. It was originally developed at SoundCloud starting in 2012, and its community has grown steadily since it was open-sourced. Prometheus went on to become the second project, after Kubernetes, to be officially accepted into the CNCF.

Key features of Prometheus

  • A multi-dimensional data model (time series identified by metric names and key/value label pairs).
  • A flexible query and aggregation language (PromQL).
  • No dependence on distributed storage; individual server nodes are autonomous.
  • Time-series data is collected over HTTP using a pull model (a minimal scrape configuration is sketched after this list).
  • A push model is also supported via the Pushgateway, an optional Prometheus component.
  • Scrape targets can be discovered dynamically via service discovery or defined through static configuration.
  • Support for a variety of graphs and dashboards.
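
As an illustration of the pull model and static target configuration, here is a minimal, hypothetical prometheus.yml snippet; the job name and target address are examples for illustration only, not values from this article:

global:
  scrape_interval: 15s                     # how often Prometheus pulls metrics from its targets

scrape_configs:
  - job_name: 'node'                       # hypothetical job scraping a node_exporter instance
    static_configs:
      - targets: ['192.168.1.10:9100']     # assumed address of the node_exporter endpoint

A PromQL expression such as rate(node_cpu_seconds_total[5m]) could then be used to aggregate the collected series.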

What can Prometheus monitor?

  • Kubernetes, Docker, MySQL, Redis, Elasticsearch, Consul, RabbitMQ, Zabbix, and more.

Prometheus架构图

(Prometheus architecture diagram)

Installing and deploying Prometheus

Installing Helm

Helm is a command-line client tool. It is used mainly to create, package, and publish charts for Kubernetes applications, and to create and manage local and remote chart repositories.

[root@syj ~]# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-rc.2-linux-amd64.tar.gz
[root@syj ~]# tar -zxvf helm-v2.13.1-rc.2-linux-amd64.tar.gz
[root@syj ~]# cp linux-amd64/helm /usr/local/bin/
[root@syj ~]# helm version
Client: &version.Version{SemVer:"v2.13.1-rc.2", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Error: could not find tiller
Installing the Tiller server

Tiller is the server-side component of Helm and runs inside the Kubernetes cluster. Tiller receives requests from the Helm client, renders the chart into Kubernetes manifests (what Helm calls a Release), and submits them to Kubernetes to create the application. Tiller also handles upgrading, deleting, rolling back, and otherwise managing Releases.

Create rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply it:

[root@syj ~]# kubectl apply -f rbac-config.yaml 
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Install Tiller using the Alibaba Cloud mirror:

[root@syj ~]# helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Check the result:

[root@syj ~]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
[root@syj ~]# helm repo list
NAME    URL                                                   
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local   http://127.0.0.1:8879/charts
Deploying Prometheus Operator

Create a namespace:

[root@syj ~]# kubectl create namespace monitoring

Download Prometheus Operator:

[root@syj ~]# wget https://github.com/coreos/prometheus-operator/archive/release-0.29.zip

Unzip the downloaded archive, rename the resulting directory to prometheus-operator, and cd into it, for example:
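
A sketch of these steps, assuming the GitHub archive extracts to a directory named prometheus-operator-release-0.29 (the actual directory name may differ):

[root@syj ~]# unzip release-0.29.zip
[root@syj ~]# mv prometheus-operator-release-0.29 prometheus-operator   # rename to the expected directory name
[root@syj ~]# cd prometheus-operator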
Then install the Prometheus Operator, Prometheus, and Alertmanager charts:

helm install --name prometheus-operator --set rbacEnable=true --namespace=monitoring helm/prometheus-operator
helm install --name prometheus --set serviceMonitorsSelector.app=prometheus --set ruleSelector.app=prometheus --namespace=monitoring helm/prometheus
helm install --name alertmanager --namespace=monitoring helm/alertmanager

Verify:

[root@syj ~]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-alertmanager-0            2/2     Running   0          58s
prometheus-operator-545b59ffc9-6g7dg   1/1     Running   0          6m32s
prometheus-prometheus-0                3/3     Running   1          3m31s
[root@syj ~]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager            ClusterIP   10.98.237.7      <none>        9093/TCP            87s
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,6783/TCP   87s
prometheus              ClusterIP   10.104.185.104   <none>        9090/TCP            4m
prometheus-operated     ClusterIP   None             <none>        9090/TCP            4m

Installing kube-prometheus

[root@syj ~]# mkdir -p helm/kube-prometheus/charts
[root@syj ~]# helm package -d helm/kube-prometheus/charts helm/alertmanager helm/grafana helm/prometheus  helm/exporter-kube-dns \
> helm/exporter-kube-scheduler helm/exporter-kubelets helm/exporter-node helm/exporter-kube-controller-manager \
> helm/exporter-kube-etcd helm/exporter-kube-state helm/exporter-coredns helm/exporter-kubernetes
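
The commands above only package the dependent charts into helm/kube-prometheus/charts; the kube-prometheus chart itself still has to be installed. A plausible install command based on the chart layout used above (the release name and chart path are assumptions; see the referenced article for the exact invocation):

[root@syj ~]# helm install --name kube-prometheus --namespace=monitoring helm/kube-prometheus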

Verify:

[root@syj ~]# kubectl get svc -n monitoring
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager                          ClusterIP   10.98.237.7      <none>        9093/TCP            34m
alertmanager-operated                 ClusterIP   None             <none>        9093/TCP,6783/TCP   34m
kube-prometheus                       ClusterIP   10.101.249.82    <none>        9090/TCP            29s
kube-prometheus-alertmanager          ClusterIP   10.100.29.63     <none>        9093/TCP            29s
kube-prometheus-exporter-kube-state   ClusterIP   10.98.91.146     <none>        80/TCP              29s
kube-prometheus-exporter-node         ClusterIP   10.98.34.11      <none>        9100/TCP            29s
kube-prometheus-grafana               ClusterIP   10.108.208.247   <none>        80/TCP              29s
prometheus                            ClusterIP   10.104.185.104   <none>        9090/TCP            36m
prometheus-operated                   ClusterIP   None             <none>        9090/TCP            36m

Change the Grafana Service type to NodePort:

kubectl patch svc kube-prometheus-grafana -p '{"spec":{"type":"NodePort"}}' -n monitoring
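
The NodePort that Kubernetes assigns to the Grafana service can be looked up with the following command (the port shown will vary from cluster to cluster):

[root@syj ~]# kubectl get svc kube-prometheus-grafana -n monitoring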

Grafana can now be accessed on the NodePort assigned to the service (31106 in this example):

http://ip:31106

The installation process above follows this article: https://blog.csdn.net/wangzan18/article/details/85270816

Grafana dashboard templates can be found at:
https://grafana.com/dashboards

