
Monitoring Hadoop 3 with Prometheus

Possibly the most concise configuration guide on the web.

Download jmx_exporter

The jmx_exporter GitHub repository contains download links and usage instructions.


As the usage shows, jmx_exporter is started as a Java agent and opens a port for Prometheus to scrape:

java -javaagent:./jmx_prometheus_javaagent-0.13.0.jar=8080:config.yaml -jar yourJar.jar
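
Once the agent is attached, the metrics are served over HTTP on the port you chose. A quick sanity check (assuming the port 8080 from the example above):

curl -s http://localhost:8080/metrics | head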

Create component configuration files

jmx_exporter must be pointed at a configuration file when it starts. The file may be empty, but it has to exist. Create one for each component and leave them empty for now:

namenode.yaml
datanode.yaml
resourcemanager.yaml
nodemanager.yaml
journalnode.yaml
zkfc.yaml
httpfs.yaml
proxyserver.yaml
historyserver.yaml
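
An empty file simply means jmx_exporter's defaults apply and all MBeans are exported. If you prefer an explicit configuration, a minimal sketch (based on the jmx_exporter README; this catch-all rule just spells out the default behavior) looks like this:

rules:
- pattern: ".*"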

Plan the monitoring layout

If, like me, you are setting up monitoring for a whole Hadoop stack, I recommend planning the paths and ports for the monitoring files up front.
I keep everything monitoring-related in a directory at the same level as the big-data components themselves:

mkdir -p /opt/bigdata/monitoring
mkdir -p /opt/bigdata/monitoring/zookeeper
mkdir -p /opt/bigdata/monitoring/hadoop
mv jmx_prometheus_javaagent-0.13.0.jar /opt/bigdata/monitoring
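
With this layout in place, the empty per-component files from the previous step can be created in one go:

cd /opt/bigdata/monitoring/hadoop
touch namenode.yaml datanode.yaml resourcemanager.yaml nodemanager.yaml \
      journalnode.yaml zkfc.yaml httpfs.yaml proxyserver.yaml historyserver.yaml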

The monitoring directory holds the jmx_exporter jar; later it will also hold host-level components such as node_exporter. I keep them all in one place, with the configuration files sorted into per-component subdirectories.
The goal is to treat monitoring as a self-contained module that changes the original components as little as possible and keeps coupling low.

Hadoop Configuration

Configuring Hadoop is very simple: just pass the options in hadoop-env.sh:

cd hadoop/etc/hadoop/
vim hadoop-env.sh
export HDFS_NAMENODE_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30002:/opt/bigdata/monitoring/hadoop/namenode.yaml $HDFS_NAMENODE_OPTS"
export HDFS_DATANODE_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30003:/opt/bigdata/monitoring/hadoop/datanode.yaml $HDFS_DATANODE_OPTS"
export YARN_RESOURCEMANAGER_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30004:/opt/bigdata/monitoring/hadoop/resourcemanager.yaml $YARN_RESOURCEMANAGER_OPTS"
export YARN_NODEMANAGER_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30005:/opt/bigdata/monitoring/hadoop/nodemanager.yaml $YARN_NODEMANAGER_OPTS"
export HDFS_JOURNALNODE_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30006:/opt/bigdata/monitoring/hadoop/journalnode.yaml $HDFS_JOURNALNODE_OPTS" 
export HDFS_ZKFC_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30007:/opt/bigdata/monitoring/hadoop/zkfc.yaml $HDFS_ZKFC_OPTS"
export HDFS_HTTPFS_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30008:/opt/bigdata/monitoring/hadoop/httpfs.yaml $HDFS_HTTPFS_OPTS" 
export YARN_PROXYSERVER_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30009:/opt/bigdata/monitoring/hadoop/proxyserver.yaml $YARN_PROXYSERVER_OPTS" 
export MAPRED_HISTORYSERVER_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30010:/opt/bigdata/monitoring/hadoop/historyserver.yaml $MAPRED_HISTORYSERVER_OPTS"

That is one line per component. After editing, remember to distribute the file to all nodes and restart the cluster.
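
After the restart, each daemon should expose its metrics on the port assigned above. A quick check against the NameNode (replace namenode-host with your actual host):

curl -s http://namenode-host:30002/metrics | head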


(A small aside)
If you have searched other guides, you will find that some of them configure a lot more, including JMX-specific settings and edits to the startup scripts. Personally, I dislike modifying the components themselves like that; well-designed projects usually leave proper entry points for passing in custom configuration. In the official UnixShellGuide documentation I found this passage:

(command)_(subcommand)_OPTS

It is also possible to set options on a per subcommand basis. This allows for one to create special options for particular cases. The first part of the pattern is the command being used, but all uppercase. The second part of the command is the subcommand being used. Then finally followed by the string _OPTS.
For example, to configure mapred distcp to use a 2GB heap, one would use:
MAPRED_DISTCP_OPTS="-Xmx2g"
These options will appear after HADOOP_CLIENT_OPTS during execution and will generally take precedence.

In other words, we can pass options at the (subcommand) level using the (command)_(subcommand)_OPTS pattern. Take one of our component start commands as an example:

$HADOOP_HOME/bin/hdfs --daemon start namenode

In this command, hdfs is the command and namenode is the subcommand, so options for the NameNode are passed via HDFS_NAMENODE_OPTS.
The same mechanism is described in the yarn-env.sh, hdfs-env.sh, and mapred-env.sh environment scripts. The exports above can also be split by command into those files, which take precedence over hadoop-env.sh.
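
For example, the NameNode line could live in hdfs-env.sh instead of hadoop-env.sh; the content is identical, it just takes precedence (a sketch assuming the paths used above):

export HDFS_NAMENODE_OPTS="-javaagent:/opt/bigdata/monitoring/jmx_prometheus_javaagent-0.13.0.jar=30002:/opt/bigdata/monitoring/hadoop/namenode.yaml $HDFS_NAMENODE_OPTS"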

Prometheus Configuration

I use file-based service discovery, referencing external target files: it keeps the configuration well structured and easy to manage, and when component nodes change we only need to edit the JSON files, without restarting Prometheus.
Create a configs directory under the Prometheus root, and inside it one <component>.json file per component:

[
 {
  "targets": ["ip1:port","ip2:port","ip3:port"]
 }
]
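
A concrete namenode.json might look like the following. The hosts are placeholders, and the optional labels block (part of the standard file_sd format) attaches extra labels to every target in the group:

[
 {
  "targets": ["nn1.example.com:30002", "nn2.example.com:30002"],
  "labels": { "cluster": "hadoop-prod" }
 }
]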

Then modify the configuration file prometheus.yml:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'zookeeper'
    file_sd_configs:
    - files: 
      - configs/zookeeper.json

  
  - job_name: 'hdfs-namenode'
    file_sd_configs:
    - files:
      - configs/namenode.json

  - job_name: 'hdfs-datanode'
    file_sd_configs:
    - files:
      - configs/datanode.json
  
  - job_name: 'yarn-resourcemanager'
    file_sd_configs:
    - files:
      - configs/resourcemanager.json

  - job_name: 'yarn-nodemanager'
    file_sd_configs:
    - files:
      - configs/nodemanager.json
  
  - job_name: 'hdfs-journalnode'
    file_sd_configs:
    - files:
      - configs/journalnode.json

  - job_name: 'hdfs-zkfc'
    file_sd_configs:
    - files:
      - configs/zkfc.json

  - job_name: 'hdfs-httpfs'
    file_sd_configs:
    - files:
      - configs/httpfs.json

  - job_name: 'yarn-proxyserver'
    file_sd_configs:
    - files:
      - configs/proxyserver.json

  - job_name: 'mapred-historyserver'
    file_sd_configs:
    - files:
      - configs/historyserver.json
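
Before starting, the configuration can be validated with promtool, which ships with the Prometheus distribution. Note that changes to the JSON target files are picked up automatically at runtime; only changes to prometheus.yml itself require a restart or reload:

./promtool check config prometheus.yml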

Dashboards

For installing, configuring, and using Prometheus + Grafana, see my earlier post on monitoring Flink with Prometheus, or look online for a more detailed tutorial:
https://www.yuque.com/u552836/hu5de3/mvhz9a


Start Prometheus

nohup ./prometheus --config.file=prometheus.yml &
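
Once Prometheus is running, open the web UI at http://localhost:9090/targets to confirm every job is UP, or run a quick PromQL query against one of the job names defined above:

up{job="hdfs-namenode"}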

Start Grafana

nohup bin/grafana-server &
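
Grafana listens on port 3000 by default. Add Prometheus as a data source in the UI, or provision it from a file; a minimal sketch, typically placed under conf/provisioning/datasources/ for a tarball install (the url here assumes Prometheus runs on the same host):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090
    access: proxy
    isDefault: true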

What follows is the long slog of building dashboards...
The community does not seem to have many good dashboard templates for this yet; I may clean up and publish some of mine later.

References

https://hadoop.apache.org/docs/r3.1.3/hadoop-project-dist/hadoop-common/UnixShellGuide.html#a.28command.29_.28subcommand.29_OPTS
https://blog.csdn.net/YYC1503/article/details/102894698