
Hive Execution Engine: Tez

程序员文章站 2022-04-29 08:58:04

    Tez is an execution engine for Hive that outperforms MapReduce (MR). Why is it faster? See the figure below.

[Figure: four dependent MR jobs vs. a single Tez DAG]

    When Hive compiles a query directly to MR, suppose it produces four MR jobs with dependencies between them. In the figure above, the green boxes are Reduce Tasks, and the cloud shapes mark write barriers where intermediate results must be persisted to HDFS.

    Tez converts multiple dependent jobs into a single job, so HDFS is written only once and there are fewer intermediate stages, which greatly improves query performance.
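As a concrete illustration (the table and column names below are made up, not from the original article), a query like this needs several dependent shuffle stages — a join, then an aggregation, then a sort. Under MR each stage is a separate job with an HDFS write in between; under Tez they run as one DAG:

```sql
-- Hypothetical tables: orders(user_id, amount), users(id, city)
SELECT u.city, SUM(o.amount) AS total
FROM orders o
JOIN users u ON o.user_id = u.id
GROUP BY u.city
ORDER BY total DESC;
```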

1. Prepare the Installation Package

1.1 Download the Tez release from http://tez.apache.org

1.2 Copy apache-tez-0.9.1-bin.tar.gz to the /opt/software directory on hadoop102

1.3 Extract apache-tez-0.9.1-bin.tar.gz

    [aaa@qq.com module]$ tar -zxvf apache-tez-0.9.1-bin.tar.gz -C /opt/module

1.4 Rename the extracted directory

    [aaa@qq.com module]$ mv apache-tez-0.9.1-bin/ tez-0.9.1

 

2. Configure Tez in Hive

2.1 Go to Hive's configuration directory: /opt/module/hive/conf

    [aaa@qq.com conf]$ pwd

    /opt/module/hive/conf

2.2 In hive-env.sh, add the Tez environment variable and the dependency-jar configuration

# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/opt/module/hadoop-2.7.2


# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/opt/module/hive/conf


# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export TEZ_HOME=/opt/module/tez-0.9.1    # the directory Tez was extracted to
export TEZ_JARS=""
for jar in `ls $TEZ_HOME |grep jar`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/$jar
done
for jar in `ls $TEZ_HOME/lib`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/lib/$jar
done


export HIVE_AUX_JARS_PATH=/opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar$TEZ_JARS
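To see what the two loops above actually build, here is a self-contained sketch that runs the same classpath-collection logic against a throwaway directory instead of a real Tez install (the jar names are made up for illustration):

```shell
#!/bin/sh
# Simulate a Tez install layout in a temporary directory
TEZ_HOME=$(mktemp -d)
mkdir "$TEZ_HOME/lib"
touch "$TEZ_HOME/tez-api-0.9.1.jar" "$TEZ_HOME/lib/commons-io-2.4.jar"

# Same logic as hive-env.sh: collect every jar into a ':'-separated list
TEZ_JARS=""
for jar in $(ls "$TEZ_HOME" | grep jar); do
    TEZ_JARS=$TEZ_JARS:$TEZ_HOME/$jar
done
for jar in $(ls "$TEZ_HOME/lib"); do
    TEZ_JARS=$TEZ_JARS:$TEZ_HOME/lib/$jar
done

# prints something like :/tmp/tmp.XXXX/tez-api-0.9.1.jar:/tmp/tmp.XXXX/lib/commons-io-2.4.jar
echo "$TEZ_JARS"
rm -rf "$TEZ_HOME"
```

Note that the list starts with a `:`, which is why HIVE_AUX_JARS_PATH above can append `$TEZ_JARS` directly after the LZO jar without adding its own separator.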

2.3 Add the following to hive-site.xml to change Hive's execution engine

<property>
    <name>hive.execution.engine</name>
    <value>tez</value>
</property>
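The engine can also be toggled per session without editing hive-site.xml, which is handy for comparing the same query on both engines:

```sql
-- inside the Hive CLI / Beeline session
set hive.execution.engine=tez;   -- use Tez for this session only
set hive.execution.engine=mr;    -- switch back to MapReduce
```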

 

3. Configure Tez

3.1 Create a tez-site.xml file under /opt/module/hive/conf

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>tez.lib.uris</name>
    <value>${fs.defaultFS}/tez/tez-0.9.1,${fs.defaultFS}/tez/tez-0.9.1/lib</value>
</property>
<property>
    <name>tez.lib.uris.classpath</name>
    <value>${fs.defaultFS}/tez/tez-0.9.1,${fs.defaultFS}/tez/tez-0.9.1/lib</value>
</property>
<property>
     <name>tez.use.cluster.hadoop-libs</name>
     <value>true</value>
</property>
<property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
</property>
</configuration>

 

4. Upload Tez to the Cluster

4.1 Upload /opt/module/tez-0.9.1 to the /tez path on HDFS

    [aaa@qq.com conf]$ hadoop fs -mkdir /tez

    [aaa@qq.com conf]$ hadoop fs -put /opt/module/tez-0.9.1/ /tez

    [aaa@qq.com conf]$ hadoop fs -ls /tez

    /tez/tez-0.9.1    

 

5. Test

5.1 Start Hive

[aaa@qq.com hive]$ bin/hive

5.2 Create a test table

hive (default)> create table student(id int,name string);  

5.3 Insert a row into the table

hive (default)> insert into student values(1,"zhangsan");

5.4 If the query below returns without errors, the setup works

hive (default)> select * from student;
1       zhangsan

 

6. Notes

6.1 Problem: when running on Tez, the container may be killed by the NodeManager for using too much memory:

Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1546781144082_0005 failed 2 times due to AM Container for appattempt_1546781144082_0005_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://hadoop103:8088/cluster/app/application_1546781144082_0005 Then, click on links to logs of each attempt.
Diagnostics: Container [pid=11116,containerID=container_1546781144082_0005_02_000001] is running beyond virtual memory limits. Current usage: 216.3 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
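Where does the "2.1 GB" virtual-memory limit in the log come from? YARN caps a container's virtual memory at its physical memory multiplied by `yarn.nodemanager.vmem-pmem-ratio`, which defaults to 2.1. A quick sketch of the arithmetic (POSIX shell has no floating point, so the ratio is expressed in tenths):

```shell
#!/bin/sh
# YARN's virtual-memory cap = physical container memory * vmem-pmem ratio
PHYS_MB=1024       # the 1 GB container from the error above
RATIO_TENTHS=21    # default yarn.nodemanager.vmem-pmem-ratio of 2.1
VMEM_CAP_MB=$((PHYS_MB * RATIO_TENTHS / 10))
echo "$VMEM_CAP_MB"   # 2150 MB, i.e. the "2.1 GB" limit the container exceeded
```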

Fixes:

    Option 1: disable the virtual-memory check (the approach taken here). Edit yarn-site.xml, distribute the change to every node, and restart the Hadoop cluster.

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
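A middle-ground variant, if you would rather keep the check than disable it outright, is to raise the ratio itself in yarn-site.xml (the value 4 below is an illustrative choice, not from the original article):

```xml
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <!-- default is 2.1; allow containers 4x their physical memory as virtual -->
    <value>4</value>
</property>
```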

    Option 2: set the Map and Reduce task memory in mapred-site.xml as follows (tune the values to your machines' memory and your workload):

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
</property>

 
