JVM Reuse in Hadoop
Hadoop has a parameter, mapred.job.reuse.jvm.num.tasks (in Hadoop 2: mapreduce.job.jvm.numtasks), whose default value is 1. It sets the maximum number of tasks from the same job that may run sequentially in a single JVM; with the default of 1, each task gets its own JVM.
For example, my cluster is configured so that each slave node runs at most 8 map tasks and 8 reduce tasks concurrently. During the map phase a slave node therefore launches up to 8 JVMs for map tasks, as shown below:
root@slave1:~# jps
28291 Child
28290 Child
28281 Child
28293 Child
28277 Child
1487 DataNode
28298 Child
28273 Child
28272 Child
1636 TaskTracker
28799 Jps
root@slave1:~# ps -e | grep java
 1487 ?        00:53:26 java      (DataNode)
 1636 ?        00:12:42 java      (TaskTracker)
28272 ?        00:00:35 java      (Child)
28273 ?        00:00:35 java      (Child)
28277 ?        00:00:36 java      (Child)
28281 ?        00:00:36 java      (Child)
28290 ?        00:00:36 java      (Child)
28291 ?        00:00:37 java      (Child)
28293 ?        00:00:36 java      (Child)
28298 ?        00:00:36 java      (Child)
The names in parentheses are annotations (ps itself only shows "java"); they match the jps output above. The first two processes, DataNode and TaskTracker, are the permanent daemons; the eight Child processes are the JVMs launched for the map tasks.
Starting a new JVM for each task takes roughly one second. For jobs whose tasks run for a while (say a minute or more) this overhead is negligible, but for jobs made up of many very short tasks, repeatedly starting and shutting down JVMs adds up to a noticeable cost.
To use JVM reuse to improve performance, set mapred.job.reuse.jvm.num.tasks to a value greater than 1. Sequentially executed tasks belonging to the same job can then share a JVM: the second wave of map tasks reuses the JVM of the first wave, instead of that JVM being shut down after the first wave and a new one started for the second.
How many tasks, at most, can run one after another in a single JVM before it is shut down? Exactly mapred.job.reuse.jvm.num.tasks. If the value is set to -1, any number of tasks from the same job may run sequentially in one JVM.
If tasks belong to different jobs, JVM reuse does not apply: tasks of different jobs always run in separate JVMs.
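The property can also be set per job from the driver code. Below is a minimal sketch using the old mapred API's JobConf (the class name JvmReuseConfig is just a placeholder for illustration):

import org.apache.hadoop.mapred.JobConf;

public class JvmReuseConfig {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Allow any number of tasks from the same job to share one JVM, sequentially.
        conf.setNumTasksToExecutePerJvm(-1);
        // Equivalent to setting the property directly:
        // conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);   // Hadoop 1 property name
        // conf.setInt("mapreduce.job.jvm.numtasks", -1);       // Hadoop 2 property name
        System.out.println("JVM reuse limit: " + conf.getNumTasksToExecutePerJvm());
    }
}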
Note:
JVM reuse does not mean that two or more tasks of the same job run in the same JVM at the same time; they are queued and executed one after another.
The maximum number of tasks a tasktracker can run concurrently is determined by mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum, and these two parameters must be set in mapred-site.xml. Setting them any other way, for example on the JobClient side via the command line with -Dmapred.tasktracker.map.tasks.maximum=number or in code with conf.set("mapred.tasktracker.map.tasks.maximum","number"), has no effect.
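For reference, a minimal mapred-site.xml sketch for the two slot limits; the value 8 matches the configuration described above and is only an example (the tasktracker picks these up at startup):

<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>8</value>
  </property>
</configuration>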
Appendix: the description from Hadoop: The Definitive Guide:
Task JVM Reuse
Hadoop runs tasks in their own Java Virtual Machine to isolate them from other running tasks. The overhead of starting a new JVM for each task can take around a second, which for jobs that run for a minute or so is insignificant. However, jobs that have a large number of very short-lived tasks (these are usually map tasks), or that have lengthy initialization, can see performance gains when the JVM is reused for subsequent tasks.
With task JVM reuse enabled, tasks do not run concurrently in a single JVM. The JVM runs tasks sequentially. Tasktrackers can, however, run more than one task at a time, but this is always done in separate JVMs. The properties for controlling the tasktracker's number of map task slots and reduce task slots are discussed in “Memory” on page 269.
The property for controlling task JVM reuse is mapred.job.reuse.jvm.num.tasks: it specifies the maximum number of tasks to run for a given job for each JVM launched; the default is 1 (see Table 6-4). Tasks from different jobs are always run in separate JVMs. If the property is set to –1, there is no limit to the number of tasks from the same job that may share a JVM. The method setNumTasksToExecutePerJvm() on JobConf can also be used to configure this property.
Tasks that are CPU-bound may also benefit from task JVM reuse by taking advantage of runtime optimizations applied by the HotSpot JVM. After running for a while, the HotSpot JVM builds up enough information to detect performance-critical sections in the code and dynamically translates the Java byte codes of these hot spots into native machine code. This works well for long-running processes, but JVMs that run for seconds or a few minutes may not gain the full benefit of HotSpot. In these cases, it is worth enabling task JVM reuse.
Another place where a shared JVM is useful is for sharing state between the tasks of a job. By storing reference data in a static field, tasks get rapid access to the shared data.
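A minimal sketch of the state-sharing idea described above, using the old mapred API. The class name SharedStateMapper and the lookup-loading step are hypothetical; the point is that the static map is populated only once per JVM and is then visible to every later task of the same job that the reused JVM runs:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SharedStateMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

    // Shared reference data; survives across tasks only while the JVM is reused.
    private static Map<String, String> lookup;

    @Override
    public void configure(JobConf job) {
        synchronized (SharedStateMapper.class) {
            if (lookup == null) {
                lookup = new HashMap<String, String>();
                // ...load the expensive reference data here (e.g. from the DistributedCache);
                // later tasks running in the same JVM skip this step.
            }
        }
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        // Look up each input line in the shared table and emit a count for matches.
        String resolved = lookup.get(value.toString());
        if (resolved != null) {
            output.collect(new Text(resolved), new LongWritable(1));
        }
    }
}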