Deploying Hadoop 3 on a Mac (Pseudo-Distributed Mode)
程序员文章站
2023-12-27 10:40:03
Environment
- OS: macOS Mojave 10.14.6
- JDK: 1.8.0_211 (installed at /Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home)
- Hadoop: 3.2.1
Enable SSH
In "System Preferences" -> "Sharing", turn on "Remote Login".
Passwordless login
- Create a key pair with the following command:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Press Enter through the prompts; this generates id_rsa and id_rsa.pub under ~/.ssh.
- Append your public key to the SSH authorized-keys file, so that logging in to this machine over ssh no longer requires a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
- Try an ssh login; no password is required this time:
Last login: Sun Oct 13 21:44:17 on ttys000
(base) zhaoqindembp:~ zhaoqin$ ssh localhost
Last login: Sun Oct 13 21:48:57 2019
(base) zhaoqindembp:~ zhaoqin$
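One pitfall worth checking at this point (my addition, not part of the original steps): sshd silently ignores authorized_keys when ~/.ssh or the file itself is writable by anyone but the owner. The sketch below demonstrates the fix on a scratch directory for safety; point the same chmod commands at your real ~/.ssh to apply them.

```shell
# Demonstrate tightening SSH permissions on a scratch directory;
# substitute "$HOME" for "$DEMO" to apply this to your real ~/.ssh.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/.ssh"
touch "$DEMO/.ssh/authorized_keys"
chmod 700 "$DEMO/.ssh"                  # only the owner may enter the directory
chmod 600 "$DEMO/.ssh/authorized_keys"  # only the owner may read/write the file
ls -ld "$DEMO/.ssh" | cut -c1-10        # prints: drwx------
```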
Download Hadoop
- Download Hadoop from the Apache Hadoop release page.
- Extract the downloaded hadoop-3.2.1.tar.gz; here it is extracted to ~/software/hadoop-3.2.1/
If you only need Hadoop's standalone mode, you are done at this point. Standalone mode has no HDFS, however, so next we set up pseudo-distributed mode.
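Optionally (my addition, not required by the steps that follow), you can export HADOOP_HOME and extend PATH so the hdfs and yarn commands work from any directory; the path below matches the extraction location used above:

```shell
# Convenience variables; the path matches the extraction location used above.
export HADOOP_HOME="$HOME/software/hadoop-3.2.1"
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
echo "$HADOOP_HOME"
```

Add these two export lines to ~/.bash_profile if you want them in every new shell.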
Pseudo-distributed configuration
Go into hadoop-3.2.1/etc/hadoop and make the following changes:
- Open hadoop-env.sh and add the Java path:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home
- Open core-site.xml and change the configuration node to the following:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
- Open hdfs-site.xml and change the configuration node to the following:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
- Open mapred-site.xml and change the configuration node to the following:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
- Open yarn-site.xml and change the configuration node to the following:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
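After editing the four files, it is worth verifying that each one is still well-formed XML, since a stray character will make the daemons fail at startup. This check is my own sketch using only Python's standard library; it is demonstrated on a scratch copy of core-site.xml, and you would run the same loop inside hadoop-3.2.1/etc/hadoop against the real files.

```shell
# Well-formedness check, demonstrated on a scratch copy of core-site.xml.
check_xml() {
  python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1"
}

DEMO=$(mktemp -d)
cat > "$DEMO/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

for f in "$DEMO"/*.xml; do
  check_xml "$f" && echo "$(basename "$f"): well-formed"
done
```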
- In the hadoop-3.2.1/bin directory, run the following command to format HDFS:
./hdfs namenode -format
On success, you will see output like this:
2019-10-13 22:13:32,468 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-10-13 22:13:32,473 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2019-10-13 22:13:32,474 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zhaoqindembp/192.168.50.12
************************************************************/
Startup
- Go into hadoop-3.2.1/sbin and run ./start-dfs.sh to start HDFS:
(base) zhaoqindembp:sbin zhaoqin$ ./start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [zhaoqindembp]
zhaoqindembp: Warning: Permanently added 'zhaoqindembp,192.168.50.12' (ECDSA) to the list of known hosts.
2019-10-13 22:28:30,597 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
The warning above does not affect normal use.
- Visit localhost:9870 in a browser; you will see Hadoop's web UI.
- Go into hadoop-3.2.1/sbin and run ./start-yarn.sh to start YARN:
(base) zhaoqindembp:sbin zhaoqin$ ./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
- Visit localhost:8088 in a browser; you will see YARN's web UI.
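The two browser checks above can also be done from the terminal. The helper below is my own sketch; ports 9870 (HDFS NameNode UI) and 8088 (YARN ResourceManager UI) are the Hadoop 3.x defaults, and curl ships with macOS.

```shell
# Probe a local web UI port; prints "up" or "not reachable".
probe_ui() {
  if curl -s -o /dev/null --max-time 2 "http://localhost:$1"; then
    echo "port $1: up"
  else
    echo "port $1: not reachable"
  fi
}
probe_ui 9870   # HDFS NameNode UI
probe_ui 8088   # YARN ResourceManager UI
```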
- Run the jps command to list all Java processes; normally you should see the following:
(base) zhaoqindembp:sbin zhaoqin$ jps
2161 NodeManager
1825 SecondaryNameNode
2065 ResourceManager
1591 NameNode
2234 Jps
1691 DataNode
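This check can be automated. The sketch below (my addition) scans a jps listing for the five expected daemons; it is demonstrated against the sample output above, so substitute "$(jps)" for "$SAMPLE" to check a live system.

```shell
# Verify all five daemons appear in a jps listing; demonstrated on the
# sample above -- use "$(jps)" instead of "$SAMPLE" on a live system.
SAMPLE='2161 NodeManager
1825 SecondaryNameNode
2065 ResourceManager
1591 NameNode
2234 Jps
1691 DataNode'
missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # anchor the match so SecondaryNameNode does not count as NameNode
  echo "$SAMPLE" | grep -q " $d\$" || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons running"
```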
At this point, deployment, configuration, and startup of the Hadoop 3 pseudo-distributed environment are all complete.
Stopping the Hadoop services
Go into hadoop-3.2.1/sbin and run ./stop-all.sh to shut down all Hadoop services:
(base) zhaoqindembp:sbin zhaoqin$ ./stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as zhaoqin in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [zhaoqindembp]
2019-10-13 22:49:00,941 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping nodemanagers
Stopping resourcemanager
That is the whole process of deploying Hadoop 3 on a Mac; I hope it serves as a useful reference.