Adding Nodes to a Hadoop Cluster Dynamically

Posted by 张映 on 2018-12-28

Category: hadoop/spark


A previous post covered installing and configuring a Hadoop cluster with only two machines. Once those machines are close to full, you have to add more. Adding a node to Hadoop does not require restarting the Hadoop services.

I. Preparation for adding a node

1. Set the new machine's hostname, and keep /etc/hosts identical on all nodes
2. Set up passwordless SSH login
3. Add the new node to the slaves file, identically on all nodes

Then scp the etc/hadoop configuration from an existing node to the new one.
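The sync steps above can be sketched as a dry run that only prints the commands for review before you run them (the hostname and paths follow this article and are assumptions; adjust them to your layout):

```shell
#!/bin/sh
# Dry-run sketch: PRINT the sync commands for a new node instead of
# running them. bigserver3 and /bigdata/hadoop are the names used in
# this article; substitute your own.
NEW_NODE=bigserver3
CONF_DIR=/bigdata/hadoop/etc/hadoop

cat <<EOF
scp /etc/hosts root@${NEW_NODE}:/etc/hosts
scp ${CONF_DIR}/slaves root@${NEW_NODE}:${CONF_DIR}/slaves
scp -r ${CONF_DIR}/ root@${NEW_NODE}:${CONF_DIR}/
EOF
```

Remove the `cat <<EOF` wrapper (or pipe the output into `sh`) once the printed commands look right.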

II. Start the daemons on the new node

[root@bigserver3 hadoop]# ./sbin/hadoop-daemon.sh start datanode
[root@bigserver3 hadoop]# ./sbin/yarn-daemon.sh start nodemanager

[root@bigserver3 hadoop]# jps
1569 Jps
1401 DataNode
1499 NodeManager

[root@bigserver3 hadoop]# yarn node -list
18/12/27 22:35:41 INFO client.RMProxy: Connecting to ResourceManager at bigserver1/10.0.0.237:8032
Total Nodes:2
 Node-Id Node-State Node-Http-Address Number-of-Running-Containers
bigserver3:40901 RUNNING bigserver3:8042 0
bigserver2:43959 RUNNING bigserver2:8042 0

III. Balance HDFS storage

1. Set the environment variable

# echo "export PATH=/bigdata/hadoop/bin:$PATH" >> ~/.bashrc
# source ~/.bashrc

This lets you run the commands under hadoop/bin directly.

2. Set the balancer bandwidth (the default is 1 MB/s)

# hdfs dfsadmin -setBalancerBandwidth 52428800 // set to 50 MB/s
Balancer bandwidth is set to 52428800
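Note that `-setBalancerBandwidth` only applies until the DataNodes restart. To make the setting persistent, the same value can be put in hdfs-site.xml (a sketch; 52428800 bytes = 50 MB/s):

```xml
<!-- hdfs-site.xml on each DataNode: persistent balancer bandwidth -->
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>52428800</value>
</property>
```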

3. Run the balancer

[root@bigserver3 hadoop]# ./sbin/start-balancer.sh -threshold 5
starting balancer, logging to /home/bigdata/hadoop/logs/hadoop-root-balancer-bigserver3.out

The default threshold is 10 (percent). A larger value finishes faster but leaves the cluster less evenly balanced; a smaller value takes longer but balances more tightly.
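Concretely, the threshold bounds how far each DataNode's DFS Used% may sit from the cluster average. A small arithmetic sketch with made-up numbers:

```shell
#!/bin/sh
# Illustration of -threshold (all numbers are made up): a node counts
# as balanced when |node DFS Used% - cluster average| <= threshold.
avg=40        # hypothetical cluster-average DFS Used%
threshold=5
for node_pct in 38 43 52; do
  diff=$(( node_pct - avg ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  if [ "$diff" -le "$threshold" ]; then
    echo "node at ${node_pct}%: within threshold, left alone"
  else
    echo "node at ${node_pct}%: outside threshold, balancer moves blocks"
  fi
done
```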

4. Check the result after balancing

[root@bigserver3 hadoop]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 2382153052160 (2.17 TB)
Present Capacity: 2381362919833 (2.17 TB)
DFS Remaining: 2381087567872 (2.17 TB)
DFS Used: 275351961 (262.60 MB)
DFS Used%: 0.01%
Under replicated blocks: 20
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.0.193:50010 (bigserver3)
Hostname: bigserver3
Decommission Status : Normal
Configured Capacity: 441499058176 (411.18 GB)
DFS Used: 3000729 (2.86 MB)
Non DFS Used: 392808039 (374.61 MB)
DFS Remaining: 441103249408 (410.81 GB)
DFS Used%: 0.00% // balance is judged mainly by these two fields
DFS Remaining%: 99.91% // balance is judged mainly by these two fields
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Dec 27 22:38:51 EST 2018

Name: 10.0.0.236:50010 (bigserver2)
Hostname: bigserver2
Decommission Status : Normal
Configured Capacity: 1940653993984 (1.77 TB)
DFS Used: 272351232 (259.73 MB)
Non DFS Used: 397324288 (378.92 MB)
DFS Remaining: 1939984318464 (1.76 TB)
DFS Used%: 0.01% // balance is judged mainly by these two fields
DFS Remaining%: 99.97% // balance is judged mainly by these two fields
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Dec 27 22:38:51 EST 2018
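To eyeball balance without reading the whole report, you can filter it down to just those fields. A sketch against a saved sample (on a live cluster, pipe `hdfs dfsadmin -report` into the grep instead of the here-document):

```shell
#!/bin/sh
# Keep only the per-node lines that matter for judging balance.
# The here-document stands in for real `hdfs dfsadmin -report` output.
grep -E '^(Hostname|DFS Used%|DFS Remaining%)' <<'EOF'
Hostname: bigserver3
Decommission Status : Normal
DFS Used%: 0.00%
DFS Remaining%: 99.91%
Cache Used%: 100.00%
Hostname: bigserver2
DFS Used%: 0.01%
DFS Remaining%: 99.97%
EOF
```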

IV. Test stopping and starting the Hadoop cluster from the master

[root@bigserver1 sbin]# ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [bigserver1]
bigserver1: stopping namenode
bigserver2: stopping datanode
bigserver3: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
bigserver2: stopping nodemanager
bigserver3: stopping nodemanager
no proxyserver to stop

[root@bigserver1 sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [bigserver1]
bigserver1: starting namenode, logging to /home/bigdata/hadoop/logs/hadoop-root-namenode-bigserver1.out
bigserver2: starting datanode, logging to /home/bigdata/hadoop/logs/hadoop-root-datanode-bigserver2.out
bigserver3: starting datanode, logging to /home/bigdata/hadoop/logs/hadoop-root-datanode-bigserver3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/bigdata/hadoop/logs/hadoop-root-secondarynamenode-bigserver1.out
starting yarn daemons
starting resourcemanager, logging to /home/bigdata/hadoop/logs/yarn-root-resourcemanager-bigserver1.out
bigserver2: starting nodemanager, logging to /home/bigdata/hadoop/logs/yarn-root-nodemanager-bigserver2.out
bigserver3: starting nodemanager, logging to /home/bigdata/hadoop/logs/yarn-root-nodemanager-bigserver3.out

This step is not required for adding a new node; it was just a personal test.



When reposting, please credit:
Author: 海底苍鹰
URL: http://blog.51yip.com/hadoop/2019.html
