Spark error: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Posted by 张映 on 2019-05-15

Category: hadoop/spark/scala


Spark reported the following error while requesting resources:

2019-05-15 10:15:15 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on namenode1:37836 (size: 83.1 KB, free: 6.2 GB)
2019-05-15 10:15:15 INFO SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1161
2019-05-15 10:15:15 INFO DAGScheduler:54 - Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at sql at run.scala:132) (first 15 tasks are for partitions Vector(0, 1))
2019-05-15 10:15:15 INFO YarnScheduler:54 - Adding task set 0.0 with 2 tasks
2019-05-15 10:15:30 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The message is clear enough: not enough resources. A great many Spark errors ultimately come down to insufficient resources; I have mentioned some of them in earlier posts on this blog.
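
Before changing any configuration, it is worth confirming what the cluster actually has free. A quick check from the command line, assuming a standard Hadoop installation with the yarn CLI on the PATH (the node ID below is a placeholder; substitute a real one from the listing):

# List every registered NodeManager and its state
yarn node -list -all

# Show memory/vcore totals and current usage for one node
yarn node -status namenode1:45454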

There are two ways to fix it.

Option 1: increase the resources available to Spark

1. Edit yarn-site.xml

<!-- Total vcores each NodeManager can allocate to containers -->
<property>
 <name>yarn.nodemanager.resource.cpu-vcores</name>
 <value>24</value>
</property>
<!-- Smallest and largest vcore count a single container may request -->
<property>
 <name>yarn.scheduler.minimum-allocation-vcores</name>
 <value>1</value>
</property>
<property>
 <name>yarn.scheduler.maximum-allocation-vcores</name>
 <value>24</value>
</property>

<!-- Total memory (MB) each NodeManager can allocate to containers -->
<property>
 <name>yarn.nodemanager.resource.memory-mb</name>
 <value>24576</value>
</property>
<!-- Smallest and largest memory (MB) a single container may request -->
<property>
 <name>yarn.scheduler.minimum-allocation-mb</name>
 <value>1024</value>
</property>
<property>
 <name>yarn.scheduler.maximum-allocation-mb</name>
 <value>24576</value>
</property>
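
These values only take effect after YARN is restarted. A minimal sketch, assuming the updated yarn-site.xml has already been copied to every node and a typical $HADOOP_HOME layout:

# Restart YARN so the new limits are picked up
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

# Confirm the new per-node totals
yarn node -list -all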

2. Edit spark-defaults.conf

#spark.driver.cores 1
# Memory for the driver process
spark.driver.memory 8g
# Cores and memory for each executor
spark.executor.cores 3
spark.executor.memory 3g
#spark.executor.instances 3
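
The same values can also be passed per job on the spark-submit command line instead of being set globally; the class name and jar below are placeholders for your own application:

spark-submit \
  --master yarn \
  --driver-memory 8g \
  --executor-cores 3 \
  --executor-memory 3g \
  --class com.example.Run \
  app.jar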

Adjust these numbers to match your actual hardware; the goal is simply to increase the resources Spark can draw on.

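As a rough sizing check with the numbers above, assuming Spark 2.x defaults: each executor asks YARN for 3 GB plus an overhead of max(384 MB, 10% of executor memory), about 3.4 GB, which the capacity scheduler rounds up to the next multiple of yarn.scheduler.minimum-allocation-mb (1024 MB), so 4 GB per container. A 24576 MB node can therefore hold about 6 executors, consuming 18 of its 24 vcores.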

Option 2: use Spark's dynamic resource allocation. I have not tried this myself yet.
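
For reference, a minimal sketch of what enabling it might look like in spark-defaults.conf; the executor counts are placeholder values, and on YARN Spark's external shuffle service must also be set up on every NodeManager:

# Let Spark grow and shrink the executor pool with the workload
spark.dynamicAllocation.enabled true
# Required on YARN so shuffle files survive executor removal
spark.shuffle.service.enabled true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 10
spark.dynamicAllocation.executorIdleTimeout 60s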




Please credit the source when reposting.
Author: 海底苍鹰
URL: http://blog.51yip.com/hadoop/2137.html
