When starting multiple SparkContexts from Scala, the applications failed to launch. Checking the Applications page showed that Memory Total and Memory Used were equal.
1. Adjusting Spark driver and executor memory
1. The log error
For more detailed output, check application tracking page:http://bigserver1:8088/cluster/app/application_1555651019351_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=280568,containerID=container_e09_1555651019351_0001_02_000001] is running beyond virtual memory limits. Current usage: 383.9 MB of 1 GB physical memory used; 1.5 GB of 1.1 GB virtual memory used. Killing container.
Killed by external signal
......... (omitted) .........
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt
2. The Java exception
2019-04-24 14:42:31 ERROR TransportRequestHandler:293 - Error sending result RpcResponse{requestId=8586978943123146370, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=13 cap=13]}} to /10.0.40.222:34982; closing connection
io.netty.handler.codec.EncoderException: java.lang.OutOfMemoryError: Java heap space
Both errors above are caused by Spark not being allocated enough memory. There are two fixes:
1. Pass --driver-memory 2g --executor-memory 2g to spark-submit.
2. Edit spark-defaults.conf and add the following:
spark.driver.memory 2g
spark.executor.memory 2g
With the second method, restart the Spark cluster after making the change.
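As a sketch of the first option, the spark-submit command line can be assembled as below. The master, main class, and jar name here are placeholders for illustration, not taken from the article:

```python
# Sketch: build the spark-submit invocation for option 1 above.
# "yarn", "Main", and "myapp.jar" are hypothetical placeholders.
def build_submit_cmd(driver_mem="2g", executor_mem="2g",
                     app_jar="myapp.jar", main_class="Main"):
    return ["spark-submit",
            "--master", "yarn",
            "--driver-memory", driver_mem,      # heap for the driver JVM
            "--executor-memory", executor_mem,  # heap for each executor JVM
            "--class", main_class,
            app_jar]

print(" ".join(build_submit_cmd()))
```

Command-line flags take precedence over spark-defaults.conf, so this works per submission without restarting the cluster.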
After the change, spark-submit no longer reports the error; if the job runs to completion, the configuration was successful.
2. Adjusting YARN resources
SparkContext: java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
The fix: edit yarn-site.xml.
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>16</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>16</value>
</property>
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>16384</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>16384</value>
</property>
Sync the modified file to every machine in the Hadoop cluster, then restart the Hadoop cluster. If Memory Total has increased, the change took effect.
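The "1024+384 MB" in the error comes from Spark requesting the executor heap plus an off-heap overhead of max(384 MB, 10% of executor memory) per container; that total must fit under yarn.scheduler.maximum-allocation-mb. A quick sanity check of the arithmetic, assuming Spark's default overhead rule:

```python
# Why the request failed: YARN allocates executor heap + Spark's memory
# overhead, where overhead defaults to max(384 MB, 10% of executor memory).
def required_container_mb(executor_mb):
    overhead = max(384, executor_mb // 10)
    return executor_mb + overhead

print(required_container_mb(1024))           # 1408 -> matches "1024+384 MB"
print(required_container_mb(1024) <= 1024)   # False: over the old 1024 MB cap
print(required_container_mb(1024) <= 16384)  # True: fits the raised cap
```

This is why raising yarn.scheduler.maximum-allocation-mb (and yarn.nodemanager.resource.memory-mb) to 16384 in the config above lets the 1 GB executor request go through.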
Please credit the source when reposting.
Author: 海底苍鹰
URL: http://blog.51yip.com/hadoop/2123.html