Basic Hadoop Configuration

core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/your/tmp/dir</value>
    <description>Create this tmp directory yourself for temporary files; otherwise Hadoop falls back to /tmp.</description>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>300</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
  </property>
</configuration>
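
A quick sanity-check sketch that these values are picked up, assuming the Hadoop bin directory is on PATH and /your/tmp/dir is the path you chose above:

    # Create the directory referenced by hadoop.tmp.dir ahead of time (assumed path)
    mkdir -p /your/tmp/dir

    # Print the values Hadoop resolves from core-site.xml
    hdfs getconf -confKey fs.defaultFS      # expect hdfs://node1:9000
    hdfs getconf -confKey hadoop.tmp.dir    # expect /your/tmp/dir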


hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value><num_replication></value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop/hdfs/namenode/</value>
  </property>
</configuration>
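
Before starting HDFS for the first time, the name directory must exist and the NameNode must be formatted. A minimal sketch, assuming the path above and a brand-new cluster (formatting wipes existing metadata):

    mkdir -p /data/hadoop/hdfs/namenode

    # Format only on a fresh cluster, then start HDFS
    hdfs namenode -format
    start-dfs.sh

    # Lists live DataNodes and overall cluster capacity
    hdfs dfsadmin -report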


yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>node1:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>node1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>node1:8050</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>12288</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>36864</value>
    <description>Maximum memory a single task can request; the default is 8192 MB.</description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://node1:19888/jobhistory/logs</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node1</value>
  </property>
</configuration>
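
One possible check that the NodeManagers register with the ResourceManager on node1 and report the memory configured above, assuming the YARN scripts are on PATH (8088 is the default web UI port, since it is not overridden here):

    start-yarn.sh

    # NodeManagers should show up as RUNNING
    yarn node -list -all

    # The ResourceManager web UI (http://node1:8088 by default) shows per-node memory,
    # which should match yarn.nodemanager.resource.memory-mb (12288 MB)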


mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
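
The yarn.log.server.url setting above points at the MapReduce JobHistoryServer, so that daemon must be running on node1 for aggregated logs to be viewable. A minimal sketch to start it (the first form applies to Hadoop 2.x, the second to 3.x):

    mr-jobhistory-daemon.sh start historyserver   # Hadoop 2.x
    # or
    mapred --daemon start historyserver           # Hadoop 3.x

    jps   # should now list JobHistoryServer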


Where Hadoop stores its PID files:

(1) Edit hadoop-env.sh

Change the following settings, or add them if they are not already present:

export HADOOP_PID_DIR=/ROOT/server/pids_hadoop_hbase

export HADOOP_SECURE_DN_PID_DIR=/ROOT/server/pids_hadoop_hbase

These settings control where the PID files of the following processes are stored:

NameNode

DataNode

SecondaryNameNode

(2) Edit mapred-env.sh

Change:

export HADOOP_MAPRED_PID_DIR=/ROOT/server/pids_hadoop_hbase

This setting controls where the PID file of the JobHistoryServer process is stored.

(3) Edit yarn-env.sh

Change or add the following (I did not find an existing PID environment variable in this file, so I simply added it):

export YARN_PID_DIR=/ROOT/server/pids_hadoop_hbase

This setting controls where the PID files of the following processes are stored:

NodeManager

ResourceManager
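
One way to verify that the PID files actually land in the directory above after the daemons restart, assuming the /ROOT/server/pids_hadoop_hbase path used here:

    mkdir -p /ROOT/server/pids_hadoop_hbase

    # Restart the daemons so they pick up the new PID directory
    stop-yarn.sh && stop-dfs.sh
    start-dfs.sh && start-yarn.sh

    # Expect files like hadoop-<user>-namenode.pid and yarn-<user>-resourcemanager.pid
    ls /ROOT/server/pids_hadoop_hbase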
