(3) Hadoop Cluster Setup (Fully Distributed)

Clone three hosts and set their hostnames to hadoop01, hadoop02, and hadoop03. On each clone, edit /etc/sysconfig/network and reboot; the transcript below renames a clone to hadoop02 (the prompt still shows hadoop01 because the new name only takes effect after the reboot):

[root@hadoop01 ~]# hostname
hadoop01
[root@hadoop01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop01
[root@hadoop01 ~]# vi /etc/sysconfig/network
[root@hadoop01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop02
[root@hadoop01 ~]# reboot

Configure /etc/hosts on all three machines:

[root@hadoop01 ~]# vi /etc/hosts
[root@hadoop01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.216.135     hadoop01
192.168.216.136     hadoop02
192.168.216.137     hadoop03
Server role plan:

hadoop01         hadoop02          hadoop03
NameNode
DataNode         DataNode          DataNode
NodeManager      NodeManager       NodeManager
HistoryServer    ResourceManager   SecondaryNameNode

1. Install a fresh Hadoop on the first machine
To keep it separate from the pseudo-distributed Hadoop installed earlier, we stop all Hadoop services on the first machine and install a second copy of Hadoop under a new directory, /opt/modules/app. The cluster is installed by unpacking and configuring Hadoop on the first machine, then distributing it to the other two machines.
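A minimal sketch of stopping the old services first, assuming the earlier pseudo-distributed install lives at /opt/modules/hadoop-2.5.0 (a hypothetical path; adjust it to your own install):

[root@hadoop01 ~]# /opt/modules/hadoop-2.5.0/sbin/stop-yarn.sh
[root@hadoop01 ~]# /opt/modules/hadoop-2.5.0/sbin/stop-dfs.sh
[root@hadoop01 ~]# /opt/modules/hadoop-2.5.0/sbin/mr-jobhistory-daemon.sh stop historyserver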

2. Extract the Hadoop archive
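For example, assuming the hadoop-2.5.0.tar.gz tarball was downloaded to /opt/softwares (a placeholder path; use wherever you saved it):

[root@hadoop01 ~]# mkdir -p /opt/modules/app
[root@hadoop01 ~]# tar -zxf /opt/softwares/hadoop-2.5.0.tar.gz -C /opt/modules/app/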

3. Configure the JDK path for Hadoop: set the JDK path in the hadoop-env.sh, mapred-env.sh, and yarn-env.sh files
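In each of the three files, set JAVA_HOME to an explicit path; the JDK location below is an assumption to replace with your own:

export JAVA_HOME=/opt/modules/jdk1.7.0_67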

4. Configure core-site.xml

[root@hadoop01 hadoop]# vi core-site.xml 
[root@hadoop01 hadoop]# cat core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://hadoop01:8020</value>
 </property>
 <property>
   <name>hadoop.tmp.dir</name>
   <value>/opt/modules/app/hadoop-2.5.0/data/tmp</value>
 </property>
</configuration>
[root@hadoop01 hadoop]# 

fs.defaultFS is the address of the NameNode.

hadoop.tmp.dir is Hadoop's temporary directory; by default the NameNode and DataNode data files are stored in subdirectories under it. Make sure this directory exists, and create it first if it does not.
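Create it up front, matching the value configured above:

[root@hadoop01 hadoop]# mkdir -p /opt/modules/app/hadoop-2.5.0/data/tmp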

5. Configure hdfs-site.xml

[root@hadoop01 hadoop]# vi hdfs-site.xml 
[root@hadoop01 hadoop]# cat hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
   <name>dfs.namenode.secondary.http-address</name>
   <value>hadoop03:50090</value>
 </property>
</configuration>

dfs.namenode.secondary.http-address sets the HTTP address and port of the SecondaryNameNode; per the plan, hadoop03 serves as the SecondaryNameNode.

6. Configure slaves

[root@hadoop01 hadoop]# vi /opt/modules/app/hadoop/etc/hadoop/slaves 
[root@hadoop01 hadoop]# cat /opt/modules/app/hadoop/etc/hadoop/slaves 
hadoop01
hadoop02
hadoop03

The slaves file lists the worker nodes: the HDFS start script launches a DataNode on each host listed here (and start-yarn.sh likewise starts a NodeManager on each).

7. Configure yarn-site.xml

[root@hadoop01 hadoop]# vi yarn-site.xml 
[root@hadoop01 hadoop]# cat yarn-site.xml 
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop02</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>106800</value>
    </property>

</configuration>

Per the plan, yarn.resourcemanager.hostname points the ResourceManager at hadoop02.

yarn.log-aggregation-enable turns log aggregation on.

yarn.log-aggregation.retain-seconds sets how long aggregated logs are kept on HDFS; 106800 seconds is roughly 29.7 hours.

8. Configure mapred-site.xml

[root@hadoop01 hadoop]# vi mapred-site.xml
[root@hadoop01 hadoop]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop01:19888</value>
    </property>

</configuration>

mapreduce.framework.name makes MapReduce jobs run on YARN.

mapreduce.jobhistory.address places the MapReduce JobHistory server on the hadoop01 machine.

mapreduce.jobhistory.webapp.address sets the address and port of the history server's web UI.

9. Set up passwordless SSH

The machines in a Hadoop cluster access one another over SSH, and typing a password on every access is impractical, so passwordless SSH must be configured between them.

a. Generate a key pair on hadoop01

[root@hadoop01 hadoop]# ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
6c:c6:80:64:00:ec:ab:b0:94:21:71:2e:a8:8b:c2:40 root@hadoop01
The key's randomart image is:
+--[ RSA 2048]----+
|o...o            |
|...o .           |
|o+  . .          |
|+E.    +         |
|+.+     S        |
|++     o         |
|Bo               |
|*.               |
|.                |
+-----------------+

Press Enter at every prompt to accept the defaults; the public key (id_rsa.pub) and private key (id_rsa) are then generated in the .ssh directory under the current user's home directory.

b. Distribute the public key

[root@hadoop01 hadoop]# yum install -y openssh-clients
[root@hadoop01 hadoop]# ssh-copy-id hadoop01
The authenticity of host 'hadoop01 (192.168.216.135)' can't be established.
RSA key fingerprint is bd:5c:85:99:82:b4:b9:9d:92:fa:35:48:63:e1:5c:ce.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop01,192.168.216.135' (RSA) to the list of known hosts.
root@hadoop01's password: 
Now try logging into the machine, with "ssh 'hadoop01'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@hadoop01 hadoop]# ssh-copy-id hadoop02
[root@hadoop01 hadoop]# ssh-copy-id hadoop03

In the same way, generate a key pair on hadoop02 and on hadoop03, and distribute each machine's public key to all three machines.
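For example, on hadoop02 (and then the same again on hadoop03):

[root@hadoop02 ~]# ssh-keygen -t rsa
[root@hadoop02 ~]# ssh-copy-id hadoop01
[root@hadoop02 ~]# ssh-copy-id hadoop02
[root@hadoop02 ~]# ssh-copy-id hadoop03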

Distribute the Hadoop files

1. First, create the directory that will hold Hadoop on the other two machines

[root@hadoop02 ~]# mkdir -p /opt/modules/app
[root@hadoop03 ~]# mkdir -p /opt/modules/app

2. Distribute with scp
The share/doc directory under the Hadoop root holds the Hadoop documentation and is quite large; deleting it before distributing saves disk space and noticeably speeds up the copy.

[root@hadoop01 hadoop]# du -sh /opt/modules/app/hadoop/share/doc
[root@hadoop01 hadoop]# rm -rf /opt/modules/app/hadoop/share/doc/
[root@hadoop01 hadoop]# scp -r /opt/modules/app/hadoop/ hadoop02:/opt/modules/app
[root@hadoop01 hadoop]# scp -r /opt/modules/app/hadoop/ hadoop03:/opt/modules/app

3. Format the NameNode
Run the format on the NameNode machine:

[root@hadoop01 hadoop]# /opt/modules/app/hadoop/bin/hdfs namenode -format

If you need to reformat the NameNode, first delete all files under the NameNode and DataNode directories on every machine, or errors will follow. These directories are set by hadoop.tmp.dir in core-site.xml and, where configured, by dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml.

Each format creates a new cluster ID by default and writes it into the VERSION files of the NameNode and the DataNodes (under dfs/name/current and dfs/data/current respectively). Reformatting without deleting the old directories leaves the NameNode's VERSION file with the new cluster ID while the DataNodes keep the old one, and the mismatch causes errors.

The alternative is to pass the old cluster ID as a parameter when formatting.
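A sketch of both options; the cluster ID shown is a placeholder to replace with the value from your own VERSION file:

# Check the current cluster ID (the path follows hadoop.tmp.dir configured above)
[root@hadoop01 hadoop]# cat /opt/modules/app/hadoop-2.5.0/data/tmp/dfs/name/current/VERSION
# Option 1: wipe the data directories on all three machines, then reformat
[root@hadoop01 hadoop]# rm -rf /opt/modules/app/hadoop-2.5.0/data/tmp/*
# Option 2: reformat, keeping the old cluster ID
[root@hadoop01 hadoop]# /opt/modules/app/hadoop/bin/hdfs namenode -format -clusterid CID-xxxxxxxx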

Start the cluster
[root@hadoop01 sbin]# /opt/modules/app/hadoop/sbin/start-dfs.sh
18/09/11 07:07:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /opt/modules/app/hadoop/logs/hadoop-root-namenode-hadoop01.out
hadoop03: starting datanode, logging to /opt/modules/app/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /opt/modules/app/hadoop/logs/hadoop-root-datanode-hadoop02.out
hadoop01: starting datanode, logging to /opt/modules/app/hadoop/logs/hadoop-root-datanode-hadoop01.out
Starting secondary namenodes [hadoop03]
hadoop03: starting secondarynamenode, logging to /opt/modules/app/hadoop/logs/hadoop-root-secondarynamenode-hadoop03.out
18/09/11 07:07:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop01 sbin]# 



[root@hadoop01 sbin]# jps
3185 Jps
2849 NameNode
2974 DataNode
[root@hadoop02 ~]# jps
2305 Jps
2227 DataNode
[root@hadoop03 ~]# jps
2390 Jps
2312 SecondaryNameNode
2217 DataNode
Start YARN
[root@hadoop01 sbin]# /opt/modules/app/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop03: starting nodemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-nodemanager-hadoop03.out
hadoop01: starting nodemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-nodemanager-hadoop01.out
[root@hadoop01 sbin]# jps
3473 Jps
3329 NodeManager
2849 NameNode
2974 DataNode
[root@hadoop01 sbin]# 

[root@hadoop02 ~]# jps
2337 NodeManager
2227 DataNode
2456 Jps
[root@hadoop02 ~]# 

[root@hadoop03 ~]# jps
2547 Jps
2312 SecondaryNameNode
2217 DataNode
2428 NodeManager
[root@hadoop03 ~]# 

Start the ResourceManager on hadoop02:

[root@hadoop02 ~]# /opt/modules/app/hadoop/sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/modules/app/hadoop/logs/yarn-root-resourcemanager-hadoop02.out
[root@hadoop02 ~]# jps
2337 NodeManager
2227 DataNode
2708 Jps
2484 ResourceManager
[root@hadoop02 ~]# 
Start the JobHistory server

The MapReduce JobHistory server is started on hadoop03 here. Note that mapred-site.xml above points mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address at hadoop01; the history server must run on the host those properties name, so either start it on hadoop01 instead or change both properties to hadoop03 first.

[root@hadoop03 ~]# /opt/modules/app/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/modules/app/hadoop/logs/mapred-root-historyserver-hadoop03.out
[root@hadoop03 ~]# jps
2312 SecondaryNameNode
2217 DataNode
2602 JobHistoryServer
2428 NodeManager
2639 Jps
[root@hadoop03 ~]# 

Configure the hosts file on Windows so the hostnames resolve in a browser:
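For example, append the same mappings used in /etc/hosts to C:\Windows\System32\drivers\etc\hosts on the Windows machine (editing the file requires administrator rights):

192.168.216.135     hadoop01
192.168.216.136     hadoop02
192.168.216.137     hadoop03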

View the HDFS web UI

hadoop01:50070

View the YARN web UI

hadoop02:8088

Test a job

Here we use the wordcount example that ships with Hadoop to run a test MapReduce job on the cluster.

1. Prepare the MapReduce input file wc.input

[hadoop@bigdata-senior01 modules]$ cat /opt/data/wc.input
hadoop mapreduce hive
hbase spark storm
sqoop hadoop hive
spark hadoop

2. Create the input directory on HDFS

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -mkdir /input

3. Upload wc.input to HDFS

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -put /opt/data/wc.input /input/wc.input

4. Run the MapReduce demo that ships with Hadoop

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/wc.input /output

5. Check the output files

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -ls /output
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2016-07-14 16:36 /output/_SUCCESS
-rw-r--r--   3 hadoop supergroup         60 2016-07-14 16:36 /output/part-r-00000
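Reading the result back with hdfs dfs -cat should, given the wc.input above, produce this tally (tab-separated word and count):

[hadoop@bigdata-senior01 hadoop-2.5.0]$ bin/hdfs dfs -cat /output/part-r-00000
hadoop	3
hbase	1
hive	2
mapreduce	1
spark	2
sqoop	1
storm	1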
