Building a Flink 1.10.2 Cluster with Docker (ON YARN)

1. References

Setting up a distributed Flink cluster locally with Docker
Building a Flink big data processing platform on Docker
Flink cluster setup
Hadoop integration
Flink study notes: environment setup
Apache Flink from scratch (1): basic concepts explained
The third deployment mode for Flink on YARN: Application Mode
Flink series (8): Flink Standalone cluster deployment

2. 開始

Our cluster plan is as follows:

            flink01[172.173.16.23]   flink02[172.173.16.24]   flink03[172.173.16.25]
JOB MANAGER Master                   Slave                    Slave
HDFS        DataNode                 DataNode                 DataNode
YARN        NodeManager              NodeManager              NodeManager
PORT        8086                     22                       22

The Flink cluster also depends on a Hadoop cluster, so it is listed here as well:

       hadoop01[172.173.16.10]   hadoop02[172.173.16.11]        hadoop03[172.173.16.12]
HDFS   NameNode, DataNode        DataNode                       SecondaryNameNode, DataNode
YARN   NodeManager               ResourceManager, NodeManager   NodeManager
PORT   22,9000,50070             22                             22

Image preparation

The ON YARN mode runs on top of Hadoop, so the Flink machines need a Hadoop environment.
Option 1: pull from Docker Hub

docker pull caiserkaiser/hadoop:2.7.2

Option 2: build the caiser/hadoop:2.7.2 image yourself
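The actual build of that image is described in a separate article; purely as an illustration, a minimal Hadoop base image of this kind might look roughly like the sketch below (the base image, package names and JDK path are assumptions, not the author's Dockerfile):

# Hypothetical sketch only; adjust the JDK path and package names to your setup.
FROM centos:7

# sshd, `which` and a JDK are needed by the Hadoop/Flink scripts used later on.
RUN yum install -y openssh-server openssh-clients which java-1.8.0-openjdk-devel \
    && ssh-keygen -A \
    && yum clean all

# ADD auto-extracts the tarball; /opt/envs matches the paths used throughout this article.
ADD hadoop-2.7.2.tar.gz /opt/envs/

# Adjust JAVA_HOME to wherever your JDK package actually installs.
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk \
    HADOOP_HOME=/opt/envs/hadoop-2.7.2 \
    PATH=$PATH:/opt/envs/hadoop-2.7.2/bin:/opt/envs/hadoop-2.7.2/sbin

CMD ["/bin/bash"]

Built with docker build -t caiser/hadoop:2.7.2 . it can then stand in for the image pulled in option 1.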

Create a custom network

Reference: Docker network operations

docker network create -d bridge --subnet "172.173.16.0/24" --gateway "172.173.16.1"  datastore_net
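To confirm that the subnet and gateway were set up as intended, the network can be inspected:

docker network inspect datastore_net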

Start a container

docker run -it -d --network datastore_net --ip 172.173.16.23 --name flink01 caiser/hadoop:2.7.2

Download and configure Flink

  1. Download (the download commands are sketched at the end of this list)
    flink
    flink-shaded-hadoop

  2. Copy the downloads into the container

    docker cp ~/Downloads/flink-1.10.2-bin-scala_2.12.tgz ccd2b9cb65a5:/opt/envs
    
  3. Extract the archive

    tar -zxvf flink-1.10.2-bin-scala_2.12.tgz
    
  4. Configure the Hadoop environment

    ①. vi /etc/profile

    ②. Set HADOOP_HOME

    export HADOOP_HOME=/opt/envs/hadoop-2.7.2
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    

    ③. source /etc/profile

  5. Edit the masters file

    • a. Back it up
    cp /opt/envs/flink-1.10.2/conf/masters /opt/envs/flink-1.10.2/conf/masters.bak
    
    • b. Change it to
    flink01:8081
    
  6. Edit the slaves file

    • a. Back it up
    cp /opt/envs/flink-1.10.2/conf/slaves /opt/envs/flink-1.10.2/conf/slaves.bak
    
    • b. Change it to
    flink01
    flink02
    flink03
    
  7. Install flink-shaded-hadoop (copy the jar into Flink's lib directory)

    docker cp ~/Downloads/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar ccd2b9cb65a5:/opt/envs/flink-1.10.2/lib
    
  8. Edit the flink-conf.yaml configuration file

    • a. Back it up
    cp /opt/envs/flink-1.10.2/conf/flink-conf.yaml /opt/envs/flink-1.10.2/conf/flink-conf.yaml.bak
    
    • b. Set the following:
    jobmanager.rpc.address: flink01
    taskmanager.memory.process.size: 1024m
    rest.bind-port: 8086
    web.submit.enable: true
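For step 1 above, both downloads can be fetched on the host (or inside the container) before the docker cp; a sketch, assuming the Apache archive and Maven Central URLs for these exact versions:

wget https://archive.apache.org/dist/flink/flink-1.10.2/flink-1.10.2-bin-scala_2.12.tgz
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.7.5-10.0/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar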
    

Flink node configuration

Edit /etc/hosts and add the following hostname entries:

172.173.16.23 flink01
172.173.16.24 flink02
172.173.16.25 flink03

Install which

yum install which

Commit the container as an image and remove it

docker commit ccd2b9cb65a5 caiser/flink:1.10.2
docker stop ccd2b9cb65a5
docker rm ccd2b9cb65a5

Start the containers

docker run -it -d --network datastore_net --ip 172.173.16.23 --name flink01 caiser/flink:1.10.2 /bin/bash

docker run -it -d --network datastore_net --ip 172.173.16.24 --name flink02 caiser/flink:1.10.2 /bin/bash

docker run -it -d --network datastore_net --ip 172.173.16.25 --name flink03 caiser/flink:1.10.2 /bin/bash
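A note on the /etc/hosts entries added earlier: Docker bind-mounts /etc/hosts into each container, so those edits are not part of the committed image. On a user-defined network the container names flink01/flink02/flink03 already resolve through Docker's embedded DNS, but if you want the entries pinned explicitly they can be passed at run time; a sketch for flink01:

docker run -it -d --network datastore_net --ip 172.173.16.23 --name flink01 \
    --add-host flink02:172.173.16.24 --add-host flink03:172.173.16.25 \
    caiser/flink:1.10.2 /bin/bash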

Configure passwordless SSH login

  1. Enter the container

    docker exec -it flink01 /bin/bash
    
  2. Generate a key pair under ~/.ssh

    ssh-keygen -t rsa
    
  3. Copy the public key to flink01, flink02 and flink03

    a. Start the SSH service if it is not already running (check with ps -ef | grep ssh)

    /usr/sbin/sshd -D &
    

    b. Copy the public key to flink01, flink02 and flink03

    ssh-copy-id flink01
    ssh-copy-id flink02
    ssh-copy-id flink03
    
  4. Repeat steps 1-3 on flink02 and flink03 in turn
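A quick sanity check once all three nodes are done, assuming the keys and sshd above are in place: from any node, the other two should be reachable without a password prompt, e.g.

ssh flink02 hostname    # runs `hostname` on flink02; no password prompt should appear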

Startup (ON YARN)

Step 1: Add the Flink nodes to the Hadoop cluster

① On flink01, start the DataNode from the hadoop/sbin directory:

hadoop-daemon.sh start datanode

② On flink01, start the NodeManager from the hadoop/sbin directory:

yarn-daemon.sh start nodemanager

③ Run jps to confirm that the DataNode and NodeManager are up.
④ Repeat the steps above on flink02 and flink03.
⑤ Back on the NameNode, print the cluster report, or open the web UI on port 50070 to check the number of nodes:

hdfs dfsadmin -report
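The YARN side can be checked in the same way: on the ResourceManager node (hadoop02 in this layout), the newly added NodeManagers should now be listed by

yarn node -list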

Step 2: Run yarn-session

On flink01, run the following in flink/bin:

./yarn-session.sh -n 3 -s 1 -jm 1024 -tm 1024

If you see output like the following, the session started successfully.
P.S. It can also be run in the background: ./yarn-session.sh -n 3 -s 1 -jm 1024 -tm 1024 -d

[root@3b5491eb3eb9 bin]# ./yarn-session.sh -n 3 -s 1 -jm 1024 -tm 1024
2020-11-21 16:43:12,787 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.rpc.address, flink01
2020-11-21 16:43:12,791 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.rpc.port, 6123
2020-11-21 16:43:12,791 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.heap.size, 1024m
2020-11-21 16:43:12,792 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: taskmanager.memory.process.size, 1024m
2020-11-21 16:43:12,792 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2020-11-21 16:43:12,792 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: parallelism.default, 1
2020-11-21 16:43:12,793 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: jobmanager.execution.failover-strategy, region
2020-11-21 16:43:12,793 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: rest.bind-port, 8086
2020-11-21 16:43:12,794 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: web.submit.enable, true
2020-11-21 16:43:13,592 WARN  org.apache.hadoop.util.NativeCodeLoader                       - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-11-21 16:43:13,783 INFO  org.apache.flink.runtime.security.modules.HadoopModule        - Hadoop user set to root (auth:SIMPLE)
2020-11-21 16:43:13,872 INFO  org.apache.flink.runtime.security.modules.JaasModule          - Jaas file will be created as /tmp/jaas-3674228881056681551.conf.
2020-11-21 16:43:13,893 WARN  org.apache.flink.yarn.cli.FlinkYarnSessionCli                 - The configuration directory ('/opt/envs/flink-1.10.2/conf') already contains a LOG4J config file.If you want to use logback, then please delete or rename the log configuration file.
2020-11-21 16:43:14,012 INFO  org.apache.hadoop.yarn.client.RMProxy                         - Connecting to ResourceManager at hadoop02/172.173.16.11:8032
2020-11-21 16:43:14,375 INFO  org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils  - The derived from fraction jvm overhead memory (102.400mb (107374184 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
2020-11-21 16:43:14,376 INFO  org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils  - The derived from fraction network memory (57.600mb (60397978 bytes)) is less than its min value 64.000mb (67108864 bytes), min value will be used instead
2020-11-21 16:43:14,609 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=1024, slotsPerTaskManager=1}
2020-11-21 16:43:17,126 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Submitting application master application_1605959782595_0008
2020-11-21 16:43:17,179 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl         - Submitted application application_1605959782595_0008
2020-11-21 16:43:17,180 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Waiting for the cluster to be allocated
2020-11-21 16:43:17,182 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Deploying cluster, current state ACCEPTED
2020-11-21 16:43:24,733 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - YARN application has been deployed successfully.
2020-11-21 16:43:24,739 INFO  org.apache.flink.yarn.YarnClusterDescriptor                   - Found Web Interface 8f4fdb3626d6:8086 of application 'application_1605959782595_0008'.
JobManager Web Interface: http://8f4fdb3626d6:8086

Note: in this mode the host of the Web UI changes with every deployment, so for development it is still more convenient to use a single machine or a standalone cluster.
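With the session up, jobs submitted from flink01 with the regular flink CLI are picked up by it (yarn-session.sh leaves behind a YARN properties file that the CLI reads). A quick smoke test, assuming the example jars shipped with the distribution are still in place:

cd /opt/envs/flink-1.10.2
./bin/flink run ./examples/batch/WordCount.jar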

Startup (STANDALONE CLUSTER)

Step 1: Run start-cluster

Run the following in flink/bin on any one of the machines:

./start-cluster.sh

Then the Web UI can be reached on port 8081 (the default) or 8086 (the port configured in this article).
Note: the port still has to be mapped to the host.
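If the containers have not been created yet, the simplest way to get that mapping is to publish the port when starting flink01; a sketch (for containers that are already running, see the next section):

docker run -it -d --network datastore_net --ip 172.173.16.23 -p 8086:8086 --name flink01 caiser/flink:1.10.2 /bin/bash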

Adding a port to a running container

Reference: adding a port to a running Docker container on macOS

"ExposedPorts":{"8086/tcp":{}}
"PortBindings":{"8086/tcp":[{"HostIp":"","HostPort":"8086"}]}