A complete cluster build: Hadoop, Spark, ZooKeeper, Kafka, Redis, and more, plus changing Hadoop's default log level.

Environment variable configuration

Create a soft folder under the root directory.

The following installation packages all go into that folder; extract them and create symlinks (to short names such as /soft/hadoop, matching the environment variables below):

0 jdk-8u191-linux-x64.tar
1 hadoop-2.7.2.tar
2 spark-2.3.1-bin-hadoop2.7
3 scala-2.11.12
4 zookeeper-3.4.10.tar
5 kafka_2.11-1.1.1 (version updated)
6 redis-3.2.12.tar
7 hbase-1.2.9-bin.tar
8 apache-tomcat-7.0.91.tar

Then, on master, open /etc/environment with vim, copy all of its contents into /etc/environment on every slave, and run source /etc/environment to apply it (a distribution sketch follows the listing below):

JAVA_HOME="/soft/jdk/"
HADOOP_HOME="/soft/hadoop/"
HIVE_HOME="/soft/hive"
HBASE_HOME="/soft/hbase"
ZK_HOME="/soft/zookeeper"
KAFKA_HOME="/soft/kafka"
SCALA_HOME="/soft/scala"
SPARK_HOME="/soft/spark"
BIGDL_HOME="/soft/bigdl"
SQOOP_HOME="/soft/sqoop"
KE_HOME="/soft/kafka-eagle/kafka-eagle-web-1.2.4"
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/soft/jdk/bin:/soft/hadoop/bin:/soft/hadoop/sbin:/soft/hive/bin:/soft/zookeeper/bin:/soft/hbase/bin:/soft/kafka/bin:/soft/scala/bin:/soft/spark/bin:/soft/spark/sbin:/soft/bigdl/bin:/soft/sqoop/bin:/soft/kafka-eagle/kafka-eagle-web-1.2.4/bin"
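
A minimal sketch of the distribution step described above, assuming the slaves are named slave1 through slave6 and that the zhjc user can sudo on each of them:

for host in slave1 slave2 slave3 slave4 slave5 slave6; do
  scp /etc/environment zhjc@${host}:/tmp/environment        # copy master's file over
  ssh zhjc@${host} "sudo mv /tmp/environment /etc/environment && source /etc/environment"
done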

Hadoop cluster setup

1. Copy the Hadoop archive onto the master host and extract it (here the extraction path is /home/soft/), then configure the environment variables the same way as for the JDK.

2. Inside the hadoop-2.7.2 folder, first create 4 directories (hdfs under hadoop-2.7.2, the other three under hdfs; a one-line alternative follows the commands):

sudo mkdir hdfs
cd hdfs
sudo mkdir data
sudo mkdir tmp
sudo mkdir name
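
Equivalently, all of the above can be done in one command from inside hadoop-2.7.2 (a bash brace-expansion one-liner):

sudo mkdir -p hdfs/{data,tmp,name}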

3. Edit Hadoop's configuration files.
First go into the configuration directory: cd /home/soft/hadoop-2.7.2/etc/hadoop (again, use your own path), then run

ls

to list the files in that directory.

Cluster/distributed mode requires modifying 5 configuration files in /home/soft/hadoop-2.7.2/etc/hadoop: slaves, core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml.

slaves

Every host listed in the slaves file is used as a DataNode, and every slave runs a DataNode, so write the slaves' hostnames into this file. Run:

sudo gedit slaves

Delete localhost and write slave1. If you don't see localhost, check whether you opened the wrong file; it should be the slaves file under etc/hadoop. With multiple slaves, remember to add the other slaves' hostnames here as well, e.g. slave1, slave2, ...
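
For example, with two slaves the finished slaves file would contain just the hostnames, one per line:

slave1
slave2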

core-site.xml
sudo gedit core-site.xml

Add:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>     
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/soft/hadoop-2.7.2/hdfs/tmp</value>      
</property>
<property>
<name>fs.trash.interval</name>
<value>10080</value>
</property>
</configuration>
hdfs-site.xml
sudo gedit hdfs-site.xml

Add:

<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>                 
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/soft/hadoop-2.7.2/hdfs/name</value>    
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/soft/hadoop-2.7.2/hdfs/data</value>
</property>
</configuration>
mapred-site.xml

If this file does not exist, first run

cp mapred-site.xml.template mapred-site.xml

to make a copy, and then

sudo gedit mapred-site.xml

Add:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>  
</configuration>
yarn-site.xml
sudo gedit yarn-site.xml

Add:

<configuration>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>         
    <value>master</value>                           
</property>
</configuration>

Perform the steps above on the master host, then format the NameNode. Format it only once, with the following command:

hadoop namenode -format

Note: as long as "successfully formatted" appears in the output, it succeeded.

Next, copy Hadoop to slave1, slave2, and the other slaves:

scp -r hadoop-2.7.2 zhjc@slave1:/home/soft/

Note: zhjc is the slave's username, set when slave1 was created.

After copying, configure Hadoop's environment variables on slave1 the same way. Once configured, test it as above: if hadoop version prints output, it works. The other slaves should be set up identically to slave1.

6. Start Hadoop

There are two ways:

start-all.sh

or

start-dfs.sh
start-yarn.sh

Run jps on master and on slave1 to check the daemons: on master you should see NameNode, SecondaryNameNode, and ResourceManager; on slave1 you should see DataNode and NodeManager.
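
For reference, a typical listing looks roughly like the following (the process IDs will differ):

on master:
2481 NameNode
2695 SecondaryNameNode
2852 ResourceManager
3120 Jps

on slave1:
2130 DataNode
2265 NodeManager
2410 Jps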

If those processes are present, the cluster has been set up successfully. To stop the cluster:

stop-all.sh

7. Finally, use the bundled example to check that the Hadoop cluster can actually run jobs.

Run:

hadoop jar /home/soft/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar  pi 2 10

This computes an estimate of Pi: pi is the example class name, the first number (2) is the number of map tasks, and the second (10) is the number of randomly generated sample points (related to how the value is computed).

Finally the result appears: an estimated value of Pi is printed to the console (quite rough with so few samples).

The Hadoop cluster setup is complete.

Spark cluster (fully distributed)

This step comes after the Hadoop cluster setup; it assumes the environment variables on the master and slaves are already configured.

The four run modes of a Spark cluster

1. Local

Runs on a single machine; generally used for development and testing.

2. Yarn

The Spark client connects directly to YARN; there is no need to build a separate Spark cluster.

3. Standalone

Build a Spark cluster consisting of a Master plus Workers; Spark runs inside that cluster.

4. Mesos

The Spark client connects directly to Mesos; there is no need to build a separate Spark cluster.

All other configuration is the same as on master; you only need to edit the slaves file under conf/ and then send it to every slave (a distribution sketch follows the listing):

vim slaves
#configure the cluster workers
slave1
slave2
slave3
slave4
slave5
slave6
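
A hedged sketch of pushing the edited files to every worker (the host list and the /soft/spark path follow the layout used above):

for host in slave1 slave2 slave3 slave4 slave5 slave6; do
  scp /soft/spark/conf/slaves /soft/spark/conf/spark-env.sh zhjc@${host}:/soft/spark/conf/
done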

Standalone start:

[zhjc@master spark]# ./sbin/start-all.sh

When submitting jobs, standalone mode is not used; jobs are handed directly to YARN (a submission example follows the configuration below).

The specific configuration is as follows (open the conf/spark-env.sh file under the spark folder on master to view it):

export JAVA_HOME=/soft/jdk   # Java home
export SCALA_HOME=/soft/scala # Scala home
export SPARK_WORKER_MEMORY=8g  # maximum memory available to each worker node
export SPARK_MASTER_IP=master   # hostname/IP of the master node
export HADOOP_HOME=/soft/hadoop  # Hadoop home
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop # Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop classpath):$(hbase classpath)
export SPARK_YARN_USER_ENV=/soft/hadoop/etc/hadoop/
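
With HADOOP_CONF_DIR pointing at the Hadoop configuration, a job can be handed to YARN as mentioned above; a minimal sketch (the class name and jar path are placeholders):

spark-submit --master yarn --deploy-mode cluster --class com.example.Main /path/to/app.jar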

ZooKeeper cluster setup

This assumes the environment variables on the master and slaves are already configured.

Go into the zookeeper directory and first create a data directory (for storing data). Then go into the conf directory, rename zoo_sample.cfg to zoo.cfg with mv, and edit it with vi or vim.

Create a myid file in the data directory to specify the node id.

[zhjc@master data]# vim myid  // master's myid is 1, slave1's myid is 2, and so on
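
Equivalently, the id can be written with a single command on each node (the data directory below matches the dataDir in the zoo.cfg shown later):

echo 1 > /soft/zookeeper/data/myid   # on master
echo 2 > /soft/zookeeper/data/myid   # on slave1, and so on for the other slaves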

Open the zoo.cfg file in the conf directory:

[zhjc@master conf]# vim zoo.cfg

Modify the relevant dataDir and server.1, server.2, server.3, ... entries. The numbers must match the numbers in the myid files; this id identifies the machine and is used during leader election at startup.

The specific configuration is as follows (from master's zoo.cfg):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/soft/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
server.4=slave3:2888:3888
server.5=slave4:2888:3888
server.6=slave5:2888:3888
server.7=slave6:2888:3888

Starting the ZooKeeper cluster (it must be started on every node, master and slaves alike; run the following on each):

[zhjc@master bin]# ./zkServer.sh start

On each node, go into ZooKeeper's bin directory and start ZooKeeper with zkServer.sh start, then check its state with zkServer.sh status.
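
One node should report itself as the leader and the others as followers; a typical (abridged) status output looks like:

ZooKeeper JMX enabled by default
Using config: /soft/zookeeper/bin/../conf/zoo.cfg
Mode: follower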

Kafka cluster setup

This assumes the environment variables on the master and slaves are already configured.

Edit the config/server.properties file under the kafka folder:

[zhjc@master config]# vim server.properties

Change broker.id=1; the default is 0. Master is 1, and the other nodes increment from there.
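
If server.properties has already been copied to every node, the id can be adjusted in place; a hedged one-liner (run on slave1, bumping the number on each further host):

sed -i 's/^broker.id=.*/broker.id=2/' /soft/kafka/config/server.properties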

The specific configuration is as follows (from master's server.properties):

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=72

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:2181,slave1:2181,slave2:2181,slave3:2181,slave4:2181,slave5:2181,slave6:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

Finally, create the log directory on every node: mkdir /hdfs/kafka, and create symlinks as needed. Once that is done, the Kafka cluster installation is complete.

Starting Kafka (it must be started on every node, master and slaves alike; run the following on each):

[zhjc@master bin]# nohup kafka-server-start.sh /soft/kafka/config/server.properties > /dev/null 2>&1 &

or:

[zhjc@master bin]# kafka-server-start.sh -daemon /soft/kafka/config/server.properties

Newer versions of Kafka do not need to be backgrounded with nohup; the -daemon flag alone runs it in the background. After starting, check with jps: if a Kafka process is listed, it started successfully. Creating topics, producing, and consuming work basically the same as before, and stopping is likewise done with bin/kafka-server-stop.sh.
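
For reference, the usual smoke test on this Kafka version looks like the following (the topic name and the partition/replication counts are just examples):

kafka-topics.sh --create --zookeeper master:2181 --replication-factor 2 --partitions 3 --topic test
kafka-console-producer.sh --broker-list master:9092 --topic test
kafka-console-consumer.sh --bootstrap-server master:9092 --topic test --from-beginning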

Single-node Redis installation on CentOS

Starting Redis: go to the /soft/redis directory and run:

redis-server redis.conf

Stopping Redis:

redis-cli shutdown

Deleting keys with a fixed prefix:

redis-cli  -p 6379 --scan --pattern "C0*" | xargs -L 5000 redis-cli -n 0 -p 6379 DEL

Redis installation:

Find the Redis package, extract it into the appropriate folder, then compile and install it.

Installing Redis requires a C build environment. If gcc is missing, install it online with: yum install gcc-c++

cd redis-4.0.9 //enter the Redis source directory
make  //compile
make install PREFIX=/soft/redis //install

PREFIX specifies the install directory. After installation you will see a bin folder under /soft/redis; go into it and run ll.

At this point Redis is installed and can be started directly with ./redis-server; this runs it in the foreground, and Ctrl+C stops it.

Redis can also be started via an init script. The utils folder in the build directory contains redis_init_script. First copy the init script into /etc/init.d, naming it redis_<port> (here it was renamed to redis_6379), where the port is the one Redis will listen on and clients will connect to. Then set the REDISPORT variable in the script to the same port.

Then create the directory for Redis configuration files and the directory for Redis persistence files:

/etc/redis holds the Redis configuration files

/var/redis/<port> holds the Redis persistence files (here /var/redis/6379)

Edit the configuration file

Copy the configuration template redis-4.0.9/redis.conf into /etc/redis, named after the port (e.g. 6379.conf), then edit some of its parameters (a preparation sketch follows the parameter list):

daemonize yes    run Redis in daemon mode
pidfile /var/run/redis_<port>.pid    location of Redis's PID file
port <port>    the port Redis listens on
dir /var/redis/<port>    where the persistence files are stored
#requirepass foobared    uncomment and set your own value if a password is needed
bind 127.0.0.1    change the default 127.0.0.1 to 0.0.0.0 (no restriction) so that external hosts can connect
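
A hedged sketch of the preparation described above, assuming the source tree was unpacked at /soft/redis-4.0.9:

sudo cp /soft/redis-4.0.9/utils/redis_init_script /etc/init.d/redis_6379   # init script
sudo mkdir -p /etc/redis /var/redis/6379                                   # config and persistence dirs
sudo cp /soft/redis-4.0.9/redis.conf /etc/redis/6379.conf                  # config template to edit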

Now Redis can also be started and stopped with the following commands:

/etc/init.d/redis_6379 start
/etc/init.d/redis_6379 stop

Starting Redis automatically with the system:

chkconfig redis_6379 on

After the steps above, Redis can also be started and stopped directly with the following commands:

service redis_6379 start

service redis_6379 stop

This way, when the system reboots, Redis starts automatically with it.

The stop method above can stop Redis, but since Redis may be syncing in-memory data to disk, forcibly terminating the process may cause data loss. The correct way to stop Redis is to send it the SHUTDOWN command:

redis-cli SHUTDOWN

When Redis receives SHUTDOWN, it first disconnects all client connections, then performs persistence according to its configuration, and finally exits.
Redis handles the SIGTERM signal properly, so killing the Redis process by PID also shuts it down cleanly, with the same effect as sending SHUTDOWN.

If external access is needed, first check whether the firewall is blocking the port.

Then change the bind setting in the configuration file from the default 127.0.0.1 to 0.0.0.0.
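
On CentOS 7 with firewalld, the check and the port opening typically look like this (adjust the port if Redis listens elsewhere):

sudo firewall-cmd --query-port=6379/tcp
sudo firewall-cmd --zone=public --add-port=6379/tcp --permanent
sudo firewall-cmd --reload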

Changing Hadoop's default log level

Modify the setting in log4j.properties:

# Define some default values that can be overridden by system properties
hadoop.root.logger=WARN,console

Note that this setting can be overridden by system properties!

The following 2 files also need to be changed to actually replace the default log level (only the HDFS side is changed here; adapt the YARN side by analogy):

The first is ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh; change INFO to WARN:

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Xmx30720m -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-WARN,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-WARN,NullAppender} $HADOOP_NAMENODE_OPTS"

The YARN counterpart, ${HADOOP_HOME}/etc/hadoop/yarn-env.sh, needs the same change.

The startup script ${HADOOP_HOME}/sbin/hadoop-daemon.sh needs the same kind of change:

export HADOOP_ROOT_LOGGER=${HADOOP_ROOT_LOGGER:-"WARN,RFA"}

Likewise for the YARN startup script, ${HADOOP_HOME}/sbin/yarn-daemon.sh.

Finally, restart the NameNode for the change to take effect.

Transferring MySQL data to HBase

The following packages are needed:

1 sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar

2 mysql-connector-java-5.1.38

Common commands:

//sqoop transfer
sqoop import --connect jdbc:mysql://192.168.139.1:3306/transportation --username wcc --password 123456 --table 't_videodata_raw' --where "CreateTime>='2019-03-27 00:00:00' AND CreateTime < '2019-03-28 00:00:00'" --hbase-table 'transportation:t_videodata_raw' --hbase-row-key 'Rowkey' --column-family 'info' --split-by 'Rowkey'
//concatenate fields to build the Rowkey
UPDATE t_link_set as t SET Rowkey =CONCAT(LinkID,'_',create_time)
//submit to yarn
spark-submit --master yarn --deploy-mode cluster --driver-memory 1G --executor-memory 1500m --executor-cores 2 --class Forecastion.knnTest SparkTrain-1.0-SNAPSHOT.jar

//count the rows in an hbase table
hbase org.apache.hadoop.hbase.mapreduce.RowCounter "transportation:t_videodata_raw"
//transfer + scheduling
***************************scheduling***************************************
sudo yum install crontabs
sudo systemctl enable crond (enable at boot)
sudo systemctl start crond (start the crond service)
sudo systemctl status crond (check its status)
sudo nano /etc/crontab
1 0 * * * root /usr/local/mycommand.sh (this runs the script once every day at 00:01)
sudo crontab /etc/crontab
sudo crontab -l
***************************轉(zhuǎn)存***************************************
yesday=$(date -d last-day +%Y-%m-%d)
export SQOOP_HOME=/soft/sqoop

sqoop import \
  --connect jdbc:mysql://192.168.139.1:3306/whtmb \
  --username xxxx \
  --password xxxx1234 \
  --table 't_videodata_raw' \
  --check-column CreateTime \
  --incremental lastmodified \
  --last-value ${yesday} \
  --hbase-table 'transportation:t_videodata_raw' \
  --merge-key 'RowKey' \
  --hbase-row-key 'RowKey' \
  --column-family 'info' \
  --split-by 'RowKey'
#sqoop import --connect jdbc:mysql://192.168.139.1:3306/whtmb --username wcc --password 123456 --table 't_earthmagnetic_raw'  --hbase-table 'transportation:t_earthmagnetic_raw' --hbase-row-key 'RowKey' --colu$

#sqoop import --connect jdbc:mysql://192.168.139.1:3306/whtmb --username ${rdbms_username} --password ${rdbms_pwd} --table ${rdbms_table1} --hbase-table ${hbase_table1} --hbase-row-key ${row_key} --column-fam$
#sqoop import --connect ${rdbms_url} --username ${rdbms_username} --password ${rdbms_pwd} --table ${rdbms_table2} --hbase-table ${hbase_table2} --hbase-row-key ${row_key} --column-family ${column-family} --sp$
#sqoop import --connect ${rdbms_url} --username ${rdbms_username} --password ${rdbms_pwd} --table ${rdbms_table3} --hbase-table ${hbase_table3} --hbase-row-key ${row_key} --column-family ${column-family} --sp$
echo "等待批量任務(wù)完成"
         wait
echo "開始下一批導入"

Commands to run:

sqoop list-databases --connect jdbc:mysql://192.168.139.1:3306/whtmb --username xxxx --password xxxx1234

sqoop import --connect jdbc:mysql://192.168.139.1:3306/whtmb --username xxxx --password xxxx1234 --table 't_videodata_raw' --hbase-table 'transportation:t_videodata_raw' --hbase-row-key 'Rowkey' --column-family 'info' --split-by 'Rowkey'