 1. HDFS HA Cluster Configuration
   
   https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
   Environment preparation
   1). Set static IP addresses
   2). Set hostnames and the hostname-to-IP mappings
   3). Disable the firewall
   4). Set up passwordless SSH login
   5). Install the JDK and configure environment variables
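The hostname-mapping step can be scripted. A minimal sketch that stages the entries in a review file before they are appended to /etc/hosts; the IP addresses below are hypothetical, substitute your own network plan:

```shell
#!/usr/bin/env bash
# Stage hostname-to-IP mappings for review before touching /etc/hosts.
# The 192.168.1.x addresses are assumptions -- adjust to your network.
STAGE="${1:-hosts.add}"
: > "$STAGE"
while read -r ip host; do
  printf '%s %s\n' "$ip" "$host" >> "$STAGE"
done <<'EOF'
192.168.1.121 linux121
192.168.1.122 linux122
192.168.1.123 linux123
EOF
# Review the file, then on every node: cat hosts.add >> /etc/hosts
```

Staging first avoids writing duplicate or wrong entries straight into a system file on three machines at once.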
   Cluster plan
   linux121        linux122          linux123
   NameNode        NameNode
   JournalNode     JournalNode       JournalNode
   DataNode        DataNode          DataNode
   ZK              ZK                ZK
                   ResourceManager
   NodeManager     NodeManager       NodeManager
   Start the Zookeeper cluster
   zk.sh start
   Check its status
   zk.sh status
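zk.sh is a custom wrapper script, not part of the Zookeeper distribution. A minimal sketch of what it could look like; the ZK_HOME path and the use of plain ssh to each node are assumptions:

```shell
#!/usr/bin/env bash
# Sketch of a zk.sh-style wrapper: run zkServer.sh with the same action
# on every node of the plan above. ZK_HOME is an assumed install path.
ZK_HOME="${ZK_HOME:-/opt/lagou/servers/zookeeper-3.4.14}"
zk_cluster() {
  local action="$1"   # start | stop | status
  local host
  for host in linux121 linux122 linux123; do
    "${SSH:-ssh}" "$host" "$ZK_HOME/bin/zkServer.sh $action"
  done
}
# Usage: zk_cluster start ; zk_cluster status
```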
 
 2. Configure the HDFS HA Cluster
   
   1). Stop the existing HDFS cluster
   stop-dfs.sh
   2). On every node, create an ha directory under /opt/lagou/servers
   mkdir /opt/lagou/servers/ha
   3). Copy hadoop-2.9.2 from /opt/lagou/servers/ into the ha directory
   cp -r /opt/lagou/servers/hadoop-2.9.2 /opt/lagou/servers/ha
   4). Delete the old cluster's data directory from the copy
   rm -rf /opt/lagou/servers/ha/hadoop-2.9.2/data
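Steps 2) through 4) have to be repeated on every node, so they can be bundled into one function; the base path defaults to the one used in this guide and is overridable for trying the sketch elsewhere:

```shell
#!/usr/bin/env bash
# Steps 2)-4) in one function: create the ha directory, copy the
# existing install into it, and drop the old data directory.
prepare_ha_dir() {
  local base="${1:-/opt/lagou/servers}"
  mkdir -p "$base/ha"
  cp -r "$base/hadoop-2.9.2" "$base/ha/"
  rm -rf "$base/ha/hadoop-2.9.2/data"
}
# Run on every node: prepare_ha_dir
```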
   5). Configure hdfs-site.xml
   <property>
        <name>dfs.nameservices</name>
        <value>lagoucluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.lagoucluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.lagoucluster.nn1</name>
        <value>linux121:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.lagoucluster.nn2</name>
        <value>linux122:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.lagoucluster.nn1</name>
        <value>linux121:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.lagoucluster.nn2</name>
        <value>linux122:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://linux121:8485;linux122:8485;linux123:8485/lagou</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.lagoucluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/journalnode</value>
    </property>
	<property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
	6). Configure core-site.xml
	<property>
        <name>fs.defaultFS</name>
        <value>hdfs://lagoucluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/lagou/servers/ha/hadoop-2.9.2/data/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>linux121:2181,linux122:2181,linux123:2181</value>
    </property>
	7). Distribute the configured Hadoop environment to the other nodes
	rsync-script /opt/lagou/servers/ha/hadoop-2.9.2/
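rsync-script is a custom helper from earlier in the course, not a standard tool. A minimal sketch of what it could look like, assuming plain rsync over ssh to the other two nodes of the plan:

```shell
#!/usr/bin/env bash
# Sketch of an rsync-script-style helper: push a path to the same
# location on the remaining nodes. Assumes passwordless ssh is set up.
rsync_script() {
  local path="${1%/}"          # strip a trailing slash so the dir itself is synced
  local host
  for host in linux122 linux123; do
    "${RSYNC:-rsync}" -av "$path" "$host:$(dirname "$path")/"
  done
}
# Usage: rsync_script /opt/lagou/servers/ha/hadoop-2.9.2/
```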
 
 3. Start the HDFS HA Cluster
   
   1). On each JournalNode host, run the command below to start the journalnode service (use the copy in the HA install directory, not the one on the PATH from your environment variables)
   /opt/lagou/servers/ha/hadoop-2.9.2/sbin/hadoop-daemon.sh start journalnode
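Before formatting the NameNode it is worth confirming that a JournalNode JVM is actually up on all three hosts. A small jps-based check (a sketch; it assumes jps is on the PATH of every node):

```shell
#!/usr/bin/env bash
# Check each node's JVM list for a JournalNode process via jps.
check_journalnodes() {
  local host
  for host in linux121 linux122 linux123; do
    if "${SSH:-ssh}" "$host" jps | grep -q JournalNode; then
      echo "$host: JournalNode up"
    else
      echo "$host: JournalNode MISSING"
    fi
  done
}
# Usage: check_journalnodes
```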
   2). On [nn1], format the NameNode and start it
   /opt/lagou/servers/ha/hadoop-2.9.2/bin/hdfs namenode -format
   /opt/lagou/servers/ha/hadoop-2.9.2/sbin/hadoop-daemon.sh start namenode
   3). On [nn2], sync nn1's metadata
   /opt/lagou/servers/ha/hadoop-2.9.2/bin/hdfs namenode -bootstrapStandby
   4). On [nn1], initialize the ZKFC state in Zookeeper
   /opt/lagou/servers/ha/hadoop-2.9.2/bin/hdfs zkfc -formatZK
   5). On [nn1], start the cluster
   /opt/lagou/servers/ha/hadoop-2.9.2/sbin/start-dfs.sh
   6). Verify automatic failover
   Kill the active NameNode process and confirm that the standby takes over
   kill -9 <NameNode process id>
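After killing the active NameNode, the standby should become active within seconds. The states can be checked with the standard `hdfs haadmin -getServiceState` command; a small wrapper over it (the `HDFS` variable is only there so the sketch can be exercised without a cluster):

```shell
#!/usr/bin/env bash
# Print the HA state of both NameNodes using hdfs haadmin.
ha_states() {
  local nn
  for nn in nn1 nn2; do
    printf '%s: %s\n' "$nn" "$("${HDFS:-hdfs}" haadmin -getServiceState "$nn")"
  done
}
# Usage: ha_states   -- after the kill, the surviving NameNode should report "active"
```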
   