<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!-- saved from url=(0089)http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html -->
<html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Apache Hadoop 2.7.2 – Hadoop Cluster Setup</title>
    <style type="text/css" media="all">
      @import url("./css/maven-base.css");
      @import url("./css/maven-theme.css");
      @import url("./css/site.css");
    </style>
    <link rel="stylesheet" href="./Apache Hadoop 2.7.2 –_files/print.css" type="text/css" media="print">
        <meta name="Date-Revision-yyyymmdd" content="20160126">
    
</head>
  <body class="composite">
    <div id="banner">
                  <a href="http://hadoop.apache.org/" id="bannerLeft">
                                        <img src="./Apache Hadoop 2.7.2 –_files/hadoop-logo.jpg" alt="">
                </a>
                        <a href="http://www.apache.org/" id="bannerRight">
                                        <img src="./Apache Hadoop 2.7.2 –_files/asf_logo_wide.png" alt="">
                </a>
            <div class="clear">
        <hr>
      </div>
    </div>
    <div id="breadcrumbs">
            
                                <div class="xleft">
                          <a href="http://www.apache.org/" class="externalLink">Apache</a>
                &gt;
                      <a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
                &gt;
                      <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/">Apache Hadoop Project Dist POM</a>
                &gt;
                Apache Hadoop 2.7.2
                </div>
            <div class="xright">            <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
            |
                <a href="https://git-wip-us.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
            |
                <a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
              
                                &nbsp;| Last Published: 2016-01-26
              &nbsp;| Version: 2.7.2
            </div>
      <div class="clear">
        <hr>
      </div>
    </div>
    <div id="leftColumn">
      <div id="navcolumn">
             
                                                <h5>General</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/index.html">Overview</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html">Hadoop Commands Reference</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html">Hadoop Compatibility</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
            </li>
          </ul>
                       <h5>Common</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-kms/index.html">Hadoop KMS</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
            </li>
          </ul>
                       <h5>HDFS</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS User Guide</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">HDFS Commands Reference</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">High Availability With QJM</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">High Availability With NFS</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs Guide</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">HDFS Snapshots</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS Architecture</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Hftp.html">HFTP</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/LibHdfs.html">C API libhdfs</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS REST API</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-hdfs-httpfs/index.html">HttpFS Gateway</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">HDFS NFS Gateway</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">HDFS Rolling Upgrade</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">HDFS Support for Multihoming</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Archival Storage, SSD &amp; Memory</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
            </li>
          </ul>
                       <h5>MapReduce</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">MapReduce Tutorial</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">MapReduce Commands Reference</a>
            </li>
                  <li class="none">
<a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility between Hadoop 1.x and Hadoop 2.x</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
            </li>
          </ul>
                       <h5>MapReduce REST APIs</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
            </li>
          </ul>
                       <h5>YARN</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html">Overview</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html">YARN Architecture</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html">YARN Timeline Server</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnCommands.html">YARN Commands</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/NodeManagerRestart.html">NodeManager Restart</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainerExecutor.html">DockerContainerExecutor</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/registry/index.html">Registry</a>
            </li>
          </ul>
                       <h5>YARN REST APIs</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
            </li>
          </ul>
                       <h5>Hadoop Compatible File Systems</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-azure/index.html">Azure Blob Storage</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-openstack/index.html">OpenStack Swift</a>
            </li>
          </ul>
                       <h5>Auth</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-auth/index.html">Overview</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-auth/Examples.html">Examples</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-auth/Configuration.html">Configuration</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-auth/BuildingIt.html">Building</a>
            </li>
          </ul>
                       <h5>Tools</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html">DistCp</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-gridmix/GridMix.html">GridMix</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-rumen/Rumen.html">Rumen</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
            </li>
          </ul>
                       <h5>Reference</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/releasenotes.html">Release Notes</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/api/index.html">API docs</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CHANGES.txt">Common CHANGES.txt</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CHANGES.txt">HDFS CHANGES.txt</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-mapreduce/CHANGES.txt">MapReduce CHANGES.txt</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-yarn/CHANGES.txt">YARN CHANGES.txt</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
            </li>
          </ul>
                       <h5>Configuration</h5>
                  <ul>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
            </li>
                  <li class="none">
                  <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
            </li>
          </ul>
                                 <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
          <img alt="Built by Maven" src="./Apache Hadoop 2.7.2 –_files/maven-feather.png">
        </a>
                       
                            </div>
    </div>
    <div id="bodyColumn">
      <div id="contentBox">
        <!-- -
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file. -->
<ul>
  
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Cluster_Setup">Hadoop Cluster Setup</a>
  
<ul>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Purpose">Purpose</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Prerequisites">Prerequisites</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Installation">Installation</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuring_Hadoop_in_Non-Secure_Mode">Configuring Hadoop in Non-Secure Mode</a>
    
<ul>
      
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuring_Environment_of_Hadoop_Daemons">Configuring Environment of Hadoop Daemons</a></li>
      
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuring_the_Hadoop_Daemons">Configuring the Hadoop Daemons</a></li>
    </ul></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Monitoring_Health_of_NodeManagers">Monitoring Health of NodeManagers</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Slaves_File">Slaves File</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness">Hadoop Rack Awareness</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Logging">Logging</a></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Operating_the_Hadoop_Cluster">Operating the Hadoop Cluster</a>
    
<ul>
      
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Startup">Hadoop Startup</a></li>
      
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Shutdown">Hadoop Shutdown</a></li>
    </ul></li>
    
<li><a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html#Web_Interfaces">Web Interfaces</a></li>
  </ul></li>
</ul>
<h1>Hadoop Cluster Setup</h1>
<div class="section">
<h2><a name="Purpose"></a>Purpose</h2>
<p>This document describes how to install and configure Hadoop clusters ranging from a few nodes to extremely large clusters with thousands of nodes. To play with Hadoop, you may first want to install it on a single machine (see <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>).</p>
<p>This document does not cover advanced topics such as <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html">Security</a> or High Availability.</p></div>
<div class="section">
<h2><a name="Prerequisites"></a>Prerequisites</h2>

<ul>
  
<li>Install Java. See the <a class="externalLink" href="http://wiki.apache.org/hadoop/HadoopJavaVersions">Hadoop Wiki</a> for known good versions.</li>
  
<li>Download a stable version of Hadoop from Apache mirrors.</li>
</ul></div>
<div class="section">
<h2><a name="Installation"></a>Installation</h2>
<p>Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster or installing it via a packaging system as appropriate for your operating system. It is important to divide up the hardware into functions.</p>
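<p>As an illustrative sketch (the version number and mirror URL below are assumptions; pick a current stable release from the Apache mirrors page), fetching and unpacking a release on one machine might look like:</p>

```shell
# Hypothetical fetch-and-unpack of a Hadoop release tarball.
# HADOOP_VERSION and the mirror URL are placeholders, not recommendations.
HADOOP_VERSION=2.7.2
TARBALL="hadoop-${HADOOP_VERSION}.tar.gz"
MIRROR="https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}"
# curl -O "${MIRROR}/${TARBALL}"   # download (commented out; requires network)
# tar -xzf "${TARBALL}"            # unpacks into hadoop-${HADOOP_VERSION}/
echo "${MIRROR}/${TARBALL}"
```

<p>The same unpacked tree would then be distributed to (or installed via packages on) every machine in the cluster.</p>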
<p>Typically one machine in the cluster is designated as the NameNode and another machine as the ResourceManager, exclusively. These are the masters. Other services (such as the Web App Proxy Server and the MapReduce Job History Server) are usually run either on dedicated hardware or on shared infrastructure, depending upon the load.</p>
<p>The rest of the machines in the cluster act as both DataNode and NodeManager. These are the slaves.</p></div>
<div class="section">
<h2><a name="Configuring_Hadoop_in_Non-Secure_Mode"></a>Configuring Hadoop in Non-Secure Mode</h2>
<p>Hadoop’s Java configuration is driven by two types of important configuration files:</p>

<ul>
  
<li>
<p>Read-only default configuration - <tt>core-default.xml</tt>, <tt>hdfs-default.xml</tt>, <tt>yarn-default.xml</tt> and <tt>mapred-default.xml</tt>.</p></li>
  
<li>
<p>Site-specific configuration - <tt>etc/hadoop/core-site.xml</tt>, <tt>etc/hadoop/hdfs-site.xml</tt>, <tt>etc/hadoop/yarn-site.xml</tt> and <tt>etc/hadoop/mapred-site.xml</tt>.</p></li>
</ul>
<p>Additionally, you can control the Hadoop scripts found in the <tt>bin/</tt> directory of the distribution by setting site-specific values via <tt>etc/hadoop/hadoop-env.sh</tt> and <tt>etc/hadoop/yarn-env.sh</tt>.</p>
<p>To configure the Hadoop cluster you will need to configure the <tt>environment</tt> in which the Hadoop daemons execute as well as the <tt>configuration parameters</tt> for the Hadoop daemons.</p>
<p>The HDFS daemons are NameNode, SecondaryNameNode, and DataNode. The YARN daemons are ResourceManager, NodeManager, and WebAppProxy. If MapReduce is to be used, then the MapReduce Job History Server will also be running. For large installations, these generally run on separate hosts.</p>
<div class="section">
<h3><a name="Configuring_Environment_of_Hadoop_Daemons"></a>Configuring Environment of Hadoop Daemons</h3>
<p>Administrators should use the <tt>etc/hadoop/hadoop-env.sh</tt> and optionally the <tt>etc/hadoop/mapred-env.sh</tt> and <tt>etc/hadoop/yarn-env.sh</tt> scripts to do site-specific customization of the Hadoop daemons’ process environment.</p>
<p>At the very least, you must specify the <tt>JAVA_HOME</tt> so that it is correctly defined on each remote node.</p>
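<p>For example, a minimal <tt>etc/hadoop/hadoop-env.sh</tt> entry might read (the JDK path is illustrative and varies by node):</p>

```shell
# Point JAVA_HOME at the JDK installed on this node (path is an example).
export JAVA_HOME=/usr/java/latest
```
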
<p>Administrators can configure individual daemons using the configuration options shown in the table below:</p>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Daemon </th>
      
<th align="left">Environment Variable </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left">NameNode </td>
      
<td align="left">HADOOP_NAMENODE_OPTS </td>
    </tr>
    
<tr class="a">
      
<td align="left">DataNode </td>
      
<td align="left">HADOOP_DATANODE_OPTS </td>
    </tr>
    
<tr class="b">
      
<td align="left">Secondary NameNode </td>
      
<td align="left">HADOOP_SECONDARYNAMENODE_OPTS </td>
    </tr>
    
<tr class="a">
      
<td align="left">ResourceManager </td>
      
<td align="left">YARN_RESOURCEMANAGER_OPTS </td>
    </tr>
    
<tr class="b">
      
<td align="left">NodeManager </td>
      
<td align="left">YARN_NODEMANAGER_OPTS </td>
    </tr>
    
<tr class="a">
      
<td align="left">WebAppProxy </td>
      
<td align="left">YARN_PROXYSERVER_OPTS </td>
    </tr>
    
<tr class="b">
      
<td align="left">Map Reduce Job History Server </td>
      
<td align="left">HADOOP_JOB_HISTORYSERVER_OPTS </td>
    </tr>
  </tbody>
</table>
<p>For example, to configure the NameNode to use parallel garbage collection, add the following statement to <tt>hadoop-env.sh</tt>:</p>

<div class="source">
<div class="source">
<pre>  export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC"
</pre></div></div>
<p>See <tt>etc/hadoop/hadoop-env.sh</tt> for other examples.</p>
<p>Other useful configuration parameters that you can customize include:</p>

<ul>
  
<li><tt>HADOOP_PID_DIR</tt> - The directory where the daemons’ process id files are stored.</li>
  
<li><tt>HADOOP_LOG_DIR</tt> - The directory where the daemons’ log files are stored. Log files are automatically created if they don’t exist.</li>
  
<li><tt>HADOOP_HEAPSIZE</tt> / <tt>YARN_HEAPSIZE</tt> - The maximum heap size to use, in MB; e.g. if the variable is set to 1000, the heap will be set to 1000 MB. By default, the value is 1000. If you want to configure the heap size separately for each daemon, use the daemon-specific variables listed in the table below.</li>
</ul>
<p>In most cases, you should specify the <tt>HADOOP_PID_DIR</tt> and <tt>HADOOP_LOG_DIR</tt> directories such that they can only be written to by the users that are going to run the hadoop daemons. Otherwise there is the potential for a symlink attack.</p>
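<p>As an illustration, a site-specific <tt>hadoop-env.sh</tt> fragment might combine these settings as follows (a sketch; every path below is a placeholder to adapt to your hosts):</p>

```shell
# Sketch of site-specific settings for etc/hadoop/hadoop-env.sh
# (all paths below are placeholders; adapt them to your installation).
export JAVA_HOME=${JAVA_HOME:-/usr/java/latest}  # must resolve on every node
export HADOOP_PID_DIR=/var/run/hadoop            # writable only by the daemon user
export HADOOP_LOG_DIR=/var/log/hadoop            # log files are created here
export HADOOP_HEAPSIZE=1000                      # daemon heap size, in MB
```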
<p>It is also traditional to configure <tt>HADOOP_PREFIX</tt> in the system-wide shell environment configuration. For example, a simple script inside <tt>/etc/profile.d</tt>:</p>

<div class="source">
<div class="source">
<pre>  HADOOP_PREFIX=/path/to/hadoop
  export HADOOP_PREFIX
</pre></div></div>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Daemon </th>
      
<th align="left">Environment Variable </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left">ResourceManager </td>
      
<td align="left">YARN_RESOURCEMANAGER_HEAPSIZE </td>
    </tr>
    
<tr class="a">
      
<td align="left">NodeManager </td>
      
<td align="left">YARN_NODEMANAGER_HEAPSIZE </td>
    </tr>
    
<tr class="b">
      
<td align="left">WebAppProxy </td>
      
<td align="left">YARN_PROXYSERVER_HEAPSIZE </td>
    </tr>
    
<tr class="a">
      
<td align="left">Map Reduce Job History Server </td>
      
<td align="left">HADOOP_JOB_HISTORYSERVER_HEAPSIZE </td>
    </tr>
  </tbody>
</table></div>
<div class="section">
<h3><a name="Configuring_the_Hadoop_Daemons"></a>Configuring the Hadoop Daemons</h3>
<p>This section deals with important parameters to be specified in the given configuration files:</p>

<ul>
  
<li><tt>etc/hadoop/core-site.xml</tt></li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>fs.defaultFS</tt> </td>
      
<td align="left">NameNode URI </td>
      
<td align="left"><tt>hdfs://host:port/</tt> </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>io.file.buffer.size</tt> </td>
      
<td align="left">131072 </td>
      
<td align="left">Size of read/write buffer used in SequenceFiles. </td>
    </tr>
  </tbody>
</table>
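<p>Putting the two parameters above together, a minimal <tt>etc/hadoop/core-site.xml</tt> could look like the following sketch (the hostname and port in <tt>fs.defaultFS</tt> are placeholders):</p>

```xml
<configuration>
  <!-- NameNode URI; nn.example.com:9000 is a placeholder -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn.example.com:9000/</value>
  </property>
  <!-- Read/write buffer size for SequenceFiles -->
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
```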

<ul>
  
<li>
<p><tt>etc/hadoop/hdfs-site.xml</tt></p></li>
  
<li>
<p>Configurations for NameNode:</p></li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>dfs.namenode.name.dir</tt> </td>
      
<td align="left">Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently. </td>
      
<td align="left">If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>dfs.hosts</tt> / <tt>dfs.hosts.exclude</tt> </td>
      
<td align="left">List of permitted/excluded DataNodes. </td>
      
<td align="left">If necessary, use these files to control the list of allowable datanodes. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>dfs.blocksize</tt> </td>
      
<td align="left">268435456 </td>
      
<td align="left">HDFS blocksize of 256MB for large file-systems. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>dfs.namenode.handler.count</tt> </td>
      
<td align="left">100 </td>
      
<td align="left">More NameNode server threads to handle RPCs from large number of DataNodes. </td>
    </tr>
  </tbody>
</table>
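<p>For illustration, the NameNode settings above might be combined in <tt>etc/hadoop/hdfs-site.xml</tt> as in this sketch (the local directory paths are placeholders; listing two directories replicates the name table for redundancy):</p>

```xml
<configuration>
  <!-- Placeholder paths; the name table is replicated in both directories -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/1/hdfs/nn,/data/2/hdfs/nn</value>
  </property>
  <!-- 256 MB blocks for large file-systems -->
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <!-- More server threads for RPCs from many DataNodes -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
</configuration>
```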

<ul>
  
<li>Configurations for DataNode:</li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>dfs.datanode.data.dir</tt> </td>
      
<td align="left">Comma separated list of paths on the local filesystem of a <tt>DataNode</tt> where it should store its blocks. </td>
      
<td align="left">If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. </td>
    </tr>
  </tbody>
</table>
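<p>A corresponding DataNode entry in <tt>etc/hadoop/hdfs-site.xml</tt> might look like the following sketch (the paths are placeholders, typically one directory per physical device):</p>

```xml
<configuration>
  <!-- Placeholder paths; blocks are stored in all named directories -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/1/hdfs/dn,/data/2/hdfs/dn</value>
  </property>
</configuration>
```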

<ul>
  
<li>
<p><tt>etc/hadoop/yarn-site.xml</tt></p></li>
  
<li>
<p>Configurations for ResourceManager and NodeManager:</p></li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>yarn.acl.enable</tt> </td>
      
<td align="left"><tt>true</tt> / <tt>false</tt> </td>
      
<td align="left">Enable ACLs? Defaults to <i>false</i>. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.admin.acl</tt> </td>
      
<td align="left">Admin ACL </td>
      
<td align="left">ACL to set admins on the cluster. ACLs are of the form <i>comma-separated-users</i>&nbsp;<i>comma-separated-groups</i> (the two lists separated by a space). Defaults to the special value of <b>*</b> which means <i>anyone</i>. The special value of just <i>space</i> means no one has access. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.log-aggregation-enable</tt> </td>
      
<td align="left"><i>false</i> </td>
      
<td align="left">Configuration to enable or disable log aggregation. </td>
    </tr>
  </tbody>
</table>
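<p>For example, a sketch of these common settings in <tt>etc/hadoop/yarn-site.xml</tt> (the admin user and group names are placeholders):</p>

```xml
<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <!-- "users space groups" format; alice,bob and admins are placeholders -->
  <property>
    <name>yarn.admin.acl</name>
    <value>alice,bob admins</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
</configuration>
```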

<ul>
  
<li>Configurations for ResourceManager:</li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>yarn.resourcemanager.address</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> host:port for clients to submit jobs. </td>
      
<td align="left"><i>host:port</i>&nbsp;If set, overrides the hostname set in <tt>yarn.resourcemanager.hostname</tt>. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.resourcemanager.scheduler.address</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> host:port for ApplicationMasters to talk to Scheduler to obtain resources. </td>
      
<td align="left"><i>host:port</i>&nbsp;If set, overrides the hostname set in <tt>yarn.resourcemanager.hostname</tt>. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.resourcemanager.resource-tracker.address</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> host:port for NodeManagers. </td>
      
<td align="left"><i>host:port</i>&nbsp;If set, overrides the hostname set in <tt>yarn.resourcemanager.hostname</tt>. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.resourcemanager.admin.address</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> host:port for administrative commands. </td>
      
<td align="left"><i>host:port</i>&nbsp;If set, overrides the hostname set in <tt>yarn.resourcemanager.hostname</tt>. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.resourcemanager.webapp.address</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> web-ui host:port. </td>
      
<td align="left"><i>host:port</i>&nbsp;If set, overrides the hostname set in <tt>yarn.resourcemanager.hostname</tt>. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.resourcemanager.hostname</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> host. </td>
      
<td align="left"><i>host</i>&nbsp;Single hostname that can be set in place of setting all <tt>yarn.resourcemanager*address</tt> resources. Results in default ports for ResourceManager components. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.resourcemanager.scheduler.class</tt> </td>
      
<td align="left"><tt>ResourceManager</tt> Scheduler class. </td>
      
<td align="left"><tt>CapacityScheduler</tt> (recommended), <tt>FairScheduler</tt> (also recommended), or <tt>FifoScheduler</tt> </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.scheduler.minimum-allocation-mb</tt> </td>
      
<td align="left">Minimum limit of memory to allocate to each container request at the <tt>ResourceManager</tt>. </td>
      
<td align="left">In MBs </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.scheduler.maximum-allocation-mb</tt> </td>
      
<td align="left">Maximum limit of memory to allocate to each container request at the <tt>ResourceManager</tt>. </td>
      
<td align="left">In MBs </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.resourcemanager.nodes.include-path</tt> / <tt>yarn.resourcemanager.nodes.exclude-path</tt> </td>
      
<td align="left">List of permitted/excluded NodeManagers. </td>
      
<td align="left">If necessary, use these files to control the list of allowable NodeManagers. </td>
    </tr>
  </tbody>
</table>
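<p>A minimal ResourceManager sketch for <tt>etc/hadoop/yarn-site.xml</tt> might then be (the hostname and memory limits are placeholders; setting only the hostname yields default ports for all ResourceManager components):</p>

```xml
<configuration>
  <!-- rm.example.com is a placeholder -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.example.com</value>
  </property>
  <!-- Placeholder container memory limits, in MB -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
</configuration>
```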

<ul>
  
<li>Configurations for NodeManager:</li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>yarn.nodemanager.resource.memory-mb</tt> </td>
      
<td align="left">Available physical memory, in MB, for the given <tt>NodeManager</tt>. </td>
      
<td align="left">Defines the total resources on the <tt>NodeManager</tt> made available to running containers. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.nodemanager.vmem-pmem-ratio</tt> </td>
      
<td align="left">Maximum ratio by which virtual memory usage of tasks may exceed physical memory </td>
      
<td align="left">The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.nodemanager.local-dirs</tt> </td>
      
<td align="left">Comma-separated list of paths on the local filesystem where intermediate data is written. </td>
      
<td align="left">Multiple paths help spread disk i/o. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.nodemanager.log-dirs</tt> </td>
      
<td align="left">Comma-separated list of paths on the local filesystem where logs are written. </td>
      
<td align="left">Multiple paths help spread disk i/o. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.nodemanager.log.retain-seconds</tt> </td>
      
<td align="left"><i>10800</i> </td>
      
<td align="left">Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.nodemanager.remote-app-log-dir</tt> </td>
      
<td align="left"><i>/logs</i> </td>
      
<td align="left">HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.nodemanager.remote-app-log-dir-suffix</tt> </td>
      
<td align="left"><i>logs</i> </td>
      
<td align="left">Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.nodemanager.aux-services</tt> </td>
      
<td align="left">mapreduce_shuffle </td>
      
<td align="left">Shuffle service that needs to be set for Map Reduce applications. </td>
    </tr>
  </tbody>
</table>
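<p>A NodeManager sketch for <tt>etc/hadoop/yarn-site.xml</tt>, combining some of the settings above (the memory value and paths are placeholders):</p>

```xml
<configuration>
  <!-- Placeholder: physical memory, in MB, offered to containers -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Placeholder paths; multiple paths help spread disk i/o -->
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/1/yarn/local,/data/2/yarn/local</value>
  </property>
  <!-- Shuffle service required for MapReduce applications -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```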

<ul>
  
<li>Configurations for History Server:</li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>yarn.log-aggregation.retain-seconds</tt> </td>
      
<td align="left"><i>-1</i> </td>
      
<td align="left">How long to keep aggregated logs before deleting them. -1 disables deletion. Be careful: setting this too small will spam the NameNode. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.log-aggregation.retain-check-interval-seconds</tt> </td>
      
<td align="left"><i>-1</i> </td>
      
<td align="left">Time between checks for aggregated log retention. If set to 0 or a negative value, the value is computed as one-tenth of the aggregated log retention time. Be careful: setting this too small will spam the NameNode. </td>
    </tr>
  </tbody>
</table>

<ul>
  
<li>
<p><tt>etc/hadoop/mapred-site.xml</tt></p></li>
  
<li>
<p>Configurations for MapReduce Applications:</p></li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>mapreduce.framework.name</tt> </td>
      
<td align="left">yarn </td>
      
<td align="left">Execution framework set to Hadoop YARN. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>mapreduce.map.memory.mb</tt> </td>
      
<td align="left">1536 </td>
      
<td align="left">Larger resource limit for maps. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>mapreduce.map.java.opts</tt> </td>
      
<td align="left">-Xmx1024M </td>
      
<td align="left">Larger heap-size for child jvms of maps. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>mapreduce.reduce.memory.mb</tt> </td>
      
<td align="left">3072 </td>
      
<td align="left">Larger resource limit for reduces. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>mapreduce.reduce.java.opts</tt> </td>
      
<td align="left">-Xmx2560M </td>
      
<td align="left">Larger heap-size for child jvms of reduces. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>mapreduce.task.io.sort.mb</tt> </td>
      
<td align="left">512 </td>
      
<td align="left">Higher memory-limit while sorting data for efficiency. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>mapreduce.task.io.sort.factor</tt> </td>
      
<td align="left">100 </td>
      
<td align="left">More streams merged at once while sorting files. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>mapreduce.reduce.shuffle.parallelcopies</tt> </td>
      
<td align="left">50 </td>
      
<td align="left">Higher number of parallel copies run by reduces to fetch outputs from very large number of maps. </td>
    </tr>
  </tbody>
</table>
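<p>Using the example values from the table above, a sketch of <tt>etc/hadoop/mapred-site.xml</tt> might read (note the Java heap, via <tt>*.java.opts</tt>, should stay below the corresponding container memory limit):</p>

```xml
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Container limit and child-JVM heap for maps -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
  </property>
  <!-- Container limit and child-JVM heap for reduces -->
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>
</configuration>
```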

<ul>
  
<li>Configurations for MapReduce JobHistory Server:</li>
</ul>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>mapreduce.jobhistory.address</tt> </td>
      
<td align="left">MapReduce JobHistory Server <i>host:port</i> </td>
      
<td align="left">Default port is 10020. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>mapreduce.jobhistory.webapp.address</tt> </td>
      
<td align="left">MapReduce JobHistory Server Web UI <i>host:port</i> </td>
      
<td align="left">Default port is 19888. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>mapreduce.jobhistory.intermediate-done-dir</tt> </td>
      
<td align="left">/mr-history/tmp </td>
      
<td align="left">Directory where history files are written by MapReduce jobs. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>mapreduce.jobhistory.done-dir</tt> </td>
      
<td align="left">/mr-history/done </td>
      
<td align="left">Directory where history files are managed by the MR JobHistory Server. </td>
    </tr>
  </tbody>
</table></div></div>
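<p>Likewise, a JobHistory Server sketch for <tt>etc/hadoop/mapred-site.xml</tt> (the hostname is a placeholder; the ports and directories are the defaults from the table above):</p>

```xml
<configuration>
  <!-- jhs.example.com is a placeholder -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>jhs.example.com:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>jhs.example.com:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
  </property>
</configuration>
```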
<div class="section">
<h2><a name="Monitoring_Health_of_NodeManagers"></a>Monitoring Health of NodeManagers</h2>
<p>Hadoop provides a mechanism by which administrators can configure the NodeManager to run an administrator supplied script periodically to determine if a node is healthy or not.</p>
<p>Administrators can determine if the node is in a healthy state by performing any checks of their choice in the script. If the script detects the node to be in an unhealthy state, it must print a line to standard output beginning with the string ERROR. The NodeManager spawns the script periodically and checks its output. If the script’s output contains the string ERROR, as described above, the node’s status is reported as <tt>unhealthy</tt> and the node is black-listed by the ResourceManager. No further tasks will be assigned to this node. However, the NodeManager continues to run the script, so that if the node becomes healthy again, it will be removed from the blacklisted nodes on the ResourceManager automatically. The node’s health along with the output of the script, if it is unhealthy, is available to the administrator in the ResourceManager web interface. The time since the node was healthy is also displayed on the web interface.</p>
<p>The following parameters can be used to control the node health monitoring script in <tt>etc/hadoop/yarn-site.xml</tt>.</p>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Parameter </th>
      
<th align="left">Value </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left"><tt>yarn.nodemanager.health-checker.script.path</tt> </td>
      
<td align="left">Node health script </td>
      
<td align="left">Script to check for node’s health status. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.nodemanager.health-checker.script.opts</tt> </td>
      
<td align="left">Node health script options </td>
      
<td align="left">Options for script to check for node’s health status. </td>
    </tr>
    
<tr class="b">
      
<td align="left"><tt>yarn.nodemanager.health-checker.script.interval-ms</tt> </td>
      
<td align="left">Node health script interval </td>
      
<td align="left">Time interval for running health script. </td>
    </tr>
    
<tr class="a">
      
<td align="left"><tt>yarn.nodemanager.health-checker.script.timeout-ms</tt> </td>
      
<td align="left">Node health script timeout interval </td>
      
<td align="left">Timeout for health script execution. </td>
    </tr>
  </tbody>
</table>
<p>The health checker script is not supposed to give ERROR if only some of the local disks become bad. The NodeManager also has the ability to periodically check the health of the local disks (specifically, <tt>nodemanager-local-dirs</tt> and <tt>nodemanager-log-dirs</tt>); once the number of bad directories reaches the threshold set by the property <tt>yarn.nodemanager.disk-health-checker.min-healthy-disks</tt>, the whole node is marked unhealthy and this information is also sent to the ResourceManager. The boot disk is either RAIDed, or a failure in the boot disk must be identified by the health checker script.</p></div>
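<p>A minimal node health script could look like the following sketch (the path <tt>/tmp</tt> and the free-space threshold are assumptions, not prescribed by Hadoop; the only contract is that an unhealthy node must print a line beginning with ERROR to standard output):</p>

```shell
#!/bin/bash
# Hypothetical NodeManager health script (a sketch; the checked path and
# threshold below are assumptions - adapt them to your cluster).
# YARN treats any stdout line beginning with "ERROR" as an unhealthy signal.
MIN_FREE_KB=${MIN_FREE_KB:-1048576}              # require ~1 GB free (placeholder)
free_kb=$(df -kP /tmp | awk 'NR==2 {print $4}')  # free KB on the scratch disk
if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
  echo "ERROR: only ${free_kb} KB free on /tmp"
fi
```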
<div class="section">
<h2><a name="Slaves_File"></a>Slaves File</h2>
<p>List all slave hostnames or IP addresses in your <tt>etc/hadoop/slaves</tt> file, one per line. Helper scripts (described below) will use the <tt>etc/hadoop/slaves</tt> file to run commands on many hosts at once. It is not used for any of the Java-based Hadoop configuration. In order to use this functionality, ssh trusts (via either passphraseless ssh or some other means, such as Kerberos) must be established for the accounts used to run Hadoop.</p></div>
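<p>For example, a three-node <tt>etc/hadoop/slaves</tt> file simply lists one host per line (the hostnames below are placeholders):</p>

```
dn01.example.com
dn02.example.com
dn03.example.com
```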
<div class="section">
<h2><a name="Hadoop_Rack_Awareness"></a>Hadoop Rack Awareness</h2>
<p>Many Hadoop components are rack-aware and take advantage of the network topology for performance and safety. Hadoop daemons obtain the rack information of the slaves in the cluster by invoking an administrator configured module. See the <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a> documentation for more specific information.</p>
<p>It is highly recommended to configure rack awareness prior to starting HDFS.</p></div>
<div class="section">
<h2><a name="Logging"></a>Logging</h2>
<p>Hadoop uses <a class="externalLink" href="http://logging.apache.org/log4j/2.x/">Apache log4j</a> via the Apache Commons Logging framework for logging. Edit the <tt>etc/hadoop/log4j.properties</tt> file to customize the Hadoop daemons’ logging configuration (log-formats and so on).</p></div>
<div class="section">
<h2><a name="Operating_the_Hadoop_Cluster"></a>Operating the Hadoop Cluster</h2>
<p>Once all the necessary configuration is complete, distribute the files to the <tt>HADOOP_CONF_DIR</tt> directory on all the machines. This should be the same directory on all machines.</p>
<p>In general, it is recommended that HDFS and YARN run as separate users. In the majority of installations, HDFS processes execute as ‘hdfs’, while YARN typically uses the ‘yarn’ account.</p>
<div class="section">
<h3><a name="Hadoop_Startup"></a>Hadoop Startup</h3>
<p>To start a Hadoop cluster you will need to start both the HDFS and YARN cluster.</p>
<p>The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format &lt;cluster_name&gt;
</pre></div></div>
<p>Start the HDFS NameNode with the following command on the designated node as <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
</pre></div></div>
<p>Start an HDFS DataNode with the following command on each designated node as <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
</pre></div></div>
<p>If <tt>etc/hadoop/slaves</tt> and ssh trusted access is configured (see <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>), all of the HDFS processes can be started with a utility script. As <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/sbin/start-dfs.sh
</pre></div></div>
<p>Start YARN with the following command, run on the designated ResourceManager as <i>yarn</i>:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
</pre></div></div>
<p>Run a script to start a NodeManager on each designated host as <i>yarn</i>:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager
</pre></div></div>
<p>Start a standalone WebAppProxy server. Run on the WebAppProxy server as <i>yarn</i>. If multiple servers are used with load balancing, it should be run on each of them:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start proxyserver
</pre></div></div>
<p>If <tt>etc/hadoop/slaves</tt> and ssh trusted access is configured (see <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>), all of the YARN processes can be started with a utility script. As <i>yarn</i>:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_PREFIX/sbin/start-yarn.sh
</pre></div></div>
<p>Start the MapReduce JobHistory Server with the following command, run on the designated server as <i>mapred</i>:</p>

<div class="source">
<div class="source">
<pre>[mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
</pre></div></div></div>
<div class="section">
<h3><a name="Hadoop_Shutdown"></a>Hadoop Shutdown</h3>
<p>Stop the NameNode with the following command, run on the designated NameNode as <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
</pre></div></div>
<p>Run a script to stop a DataNode as <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
</pre></div></div>
<p>If <tt>etc/hadoop/slaves</tt> and ssh trusted access is configured (see <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>), all of the HDFS processes may be stopped with a utility script. As <i>hdfs</i>:</p>

<div class="source">
<div class="source">
<pre>[hdfs]$ $HADOOP_PREFIX/sbin/stop-dfs.sh
</pre></div></div>
<p>Stop the ResourceManager with the following command, run on the designated ResourceManager as <i>yarn</i>:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
</pre></div></div>
<p>Run a script to stop a NodeManager on a slave as <i>yarn</i>:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR stop nodemanager
</pre></div></div>
<p>If <tt>etc/hadoop/slaves</tt> and ssh trusted access is configured (see <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>), all of the YARN processes can be stopped with a utility script. As <i>yarn</i>:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_PREFIX/sbin/stop-yarn.sh
</pre></div></div>
<p>Stop the WebAppProxy server. Run on the WebAppProxy server as <i>yarn</i>. If multiple servers are used with load balancing, it should be run on each of them:</p>

<div class="source">
<div class="source">
<pre>[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop proxyserver
</pre></div></div>
<p>Stop the MapReduce JobHistory Server with the following command, run on the designated server as <i>mapred</i>:</p>

<div class="source">
<div class="source">
<pre>[mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR stop historyserver
</pre></div></div></div></div>
<div class="section">
<h2><a name="Web_Interfaces"></a>Web Interfaces</h2>
<p>Once the Hadoop cluster is up and running, check the web UI of the components as described below:</p>

<table border="0" class="bodyTable">
  <thead>
    
<tr class="a">
      
<th align="left">Daemon </th>
      
<th align="left">Web Interface </th>
      
<th align="left">Notes </th>
    </tr>
  </thead>
  <tbody>
    
<tr class="b">
      
<td align="left">NameNode </td>
      
<td align="left"><a class="externalLink" href="http://nn_host:port/">http://nn_host:port/</a> </td>
      
<td align="left">Default HTTP port is 50070. </td>
    </tr>
    
<tr class="a">
      
<td align="left">ResourceManager </td>
      
<td align="left"><a class="externalLink" href="http://rm_host:port/">http://rm_host:port/</a> </td>
      
<td align="left">Default HTTP port is 8088. </td>
    </tr>
    
<tr class="b">
      
<td align="left">MapReduce JobHistory Server </td>
      
<td align="left"><a class="externalLink" href="http://jhs_host:port/">http://jhs_host:port/</a> </td>
      
<td align="left">Default HTTP port is 19888. </td>
    </tr>
  </tbody>
</table></div>
      </div>
    </div>
    <div class="clear">
      <hr>
    </div>
    <div id="footer">
      <div class="xright">©            2016
              Apache Software Foundation
            
                       - <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a></div>
      <div class="clear">
        <hr>
      </div>
    </div>
  

</body></html>