Getting Started with Apache Hadoop 0.23.0

Hadoop 0.23.0 was released November 11, 2011. Being the future of the Hadoop platform, it’s worth checking out even though it is an alpha release.

Note: Many of the instructions in this article came from trial and error, and there are many alternative (and possibly better) ways to configure the system. Please feel free to suggest improvements in the comments. Also, all commands were only tested on Mac OS X.


To get started, download the hadoop-0.23.0.tar.gz file from one of the Apache download mirrors.

Once downloaded, decompress the file. The bundled documentation is available in share/doc/hadoop/index.html.

Notes for Users of Previous Versions of Hadoop

The directory layout of the Hadoop distribution changed in 0.23.0 (and 0.20.204) relative to previous versions. In particular, there are now sbin, libexec, and etc directories in the root of the distribution tarball.

Scripts and Executables

In hadoop 0.23.0, a number of commonly used scripts have been removed from the bin directory or drastically changed. Specifically, the following scripts were removed (vs. 0.20.x):

  • hadoop-daemon(s).sh
  • start-all.sh and stop-all.sh
  • start-balancer.sh and stop-balancer.sh
  • start-dfs.sh and stop-dfs.sh
  • start-mapred.sh and stop-mapred.sh
  • task-controller

The start/stop mapred-related scripts have been replaced by "MapReduce 2.0" scripts named yarn-*. The start-all.sh and stop-all.sh scripts no longer start or stop HDFS; instead, they start and stop the yarn daemons. Finally, bin/hadoop has been deprecated. Instead, users should use bin/hdfs and bin/mapred.

Hadoop distributions now also include scripts in an sbin directory. The scripts include start-all.sh, start-dfs.sh, and start-balancer.sh (and the stop versions of those scripts).

Configuration Directories and Files

The conf directory that comes with Hadoop is no longer the default configuration directory.  Rather, Hadoop looks in etc/hadoop for configuration files.  The libexec directory contains the scripts hadoop-config.sh and hdfs-config.sh for configuring where Hadoop pulls configuration information, and it's possible to override the location of the configuration directory in the following ways:

  • hdfs-config.sh calls hadoop-config.sh in $HADOOP_COMMON_HOME/libexec and $HADOOP_HOME/libexec
  • hadoop-config.sh accepts a --config option for specifying a config directory, or the directory can be specified using $HADOOP_CONF_DIR.
    • This script also accepts a --hosts parameter to specify the hosts/slaves file.
    • This script uses variables typically set in hadoop-env.sh, such as $JAVA_HOME, $HADOOP_HEAPSIZE, $HADOOP_CLASSPATH, $HADOOP_LOG_DIR, $HADOOP_LOGFILE, and more.  See that file for a full list of variables.
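The lookup order described above can be sketched as a tiny helper (hypothetical code, not part of Hadoop; it only illustrates the precedence: an explicit --config flag wins, then $HADOOP_CONF_DIR, then the bundled etc/hadoop default):

```python
def resolve_conf_dir(argv, env, default="etc/hadoop"):
    """Illustrates the order in which the wrapper scripts resolve
    the configuration directory: --config, then $HADOOP_CONF_DIR,
    then the bundled default."""
    if "--config" in argv:
        return argv[argv.index("--config") + 1]
    if env.get("HADOOP_CONF_DIR"):
        return env["HADOOP_CONF_DIR"]
    return default

# A --config flag overrides everything:
print(resolve_conf_dir(["--config", "/opt/conf"], {"HADOOP_CONF_DIR": "/etc/alt"}))  # → /opt/conf
# Otherwise the environment variable applies:
print(resolve_conf_dir([], {"HADOOP_CONF_DIR": "/etc/alt"}))  # → /etc/alt
# With neither, the bundled default wins:
print(resolve_conf_dir([], {}))  # → etc/hadoop
```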

Configure HDFS

To start HDFS, we will use sbin/hadoop-daemon.sh, which pulls configuration from etc/hadoop by default. We'll be putting configuration files in that directory, starting with core-site.xml.  In core-site.xml, we must specify a default file system via fs.default.name (hdfs://localhost:9000 is the URI used throughout this walkthrough):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
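Hadoop configuration files are flat lists of <property> name/value pairs inside a <configuration> element. As a quick sanity check, such a file can be read with any XML parser — a minimal Python sketch (the fs.default.name key and hdfs://localhost:9000 value match the setup used in this walkthrough):

```python
import xml.etree.ElementTree as ET

# A minimal core-site.xml as used in this walkthrough.
CORE_SITE = """<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>"""

def read_properties(xml_text):
    """Return Hadoop <property> entries as a dict of name -> value."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

props = read_properties(CORE_SITE)
print(props["fs.default.name"])  # → hdfs://localhost:9000
```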

Next, we want to override the locations where the NameNode and DataNode store data, so that the data lives in a non-transient location. The two relevant parameters are dfs.namenode.name.dir and dfs.datanode.data.dir.  We also set dfs.replication to 1, since we're using a single datanode. The paths below are placeholders; substitute directories of your own:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///path/to/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///path/to/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>



  • As of HDFS-456 and HDFS-873, the namenode and datanode dirs should be specified with a full URI.
  • By default, hadoop starts up with 1000 megabytes of RAM allocated to each daemon. You can change this by adding a hadoop-env.sh to etc/hadoop. There's a template that can be copied into place with: $ cp ./share/hadoop/common/templates/conf/hadoop-env.sh etc/hadoop
    • The template sets up a bogus value for HADOOP_LOG_DIR
    • HADOOP_PID_DIR defaults to /tmp, so you might want to change that variable, too.

Start HDFS

If this is a fresh installation, first format the NameNode (a one-time step that initializes the name directory; the NameNode won't start without it):

bin/hdfs namenode -format

Start the NameNode:

sbin/hadoop-daemon.sh start namenode

Start a DataNode:

sbin/hadoop-daemon.sh start datanode

(Optionally) start the SecondaryNameNode (this is not required for local development, but it definitely is for production):

sbin/hadoop-daemon.sh start secondarynamenode

To confirm that the processes are running, issue jps and look for lines for NameNode, DataNode and SecondaryNameNode:

$ jps
55036 Jps
55000 SecondaryNameNode
54807 NameNode
54928 DataNode
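That check can also be scripted; a small, hypothetical helper that scans jps-style output for the three daemon names:

```python
EXPECTED = {"NameNode", "DataNode", "SecondaryNameNode"}

def missing_daemons(jps_output):
    """Return the expected HDFS daemons that do not appear in `jps` output."""
    running = {
        parts[1]
        for line in jps_output.strip().splitlines()
        if len(parts := line.split()) > 1
    }
    return sorted(EXPECTED - running)

# With output like the listing above, nothing is missing:
sample = "55036 Jps\n55000 SecondaryNameNode\n54807 NameNode\n54928 DataNode"
print(missing_daemons(sample))  # → []
# In a live session you would feed it the real output, e.g.:
#   missing_daemons(subprocess.check_output(["jps"], text=True))
```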


  • the hadoop daemons log to the "logs" dir.  Stdout goes to a file ending in ".out", and the logfile ends in ".log". If a daemon doesn't start up, check the files that include the daemon's name (e.g. logs/hadoop-joecrow-datanode-jcmba.local.out).
  • the commands might say “Unable to load realm info from SCDynamicStore” (at least on Mac OS X). This appears to be harmless output, see HADOOP-7489 for details.

Stopping HDFS

Eventually you’ll want to stop HDFS. Here are the commands to execute, in the given order:

sbin/hadoop-daemon.sh stop secondarynamenode
sbin/hadoop-daemon.sh stop datanode
sbin/hadoop-daemon.sh stop namenode

Use jps to confirm that the daemons are no longer running.

Running an example MR Job

This section just gives the commands for configuring and starting the Resource Manager, Node Manager, and Job History Server, but it doesn’t explain the details of those. Please refer to the References and Links section for more details.

The yarn daemons use the conf directory in the distribution for configuration by default. Since we used etc/hadoop as the configuration directory for HDFS, it would be nice to use it for MapReduce, too.  To that end, we update the following files:

In conf/yarn-env.sh, under the existing definition of YARN_CONF_DIR, add exports that point YARN_CONF_DIR and HADOOP_CONF_DIR at etc/hadoop.
In conf/yarn-site.xml, update the contents to (the aux-services properties below are the standard values that enable the MapReduce shuffle under yarn):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

Set the contents of etc/hadoop/mapred-site.xml to:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Now, start up the yarn daemons:

 $ bin/yarn-daemon.sh start resourcemanager
 $ bin/yarn-daemon.sh start nodemanager
 $ bin/yarn-daemon.sh start historyserver

A bunch of example jobs are available via the hadoop-examples jar. For example, to run the program that calculates pi:

$ bin/hadoop jar hadoop-mapreduce-examples-0.23.0.jar pi \
    -libjars modules/hadoop-mapreduce-client-jobclient-0.23.0.jar 16 10000

The command prints a lot of output, but towards the end you'll see:

Job Finished in 67.705 seconds
Estimated value of Pi is 3.14127500000000000000
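The pi example is a quasi-Monte Carlo estimator: each map task samples points in the unit square and counts how many land inside the inscribed quarter circle. Stripped of the MapReduce machinery, the underlying math fits in a few lines (this sketch uses plain pseudo-random sampling, whereas the Hadoop example uses Halton sequences):

```python
import random

def estimate_pi(num_samples, seed=42):
    """Estimate pi by sampling points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(num_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_samples

# Close to 3.14159; Monte Carlo error shrinks like 1/sqrt(n).
print(estimate_pi(160_000))
```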


  • By default, the resource manager uses a number of IPC ports, including 8025, 8030, 8040, and 8141.  The web UI is exposed on port 8088.
  • By default, the JobHistoryServer uses port 19888 for a web UI and port 10020 for IPC.
  • By default, the node manager uses port 9999 for a web UI and port 4344 for IPC. Port 8080 also appears to be in use, along with a seemingly random high port (e.g. 65176).
  • The resource manager has a “proxy” url that it uses to link-through to the JobHistoryServer UI. e.g.:
    $ curl -I
    HTTP/1.1 302 Found
    Content-Type: text/plain; charset=utf-8
    Content-Length: 0
    Server: Jetty(6.1.26)


While Hadoop 0.23 is an alpha release, getting it up and running in pseudo-distributed mode isn't too difficult.  The new architecture will take some getting used to for users of previous releases of Hadoop, but it's an exciting step forward.

Observations and Notes

There are a few bugs and gotchas that I discovered or verified while going through these steps; keep an eye on them:

  • HADOOP-7837 log4j isn't set up correctly when using sbin/hadoop-daemon.sh
  • HDFS-2574 Deprecated parameters appear in the hdfs-site.xml templates.
  • HDFS-2595 misleading message when running the sbin scripts without the relevant variable set
  • HDFS-2553 BlockPoolScanner spinning in a loop (causes DataNode to peg one cpu to 100%).
  • HDFS-2608 NameNode webui references missing hadoop.css

References and Links


19 Responses to Getting Started with Apache Hadoop 0.23.0

  1. Jie Li says:

    Good job! This is the best instruction so far!

    One more step: before starting the namenode, we need to format it by
    “bin/hadoop namenode -format”

    The other steps are all easy to follow. Thanks a lot!

  2. Nourl says:

    Thanks for your hard work and good post !
    I have run my hadoop program under guide of your blog.
    Thank you again!

  3. MRK says:


    In the hadoop 0.23.0 release there is no conf/masters file, which was used to specify the secondarynamenode host address. Could you please let me know how the secondarynamenode starts and where it will start? In this tutorial I have seen three commands to start HDFS:
    sbin/hadoop-daemon.sh stop secondarynamenode
    sbin/hadoop-daemon.sh stop datanode
    sbin/hadoop-daemon.sh stop namenode

    Datanode starts on the nodes mentioned in the conf/slaves file. let me know where secondary name node starts and how to configure the same.

  4. Pingback: Quora

  5. Praveen says:

    Hadoop 0.23 requires protoc 2.4.1+, Ubuntu 11.10 has 2.4.0. So, protoc source has to be got, built and installed.

    • joecrow says:

      Praveen, I wasn’t compiling the source at all in this example. If you download the distro, you should be able to run as is.

  6. srikanth says:

    Nice notes..but i am not able to start recource manager and node manager. did u face any problem like this?

  7. Pingback: Mongo-Hadoop Streaming – Bukan Tutorial | robee di sini!

  8. Krish says:

    Joe, If it hadn’t been for this blog post, I wouldn’t have CDH4B2 running, thanks for the great job. I am trying to run the included PI sample and am running into this weird issue, it complains about the output directory not existing. I thought hadoop created that automatically.

    hadoop jar /Users/hadoop/hadoop-0.23.1-cdh4.0.0b2/share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.1-cdh4.0.0b2.jar pi -libjars /Users/hadoop/hadoop-0.23.1-cdh4.0.0b2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.1-cdh4.0.0b2.jar 16 10000

    12/05/29 16:35:49 INFO mapreduce.Job: map 0% reduce 0%
    12/05/29 16:35:49 INFO mapreduce.Job: Job job_1338334106848_0004 failed with state FAILED due to: Application application_1338334106848_0004 failed 1 times due to AM Container for appattempt_1338334106848_0004_000001 exited with exitCode: 127 due to:
    .Failing this attempt.. Failing the application.
    12/05/29 16:35:49 INFO mapreduce.Job: Counters: 0
    Job Finished in 3.599 seconds File does not exist: hdfs://localhost:9000/user/hadoop/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out

  9. Pingback: Hadoop – Installation (on Ubuntu) | Daniel Adeniji's – Learning in the Open

  10. rashmi says:


    For hadoop-2.0.0 installation on two linux machines, what should be values of fs.defaultFS and and properties on both name nodes????

    one machine hostname is rsi-nod-nsn1 and another one is rsi-nod-nsn2…

    i want to make both federated namenodes.. and both should be used as datanodes too..

    what should be configuration changes for the same? i am not finding masters, mapred-site.xml, and files in hadoopHome/etc/hadoop folder… how do i make changes for these files?

  11. says:

    To start history server use
    $ sbin/ start

    With cdh401 I was unable to start history server using
    $ ysbin/ start historyserver

    starting historyserver, logging to /tmp/
    Exception in thread “main” java.lang.NoClassDefFoundError: historyserver
    Caused by: java.lang.ClassNotFoundException: historyserver
    at Method)
    at java.lang.ClassLoader.loadClass(
    at sun.misc.Launcher$AppClassLoader.loadClass(
    at java.lang.ClassLoader.loadClass(
    Could not find the main class: historyserver. Program will exit.

  12. Hardik says:

    I get the same FileNotFoundException running “pi” example, anyone with some idea pleas help

    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.1.jar pi -libjars share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.1.jar 16 10000
    12/09/06 19:56:10 WARN conf.Configuration: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
    Number of Maps = 16
    Samples per Map = 10000
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Wrote input for Map #3
    Wrote input for Map #4
    Wrote input for Map #5
    Wrote input for Map #6
    Wrote input for Map #7
    Wrote input for Map #8
    Wrote input for Map #9
    Wrote input for Map #10
    Wrote input for Map #11
    Wrote input for Map #12
    Wrote input for Map #13
    Wrote input for Map #14
    Wrote input for Map #15
    Starting Job
    12/09/06 19:56:22 WARN conf.Configuration: is deprecated. Instead, use fs.defaultFS
    12/09/06 19:56:22 INFO input.FileInputFormat: Total input paths to process : 16
    12/09/06 19:56:23 INFO mapreduce.JobSubmitter: number of splits:16
    12/09/06 19:56:25 INFO mapred.ResourceMgrDelegate: Submitted application application_1346972652940_0002 to ResourceManager at /
    12/09/06 19:56:27 INFO mapreduce.Job: The url to track the job:
    12/09/06 19:56:27 INFO mapreduce.Job: Running job: job_1346972652940_0002
    12/09/06 19:56:55 INFO mapreduce.Job: Job job_1346972652940_0002 running in uber mode : false
    12/09/06 19:56:55 INFO mapreduce.Job: map 0% reduce 0%
    12/09/06 19:56:56 INFO mapreduce.Job: Job job_1346972652940_0002 failed with state FAILED due to: Application application_1346972652940_0002 failed 1 times due to AM Container for appattempt_1346972652940_0002_000001 exited with exitCode: 1 due to:
    .Failing this attempt.. Failing the application.
    12/09/06 19:56:56 INFO mapreduce.Job: Counters: 0
    Job Finished in 35.048 seconds File does not exist: hdfs://localhost:9000/user/hardikpandya/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    at java.lang.reflect.Method.invoke(
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(
    at org.apache.hadoop.util.ProgramDriver.driver(
    at org.apache.hadoop.examples.ExampleDriver.main(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(
    at java.lang.reflect.Method.invoke(
    at org.apache.hadoop.util.RunJar.main(

  13. Keith Wiley says:

    Gah! Even though a comment pointed out that you forgot to mention formatting the namenode, and even though you replied that you would update the article…the omission is still there. I spent quite a while trying to figure out why the datanode would start but not the namenode (I figured it out by investigating the namenode error log in logs/).

    You should really put that update in the article. :-)


  14. kasi says:

    Hi all,
    I’m using cyswin in windows,
    When i type the command
    user@user-PC ~/hadoop-0.23.7
    $ bin/hadoop namenode -format
    I’m getting the following error, can any one please help me in resolving the issue.
    Thanks in Advance

    cygpath: can’t convert empty path
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    which: no hdfs in (./D:\cygwin\home\user\hadoop-0.23.7/bin)
    dirname: missing operand
    Try `dirname –help’ for more information.
    D:\cygwin\home\user\hadoop-0.23.7/bin/hdfs: line 24: /home/user/hadoop-0.23.7/../libexec/ No such file or directory
    cygpath: can’t convert empty path
    D:\cygwin\home\user\hadoop-0.23.7/bin/hdfs: line 142: exec: : not found

  15. Deepak says:

    While formatting name node i am getting the following error:

    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.

    Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode

    Please help me on this


  16. Harneet says:

    While configuring hadoop in cygwin on windows , when i run the command
    /bin/hadoop namenode
    it gives me the following error.
    /usr/local/hadoop-0.20.0/bin/../conf/ line2: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line7: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line10: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line13: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line16: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line19: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line29: $’\r’ :command not found
    /usr/local/hadoop-0.20.0/bin/../conf/ line32: $’\r’ :command not found
    bin/hadoop: line 258: /cygdrive/C/Program: No such file or directory
    /bin/java: No such file or directoryogram Files/Java/jdk1.7.0_03
    /bin/java: cannot execute: No such file or directorys/Java/jdk1.7.0_03

    Please help me to solve this error.
