How to run MapReduce Program on Hadoop?

This forum is for the Hadoop ecosystem: HDFS, MapReduce, Hive, HBase, Pig, Sqoop, Sqoop2, Avro, Solr, HCatalog, Impala, Oozie, ZooKeeper, and Hadoop distributions such as Cloudera, Hortonworks, etc.

How to run MapReduce Program on Hadoop?

Postby alpeshviranik » Mon Jul 21, 2014 7:22 pm

Can anybody tell me the different ways to run a MapReduce Java program on Hadoop? Which is the best way to run a MapReduce program?



Re: How to run MapReduce Program on Hadoop?

Postby alpeshviranik » Wed Jul 23, 2014 4:00 am

These are the steps to run a MapReduce Java program on Hadoop from the command line (a minimal sketch of the WordCount class itself follows the steps):
1) COMPILE (${HADOOP_CLASSES} should point to the Hadoop jars, e.g. the output of hadoop classpath)
javac -classpath ${HADOOP_CLASSES} -d WordCount/ WordCount.java

2) CREATE JAR
jar -cvf WordCount.jar -C WordCount/ .

3) RUN
hadoop jar WordCount.jar org.myorg.WordCount test/ testout
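
For reference, here is a minimal sketch of the WordCount class assumed by the commands above (package org.myorg, using the org.apache.hadoop.mapreduce API; it follows the standard WordCount example, so treat the details as illustrative):

package org.myorg;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input line
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Driver: configures and submits the job; args[0] is input, args[1] is output
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}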


Running the wordcount example with -libjars and -files:
hadoop jar hadoop-examples.jar wordcount -files cachefile.txt -libjars mylib.jar input output
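
Note that -files and -libjars are generic Hadoop options handled by GenericOptionsParser, which the stock examples driver invokes through ToolRunner. If you want them to work with your own jar, one way (a sketch only; the class name WordCountTool is just illustrative, and it reuses the mapper and reducer from the WordCount sketch above) is to implement the Tool interface:

package org.myorg;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountTool extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // By the time run() is called, -files/-libjars/-D have already been
    // applied to getConf(); args holds only the remaining arguments.
    Job job = Job.getInstance(getConf(), "word count");
    job.setJarByClass(WordCountTool.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setCombinerClass(WordCount.IntSumReducer.class);
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner runs GenericOptionsParser before calling run()
    System.exit(ToolRunner.run(new Configuration(), new WordCountTool(), args));
  }
}

It would then be launched the same way, e.g.:
hadoop jar WordCount.jar org.myorg.WordCountTool -files cachefile.txt -libjars mylib.jar input output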

You can also run it from Eclipse; see the link below:
http://www.mapr.com/blog/basic-notes-on ... 88zYxZfFjA
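
As a quick sketch of that idea: if the Hadoop jars are on the project classpath and no cluster *-site.xml files are present, Hadoop falls back to the local job runner and the local filesystem, so the driver can be launched straight from the IDE (the launcher class and the input/output paths here are just placeholders):

// Hypothetical launcher for running the WordCount sketch locally from Eclipse.
public class LocalWordCountRun {
  public static void main(String[] args) throws Exception {
    // Runs in local mode: reads from ./test and writes to ./testout
    org.myorg.WordCount.main(new String[] {"test", "testout"});
  }
}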

