Use of the JobConf class in Hadoop MapReduce

This is for the Hadoop ecosystem — HDFS, MapReduce, Hive, HBase, Pig, Sqoop, Sqoop2, Avro, Solr, HCatalog, Impala, Oozie, ZooKeeper — and Hadoop distributions such as Cloudera, Hortonworks, etc.
mohit123
Posts: 162
Joined: Sat Sep 20, 2014 11:29 pm
Contact:

Use of the JobConf class in Hadoop MapReduce

Postby mohit123 » Tue Sep 23, 2014 1:55 am

What is the use of the JobConf class in Hadoop MapReduce? Why do we need the JobConf class in Hadoop MapReduce?


Guest

Re: Use of the JobConf class in Hadoop MapReduce

Postby Guest » Tue Sep 23, 2014 5:27 pm

MapReduce needs to logically separate the different jobs running on the same cluster.
The JobConf class holds job-level settings such as the job name, the input/output paths, the job JAR, and the key/value classes. It is recommended that the job name be descriptive and represent the type of job being executed.
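As a quick sketch of those job-level settings (this uses the old `org.apache.hadoop.mapred` API; the driver class name and the command-line paths are hypothetical, not from this thread):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
    public static void main(String[] args) {
        // JobConf carries all job-level settings in the old mapred API.
        JobConf conf = new JobConf(WordCountDriver.class); // locates the job JAR from this class
        conf.setJobName("daily-log-word-count");           // descriptive job name
        conf.setOutputKeyClass(Text.class);                // output key type
        conf.setOutputValueClass(IntWritable.class);       // output value type
        FileInputFormat.setInputPaths(conf, new Path(args[0]));  // input path(s)
        FileOutputFormat.setOutputPath(conf, new Path(args[1])); // output path
    }
}
```

Note that a descriptive job name, as the answer above recommends, is what shows up in the JobTracker UI when many jobs share the cluster.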

Guest

Re: Use of the JobConf class in Hadoop MapReduce

Postby Guest » Tue Sep 23, 2014 5:28 pm

public class JobConf extends Configuration
A map/reduce job configuration.

JobConf is the primary interface for a user to describe a map/reduce job to the Hadoop framework for execution. The framework tries to faithfully execute the job as described by the JobConf; however:

Some configuration parameters may have been marked final by administrators and hence cannot be altered.
While some job parameters are straightforward to set (e.g. setNumReduceTasks(int)), other parameters interact subtly with the rest of the framework and/or the job configuration and are relatively more complex for the user to control finely (e.g. setNumMapTasks(int)).

JobConf typically specifies the Mapper, combiner (if any), Partitioner, Reducer, InputFormat, and OutputFormat implementations to be used, among other things.
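To illustrate how those pieces fit together, here is a hedged sketch of a complete driver in the classic word-count shape, again on the old `org.apache.hadoop.mapred` API (all class names are assumptions for illustration, not from this thread):

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {
    // Mapper: emits (word, 1) for every token in a line of input.
    public static class TokenizerMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                out.collect(word, ONE);
            }
        }
    }

    // Reducer (reused as the combiner): sums the counts for each word.
    public static class SumReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) sum += values.next().get();
            out.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setMapperClass(TokenizerMapper.class);
        conf.setCombinerClass(SumReducer.class);       // combiner (if any)
        conf.setReducerClass(SumReducer.class);
        conf.setInputFormat(TextInputFormat.class);    // InputFormat implementation
        conf.setOutputFormat(TextOutputFormat.class);  // OutputFormat implementation
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);                        // submit and wait for completion
    }
}
```

The default HashPartitioner is used here; `conf.setPartitionerClass(...)` would override it if a custom key distribution were needed.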

see:
https://hadoop.apache.org/docs/r1.2.1/a ... bConf.html


