slaves and masters in Hadoop cluster

This is for the Hadoop ecosystem: HDFS, MapReduce, Hive, HBase, Pig, Sqoop, Sqoop2, Avro, Solr, HCatalog, Impala, Oozie, ZooKeeper, and Hadoop distributions like Cloudera, Hortonworks, etc.
dharama123
Posts: 125
Joined: Wed Aug 27, 2014 1:10 am
Contact:

slaves and masters in Hadoop cluster

Postby dharama123 » Thu Sep 18, 2014 3:32 am

What are the slaves and masters in Hadoop cluster?


Guest

Re: slaves and masters in Hadoop cluster

Postby Guest » Sat Sep 20, 2014 6:00 pm

Hadoop has slave and master daemons:

The DataNode and TaskTracker are slave daemons, while the NameNode, Secondary NameNode, and JobTracker are master daemons.
You can also think of it this way: a Hadoop cluster may have many slaves (multiple DataNodes and TaskTrackers), but typically only one master node running the NameNode, Secondary NameNode, and JobTracker.

Hadoop keeps the master and slave daemon information in conf/masters and conf/slaves.


The slaves file lists all the compute node hostnames (that is, the nodes on which you want to run both a DataNode and a TaskTracker), while the masters file lists the host(s) on which to run the Secondary NameNode. The NameNode itself runs on whichever node you launch the start scripts from, not on a host named in the masters file.
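As a concrete sketch (the hostnames here are made up for illustration), the two files are just plain lists of hosts, one per line:

```
# conf/masters -- host that runs the Secondary NameNode
snn.example.com

# conf/slaves -- hosts that each run a DataNode and a TaskTracker
dn1.example.com
dn2.example.com
dn3.example.com
```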

The slaves and masters files in the conf folder are used only by the start-mapred.sh, start-dfs.sh, and start-all.sh scripts in the bin folder. These are convenience scripts: you run them on a single node, and they ssh into each master/slave host to start the desired Hadoop daemons. They are meant to be launched from the appropriate 'master' node.
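The fan-out those scripts do can be sketched as a simple loop over conf/slaves. This is a simplified illustration, not the actual script source; the hostnames are invented, and `echo` is used in place of a real `ssh` so the loop can run anywhere:

```shell
#!/bin/sh
# Minimal sketch of how start-dfs.sh fans out over the slaves file.
# A real cluster would run: ssh "$host" hadoop-daemon.sh start datanode

# Stand-in slaves file with made-up hostnames:
printf 'dn1.example.com\ndn2.example.com\n' > /tmp/slaves

# One ssh invocation per slave host (echoed here instead of executed):
while read -r host; do
  echo ssh "$host" hadoop-daemon.sh start datanode
done < /tmp/slaves
```

The masters file is handled the same way, just with a different daemon name (secondarynamenode) and a different host list.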

