Hadoop has slave and master daemons:
e.g. the DataNode and TaskTracker are slave daemons, while the NameNode, Secondary NameNode, and JobTracker are masters.
You can also look at it this way: a Hadoop cluster may have many slaves (many DataNodes and TaskTrackers), but only one of each master daemon — a single NameNode, Secondary NameNode, and JobTracker.
Hadoop keeps the master and slave daemon host information in conf/masters and conf/slaves.
The slaves file lists all the compute-node hostnames (that is, the nodes you want to run both a DataNode and a TaskTracker on). The masters file, despite its name, lists only the host that runs the Secondary NameNode; the NameNode itself starts on whichever node you invoke start-dfs.sh from.
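As a concrete sketch, both files are just plain lists of hostnames, one per line. The hostnames below are made up for illustration:

```shell
# Create example conf/slaves and conf/masters files.
# All hostnames here are hypothetical.
mkdir -p conf

# conf/slaves — each listed host runs a DataNode and a TaskTracker
cat > conf/slaves <<'EOF'
slave1.example.com
slave2.example.com
slave3.example.com
EOF

# conf/masters — the host that runs the Secondary NameNode
cat > conf/masters <<'EOF'
master2.example.com
EOF
```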
The slaves and masters files in the conf folder are only used by the start-mapred.sh, start-dfs.sh, and start-all.sh scripts in the bin folder. These are convenience scripts: you run them on a single node, and they ssh into each master/slave node to start the desired Hadoop daemons. They are also meant to be launched from the appropriate 'master' node:
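For example, assuming a Hadoop 1.x layout and passwordless ssh from the master to every listed host, starting the cluster looks like this (run the HDFS script on the NameNode host and the MapReduce script on the JobTracker host):

```shell
# Run on the NameNode host: starts the NameNode locally, the
# SecondaryNameNode on the conf/masters host, and a DataNode on
# each host listed in conf/slaves (via ssh).
bin/start-dfs.sh

# Run on the JobTracker host: starts the JobTracker locally and a
# TaskTracker on each host listed in conf/slaves.
bin/start-mapred.sh

# Or, if HDFS and MapReduce masters are on the same node,
# start everything in one go:
bin/start-all.sh
```

Because the scripts only ssh out to the hosts named in those two files, editing conf/slaves and conf/masters is all it takes to grow or shrink the set of nodes the scripts manage.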