fsck command in Hadoop

This is for the Hadoop ecosystem like HDFS, MapReduce, Hive, HBase, Pig, Sqoop, Sqoop2, Avro, Solr, HCatalog, Impala, Oozie, ZooKeeper and Hadoop distributions like Cloudera, Hortonworks etc.
alpeshviranik
Posts: 81
Joined: Thu Jul 17, 2014 4:58 pm

fsck command in Hadoop

Postby alpeshviranik » Fri Aug 01, 2014 9:11 pm

What does the fsck command do in the Hadoop file system? Does it work the same as the fsck we use on Linux?


Guest

Re: fsck command in Hadoop

Postby Guest » Fri Aug 01, 2014 9:26 pm

HDFS supports the fsck command to check for various inconsistencies. It is designed to report problems with files, for example missing blocks or under-replicated blocks.

Unlike a traditional fsck utility for native file systems, this command does not correct the errors it detects. Normally the NameNode automatically corrects most of the recoverable failures.
So fsck only finds the corrupt blocks, while the Linux fsck will also correct them. Once fsck has identified a file with permanently missing blocks, you can remove that file:
%hadoop fs -rm /path/to/file/with/permanently/missing/blocks
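Before deleting anything, you can use fsck itself to find which files are affected. A rough sketch of such a check (the /user/alpesh directory and the output lines are only illustrative, not real output from a cluster):

%hadoop fsck /user/alpesh -files -blocks
/user/alpesh/data.txt 1048576 bytes, 1 block(s): MISSING 1 blocks of total size 1048576 B
Status: CORRUPT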

By default fsck ignores open files, but it provides an option (-openforwrite) to include them in the report. The HDFS fsck command is not a Hadoop shell command. It can be run as 'bin/hadoop fsck'.
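For example, to check the whole file system starting from the root, or to also report files that are open for write (both flags appear in the usage below):

%hadoop fsck /
%hadoop fsck / -openforwrite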

%hadoop fsck [GENERIC_OPTIONS] <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]

<path>         Start checking from this path.
-move          Move corrupted files to /lost+found.
-delete        Delete corrupted files.
-openforwrite  Print out files opened for write.
-files         Print out files being checked.
-blocks        Print out block report.
-locations     Print out locations for every block.
-racks         Print out network topology for data-node locations.
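As a concrete sketch of combining these flags (the directory name is hypothetical), the following prints each file checked, its block report, and the DataNode locations of every block:

%hadoop fsck /user/alpesh -files -blocks -locations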

See the link below, which I prefer for the Hadoop fsck command:
http://hadoop.apache.org/docs/r1.2.1/co ... .html#fsck

