@InterfaceAudience.Private public abstract class CommonFSUtils extends Object
Modifier and Type | Class and Description |
---|---|
static class | CommonFSUtils.StreamLacksCapabilityException: Helper exception for those cases where the place where we need to check a stream capability is not where we have the needed context to explain the impact and mitigation for a lack. |
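The following is an illustrative sketch (not part of the original Javadoc) of how a caller might pair hasCapability(stream, capability) with this exception. The "hsync" capability name follows Hadoop's StreamCapabilities documentation; the single-String constructor on the nested exception is an assumption here.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.apache.hadoop.hbase.util.CommonFSUtils.StreamLacksCapabilityException;

public class DurableStreamCheck {
  // Open a stream and verify it can hsync before handing it to code that needs durability.
  static FSDataOutputStream openDurableStream(Configuration conf, Path file)
      throws IOException, StreamLacksCapabilityException {
    FileSystem fs = file.getFileSystem(conf);
    FSDataOutputStream out = fs.create(file, true);
    if (!CommonFSUtils.hasCapability(out, "hsync")) {
      out.close();
      // Assumed (String message) constructor; the caller explains impact and mitigation.
      throw new StreamLacksCapabilityException("hsync");
    }
    return out;
  }
}
```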
Modifier and Type | Field and Description |
---|---|
static String | FULL_RWX_PERMISSIONS: Full access permissions (starting point for a umask) |
static String | HBASE_WAL_DIR: Parameter name for HBase WAL directory |
static String | UNSAFE_STREAM_CAPABILITY_ENFORCE: Parameter to disable stream capability enforcement checks |
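A minimal configuration sketch (illustrative, not from the original page) showing how these keys might be set through the class constants; the WAL URI below is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class FsConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Point the WAL at a separate filesystem (URI is a placeholder).
    conf.set(CommonFSUtils.HBASE_WAL_DIR, "hdfs://nn.example.org:8020/hbasewal");

    // Relax stream capability enforcement, e.g. for local-filesystem test setups.
    conf.setBoolean(CommonFSUtils.UNSAFE_STREAM_CAPABILITY_ENFORCE, false);

    System.out.println(conf.get(CommonFSUtils.HBASE_WAL_DIR));
  }
}
```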
Modifier | Constructor and Description |
---|---|
protected | CommonFSUtils() |
Modifier and Type | Method and Description |
---|---|
static void | checkShortCircuitReadBufferSize(org.apache.hadoop.conf.Configuration conf): Check if short circuit read buffer size is set and if not, set it to hbase value. |
static org.apache.hadoop.fs.FSDataOutputStream | create(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission perm, boolean overwrite): Create the specified file on the filesystem. |
static boolean | delete(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, boolean recursive): Calls fs.delete() and returns the value returned by fs.delete(). |
static boolean | deleteDirectory(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir): Delete the directory if it exists. |
static org.apache.hadoop.fs.FileSystem | getCurrentFileSystem(org.apache.hadoop.conf.Configuration conf) |
static long | getDefaultBlockSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path): Return the number of bytes that large input files should optimally be split into to minimize i/o time. |
static int | getDefaultBufferSize(org.apache.hadoop.fs.FileSystem fs): Returns the default buffer size to use during writes. |
static short | getDefaultReplication(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) |
static org.apache.hadoop.fs.permission.FsPermission | getFilePermissions(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, String permssionConfKey): Get the file permissions specified in the configuration, if they are enabled. |
static org.apache.hadoop.fs.Path | getNamespaceDir(org.apache.hadoop.fs.Path rootdir, String namespace): Returns the Path object representing the namespace directory under path rootdir. |
static String | getPath(org.apache.hadoop.fs.Path p): Return the 'path' component of a Path. |
static org.apache.hadoop.fs.Path | getRootDir(org.apache.hadoop.conf.Configuration c) |
static org.apache.hadoop.fs.FileSystem | getRootDirFileSystem(org.apache.hadoop.conf.Configuration c) |
static org.apache.hadoop.fs.Path | getTableDir(org.apache.hadoop.fs.Path rootdir, TableName tableName): Returns the Path object representing the table directory under path rootdir. |
static TableName | getTableName(org.apache.hadoop.fs.Path tablePath): Returns the TableName object representing the table directory under path rootdir. |
static org.apache.hadoop.fs.FileSystem | getWALFileSystem(org.apache.hadoop.conf.Configuration c) |
static org.apache.hadoop.fs.Path | getWALRootDir(org.apache.hadoop.conf.Configuration c) |
static boolean | hasCapability(org.apache.hadoop.fs.FSDataOutputStream stream, String capability): If our FileSystem version includes the StreamCapabilities class, check if the given stream has a particular capability. |
static boolean | isExists(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path): Calls fs.exists(). |
static boolean | isHDFS(org.apache.hadoop.conf.Configuration conf) |
static boolean | isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, org.apache.hadoop.fs.Path pathTail): Compares the path component of the Path URI; e.g. for hdfs://a/b/c and /a/b/c, it compares the '/a/b/c' part. |
static boolean | isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, String pathTail): Compares the path component of the Path URI; e.g. for hdfs://a/b/c and /a/b/c, it compares the '/a/b/c' part. |
static boolean | isRecoveredEdits(org.apache.hadoop.fs.Path path): Checks if the given path is the one with the 'recovered.edits' dir. |
static boolean | isStartingWithPath(org.apache.hadoop.fs.Path rootPath, String path): Compares the path component. |
static List<org.apache.hadoop.fs.LocatedFileStatus> | listLocatedStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir): Calls fs.listFiles() to get FileStatus and BlockLocations together, reducing RPC calls. |
static org.apache.hadoop.fs.FileStatus[] | listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir): Calls fs.listStatus() and treats FileNotFoundException as non-fatal; this accommodates differences between hadoop versions. |
static org.apache.hadoop.fs.FileStatus[] | listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.PathFilter filter): Calls fs.listStatus() and treats FileNotFoundException as non-fatal; this accommodates differences between hadoop versions, where hadoop 1 does not throw a FileNotFoundException and returns an empty FileStatus[], while Hadoop 2 throws a FileNotFoundException. |
static void | logFileSystemState(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path root, org.slf4j.Logger LOG): Log the current state of the filesystem from a certain root directory. |
static String | removeWALRootPath(org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf): Checks for the presence of the WAL log root path (using the provided conf object) in the given path. |
static boolean | renameAndSetModifyTime(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dest) |
static void | setFsDefault(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root) |
static void | setRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root) |
static void | setStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy): Sets storage policy for given path. |
static void | setupShortCircuitRead(org.apache.hadoop.conf.Configuration conf): Do our short circuit read setup. |
static void | setWALRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root) |
static org.apache.hadoop.fs.Path | validateRootPath(org.apache.hadoop.fs.Path root): Verifies the root directory path is a valid URI with a scheme. |
public static final String HBASE_WAL_DIR
public static final String UNSAFE_STREAM_CAPABILITY_ENFORCE
public static boolean isStartingWithPath(org.apache.hadoop.fs.Path rootPath, String path)
Compares the path component: if path starts with rootPath, then the function returns true.
Parameters:
rootPath - value to check for
path - subject to check
Returns:
true if path starts with rootPath

public static boolean isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, String pathTail)
Compares the path component of the Path URI; e.g. for hdfs://a/b/c and /a/b/c, it compares the '/a/b/c' part.
Parameters:
pathToSearch - Path we will be trying to match against.
pathTail - what to match
Returns:
true if pathTail is tail on the path of pathToSearch

public static boolean isMatchingTail(org.apache.hadoop.fs.Path pathToSearch, org.apache.hadoop.fs.Path pathTail)
Compares the path component of the Path URI; e.g. for hdfs://a/b/c and /a/b/c, it compares the '/a/b/c' part.
Parameters:
pathToSearch - Path we will be trying to match against.
pathTail - what to match
Returns:
true if pathTail is tail on the path of pathToSearch
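An illustrative sketch (not part of the original Javadoc) contrasting the two matching helpers above; the paths are made up.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class PathMatchExample {
  public static void main(String[] args) {
    Path qualified = new Path("hdfs://nn.example.org:8020/hbase/data/default/t1");

    // Tail match ignores scheme/authority and compares the trailing components.
    boolean tail = CommonFSUtils.isMatchingTail(qualified, "data/default/t1");

    // Prefix match: does the subject path start with the given root path?
    boolean prefix = CommonFSUtils.isStartingWithPath(
        new Path("hdfs://nn.example.org:8020/hbase"), qualified.toString());

    System.out.println("tail=" + tail + ", prefix=" + prefix);
  }
}
```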
public static boolean deleteDirectory(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Delete the directory if it exists.
Parameters:
fs - filesystem object
dir - directory to delete
Returns:
true if the dir was deleted
Throws:
IOException - e

public static long getDefaultBlockSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) throws IOException
Return the number of bytes that large input files should optimally be split into to minimize i/o time.
Parameters:
fs - filesystem object
Throws:
IOException - e

public static short getDefaultReplication(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) throws IOException
Throws:
IOException
public static int getDefaultBufferSize(org.apache.hadoop.fs.FileSystem fs)
Returns the default buffer size to use during writes.
Parameters:
fs - filesystem object

public static org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission perm, boolean overwrite) throws IOException
Create the specified file on the filesystem.
Parameters:
fs - FileSystem on which to write the file
path - Path to the file to write
perm - initial permissions
overwrite - Whether or not the created file should be overwritten.
Throws:
IOException - if the file cannot be created

public static org.apache.hadoop.fs.permission.FsPermission getFilePermissions(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf, String permssionConfKey)
Get the file permissions specified in the configuration, if they are enabled.
Parameters:
fs - filesystem that the file will be created on.
conf - configuration to read for determining if permissions are enabled and which to use
permssionConfKey - property key in the configuration to use when finding the permission

public static org.apache.hadoop.fs.Path validateRootPath(org.apache.hadoop.fs.Path root) throws IOException
Verifies the root directory path is a valid URI with a scheme.
Parameters:
root - root directory path
Returns:
the root argument.
Throws:
IOException - if not a valid URI with a scheme
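A hedged sketch (not from the original Javadoc) combining getFilePermissions and create; the permission configuration key and the target file are hypothetical.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class CreateWithPermsExample {
  static void writeMarker(Configuration conf, Path file) throws IOException {
    FileSystem fs = file.getFileSystem(conf);

    // Resolve permissions from configuration; the key below is a placeholder,
    // not a key defined by CommonFSUtils itself.
    FsPermission perm =
        CommonFSUtils.getFilePermissions(fs, conf, "example.marker.file.perms");

    // Create (overwriting if present) and write a small payload.
    try (FSDataOutputStream out = CommonFSUtils.create(fs, file, perm, true)) {
      out.writeUTF("marker");
    }
  }
}
```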
public static String removeWALRootPath(org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) throws IOException
Checks for the presence of the WAL log root path (using the provided conf object) in the given path.
Parameters:
path - must not be null
conf - must not be null
Throws:
IOException - from underlying filesystem

public static String getPath(org.apache.hadoop.fs.Path p)
Return the 'path' component of a Path; e.g. if the Path is hdfs://example.org:9000/hbase_trunk/TestTable/compaction.dir, this method returns /hbase_trunk/TestTable/compaction.dir. This method is useful if you want to print out a Path without qualifying the Filesystem instance.
Parameters:
p - Filesystem Path whose 'path' component we are to return.

public static org.apache.hadoop.fs.Path getRootDir(org.apache.hadoop.conf.Configuration c) throws IOException
Parameters:
c - configuration
Returns:
Path to hbase root directory from configuration as a qualified Path.
Throws:
IOException - e

public static void setRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root) throws IOException
Throws:
IOException

public static void setFsDefault(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root) throws IOException
Throws:
IOException

public static org.apache.hadoop.fs.FileSystem getRootDirFileSystem(org.apache.hadoop.conf.Configuration c) throws IOException
Throws:
IOException
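An illustrative sketch (not part of the original Javadoc) of the root-directory helpers; it assumes HBaseConfiguration.create() picks up a valid hbase.rootdir.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class RootDirExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();

    // Qualified hbase root dir as configured, plus the FileSystem that backs it.
    Path rootDir = CommonFSUtils.getRootDir(conf);
    FileSystem rootFs = CommonFSUtils.getRootDirFileSystem(conf);

    // getPath() strips the scheme/authority, handy for log messages.
    System.out.println("root dir: " + CommonFSUtils.getPath(rootDir));
    System.out.println("root fs : " + rootFs.getUri());

    // validateRootPath() throws IOException if the path has no scheme.
    CommonFSUtils.validateRootPath(rootDir);
  }
}
```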
public static org.apache.hadoop.fs.Path getWALRootDir(org.apache.hadoop.conf.Configuration c) throws IOException
Parameters:
c - configuration
Returns:
Path to hbase log root directory: e.g. "hbase.wal.dir" from configuration as a qualified Path. Defaults to HBase root dir.
Throws:
IOException - e

public static void setWALRootDir(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path root) throws IOException
Throws:
IOException

public static org.apache.hadoop.fs.FileSystem getWALFileSystem(org.apache.hadoop.conf.Configuration c) throws IOException
Throws:
IOException
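A hedged sketch (not from the original Javadoc) of the WAL-directory helpers; the child WAL path below is made up.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class WalDirExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();

    // WAL root defaults to the HBase root dir unless hbase.wal.dir is set.
    Path walRoot = CommonFSUtils.getWALRootDir(conf);
    FileSystem walFs = CommonFSUtils.getWALFileSystem(conf);

    // Strip the WAL root prefix to get a relative path (a made-up child path here).
    Path someWal = new Path(walRoot, "WALs/rs1.example.org/wal.0000001");
    String relative = CommonFSUtils.removeWALRootPath(someWal, conf);

    System.out.println("wal fs=" + walFs.getUri() + " relative=" + relative);
  }
}
```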
public static org.apache.hadoop.fs.Path getTableDir(org.apache.hadoop.fs.Path rootdir, TableName tableName)
Returns the Path object representing the table directory under path rootdir.
Parameters:
rootdir - qualified path of HBase root directory
tableName - name of table
Returns:
Path for table

public static TableName getTableName(org.apache.hadoop.fs.Path tablePath)
Returns the TableName object representing the table directory under path rootdir.
Parameters:
tablePath - path of table
Returns:
TableName for the table

public static org.apache.hadoop.fs.Path getNamespaceDir(org.apache.hadoop.fs.Path rootdir, String namespace)
Returns the Path object representing the namespace directory under path rootdir.
Parameters:
rootdir - qualified path of HBase root directory
namespace - namespace name
Returns:
Path for the namespace directory
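An illustrative sketch (not part of the original Javadoc) of the layout helpers above; the "default:t1" table name is made up.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class TableDirExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path rootDir = CommonFSUtils.getRootDir(conf);

    // Layout helpers: namespace dir and table dir under the root dir.
    Path nsDir = CommonFSUtils.getNamespaceDir(rootDir, "default");
    Path tableDir = CommonFSUtils.getTableDir(rootDir, TableName.valueOf("default", "t1"));

    // And back again: recover the TableName from a table directory path.
    TableName roundTrip = CommonFSUtils.getTableName(tableDir);

    System.out.println(nsDir + " | " + tableDir + " | " + roundTrip);
  }
}
```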
public static void setStoragePolicy(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, String storagePolicy)
Sets the storage policy for the given path.
Parameters:
fs - We only do anything if it implements a setStoragePolicy method
path - the Path whose storage policy is to be set
storagePolicy - Policy to set on path; see the hadoop 2.6+ org.apache.hadoop.hdfs.protocol.HdfsConstants for the possible list, e.g. 'COLD', 'WARM', 'HOT', 'ONE_SSD', 'ALL_SSD', 'LAZY_PERSIST'.

public static boolean isHDFS(org.apache.hadoop.conf.Configuration conf) throws IOException
Parameters:
conf - must not be null
Throws:
IOException - from underlying FileSystem

public static boolean isRecoveredEdits(org.apache.hadoop.fs.Path path)
Checks if the given path is the one with the 'recovered.edits' dir.
Parameters:
path - must not be null

public static org.apache.hadoop.fs.FileSystem getCurrentFileSystem(org.apache.hadoop.conf.Configuration conf) throws IOException
Parameters:
conf - must not be null
Throws:
IOException - from underlying FileSystem

public static org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.PathFilter filter) throws IOException
Calls fs.listStatus() and treats FileNotFoundException as non-fatal. This accommodates differences between hadoop versions, where hadoop 1 does not throw a FileNotFoundException and returns an empty FileStatus[], while Hadoop 2 will throw a FileNotFoundException.
Parameters:
fs - file system
dir - directory
filter - path filter
Throws:
IOException
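A hedged sketch (not from the original Javadoc) of setStoragePolicy guarded by isHDFS; the column-family directory path is made up, and 'ONE_SSD' is one of the policy names listed above.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class StoragePolicyExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = CommonFSUtils.getCurrentFileSystem(conf);

    // Only meaningful on filesystems that expose setStoragePolicy (e.g. HDFS).
    if (CommonFSUtils.isHDFS(conf)) {
      // Hypothetical column-family directory under the root dir.
      Path cfDir = new Path(CommonFSUtils.getRootDir(conf), "data/default/t1/region/cf");
      CommonFSUtils.setStoragePolicy(fs, cfDir, "ONE_SSD");
    }
  }
}
```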
public static org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Calls fs.listStatus() and treats FileNotFoundException as non-fatal. This accommodates differences between hadoop versions.
Parameters:
fs - file system
dir - directory
Throws:
IOException

public static List<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Calls fs.listFiles() to get FileStatus and BlockLocations together, reducing RPC calls.
Parameters:
fs - file system
dir - directory
Throws:
IOException
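An illustrative sketch (not from the original Javadoc) of the listing helpers with a PathFilter; the null checks are a defensive assumption about how absent or empty directories are reported, and the region directory is hypothetical.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.hbase.util.CommonFSUtils;

public class ListingExample {
  static void listRecoveredEdits(FileSystem fs, Path regionDir) throws IOException {
    // Filter to entries whose path is the 'recovered.edits' dir.
    PathFilter filter = p -> CommonFSUtils.isRecoveredEdits(p);

    // Guard against null: the helper swallows FileNotFoundException, so an
    // absent directory may not yield an array to iterate (assumption).
    FileStatus[] statuses = CommonFSUtils.listStatus(fs, regionDir, filter);
    if (statuses != null) {
      for (FileStatus status : statuses) {
        System.out.println(status.getPath());
      }
    }

    // When block locations are also needed, a single call avoids extra RPCs.
    List<LocatedFileStatus> located = CommonFSUtils.listLocatedStatus(fs, regionDir);
    System.out.println("located entries: " + (located == null ? 0 : located.size()));
  }
}
```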
public static boolean delete(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, boolean recursive) throws IOException
Calls fs.delete() and returns the value returned by fs.delete().
Parameters:
fs - must not be null
path - must not be null
recursive - delete tree rooted at path
Throws:
IOException - from underlying FileSystem

public static boolean isExists(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) throws IOException
Calls fs.exists().
Parameters:
fs - must not be null
path - must not be null
Throws:
IOException - from underlying FileSystem

public static void logFileSystemState(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path root, org.slf4j.Logger LOG) throws IOException
Log the current state of the filesystem from a certain root directory.
Parameters:
fs - filesystem to investigate
root - root file/directory to start logging from
LOG - log to output information
Throws:
IOException - if an unexpected exception occurs

public static boolean renameAndSetModifyTime(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dest) throws IOException
Throws:
IOException
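An illustrative sketch (not part of the original Javadoc) combining logFileSystemState, isExists, and delete; the temporary directory is hypothetical.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CleanupExample {
  private static final Logger LOG = LoggerFactory.getLogger(CleanupExample.class);

  static void removeTmpDir(FileSystem fs, Path tmpDir) throws IOException {
    // Dump the tree rooted at tmpDir to the log before touching it.
    CommonFSUtils.logFileSystemState(fs, tmpDir, LOG);

    // Recursive delete, but only if the path is actually there.
    if (CommonFSUtils.isExists(fs, tmpDir) && !CommonFSUtils.delete(fs, tmpDir, true)) {
      LOG.warn("Failed to delete {}", tmpDir);
    }
  }
}
```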
public static void setupShortCircuitRead(org.apache.hadoop.conf.Configuration conf)
Do our short circuit read setup.
Parameters:
conf - must not be null

public static void checkShortCircuitReadBufferSize(org.apache.hadoop.conf.Configuration conf)
Check if short circuit read buffer size is set and if not, set it to hbase value.
Parameters:
conf - must not be null

public static boolean hasCapability(org.apache.hadoop.fs.FSDataOutputStream stream, String capability)
If our FileSystem version includes the StreamCapabilities class, check if the given stream has a particular capability.
Parameters:
stream - capabilities are per-stream instance, so check this one specifically. Must not be null.
capability - what to look for, per Hadoop Common's FileSystem docs

Copyright © 2007–2018 The Apache Software Foundation. All rights reserved.