public abstract class GoogleHadoopFileSystemBase extends org.apache.hadoop.fs.FileSystem implements FileSystemDescriptor
It is implemented as a thin abstraction layer on top of GCS. The layer hides any specific characteristics of the underlying store and exposes the FileSystem interface understood by the Hadoop engine.
Users interact with files in the storage using fully qualified URIs. The file system exposed by this class is identified using the 'gs' scheme; for example, gs://dir1/dir2/file1.txt.
This implementation translates paths between Hadoop Path and GCS URI with the convention that the Hadoop root directly corresponds to the GCS "root", e.g. gs:/. This is convenient for many reasons, such as data portability and close equivalence to gsutil paths, but it imposes certain inherited constraints: files are not allowed in root (only 'directories' can be placed in root), and directory names inside root have a more limited set of allowed characters.
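As a sketch of this root-to-root convention, the mapping can be mimicked with plain java.net.URI handling. The toGcsUri helper below is hypothetical and only illustrates the convention; it is not the connector's actual PathCodec:

```java
import java.net.URI;

/** Illustrative sketch of the gs:// path convention described above. */
public class GsPathConventionDemo {
    /** Hypothetical helper: map an absolute Hadoop-style path onto a GCS URI. */
    static URI toGcsUri(String bucket, String absolutePath) {
        // The Hadoop root "/" corresponds to the GCS "root"; the first
        // component under root is a bucket name, not an ordinary directory.
        return URI.create("gs://" + bucket + absolutePath);
    }

    public static void main(String[] args) {
        URI uri = toGcsUri("dir1", "/dir2/file1.txt");
        System.out.println(uri);                 // gs://dir1/dir2/file1.txt
        System.out.println(uri.getScheme());     // gs
        System.out.println(uri.getAuthority());  // dir1 (the bucket)
        System.out.println(uri.getPath());       // /dir2/file1.txt
    }
}
```

Note how gs://dir1/dir2/file1.txt parses with 'dir1' as the URI authority: in this convention, that top-level "directory" is really a bucket, which is why root entries have a more restricted character set.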
One of the main goals of this implementation is to maintain compatibility with the behavior of the HDFS implementation when accessed through the FileSystem interface. The HDFS implementation is not very consistent about when it throws versus when methods return false. We run GHFS tests and HDFS tests against the same test data and use the results as a guide to decide whether to throw or to return false.
Modifier and Type | Class and Description |
---|---|
static class | GoogleHadoopFileSystemBase.Counter: Defines names of counters we track for each operation. |
static class | GoogleHadoopFileSystemBase.GcsFileChecksumType: Available GCS checksum types for use with GoogleHadoopFileSystemConfiguration.GCS_FILE_CHECKSUM_TYPE. |
static class | GoogleHadoopFileSystemBase.OutputStreamType: Available types for use with GoogleHadoopFileSystemConfiguration.GCS_OUTPUT_STREAM_TYPE. |
static class | GoogleHadoopFileSystemBase.ParentTimestampUpdateIncludePredicate: A predicate that processes individual directory paths and evaluates the conditions set in fs.gs.parent.timestamp.update.enable, fs.gs.parent.timestamp.update.substrings.include and fs.gs.parent.timestamp.update.substrings.exclude to determine if a path should be ignored when running directory timestamp updates. |
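The three fs.gs.parent.timestamp.update.* keys evaluated by that predicate are ordinary Hadoop configuration properties; a minimal core-site.xml sketch (the values shown are illustrative examples, not documented defaults):

```xml
<configuration>
  <!-- Whether directory timestamp updating is enabled (illustrative value). -->
  <property>
    <name>fs.gs.parent.timestamp.update.enable</name>
    <value>true</value>
  </property>
  <!-- Paths containing these substrings are included in timestamp updates. -->
  <property>
    <name>fs.gs.parent.timestamp.update.substrings.include</name>
    <value>/done-dir/</value>
  </property>
  <!-- Paths containing these substrings are excluded from timestamp updates. -->
  <property>
    <name>fs.gs.parent.timestamp.update.substrings.exclude</name>
    <value>/tmp/</value>
  </property>
</configuration>
```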
Modifier and Type | Field and Description |
---|---|
static java.lang.String | AUTHENTICATION_PREFIX: Prefix to use for common authentication keys. |
protected com.google.common.collect.ImmutableMap<GoogleHadoopFileSystemBase.Counter,java.util.concurrent.atomic.AtomicLong> | counters: Map of counter values. |
static org.apache.hadoop.fs.PathFilter | DEFAULT_FILTER: Default PathFilter that accepts all paths. |
protected long | defaultBlockSize: Default block size. |
protected GcsDelegationTokens | delegationTokens: Delegation token support. |
static java.lang.String | GHFS_ID: Identifies this version of the GoogleHadoopFileSystemBase library. |
protected java.net.URI | initUri: The URI the File System is passed in initialize. |
static java.lang.String | PATH_CODEC_USE_LEGACY_ENCODING: Use LEGACY_PATH_CODEC. |
static java.lang.String | PATH_CODEC_USE_URI_ENCODING: Use new URI_ENCODED_PATH_CODEC. |
protected PathCodec | pathCodec |
static java.lang.String | PROPERTIES_FILE: A resource file containing GCS related build properties. |
static short | REPLICATION_FACTOR_DEFAULT: Default value of replication factor. |
static java.lang.String | UNKNOWN_VERSION: The version returned when one cannot be found in properties. |
static java.lang.String | VERSION: Current version. |
static java.lang.String | VERSION_PROPERTY: The key in the PROPERTIES_FILE that contains the version built. |
Constructor and Description |
---|
GoogleHadoopFileSystemBase(): Constructs an instance of GoogleHadoopFileSystemBase; the internal GoogleCloudStorageFileSystem will be set up with config settings when initialize() is called. |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.fs.FSDataOutputStream | append(org.apache.hadoop.fs.Path hadoopPath, int bufferSize, org.apache.hadoop.util.Progressable progress): Appends to an existing file (optional operation). |
protected void | checkPath(org.apache.hadoop.fs.Path path) |
void | close() |
void | completeLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile) |
void | concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs): Concatenates existing files into one file. |
protected abstract void | configureBuckets(GoogleCloudStorageFileSystem gcsFs): Validates and possibly creates buckets needed by the subclass. |
void | copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path[] srcs, org.apache.hadoop.fs.Path dst) |
void | copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) |
void | copyToLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) |
org.apache.hadoop.fs.FSDataOutputStream | create(org.apache.hadoop.fs.Path hadoopPath, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress): Opens the given file for writing. |
protected com.google.common.collect.ImmutableMap<GoogleHadoopFileSystemBase.Counter,java.util.concurrent.atomic.AtomicLong> | createCounterMap() |
org.apache.hadoop.fs.FSDataOutputStream | createNonRecursive(org.apache.hadoop.fs.Path hadoopPath, org.apache.hadoop.fs.permission.FsPermission permission, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) |
boolean | delete(org.apache.hadoop.fs.Path hadoopPath, boolean recursive): Deletes the given file or directory. |
boolean | deleteOnExit(org.apache.hadoop.fs.Path f) |
java.lang.String | getCanonicalServiceName() |
org.apache.hadoop.fs.ContentSummary | getContentSummary(org.apache.hadoop.fs.Path f) |
long | getDefaultBlockSize() |
protected int | getDefaultPort(): The default port is listed as -1 as an indication that ports are not used. |
short | getDefaultReplication(): Gets the default replication factor. |
abstract org.apache.hadoop.fs.Path | getDefaultWorkingDirectory(): Gets the default value of the working directory. |
org.apache.hadoop.security.token.Token<?> | getDelegationToken(java.lang.String renewer) |
org.apache.hadoop.fs.FileChecksum | getFileChecksum(org.apache.hadoop.fs.Path hadoopPath) |
org.apache.hadoop.fs.FileStatus | getFileStatus(org.apache.hadoop.fs.Path hadoopPath): Gets status of the given path item. |
abstract org.apache.hadoop.fs.Path | getFileSystemRoot(): Returns the Hadoop path representing the root of the FileSystem associated with this FileSystemDescriptor. |
GoogleCloudStorageFileSystem | getGcsFs(): Gets the GCS FS instance. |
abstract java.net.URI | getGcsPath(org.apache.hadoop.fs.Path hadoopPath): Gets the GCS path corresponding to the given Hadoop path, which can be relative or absolute, and can have either gs:// |
abstract org.apache.hadoop.fs.Path | getHadoopPath(java.net.URI gcsPath): Gets the Hadoop path corresponding to the given GCS path. |
org.apache.hadoop.fs.Path | getHomeDirectory(): Returns the home directory of the current user. |
protected abstract java.lang.String | getHomeDirectorySubpath(): Returns an unqualified path without any leading slash, relative to the filesystem root, which serves as the home directory of the current user; see getHomeDirectory for a description of what the home directory means. |
abstract java.lang.String | getScheme(): Returns the URI scheme for the Hadoop FileSystem associated with this FileSystemDescriptor. |
java.net.URI | getUri(): Returns a URI of the root of this FileSystem. |
long | getUsed() |
org.apache.hadoop.fs.Path | getWorkingDirectory(): Gets the current working directory. |
byte[] | getXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) |
java.util.Map<java.lang.String,byte[]> | getXAttrs(org.apache.hadoop.fs.Path path) |
java.util.Map<java.lang.String,byte[]> | getXAttrs(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> names) |
org.apache.hadoop.fs.FileStatus[] | globStatus(org.apache.hadoop.fs.Path pathPattern): Returns an array of FileStatus objects whose path names match pathPattern. |
org.apache.hadoop.fs.FileStatus[] | globStatus(org.apache.hadoop.fs.Path pathPattern, org.apache.hadoop.fs.PathFilter filter): Returns an array of FileStatus objects whose path names match pathPattern and are accepted by the user-supplied path filter. |
void | initialize(java.net.URI path, org.apache.hadoop.conf.Configuration config): See initialize(URI, Configuration, boolean) for details; calls with the third arg defaulting to 'true' for initializing the superclass. |
void | initialize(java.net.URI path, org.apache.hadoop.conf.Configuration config, boolean initSuperclass): Initializes this file system instance. |
org.apache.hadoop.fs.FileStatus[] | listStatus(org.apache.hadoop.fs.Path hadoopPath): Lists file status. |
java.util.List<java.lang.String> | listXAttrs(org.apache.hadoop.fs.Path path) |
org.apache.hadoop.fs.Path | makeQualified(org.apache.hadoop.fs.Path path): Overridden to make root its own parent. |
boolean | mkdirs(org.apache.hadoop.fs.Path hadoopPath, org.apache.hadoop.fs.permission.FsPermission permission): Makes the given path and all non-existent parent directories. |
org.apache.hadoop.fs.FSDataInputStream | open(org.apache.hadoop.fs.Path hadoopPath, int bufferSize): Opens the given file for reading. |
protected void | processDeleteOnExit() |
void | removeXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) |
boolean | rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst): Renames src to dst. |
void | setOwner(org.apache.hadoop.fs.Path p, java.lang.String username, java.lang.String groupname) |
void | setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) |
void | setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime) |
void | setVerifyChecksum(boolean verifyChecksum) |
void | setWorkingDirectory(org.apache.hadoop.fs.Path hadoopPath): Sets the current working directory to the given path. |
void | setXAttr(org.apache.hadoop.fs.Path path, java.lang.String name, byte[] value, java.util.EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flags) |
org.apache.hadoop.fs.Path | startLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile) |
Methods inherited from class org.apache.hadoop.fs.FileSystem: access, addDelegationTokens, append, append, appendFile, areSymlinksEnabled, cancelDeleteOnExit, canonicalizeUri, clearStatistics, closeAll, closeAllForUGI, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, create, create, create, create, create, create, create, create, createFile, createNewFile, createNonRecursive, createNonRecursive, createSnapshot, createSnapshot, createSymlink, delete, deleteSnapshot, enableSymlinks, exists, fixRelativePart, get, get, get, getAclStatus, getAllStatistics, getAllStoragePolicies, getBlockSize, getCanonicalUri, getChildFileSystems, getDefaultBlockSize, getDefaultReplication, getDefaultUri, getFileBlockLocations, getFileBlockLocations, getFileChecksum, getFileLinkStatus, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getInitialWorkingDirectory, getLength, getLinkTarget, getLocal, getName, getNamed, getQuotaUsage, getReplication, getServerDefaults, getServerDefaults, getStatistics, getStatistics, getStatus, getStatus, getStoragePolicy, getStorageStatistics, getTrashRoot, getTrashRoots, getUsed, isDirectory, isFile, listCorruptFileBlocks, listFiles, listLocatedStatus, listLocatedStatus, listStatus, listStatus, listStatus, listStatusBatch, listStatusIterator, mkdirs, mkdirs, modifyAclEntries, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, newInstance, newInstance, newInstance, newInstanceLocal, open, primitiveCreate, primitiveMkdir, primitiveMkdir, printStatistics, removeAcl, removeAclEntries, removeDefaultAcl, rename, renameSnapshot, resolveLink, resolvePath, setAcl, setDefaultUri, setDefaultUri, setReplication, setStoragePolicy, setWriteChecksum, setXAttr, supportsSymlinks, truncate, unsetStoragePolicy
public static final java.lang.String PATH_CODEC_USE_URI_ENCODING
public static final java.lang.String PATH_CODEC_USE_LEGACY_ENCODING
public static final short REPLICATION_FACTOR_DEFAULT
public static final org.apache.hadoop.fs.PathFilter DEFAULT_FILTER
public static final java.lang.String AUTHENTICATION_PREFIX
public static final java.lang.String PROPERTIES_FILE
public static final java.lang.String VERSION_PROPERTY
public static final java.lang.String UNKNOWN_VERSION
public static final java.lang.String VERSION
public static final java.lang.String GHFS_ID
protected java.net.URI initUri
protected GcsDelegationTokens delegationTokens
protected PathCodec pathCodec
protected long defaultBlockSize
protected final com.google.common.collect.ImmutableMap<GoogleHadoopFileSystemBase.Counter,java.util.concurrent.atomic.AtomicLong> counters
public GoogleHadoopFileSystemBase()
Constructs an instance of GoogleHadoopFileSystemBase; the internal GoogleCloudStorageFileSystem will be set up with config settings when initialize() is called.

protected com.google.common.collect.ImmutableMap<GoogleHadoopFileSystemBase.Counter,java.util.concurrent.atomic.AtomicLong> createCounterMap()
protected abstract java.lang.String getHomeDirectorySubpath()
Returns an unqualified path without any leading slash, relative to the filesystem root, which serves as the home directory of the current user; see getHomeDirectory for a description of what the home directory means.

public abstract org.apache.hadoop.fs.Path getHadoopPath(java.net.URI gcsPath)
Parameters:
gcsPath - Fully-qualified GCS path, of the form gs://

public abstract java.net.URI getGcsPath(org.apache.hadoop.fs.Path hadoopPath)
Parameters:
hadoopPath - Hadoop path.

public abstract org.apache.hadoop.fs.Path getDefaultWorkingDirectory()
public abstract org.apache.hadoop.fs.Path getFileSystemRoot()
Specified by: getFileSystemRoot in interface FileSystemDescriptor

public abstract java.lang.String getScheme()
Specified by: getScheme in interface FileSystemDescriptor
Overrides: getScheme in class org.apache.hadoop.fs.FileSystem
public org.apache.hadoop.fs.Path makeQualified(org.apache.hadoop.fs.Path path)
Overridden to make root its own parent. This is POSIX compliant, but more importantly guards against poor directory accounting in the PathData class of Hadoop 2's FsShell.
Overrides: makeQualified in class org.apache.hadoop.fs.FileSystem
protected void checkPath(org.apache.hadoop.fs.Path path)
Overrides: checkPath in class org.apache.hadoop.fs.FileSystem
public void initialize(java.net.URI path, org.apache.hadoop.conf.Configuration config) throws java.io.IOException
See initialize(URI, Configuration, boolean) for details; calls with the third arg defaulting to 'true' for initializing the superclass.
Overrides: initialize in class org.apache.hadoop.fs.FileSystem
Parameters:
path - URI of a file/directory within this file system.
config - Hadoop configuration.
Throws:
java.io.IOException
public void initialize(java.net.URI path, org.apache.hadoop.conf.Configuration config, boolean initSuperclass) throws java.io.IOException
Parameters:
path - URI of a file/directory within this file system.
config - Hadoop configuration.
initSuperclass - if false, doesn't call super.initialize(path, config); avoids registering a global Statistics object for this instance.
Throws:
java.io.IOException
public java.net.URI getUri()
Overrides: getUri in class org.apache.hadoop.fs.FileSystem

protected int getDefaultPort()
Overrides: getDefaultPort in class org.apache.hadoop.fs.FileSystem
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path hadoopPath, int bufferSize) throws java.io.IOException
Note: This function overrides the given bufferSize value with a higher number unless further overridden using the configuration parameter fs.gs.inputstream.buffer.size.
Overrides: open in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - File to open.
bufferSize - Size of buffer to use for IO.
Throws:
java.io.FileNotFoundException - if the given path does not exist.
java.io.IOException - if an error occurs.

public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path hadoopPath, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOException
Note: This function overrides the given bufferSize value with a higher number unless further overridden using the configuration parameter fs.gs.outputstream.buffer.size.
Overrides: create in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - The file to open.
permission - Permissions to set on the new file. Ignored.
overwrite - If a file with this name already exists: if true, the file will be overwritten; if false, an error will be thrown.
bufferSize - The size of the buffer to use.
replication - Required block replication for the file. Ignored.
blockSize - The block size to be used for the new file. Ignored.
progress - Progress is reported through this. Ignored.
Throws:
java.io.IOException - if an error occurs.
See Also:
setPermission(Path, FsPermission)
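The buffer-size overrides described for open and create above are controlled through ordinary Hadoop configuration; a minimal core-site.xml sketch (the 8 MiB values are illustrative, not the connector's documented defaults):

```xml
<configuration>
  <!-- Read-side buffer used by open(); illustrative value. -->
  <property>
    <name>fs.gs.inputstream.buffer.size</name>
    <value>8388608</value>
  </property>
  <!-- Write-side buffer used by create(); illustrative value. -->
  <property>
    <name>fs.gs.outputstream.buffer.size</name>
    <value>8388608</value>
  </property>
</configuration>
```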
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path hadoopPath, org.apache.hadoop.fs.permission.FsPermission permission, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOException
Overrides: createNonRecursive in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path hadoopPath, int bufferSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOException
Overrides: append in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - The existing file to be appended.
bufferSize - The size of the buffer to be used.
progress - For reporting progress if it is not null.
Throws:
java.io.IOException - if an error occurs.

public void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs) throws java.io.IOException
Overrides: concat in class org.apache.hadoop.fs.FileSystem
Parameters:
trg - the path to the target destination.
psrcs - the paths to the sources to use for the concatenation.
Throws:
java.io.IOException - IO failure

public boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws java.io.IOException
Overrides: rename in class org.apache.hadoop.fs.FileSystem
Parameters:
src - Source path.
dst - Destination path.
Throws:
java.io.FileNotFoundException - if src does not exist.
java.io.IOException - if an error occurs.

public boolean delete(org.apache.hadoop.fs.Path hadoopPath, boolean recursive) throws java.io.IOException
Overrides: delete in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - The path to delete.
recursive - If path is a directory and set to true, the directory is deleted, else an exception is thrown. In case of a file, the recursive parameter is ignored.
Throws:
java.io.IOException - if an error occurs.

public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path hadoopPath) throws java.io.IOException
Overrides: listStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - Given path.
Throws:
java.io.IOException - if an error occurs.

public void setWorkingDirectory(org.apache.hadoop.fs.Path hadoopPath)
Overrides: setWorkingDirectory in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - New working directory.

public org.apache.hadoop.fs.Path getWorkingDirectory()
Overrides: getWorkingDirectory in class org.apache.hadoop.fs.FileSystem

public boolean mkdirs(org.apache.hadoop.fs.Path hadoopPath, org.apache.hadoop.fs.permission.FsPermission permission) throws java.io.IOException
Overrides: mkdirs in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - Given path.
permission - Permissions to set on the given directory.
Throws:
java.io.IOException - if an error occurs.

public short getDefaultReplication()
Overrides: getDefaultReplication in class org.apache.hadoop.fs.FileSystem

public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path hadoopPath) throws java.io.IOException
Overrides: getFileStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
hadoopPath - The path we want information about.
Throws:
java.io.FileNotFoundException - when the path does not exist.
java.io.IOException - on other errors.

public org.apache.hadoop.fs.FileStatus[] globStatus(org.apache.hadoop.fs.Path pathPattern) throws java.io.IOException
Overrides: globStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
pathPattern - A regular expression specifying the path pattern.
Throws:
java.io.IOException - if an error occurs.

public org.apache.hadoop.fs.FileStatus[] globStatus(org.apache.hadoop.fs.Path pathPattern, org.apache.hadoop.fs.PathFilter filter) throws java.io.IOException
Returns null if pathPattern has no glob and the path does not exist. Returns an empty array if pathPattern has a glob and no path matches it.
Overrides: globStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
pathPattern - A regular expression specifying the path pattern.
filter - A user-supplied path filter.
Throws:
java.io.IOException - if an error occurs.

public org.apache.hadoop.fs.Path getHomeDirectory()
Overrides: getHomeDirectory in class org.apache.hadoop.fs.FileSystem

public java.lang.String getCanonicalServiceName()
Returns the service if delegation tokens are configured, otherwise null.
Overrides: getCanonicalServiceName in class org.apache.hadoop.fs.FileSystem

public GoogleCloudStorageFileSystem getGcsFs()

protected abstract void configureBuckets(GoogleCloudStorageFileSystem gcsFs) throws java.io.IOException
Parameters:
gcsFs - GoogleCloudStorageFileSystem to configure buckets
Throws:
java.io.IOException - if bucket name is invalid or cannot be found.

public boolean deleteOnExit(org.apache.hadoop.fs.Path f) throws java.io.IOException
Overrides: deleteOnExit in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

protected void processDeleteOnExit()
Overrides: processDeleteOnExit in class org.apache.hadoop.fs.FileSystem

public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f) throws java.io.IOException
Overrides: getContentSummary in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public org.apache.hadoop.security.token.Token<?> getDelegationToken(java.lang.String renewer) throws java.io.IOException
Overrides: getDelegationToken in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path[] srcs, org.apache.hadoop.fs.Path dst) throws java.io.IOException
Overrides: copyFromLocalFile in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws java.io.IOException
Overrides: copyFromLocalFile in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void copyToLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws java.io.IOException
Overrides: copyToLocalFile in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public org.apache.hadoop.fs.Path startLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile) throws java.io.IOException
Overrides: startLocalOutput in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void completeLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile) throws java.io.IOException
Overrides: completeLocalOutput in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void close() throws java.io.IOException
Specified by: close in interface java.io.Closeable
Specified by: close in interface java.lang.AutoCloseable
Overrides: close in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public long getUsed() throws java.io.IOException
Overrides: getUsed in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public long getDefaultBlockSize()
Overrides: getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem

public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path hadoopPath) throws java.io.IOException
Overrides: getFileChecksum in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void setVerifyChecksum(boolean verifyChecksum)
Overrides: setVerifyChecksum in class org.apache.hadoop.fs.FileSystem

public void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws java.io.IOException
Overrides: setPermission in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void setOwner(org.apache.hadoop.fs.Path p, java.lang.String username, java.lang.String groupname) throws java.io.IOException
Overrides: setOwner in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime) throws java.io.IOException
Overrides: setTimes in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public byte[] getXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) throws java.io.IOException
Overrides: getXAttr in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public java.util.Map<java.lang.String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path) throws java.io.IOException
Overrides: getXAttrs in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public java.util.Map<java.lang.String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> names) throws java.io.IOException
Overrides: getXAttrs in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public java.util.List<java.lang.String> listXAttrs(org.apache.hadoop.fs.Path path) throws java.io.IOException
Overrides: listXAttrs in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void setXAttr(org.apache.hadoop.fs.Path path, java.lang.String name, byte[] value, java.util.EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flags) throws java.io.IOException
Overrides: setXAttr in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException

public void removeXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) throws java.io.IOException
Overrides: removeXAttr in class org.apache.hadoop.fs.FileSystem
Throws:
java.io.IOException
Copyright © 2019. All rights reserved.