Deprecated. Use org.apache.hadoop.security.AccessControlException instead.
Default constructor is needed for unwrapping from RemoteException.
Constructs an AccessControlException with the specified detail message.
ACTIVE = 1;
repeated .hadoop.common.RpcSaslProto.SaslAuth auths = 4;
repeated string groups = 1;
repeated uint32 methods = 2;
repeated .hadoop.common.ProtocolSignatureProto protocolSignature = 1;
repeated .hadoop.common.ProtocolVersionProto protocolVersions = 1;
repeated uint64 versions = 2;
Deprecated. Use Configuration.addDeprecation(String key, String newKey, String customMessage) instead.
Deprecated. Use Configuration.addDeprecation(String key, String newKey) instead.
If the passed object is an instance of Service, add it to the list of services managed by this CompositeService.
Add the passed Service to the list of services managed by this CompositeService.
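A minimal sketch of the CompositeService pattern these entries describe; ChildService and ParentService are hypothetical names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.service.AbstractService;
    import org.apache.hadoop.service.CompositeService;

    class ChildService extends AbstractService {
      ChildService() { super("ChildService"); }
    }

    class ParentService extends CompositeService {
      ParentService() { super("ParentService"); }
      @Override
      protected void serviceInit(Configuration conf) throws Exception {
        addService(new ChildService()); // started/stopped with the parent
        super.serviceInit(conf);
      }
    }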
Adapts an FSDataInputStream to Avro's SeekableInput interface.
Construct given an FSDataInputStream and its length.
Construct given a FileContext and a Path.
Interface supported by WritableComparable types supporting ordering/permutation by a representative set of bytes.
A CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
Create a BlockCompressorStream.
Create a BlockCompressorStream with given output-stream and compressor.
A DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
Create a BlockDecompressorStream.
Create a BlockDecompressorStream.
This class extends MapFile and provides very much the same functionality.
A Decompressor based on the popular gzip compressed file format.
A Compressor based on the popular bzip2 compression algorithm.
A Decompressor based on the popular bzip2 compression algorithm.
Same as Client.call(RPC.RpcKind, Writable, ConnectionId) for RPC_BUILTIN.
Deprecated. Use Client.call(RPC.RpcKind, Writable, ConnectionId) instead.
Deprecated. Use Client.call(RPC.RpcKind, Writable, ConnectionId) instead.
Deprecated. Use Client.call(RPC.RpcKind, Writable, ConnectionId) instead.
Same as Client.call(RPC.RpcKind, Writable, InetSocketAddress, Class, UserGroupInformation, int, Configuration) except that rpcKind is writable.
Same as Client.call(Writable, InetSocketAddress, Class, UserGroupInformation, int, Configuration) except that the serviceClass is specified.
Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, rpcTimeout as timeout and conf as conf for this connection, returning the value.
Same as Client.call(RPC.RpcKind, Writable, ConnectionId) except the rpcKind is RPC_BUILTIN.
Make a call, passing rpcRequest, to the IPC server defined by remoteId, returning the rpc response.
Make a call, passing rpcRequest, to the IPC server defined by remoteId, returning the rpc response.
Deprecated. Use #call(RpcPayloadHeader.RpcKind, String, Writable, long) instead.
File.canExecute()
File.canRead()
File.canWrite()
rpc cedeActive(.hadoop.common.CedeActiveRequestProto) returns (.hadoop.common.CedeActiveResponseProto);
CHALLENGE = 3;
Close the Closeable objects and ignore any IOException or null pointers.
required sint32 callId = 3;
required uint32 callId = 1;
optional bytes challenge = 5;
required bytes clientId = 4;
optional bytes clientId = 7;
required uint64 clientProtocolVersion = 3;
required string declaringClassProtocolName = 2;
optional string effectiveUser = 1;
optional .hadoop.common.RpcResponseHeaderProto.RpcErrorCodeProto errorDetail = 6;
optional string errorMsg = 5;
optional string exceptionClassName = 4;
required bytes identifier = 1;
required string kind = 3;
required string mechanism = 2;
required string message = 1;
required string method = 1;
required string methodName = 1;
required uint32 millisToCede = 1;
required uint64 newExpiryTime = 1;
optional string notReadyReason = 3;
required bytes password = 2;
optional string protocol = 3;
required string protocol = 1;
optional bool readyToBecomeActive = 2;
optional string realUser = 2;
required string renewer = 1;
required .hadoop.common.HAStateChangeRequestInfoProto reqInfo = 1;
required .hadoop.common.HARequestSource reqSource = 1;
optional sint32 retryCount = 5 [default = -1];
optional sint32 retryCount = 8 [default = -1];
required string rpcKind = 2;
required string rpcKind = 1;
optional .hadoop.common.RpcKindProto rpcKind = 1;
optional .hadoop.common.RpcRequestHeaderProto.OperationProto rpcOp = 2;
optional string serverId = 4;
optional uint32 serverIpcVersionNum = 3;
required string service = 4;
required .hadoop.common.HAServiceStateProto state = 1;
required .hadoop.common.RpcSaslProto.SaslState state = 2;
required .hadoop.common.RpcResponseHeaderProto.RpcStatusProto status = 2;
optional bytes token = 3;
required .hadoop.common.TokenProto token = 1;
optional .hadoop.common.TokenProto token = 1;
required string user = 1;
optional .hadoop.common.UserInformationProto userInfo = 2;
required uint64 version = 1;
optional uint32 version = 1;
Writable class.
Deprecated. Use AbstractService.stop().
Closes the socket ignoring IOException.
Closes the stream ignoring IOException.
Specification of a stream-based 'compressor' which can be plugged into a CompressionOutputStream to compress data.
Something that may be configured with a Configuration.
Base class for things that may be configured with a Configuration.
Utilities for config variables of the viewfs; see ViewFs for examples.
Thrown by NetUtils.connect(java.net.Socket, java.net.SocketAddress, int) if it times out while connecting to the remote host.
RuntimeException.
RuntimeException.
The specification of this method matches that of FileContext.create(Path, EnumSet, Options.CreateOpts...) except that the Path f must be fully qualified and the permission is absolute (i.e. umask has been applied).
Create a proxy for an interface using the given FailoverProxyProvider and the same retry policy for each method in the interface.
Create a proxy for an interface using the given FailoverProxyProvider and a set of retry policies specified by method name.
Create a new Compressor for use by this CompressionCodec.
SequenceFile.Reader returned.
Create a new Decompressor for use by this CompressionCodec.
Create an immutable FsPermission object.
Create a CompressionInputStream that will read from the given input stream and return a stream for uncompressed data.
Create a CompressionInputStream that will read from the given InputStream with the given Decompressor, and return a stream for uncompressed data.
Create a CompressionInputStream that will read from the given input stream.
Create a CompressionInputStream that will read from the given InputStream with the given Decompressor.
The specification of this method matches that of AbstractFileSystem.create(Path, EnumSet, Options.CreateOpts...) except that the opts have been declared explicitly.
IOException.
Create a CompressionOutputStream that will write to the given OutputStream.
Create a CompressionOutputStream that will write to the given OutputStream with the given Compressor.
recordName.
Deprecated. Use TFile.Reader.createScannerByKey(byte[], byte[]) instead.
Deprecated. Use TFile.Reader.createScannerByKey(RawComparable, RawComparable) instead.
The specification of this method matches that of FileContext.createSymlink(Path, Path, boolean).
See FileContext.createSymlink(Path, Path, boolean).
Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
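The replacement Writer.Option API looks roughly like this; the path and key/value classes are illustrative, not prescribed by the index:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqFileWriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Writer.Option-based replacement for the deprecated overloads.
        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(new Path("/tmp/example.seq")),
            SequenceFile.Writer.keyClass(Text.class),
            SequenceFile.Writer.valueClass(IntWritable.class));
        try {
          writer.append(new Text("k"), new IntWritable(1));
        } finally {
          writer.close();
        }
      }
    }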
Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to decompress data.
An implementation of FailoverProxyProvider which does nothing in the event of failover, and always returns the same proxy object.
DefaultStringifier is the default implementation of the Stringifier interface which stringifies the objects using base64 encoding of the serialized version of the objects.
WritableComparable implementation.
Record implementation.
The specification of this method matches that of FileContext.delete(Path, boolean) except that Path f must be for this file system.
Deprecated. Use FileSystem.delete(Path, boolean) instead.
Writes out all the parameters and their properties (final and resource) to the given Writer. The format of the output would be { "properties" : [ {key1,value1,key1.isFinal,key1.resource}, {key2,value2,key2.isFinal,key2.resource}...
rpc echo(.hadoop.common.EchoRequestProto) returns (.hadoop.common.EchoResponseProto);
rpc echo2(.hadoop.common.EchoRequestProto) returns (.hadoop.common.EchoResponseProto);
Returns true iff o is a ByteWritable with the same value.
Returns true iff o is a DoubleWritable with the same value.
Returns true iff o is an EnumSetWritable with the same value, or both are null.
Returns true iff o is a FloatWritable with the same value.
Returns true iff o is an IntWritable with the same value.
Returns true iff o is a LongWritable with the same value.
Returns true iff o is an MD5Hash whose digest contains the same values.
Returns true iff o is a ShortWritable with the same value.
Returns true iff o is a Text with the same contents.
Returns true iff o is a VIntWritable with the same value.
Returns true iff o is a VLongWritable with the same value.
rpc error(.hadoop.common.EmptyRequestProto) returns (.hadoop.common.EmptyResponseProto);
rpc error2(.hadoop.common.EmptyRequestProto) returns (.hadoop.common.EmptyResponseProto);
ERROR_APPLICATION = 1;
ERROR_NO_SUCH_METHOD = 2;
ERROR_NO_SUCH_PROTOCOL = 3;
ERROR_RPC_SERVER = 4;
ERROR_RPC_VERSION_MISMATCH = 6;
ERROR_SERIALIZING_RESPONSE = 5;
ERROR = 1;
Deprecated. Use EventCounter instead.
FATAL_DESERIALIZING_REQUEST = 13;
FATAL_INVALID_RPC_HEADER = 12;
FATAL_UNAUTHORIZED = 15;
FATAL_UNKNOWN = 10;
FATAL_UNSUPPORTED_SERIALIZATION = 11;
FATAL = 2;
FATAL_VERSION_MISMATCH = 14;
HardLink
A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
Finds any occurrence of what in the backing buffer, starting as position start.
Returns true if the end of the decompressed data output stream has been reached.
FileContext.fixRelativePart(org.apache.hadoop.fs.Path)
Utility that wraps a FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
Utility that wraps a OutputStream in a DataOutputStream, buffers output through a BufferedOutputStream and creates a checksum file.
FsAction.
FileSystem.
Throwable into a Runtime Exception.
A FileSystem backed by an FTP client provided by Apache Commons Net.
FileSystem.delete(Path, boolean)
Get the value of the name property, null if no such property exists.
Get the value of the name property.
Return the nth value in the file.
Fast version of the MapFile.Reader.get(WritableComparable, Writable) method.
Deprecated. Use BytesWritable.getBytes() instead.
key.
WritableComparable implementation.
Return the ShutdownHookManager singleton.
Get the value of the name property as a boolean.
Returns the raw bytes; however, only data up to Text.getLength() is valid.
Get the value of the name property as a Class.
Get the value of the name property as a Class implementing the interface specified by xface.
Return the Class of the given object.
Get the value of the name property as an array of Class.
Get the ClassLoader for this job.
Get a Compressor for the given CompressionCodec from the pool, or a new one.
Get the type of Compressor needed by this CompressionCodec.
Get an input stream attached to the configuration resource with the given name.
Get a Reader attached to the configuration resource with the given name.
Return the ContentSummary of path f.
Return the ContentSummary of a given Path.
Get a Decompressor for the given CompressionCodec from the pool, or a new one.
Get the type of Decompressor needed by this CompressionCodec.
Deprecated. Use FileSystem.getDefaultBlockSize(Path) instead.
Deprecated. Use FileSystem.getDefaultReplication(Path) instead.
Get the value of the name property as a double.
Return a Runnable that periodically empties the trash of all users, intended to be run by the superuser.
Get the state in which the failure in Service.getFailureCause() occurred.
The specification of this method matches that of FileContext.getFileBlockLocations(Path, long, long) except that Path f must be for this file system.
The specification of this method matches that of FileContext.getFileChecksum(Path) except that Path f must be for this file system.
The specification of this method matches that of FileContext.getFileLinkStatus(Path) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path leading up to the final path component.
See FileContext.getFileLinkStatus(Path).
The specification of this method matches that of FileContext.getFileStatus(Path) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path.
Get the value of the name property as a float.
The specification of this method matches that of FileContext.getFsStatus(Path) except that Path f must be for this file system.
See FileContext.getFsStatus(Path).
Return group FsAction.
rpc getGroupsForUser(.hadoop.common.GetGroupsForUserRequestProto) returns (.hadoop.common.GetGroupsForUserResponseProto);
Get the value of the name property as a List of objects implementing the interface specified by xface.
Get the value of the name property as an int.
Get the value of the name property as a set of comma-delimited int values.
Return the number of leased Compressors for this CompressionCodec.
Return the number of leased Decompressors for this CompressionCodec.
FileContext.getLinkTarget(Path)
Get the value of the name property as a long.
Get the value of the name property as a long or human readable format.
Return other FsAction.
Get the value of the name property as a Pattern.
Get the address where the GetUserMappingsProtocol implementation is running.
rpc getProtocolSignature(.hadoop.common.GetProtocolSignatureRequestProto) returns (.hadoop.common.GetProtocolSignatureResponseProto);
rpc getProtocolVersions(.hadoop.common.GetProtocolVersionsRequestProto) returns (.hadoop.common.GetProtocolVersionsResponseProto);
Get the value of the name property, without doing variable expansion. If the key is deprecated, it returns the value of the first key which replaces the deprecated key and is not null.
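A short sketch contrasting get() and getRaw() variable expansion, assuming invented key names:

    import org.apache.hadoop.conf.Configuration;

    public class GetRawExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("base.dir", "/data");
        conf.set("log.dir", "${base.dir}/logs");
        System.out.println(conf.get("log.dir"));    // "/data/logs" (expanded)
        System.out.println(conf.getRaw("log.dir")); // "${base.dir}/logs" (raw)
      }
    }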
Get the URL for the named resource.
Deprecated. Use FileSystem.getServerDefaults(Path) instead.
rpc getServiceStatus(.hadoop.common.GetServiceStatusRequestProto) returns (.hadoop.common.GetServiceStatusResponseProto);
Deprecated. Use BytesWritable.getLength() instead.
Get the value of the name property as an InetSocketAddress.
Deprecated. Use FileSystem.getAllStatistics() instead.
Get the value of the name property as a collection of Strings.
Get the value of the name property as an array of Strings.
Get the value of the name property as a trimmed String, null if no such property exists.
Get the value of the name property as a trimmed String, defaultValue if no such property exists.
Get the value of the name property as a collection of Strings, trimmed of the leading and trailing whitespace.
Get the value of the name property as an array of Strings, trimmed of the leading and trailing whitespace.
GetUserMappingsProtocol.
The UMASK_LABEL config param has umask value that is either symbolic or octal.
Return user FsAction.
hadoop.common.GetGroupsForUserRequestProto
hadoop.common.GetGroupsForUserResponseProto
hadoop.common.GetUserMappingsProtocolService
rpc gracefulFailover(.hadoop.common.GracefulFailoverRequestProto) returns (.hadoop.common.GracefulFailoverResponseProto);
Groups.
HAServiceProtocol RPC calls.
hadoop.common.GetServiceStatusRequestProto
hadoop.common.GetServiceStatusResponseProto
hadoop.common.HARequestSource
hadoop.common.HAServiceProtocolService
hadoop.common.HAServiceStateProto
hadoop.common.HAStateChangeRequestInfoProto
hadoop.common.MonitorHealthRequestProto
hadoop.common.MonitorHealthResponseProto
hadoop.common.TransitionToActiveRequestProto
hadoop.common.TransitionToActiveResponseProto
hadoop.common.TransitionToStandbyRequestProto
hadoop.common.TransitionToStandbyResponseProto
INITIALIZING = 0;
INITIATE = 2;
hadoop.common.IpcConnectionContextProto
hadoop.common.UserInformationProto
Checks whether the given key is deprecated.
Deprecated. Use FileStatus.isFile(), FileStatus.isDirectory(), and FileStatus.isSymlink() instead.
Query for a DNSToSwitchMapping instance being on a single switch.
AbstractDNSToSwitchMapping.isMappingSingleSwitch(DNSToSwitchMapping)
Get an Iterator to go through the list of String key-value pairs in the configuration.
An experimental Serialization for Java Serializable classes.
A RawComparator that uses a JavaSerialization Deserializer to deserialize objects that are then compared via their Comparable interfaces.
Returns the key associated with the most recent call to ArrayFile.Reader.seek(long), ArrayFile.Reader.next(Writable), or ArrayFile.Reader.get(long,Writable).
LightWeightCache.
LightWeightGSet.
A wrapper for File.list().
A wrapper for File.listFiles().
The specification of this method matches that of FileContext.listLocatedStatus(Path) except that Path f must be for this file system.
The specification of this method matches that of FileContext.Util.listStatus(Path) except that Path f must be for this file system.
See FileContext.Util.listStatus(Path[], PathFilter).
If f is a file, this method will make a single call to S3.
The specification of this method matches that of FileContext.listStatus(Path) except that Path f must be for this file system.
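A hedged FileContext listing sketch for the listStatus family above; the directory path is illustrative:

    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class ListStatusExample {
      public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        RemoteIterator<FileStatus> it = fc.listStatus(new Path("/tmp"));
        while (it.hasNext()) {
          FileStatus st = it.next();
          System.out.println(st.getPath() + (st.isDirectory() ? " (dir)" : ""));
        }
      }
    }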
A Compressor based on the lz4 compression algorithm.
A Decompressor based on the lz4 compression algorithm.
MBeans.register(String, String, Object)
SegmentDescriptor
The specification of this method matches that of FileContext.mkdir(Path, FsPermission, boolean) except that the Path f must be fully qualified and the permission is absolute (i.e. umask has been applied).
Call FileSystem.mkdirs(Path, FsPermission) with default permission.
rpc monitorHealth(.hadoop.common.MonitorHealthRequestProto) returns (.hadoop.common.MonitorHealthResponseProto);
Encapsulate a list of IOException into an IOException.
Create a new MutableQuantiles for a metric that rolls itself over on the specified time interval.
A FileSystem for reading and writing files stored on Amazon S3.
Returns true if a preset dictionary is needed for decompression.
Returns true if the input data buffer is empty and Decompressor.setInput(byte[], int, int) should be called to provide more input.
Returns true if the input data buffer is empty and Lz4Decompressor.setInput(byte[], int, int) should be called to provide more input.
Returns true if the input data buffer is empty and SnappyDecompressor.setInput(byte[], int, int) should be called to provide more input.
NEGOTIATE = 1;
Construct a new WritableComparable instance.
Read the next key/value pair in the map into key and val.
Read the next key in the file into key, skipping its value.
Read the next key/value pair in the map into key and val.
Read the next key in a set into key.
Text used in the ScriptBasedMapping.toString() method if there is no script: "no script".
The specification of this method matches that of FileContext.open(Path) except that Path f must be for this file system.
The specification of this method matches that of FileContext.open(Path, int) except that Path f must be for this file system.
FSDataInputStream returned.
Opens the FSDataInputStream on the requested file on local file system, verifying the expected user/group constraints if security is enabled.
A FileSystem that uses Amazon S3 as a backing store.
A FileSystem for reading and writing files on Amazon S3.
JMXJsonServlet class.
FileSystem.
Called whenever the RetryPolicy determines that an error warrants failing over.
rpc ping(.hadoop.common.EmptyRequestProto) returns (.hadoop.common.EmptyResponseProto);
rpc ping2(.hadoop.common.EmptyRequestProto) returns (.hadoop.common.EmptyResponseProto);
hadoop.common.RequestHeaderProto
hadoop.common.GetProtocolSignatureRequestProto
hadoop.common.GetProtocolSignatureResponseProto
hadoop.common.GetProtocolVersionsRequestProto
hadoop.common.GetProtocolVersionsResponseProto
hadoop.common.ProtocolInfoService
hadoop.common.ProtocolSignatureProto
hadoop.common.ProtocolVersionProto
RawComparator.
A Comparator that operates directly on byte representations of objects.
Create and initialize a FsPermission from DataInput.
in.
CompressedWritable.readFields(DataInput).
FSDataInputStream.readFully(long, byte[], int, int).
Read a Writable, String, primitive type, or an array of the preceding.
Read a Writable, String, primitive type, or an array of the preceding.
Something whose Configuration can be changed at run time.
Construct a ReconfigurableBase with the Configuration conf.
Create a new instance of ReconfigurationException.
A Record comparison implementation.
hadoop.common.RefreshAuthorizationPolicyProtocolService
hadoop.common.RefreshServiceAclRequestProto
hadoop.common.RefreshServiceAclResponseProto
rpc refreshServiceAcl(.hadoop.common.RefreshServiceAclRequestProto) returns (.hadoop.common.RefreshServiceAclResponseProto);
rpc refreshSuperUserGroupsConfiguration(.hadoop.common.RefreshSuperUserGroupsConfigurationRequestProto) returns (.hadoop.common.RefreshSuperUserGroupsConfigurationResponseProto);
hadoop.common.RefreshSuperUserGroupsConfigurationRequestProto
hadoop.common.RefreshSuperUserGroupsConfigurationResponseProto
hadoop.common.RefreshUserMappingsProtocolService
hadoop.common.RefreshUserToGroupsMappingsRequestProto
hadoop.common.RefreshUserToGroupsMappingsResponseProto
rpc refreshUserToGroupsMappings(.hadoop.common.RefreshUserToGroupsMappingsRequestProto) returns (.hadoop.common.RefreshUserToGroupsMappingsResponseProto);
The specification of this method matches that of FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system.
The specification of this method matches that of FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system and NO OVERWRITE is performed.
The specification of this method matches that of FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system.
REQUEST_BY_USER_FORCED = 1;
REQUEST_BY_USER = 0;
REQUEST_BY_ZKFC = 2;
AbstractFileSystem.getLinkTarget(Path)
RESPONSE = 4;
RetryPolicy.
RetryPolicy.shouldRetry(Exception, int, int, boolean).
Return the Compressor to the pool.
Return the Decompressor to the pool.
RPC_BUILTIN = 0;
RPC_CLOSE_CONNECTION = 2;
RPC_CONTINUATION_PACKET = 1;
RPC_FINAL_PACKET = 0;
RPC_PROTOCOL_BUFFER = 2;
RPC_WRITABLE = 1;
hadoop.common.RpcKindProto
hadoop.common.RpcRequestHeaderProto
hadoop.common.RpcRequestHeaderProto.OperationProto
hadoop.common.RpcResponseHeaderProto
hadoop.common.RpcResponseHeaderProto.RpcErrorCodeProto
hadoop.common.RpcResponseHeaderProto.RpcStatusProto
hadoop.common.RpcSaslProto
hadoop.common.RpcSaslProto.SaslAuth
hadoop.common.RpcSaslProto.SaslState
PrintStream configured earlier.
Runs the given Tool by Tool.run(String[]), after parsing with the given generic arguments.
Runs the Tool with its Configuration.
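The Tool/ToolRunner contract described by these two entries, as a minimal sketch (MyTool is a hypothetical class):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyTool extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        Configuration conf = getConf(); // generic options already parsed in
        return 0;                       // exit code
      }
      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
      }
    }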
A FileSystem backed by Amazon S3.
S3FileSystem.
This class implements the DNSToSwitchMapping interface using a script configured via the CommonConfigurationKeysPublic.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY option.
hadoop.common.CancelDelegationTokenRequestProto
hadoop.common.CancelDelegationTokenResponseProto
hadoop.common.GetDelegationTokenRequestProto
hadoop.common.GetDelegationTokenResponseProto
hadoop.common.RenewDelegationTokenRequestProto
hadoop.common.RenewDelegationTokenResponseProto
hadoop.common.TokenProto
Positions the reader before its nth value.
SequenceFiles are flat files consisting of binary key/value pairs.
SequenceFile.
RawComparator.
RawComparator.
Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
Container of ServiceStateChangeListener instances, including a notification loop that is robust against changes to the list during the notification process.
Service.STATE.NOTINITED state.
Set the value of the name property.
Set the value of the name property.
Set the value of the name property to a boolean.
Set the value of the name property to the name of a theClass implementing the given interface xface.
Set the value of the name property to a double.
Set the value of the name property to the given type.
Platform independent implementation for File.setExecutable(boolean); File#setExecutable does not work as expected on Windows.
Set the value of the name property to a float.
Set the value of the name property to an int.
Set the value of the name property to a long.
The specification of this method matches that of FileContext.setOwner(Path, String, String) except that Path f must be for this file system.
Set the given property to a Pattern.
The specification of this method matches that of FileContext.setPermission(Path, FsPermission) except that Path f must be for this file system.
Platform independent implementation for File.setReadable(boolean); File#setReadable does not work as expected on Windows.
The specification of this method matches that of FileContext.setReplication(Path, short) except that Path f must be for this file system.
Set the socket address for the name property as a host:port.
Set the array of string values for the name property as comma delimited values.
Set the value of name to the given time duration.
The specification of this method matches that of FileContext.setTimes(Path, long, long) except that Path f must be for this file system.
Sets the Path's last modified time only to the given valid time.
The specification of this method matches that of FileContext.setVerifyChecksum(boolean, Path) except that Path f must be for this file system.
Platform independent implementation for File.setWritable(boolean); File#setWritable does not work as expected on Windows.
ShutdownHookManager enables running shutdownHooks in a deterministic order, higher priority first.
Threads and ExecutorServices.
A Compressor based on the snappy compression algorithm.
A Decompressor based on the snappy compression algorithm.
Uses fuser to kill the process listening on the service's TCP port.
STANDBY = 2;
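A sketch of registering a hook with the ShutdownHookManager singleton described above; the priority value is arbitrary:

    import org.apache.hadoop.util.ShutdownHookManager;

    public class HookExample {
      public static void main(String[] args) {
        // Hooks registered with higher priority run earlier at JVM shutdown.
        ShutdownHookManager.get().addShutdownHook(new Runnable() {
          @Override
          public void run() {
            System.out.println("cleaning up");
          }
        }, 10);
      }
    }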
fileName attribute, if specified.
SUCCESS = 0;
AbstractFileSystem.supportsSymlinks()
Deprecated. Use SequenceFile.Writer.hsync() or SequenceFile.Writer.hflush() instead.
A DNSToSwitchMapping implementation that reads a 2 column text file.
hadoop.common.EchoRequestProto
hadoop.common.EchoResponseProto
hadoop.common.EmptyRequestProto
hadoop.common.EmptyResponseProto
hadoop.common.TestProtobufRpc2Proto
hadoop.common.TestProtobufRpcProto
A utility to help run Tools.
rpc transitionToActive(.hadoop.common.TransitionToActiveRequestProto) returns (.hadoop.common.TransitionToActiveResponseProto);
rpc transitionToStandby(.hadoop.common.TransitionToStandbyRequestProto) returns (.hadoop.common.TransitionToStandbyResponseProto);
Set the socket address a client can use to connect for the name property as a host:port.
S3FileSystem.
Thrown by VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
This is the constructor with the signature needed by FileSystem.createFileSystem(URI, Configuration); after this constructor is called, initialize() is called.
WRAP = 5;
InputStream.
A serializable object which implements a simple, efficient, serialization protocol, based on DataInput and DataOutput.
A Writable which is also Comparable.
A Comparator for WritableComparables.
A WritableComparable implementation.
A Serialization for Writables that delegates to Writable.write(java.io.DataOutput) and Writable.readFields(java.io.DataInput).
Serialize the fields of this object to out.
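A minimal custom Writable matching the write(out)/readFields(in) contract in the entries above; the field layout is invented:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    public class PairWritable implements Writable {
      private int left;
      private long right;

      @Override
      public void write(DataOutput out) throws IOException {
        out.writeInt(left);   // serialize the fields of this object to out
        out.writeLong(right);
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        left = in.readInt();  // deserialize in the same field order
        right = in.readLong();
      }
    }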
CompressedWritable.write(DataOutput).
Write a Writable, String, primitive type, or an array of the preceding.
Write a Writable, String, primitive type, or an array of the preceding.
Write to the given OutputStream using UTF-8 encoding.
Writer.
hadoop.common.CedeActiveRequestProto
hadoop.common.CedeActiveResponseProto
hadoop.common.GracefulFailoverRequestProto
hadoop.common.GracefulFailoverResponseProto
hadoop.common.ZKFCProtocolService
A Compressor based on the popular zlib compression algorithm.
A Decompressor based on the popular zlib compression algorithm.