Package org.platanios.tensorflow.proto
Interface ConfigProto.ExperimentalOrBuilder
- All Superinterfaces:
  com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
- All Known Implementing Classes:
  ConfigProto.Experimental, ConfigProto.Experimental.Builder
- Enclosing class:
  ConfigProto
public static interface ConfigProto.ExperimentalOrBuilder extends com.google.protobuf.MessageOrBuilder
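As with every protobuf-generated OrBuilder interface, this is a read-only view over a message's fields, implemented both by the immutable message (ConfigProto.Experimental) and by its mutable Builder. The pattern can be sketched with a hypothetical two-field message (illustrative only, not the real generated code):

```java
// Hypothetical miniature of the protobuf OrBuilder pattern.
// The real generated classes (ConfigProto.Experimental and its Builder)
// follow the same shape, with one getter pair per proto field.
interface SettingsOrBuilder {
    boolean getUseNumaAffinity();
    String getExecutorType();
}

final class Settings implements SettingsOrBuilder {
    private final boolean useNumaAffinity;
    private final String executorType;

    private Settings(boolean useNumaAffinity, String executorType) {
        this.useNumaAffinity = useNumaAffinity;
        this.executorType = executorType;
    }

    @Override public boolean getUseNumaAffinity() { return useNumaAffinity; }
    @Override public String getExecutorType() { return executorType; }

    static Builder newBuilder() { return new Builder(); }

    // The Builder implements the same read interface, so code that only
    // needs to *read* fields can accept a SettingsOrBuilder and work with
    // either a built message or a builder still under construction.
    static final class Builder implements SettingsOrBuilder {
        private boolean useNumaAffinity;
        private String executorType = "";

        Builder setUseNumaAffinity(boolean v) { this.useNumaAffinity = v; return this; }
        Builder setExecutorType(String v) { this.executorType = v; return this; }

        @Override public boolean getUseNumaAffinity() { return useNumaAffinity; }
        @Override public String getExecutorType() { return executorType; }

        Settings build() { return new Settings(useNumaAffinity, executorType); }
    }
}
```

Accepting the OrBuilder interface rather than the concrete message lets a method read configuration without caring whether the proto has been built yet.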
Method Summary
boolean getCollectiveDeterministicSequentialExecution()
    If true, make collective op execution order sequential and deterministic for potentially concurrent collective instances.
java.lang.String getCollectiveGroupLeader()
    Task name for group resolution.
com.google.protobuf.ByteString getCollectiveGroupLeaderBytes()
    Task name for group resolution.
boolean getCollectiveNccl()
    If true, use NCCL for CollectiveOps.
boolean getDisableOutputPartitionGraphs()
    If true, the session will not store an additional copy of the graph for each subgraph.
boolean getDisableThreadSpinning()
    If using a direct session, disable spinning while waiting for work in the thread pool.
boolean getEnableMlirBridge()
    Whether to enable the MLIR-based TF->XLA bridge.
boolean getEnableMlirGraphOptimization()
    Whether to enable the MLIR-based graph optimizations.
java.lang.String getExecutorType()
    Which executor to use. The default executor is used if this is an empty string or "DEFAULT".
com.google.protobuf.ByteString getExecutorTypeBytes()
    Which executor to use. The default executor is used if this is an empty string or "DEFAULT".
boolean getOptimizeForStaticGraph()
    If true, the session may treat the graph as being static for optimization purposes.
int getRecvBufMaxChunk()
    Guidance on chunking of large RecvBuf fields for transfer.
SessionMetadata getSessionMetadata()
    Metadata about the session.
SessionMetadataOrBuilder getSessionMetadataOrBuilder()
    Metadata about the session.
boolean getShareClusterDevicesInSession()
    This was promoted to a non-experimental API.
boolean getShareSessionStateInClusterspecPropagation()
    In the following, "session state" means the value of a variable, elements in a hash table, or any other resource accessible by worker sessions held by a TF server.
boolean getUseNumaAffinity()
    If true, and supported by the platform, the runtime will attempt to use NUMA affinity where applicable.
long getXlaFusionAutotunerThresh()
    Minimum number of batches run through the XLA graph before the XLA fusion autotuner is enabled.
boolean hasSessionMetadata()
    Metadata about the session.
Methods inherited from interface com.google.protobuf.MessageOrBuilder
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
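In proto text format, a ConfigProto carrying some of the experimental fields summarized above might look like the following fragment (all field values are illustrative only):

```proto
experimental {
  collective_group_leader: "/job:worker/replica:0/task:0"
  executor_type: "DEFAULT"
  recv_buf_max_chunk: 4096
  use_numa_affinity: true
  collective_nccl: false
  optimize_for_static_graph: true
  xla_fusion_autotuner_thresh: 10
}
```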
Method Detail
-
getCollectiveGroupLeader
java.lang.String getCollectiveGroupLeader()
Task name for group resolution.
string collective_group_leader = 1;
- Returns:
- The collectiveGroupLeader.
-
getCollectiveGroupLeaderBytes
com.google.protobuf.ByteString getCollectiveGroupLeaderBytes()
Task name for group resolution.
string collective_group_leader = 1;
- Returns:
- The bytes for collectiveGroupLeader.
-
getExecutorType
java.lang.String getExecutorType()
Which executor to use. The default executor is used if this is an empty string or "DEFAULT".
string executor_type = 3;
- Returns:
- The executorType.
-
getExecutorTypeBytes
com.google.protobuf.ByteString getExecutorTypeBytes()
Which executor to use. The default executor is used if this is an empty string or "DEFAULT".
string executor_type = 3;
- Returns:
- The bytes for executorType.
-
getRecvBufMaxChunk
int getRecvBufMaxChunk()
Guidance on chunking of large RecvBuf fields for transfer. Any positive value sets the max chunk size. 0 defaults to 4096. Any negative value indicates no max, i.e., the data is sent as a single chunk.
int32 recv_buf_max_chunk = 4;
- Returns:
- The recvBufMaxChunk.
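The documented interpretation of this value can be sketched as a small helper (hypothetical code mirroring the rule above, not the TensorFlow runtime's actual implementation):

```java
// Hypothetical helper mirroring the documented semantics of
// recv_buf_max_chunk; not the TensorFlow runtime's actual code.
final class RecvBufChunking {
    static final int DEFAULT_MAX_CHUNK = 4096; // used when the field is 0

    // Returns the effective max chunk size in bytes for a transfer of
    // totalBytes, or totalBytes itself when chunking is disabled.
    static long effectiveChunkSize(int recvBufMaxChunk, long totalBytes) {
        if (recvBufMaxChunk > 0) {
            return Math.min(recvBufMaxChunk, totalBytes); // explicit cap
        } else if (recvBufMaxChunk == 0) {
            return Math.min(DEFAULT_MAX_CHUNK, totalBytes); // default cap
        } else {
            return totalBytes; // negative: no max, one chunk only
        }
    }
}
```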
-
getUseNumaAffinity
boolean getUseNumaAffinity()
If true, and supported by the platform, the runtime will attempt to use NUMA affinity where applicable. One consequence will be the existence of as many CPU devices as there are available NUMA nodes.
bool use_numa_affinity = 5;
- Returns:
- The useNumaAffinity.
-
getCollectiveDeterministicSequentialExecution
boolean getCollectiveDeterministicSequentialExecution()
If true, make collective op execution order sequential and deterministic for potentially concurrent collective instances.
bool collective_deterministic_sequential_execution = 6;
- Returns:
- The collectiveDeterministicSequentialExecution.
-
getCollectiveNccl
boolean getCollectiveNccl()
If true, use NCCL for CollectiveOps. This feature is highly experimental.
bool collective_nccl = 7;
- Returns:
- The collectiveNccl.
-
getShareSessionStateInClusterspecPropagation
boolean getShareSessionStateInClusterspecPropagation()
In the following, "session state" means the value of a variable, elements in a hash table, or any other resource accessible by worker sessions held by a TF server.
When ClusterSpec propagation is enabled, the value of isolate_session_state is ignored when deciding whether to share session states in a TF server (for backwards-compatibility reasons):
- If share_session_state_in_clusterspec_propagation is true, session states are shared.
- If share_session_state_in_clusterspec_propagation is false, session states are isolated.
When ClusterSpec propagation is not used, the value of share_session_state_in_clusterspec_propagation is ignored when deciding whether to share session states in a TF server:
- If isolate_session_state is true, session states are isolated.
- If isolate_session_state is false, session states are shared.
TODO(b/129330037): Add a single API that consistently treats isolate_session_state and ClusterSpec propagation.
bool share_session_state_in_clusterspec_propagation = 8;
- Returns:
- The shareSessionStateInClusterspecPropagation.
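The interaction described above reduces to a small decision table, sketched here as a hypothetical function (not the runtime's actual code):

```java
// Hypothetical decision table mirroring the documented interaction between
// ClusterSpec propagation, isolate_session_state, and
// share_session_state_in_clusterspec_propagation.
final class SessionStateSharing {
    // Returns true if session states are shared in the TF server.
    static boolean isShared(boolean clusterSpecPropagationEnabled,
                            boolean isolateSessionState,
                            boolean shareInClusterSpecPropagation) {
        if (clusterSpecPropagationEnabled) {
            // isolate_session_state is ignored (backwards compatibility).
            return shareInClusterSpecPropagation;
        }
        // share_session_state_in_clusterspec_propagation is ignored here.
        return !isolateSessionState;
    }
}
```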
-
getDisableThreadSpinning
boolean getDisableThreadSpinning()
If using a direct session, disable spinning while waiting for work in the thread pool. This may result in higher latency for completing ops, but in cases where there would otherwise be a lot of spinning it can result in lower CPU usage.
bool disable_thread_spinning = 9;
- Returns:
- The disableThreadSpinning.
-
getShareClusterDevicesInSession
boolean getShareClusterDevicesInSession()
This was promoted to a non-experimental API. Please use ConfigProto.share_cluster_devices_in_session instead.
bool share_cluster_devices_in_session = 10;
- Returns:
- The shareClusterDevicesInSession.
-
hasSessionMetadata
boolean hasSessionMetadata()
Metadata about the session. If set, this can be used by the runtime and the Ops for debugging, monitoring, etc. NOTE: This is currently used and propagated only by the direct session.
.org.platanios.tensorflow.proto.SessionMetadata session_metadata = 11;
- Returns:
- Whether the sessionMetadata field is set.
-
getSessionMetadata
SessionMetadata getSessionMetadata()
Metadata about the session. If set, this can be used by the runtime and the Ops for debugging, monitoring, etc. NOTE: This is currently used and propagated only by the direct session.
.org.platanios.tensorflow.proto.SessionMetadata session_metadata = 11;
- Returns:
- The sessionMetadata.
-
getSessionMetadataOrBuilder
SessionMetadataOrBuilder getSessionMetadataOrBuilder()
Metadata about the session. If set, this can be used by the runtime and the Ops for debugging, monitoring, etc. NOTE: This is currently used and propagated only by the direct session.
.org.platanios.tensorflow.proto.SessionMetadata session_metadata = 11;
-
getOptimizeForStaticGraph
boolean getOptimizeForStaticGraph()
If true, the session may treat the graph as being static for optimization purposes. If this option is set to true when a session is created, the full GraphDef must be passed in a single call to Session::Create(), and Session::Extend() may not be supported.
bool optimize_for_static_graph = 12;
- Returns:
- The optimizeForStaticGraph.
-
getEnableMlirBridge
boolean getEnableMlirBridge()
Whether to enable the MLIR-based TF->XLA bridge. This is a replacement for the existing bridge and is not yet ready for production use. If this option is set to true when a session is created, MLIR is used to perform the graph transformations that put the graph in a form that can be executed with delegation of some computations to an accelerator. This builds on the XLA model, where a subset of the graph is encapsulated and attached to a "compile" operation whose result is fed to an "execute" operation. The kernel for these operations is responsible for lowering the encapsulated graph to a particular device.
bool enable_mlir_bridge = 13;
- Returns:
- The enableMlirBridge.
-
getEnableMlirGraphOptimization
boolean getEnableMlirGraphOptimization()
Whether to enable the MLIR-based graph optimizations. This will become a part of the standard TensorFlow graph optimization pipeline; currently it is only used for gradual migration and for testing new passes that are replacing existing optimizations in Grappler.
bool enable_mlir_graph_optimization = 16;
- Returns:
- The enableMlirGraphOptimization.
-
getDisableOutputPartitionGraphs
boolean getDisableOutputPartitionGraphs()
If true, the session will not store an additional copy of the graph for each subgraph. If this option is set to true when a session is created, the `RunOptions.output_partition_graphs` options must not be set.
bool disable_output_partition_graphs = 14;
- Returns:
- The disableOutputPartitionGraphs.
-
getXlaFusionAutotunerThresh
long getXlaFusionAutotunerThresh()
Minimum number of batches run through the XLA graph before XLA fusion autotuner is enabled. Default value of zero disables the autotuner. The XLA fusion autotuner can improve performance by executing a heuristic search on the compiler parameters.
int64 xla_fusion_autotuner_thresh = 15;
- Returns:
- The xlaFusionAutotunerThresh.