Class GPUOptions.Experimental.Builder

    • Method Detail

      • getDescriptor

        public static final org.nd4j.shade.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected org.nd4j.shade.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • clear

        public GPUOptions.Experimental.Builder clear()
        Specified by:
        clear in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        clear in interface org.nd4j.shade.protobuf.MessageLite.Builder
        Overrides:
        clear in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • getDescriptorForType

        public org.nd4j.shade.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface org.nd4j.shade.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • getDefaultInstanceForType

        public GPUOptions.Experimental getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface org.nd4j.shade.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface org.nd4j.shade.protobuf.MessageOrBuilder
      • build

        public GPUOptions.Experimental build()
        Specified by:
        build in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        build in interface org.nd4j.shade.protobuf.MessageLite.Builder
      • buildPartial

        public GPUOptions.Experimental buildPartial()
        Specified by:
        buildPartial in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        buildPartial in interface org.nd4j.shade.protobuf.MessageLite.Builder
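        Example: a minimal sketch contrasting build() and buildPartial(), assuming the generated classes are available under org.tensorflow.framework as bundled by ND4J.

          import org.tensorflow.framework.GPUOptions;

          public class BuildVsBuildPartial {
              public static void main(String[] args) {
                  GPUOptions.Experimental.Builder builder = GPUOptions.Experimental.newBuilder()
                          .setUseUnifiedMemory(true);

                  // build() checks isInitialized() and throws if required fields are missing;
                  // this proto3 message has no required fields, so it always succeeds here.
                  GPUOptions.Experimental built = builder.build();

                  // buildPartial() skips the initialization check and returns whatever is set so far.
                  GPUOptions.Experimental partial = builder.buildPartial();

                  System.out.println(built.getUseUnifiedMemory());   // true
                  System.out.println(partial.getUseUnifiedMemory()); // true
              }
          }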
      • clone

        public GPUOptions.Experimental.Builder clone()
        Specified by:
        clone in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        clone in interface org.nd4j.shade.protobuf.MessageLite.Builder
        Overrides:
        clone in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
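        Example: a minimal sketch of clone(), which copies the builder's current state into an independent builder (org.tensorflow.framework packaging assumed).

          import org.tensorflow.framework.GPUOptions;

          public class CloneBuilderExample {
              public static void main(String[] args) {
                  GPUOptions.Experimental.Builder original = GPUOptions.Experimental.newBuilder()
                          .setUseUnifiedMemory(true);

                  // clone() copies the current state; the two builders diverge independently afterwards.
                  GPUOptions.Experimental.Builder copy = original.clone();
                  copy.setUseUnifiedMemory(false);

                  System.out.println(original.getUseUnifiedMemory()); // true
                  System.out.println(copy.getUseUnifiedMemory());     // false
              }
          }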
      • clearField

        public GPUOptions.Experimental.Builder clearField​(org.nd4j.shade.protobuf.Descriptors.FieldDescriptor field)
        Specified by:
        clearField in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        clearField in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • clearOneof

        public GPUOptions.Experimental.Builder clearOneof​(org.nd4j.shade.protobuf.Descriptors.OneofDescriptor oneof)
        Specified by:
        clearOneof in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        clearOneof in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • setRepeatedField

        public GPUOptions.Experimental.Builder setRepeatedField​(org.nd4j.shade.protobuf.Descriptors.FieldDescriptor field,
                                                                int index,
                                                                Object value)
        Specified by:
        setRepeatedField in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        setRepeatedField in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • addRepeatedField

        public GPUOptions.Experimental.Builder addRepeatedField​(org.nd4j.shade.protobuf.Descriptors.FieldDescriptor field,
                                                                Object value)
        Specified by:
        addRepeatedField in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        addRepeatedField in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface org.nd4j.shade.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • mergeFrom

        public GPUOptions.Experimental.Builder mergeFrom​(org.nd4j.shade.protobuf.CodedInputStream input,
                                                         org.nd4j.shade.protobuf.ExtensionRegistryLite extensionRegistry)
                                                  throws IOException
        Specified by:
        mergeFrom in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        mergeFrom in interface org.nd4j.shade.protobuf.MessageLite.Builder
        Overrides:
        mergeFrom in class org.nd4j.shade.protobuf.AbstractMessage.Builder<GPUOptions.Experimental.Builder>
        Throws:
        IOException
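        Example: a minimal sketch of merging wire-format bytes into a builder via the shaded CodedInputStream (org.tensorflow.framework packaging assumed).

          import org.nd4j.shade.protobuf.CodedInputStream;
          import org.nd4j.shade.protobuf.ExtensionRegistryLite;
          import org.tensorflow.framework.GPUOptions;

          import java.io.IOException;

          public class MergeFromExample {
              public static void main(String[] args) throws IOException {
                  // Serialize a message to its wire format first.
                  byte[] bytes = GPUOptions.Experimental.newBuilder()
                          .setUseUnifiedMemory(true)
                          .build()
                          .toByteArray();

                  // Merge the serialized bytes into a fresh builder.
                  GPUOptions.Experimental.Builder builder = GPUOptions.Experimental.newBuilder();
                  builder.mergeFrom(CodedInputStream.newInstance(bytes),
                                    ExtensionRegistryLite.getEmptyRegistry());

                  System.out.println(builder.getUseUnifiedMemory()); // true
              }
          }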
      • getVirtualDevicesList

        public List<GPUOptions.Experimental.VirtualDevices> getVirtualDevicesList()
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
        Specified by:
        getVirtualDevicesList in interface GPUOptions.ExperimentalOrBuilder
      • getVirtualDevicesCount

        public int getVirtualDevicesCount()
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
        Specified by:
        getVirtualDevicesCount in interface GPUOptions.ExperimentalOrBuilder
      • getVirtualDevices

        public GPUOptions.Experimental.VirtualDevices getVirtualDevices​(int index)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
        Specified by:
        getVirtualDevices in interface GPUOptions.ExperimentalOrBuilder
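        Example: a minimal sketch of reading back the configured virtual devices through the count/index/list accessors (org.tensorflow.framework packaging assumed).

          import org.tensorflow.framework.GPUOptions;

          public class InspectVirtualDevices {
              static void dump(GPUOptions.Experimental.Builder builder) {
                  // getVirtualDevicesCount()/getVirtualDevices(i) index the repeated field;
                  // getVirtualDevicesList() returns a read-only view of the same elements.
                  for (int i = 0; i < builder.getVirtualDevicesCount(); i++) {
                      GPUOptions.Experimental.VirtualDevices vd = builder.getVirtualDevices(i);
                      System.out.println("virtual_devices[" + i + "]: " + vd);
                  }
                  System.out.println("total: " + builder.getVirtualDevicesList().size());
              }
          }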
      • setVirtualDevices

        public GPUOptions.Experimental.Builder setVirtualDevices​(int index,
                                                                 GPUOptions.Experimental.VirtualDevices value)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • setVirtualDevices

        public GPUOptions.Experimental.Builder setVirtualDevices​(int index,
                                                                 GPUOptions.Experimental.VirtualDevices.Builder builderForValue)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • addVirtualDevices

        public GPUOptions.Experimental.Builder addVirtualDevices​(GPUOptions.Experimental.VirtualDevices value)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
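        Example: a sketch of the visible_device_list = "1,0" scenario from the field comment above. It assumes the VirtualDevices message exposes addMemoryLimitMb (memory limits in MB, as in TensorFlow's config.proto) and that the enclosing GPUOptions builder exposes setVisibleDeviceList and setExperimental per standard protobuf codegen.

          import org.tensorflow.framework.GPUOptions;
          import org.tensorflow.framework.GPUOptions.Experimental.VirtualDevices;

          public class VirtualDeviceConfig {
              public static void main(String[] args) {
                  // Two virtual devices carved out of visible GPU 1 (1 GB and 2 GB), plus one
                  // virtual device taking all of visible GPU 0's memory.
                  GPUOptions.Experimental experimental = GPUOptions.Experimental.newBuilder()
                          .addVirtualDevices(VirtualDevices.newBuilder()
                                  .addMemoryLimitMb(1024)   // /device:GPU:0 -> visible GPU 1, 1 GB
                                  .addMemoryLimitMb(2048)   // /device:GPU:1 -> visible GPU 1, 2 GB
                                  .build())
                          .addVirtualDevices(VirtualDevices.getDefaultInstance()) // /device:GPU:2 -> visible GPU 0, all memory
                          .build();

                  // visible_device_list is a field of the enclosing GPUOptions message (assumed setters).
                  GPUOptions gpuOptions = GPUOptions.newBuilder()
                          .setVisibleDeviceList("1,0")
                          .setExperimental(experimental)
                          .build();

                  System.out.println(gpuOptions);
              }
          }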
      • addVirtualDevices

        public GPUOptions.Experimental.Builder addVirtualDevices​(int index,
                                                                 GPUOptions.Experimental.VirtualDevices value)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • addVirtualDevices

        public GPUOptions.Experimental.Builder addVirtualDevices​(GPUOptions.Experimental.VirtualDevices.Builder builderForValue)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • addVirtualDevices

        public GPUOptions.Experimental.Builder addVirtualDevices​(int index,
                                                                 GPUOptions.Experimental.VirtualDevices.Builder builderForValue)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • addAllVirtualDevices

        public GPUOptions.Experimental.Builder addAllVirtualDevices​(Iterable<? extends GPUOptions.Experimental.VirtualDevices> values)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
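        Example: a sketch of addAllVirtualDevices with a prebuilt list, one entry per visible GPU (addMemoryLimitMb is an assumed accessor on VirtualDevices, as noted above).

          import java.util.Arrays;
          import java.util.List;

          import org.tensorflow.framework.GPUOptions;
          import org.tensorflow.framework.GPUOptions.Experimental.VirtualDevices;

          public class AddAllVirtualDevicesExample {
              public static void main(String[] args) {
                  // Split each of two visible GPUs into two 4 GB virtual devices.
                  List<VirtualDevices> perGpu = Arrays.asList(
                          VirtualDevices.newBuilder().addMemoryLimitMb(4096).addMemoryLimitMb(4096).build(),
                          VirtualDevices.newBuilder().addMemoryLimitMb(4096).addMemoryLimitMb(4096).build());

                  GPUOptions.Experimental experimental = GPUOptions.Experimental.newBuilder()
                          .addAllVirtualDevices(perGpu)
                          .build();

                  System.out.println(experimental.getVirtualDevicesCount()); // 2
              }
          }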
      • clearVirtualDevices

        public GPUOptions.Experimental.Builder clearVirtualDevices()
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • removeVirtualDevices

        public GPUOptions.Experimental.Builder removeVirtualDevices​(int index)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • getVirtualDevicesBuilder

        public GPUOptions.Experimental.VirtualDevices.Builder getVirtualDevicesBuilder​(int index)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
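        Example: a sketch of editing an existing element in place through getVirtualDevicesBuilder(int); the returned nested builder is backed by the parent builder, so changes are reflected without a separate setVirtualDevices call (addMemoryLimitMb assumed as above).

          import org.tensorflow.framework.GPUOptions;
          import org.tensorflow.framework.GPUOptions.Experimental.VirtualDevices;

          public class EditVirtualDeviceInPlace {
              public static void main(String[] args) {
                  GPUOptions.Experimental.Builder builder = GPUOptions.Experimental.newBuilder()
                          .addVirtualDevices(VirtualDevices.getDefaultInstance());

                  // Mutate element 0 through its live nested builder.
                  builder.getVirtualDevicesBuilder(0)
                         .addMemoryLimitMb(2048);

                  System.out.println(builder.getVirtualDevices(0));
              }
          }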
      • getVirtualDevicesOrBuilder

        public GPUOptions.Experimental.VirtualDevicesOrBuilder getVirtualDevicesOrBuilder​(int index)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
        Specified by:
        getVirtualDevicesOrBuilder in interface GPUOptions.ExperimentalOrBuilder
      • getVirtualDevicesOrBuilderList

        public List<? extends GPUOptions.Experimental.VirtualDevicesOrBuilder> getVirtualDevicesOrBuilderList()
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
        Specified by:
        getVirtualDevicesOrBuilderList in interface GPUOptions.ExperimentalOrBuilder
      • addVirtualDevicesBuilder

        public GPUOptions.Experimental.VirtualDevices.Builder addVirtualDevicesBuilder()
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
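        Example: a sketch of the append-and-configure pattern: addVirtualDevicesBuilder() appends an empty element and returns its builder so the new entry can be filled in directly (addMemoryLimitMb assumed as above).

          import org.tensorflow.framework.GPUOptions;

          public class AppendViaNestedBuilder {
              public static void main(String[] args) {
                  GPUOptions.Experimental.Builder builder = GPUOptions.Experimental.newBuilder();

                  // Each call appends one VirtualDevices element.
                  builder.addVirtualDevicesBuilder().addMemoryLimitMb(1024);
                  builder.addVirtualDevicesBuilder().addMemoryLimitMb(2048);

                  System.out.println(builder.getVirtualDevicesCount()); // 2
              }
          }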
      • addVirtualDevicesBuilder

        public GPUOptions.Experimental.VirtualDevices.Builder addVirtualDevicesBuilder​(int index)
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • getVirtualDevicesBuilderList

        public List<GPUOptions.Experimental.VirtualDevices.Builder> getVirtualDevicesBuilderList()
         The multi virtual device settings. If empty (not set), it will create
          a single virtual device on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
             different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • getUseUnifiedMemory

        public boolean getUseUnifiedMemory()
         If true, uses CUDA unified memory for memory allocations. If
         per_process_gpu_memory_fraction option is greater than 1.0, then unified
         memory is used regardless of the value for this field. See comments for
         per_process_gpu_memory_fraction field for more details and requirements
         of the unified memory. This option is useful to oversubscribe memory if
         multiple processes are sharing a single GPU while individually using less
         than 1.0 per process memory fraction.
         
        bool use_unified_memory = 2;
        Specified by:
        getUseUnifiedMemory in interface GPUOptions.ExperimentalOrBuilder
        Returns:
        The useUnifiedMemory.
      • setUseUnifiedMemory

        public GPUOptions.Experimental.Builder setUseUnifiedMemory​(boolean value)
         If true, uses CUDA unified memory for memory allocations. If
         per_process_gpu_memory_fraction option is greater than 1.0, then unified
         memory is used regardless of the value for this field. See comments for
         per_process_gpu_memory_fraction field for more details and requirements
         of the unified memory. This option is useful to oversubscribe memory if
         multiple processes are sharing a single GPU while individually using less
         than 1.0 per process memory fraction.
         
        bool use_unified_memory = 2;
        Parameters:
        value - The useUnifiedMemory to set.
        Returns:
        This builder for chaining.
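        Example: a minimal sketch of enabling CUDA unified memory for GPU allocations (org.tensorflow.framework packaging assumed).

          import org.tensorflow.framework.GPUOptions;

          public class UnifiedMemoryExample {
              public static void main(String[] args) {
                  // Useful when several processes share one GPU and each stays below a
                  // 1.0 per-process memory fraction.
                  GPUOptions.Experimental experimental = GPUOptions.Experimental.newBuilder()
                          .setUseUnifiedMemory(true)
                          .build();

                  System.out.println(experimental.getUseUnifiedMemory()); // true
              }
          }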
      • clearUseUnifiedMemory

        public GPUOptions.Experimental.Builder clearUseUnifiedMemory()
         If true, uses CUDA unified memory for memory allocations. If
         per_process_gpu_memory_fraction option is greater than 1.0, then unified
         memory is used regardless of the value for this field. See comments for
         per_process_gpu_memory_fraction field for more details and requirements
         of the unified memory. This option is useful to oversubscribe memory if
         multiple processes are sharing a single GPU while individually using less
         than 1.0 per process memory fraction.
         
        bool use_unified_memory = 2;
        Returns:
        This builder for chaining.
      • setUnknownFields

        public final GPUOptions.Experimental.Builder setUnknownFields​(org.nd4j.shade.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        setUnknownFields in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        setUnknownFields in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
      • mergeUnknownFields

        public final GPUOptions.Experimental.Builder mergeUnknownFields​(org.nd4j.shade.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        mergeUnknownFields in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        mergeUnknownFields in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<GPUOptions.Experimental.Builder>
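        Example: a sketch of attaching the built Experimental message to its enclosing GPUOptions and a ConfigProto; setExperimental, setGpuOptions, and ConfigProto follow standard protobuf codegen for the enclosing messages and are assumptions here, not methods documented on this page.

          import org.tensorflow.framework.ConfigProto;
          import org.tensorflow.framework.GPUOptions;

          public class AttachExperimentalOptions {
              public static void main(String[] args) {
                  GPUOptions.Experimental experimental = GPUOptions.Experimental.newBuilder()
                          .setUseUnifiedMemory(true)
                          .build();

                  // Assumed enclosing-message setters per standard protobuf codegen.
                  ConfigProto config = ConfigProto.newBuilder()
                          .setGpuOptions(GPUOptions.newBuilder()
                                  .setExperimental(experimental))
                          .build();

                  System.out.println(config);
              }
          }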