Interface GPUOptions.ExperimentalOrBuilder

  • All Superinterfaces:
    org.nd4j.shade.protobuf.MessageLiteOrBuilder, org.nd4j.shade.protobuf.MessageOrBuilder
  • All Known Implementing Classes:
    GPUOptions.Experimental, GPUOptions.Experimental.Builder
  • Enclosing class:
    GPUOptions

    public static interface GPUOptions.ExperimentalOrBuilder
    extends org.nd4j.shade.protobuf.MessageOrBuilder
    • Method Detail

      • getVirtualDevicesList

        List<GPUOptions.Experimental.VirtualDevices> getVirtualDevicesList()
         The multi virtual device settings. If empty (not set), a single virtual
         device will be created on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string-represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
            different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
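
         A minimal sketch of building a message that mirrors the example above and
         reading it back through this accessor. It assumes the nd4j-bundled generated
         classes (typically under org.tensorflow.framework) and the standard
         protobuf-generated builder methods for the VirtualDevices.memory_limit_mb
         field (addMemoryLimitMb, getMemoryLimitMbList); verify the names against
         your build before relying on them.

           import java.util.List;
           import org.tensorflow.framework.GPUOptions;

           public class VirtualDevicesListSketch {
             public static void main(String[] args) {
               // Two virtual devices on visible GPU 1 (1 GB and 2 GB) and one
               // virtual device on visible GPU 0, as in the documented example.
               GPUOptions gpuOptions = GPUOptions.newBuilder()
                   .setVisibleDeviceList("1,0")
                   .setExperimental(GPUOptions.Experimental.newBuilder()
                       .addVirtualDevices(GPUOptions.Experimental.VirtualDevices.newBuilder()
                           .addMemoryLimitMb(1024)     // -> /device:GPU:0
                           .addMemoryLimitMb(2048))    // -> /device:GPU:1
                       .addVirtualDevices(GPUOptions.Experimental.VirtualDevices
                           .newBuilder()))             // -> /device:GPU:2, all memory
                   .build();

               // getVirtualDevicesList() exposes the repeated field as a List.
               List<GPUOptions.Experimental.VirtualDevices> devices =
                   gpuOptions.getExperimental().getVirtualDevicesList();
               for (GPUOptions.Experimental.VirtualDevices vd : devices) {
                 System.out.println("memory_limit_mb: " + vd.getMemoryLimitMbList());
               }
             }
           }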
      • getVirtualDevices

        GPUOptions.Experimental.VirtualDevices getVirtualDevices​(int index)
         The multi virtual device settings. If empty (not set), a single virtual
         device will be created on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string-represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
            different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • getVirtualDevicesCount

        int getVirtualDevicesCount()
         The multi virtual device settings. If empty (not set), a single virtual
         device will be created on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string-represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
            different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
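
         For index-based access, getVirtualDevicesCount() and getVirtualDevices(int)
         can be combined into a plain loop over the repeated field. The snippet below
         is a sketch under the same assumptions as above (nd4j-bundled classes,
         standard generated accessors such as getMemoryLimitMbCount):

           GPUOptions.Experimental experimental = GPUOptions.Experimental.newBuilder()
               .addVirtualDevices(GPUOptions.Experimental.VirtualDevices.newBuilder()
                   .addMemoryLimitMb(1024).addMemoryLimitMb(2048))
               .addVirtualDevices(GPUOptions.Experimental.VirtualDevices.newBuilder())
               .build();

           int n = experimental.getVirtualDevicesCount();   // 2 in this sketch
           for (int i = 0; i < n; i++) {
             GPUOptions.Experimental.VirtualDevices vd = experimental.getVirtualDevices(i);
             System.out.println("virtual_devices[" + i + "] has "
                 + vd.getMemoryLimitMbCount() + " memory_limit_mb entries");
           }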
      • getVirtualDevicesOrBuilderList

        List<? extends GPUOptions.Experimental.VirtualDevicesOrBuilder> getVirtualDevicesOrBuilderList()
         The multi virtual device settings. If empty (not set), a single virtual
         device will be created on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string-represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
            different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
      • getVirtualDevicesOrBuilder

        GPUOptions.Experimental.VirtualDevicesOrBuilder getVirtualDevicesOrBuilder​(int index)
         The multi virtual device settings. If empty (not set), a single virtual
         device will be created on each visible GPU, according to the settings
         in "visible_device_list" above. Otherwise, the number of elements in the
         list must be the same as the number of visible GPUs (after
         "visible_device_list" filtering if it is set), and the string-represented
         device names (e.g. /device:GPU:<id>) will refer to the virtual
         devices and have the <id> field assigned sequentially starting from 0,
         according to the order they appear in this list and the "memory_limit"
         list inside each element. For example,
           visible_device_list = "1,0"
           virtual_devices { memory_limit: 1GB memory_limit: 2GB }
           virtual_devices {}
         will create three virtual devices as:
           /device:GPU:0 -> visible GPU 1 with 1GB memory
           /device:GPU:1 -> visible GPU 1 with 2GB memory
           /device:GPU:2 -> visible GPU 0 with all available memory
         NOTE:
         1. It's invalid to set both this and "per_process_gpu_memory_fraction"
            at the same time.
         2. Currently this setting is per-process, not per-session. Using
            different settings in different sessions within the same process will
            result in undefined behavior.
         
        repeated .tensorflow.GPUOptions.Experimental.VirtualDevices virtual_devices = 1;
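
         The ...OrBuilder accessors let read-only code accept either a built
         GPUOptions.Experimental or a GPUOptions.Experimental.Builder, since both
         implement this interface, without requiring an intermediate build(). A
         sketch, under the same assumptions as the earlier snippets:

           GPUOptions.Experimental.Builder builder = GPUOptions.Experimental.newBuilder()
               .addVirtualDevices(GPUOptions.Experimental.VirtualDevices.newBuilder()
                   .addMemoryLimitMb(1024));

           // Works for both the Builder and the immutable message.
           GPUOptions.ExperimentalOrBuilder view = builder;
           for (GPUOptions.Experimental.VirtualDevicesOrBuilder vd
               : view.getVirtualDevicesOrBuilderList()) {
             System.out.println(vd.getMemoryLimitMbList());
           }

           // Indexed variant of the same read-only access.
           GPUOptions.Experimental.VirtualDevicesOrBuilder first =
               view.getVirtualDevicesOrBuilder(0);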
      • getUseUnifiedMemory

        boolean getUseUnifiedMemory()
         If true, uses CUDA unified memory for memory allocations. If the
         per_process_gpu_memory_fraction option is greater than 1.0, then unified
         memory is used regardless of the value of this field. See the comments on
         the per_process_gpu_memory_fraction field for more details on and
         requirements of unified memory. This option is useful for oversubscribing
         memory when multiple processes share a single GPU while each uses less
         than a 1.0 per-process memory fraction.
         
        bool use_unified_memory = 2;
        Returns:
        The useUnifiedMemory.
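
         A sketch of setting and reading this flag through the enclosing GPUOptions
         message, assuming the standard protobuf-generated setters
         (setUseUnifiedMemory, setPerProcessGpuMemoryFraction):

           GPUOptions gpuOptions = GPUOptions.newBuilder()
               // Each process takes at most half of the GPU; unified memory is
               // requested explicitly rather than via a fraction greater than 1.0.
               .setPerProcessGpuMemoryFraction(0.5)
               .setExperimental(GPUOptions.Experimental.newBuilder()
                   .setUseUnifiedMemory(true))
               .build();

           boolean unified = gpuOptions.getExperimental().getUseUnifiedMemory();  // true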