Package onnx

Interface OnnxMl.TrainingInfoProtoOrBuilder

  • All Superinterfaces:
    org.nd4j.shade.protobuf.MessageLiteOrBuilder, org.nd4j.shade.protobuf.MessageOrBuilder
  • All Known Implementing Classes:
    OnnxMl.TrainingInfoProto, OnnxMl.TrainingInfoProto.Builder
  • Enclosing class:
    OnnxMl

    public static interface OnnxMl.TrainingInfoProtoOrBuilder
    extends org.nd4j.shade.protobuf.MessageOrBuilder
    • Method Detail

      • hasInitialization

        boolean hasInitialization()
         This field describes a graph to compute the initial tensors
         upon starting the training process. The initialization graph has no
         inputs and can have multiple outputs. Usually, trainable tensors in
         neural networks are randomly initialized. To achieve that, for each
         tensor, the user can put a random number operator such as RandomNormal
         or RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; one use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
        Returns:
        Whether the initialization field is set.
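The interplay of "initialization" outputs and "initialization_binding" can be sketched in plain Python (this is a conceptual model, not the generated protobuf API; the tensor names "W", "iter_count", "random_out", and "zero_out" are illustrative only):

```python
import random

def run_initialization(initializers, init_graph_outputs, initialization_binding):
    """Assign each initialization-graph output to the initializer it is bound to.

    initialization_binding maps an initializer name (the binding key) to the
    name of an output of the initialization graph (the binding value).
    """
    for init_name, output_name in initialization_binding.items():
        initializers[init_name] = init_graph_outputs[output_name]
    return initializers

# Initializers as they would appear in ModelProto.graph.initializer or
# TrainingInfoProto.algorithm.initializer.
initializers = {"W": [0.0, 0.0], "iter_count": 7}

# Outputs produced by evaluating the initialization graph, e.g. a
# RandomUniform node and a constant node that resets the iteration counter.
outputs = {
    "random_out": [random.uniform(-1, 1), random.uniform(-1, 1)],
    "zero_out": 0,
}

binding = {"W": "random_out", "iter_count": "zero_out"}
run_initialization(initializers, outputs, binding)
# initializers["W"] now holds the random values; initializers["iter_count"] is 0
```

An empty binding map leaves every initializer untouched, which matches the default behavior described above.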
      • getInitialization

        OnnxMl.GraphProto getInitialization()
         This field describes a graph to compute the initial tensors
         upon starting the training process. The initialization graph has no
         inputs and can have multiple outputs. Usually, trainable tensors in
         neural networks are randomly initialized. To achieve that, for each
         tensor, the user can put a random number operator such as RandomNormal
         or RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; one use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
        Returns:
        The initialization.
      • getInitializationOrBuilder

        OnnxMl.GraphProtoOrBuilder getInitializationOrBuilder()
         This field describes a graph to compute the initial tensors
         upon starting the training process. The initialization graph has no
         inputs and can have multiple outputs. Usually, trainable tensors in
         neural networks are randomly initialized. To achieve that, for each
         tensor, the user can put a random number operator such as RandomNormal
         or RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; one use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
      • hasAlgorithm

        boolean hasAlgorithm()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or the inference
         graph's initializer lists. In general, this field contains loss nodes,
         gradient nodes, optimizer nodes, and an increment of the iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way around).
         Also, inference nodes cannot reference inputs of "algorithm". With these
         restrictions, the inference graph can always be run independently
         without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
        Returns:
        Whether the algorithm field is set.
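The list-concatenation semantics above can be sketched in plain Python (a conceptual model, not ONNX itself; the node tuples and the tiny evaluator are illustrative, and the "Add" node here adds a constant 1.0 to keep the example one-input):

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# Inference graph (ModelProto.graph):
#   tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
inference_nodes = [
    ("MatMul", ["tensor_a", "tensor_b"], "tensor_c"),
    ("Sigmoid", ["tensor_c"], "tensor_d"),
]
# "algorithm" graph: tensor_d -> Add -> tensor_e
algorithm_nodes = [
    ("Add", ["tensor_d"], "tensor_e"),
]

# Combination is plain concatenation, inference nodes first, so "algorithm"
# nodes may consume inference outputs but not the other way around.
combined = inference_nodes + algorithm_nodes

def evaluate(nodes, values):
    """Run each (op, inputs, output) node in order over a name->value dict."""
    ops = {
        "MatMul": lambda a, b: a * b,   # 1x1 "matrices" for brevity
        "Sigmoid": sigmoid,
        "Add": lambda d: d + 1.0,
    }
    for op, inputs, output in nodes:
        values[output] = ops[op](*(values[i] for i in inputs))
    return values

vals = evaluate(combined, {"tensor_a": 2.0, "tensor_b": 3.0})
# tensor_c is 6.0, tensor_d is sigmoid(6.0), tensor_e is tensor_d + 1.0
```

Running only `inference_nodes` with the same inputs succeeds on its own, which mirrors the guarantee that the inference graph stays independently runnable.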
      • getAlgorithm

        OnnxMl.GraphProto getAlgorithm()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or the inference
         graph's initializer lists. In general, this field contains loss nodes,
         gradient nodes, optimizer nodes, and an increment of the iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way around).
         Also, inference nodes cannot reference inputs of "algorithm". With these
         restrictions, the inference graph can always be run independently
         without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
        Returns:
        The algorithm.
      • getAlgorithmOrBuilder

        OnnxMl.GraphProtoOrBuilder getAlgorithmOrBuilder()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or the inference
         graph's initializer lists. In general, this field contains loss nodes,
         gradient nodes, optimizer nodes, and an increment of the iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way around).
         Also, inference nodes cannot reference inputs of "algorithm". With these
         restrictions, the inference graph can always be run independently
         without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
      • getInitializationBindingList

        List<OnnxMl.StringStringEntryProto> getInitializationBindingList()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBinding

        OnnxMl.StringStringEntryProto getInitializationBinding​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBindingCount

        int getInitializationBindingCount()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBindingOrBuilderList

        List<? extends OnnxMl.StringStringEntryProtoOrBuilder> getInitializationBindingOrBuilderList()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBindingOrBuilder

        OnnxMl.StringStringEntryProtoOrBuilder getInitializationBindingOrBuilder​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getUpdateBindingList

        List<OnnxMl.StringStringEntryProto> getUpdateBindingList()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so that two
             variables may not have the same name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
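The two-phase step described above (compute "y = x - r * g" side-effect-free, then let the binding perform the assignment "x = y") can be sketched in plain Python; the runner, the fixed gradient, and the counter are illustrative, not part of the generated API:

```python
def run_training_step(initializers, algorithm, update_binding):
    """Evaluate the algorithm graph, then apply bindings as assignments.

    update_binding maps an initializer name (key, e.g. "x") to the name of an
    algorithm output (value, e.g. "y") that should be copied back into it.
    """
    outputs = algorithm(initializers)          # pure computation, no mutation
    for init_name, output_name in update_binding.items():
        initializers[init_name] = outputs[output_name]
    return initializers

def algorithm(initializers):
    # Stand-in for the "algorithm" graph: a learning rate and a fixed
    # "gradient" replace a real loss/gradient computation.
    r, g = 0.1, 4.0
    return {
        "y": initializers["x"] - r * g,              # y = x - r * g
        "next_iter": initializers["iter_count"] + 1,  # iteration counter bump
    }

state = {"x": 1.0, "iter_count": 0}
binding = {"x": "y", "iter_count": "next_iter"}
run_training_step(state, algorithm, binding)
# state["x"] is now 0.6 (1.0 - 0.1 * 4.0); state["iter_count"] is now 1
```

Keeping the assignment out of the graph itself is what lets the combined training graph remain a plain dataflow graph that satisfies the normal ONNX conditions.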
      • getUpdateBinding

        OnnxMl.StringStringEntryProto getUpdateBinding​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so that two
             variables may not have the same name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • getUpdateBindingCount

        int getUpdateBindingCount()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so that two
             variables may not have the same name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • getUpdateBindingOrBuilderList

        List<? extends OnnxMl.StringStringEntryProtoOrBuilder> getUpdateBindingOrBuilderList()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so that two
             variables may not have the same name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • getUpdateBindingOrBuilder

        OnnxMl.StringStringEntryProtoOrBuilder getUpdateBindingOrBuilder​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so that two
             variables may not have the same name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;