Package org.deeplearning4j.parallelism
Class ParallelInference
- java.lang.Object
-
- org.deeplearning4j.parallelism.ParallelInference
-
- Direct Known Subclasses:
InplaceParallelInference
public class ParallelInference extends Object
-
-
Nested Class Summary
Nested Classes Modifier and Type Class Description static class
ParallelInference.Builder
protected static class
ParallelInference.ObservablesProvider
-
Field Summary
Fields Modifier and Type Field Description protected int
batchLimit
static int
DEFAULT_BATCH_LIMIT
static InferenceMode
DEFAULT_INFERENCE_MODE
static int
DEFAULT_NUM_WORKERS
static int
DEFAULT_QUEUE_LIMIT
protected InferenceMode
inferenceMode
protected LoadBalanceMode
loadBalanceMode
protected org.deeplearning4j.nn.api.Model
model
protected long
nanos
protected int
queueLimit
protected int
workers
-
Constructor Summary
Constructors Modifier Constructor Description protected
ParallelInference()
-
Method Summary
All Methods Instance Methods Concrete Methods Modifier and Type Method Description protected org.deeplearning4j.nn.api.Model[]
getCurrentModelsFromWorkers()
This method returns the Models currently used by the workers. PLEASE NOTE: This method is NOT thread safe, and should NOT be used anywhere but in tests.
protected long
getWorkerCounter(int workerIdx)
protected void
init()
org.nd4j.linalg.api.ndarray.INDArray
output(double[] input)
org.nd4j.linalg.api.ndarray.INDArray
output(float[] input)
<T> T
output(@NonNull org.deeplearning4j.nn.api.ModelAdapter<T> adapter, org.nd4j.linalg.api.ndarray.INDArray... inputs)
This method performs a forward pass and returns the output provided by the OutputAdapter
<T> T
output(@NonNull org.deeplearning4j.nn.api.ModelAdapter<T> adapter, org.nd4j.linalg.api.ndarray.INDArray[] input, org.nd4j.linalg.api.ndarray.INDArray[] inputMasks)
This method performs a forward pass and returns the output provided by the OutputAdapter
org.nd4j.linalg.api.ndarray.INDArray
output(org.nd4j.linalg.api.ndarray.INDArray input)
org.nd4j.linalg.api.ndarray.INDArray[]
output(org.nd4j.linalg.api.ndarray.INDArray... input)
Generate predictions/output from the network
org.nd4j.linalg.api.ndarray.INDArray[]
output(org.nd4j.linalg.api.ndarray.INDArray[] input, org.nd4j.linalg.api.ndarray.INDArray[] inputMasks)
Generate predictions/outputs from the network, optionally using input masks for predictions
org.nd4j.linalg.api.ndarray.INDArray
output(org.nd4j.linalg.api.ndarray.INDArray input, org.nd4j.linalg.api.ndarray.INDArray inputMask)
org.nd4j.linalg.api.ndarray.INDArray
output(org.nd4j.linalg.dataset.DataSet dataSet)
void
shutdown()
This method gracefully shuts down this ParallelInference instance
void
updateModel(@NonNull org.deeplearning4j.nn.api.Model model)
This method allows updating the Model used for inference at runtime, without resetting the queue
-
-
-
Field Detail
-
model
protected org.deeplearning4j.nn.api.Model model
-
nanos
protected long nanos
-
workers
protected int workers
-
batchLimit
protected int batchLimit
-
inferenceMode
protected InferenceMode inferenceMode
-
queueLimit
protected int queueLimit
-
loadBalanceMode
protected LoadBalanceMode loadBalanceMode
-
DEFAULT_NUM_WORKERS
public static final int DEFAULT_NUM_WORKERS
-
DEFAULT_BATCH_LIMIT
public static final int DEFAULT_BATCH_LIMIT
- See Also:
- Constant Field Values
-
DEFAULT_INFERENCE_MODE
public static final InferenceMode DEFAULT_INFERENCE_MODE
-
DEFAULT_QUEUE_LIMIT
public static final int DEFAULT_QUEUE_LIMIT
- See Also:
- Constant Field Values
-
-
Method Detail
-
updateModel
public void updateModel(@NonNull org.deeplearning4j.nn.api.Model model)
This method allows updating the Model used for inference at runtime, without resetting the queue
- Parameters:
model
-
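As a sketch of how a runtime model swap might look (assuming deeplearning4j is on the classpath; `pi` and `freshModel` are placeholder names, not part of the API):

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.parallelism.ParallelInference;

public class HotSwapExample {
    // Hypothetical sketch: swap the serving model without losing queued requests.
    public static void swap(ParallelInference pi, Model freshModel) {
        // Pending requests in the queue are preserved; workers pick up
        // the new model for subsequent inference calls.
        pi.updateModel(freshModel);
    }
}
```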
-
getCurrentModelsFromWorkers
protected org.deeplearning4j.nn.api.Model[] getCurrentModelsFromWorkers()
This method returns the Models currently used by the workers. PLEASE NOTE: This method is NOT thread safe, and should NOT be used anywhere but in tests.
- Returns:
-
init
protected void init()
-
getWorkerCounter
protected long getWorkerCounter(int workerIdx)
-
shutdown
public void shutdown()
This method gracefully shuts down this ParallelInference instance
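A minimal lifecycle sketch, assuming deeplearning4j is on the classpath (`pi` is a placeholder for an already-built instance):

```java
import org.deeplearning4j.parallelism.ParallelInference;

public class ShutdownExample {
    // Hypothetical sketch: ensure worker threads are released even if
    // request handling throws.
    public static void serveAndStop(ParallelInference pi) {
        try {
            // ... issue pi.output(...) calls from request-handling threads ...
        } finally {
            // Gracefully stop the worker threads once no more requests are expected.
            pi.shutdown();
        }
    }
}
```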
-
output
public org.nd4j.linalg.api.ndarray.INDArray output(double[] input)
- Parameters:
input
-
- Returns:
-
output
public org.nd4j.linalg.api.ndarray.INDArray output(float[] input)
- Parameters:
input
-
- Returns:
-
output
public org.nd4j.linalg.api.ndarray.INDArray output(org.nd4j.linalg.api.ndarray.INDArray input)
-
output
public org.nd4j.linalg.api.ndarray.INDArray output(org.nd4j.linalg.api.ndarray.INDArray input, org.nd4j.linalg.api.ndarray.INDArray inputMask)
-
output
public org.nd4j.linalg.api.ndarray.INDArray output(org.nd4j.linalg.dataset.DataSet dataSet)
- Parameters:
dataSet
-
- Returns:
-
output
public org.nd4j.linalg.api.ndarray.INDArray[] output(org.nd4j.linalg.api.ndarray.INDArray... input)
Generate predictions/output from the network
- Parameters:
input
- Input to the network
- Returns:
- Output from the network
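For context, a minimal end-to-end sketch, assuming deeplearning4j is on the classpath. The builder options mirror the fields listed above (`inferenceMode`, `batchLimit`, `workers`), but the concrete values are illustrative only:

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.parallelism.ParallelInference;
import org.deeplearning4j.parallelism.inference.InferenceMode;
import org.nd4j.linalg.api.ndarray.INDArray;

public class InferenceExample {
    // Hypothetical sketch: wrap a trained model for concurrent serving.
    public static INDArray[] predict(Model model, INDArray features) {
        ParallelInference pi = new ParallelInference.Builder(model)
                .inferenceMode(InferenceMode.BATCHED) // batch concurrent requests together
                .batchLimit(32)                       // max examples per internal batch
                .workers(2)                           // number of model copies serving requests
                .build();
        // output(...) is intended to be called from multiple threads concurrently.
        return pi.output(features);
    }
}
```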
-
output
public org.nd4j.linalg.api.ndarray.INDArray[] output(org.nd4j.linalg.api.ndarray.INDArray[] input, org.nd4j.linalg.api.ndarray.INDArray[] inputMasks)
Generate predictions/outputs from the network, optionally using input masks for predictions
- Parameters:
input
- Input to the network
inputMasks
- Input masks for the network. May be null.
- Returns:
- Output from the network
-
output
public <T> T output(@NonNull org.deeplearning4j.nn.api.ModelAdapter<T> adapter, org.nd4j.linalg.api.ndarray.INDArray... inputs)
This method performs a forward pass and returns the output provided by the OutputAdapter
- Parameters:
adapter
-
inputs
-
- Returns:
-
output
public <T> T output(@NonNull org.deeplearning4j.nn.api.ModelAdapter<T> adapter, org.nd4j.linalg.api.ndarray.INDArray[] input, org.nd4j.linalg.api.ndarray.INDArray[] inputMasks)
This method performs a forward pass and returns the output provided by the OutputAdapter
- Type Parameters:
T
-
- Parameters:
adapter
-
input
-
inputMasks
-
- Returns:
-
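A sketch of the adapter-based overloads, assuming deeplearning4j is on the classpath. `argMaxAdapter` is a placeholder for some user-supplied `ModelAdapter<Integer>` implementation; no such class ships with the library:

```java
import org.deeplearning4j.nn.api.ModelAdapter;
import org.deeplearning4j.parallelism.ParallelInference;
import org.nd4j.linalg.api.ndarray.INDArray;

public class AdapterExample {
    // Hypothetical sketch: convert the raw network output to a domain type
    // (here, a class index) inside the worker via a caller-provided adapter.
    public static int classify(ParallelInference pi,
                               ModelAdapter<Integer> argMaxAdapter,
                               INDArray features) {
        // The adapter is applied within the forward pass, so the caller
        // receives the converted result rather than raw INDArrays.
        return pi.output(argMaxAdapter, features);
    }
}
```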
-