Returns a Layer that accepts another layer's output as the input of this layer.
import com.thoughtworks.deeplearning.DifferentiableAny._

def composeNetwork(anotherLayer: INDArray @Symbolic)(implicit thisLayer: INDArray @Symbolic) = {
  thisLayer.compose(anotherLayer)
}
Returns a Layer that accepts input and only runs the forward pass.
If you want to test the accuracy of your network without running the backward pass, use predict.
import com.thoughtworks.deeplearning.DifferentiableAny._

def composeNetwork(implicit input: INDArray @Symbolic) = ???

val predictor = composeNetwork
predictor.predict(testData)
Returns a Layer that accepts input and runs both the forward pass and the backward pass.
If you want to train your network, the backward pass must run, so use train.
import com.thoughtworks.deeplearning.DifferentiableAny._

def composeNetwork(implicit input: INDArray @Symbolic) = ???

val yourNetwork = composeNetwork
yourNetwork.train(testData)
In DeepLearning.scala, operations do not run immediately. The network is first built out of placeholders; only when the whole network runs does real data flow through it. So if you want to inspect a variable's intermediate state, use withOutputDataHook.
import com.thoughtworks.deeplearning.DifferentiableAny._

(aLayer: From[INDArray]##`@`).withOutputDataHook { data =>
  println(data)
}
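The build-first, run-later model described above can be illustrated with plain Scala, independent of DeepLearning.scala. The following is only an analogy, not the library's API: every name in it (Expr, Times, Plus, withHook) is hypothetical. An expression tree is assembled first without computing anything; data only flows when the whole tree is run, which is the moment a hook can observe an intermediate value.

```scala
// A minimal sketch of deferred execution with an output hook.
// All names here are hypothetical; this is NOT the DeepLearning.scala API.
sealed trait Expr {
  def run(input: Double): Double

  // Wraps this node so its output is observed when the tree actually runs.
  def withHook(hook: Double => Unit): Expr = {
    val self = this
    new Expr {
      def run(input: Double) = {
        val d = self.run(input)
        hook(d)
        d
      }
    }
  }
}
case object Input extends Expr { def run(input: Double) = input }
final case class Times(x: Expr, k: Double) extends Expr {
  def run(input: Double) = x.run(input) * k
}
final case class Plus(x: Expr, k: Double) extends Expr {
  def run(input: Double) = x.run(input) + k
}

object Demo extends App {
  // Build the graph first: nothing is computed at this point.
  val hidden  = Times(Input, 2.0).withHook(d => println(s"intermediate: $d"))
  val network = Plus(hidden, 1.0)
  // Only now does real data flow through the whole graph,
  // triggering the hook on the intermediate node.
  println(network.run(3.0))
}
```

Running Demo with input 3.0 first prints the intermediate value 6.0 from the hook, then the final result 7.0, mirroring how withOutputDataHook observes a value mid-network once the network runs.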