Struct juice::layer::Layer

pub struct Layer<B: IBackend> {
    pub name: String,
    pub config: Box<LayerConfig>,
    pub worker: Box<dyn ILayer<B>>,
    pub weights_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub weights_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blobs_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blob_names: Vec<String>,
    pub output_blobs_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub output_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub blob_names: HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>)>,
    // some fields omitted
}

The generic Layer

Fields

name: String

Identifies the Layer

The name is mainly used for logging purposes.

config: Box<LayerConfig>

The configuration of the Layer

worker: Box<dyn ILayer<B>>

The implementation of the Layer.

This is the part that does most of the work (forward/backward).

weights_data: Vec<ArcLock<SharedTensor<f32>>>

The vector that stores shared references to the weights in the form of blobs.

weights_gradient: Vec<ArcLock<SharedTensor<f32>>>

The vector that stores shared references to the gradients of the weights in the form of blobs. A read-access sketch for these shared blobs follows this field list.

input_blobs_data: Vec<ArcLock<SharedTensor<f32>>>

References to all the input blobs of the layer.

input_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>

References to the gradients of all the input blobs of the layer.

input_blob_names: Vec<String>

Names for all the input blobs of the layer.

output_blobs_data: Vec<ArcLock<SharedTensor<f32>>>

References to all the output blobs of the layer.

output_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>

References to the gradients of all the output blobs of the layer.

blob_names: HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>)>

All the blobs of the layer that can be addressed by name.

Does not contain anonymous blobs.
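
A minimal read-access sketch for these shared blobs, assuming ArcLock<T> is juice's alias for Arc<RwLock<T>>, that layer is an initialized Layer, and that the blob name "conv1" is hypothetical:

// Inspect the weight blobs behind their read-write locks.
for weight in &layer.weights_data {
    let tensor = weight.read().unwrap();
    println!("weight shape: {:?}", tensor.desc());
}
// Look up a named blob; the tuple holds (data, gradient).
if let Some((data, gradient)) = layer.blob_names.get("conv1") {
    let _data = data.read().unwrap();
    let _gradient = gradient.read().unwrap();
}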

Implementations

Connect the layer to other layers and set up tensors for intermediate results and weights.

Connects to the outputs provided by other layers via the registry. Adds output blobs to the layer and then adds them to the registry, so that subsequent layers can connect to them as their inputs. Finally, it initializes the underlying layer implementation.

Called during initialization of container layers.

Initializes the layer for backpropagation.

Go through all the blobs of a layer to determine which blobs contribute to the loss of the next layer. We can skip backward computation for blobs that don’t contribute to the loss. If all of the blobs skip backpropagation, we set a flag to skip backpropagation of the whole layer.
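
A rough sketch of that rule (plain Rust, names hypothetical):

// Backpropagation for the whole layer is skipped only when no blob
// contributes to the loss.
let blob_contributes_to_loss = vec![false, false, true];
let skip_layer_backprop = !blob_contributes_to_loss.iter().any(|&c| c);
assert!(!skip_layer_backprop); // the third blob still needs gradients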

Set backpropagation flags to force this layer to backpropagate.

Is executed during Network initialization if NetworkConfig.force_backward is true. Forcing backpropagation is useful for debugging.

Uses the underlying layer implementation to compute a forward step.

See ILayer.forward

Uses the underlying layer implementation to compute a backward step.

See ILayer.backward
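
A hedged usage sketch of the two steps, assuming forward and backward take and return vectors of shared tensors (input_tensor is assumed to be an ArcLock<SharedTensor<f32>> prepared elsewhere):

// Forward: feed the inputs, receive the outputs.
let outputs = layer.forward(&[input_tensor.clone()]);
// Pretend the gradients w.r.t. the outputs were filled in by a loss
// layer; here the output tensors stand in for them.
let output_gradients = outputs;
// Backward: feed the gradients w.r.t. the outputs, receive the
// gradients w.r.t. the inputs.
let input_gradients = layer.backward(&output_gradients);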

Calculate the gradient w.r.t. input.

This method is mostly used when doing backpropagation.

Calculate the gradient w.r.t. parameters.

“Parameters” here refers to weights and also possibly bias, depending on the layer.

This method is mostly used when doing backpropagation.
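
These two halves are what a full backward step combines. A hedged sketch, assuming the methods are named backward_input and backward_parameters as in the juice sources, and reusing output_gradients from the sketch above:

// First half: gradients w.r.t. the inputs, needed by earlier layers.
let input_gradients = layer.backward_input(&output_gradients);
// Second half: gradients w.r.t. the layer's own parameters.
layer.backward_parameters();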

Synchronize the layer's backend.

Updates the weights with the weight update computed by the Solver.

Updating the weights is the last step of computing a Solver minibatch. The update value is computed in the previous steps according to the learning rate policy.

Clears the weight gradients and zero-initializes them.

The gradients for the weights accumulate over the backpropagation steps of a Solver minibatch and are cleared between each minibatch to start over with a clean slate.
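
A hedged sketch of the minibatch boundary, assuming update_weights takes a reference to a backend that implements the solver operations:

// End of a minibatch: apply the computed update, then reset the
// accumulated gradients before the next minibatch starts.
layer.update_weights(solver_backend.as_ref()); // `solver_backend: Rc<_>` is assumed
layer.clear_weights_gradients();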

Serialize the Layer and its weights to a Cap’n Proto file at the specified path.

You can find the capnp schema in the juice repository.

use std::rc::Rc;
use juice::layer::*;
use juice::layers::*;
use juice::util;

let mut net_cfg = SequentialConfig::default();
// ... set up network ...
let cfg = LayerConfig::new("network", net_cfg);

let native_backend = Rc::new(util::native_backend());
let mut layer = Layer::from_config(native_backend, &cfg);
// ... do stuff with the layer ...
// ... and save it
layer.save("mynetwork").unwrap();

Read a Cap’n Proto file at the specified path and deserialize the Layer inside it.

You can find the capnp schema in the juice repository.

use std::rc::Rc;
use coaster::prelude::*;
use juice::layer::Layer;
use juice::util;

let native_backend = Rc::new(util::native_backend());
// Load layer from file "mynetwork"
let layer = Layer::<Backend<Native>>::load(native_backend, "mynetwork").unwrap();

Sets whether the layer should compute gradients w.r.t. a weight at a particular index given by weight_id.

See weight_propagate_down.

Returns true when the layer is using in-place computation.

For a layer to use in-place computation it needs to support it via compute_in_place, and the names of the first input and the first output tensor have to match.
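
In other words, a minimal sketch of that rule (all variable names hypothetical, not juice API):

let supports_compute_in_place = true;        // capability of the worker (assumed)
let input_names = vec!["data".to_string()];  // hypothetical blob names
let output_names = vec!["data".to_string()];
let in_place = supports_compute_in_place
    && input_names.first() == output_names.first();
assert!(in_place);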

Returns the names of all the input blobs.

Returns the loss weight associated with the weight blob with id weight_id.

Returns all the learnable weights in the layer.

If the layer is a container layer it will return all the weights of the layers inside it.

Returns the gradients for all the learnable weights in the layer.

If the layer is a container layer it will return all the gradients of the layers inside it.

Returns the names of all the learnable weights in the layer.

If the layer is a container layer it will return the names of all the weights of the layers inside it.

Returns the learning rate for all the learnable weights in the layer.

If the layer is a container layer it will return all learning rates of the layers inside it.
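
A hedged sketch pairing these accessors, assuming the method names match this page and that per-weight learning rates may be optional:

// Walk the learnable weights together with their names and learning rates.
let names = layer.learnable_weights_names();
let weights = layer.learnable_weights_data();
let rates = layer.learnable_weights_lr();
for ((name, weight), lr) in names.iter().zip(weights.iter()).zip(rates.iter()) {
    let tensor = weight.read().unwrap();
    println!("{}: shape {:?}, lr {:?}", name, tensor.desc(), lr);
}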

Creates a new Layer from a LayerConfig.
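
For example (a sketch, assuming a Linear layer config as provided by juice's layers module; the name and output size are hypothetical):

use std::rc::Rc;
use juice::layer::*;
use juice::layers::*;
use juice::util;

// Build a fully connected layer with 10 outputs.
let cfg = LayerConfig::new("fc1", LinearConfig { output_size: 10 });
let backend = Rc::new(util::native_backend());
let layer = Layer::from_config(backend, &cfg);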

Trait Implementations

Formats the value using the given formatter.
