# Layer Types

Currently, a basic `Merger` layer and a `Splitter` layer are implemented. In addition, there are several convenience layers for the preimplemented models.
## Mergers

`FeedbackNets.Mergers.Merger` — Type.

`Merger{F,O}`

An element in a `FeedbackChain` in which the forward stream and the feedback stream are combined according to an operation `op`.
**Fields**

- `splitname::String`: name of the `Splitter` node from which the feedback is taken
- `fb::F`: feedback branch
- `op::O`: operation to combine the forward and feedback branches
**Details**

`fb` typically takes the form of a Flux operation or chain. When a `FeedbackChain` encounters a `Merger`, it looks up the state `s` of the `Splitter` given by `splitname` from the previous timestep, applies `fb` to it, and combines the result with the forward input `x` according to `op(x, fb(s))`.
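As a minimal sketch of the semantics described above (assuming the constructor takes the fields in the order `splitname`, `fb`, `op`; the layer sizes are illustrative):

```julia
using Flux
using FeedbackNets

# Feedback taken from the Splitter named "fork1" is passed through a
# Dense layer and added (+) to the forward input.
m = Merger("fork1", Dense(5, 10), +)

inputname(m)  # returns "fork1", the name of the Splitter feeding m
```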
`FeedbackNets.Mergers.inputname` — Method.

`inputname(m::Merger)`

Return the name of the `Splitter` from which `m` gets its input.
## Splitters

`FeedbackNets.Splitters.Splitter` — Type.

`Splitter`

An element in a `FeedbackChain` that marks the locations where a feedback branch forks off from the forward branch.
**Fields**

- `name::String`: unique name used to identify the fork for the backward pass
**Details**

In the forward stream, a `Splitter` is essentially an identity operation. It only alerts the `FeedbackChain` to add the current array to the chain's state and mark it with the `Splitter`'s name, so that `Merger`s can access it for feedback in the next timestep.
`FeedbackNets.Splitters.splitname` — Method.

`splitname(s::Splitter)`

Return the name of `s`.
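Putting the two layer types together, a feedback network might be sketched as follows (a hypothetical example; the names and layer sizes are illustrative, and the exact way a `FeedbackChain` is applied over timesteps follows the package's recurrence API):

```julia
using Flux
using FeedbackNets

# The Merger appears *before* the Splitter it reads from: at each
# timestep it receives the state recorded at "fork1" in the previous
# timestep, maps it back up with Dense(5, 10), and adds it to the
# forward input of size 10.
net = FeedbackChain(
    Merger("fork1", Dense(5, 10), +),  # op(x, fb(s)) = x .+ Dense(5, 10)(s)
    Dense(10, 5, relu),
    Splitter("fork1"),                 # records the size-5 hidden state
    Dense(5, 2),
)
```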
## Other layers

The preimplemented models use a flattening layer and local response normalization.

`flatten(x)`

Turns a high-dimensional array (e.g., a batch of feature maps) into a 2-d array, linearizing all except the last (batch) dimension.
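The behaviour described above can be sketched in plain Julia (the name `flatten_sketch` is illustrative, not the package's function):

```julia
# Collapse all dimensions except the last (batch) dimension into one.
flatten_sketch(x) = reshape(x, :, size(x, ndims(x)))

x = rand(Float32, 4, 4, 8, 16)   # e.g. a batch of 16 feature maps
size(flatten_sketch(x))          # (128, 16)
```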
`FeedbackNets.ModelFactory.LRNs` — Module.

Implementation of local response normalization.

`LRN{T,I}`

Local response normalization layer. Input `i` is processed according to

`out(x, y, f, b) = i(x, y, f, b) * [c + α * sum( i(x, y, f-k÷2:f+k÷2, b)^2 )]^(-β)`

Todo: β is currently ignored (always set to 0.5)
`FeedbackNets.ModelFactory.LRNs.LRN` — Method.

`(l::LRN)(i)`

Applies a local response normalization layer according to:

`out(x, y, f, b) = i(x, y, f, b) * [c + α * sum( i(x, y, f-k÷2:f+k÷2, b)^2 )]^(-β)`

Todo: β is currently ignored (always set to 0.5)
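A minimal plain-Julia sketch of the formula above (the function name and default parameter values are illustrative, not the package's; β is fixed to 0.5 here, matching the Todo note):

```julia
# i is assumed to be a 4-d array indexed (x, y, feature, batch).
function lrn_sketch(i::Array{T,4}; k=5, α=1f-4, c=1f0) where T
    out = similar(i)
    nf = size(i, 3)                       # number of feature maps
    for b in axes(i, 4), f in axes(i, 3), y in axes(i, 2), x in axes(i, 1)
        # sum of squares over the window of k neighbouring feature maps,
        # clamped to the valid feature range
        lo, hi = max(1, f - k ÷ 2), min(nf, f + k ÷ 2)
        s = sum(abs2, @view i[x, y, lo:hi, b])
        out[x, y, f, b] = i[x, y, f, b] * (c + α * s)^(-0.5f0)
    end
    return out
end
```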