When are two layers NOT fully connected?
In the videos it’s mentioned that if you have some domain knowledge about a problem, you might choose not to fully connect two layers, in order to represent that structure. I haven’t seen a lot of DNNs, but in the ones I have seen the layers are designed (i.e. before pruning) to be fully connected.
So that left me wondering: what’s an example of a case where you wouldn’t do that? And if we are putting domain knowledge into the design of the connections between two layers, is that layer really “hidden” anymore?
One example would be a convolutional layer. This has a very specific pattern of connections that expresses the operation of convolution between the activations output by the previous layer and a “kernel” (whose reuse at every position is expressed by weight sharing).
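To make the connectivity difference concrete, here is a minimal sketch (assuming PyTorch, which is not mentioned in the original post) comparing the parameter counts of a fully connected layer and a 1-D convolutional layer over the same input size:

```python
import torch
import torch.nn as nn

# A fully connected layer joins every input unit to every output unit:
# for a length-100 input and length-98 output, that is 100 * 98 weights.
fc = nn.Linear(100, 98)
print(sum(p.numel() for p in fc.parameters()))   # 9898 (weights + biases)

# A 1-D convolution with kernel size 3 connects each output unit to only
# 3 neighbouring inputs, and the same 3 weights are reused (shared) at
# every position along the input.
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3)
print(sum(p.numel() for p in conv.parameters())) # 4 (3 weights + 1 bias)

x = torch.randn(1, 1, 100)       # (batch, channels, time)
print(conv(x).shape)             # torch.Size([1, 1, 98])
```

Both layers map 100 inputs to 98 outputs, but the convolutional layer does so with 4 parameters instead of 9898, precisely because most of the possible connections have been removed and the remaining ones share weights.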
We might use a convolutional layer when we wish to apply the same operation to all parts of some representation (potentially of varying size). They are very commonly used in image processing, but have their uses in speech processing too. For example, we might use them to create a learnable feature extractor for waveform-input ASR.
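As a sketch of that last idea, here is one hypothetical way such a learnable front end could look: a small stack of strided 1-D convolutions turning a raw waveform into a sequence of feature vectors. The layer sizes, kernel widths, and strides are illustrative assumptions, not taken from any particular system:

```python
import torch
import torch.nn as nn

class WaveformFrontEnd(nn.Module):
    """Hypothetical learnable feature extractor for waveform-input ASR."""

    def __init__(self, n_features: int = 80):
        super().__init__()
        # Strided convolutions progressively downsample the waveform,
        # so each output frame summarises a window of samples.
        self.layers = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(64, n_features, kernel_size=4, stride=2), nn.ReLU(),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> features: (batch, frames, n_features)
        return self.layers(waveform.unsqueeze(1)).transpose(1, 2)

frontend = WaveformFrontEnd()
wav = torch.randn(2, 16000)      # two one-second clips at 16 kHz
print(frontend(wav).shape)       # torch.Size([2, 398, 80])
```

Because the same kernels slide over the whole waveform, this front end works for inputs of varying length, and its filters are learned jointly with the rest of the recogniser rather than fixed in advance.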