Forums › Automatic speech recognition › Hidden Markov Models (HMMs) › Pruning
This topic has 3 replies, 3 voices, and was last updated 4 years, 8 months ago by Simon.
December 10, 2019 at 21:29 #10548
Regarding DTW, it is mentioned that pruning will not introduce any errors. But when talking about the Viterbi algorithm, it is said that pruning may cause a slight increase in error rates. Could you please explain how these two differ from each other? Thanks!
December 10, 2019 at 22:18 #10549
I noticed that too.
It seems to be related to how the pruning strategy is designed and implemented, as shown in this PrunedDTW paper.
December 10, 2019 at 23:32 #10550
It just occurred to me: is it because the ‘particular’ type of pruning we talked about for DTW is actually the same as the Viterbi criterion (i.e. throw away the bad ones when they meet)?
And for the Viterbi algorithm, the real pruning is implemented by beam search, so we pretty much ‘throw away bad ones before they even meet’.
December 11, 2019 at 09:54 #10553
You need to carefully distinguish two very different ways in which we save computation in both DTW and the Viterbi algorithm for HMMs.
Dynamic Programming: this algorithm efficiently evaluates all possible paths (= state sequences for HMMs). All paths are evaluated, and none are disregarded. This algorithm is exact and introduces no errors compared to a naive exhaustive search of the paths one at a time.
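The exact Dynamic Programming recurrence for DTW can be sketched in a few lines. This is a minimal illustration, not a full implementation: the absolute-difference local cost and the function name are my own choices, and the three predecessors correspond to the usual insertion/deletion/match moves on the grid.

```python
import math

def dtw_exact(x, y):
    """Exact DTW via dynamic programming: every grid point is visited,
    so the globally best alignment path is always found."""
    n, m = len(x), len(y)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            # Where paths meet at (i, j), keep only the locally best one:
            # extending a worse path could never overtake the best.
            D[i][j] = cost + min(D[i - 1][j],        # insertion
                                 D[i][j - 1],        # deletion
                                 D[i - 1][j - 1])    # match
    return D[n][m]

# A perfect warp exists here, so the exact distance is 0.
dist = dtw_exact([1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0])
```

Note that the `min(...)` step is exactly the "throw away all but the locally best path" of Dynamic Programming: no error is introduced, because the discarded paths have already been fully evaluated up to that grid point.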
Pruning: this involves not exploring some paths (state sequences) at all. In DTW, this means that we will not visit every single point in the grid. In the Viterbi algorithm for HMMs implemented as token passing, it means that not all states will have tokens at all time steps. Pruning introduces errors whenever an unexplored part of the grid would have been on the globally most likely path.
In Dynamic Programming, we talk about “throwing away” all but the locally best path when two or more paths meet. The paths that are “thrown away” have already been evaluated up to that point. Extending those paths further would involve exactly the same computations as extending the best path. So we are effectively still evaluating all paths. We save computation without introducing any error: that’s the magic of Dynamic Programming.
This is not the same as pruning, in which we stop exploring some of the locally best paths, because there is another path (into another point on the DTW grid, or arriving at a different state in the HMM) that is much better.
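To make the contrast concrete, here is a token-passing sketch with beam pruning bolted on. Everything here is invented for illustration (the function name, the toy two-state HMM, the beam width); real decoders are far more elaborate, but the core idea is the same: tokens scoring more than the beam width below the current best token are never extended.

```python
import math

def viterbi_beam(obs, states, log_trans, log_emit, log_init, beam=10.0):
    """Token-passing Viterbi with beam pruning (a sketch, not a real
    ASR decoder). With a very wide beam it is exact; a narrow beam
    saves work but may discard the globally best path."""
    # tokens: state -> log probability of the best surviving path ending there
    tokens = {s: log_init[s] + log_emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        new_tokens = {}
        best = max(tokens.values())
        for s, score in tokens.items():
            if score < best - beam:
                continue                      # pruned: never extended at all
            for s2 in states:
                cand = score + log_trans[s][s2] + log_emit[s2][o]
                # Viterbi criterion: where paths meet in s2, keep the best
                if cand > new_tokens.get(s2, -math.inf):
                    new_tokens[s2] = cand
        tokens = new_tokens
    return max(tokens.values())

# Toy two-state HMM, purely for illustration.
states = ["A", "B"]
log_init = {"A": math.log(0.6), "B": math.log(0.4)}
log_trans = {"A": {"A": math.log(0.7), "B": math.log(0.3)},
             "B": {"A": math.log(0.4), "B": math.log(0.6)}}
log_emit = {"A": {0: math.log(0.9), 1: math.log(0.1)},
            "B": {0: math.log(0.2), 1: math.log(0.8)}}

# With a wide beam nothing is pruned, so this equals the exact Viterbi score.
score = viterbi_beam([0, 0, 1], states, log_trans, log_emit, log_init, beam=100.0)
```

The `if score < best - beam` test is the pruning; the `if cand > new_tokens.get(...)` test is Dynamic Programming. Only the first can introduce errors.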