Pruning

Even with the approximation to P(O|W) made by the Viterbi algorithm, recognition might be too slow. So, we must speed things up further.

Now, we can make this thing go arbitrarily fast. So if someone tells you that their speech recogniser is really fast, that in itself is not impressive. What's impressive is if it's still accurate when it's fast.
Think about what's happening in token passing here: there are only two tokens alive. What happens at the next time step? This one clones itself around here, and here. This one goes on here, and here. This one goes around here, and tokens eventually arrive here. If there's a big model and a long enough observation sequence, there might be tens or hundreds or thousands of tokens alive in this big network of hidden Markov models, and some of them are going to be really unlikely.
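To make that concrete, here is a minimal sketch of the cloning step. The state names, network layout, and token representation are invented for illustration; they are not from the video.

```python
import math

# Hypothetical network: state -> list of (next_state, log transition prob).
network = {
    "s1": [("s1", math.log(0.6)), ("s2", math.log(0.4))],
    "s2": [("s2", math.log(0.7)), ("s3", math.log(0.3))],
    "s3": [("s3", math.log(1.0))],
}

def propagate(tokens, log_emission):
    """Clone every live token along every outgoing transition.

    tokens: dict mapping state -> list of token log-probabilities.
    log_emission: function state -> log observation probability for
    the current frame.  With no merging and no pruning, the number
    of live tokens grows at every frame.
    """
    new_tokens = {}
    for state, log_probs in tokens.items():
        for lp in log_probs:
            for next_state, log_a in network[state]:
                new_lp = lp + log_a + log_emission(next_state)
                new_tokens.setdefault(next_state, []).append(new_lp)
    return new_tokens

# One token starts in the first state; watch the count grow.
tokens = {"s1": [0.0]}
for t in range(8):
    tokens = propagate(tokens, lambda s: math.log(0.1))  # dummy emissions
    print("frame", t, "->", sum(len(v) for v in tokens.values()), "tokens alive")
```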
Imagine we've got a really long observation sequence, hundreds of frames, and one of the tokens just went here: round and round and round, trying to emit everything from this one state, on and on and on. Its probability is getting pretty low. It's pretty unlikely, and then it suddenly makes a dash for the end and wins. That's like saying that almost all of these frames sound like the beginning of the word: we're saying the word "one", but spending nearly the whole time at the start and then rushing through the end very quickly. That's really unlikely.
And so we can make things go fast by doing additional discarding of tokens.
There are two distinct things happening here. When two tokens meet, we can eliminate all but the winner, and that's exact. It's not an approximation: there's absolutely no point keeping the second best, because it will never win. That step is error-free, and we will always do it.
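In code, that exact step is just a max over the tokens that have arrived at each state. Continuing the hypothetical sketch above:

```python
def merge_winners(tokens):
    """The Viterbi step: wherever several tokens meet in the same
    state, keep only the one with the highest log-probability.
    This is exact, not an approximation: a token that is behind the
    winner at this state and time can never overtake it later,
    because both would accumulate the same future probabilities."""
    return {state: [max(log_probs)] for state, log_probs in tokens.items()}
```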
We could go further. Even when we get a winner, imagine this token competed with all the tokens arriving there and won, but it still looks really unlikely by some measure: we could just eliminate it outright.
Okay, and that's what we call pruning. Think of gardening, of plants: pruning means cutting branches off. It's like all these paths going through here. When we look at some of them, we say: okay, locally you're the best token ever to have arrived here, but it looks pretty unlikely that you're ever going to win, so you get pruned. By pruning those unlikely paths, we can reduce the number of tokens and therefore reduce the amount of computation, so everything goes faster. We also need less memory, because fewer tokens have to live in memory at once.
So, note that Viterbi is not really pruning. Think of pruning as an additional thing on top of Viterbi: it involves discarding tokens that, even though they were locally the best, are not as good as the tokens somewhere else in the network at that time.
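One common way to implement this (a concrete choice for illustration; the course materials may use a different criterion) is beam pruning: compare every token against the best token anywhere in the network at the current frame, and discard the ones that fall more than a fixed beam width behind.

```python
def beam_prune(tokens, beam_width):
    """Beam pruning: discard any token whose log-probability falls
    more than beam_width below the best token anywhere in the
    network at this frame.  Unlike the Viterbi merge, this IS an
    approximation: occasionally we will prune the token that would
    eventually have won, causing a search error."""
    best = max(lp for log_probs in tokens.values() for lp in log_probs)
    pruned = {}
    for state, log_probs in tokens.items():
        kept = [lp for lp in log_probs if lp >= best - beam_width]
        if kept:
            pruned[state] = kept
    return pruned
```

The beam width is the tuning knob here: a wide beam keeps almost every token and is slow but safe, while a narrow beam is fast but prunes more would-be winners.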
Okay, now, something you can explore in the practical: you could do pruning. You can make a system go faster, but eventually you'll make the accuracy worse, because sometimes, accidentally, you'll prune the token that would have won.
