Bayes Theorem + HMMs
I am getting really confused about which part of Bayes theorem relates to which part of the HMM.
I know the emission probabilities come from the Gaussians. However, I am extremely confused about transition probabilities and how they are calculated. Further, I don’t understand what “forward probability” means in this sentence:
“By definition, an HMM computes P(O|W) by adding up the total (forward) probability over all possible state sequences that could have generated O.”
I feel like I am being a total idiot here – just really struggling to map Bayes theorem onto the models.
You are correct: P(O|W) is computed by the HMM. For any one state sequence, the probability is the product of the emission and transition probabilities along that sequence.
If we compute that by summing over all state sequences (which is the correct thing to do, by definition – we marginalise or “sum away” the state sequence) then we call that the “forward probability”.
But, normally during recognition, we approximate that sum by only considering the most likely state sequence.
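To make the mapping onto Bayes theorem explicit, here is one way to write it out. The notation is my own, not necessarily the course’s: a_{ij} are the transition probabilities, b_j(o_t) is the (Gaussian) emission density of state j for observation o_t, and Q = q_1 … q_T is a state sequence.

    P(W \mid O) \propto P(O \mid W)\, P(W)
        (Bayes: acoustic model times language model; the denominator P(O) does not depend on W)

    P(O \mid W) = \sum_{Q} \prod_{t=1}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t)
        (forward probability: sum over every possible state sequence Q)

    P(O \mid W) \approx \max_{Q} \prod_{t=1}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t)
        (Viterbi approximation: keep only the most likely state sequence)

So the emission and transition probabilities live entirely inside the P(O|W) term of Bayes theorem; P(W) comes from the language model.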
Then, is it correct to understand that there are two ways to compute the likelihood P(O|W): the forward algorithm and the Viterbi algorithm?
And that we don’t use the forward algorithm at the moment?
The forward algorithm computes P(O|W) correctly by summing all terms. The Viterbi algorithm computes an approximation of P(O|W) by finding only the largest term (= most likely path) in the sum.
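If it helps, here is a minimal sketch contrasting the two. It assumes a made-up 2-state HMM with discrete emissions (the course models use Gaussians, but a lookup table keeps the example short); all the numbers are invented for illustration. It just shows that the forward value is the sum over all paths, while Viterbi keeps only the largest single term in that sum.

    import numpy as np

    # Hypothetical model parameters (made-up numbers)
    A = np.array([[0.7, 0.3],      # transition probabilities a[i, j] = P(next state j | state i)
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],      # emission probabilities b[i, k] = P(observation k | state i)
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])      # initial state distribution

    obs = [0, 1, 0]                # an example observation sequence O

    # Forward algorithm: sum over all state sequences -> exact P(O | model)
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    forward_prob = alpha.sum()

    # Viterbi: keep only the single most likely state sequence -> approximation
    delta = pi * B[:, obs[0]]
    for o in obs[1:]:
        delta = (delta[:, None] * A).max(axis=0) * B[:, o]
    viterbi_prob = delta.max()

    print(f"forward  P(O|model) = {forward_prob:.6f}")   # exact: sum of all path probabilities
    print(f"viterbi  best path  = {viterbi_prob:.6f}")   # largest single term in that sum

The Viterbi value is always less than or equal to the forward value, which is why we call it an approximation of the likelihood.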