Transition Matrix in training a model
In initialising a model, the Gaussian parameters are estimated iteratively using the Viterbi algorithm, but what about the transition matrix? Does it get updated? In Jurafsky (page 362) it is mentioned that it has a flat start, but it is never mentioned whether this is updated in Viterbi and Baum-Welch training.
Yes, the transition matrix is also updated. You can verify this for yourself by inspecting it in the prototype models, the intermediate models after HInit, and the final models after HRest.
Conceptually, training the transition probabilities is straightforward: we count how often each transition is used, and then normalise the counts to probabilities (so that the probabilities across all transitions leaving any given state sum to 1). This counting is very easy for Viterbi training: we literally count how often each transition was used in the single best alignment for each training example, and sum across all training examples. For Baum-Welch it is conceptually the same, but we use "soft counting", summing fractional occupancy across all possible alignments and all training examples.
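The Viterbi-style hard counting described above can be sketched in a few lines. This is a minimal illustration, not HTK's actual implementation; the function name and the representation of an alignment as a list of per-frame state indices are my own assumptions.

```python
import numpy as np

def update_transitions_viterbi(num_states, alignments):
    """Re-estimate a transition matrix by hard counting.

    Each training example contributes its single best state alignment
    (a list of state indices, one per frame). Baum-Welch would instead
    accumulate fractional "soft counts" from the forward-backward
    posteriors, but the normalisation step is identical.
    """
    counts = np.zeros((num_states, num_states))
    for path in alignments:
        # Count each consecutive state pair as one use of that transition.
        for s, t in zip(path[:-1], path[1:]):
            counts[s, t] += 1.0
    # Normalise each row so the probabilities leaving a state sum to 1;
    # rows with no observed transitions are left as all zeros.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Two toy alignments through a 3-state left-to-right model:
A = update_transitions_viterbi(3, [[0, 0, 1, 2], [0, 1, 1, 2]])
# Row 0: one self-loop and two 0->1 transitions were counted,
# so A[0] is [1/3, 2/3, 0].
```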
To see for yourself how much the transition matrix contributes to the model, you could even do an experiment (optional!), such as manually editing the transition matrices of the final models to reset them to the values from the prototype model, while leaving the Gaussians untouched.