Joining HMMs together
In connected word recognition, I noticed that when combining individual HMMs into a larger model, we keep the dummy start and end states of each individual model. As far as I can remember, we created those dummy states in the first place so that we could enter and exit the individual models with probability 1. Does the new super-HMM still need all those nested dummy states (since we will have already entered the large model)?
Thanks
The non-emitting (also known as ‘dummy’) start and end states are there to avoid having to separately specify two sets of parameters: the probability of starting in each state, and the probability of ending in each state. Using the non-emitting states allows us to write those parameters on the transitions out of the starting non-emitting state, and into the ending non-emitting state. Some textbooks do not use the non-emitting states, and so the parameters of the model have to also include these starting and ending state distributions. That’s messy, and easily avoided by doing things ‘the HTK way’.
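As a concrete illustration (a minimal sketch with made-up numbers, not HTK notation or code from the course), here is how the separate starting and ending distributions of the textbook parameterisation fold into a single transition matrix once the two non-emitting states are added:

```python
import numpy as np

# "Textbook" parameterisation of a 3-emitting-state model: three separate parameter sets.
pi  = np.array([1.0, 0.0, 0.0])            # probability of starting in each emitting state
A   = np.array([[0.6, 0.4, 0.0],           # transitions among the emitting states
                [0.0, 0.7, 0.3],
                [0.0, 0.0, 0.8]])
eta = np.array([0.0, 0.0, 0.2])            # probability of ending from each emitting state

# With non-emitting start (state 0) and end (state 4) states, the same numbers
# live in one matrix: row 0 is what pi was, column 4 is what eta was.
T = np.zeros((5, 5))
T[0, 1:4] = pi
T[1:4, 1:4] = A
T[1:4, 4] = eta

# Every state we can leave still has outgoing probabilities summing to 1.
assert np.allclose(T[:4].sum(axis=1), 1.0)
print(T)
```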
In a left-to-right model, there will typically be only one transition from the starting non-emitting state: it goes to the first emitting state, with probability 1. But we could have additional transitions if we wished: this would allow the model to emit the first observation from one of the other states.
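Continuing the sketch above with the same made-up numbers, that left-to-right structure is entirely captured by the row of the transition matrix belonging to the non-emitting start state:

```python
# Row of the transition matrix for the non-emitting start state
# (states: 0 = start, 1-3 = emitting, 4 = end).
start_row_strict = [0.0, 1.0, 0.0, 0.0, 0.0]   # one transition, to state 1, with probability 1
start_row_skip   = [0.0, 0.8, 0.2, 0.0, 0.0]   # extra transition: first observation may come from state 2
```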
All those dummy states are still there in the compiled model (language model + acoustic models) – they are joined up with arcs, and on those arcs are the language model probabilities.
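To illustrate (just a toy sketch under my own assumptions: two word models, a made-up bigram language model, and invented helper names, not HTK code), joining word HMMs into the compiled model adds arcs from each word’s non-emitting end state to the other words’ non-emitting start states, with the language model probabilities on those arcs; the nested non-emitting states themselves all survive:

```python
import numpy as np

def word_hmm(n_emitting):
    """A left-to-right word model with non-emitting entry (0) and exit (n_emitting + 1) states."""
    n = n_emitting + 2
    T = np.zeros((n, n))
    T[0, 1] = 1.0                      # enter the first emitting state with probability 1
    for s in range(1, n_emitting + 1):
        T[s, s] = 0.6                  # self-loop
        T[s, s + 1] = 0.4              # move to the next state (or into the exit state)
    return T

# Two word models
hmms = {"one": word_hmm(3), "two": word_hmm(3)}

# Toy bigram probabilities P(next word | current word)
bigram = {("one", "two"): 0.7, ("one", "one"): 0.3,
          ("two", "one"): 0.6, ("two", "two"): 0.4}

# Compiled model as a list of arcs (from_state, to_state, probability).
# States are named (word, index), so every word model's non-emitting
# states 0 and n-1 are still present in the compiled graph.
arcs = []
for word, T in hmms.items():
    n = T.shape[0]
    for i in range(n):
        for j in range(n):
            if T[i, j] > 0:
                arcs.append(((word, i), (word, j), T[i, j]))

# Joining arcs: exit state of one word -> entry state of the next,
# carrying the language model probability.
for (w1, w2), p in bigram.items():
    exit_state = (w1, hmms[w1].shape[0] - 1)
    entry_state = (w2, 0)
    arcs.append((exit_state, entry_state, p))

print(len(arcs), "arcs in the compiled model")
```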