- This topic has 1 reply, 2 voices, and was last updated 9 years, 2 months ago.
› Forums › Speech Synthesis › The front end › HMM Algorithm for POS tagging
In POS tagging, we are trying to find the tag sequence [latex]t_1 \ldots t_n[/latex] that maximizes the product of [latex]P(w_i|t_i)[/latex] and [latex]P(t_i|t_{i-1})[/latex] over all positions. I understand that [latex]P(w_i|t_i)[/latex] is the probability of a word given its tag, but what is [latex]P(t_i|t_{i-1})[/latex]? Is it the probability of the current word’s tag given the previous word’s tag, or given the previous word itself?
For example, in J&M, towards the end of Section 5.5.1, for the sentence “Secretariat is expected to race tomorrow”, both P(VB|TO)P(NR|VB)P(race|VB) and P(NN|TO)P(NR|NN)P(race|NN) are calculated to compare the probabilities of “race” as a verb and as a noun. I suppose P(race|VB) and P(race|NN) are the [latex]P(w_i|t_i)[/latex] terms, but which ones are the transition probabilities? And why are both products calculated here?
This model is called a generative model: it generates a word sequence, given a tag sequence. In POS tagging we use it in reverse, to infer the most likely tag sequence that generated the observed word sequence.
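To make the generative story concrete, here is a minimal sketch. All tags, words, and probability values are invented for illustration; the point is only the two-step sampling loop: draw the next tag from the transition model [latex]P(t_i|t_{i-1})[/latex], then draw a word from that tag’s emission model [latex]P(w_i|t_i)[/latex].

```python
import random

# Toy HMM with made-up probabilities (purely illustrative, not from J&M).
transitions = {              # P(t_i | t_{i-1})
    "<s>": {"DT": 0.7, "NN": 0.3},
    "DT":  {"NN": 0.9, "JJ": 0.1},
    "JJ":  {"NN": 1.0},
    "NN":  {"VB": 0.6, "NN": 0.4},
    "VB":  {"DT": 0.5, "NN": 0.5},
}
emissions = {                # P(w_i | t_i)
    "DT": {"the": 0.8, "a": 0.2},
    "JJ": {"fast": 1.0},
    "NN": {"horse": 0.4, "race": 0.6},
    "VB": {"runs": 0.7, "race": 0.3},
}

def generate(n):
    """Generate n words: sample each tag from P(t_i|t_{i-1}),
    then sample a word from that tag's emission distribution."""
    tag, words = "<s>", []
    for _ in range(n):
        tags, tag_probs = zip(*transitions[tag].items())
        tag = random.choices(tags, weights=tag_probs)[0]
        ws, w_probs = zip(*emissions[tag].items())
        words.append(random.choices(ws, weights=w_probs)[0])
    return words

print(generate(5))  # e.g. a plausible-looking 5-word sequence
```

Tagging runs this story backwards: given the words, find the hidden tag sequence that makes the product of these same transition and emission terms largest.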
[latex]P(t_i | t_{i-1})[/latex] is the transition probability of tag [latex]t_i[/latex] following [latex]t_{i-1}[/latex]. It’s a language model that injects prior knowledge about what tag sequences are likely.
[latex]P(w_i|t_i)[/latex] is the emission probability and models how likely that word is, given the tag.
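To answer the “why both?” part of the question: each tag hypothesis for “race” contributes a transition *into* the candidate tag (from TO), the emission of “race”, and a transition *out of* it (to NR, the tag of “tomorrow”), and the two complete products are compared. A sketch of that comparison, using the probability values as I recall them from the J&M example (verify against the text):

```python
# Verb hypothesis: TO -> VB -> NR
p_trans_in_vb  = 0.83      # P(VB | TO): transition into the candidate tag
p_emit_vb      = 0.00012   # P(race | VB): emission probability
p_trans_out_vb = 0.0027    # P(NR | VB): transition to the tag of "tomorrow"

# Noun hypothesis: TO -> NN -> NR
p_trans_in_nn  = 0.00047   # P(NN | TO)
p_emit_nn      = 0.00057   # P(race | NN)
p_trans_out_nn = 0.0012    # P(NR | NN)

p_race_as_verb = p_trans_in_vb * p_emit_vb * p_trans_out_vb
p_race_as_noun = p_trans_in_nn * p_emit_nn * p_trans_out_nn

# The verb reading wins, mainly because P(VB|TO) dwarfs P(NN|TO):
print(p_race_as_verb > p_race_as_noun)
```

So P(race|VB) and P(race|NN) are indeed the emission terms, while P(VB|TO), P(NN|TO), P(NR|VB), and P(NR|NN) are all transition terms; both products are computed because we are scoring two competing tag sequences for the same sentence.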
The speech recognition part of the course will help you understand the concept of generative models.