Forum Replies Created
c) and d) are definitely not true, but I can’t decide between a) and b).
Oops, you are right. Thanks for correcting me! 🙂
Pseudo-pitch marks are definitely not passed on to the waveform generator, since they should be estimated by TD-PSOLA itself. It also doesn’t make sense for the text processor to pass on phone durations, because the duration will be altered during waveform generation whenever TD-PSOLA adjusts the pitch or duplicates pitch periods. The LPCs should also be computed by the waveform generator, as that is where the spectral smoothing is done. I think the correct answer is b).
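To make that concrete, here is a rough sketch of what the waveform generator does with the estimated pitch marks. This is my own toy illustration, not the course’s implementation: the name td_psola and the parameters pitch_scale and time_scale are made up, it assumes the pitch marks have already been estimated, and it ignores unvoiced regions.

```python
import numpy as np

def td_psola(x, marks, pitch_scale=1.0, time_scale=1.0):
    """Toy TD-PSOLA: re-place Hann-windowed pitch periods at new marks.

    x           - 1-D waveform (np.ndarray)
    marks       - estimated pitch-mark sample indices (>= 2, increasing)
    pitch_scale - > 1 raises F0 (synthesis marks are packed closer together)
    time_scale  - > 1 lengthens the output (analysis periods get duplicated)
    """
    marks = np.asarray(marks)
    periods = np.diff(marks)                      # local analysis pitch periods
    y = np.zeros(int(len(x) * time_scale) + 1)

    t = float(marks[0])                           # current synthesis pitch mark
    while t < len(y) - 1:
        # Map the synthesis mark back to the nearest (time-scaled) analysis
        # mark; slowing down (time_scale > 1) duplicates pitch periods.
        i = min(int(np.argmin(np.abs(marks * time_scale - t))),
                len(periods) - 1)
        P = int(periods[i])                       # local pitch period (samples)
        a = max(int(marks[i]) - P, 0)             # clip window at signal start
        seg = x[a: int(marks[i]) + P]
        seg = seg * np.hanning(len(seg))          # two-period analysis window
        start = int(t) - (int(marks[i]) - a)      # align window centre with t
        lo, hi = max(start, 0), min(start + len(seg), len(y))
        y[lo:hi] += seg[lo - start: hi - start]   # overlap-add at the new mark
        t += P / pitch_scale                      # step to next synthesis mark
    return y
```

The point is that both the pitch change (re-spacing the marks) and the duration change (duplicating or skipping periods) happen here, in the waveform generator, which is why passing pitch marks or final durations on from the text processor wouldn’t help.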
Growing a classification tree happens at training time, where the training algorithm decides how to split each node using entropy. Each node represents a question we ask about the data. As you said, we have to decide on a stopping criterion that stops the algorithm from splitting once it has done a good enough job. The first option, “i. when the depth of the tree reaches a maximum”, seems reasonable: it essentially says to stop training after splitting the data on, say, 5 consecutive questions. The other option I think is correct is “iii. when the number of data points in every partition is below a minimum”. A natural choice for such a minimum is 1: say you are at a node deep in the tree after splitting a number of times. If only 1 data point is left in that node, it doesn’t make sense to split any further. So for this question, my answer would be d).
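For concreteness, here is a tiny sketch of greedy tree growing with exactly those two stopping criteria. It’s my own toy code, not anything from the course: grow, max_depth and min_points are invented names, questions is assumed to be a list of (feature index, threshold) pairs, and X and y are NumPy arrays.

```python
import numpy as np
from collections import Counter

def entropy(y):
    """Shannon entropy (bits) of a label array; 0 for an empty node."""
    if len(y) == 0:
        return 0.0
    p = np.array(list(Counter(y).values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def grow(X, y, questions, depth=0, max_depth=5, min_points=2):
    """Greedy growing with the two stopping criteria from the answer:
    (i) the tree reaches max_depth, (iii) a node has < min_points points."""
    if depth >= max_depth or len(y) < min_points or entropy(y) == 0.0:
        return Counter(y).most_common(1)[0][0]      # leaf: majority label

    def gain(q):                                    # entropy reduction of a split
        m = X[:, q[0]] < q[1]
        return entropy(y) - (m.mean() * entropy(y[m])
                             + (~m).mean() * entropy(y[~m]))

    f, thr = max(questions, key=gain)               # best question wins
    m = X[:, f] < thr
    if m.all() or (~m).all():                       # question separates nothing
        return Counter(y).most_common(1)[0][0]
    return {"question": (f, thr),
            "yes": grow(X[m], y[m], questions, depth + 1, max_depth, min_points),
            "no":  grow(X[~m], y[~m], questions, depth + 1, max_depth, min_points)}
```

Criterion i. is the max_depth check and criterion iii. is the min_points check; both sit on the same line of the stopping test at the top of grow.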
The total number of words in the hypothesis is the number of words you predicted. So you are right – the answer should be d).
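To see why, it helps to write the alignment out. The helper below is hypothetical (my own illustration, not from the marking scheme): after a standard Levenshtein alignment of reference and hypothesis, the hypothesis length is exactly matches + substitutions + insertions, i.e. every word you predicted is counted once.

```python
def align_counts(ref, hyp):
    """Levenshtein-align two word lists; return (matches, subs, dels, ins)."""
    R, H = len(ref), len(hyp)
    # dp[i][j] = (cost, matches, subs, dels, ins) for ref[:i] vs hyp[:j]
    dp = [[None] * (H + 1) for _ in range(R + 1)]
    dp[0][0] = (0, 0, 0, 0, 0)
    for i in range(1, R + 1):                      # only deletions down column 0
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2], c[3] + 1, c[4])
    for j in range(1, H + 1):                      # only insertions along row 0
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1], c[2], c[3], c[4] + 1)
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            same = ref[i - 1] == hyp[j - 1]
            dg, up, lf = dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1]
            dp[i][j] = min(
                (dg[0] + (not same), dg[1] + same,
                 dg[2] + (not same), dg[3], dg[4]),            # match / substitute
                (up[0] + 1, up[1], up[2], up[3] + 1, up[4]),   # delete a ref word
                (lf[0] + 1, lf[1], lf[2], lf[3], lf[4] + 1),   # insert a hyp word
            )
    return dp[R][H][1:]

m, s, d, i = align_counts("the cat sat".split(), "the cat quickly sat".split())
assert m + s + i == 4        # matches + subs + insertions = hypothesis length
```

Whatever the denominator in the metric the question asks about, the “total number of words in the hypothesis” is this m + s + i count.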
i., ii. and iii. are certainly not true. As for iv., “globally optimal” is probably meant in the sense that they are not guaranteed to be the best possible solution. Whatever the intended reading, I’d personally go with a) on this one.