Forum Replies Created
Thanks for pointing me in the right direction.
I looked at section 4.2.3 of Clark, Richmond, & King (2007) as suggested in the first thread, but I'm still not sure how to proceed. It does seem that my speaking rate might be faster and more prone to elided/deleted segments than average, but is that really likely to affect the alignment of every utterance? I don't see any other glaring issues with the automatic labeling apart from a handful of cases like this one, where multiple phone labels are squeezed onto a portion of the waveform that looks more like a single articulation.
(Example attached with no visible /v/ in “of”)
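A quick way to check whether speaking rate is actually unusual is to compute phones per second from the alignment label files. The sketch below assumes Festival-style `.lab` files (a `#` header line, then lines of the form `<end_time> 125 <label>`); the paths and silence-label names are assumptions, so adjust for your setup.

```python
# Rough speaking-rate check: phones per second from Festival-style .lab files.
# Assumes each .lab file has a '#' header line followed by lines like:
#   <end_time> 125 <label>
# (file layout and silence labels are assumptions; adjust as needed).
from pathlib import Path

def phones_per_second(lab_path):
    phones = 0
    end_time = 0.0
    for line in Path(lab_path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0][0].isdigit():
            end_time = float(parts[0])
            if parts[2] not in ("pau", "sp", "#"):  # skip silence labels
                phones += 1
    return phones / end_time if end_time > 0 else 0.0

# Example usage (directory name is hypothetical):
# rates = [phones_per_second(p) for p in Path("lab").glob("*.lab")]
# print(sum(rates) / len(rates))
```

Conversational speech tends to land somewhere around 10-15 phones per second, so a mean well above that would support the fast-speaking-rate hypothesis.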
-Noe
Attachments:
Hi Simon/Korin,
I also seem to be having trouble with pitch marking in do_alignment. For context, I'm building a voice with all of the ARCTIC utterances. During this step I get lines like this printed for every utterance:
Bad pitchmarking: r 2.026.
Bad pitchmarking: e 2.076.
…This then seems to lead to problems later on with creating the lpc files…
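If the do_alignment output has been saved to a log, tallying the "Bad pitchmarking" messages by phone can show whether the failures cluster on particular segments or are spread uniformly. The log filename below is hypothetical; the message format matches the lines quoted above.

```python
# Tally "Bad pitchmarking: <label> <time>." messages by label, to see
# whether the failures cluster on particular phones.
import re
from collections import Counter

def tally_bad_pitchmarks(log_text):
    pattern = re.compile(r"Bad pitchmarking:\s+(\S+)\s+([\d.]+)")
    return Counter(m.group(1) for m in pattern.finditer(log_text))

# Example usage (log filename is an assumption):
# with open("do_alignment.log") as f:
#     print(tally_bad_pitchmarks(f.read()).most_common(10))
```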
I've also looked at some of the waveforms and found that in every utterance there are instances of "sp" aligned to clear segments of speech (verified by listening). Example attached. I've also redone the entire pipeline from scratch and the error reproduces, so I'm a bit stuck as to why this is happening.
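Rather than inspecting waveforms one by one, it may help to flag every utterance whose "sp" (short pause) labels cover a suspiciously long stretch, since a long "sp" over real speech usually means the alignment has drifted. The label-file layout here is an assumption (Festival-style: `#` header, then `<end_time> 125 <label>` lines), and the 0.3 s threshold is just a starting point.

```python
# Flag "sp" segments longer than a threshold; long "sp" spans over real
# speech often indicate misalignment. Label-file layout is an assumption
# (Festival-style: '#' header, then "<end_time> 125 <label>" lines).
from pathlib import Path

def long_sp_segments(lab_path, threshold=0.3):
    hits = []
    prev_end = 0.0
    for line in Path(lab_path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0][0].isdigit():
            end = float(parts[0])
            if parts[2] == "sp" and end - prev_end > threshold:
                hits.append((prev_end, end))
            prev_end = end
    return hits

# Example usage (directory name is hypothetical):
# for lab in sorted(Path("lab").glob("*.lab")):
#     if (hits := long_sp_segments(lab)):
#         print(lab.name, hits)
```

Listening to just the flagged regions should make it quicker to confirm whether the "sp" labels really sit on top of speech across the whole corpus.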
Thanks,
-Noe
Attachments: