March 7, 2024 at 16:55 #17581
In reply to: Gibberish: Bad pitch marking or do_alignment?
For warning 8232, search the forums.
March 7, 2024 at 16:12 #17578
In reply to: Gibberish: Bad pitch marking or do_alignment?
It has to do with alignment. I looked at a few of my utts and their .lab files in Wavesurfer, and some labels for words are missing entirely. The time stamps for when I say “Documents” in one utt, for example, have no non-sp/non-sil labels (just ‘sil’; see attached image).
I spoke to two others who had similar issues, and what solved it for them was redoing the whole exercise from scratch (from step 1, downloading ss.zip, through the final ‘run the voice’ step). I just did that and am still having this issue. I checked my utts.data file to make sure it’s formatted the same way as arctic’s. I’ll try redoing it again, but before I do, I think it’d help to know why, or which step in the ‘building the voice’ process before do_alignment, is causing alignment to go badly.
What could cause some words to not be labelled at all in the do_alignment step and in the subsequent break_mlf step that produces the .lab files from alignment/aligned.3.mlf? (This happens for words like “documents”, which isn’t OOV.)
What does this line mean? I see it a lot while running do_alignment:
WARNING [-8232] ExpandWordNet: Pronunciation 1 of sp is ‘tee’ word in HVite
Thanks!
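Not part of the original post, but one way to diagnose this systematically rather than opening utterances one by one in Wavesurfer is to scan the master label file for utterances whose alignment collapsed to silence. This is a minimal sketch assuming the standard HTK MLF layout (a `#!MLF!#` header, per-utterance blocks opened by a quoted `"*/name.lab"` line and closed by `.`, with each label line formatted as `start end label [score]`); the function name and usage are hypothetical.

```python
def find_silent_utts(mlf_path):
    """Return names of utterances whose labels are only 'sil'/'sp'.

    Assumes an HTK-style MLF, e.g. alignment/aligned.3.mlf:
        #!MLF!#
        "*/utt1.lab"
        0 1000000 sil -50.0
        .
    """
    silent = []
    with open(mlf_path) as f:
        f.readline()                 # skip the '#!MLF!#' header
        name, labels = None, []
        for line in f:
            line = line.strip()
            if line.startswith('"'):      # start of a new utterance block
                name, labels = line.strip('"'), []
            elif line == ".":             # end of the current block
                if labels and all(lab in ("sil", "sp") for lab in labels):
                    silent.append(name)
            elif line:
                fields = line.split()
                # label is the 3rd field when times are present,
                # otherwise the line is just the label itself
                labels.append(fields[2] if len(fields) >= 3 else fields[0])
    return silent
```

Running this over aligned.3.mlf would list every utterance like the “Documents” one described above, which can make it easier to spot a pattern (e.g. all failures sharing a recording session or a pitch-marking problem).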
Attachments:
(You must be logged in to view attached files.)
June 29, 2016 at 14:42 #3311
In reply to: Input and Output frames
I’ve trained a model successfully with 8 kHz data, but am now having several issues with 16 kHz data.
I’ve changed the parameters in the config file accordingly and have managed to at least start training, but this time I’m getting the above error on the 2nd epoch, after which training breaks…
2016-06-29 14:26:17,158 DEBUG main.train_DNN: calculating validation loss
2016-06-29 14:26:27,741 INFO main.train_DNN: epoch 1, validation error 628964418174434265008261326597378733722224197484642355226122602583441150074860628769722468455296618923236287798542215123891810860238841699789697149276208814735544520672472477079448467306333648648592746968864784384.000000, train error 106279287171980071212391732921729749563599509286910907520861045585590353898646213523071851776878272693510050177262482329438234528095902286440442106534226060885518322284601006242847044870755156682712675438821376.000000 time spent 180.00
2016-06-29 14:29:08,963 DEBUG main.train_DNN: calculating validation loss
2016-06-29 14:29:17,823 INFO main.train_DNN: epoch 2, validation error nan, train error nan time spent 170.08
2016-06-29 14:29:17,823 INFO main.train_DNN: overall training time: 5.83m validation error 628964418174434265008261326597378733722224197484642355226122602583441150074860628769722468455296618923236287798542215123891810860238841699789697149276208814735544520672472477079448467306333648648592746968864784384.000000
Is this because the error is so large? (I’m only running on a training set of 60, and validation and test sets of 15.)
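Not from the original thread, but an epoch-1 error of ~1e206 followed by NaN usually means the loss overflowed, most often because the input or output features are being read on the wrong scale or with the wrong dimension after the switch from 8 kHz to 16 kHz (e.g. a feature-dimension setting in the config that no longer matches the binary files). A minimal sanity check on the feature files themselves, before training, can catch this; the function name, file layout (raw float32 with a known per-frame dimension), and dimensions used below are assumptions for illustration.

```python
import numpy as np

def check_features(path, dim):
    """Basic sanity check for a raw float32 feature file.

    Verifies the file size is a whole number of frames of width `dim`,
    and reports the value range plus any NaNs. A huge min/max or a
    size that is not a multiple of `dim` is a typical cause of the
    exploding-then-NaN training loss seen above.
    """
    data = np.fromfile(path, dtype=np.float32)
    if data.size % dim != 0:
        raise ValueError(
            f"{path}: {data.size} floats is not a multiple of dim {dim} "
            f"- check the feature dimension in the config")
    frames = data.reshape(-1, dim)
    return {
        "frames": frames.shape[0],
        "min": float(frames.min()),
        "max": float(frames.max()),
        "has_nan": bool(np.isnan(frames).any()),
    }
```

If the files pass this check, the next things to try would be lowering the learning rate or normalising the features to zero mean and unit variance, since a 60-utterance training set on its own should not produce NaN losses.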
Search Results for '8232'
Viewing 3 results - 1 through 3 (of 3 total)