I'm trying to train new models for other languages, and I'm hitting this error at the generation step.
I trained starting from an English fine-tune (FT) I already had, but with a new ASR model for my dataset and the multilingual PL-BERT, along with the associated configurations.
All the outputs look OK up until this step in the inference code:
s_pred = sampler(noise=torch.randn((1, 256)).unsqueeze(1).to(device),
                 embedding=bert_dur,
                 embedding_scale=embedding_scale,
                 features=ref_s,  # reference from the same speaker as the embedding
                 num_steps=diffusion_steps).squeeze(1)
which returns only NaNs with these models. Is there something unworkable about this configuration?
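As a quick sanity check (a sketch, assuming bert_dur and ref_s are the tensors computed earlier in the standard inference script), I can verify whether the inputs to the sampler are already non-finite before blaming the diffusion step itself:

import torch

def report_nonfinite(name, t):
    # Count NaN and Inf entries in a tensor and print a short summary.
    n_nan = torch.isnan(t).sum().item()
    n_inf = torch.isinf(t).sum().item()
    print(f"{name}: shape={tuple(t.shape)} nan={n_nan} inf={n_inf}")

report_nonfinite("bert_dur", bert_dur)
report_nonfinite("ref_s", ref_s)

If both inputs are finite, the NaNs are being produced inside the sampler; if not, the problem is upstream (e.g. in the text encoder or reference style extraction).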