Using an intermediate checkpoint of a SpeechBrain ASR model for inference

My training is taking a long time, and I'd like to verify the model before it finishes.
Is there any way to validate intermediate (temporary) models,
i.e. those created during training as checkpoints after each iteration?

Trying the naïve approach below doesn't work (I'm new to SpeechBrain):
asr_model = EncoderDecoderASR.from_hparams(source="model.ckpt")
Do you have an example of how to test intermediate models?

Hey, have a look at the ASR from scratch tutorial (the part on inference).

Thanks, I'm one step further: I managed to load the temporary model with:
EncoderDecoderASR.from_hparams(source="save/CKPT+2021-05-07+21-38-57+00/", …

Now, my major challenge is how to set up the pretrainer: of course it crashes without one.
I don't know how to do it for seq2seq/train_with_wav2vec.py; the example from the tutorial does not apply.
Any help?

Minor challenges:

  1. In templates/speech_recognition/, emb_size has two different values in the LM and ASR training configs (128 & 256).
    It looks like the same variable name means different things depending on context.
  2. Any chance of a complete example of ASR training & testing with:
  • the Fairseq multilingual xlsr_53_56k.pt?
  • the HuggingFace multilingual facebook/wav2vec2-large-xlsr-53?

Right, I don’t know if you already solved your problem. For the pretrainer, it is true that we do not provide an extended tutorial on it …


From this example, you can see that there are two levels: loadables and paths.

loadables defines the elements of your YAML whose parameters you want to replace (i.e. load into). The formalism is:
loadables:
whatevername: !ref <whatever_element_declared_above>

Then, at the paths level, you connect whatevername to the path where the file is stored. If you do not provide a path, the pretrainer will look for whatevername.ckpt by default.
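Putting the two levels together, a minimal sketch of a pretrainer block might look like the following. Note this is illustrative, not taken from a specific recipe: the loadable names (asr_model, lm_model) and the checkpoint paths are placeholders you would adapt to the modules declared in your own YAML.

```yaml
# Hypothetical sketch: a Pretrainer block inside a hparams YAML file.
# asr_model / lm_model must match modules declared elsewhere in the
# same YAML; the paths below are illustrative placeholders.
pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    loadables:
        asr_model: !ref <asr_model>
        lm_model: !ref <lm_model>
    paths:
        asr_model: !ref <save_folder>/asr_model.ckpt
        lm_model: !ref <save_folder>/lm_model.ckpt
```

When the pretrainer runs, it loads each file listed under paths into its matching loadable; any loadable without a paths entry falls back to <loadable_name>.ckpt.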

  1. Yes, the YAML files are independent unless you include one in another. The embedding size is different for the LM and the ASR; they are independent parts of the pipeline.

  2. Not sure I understand what you would like to see here?