Loading and using LM pretrained models (LibriSpeech)

Hello,
I am trying to replicate the LibriSpeech recipe before making some changes.
I trained the Tokenizer successfully and moved to the LM part.
There is a note in the README.md file saying:
“Training an LM takes a lot of time. In our case, it takes 3-4 weeks on 4 TESLA V100. Use the pre-trained model to avoid training it from scratch.”

I couldn’t find a way to use the pre-trained models that are shared via Google Drive.
Where should I place the files?

For example, for the RNNLM.yaml (1k BPE) model, the Drive share contains a folder named “1234”, and inside that directory there are many files and a sub-directory named save.

Which file do I need?

In addition, the next step is training the ASR model itself. How do I select the desired pre-trained LM
to be used with the ASR engine?

Thanks

@aviadb, if you are interested in commercial support, then we can chat further on this.

We are an offshore team with some very bright minds on our data science and engineering team. Speak to you soon!

Regards,
Sandeep from Canopus team

Hi @aviadb, start with the ASR from Scratch tutorial. If it doesn’t help you, you can go to the HuggingFace tutorial (SB Advanced). If this still doesn’t answer your question, go to the ASR template with detailed comments on the YAML and .py files: speechbrain/templates/speech_recognition/ASR at develop · speechbrain/speechbrain · GitHub
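
For reference, in the LibriSpeech ASR recipes the pre-trained LM is normally hooked in through the `pretrainer` section of the recipe’s hparams YAML, which downloads/collects the checkpoint files and loads them into the modules before training starts. Something along these lines (a sketch only: the exact key names and checkpoint filenames such as `lm.ckpt` and `tokenizer.ckpt` depend on the specific recipe, so check your YAML):

```yaml
# Sketch, not a drop-in config: adapt key and file names to your recipe.
# Point this at the folder you downloaded from Google Drive
# (a local path or a HuggingFace repo id both work).
pretrained_lm_path: /path/to/downloaded/lm_folder

pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    collect_in: !ref <save_folder>
    loadables:
        lm: !ref <lm_model>
        tokenizer: !ref <tokenizer>
    paths:
        lm: !ref <pretrained_lm_path>/lm.ckpt
        tokenizer: !ref <pretrained_lm_path>/tokenizer.ckpt
```

With a section like this in the ASR hparams, you pick the LM simply by pointing `pretrained_lm_path` at the desired folder; the training script then loads that checkpoint instead of training the LM from scratch.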

If after all this you are still unsure, I can help.