Skip using pre-trained ASR model

Hello,

I was able to replicate the templates provided for ASR (“speechbrain/templates”) with my own data, but got a very high WER%, mostly from huge numbers of insertions. I presume it may be due to the pre-trained LibriSpeech ASR model. I looked over a few of the topics on the forum about how to change the path for the pre-trained tokenizer & LM (ASR from scratch).

How can I avoid using the pre-trained ASR model? Will commenting out the “model” entries in the “pretrainer” object in the YAML file help?

Regards,
SD

Indeed, it should! If you remove the model: entries from the pretrainer, the model will simply be instantiated with random weights.
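For illustration, here is a minimal sketch of what the pretrainer block in the template’s train.yaml roughly looks like; the variable names and checkpoint paths are assumptions based on the speechbrain/templates recipe and may differ in your YAML. Commenting out the model lines keeps the tokenizer and LM pre-trained while the acoustic model starts from random weights:

```yaml
# Sketch only: names/paths below may not match your YAML exactly.
pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    collect_in: !ref <save_folder>
    loadables:
        tokenizer: !ref <tokenizer>
        lm: !ref <lm_model>
        # model: !ref <model>   # commented out -> ASR model is randomly initialized
    paths:
        tokenizer: !ref <pretrained_path>/tokenizer.ckpt
        lm: !ref <pretrained_path>/lm.ckpt
        # model: !ref <pretrained_path>/model.ckpt
```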


Thanks @titouan.parcollet :+1: