Speech Recognition From Scratch: CUDA out of Memory Error

Hi there!

I am a novice to ASR, DL, and SpeechBrain, and I have a "learn by doing" mindset.

I am following the Speech Recognition From Scratch tutorial and got stuck with `RuntimeError: CUDA out of memory` in the Speech Recognizer section.

I have tried an Nvidia GeForce GTX 1660 Ti (6 GB) and even a GTX 1080 Ti (11 GB), and I have reduced the batch size from 8 to 2, but training failed on both GPUs during the first epoch.

To my understanding a 1080 Ti is a decent GPU for DL, and I want to use the 1660 Ti from time to time as well.
Please suggest how to address this error. What am I doing wrong?

regards


Hi,

Unfortunately, a single 1080 Ti isn't "decent" for attentional and autoregressive ASR (which is what this recipe uses). For comparison, the K80s in Google Colab are equipped with 24GB of VRAM. You can:

- Reduce the size of your architecture (e.g. go from 1024 neurons to 768, from [128, 256] filters to [128, 128], etc.), as in the sketch after this list.
- Lower the batch size, which is also a good idea.
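If it helps, here is a minimal sketch of applying such overrides when loading the recipe hyperparameters with HyperPyYAML. The key names (`batch_size`, `rnn_neurons`, `cnn_channels`) are assumptions; check the `train.yaml` of your recipe for the actual ones.

```python
from hyperpyyaml import load_hyperpyyaml

# Hypothetical override keys -- adjust them to match your recipe's train.yaml.
overrides = {
    "batch_size": 2,              # a lower batch size lowers peak VRAM
    "rnn_neurons": 768,           # e.g. go from 1024 neurons down to 768
    "cnn_channels": [128, 128],   # e.g. from [128, 256] filters to [128, 128]
}

with open("train.yaml") as fin:
    hparams = load_hyperpyyaml(fin, overrides)
```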

As you may know, when you switch to a custom dataset you will need to be extremely careful with the length of your sequences, as it strongly impacts VRAM consumption.
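As an illustration, here is one way to cap and sort sequence lengths with SpeechBrain's `DynamicItemDataset`, assuming your data manifest exposes a `duration` key (the tutorial's manifests do); the 10-second cap is just an example value.

```python
from speechbrain.dataio.dataset import DynamicItemDataset

# Load the training manifest (path is an example).
train_data = DynamicItemDataset.from_json("train.json")

# Capping the maximum duration bounds the peak VRAM of any single batch,
# and sorting by duration reduces the amount of padding per batch.
train_data = train_data.filtered_sorted(
    sort_key="duration",
    key_max_value={"duration": 10.0},  # drop utterances longer than 10 s
)
```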


Thank you @titouan.parcollet for the quick and well-directed response.
You made it a lot clearer and easier to follow.
Hopefully I will get back with some progress, and maybe with other issues along the way.

And by the way, SpeechBrain is really a good initiative and you folks are doing a great job.

Regards


Use a batch size of 2. I tested it out on a Titan Xp.

If you are using a Jupyter notebook, try stopping the running .ipynb kernel, which will free up the GPU.
Check with nvidia-smi.

Then reopen the notebook and try again with a batch size of 2.
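For a quick check from inside Python (mirroring what nvidia-smi reports), something like this sketch can confirm whether another process, e.g. a stale notebook kernel, is still holding GPU memory; note that `torch.cuda.mem_get_info` needs a reasonably recent PyTorch.

```python
import torch

if torch.cuda.is_available():
    # Free and total device memory in bytes, as the driver sees them.
    free, total = torch.cuda.mem_get_info()
    print(f"free: {free / 1e9:.2f} GB / total: {total / 1e9:.2f} GB")

    # Release cached blocks held by *this* process; it cannot free memory
    # held by other processes such as a stale kernel.
    torch.cuda.empty_cache()
```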