How to get the model to train on the GPU

The speech separation template model runs with no problems on my PC. But when I try to run it on Colab with a GPU, it reports the error shown in the screenshot. Training appears to run fine, but evaluation fails because the model is on the CPU while the data are on CUDA.
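This is the classic PyTorch device-mismatch failure: any op that combines tensors living on different devices raises a RuntimeError. As a minimal sketch of why evaluation breaks here, the stand-in below mimics that check (FakeTensor and forward are illustrative stand-ins, not SpeechBrain's or PyTorch's actual API):

```python
# Illustrative stand-in for a tensor that tracks its device (not real PyTorch).
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        # Like torch.Tensor.to, returns a copy on the target device.
        return FakeTensor(device)


def forward(model_device, batch):
    # Mimics PyTorch's runtime check: combining tensors from different
    # devices raises a RuntimeError, which is what evaluation hits here.
    if batch.device != model_device:
        raise RuntimeError(
            f"Expected all tensors on {model_device}, but input is on {batch.device}"
        )
    return "ok"
```

With the model left on `"cpu"` and the batch moved to `"cuda"`, `forward("cpu", FakeTensor().to("cuda"))` raises exactly this kind of error; moving both onto the same device is the fix.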

I tried setting run_opts to ‘cuda’, based on a similar post from @ckck, but it reports the errors below:

Any help would be greatly appreciated!

Hey! I don’t see the run_opts in your second screenshot :frowning:

I set it in the script instead, as shown on line 297 in the figure below:
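For context, in the SpeechBrain templates run_opts is a plain dict whose `"device"` key picks where the model and training live. A hedged sketch of what that in-script setting might look like (the `SepBrain` call and hparams names are assumptions based on the template, not copied from the poster's script):

```python
# Hedged sketch: run_opts as a plain dict; the "device" key selects the
# device for the Brain subclass. Verify the names against your own script.
run_opts = {"device": "cuda"}

# The dict is then handed to the Brain subclass roughly like this
# (commented out: SepBrain/hparams are template-style placeholders):
# separator = SepBrain(
#     modules=hparams["modules"],
#     opt_class=hparams["opt_class"],
#     hparams=hparams,
#     run_opts=run_opts,
#     checkpointer=hparams["checkpointer"],
# )
```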

I just tried passing the setting on the command line, as shown below, but I still get the same error:

When I change ‘cuda’ to ‘cpu’, the model runs without an error:

Could you try cuda:0 instead? Or (cuda:GPU_ID).
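If a bare ‘cuda’ string is what trips things up, one hedged workaround is to normalize it to an explicit index before building run_opts (`normalize_device` is a hypothetical helper, not part of SpeechBrain; the assumption, taken from this thread, is that ‘cuda’ alone errors while ‘cuda:0’ works):

```python
def normalize_device(device: str, default_gpu: int = 0) -> str:
    """Map a bare 'cuda' to an explicit 'cuda:<index>'.

    Hypothetical helper: 'cpu' and already-indexed strings like 'cuda:1'
    pass through unchanged.
    """
    if device == "cuda":
        return f"cuda:{default_gpu}"
    return device
```

So `normalize_device("cuda")` gives `"cuda:0"`, while `"cpu"` and `"cuda:1"` are returned as-is.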


That worked. Thank you! I also had to move the input to the cpu device during the evaluation step to resolve the runtime error.
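That second fix (sending the input back to the CPU for evaluation) can be sketched with a small recursive helper; `to_device` is hypothetical, and batch items are only assumed to expose a PyTorch-style `.to()` method:

```python
def to_device(batch, device):
    """Recursively move a (possibly nested) batch onto `device`.

    Hypothetical helper: anything with a .to() method (e.g. a torch.Tensor)
    is moved; lists and tuples are traversed; everything else (lengths,
    ids, strings) passes through untouched.
    """
    if isinstance(batch, (list, tuple)):
        return type(batch)(to_device(item, device) for item in batch)
    if hasattr(batch, "to"):
        return batch.to(device)
    return batch
```

During evaluation you would call something like `inputs = to_device(inputs, "cpu")` before the forward pass, mirroring what the poster did.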
