Raspberry Pi 4 + Ubuntu 20.10: testing SpeechBrain

Is it possible that, on a Raspberry Pi 4 with Ubuntu 20.10, the software fails to install due to problems with the PyTorch audio extensions?
Thanks for any help.

Could you be more precise about the error you are facing?

ERROR: Could not find a version that satisfies the requirement torchaudio (from speechbrain) (from versions: none)
ERROR: No matching distribution found for torchaudio (from speechbrain)

I tried an installation in a venv, and those are the errors I get.

Hum, I don’t have experience with PyTorch on constrained hardware, but maybe there are some tutorials on making PyTorch work on the Raspberry Pi? @Gastron @pplantinga any idea?

Looks like there are no torchaudio versions available on pip that match your architecture. Maybe you can compile it yourself?

This looks like a torchaudio error, though. Could you please raise an issue in the torchaudio project for that as well?

I’ll try to read up on it, and if I can solve the problem I will let you know.

I managed to install SpeechBrain on the Raspberry Pi 4 (2 GB RAM) with Ubuntu 20.10 (Python 3.8.6), using VS Code with a venv. The installation problem was with torchaudio, so I solved it by manually installing the dependencies listed in the requirements.txt file … for torchaudio I used the wheel from https://github.com/KumaTea/pytorch-aarch64/releases/download/v1.8.1/torchaudio-0.8.1-cp38-cp38-linux_aarch64.whl
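For anyone hitting the same “no matching distribution” error: pip only accepts a wheel whose tags match the local interpreter and architecture, which is exactly what the cp38 / aarch64 parts of the filename above encode. A quick sanity check (standard library only) before downloading a prebuilt wheel:

```python
import platform
import sys

# The wheel filename encodes its requirements:
#   torchaudio-0.8.1-cp38-cp38-linux_aarch64.whl
#   -> CPython 3.8 (cp38) on a Linux aarch64 machine.
py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
arch = platform.machine()
print(py_tag, arch)  # both must match the wheel's tags, e.g. cp38 aarch64
```

If either value differs from the wheel’s tags, pip reports “no matching distribution found”, which is what happened here on the stock PyPI index.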
the test gave the following results:
- Docs: https://docs.pytest.org/en/latest/warnings.html
=============================================== short test summary info ===============================================
FAILED tests/integration/neural_networks/ASR_CTC/example_asr_ctc_experiment.py::test_error - RuntimeError: fft: ATen no…
FAILED tests/integration/neural_networks/ASR_CTC/example_asr_ctc_experiment_complex_net.py::test_error - RuntimeError: …
FAILED tests/integration/neural_networks/ASR_CTC/example_asr_ctc_experiment_quaternion_net.py::test_error - RuntimeErro…
FAILED tests/integration/neural_networks/ASR_DNN_HMM/example_asr_dnn_hmm_experiment.py::test_error - RuntimeError: fft:…
FAILED tests/integration/neural_networks/ASR_alignment_forward/example_asr_alignment_forward_experiment.py::test_error
FAILED tests/integration/neural_networks/ASR_alignment_viterbi/example_asr_alignment_viterbi_experiment.py::test_error
FAILED tests/integration/neural_networks/ASR_seq2seq/example_asr_seq2seq_experiment.py::test_error - RuntimeError: fft:…
FAILED tests/integration/neural_networks/VAD/example_vad.py::test_error - RuntimeError: fft: ATen not compiled with MKL…
FAILED tests/integration/neural_networks/autoencoder/example_auto_experiment.py::test_error - RuntimeError: fft: ATen n…
FAILED tests/integration/neural_networks/speaker_id/example_xvector_experiment.py::test_error - RuntimeError: fft: ATen…
FAILED tests/integration/signal_processing/nmf_sourcesep/example_experiment.py::test_NMF - RuntimeError: fft: ATen not…
FAILED tests/unittests/test_augment.py::test_add_reverb - RuntimeError: fft: ATen not compiled with MKL support
FAILED tests/unittests/test_features.py::test_istft - RuntimeError: fft: ATen not compiled with MKL support
FAILED tests/unittests/test_multi_mic.py::test_gccphat - RuntimeError: fft: ATen not compiled with MKL support
============================ 14 failed, 92 passed, 3 skipped, 6 warnings in 793.10s (0:13:13) ============================

The install provided by https://mathinf.eu/pytorch/arm64/2021-01/ completes successfully and provides wheels for everything.

But when testing just the beamforming, once more ATen complains that it is not compiled with vendor-specific MKL support.

I don’t mind a central Intel “brain”, as in many ways that makes sense, but for “ears” I need many devices in zones/rooms. It would be great if the beamforming and processing libraries were companion libraries without PyTorch dependencies (which seem to enforce Intel MKL), so that more suitable ears could send websocket streams to a central brain.

The above is a RaspiOS install provided by someone far more knowledgeable than myself, but it still fails.
I am still bemused that PyTorch seems to enforce vendor-provided libraries over what I consider the open-source norm (OpenBLAS…), but hey?!?
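For what it’s worth, you can check which math backends a given torch wheel was actually built against: `torch.__config__.show()` prints the compile-time configuration, so a build linked against OpenBLAS rather than MKL shows up there. A minimal check, assuming torch is importable:

```python
import torch

# Print the compile-time configuration of the installed torch build.
# The output lists the math backends it was linked against
# (e.g. MKL, OpenBLAS) along with other build flags.
config = torch.__config__.show()
print(config)
print("MKL mentioned in build config:", "MKL" in config)
```

On the ARM wheels discussed here, the absence of MKL in this output would line up with the “fft: ATen not compiled with MKL support” failures above.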

/home/pi/speech-brain/venv/lib/python3.7/site-packages/torch/functional.py:585: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at  ../aten/src/ATen/native/SpectralOps.cpp:483.)
  normalized, onesided, return_complex)
Traceback (most recent call last):
  File "delay-sum.py", line 38, in <module>
    Xs = stft(xs)
  File "/home/pi/speech-brain/venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 880, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/pi/speech-brain/speechbrain-0.5.7/speechbrain/processing/features.py", line 171, in forward
    return_complex=False,
  File "/home/pi/speech-brain/venv/lib/python3.7/site-packages/torch/functional.py", line 585, in stft
    normalized, onesided, return_complex)
RuntimeError: fft: ATen not compiled with MKL support
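As an aside, the UserWarning in that traceback is separate from the MKL failure: it only flags that `return_complex=False` is deprecated. On a torch build whose FFT backend works, the new-style call looks roughly like this (a sketch with a dummy signal; it will not fix the missing-MKL error, which comes from the build itself):

```python
import torch

x = torch.randn(1, 16000)  # dummy 1 s mono signal at 16 kHz, batch of 1

# New-style stft: ask for complex output directly.
X = torch.stft(
    x,
    n_fft=400,
    hop_length=160,
    window=torch.hann_window(400),
    return_complex=True,
)

# torch.view_as_real recovers the old [..., 2] real/imag layout
# that the deprecation warning mentions.
X_old = torch.view_as_real(X)
print(X.dtype, X_old.shape)
```

The last dimension of `X_old` is 2 (real and imaginary parts), matching what `return_complex=False` used to return.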
import torch

from speechbrain.dataio.dataio import read_audio
from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat, DelaySum

xs_speech = read_audio(
    'samples/audio_samples/multi_mic/speech_-0.82918_0.55279_-0.082918.flac'
)
xs_speech = xs_speech.unsqueeze(0)  # [batch, time, channels]

xs_noise = read_audio('samples/audio_samples/multi_mic/noise_diffuse.flac')
xs_noise = xs_noise.unsqueeze(0)  # [batch, time, channels]

fs = 16000
xs = xs_speech + 0.05 * xs_noise

stft = STFT(sample_rate=fs)
cov = Covariance()

Xs = stft(xs)  # line 38 in the traceback: raises "fft: ATen not compiled with MKL support"