Integrating leaf-audio with SpeechBrain

Hi All,

Posting for the first time on the SpeechBrain Discourse.

I’m curious whether it would be possible to integrate the leaf-audio front-end with SpeechBrain. There’s a version with PyTorch integration here: GitHub - denfed/leaf-audio-pytorch: Pytorch port of Google Research's LEAF Audio paper

I was curious what it would take to do the integration, and what steps would be needed in terms of process and feature requests for the main repo. I realize that SpeechBrain was designed with customizability in mind, so if this is more on the user to customize, where should they start?

Any feedback or help is very welcome and much appreciated.

Thanks again,


Hi !

This would be an extremely good idea. From what I can see, doing it properly would take two steps:

  1. Add to nnet/ (and maybe other classes) the needed torch.nn.Module subclasses (like the GaborConstraint).
  2. Add the Leaf class to lobes/. This will allow users to simply create a Leaf element directly in the YAML :).
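To make the two steps concrete, here is a minimal sketch of what the modules could look like. The names GaborConstraint and Leaf come from the LEAF paper and the leaf-audio-pytorch port, but the exact signatures, parameter bounds, and filterbank details below are my own assumptions, not SpeechBrain or leaf-audio-pytorch APIs:

```python
# Hypothetical sketch: a low-level constraint module (step 1, nnet/)
# and a lobes-level Leaf front-end (step 2, lobes/). All shapes and
# bounds are illustrative, not the real LEAF implementation.
import math

import torch


class GaborConstraint(torch.nn.Module):
    """Clamp learnable Gabor parameters to a valid range (step 1: nnet/).

    In LEAF, center frequencies stay in [0, pi] and bandwidths in a range
    tied to the kernel size; the exact bounds here are assumptions.
    """

    def __init__(self, kernel_size: int):
        super().__init__()
        self.kernel_size = kernel_size

    def forward(self, kernel_params: torch.Tensor) -> torch.Tensor:
        # kernel_params: (n_filters, 2) -> (center frequency, bandwidth)
        mu = kernel_params[:, 0].clamp(0.0, math.pi)
        sigma = kernel_params[:, 1].clamp(4.0, self.kernel_size / 2.0)
        return torch.stack([mu, sigma], dim=1)


class Leaf(torch.nn.Module):
    """Skeleton of a lobes-level Leaf front-end (step 2: lobes/)."""

    def __init__(self, n_filters: int = 40, kernel_size: int = 101):
        super().__init__()
        self.kernel_size = kernel_size
        self.constraint = GaborConstraint(kernel_size)
        # One learnable (center frequency, bandwidth) pair per filter.
        self.kernel_params = torch.nn.Parameter(torch.rand(n_filters, 2))

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, time) -> filterbank output: (batch, n_filters, time)
        params = self.constraint(self.kernel_params)
        mu, sigma = params[:, 0:1], params[:, 1:2]
        t = torch.arange(self.kernel_size, dtype=wav.dtype, device=wav.device)
        t = t - self.kernel_size // 2
        # Real Gabor kernels: Gaussian envelope times a cosine carrier.
        kernels = torch.exp(-0.5 * (t / sigma) ** 2) * torch.cos(mu * t)
        return torch.nn.functional.conv1d(
            wav.unsqueeze(1), kernels.unsqueeze(1), padding=self.kernel_size // 2
        )
```

With a lobes-level class like this, a `Leaf` block could then be declared directly in a recipe's YAML, just like the existing Fbank/MFCC front-ends.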

Of course, you will certainly encounter other difficulties; do not hesitate to ask for help!


Awesome @titouan.parcollet, thanks for the direction, I appreciate it! I’ll start working on this now and submit a pull request when it’s finished. I’ll also let you know when I need help, as I’m sure I will. Thanks again, much appreciated!