Teacher forcing at test stage in AttentionalRNNDecoder?

Hello SpeechBrain community,
I was looking into the code of the LibriSpeech seq2seq ASR recipe (speechbrain/train.py at develop · speechbrain/speechbrain · GitHub) and saw something I don't understand. The model uses teacher forcing by feeding the decoder the embedding of the target sentence. I have no problem with this at training time, but it does not seem to be disabled at test or inference time.
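To make sure we are talking about the same pattern, here is a toy sketch of what I mean by teacher forcing (plain PyTorch, hypothetical names, attention omitted; this is not the actual SpeechBrain class):

```python
import torch
import torch.nn as nn

class ToyAttnDecoder(nn.Module):
    """Toy stand-in for an attentional RNN decoder (attention omitted)."""

    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, enc_states):
        # Teacher forcing: the ground-truth target tokens drive every
        # decoding step, so the output length equals the target length.
        emb = self.emb(tokens)          # (batch, tgt_len, emb_dim)
        hidden, _ = self.rnn(emb)       # attention over enc_states omitted
        return self.out(hidden)         # (batch, tgt_len, vocab_size)
```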
Moreover, in the code of the AttentionalRNNDecoder (speechbrain/RNN.py at develop · speechbrain/speechbrain · GitHub), the output length depends directly on the length of inp_tensor, which in the case of the LibriSpeech recipe is the embedded target sentence, making the forward method impossible to use without this information.
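What I would expect at inference time is an autoregressive loop like the following, where the output length comes from the model's own predictions rather than from the target (again only a sketch with hypothetical names, using the toy decoder above; it re-runs the whole prefix each step for simplicity):

```python
@torch.no_grad()
def greedy_decode(decoder, enc_states, bos_id, eos_id, max_len=200):
    # Autoregressive decoding: each step is fed the model's own previous
    # prediction, so no target sentence (or its length) is needed.
    batch = enc_states.size(0)
    tokens = torch.full((batch, 1), bos_id, dtype=torch.long)
    for _ in range(max_len):
        logits = decoder(tokens, enc_states)               # run over the prefix
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if (next_tok == eos_id).all():                     # stop once every
            break                                          # hypothesis emits EOS
    return tokens[:, 1:]  # strip the BOS token
```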
If the target sentence is required at test stage, I do not think the results are reliable, and in a real-world use case of the model the target is simply not available.
Am I missing or misunderstanding something? Or is there another component to use for testing without this teacher forcing?