Batch Inference
#8
by nononameneeded2001 · opened
Does the model support batched inference? I think that when I give it a very long input, the output quality degrades and a voice that is not my reference audio suddenly appears, presumably because of the context window. When I tried to increase the context window, the output became very cold and emotionless.
It is supported, but the batch-inference code hasn't been open-sourced.
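In the meantime, a common workaround for the context-window artifacts described above is to split a long input into sentence-aligned chunks that each fit comfortably in the context window, synthesize each chunk with the same reference audio, and concatenate the resulting audio. Here is a minimal sketch of the chunking step only; the `model.tts(...)` call in the comment is hypothetical, not the real Fish Speech API:

```python
import re

def chunk_text(text, max_chars=200):
    """Split long input into sentence-aligned chunks so each chunk
    stays well inside the model's context window."""
    # Split after sentence-ending punctuation, keeping the punctuation.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for s in sentences:
        # Start a new chunk if adding this sentence would exceed the budget.
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then be synthesized with the SAME reference audio
# so the voice stays consistent (hypothetical call, for illustration):
#   audio = [model.tts(chunk, ref_audio) for chunk in chunk_text(long_text)]
```

Note that a single sentence longer than `max_chars` will still form an oversized chunk; a real implementation would also split on commas or clause boundaries in that case.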
Hi everyone! I've added batch-inference support to FishSpeech:
https://github.com/mkgs210/batch_fish_speech
Your ⭐️ means a lot to me.
Enjoy!
https://github.com/Shreyxpatil/fish-speech_inference_without_cheakpoints
Here is the full inference code, which you can customise however you want!