This is a naive example of performing real-time inference on audio from your microphone.
The `stream` tool samples the audio every half a second and runs the transcription continuously.
More info is available in issue #10.
```bash
./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```
(demo video: rt_esl_csgo_2.mp4)
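To illustrate what the `--step` and `--length` parameters control, here is a simplified, hypothetical sketch of such a loop written against the `whisper.h` C API. It is not the actual `stream.cpp`; the `get_audio` helper is a stand-in for the real microphone capture, and the API usage is deliberately minimal.

```cpp
// Hypothetical sketch of a --step / --length transcription loop (not the real stream.cpp).
#include "whisper.h"

#include <cstdio>
#include <vector>

// Stand-in for microphone capture: the real tool fills this buffer from SDL2;
// here we just produce silence so the sketch is self-contained.
static void get_audio(int n_samples, std::vector<float> & out) {
    out.assign(n_samples, 0.0f);
}

int main() {
    const int step_ms   = 500;   // --step:   run inference every 500 ms
    const int length_ms = 5000;  // --length: transcribe the last 5000 ms each time

    const int n_samples_step = (step_ms   * WHISPER_SAMPLE_RATE) / 1000;
    const int n_samples_len  = (length_ms * WHISPER_SAMPLE_RATE) / 1000;

    struct whisper_context * ctx = whisper_init_from_file("models/ggml-base.en.bin");
    if (!ctx) {
        return 1;
    }

    std::vector<float> pcmf32;     // rolling window holding the last --length ms
    std::vector<float> pcmf32_new; // freshly captured --step ms

    // the real tool loops until interrupted; a few iterations are enough here
    for (int iter = 0; iter < 10; ++iter) {
        // capture the next --step ms of 16 kHz mono float audio
        get_audio(n_samples_step, pcmf32_new);

        // append it to the rolling window and drop anything older than --length ms
        pcmf32.insert(pcmf32.end(), pcmf32_new.begin(), pcmf32_new.end());
        if ((int) pcmf32.size() > n_samples_len) {
            pcmf32.erase(pcmf32.begin(), pcmf32.end() - n_samples_len);
        }

        // transcribe the current window
        whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
        wparams.print_progress = false;
        wparams.single_segment = true;
        wparams.n_threads      = 8;  // -t 8

        if (whisper_full(ctx, wparams, pcmf32.data(), (int) pcmf32.size()) == 0) {
            for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
                printf("%s", whisper_full_get_segment_text(ctx, i));
            }
            printf("\n");
        }
    }

    whisper_free(ctx);
    return 0;
}
```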
Setting the `--step` argument to `0` enables the sliding window mode:
```bash
./stream -m ./models/ggml-small.en.bin -t 6 --step 0 --length 30000 -vth 0.6
```
In this mode, the tool will transcribe only after some speech activity is detected. A very
basic VAD detector is used, but in theory a more sophisticated approach can be added. The
`-vth` argument determines the VAD threshold - higher values will make it detect silence more often.
It's best to tune it to the specific use case, but a value around `0.6` should be OK in general.
When silence is detected, it will transcribe the last `--length` milliseconds of audio and output
a transcription block that is suitable for parsing.
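For illustration, here is a minimal sketch of what such a basic energy-based detector could look like. It is not the tool's actual implementation, just the general idea: the tail of the window is considered silent when its average amplitude falls below a fraction `vad_thold` of the window's overall average, so a higher threshold reports silence more readily.

```cpp
// Hypothetical sketch of a basic energy-based VAD check (not the tool's exact code).
#include <cmath>
#include <vector>

// Returns true when the last `last_ms` of the window look silent compared to
// the window as a whole. A higher `vad_thold` (e.g. 0.6) makes it report
// silence more readily.
bool vad_is_silence(const std::vector<float> & pcmf32, int sample_rate,
                    int last_ms, float vad_thold) {
    const int n_samples      = (int) pcmf32.size();
    const int n_samples_last = (sample_rate * last_ms) / 1000;

    if (n_samples_last >= n_samples) {
        return false; // not enough audio to decide yet
    }

    // average absolute amplitude over the whole window and over its tail
    float energy_all  = 0.0f;
    float energy_last = 0.0f;
    for (int i = 0; i < n_samples; ++i) {
        energy_all += std::fabs(pcmf32[i]);
        if (i >= n_samples - n_samples_last) {
            energy_last += std::fabs(pcmf32[i]);
        }
    }
    energy_all  /= n_samples;
    energy_last /= n_samples_last;

    // if the tail is quiet relative to the window, declare silence;
    // raising vad_thold makes this condition easier to satisfy
    return energy_last <= vad_thold * energy_all;
}
```

When a check like this fires, the tool would then hand the last `--length` milliseconds of the window to the transcriber, as described above.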
The `stream` tool depends on the SDL2 library to capture audio from the microphone. You can build it like this:
```bash
# Install SDL2 on Linux
sudo apt-get install libsdl2-dev

# Install SDL2 on Mac OS
brew install sdl2

make stream
```
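For reference, here is a minimal, self-contained sketch of capturing 16 kHz mono float audio with SDL2. It only illustrates the SDL2 capture API and is not the tool's actual capture code.

```cpp
// Hypothetical sketch: capture 16 kHz mono float audio from the default
// microphone with SDL2 and count the captured samples.
#include <SDL.h>

#include <cstdio>
#include <mutex>
#include <vector>

static std::vector<float> g_pcmf32; // captured samples
static std::mutex         g_mutex;

// SDL calls this from its audio thread whenever new samples are available
static void capture_cb(void * /*userdata*/, Uint8 * stream, int len) {
    const float * samples = (const float *) stream;
    const int n = len / (int) sizeof(float);

    std::lock_guard<std::mutex> lock(g_mutex);
    g_pcmf32.insert(g_pcmf32.end(), samples, samples + n);
}

int main() {
    if (SDL_Init(SDL_INIT_AUDIO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_AudioSpec desired = {};
    desired.freq     = 16000;     // whisper models expect 16 kHz input
    desired.format   = AUDIO_F32; // 32-bit float samples
    desired.channels = 1;         // mono
    desired.samples  = 1024;
    desired.callback = capture_cb;

    SDL_AudioSpec obtained;
    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(nullptr, SDL_TRUE, &desired, &obtained, 0);
    if (dev == 0) {
        fprintf(stderr, "SDL_OpenAudioDevice failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_PauseAudioDevice(dev, 0); // start capturing
    SDL_Delay(3000);              // record for ~3 seconds

    {
        std::lock_guard<std::mutex> lock(g_mutex);
        printf("captured %zu samples\n", g_pcmf32.size());
    }

    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}
```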
This tool can also run in the browser: examples/stream.wasm