by mungoman2 4 hours ago

I think you’re implying that it would be useful to have the LLM predict the end of the speaker’s speech, and continue with its reply based on that.

If the actual ending matches the prediction when the speaker stops speaking, the response can be played back with no added latency.

Seems like an awesome approach! One could imagine running this prediction for the K most likely endings simultaneously, subject to available compute, and pruning/branching as threads diverge from what the speaker actually says.
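The speculative scheme could be sketched roughly like this. Everything here is hypothetical: `predict_endings` and `generate_reply` stand in for real model calls, with canned outputs so the sketch runs on its own.

```python
def predict_endings(partial_utterance, k=3):
    # Hypothetical: a real system would ask an LLM for the k most
    # likely completions of the in-progress utterance. Canned here.
    return [
        partial_utterance + " tomorrow?",
        partial_utterance + " today?",
        partial_utterance + " next week?",
    ][:k]

def generate_reply(utterance):
    # Hypothetical stand-in for a (slow) LLM reply generation call.
    return f"Reply to: {utterance!r}"

def speculate(partial_utterance, k=3):
    """Pre-generate replies for the k most likely endings while the
    speaker is still talking."""
    return {ending: generate_reply(ending)
            for ending in predict_endings(partial_utterance, k)}

def on_speech_end(actual_utterance, speculated):
    """If the finished utterance matches a speculated thread, the
    pre-generated reply can be played with no generation latency;
    otherwise fall back to generating from scratch."""
    if actual_utterance in speculated:
        return speculated[actual_utterance], True   # cache hit
    return generate_reply(actual_utterance), False  # cache miss

cache = speculate("Can we meet")
reply, hit = on_speech_end("Can we meet tomorrow?", cache)
print(hit)   # matched a speculated ending, so no extra latency
```

In a real system the pruning would happen continuously: as each new word arrives, threads whose predicted ending no longer matches the transcript so far are dropped and the freed compute is reallocated to new candidates.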