by nicktikhonov 10 hours ago

Very cool! Starred and added to my reading list. Would love to chat and share notes, if you'd like.

alfalfasprout 8 hours ago

Also consider using Cerebras' inference APIs. They released a voice demo a while back and the latency of their model inference is insane.

ilaksh 2 hours ago

I tried Cerebras and it was unbeatable at first, but the client didn't want to pay $1,300 a month, and the $50/month and pay-as-you-go tiers just weren't reliable: they would return service-unavailable errors or falsely claim we were over our rate limit.
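For what it's worth, intermittent 503s and spurious rate-limit errors like these can sometimes be papered over client-side with retries and exponential backoff. This is a generic sketch, not a Cerebras-specific fix; `TransientError` and the helper names are hypothetical, and you'd map whatever error type your API client raises onto it:

```python
import time

class TransientError(Exception):
    """Hypothetical marker for errors worth retrying (503s, dubious 429s)."""
    pass

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on TransientError with exponential backoff.

    fn is any zero-argument callable wrapping your API request; sleep is
    injectable so the backoff can be tested without actually waiting.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

This won't help if the provider is down for minutes at a time, but it smooths over the brief blips that otherwise show up as failed calls.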

Groq is also very fast, but its latency wasn't always consistent, and I saw some very strange responses on a few calls that I had to attribute to quantization.