by vdivyanshu 6 hours ago

I went down the rabbit hole over the last 6 hours on how much training compute can be extracted from the M4/M5 Neural Engine:

- offloaded @karpathy's nanoGPT training run (partially) onto the Apple Neural Engine (ANE)
- moved the classifier and softmax layers directly onto the ANE: the classifier is 10x faster, and softmax is 34x faster
- fixed memory exhaustion: the original repo had an ARC memory leak that capped training at ~119 compile loads per process
- patched the C bridge, allowing continuous, stable training

Repo - https://github.com/vipuldivyanshu92/ANEgpt
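The ARC leak described in the last two bullets matches a well-known Core ML pattern: repeatedly compiling and loading models in a loop accumulates autoreleased temporaries until the process runs out of memory. A minimal Swift sketch of the usual fix, assuming the training loop compiles/loads a Core ML model per iteration (the function and its shape are illustrative, not taken from the repo):

```swift
import CoreML
import Foundation

// Hedged sketch, not the repo's actual code. The post reports an ARC
// leak that capped training at ~119 compile loads per process; the
// standard fix is to drain autoreleased temporaries every iteration
// with an explicit autoreleasepool.
func trainLoop(modelURL: URL, steps: Int) throws {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // allow CPU, GPU, and the Neural Engine

    for _ in 0..<steps {
        // Without this pool, objects autoreleased by the Core ML
        // compile/load path pile up until memory is exhausted.
        try autoreleasepool {
            let compiled = try MLModel.compileModel(at: modelURL)
            let model = try MLModel(contentsOf: compiled, configuration: config)
            _ = model  // run one training step against the ANE-hosted layers here
        }
    }
}
```

Wrapping only the compile/load in the pool keeps long-lived training state (optimizer, weights) outside it, so just the per-iteration Core ML temporaries are released each step.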

3abiton 2 hours ago

That's the best kind of "bender".

bytesandbits 3 hours ago

incredible work