Hypura – A storage-tier-aware LLM inference scheduler for Apple Silicon
- simonw - 1103 seconds ago
Suggestion for the maintainers: the comparison table currently lists some pretty old models: Qwen 2.5 14B, Mixtral 8x7B, and Llama 3.3 70B.
A lot of people are reporting incredible results with the Qwen 3.5 MoE models on Apple hardware right now (streaming experts - see https://simonwillison.net/2026/Mar/24/streaming-experts/) - it would be great to get some of those models into that table.
Maybe the 1T parameter Kimi K2.5 too if you can get that to work, see https://twitter.com/seikixtc/status/2036246162936910322 and https://twitter.com/danpacary/status/2036480556045836603
- vanyaland - 6388 seconds ago
For a lot of local workloads, sub-1 tok/s is useless in the foreground and perfectly acceptable in the background. If the choice is “this crashes” vs “this finishes overnight,” that’s still a meaningful capability jump.
- vicchenai - 10370 seconds ago
The practical question is whether the read pattern is sequential enough to actually saturate NVMe bandwidth, or if the attention-layer access pattern ends up being random enough to kill throughput. Sequential reads on a decent NVMe get you 5-7 GB/s; random reads drop to maybe 500 MB/s depending on queue depth.
For a 1T model you'd need to stream something like 2 TB of weights per forward pass at fp16. Even at peak sequential that's 300+ seconds per token, which is... not great for interactive use, but maybe fine for batch inference where you don't care about latency (rough arithmetic sketched below).
Still a cool proof of concept though. The gap between 'can run' and 'runs usefully' is where things get interesting.
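To make that arithmetic concrete, here is a back-of-envelope sketch; the bandwidth figures are the rough numbers quoted above, not measurements from the project:

```python
# Streaming a dense 1T-parameter model at fp16 means touching ~2 TB of weights
# per token. Bandwidth figures are the rough NVMe numbers from the comment above.
weight_bytes = 1e12 * 2             # 1T params * 2 bytes (fp16)
bandwidths = {"sequential": 6.5e9,  # ~6.5 GB/s large sequential reads
              "random":     0.5e9}  # ~0.5 GB/s small random reads

for name, bw in bandwidths.items():
    print(f"{name}: {weight_bytes / bw:,.0f} s/token")
# sequential: ~308 s/token; random: ~4,000 s/token
```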
- shubhamintech - 2213 seconds ago
The MoE point matters here: sparse activation means you're not reading all 2 TB per forward pass, but the access pattern flips from sequential to random, which is exactly the worst case for NVMe (rough sketch below). Been thinking about this a lot for agent inference workloads where you want consistent latency more than peak throughput.
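A quick illustration of that trade-off; the layer count, expert sizes, and top-k routing below are made-up numbers for the sketch, not any particular model's config:

```python
# Hypothetical MoE shape: every token reads each layer's attention weights
# plus only top_k of n_experts FFN experts. All sizes below are assumptions.
n_layers, n_experts, top_k = 60, 8, 2
attn_gb_per_layer   = 0.3    # attention weights read every token (assumed)
expert_gb_per_layer = 2.0    # one FFN expert's weights (assumed)

dense_equivalent = n_layers * (attn_gb_per_layer + n_experts * expert_gb_per_layer)
actually_read    = n_layers * (attn_gb_per_layer + top_k     * expert_gb_per_layer)
print(f"dense-equivalent: {dense_equivalent:.0f} GB/token, MoE reads: {actually_read:.0f} GB/token")
# ~978 GB vs ~258 GB, but the expert reads are scattered across the weight file,
# so they run closer to the random-read bandwidth than the sequential one.
```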
- marksully - 13482 seconds ago
Where does "1T parameter model" come from? I can only see models with 70B params or less mentioned in the repo.
- baq - 11868 seconds ago
Intel Optane rolling in its grave.
- Insanity - 12945 seconds ago
This is a pretty cool project! Essentially this is like using swap memory to extend your RAM, but in a 'smart' way so you don't overload the NVMe unnecessarily.
I do wonder how the 'smarts' pan out in practice, because putting a ton of stress on your NVMe during generation is probably not the best choice for its longevity.
- zozbot234 - 13236 seconds ago
It will be interesting to compare this to https://news.ycombinator.com/item?id=47476422 and https://news.ycombinator.com/item?id=47490070 . Very similar design, except that this is apparently using mmap, which according to the earlier experiment incurs significant overhead.
- root_axis - 9429 seconds ago
Are there any 1T parameter open source models?
- nullbyte - 11322 seconds ago
I am curious how the TPS compares vs default OS virtual memory paging.
- speedgoose - 10402 seconds ago
I wonder how many minutes per token on GLM 5.
- amelius - 10496 seconds ago
This is <1 tok/s for the 40GB model.
Come on, "Run" is not the right word. "Crawl" is.
Headlines like that are misleading.
- monksy - 12172 seconds ago
There needs to be something like this from Ollama. At the moment Ollama has a lot of flaws that prevent it from getting great performance (my understanding is that it needs better GPU/CPU splits, etc.). But Ollama is the only way to host an LLM and have it switch out on demand. Sigh.
- EnPissant - 10767 seconds ago
You do not provide any comparison to llama.cpp with mmap.
You do not explain how any kind of predictor can work for MoE experts.
You do not explain how prediction can even be useful. I can predict the layers used in a dense model (all of them are used in order), but that doesn't help me much. It's still bottlenecked on bandwidth (hint: MoE doesn't change this).
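The bandwidth-bound argument can be stated as a simple floor; the numbers below are illustrative, not benchmarks from the repo:

```python
# Prefetching and prediction can hide latency, but tokens/s can never exceed
# nvme_bandwidth / bytes_streamed_per_token while the weights live on disk.
def max_tok_per_s(bytes_per_token: float, nvme_bw_bytes_per_s: float) -> float:
    return nvme_bw_bytes_per_s / bytes_per_token

print(max_tok_per_s(40e9, 6.5e9))  # 40 GB dense model, ~6.5 GB/s drive -> ~0.16 tok/s
# Prediction only changes the picture if it reduces bytes_per_token
# (e.g. skipping unrouted MoE experts) or keeps the drive busy during compute.
```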
- anshulbasia27 - 10937 seconds ago
OS paging would be significantly worse here. The kernel's page fault handler is reactive: it doesn't know you're about to read layer 47's FFN weights, so it can't prefetch. You stall on every fault, wait for the 4KB/16KB page to load, then resume. With 80 layers of dense FFN streaming, that's thousands of cold faults per token.
What makes this approach faster is that the model's access pattern is completely deterministic during inference. You know exactly which tensors are needed next because transformer layers execute sequentially, so you can issue large sequential reads and prefetch the next layer while the current one is computing on Metal. The OS page cache can't do that; it has no concept of "layer N+1 comes after layer N."
For MoE it's even more stark. The OS would page in all 8 experts on the first token that routes to each one, then evict them under memory pressure with LRU, which has no idea that expert 3 fires 10x more often than expert 7. The neuron cache here is basically a domain-specific replacement policy.
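A minimal sketch of those two ideas, assuming hypothetical load_layer, compute_layer, and load_expert callables; this is not Hypura's actual implementation:

```python
import concurrent.futures as cf
from collections import defaultdict

def run_layers(load_layer, compute_layer, n_layers, x):
    """Double-buffering: issue the read for layer i+1 while layer i computes."""
    with cf.ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(load_layer, 0)
        for i in range(n_layers):
            weights = pending.result()                  # wait for layer i's weights
            if i + 1 < n_layers:
                pending = io.submit(load_layer, i + 1)  # start streaming the next layer
            x = compute_layer(weights, x)               # compute overlaps the read
    return x

class ExpertCache:
    """Frequency-aware residency: evict the least-used expert, not the least-recent."""
    def __init__(self, load_expert, capacity):
        self.load_expert, self.capacity = load_expert, capacity
        self.resident, self.hits = {}, defaultdict(int)

    def get(self, expert_id):
        self.hits[expert_id] += 1
        if expert_id not in self.resident:
            if len(self.resident) >= self.capacity:
                coldest = min(self.resident, key=lambda e: self.hits[e])
                del self.resident[coldest]               # rarely-routed experts go first
            self.resident[expert_id] = self.load_expert(expert_id)
        return self.resident[expert_id]
```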
- erikcw - 10201 seconds ago
Simon Willison wrote a good post about Dan Woods' work on "Autoresearching Apple's 'LLM in a Flash' to run Qwen 397B locally".
Nerd news! 🤓