Ollama is now powered by MLX on Apple Silicon in preview
- abu_ameena - 21673 seconds ago
On-device models are the future. Users prefer them. No privacy issues. No dealing with connectivity, tokens, or changes to vendor implementations. I have an app using the Foundation Models framework, and it works great. I only wish I could backport it to pre-macOS 26 versions.
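For context, a minimal sketch of what such a Foundation Models call looks like in Swift (illustrative only; it assumes macOS 26 / iOS 26 with Apple Intelligence enabled, and the function name is made up):

    import FoundationModels

    // Rough sketch: ask Apple's on-device foundation model for a completion.
    @available(macOS 26, iOS 26, *)
    func summarize(_ note: String) async throws -> String {
        let session = LanguageModelSession()          // fresh on-device session
        let response = try await session.respond(to: "Summarize in one sentence: \(note)")
        return response.content                        // plain-text model output
    }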
- franze - 43661 seconds ago
I created "apfel" (https://github.com/Arthur-Ficial/apfel), a CLI for Apple's on-device local foundation model (Apple Intelligence). Yes, it's super limited with its 4k context window and very common false-positive guardrails (just ask it to describe a color)... but still, using it in bash scripts that just work, without calling home or incurring extra costs, feels super powerful.
- jiehong - 3773 seconds ago
This is excellent news!
What I'm waiting for next is MLX-supported speech recognition directly from Ollama. I don't understand why it should be a separate thing entirely.
- babblingfish - 55380 seconds ago
LLMs on device are the future. It's more secure, it eases the mismatch between inference demand and data center supply, and it would use less electricity. It's just a matter of getting the performance good enough. Most users don't need frontier-model performance.
- Yukonv - 46764 seconds ago
Good to see Ollama catching up with the times for inference on Mac. MLX-powered inference makes a big difference, especially on M5, as their graphs point out. What has really been a game changer for my workflow is using https://omlx.ai/, which has SSD KV cold caching: no longer having to worry about a session falling out of memory and needing to prefill again. Combine that with the M5 Max prefill speed and more time is spent on generation than on waiting for a 50k+ context window to process.
- robotswantdata - 44076 seconds ago
Why are people still using Ollama? Seriously.
Lemonade or even llama.cpp are much better optimised and arguably just as easy to use.
- bwfan123 - 17639 seconds ago
What is the cheapest usable local rig for coding? I don't want fancy agents and such, but something purpose-built for coders, fast enough for my use, and open-source, so I can tweak it to my liking. Things are moving fast, and I am hesitant to put in 3-4K now when it might be cheaper if I wait.
- domh - 32174 seconds ago
I have an M4 Max with 48GB RAM. Anyone have any tips for good local models? Context length? Using the model recommended in the blog post (qwen3.5:35b-a3b-coding-nvfp4) with Ollama 0.19.0, it can take anywhere between 6 and 25 seconds (after lots of thinking) to respond to me asking "Hello world". Is this the best that's currently achievable with my hardware, or is there something that can be configured to get better results?
- LuxBennu - 54129 seconds ago
Already running Qwen 70b 4-bit on an M2 Max 96GB through llama.cpp and it's pretty solid for day-to-day stuff. The MLX switch is interesting because Ollama was basically shelling out to llama.cpp on Mac before, so native MLX should mean better memory handling on Apple Silicon. Curious to see how it compares on the bigger models vs the GGUF path.
- codelion - 54776 seconds ago
How does it compare to some of the newer MLX inference engines like optiq that support turboquantization? https://mlx-optiq.pages.dev/
- xmddmx - 18713 seconds ago
On an M4 Pro MacBook Pro with 48GB RAM I did this test:
ollama run $model "calculate fibonacci numbers in a one-line bash script" --verbose
I can't comment on the quality differences (if any) between these three.

    Model                     Prompt eval rate (tok/s)   Eval rate (tok/s)
    -----------------------------------------------------------------------
    qwen3.5:35b-a3b-q4_K_M              6.6                    30.0
    qwen3.5:35b-a3b-nvfp4              13.2                    66.5
    qwen3.5:35b-a3b-int4               59.4                    84.4

- a-dub - 22764 seconds ago
Is local LLM inference on modern MacBook Pros comfortable yet? When I played with it a year or so ago, it worked fairly OK but definitely produced uncomfortable levels of heat.
(Regarding MLX: there were toolkits built on MLX that supported QLoRA fine-tuning and inference, but they also produced a bunch of heat.)
- jwr - 14985 seconds ago
Two things: 1) MLX has been available in LM Studio for a long time now; 2) I found that GGUF produced consistently better results in my benchmarking. The difference isn't big, but it's there.
- dial9-1 - 54720 seconds ago
Still waiting for the day I can comfortably run Claude Code with local LLMs on macOS with only 16GB of RAM.
- braum - 19384 seconds ago
How does Ollama help with Claude Code? Claude Code runs in the terminal but AFAIK connects back to Anthropic directly and cannot run locally. I hope I'm missing something obvious.
- mfa1999 - 51732 seconds ago
How does this compare to llama.cpp in terms of performance?
- daveorzach - 35379 seconds ago
What are the significant differences between Ollama and LM Studio now? I haven't used Ollama because it was missing MLX when I started using LLM GUIs.
- harel - 42469 seconds ago
What would be the non-Mac computer to run these models locally at the same performance profile? Are there any similar Linux ARM-based computers that can reach the same level?
- rurban - 17959 seconds ago
Does that mean they are now finally a bit faster than llama.cpp? I can't believe that.
- dev_l1x_be - 30445 seconds ago
> Please make sure you have a Mac with more than 32GB of unified memory.
Time for an upgrade, I guess. If I can run Qwen3.5 locally then it is time to switch over to local-first LLM usage.
- adolph - 10590 seconds ago
Much of the discussion here is local versus remote. I like seeing things as "and" rather than "or." There will be small things I don't want to burn my Claude tokens on, and other things for which I want access to larger compute resources. And along the way, I'll check results from both to understand the comparative advantage on an ongoing basis.
- androiddrew - 31751 seconds ago
Get turboquant 4-bit implemented and this would be a game changer.
- ranjeethacker - 21500 seconds ago
I used it today; it's working nicely.
- janandonly - 37513 seconds ago
> Please make sure you have a Mac with more than 32GB of unified memory.
Yeah, I can still save money by buying a cheaper device with less RAM and just paying my PPQ.AI or OpenRouter.com fees.
- harrouet - 28456 seconds ago
Being in the market for a new Mac and comparing a refurbished M4 Max vs an M5 _Pro_, I am interested in how much faster the neural engines actually are, compared to the marketing claims.
- puskuruk - 47678 seconds ago
Finally! My local infra has been waiting for this for months!
- jedisct1 - 29488 seconds ago
Works really great with https://swival.dev and Qwen3.5.
- darshanmakwana - 43696 seconds ago
Really nice to see this!
- brcmthrowaway - 52368 seconds ago
What is the difference between Ollama, llama.cpp, GGML, and GGUF?
- AugSun - 53974 seconds ago
"We can run your dumbed-down models faster":
> The use of NVFP4 results in a 3.5x reduction in model memory footprint relative to FP16 and a 1.8x reduction compared to FP8, while maintaining model accuracy with less than 1% degradation on key language modeling tasks for some models.
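As a rough sanity check on those ratios, assuming NVFP4 stores 4-bit values with a shared FP8 scale per 16-element block (about 4.5 bits per weight):

    16 bits / 4.5 bits ≈ 3.6x  (vs FP16)
     8 bits / 4.5 bits ≈ 1.8x  (vs FP8)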
- DevKoan - 12956 seconds ago
The Foundation Models point is real. As an iOS developer, what excites me most isn't the performance — it's what on-device inference does to the app architecture.
When you're not making network calls, you stop thinking in "loading states" and start thinking in "local state machines." The UX design space opens up completely. Interactions that felt too fast to justify a server round-trip are suddenly viable.
The backporting issue is painful though. I've been shipping features wrapped in #available(iOS 26, *) and the fallback UX is basically a different product. It forces you to essentially maintain two app experiences.
Still think this is the right direction — especially for junior devs just learning to ship. Fewer moving parts, less infrastructure to debug.
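A rough sketch of that gating pattern in SwiftUI (all view names here are hypothetical, not from the post):

    import SwiftUI

    // The on-device feature is gated behind #available; older OS versions
    // get a separate fallback experience, hence the "two products" feeling.
    struct SmartReplyView: View {
        let message: String

        var body: some View {
            if #available(iOS 26, *) {
                OnDeviceReplyView(message: message)        // local inference path
            } else {
                RemoteReplyFallbackView(message: message)  // server-backed fallback UX
            }
        }
    }

    @available(iOS 26, *)
    struct OnDeviceReplyView: View {
        let message: String
        var body: some View { Text("On-device reply to: \(message)") }  // stub
    }

    struct RemoteReplyFallbackView: View {
        let message: String
        var body: some View { Text("Remote reply to: \(message)") }     // stub
    }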