Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model
We were always frustrated by how little effort goes into agentic models that can run on budget phones, so we investigated and landed on an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
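To make the retrieval-and-assembly framing concrete, here is the shape of the single-shot task: a query plus tool schemas in, a JSON call out. The tool names and schema format here are illustrative assumptions, not Needle's actual prompt format:

```python
import json

# Hypothetical tool schemas the model would be conditioned on.
tools = [
    {"name": "get_weather", "parameters": {"location": "string"}},
    {"name": "set_timer", "parameters": {"duration": "string"}},
]

query = "What is the weather in San Francisco?"

# The target output is pure retrieval-and-assembly: match the query to a
# tool name, extract the argument value from the query, and emit JSON.
expected = [{"name": "get_weather", "arguments": {"location": "San Francisco"}}]

print(json.dumps(expected))
```

No chain-of-thought or world knowledge is needed for this mapping, which is the post's argument for why FFN capacity is wasted here.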
Simple Attention Networks: the entire model is just attention and gating, no MLPs anywhere. Needle is an experimental run of this architecture, built for single-shot function calling on consumer devices (phones, watches, glasses...).
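A toy NumPy sketch of an attention-plus-gating block with no FFN sublayer. This is not the actual Needle architecture (the post points at cross-attention; this toy uses self-attention for brevity), and the sigmoid gating form and shapes are assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_only_block(x, Wq, Wk, Wv, Wg):
    """One block of attention + elementwise gating; no MLP/FFN sublayer."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    attn = scores @ v
    # An elementwise sigmoid gate stands in where the FFN sublayer would be.
    gate = 1 / (1 + np.exp(-(x @ Wg)))
    return x + gate * attn  # gated residual update

rng = np.random.default_rng(0)
n, d = 5, 8  # sequence length, model width
x = rng.normal(size=(n, d))
Wq, Wk, Wv, Wg = (rng.normal(size=(d, d), scale=0.1) for _ in range(4))
out = attention_only_block(x, Wq, Wk, Wv, Wg)
print(out.shape)  # (5, 8)
```

All parameters live in the attention and gate projections, which is what makes the "no FFN" budget plausible at 26M.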
Training:
- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)
You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle
The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...
We found that the "no FFN" finding generalizes beyond function calling to any task where the model has access to an external structured knowledge source (RAG, tool use). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.
While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, and LFM2.5-350M on single-shot function calling, those models have more scope/capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.
This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544
Everything is MIT licensed. Weights: https://huggingface.co/Cactus-Compute/needle GitHub: https://github.com/cactus-compute/needle
- nl - 40025 seconds ago
Do you have any examples or data on the discriminatory power of the model for tool use?
The examples are things like "What is the weather in San Francisco", where you are only passed a tool like tools='[{"name":"get_weather","parameters":{"location":"string"}}]'.
I had a thing[1] over 10 years ago that could handle this kind of problem using SPARQL and knowledge graphs. My question is how effective it is at handling ambiguity.
Can I send it something like a text message "lets catch up at coffee tomorrow 10:00" and a command like "save this" and have it choose a "add appointment" action from hundreds (or even tens) of possible tools?
- ilaksh - 57835 seconds ago
Hmm.. this might make it feasible to build something like a command line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing", and it could be pretty bad if everyone started doing that.
But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.
E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
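A minimal sketch of that idea: pass structured flags through untouched, and fall back to a model for anything that looks like natural language. The `tiny_model` heuristic below is a hypothetical stand-in for a fine-tuned 26M model, not Needle itself:

```python
import shlex

def tiny_model(text):
    """Hypothetical stand-in for a small fine-tuned function-calling model."""
    words = text.split()
    if "add" in words and "to" in words:
        # e.g. "add tom to teamfutz group" -> --gadd teamfutz tom
        return ["--gadd", words[words.index("to") + 1], words[words.index("add") + 1]]
    return ["--help", "summary"]

def parse(argv):
    # Anything starting with a flag is treated as a normal CLI invocation;
    # everything else is routed through the model as natural language.
    if argv and argv[0].startswith("--"):
        return argv
    return tiny_model(" ".join(argv))

print(parse(shlex.split("add tom to teamfutz group")))
# → ['--gadd', 'teamfutz', 'tom']
```

The appeal is that the fallback path costs nothing when users type flags, and a per-program fine-tune keeps the NL path small.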
- simonw - 56773 seconds ago
Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!
- varenc - 23792 seconds ago
Are you worried about Google's response to this? Google reportedly reacts to distillation attempts "with real-time proactive defenses that can degrade student model performance". So if they detected you, they could have intentionally fed you a dumber but plausible variant of Gemini: https://cloud.google.com/blog/topics/threat-intelligence/dis...
But also, this model is small and just focusing on the tool use. In terms of token usage, you're probably not anywhere near the people that are trying to distill the entire model.
- kgeist - 39094 seconds ago
> Experiments at Cactus showed that MLPs can be completely dropped from transformer networks, as long as the model relies on an external knowledge source.
Heh, what a coincidence: just today one of my students presented research results which also confirmed this. He removed the MLPs from Qwen and the model could still do transformation tasks on its input, but lost knowledge.
- jumploops - 23930 seconds ago
This is neat, and matches an observation I saw with early Claude Code usage:
Sonnet would often call tools quickly to gather more context, whereas Opus would spend more time reasoning and trying to solve a problem with the context it had.
This led to lots of duplicated functions and slower development, though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less.
My takeaway was that “dumber” (i.e. smaller) models might be better as an agentic harness, or at least feasibly cheaper/faster to run for a large swath of problems.
I haven’t found Gemini to be particularly good at long horizon tool calling though. It might be interesting to distill traces from real Codex or Claude code sessions, where there’s long chains of tool calls between each user query.
Personally, I’d love a slightly larger model that runs easily on an e.g. 32GB M2 MBP, but with tool calling RL as the primary focus.
Some of the open weight models are getting close (Kimi, Qwen), but the quantization required to fit them on smaller machines seems to drop performance substantially.
- kristopolous - 54731 seconds ago
That M versus B is way too subtle. 0.026B is my suggestion.
- tomaskafka - 47158 seconds ago
Awesome! I just tried to set an alarm and add some groceries to the shopping list, and it outperformed Siri.
- brainless - 42747 seconds ago
Lovely to see the push for tiny models.
I have been building for small (20B or less) models for quite a while. Highly focused/constrained agents, many of them running together in some kind of task orchestration mode to achieve what feels like one "agent".
I build (privacy first) desktop apps this way and I want to get into mobile apps with similar ideas but tiny models.
- meander_water - 20970 seconds ago
I'm so excited for this, nice work!
Gemma4 edge models were promised to be great for agentic use, but have been really disappointing in all my tests. They fail at the most basic tool use scenarios.
Have you run any tool-use benchmarks for Needle, or do you plan to? Would be great if you could add results to the repo if so.
- exabrial - 46020 seconds ago
Dumb questions, from someone not in the field...
What is a distilled model?
Why doesn't Google do this (to make their models smaller)?
Seems like you could make a competitor to Gemini?
- binyang_qiu - 21115 seconds ago
A lot of agent workflows really are just tool selection + argument extraction + structured output. How does this behave once workflows become multi-step and state starts accumulating across calls?
- Liam_Simpkin - 8224 seconds ago
How could you use this for composability? I.e. chaining together multiple tools. For example web_search → summarize_url → send_email
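Since the post describes Needle as single-shot, composition like this would presumably live in a small harness loop outside the model, feeding each tool's result into the next call. Everything below is a hypothetical stub; the fixed `plan` stands in for successive model outputs:

```python
# Hypothetical tool implementations (stubs for illustration only).
def web_search(query):
    return "https://example.com/article"

def summarize_url(url):
    return f"summary of {url}"

def send_email(body):
    return f"sent: {body}"

TOOLS = {
    "web_search": web_search,
    "summarize_url": summarize_url,
    "send_email": send_email,
}

# A fixed plan standing in for successive single-shot model calls;
# each step receives the previous step's result when no argument is given.
plan = [("web_search", "needle model"), ("summarize_url", None), ("send_email", None)]

result = None
for name, arg in plan:
    result = TOOLS[name](arg if arg is not None else result)
print(result)  # sent: summary of https://example.com/article
```

Whether the model can pick the *next* tool reliably when re-prompted with intermediate results is exactly the multi-step question raised above.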
- simonw - 58763 seconds ago
Looks like you need to open up access to https://huggingface.co/Cactus-Compute/datasets/needle-tokeni... - I get this error when trying to run the steps in your README:
> Repository Not Found for url: https://huggingface.co/api/datasets/Cactus-Compute/needle-tokenizer/revision/main.
- Havoc - 53812 seconds ago
Sounds interesting.
Got a bunch of errors trying to run it on CPU though. Very likely connected to me running this in a container (unpriv LXC), but figured for 26M CPU would suffice.
- bityard - 50550 seconds ago
This is pretty much exactly what I want for Home Assistant. I yell out, "Computer! Lights!" and it toggles the lamp in the room on or off. (I mean I can do that now, I think, but probably with a much larger model.)
I haven't played with it yet, but does it ever return anything other than a tool call? What are the failure modes? What if it doesn't understand the request? Does it ever say it can't find a tool? Does it get confused if there are two similar (but different) tools? Can it chain tools together (e.g. one tool to look up and address and another to get directions to the address)?
I mean, I plan on downloading the model later tonight and finding out for myself, but since I'm stuck at work right now, I figured I'd ask anyway...
- rsolva - 51174 seconds ago
Can it summarize text it fetches?
Come to think of it, this could be a nice model to have as the first pass in a more complex agent system, where Needle hands off the results of a tool call to a larger model.
I will definitely play around with this!
- alex7o - 47766 seconds ago
Of all the models that do tool calls, the only thing I'm confused about is why you picked the worst? Or maybe it's only bad at agentic work but fine for one-shot tool calls?
- murkt - 55788 seconds ago
Can this be a Siri-like core? Set me a timer, tell me what's the weather, etc. Here is transcribed text and an available list of tools for the model to call, and voice the output.
- z3ugma - 48181 seconds ago
I don't really understand what this is for... there is a lot of ML-researcher talk on the GH page about the model architecture, but how should I use it?
Is it a replacement for Kimi 2.7, Claude Haiku, Gemini Flash 3.1 lite, a conversational LLM for the situations where it's mostly tool-calling like coding and conversational AI?
- syntaxing - 46236 seconds ago
This would be amazing for home assistant.
- logdahl - 53493 seconds ago
I find this stuff super fascinating and have been thinking about it myself. Maybe one could bootstrap tiny models on a rather 'pure' procedural data set. Neglecting [0] of course...
[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
- zamalek - 52384 seconds ago
Is the idea here to add function calling to models that don't have it, or even improve function calling (qwen quirks)?
- efskap - 39637 seconds ago
No FFN is blowing my mind. This is pretty much "Attention Is ACTUALLY All You Need". Reminds me of BERT Q&A, which would return indices into the input context, but even that had a FFN. Really exciting work.
- isaisabella - 21640 seconds ago
Nice catch. Using an agent for simple tasks is inefficient and wasteful; Needle really resolves this. Looking forward to future upgrades!
- quadrature - 52262 seconds ago
Does the model have capacity for in-context learning? If we give it examples of patterns, can it follow them?
- dangoodmanUT - 48841 seconds ago
Why pick Gemini? It's probably the worst tool-calling model of the major labs.
- sroussey - 41283 seconds ago
Can this be converted to ONNX or otherwise be used in a browser?
- casey2 - 24818 seconds ago
Query: set a timer for 1 hour
Result: [{"name":"set_timer","arguments":{"time_human":"1 hour"}}]
Query: in 1 hour set a timer for 1 hour
Result: [{"name":"set_timer","arguments":{"time_human":"1 hour"}}]
I'd expect either a chain load or just a 2 hour timer. Further attempts humorously give two separate 1-hour-timers.
- roggenbuck - 47477 seconds ago
This is some excellent work, Henry! Very excited to try it out.
- cmrdporcupine - 56883 seconds ago
This is very cool. I'm going to try to carve out some time to try building this into my MOO system ( https://codeberg.org/timbran/moor / https://timbran.org/moor.html ) as an alternative command parser front end.
- deepsquirrelnet - 53579 seconds ago
This is really cool. Any plans to release the dataset?
- theykk - 43143 seconds ago
hey nice work, is it possible to release the datasets?
- halyconWays - 28927 seconds ago
I assume this would only be useful as the second stage after a model like Whisper, as it can't understand speech where you'd want it, like on a phone or small device?
- varispeed - 48594 seconds ago
What is the use case for this?
- BoredPositron - 49980 seconds ago
I source old, defective high-end radios with timeless designs from brands like Grundig or Braun, and replace the original hardware with a Raspberry Pi while using the original audio parts to build custom smart speakers. Reliable hotword detection and voice command recognition have been a persistent challenge over the years, but Whisper and other small models have helped enormously. At the moment I have ollama running on my server with qwen 9b, which works fine, but a 26M that could be deployed on the Pi itself would be amazing.
- ac29 - 55846 seconds ago
FYI, distilling Gemini is explicitly against the ToS:
"You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."