Show HN: GlycemicGPT – Open-source AI-powered diabetes management
- Dexcom G7 (cloud API)
- Tandem t:slim X2 and Mobi pumps (direct BLE)
- Nightscout (point it at your existing instance and you're running in minutes)
What the AI layer does:
- Daily briefs summarizing overnight and 24-hour patterns
- Meal response analysis
- Conversational chat with RAG-backed clinical knowledge
- Predictive alerting with configurable thresholds and caregiver escalation
Important: this is monitoring and analysis only. GlycemicGPT does not deliver insulin, does not control your pump, and is not a closed-loop system. It reads your data and gives you insight on top of it. Your clinical decisions stay between you and your care team.
Architecture:
- Self-hosted via Docker or K8s — the GlycemicGPT stack runs entirely on your hardware
- BYOAI — bring your own AI provider. Use Ollama for fully local operation (no data leaves your hardware), or point it at Claude, OpenAI, or any OpenAI-compatible endpoint if you prefer a hosted model. Data flows directly from your instance to the provider you choose; nothing is routed through any centralized service operated by the project.
- GPL-3.0, no subscriptions, no vendor lock-in
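The BYOAI swap works because every supported provider speaks the same OpenAI-compatible wire format, so switching between a local Ollama instance and a hosted model is just a change of base URL and model name. A minimal sketch of that idea (function and model names here are illustrative, not GlycemicGPT's actual API):

```python
# Minimal sketch of the BYOAI idea: any OpenAI-compatible provider is just
# a base URL plus a model name. Names are illustrative only.

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Return the endpoint URL and JSON payload for a chat completion."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

# Fully local: point at an Ollama instance on your own hardware.
local_url, local_req = build_chat_request(
    "http://localhost:11434", "llama3.1", "Summarize last night's glucose trend."
)

# Hosted: identical request shape, different base URL (plus credentials).
hosted_url, hosted_req = build_chat_request(
    "https://api.openai.com", "gpt-4o-mini", "Summarize last night's glucose trend."
)

print(local_url)  # http://localhost:11434/v1/chat/completions
```

Because the request shape never changes, nothing in the stack needs to know which provider is behind the URL.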
Stack:
- Backend API: FastAPI, Python 3.12, PostgreSQL 16, Redis 7
- Web Dashboard: Next.js 15, React 19, Tailwind CSS, shadcn/ui
- AI Sidecar: TypeScript, Express, multi-provider proxy
- Android App: Kotlin, Jetpack Compose, BLE
- Wear OS: Kotlin, Wear Compose, Watch Face Push API
- Plugin SDK: Kotlin interfaces, capability-based, sandboxed
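To make "capability-based, sandboxed" concrete: a device plugin declares what it is allowed to do, and the host checks those declarations before invoking it. The real SDK is Kotlin interfaces; the sketch below is a hypothetical Python analogue with invented names, just to show the shape of the contract:

```python
# Hypothetical sketch of a capability-based plugin contract. The actual
# Plugin SDK is Kotlin; every name here is invented for illustration.
from abc import ABC, abstractmethod
from enum import Enum, auto

class Capability(Enum):
    READ_GLUCOSE = auto()
    READ_PUMP_STATUS = auto()

class DevicePlugin(ABC):
    @property
    @abstractmethod
    def capabilities(self) -> set[Capability]:
        """Capabilities this plugin declares up front."""

    @abstractmethod
    def poll(self) -> dict:
        """Fetch the latest reading from the device."""

class FakeCgmPlugin(DevicePlugin):
    """Toy CGM source that returns a fixed reading."""
    @property
    def capabilities(self) -> set[Capability]:
        return {Capability.READ_GLUCOSE}

    def poll(self) -> dict:
        return {"glucose_mg_dl": 110}

def host_poll(plugin: DevicePlugin) -> dict:
    # Sandboxing idea: the host refuses any call the plugin
    # did not declare a capability for.
    if Capability.READ_GLUCOSE not in plugin.capabilities:
        raise PermissionError("plugin cannot read glucose")
    return plugin.poll()

print(host_poll(FakeCgmPlugin()))  # {'glucose_mg_dl': 110}
```

The point of the capability set is that a plugin for a new device can be granted only the access it needs, rather than full access to the host.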
Looking for contributors — especially folks with BLE/Android experience or anyone in the diabetes tech space. Plugin SDK is documented if you want to add support for new devices. GitHub: https://github.com/GlycemicGPT/GlycemicGPT
- mhovd - 9061 seconds ago
The risk-to-benefit ratio of introducing a language model to interpret such clear signals is nowhere near justified.
Monitoring and analytics are important, but they are a solved problem. A language model will only be able to hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it can directly misinform.
- surgicalcoder - 10912 seconds ago
I'm a T1D with an insulin pump looping via AndroidAPS and Nightscout. What does this give you that Nightscout and Autotune don't?
And how do you deal with AI hallucinations?
- sexylinux - 1330 seconds ago
You know that current AI systems are not reliable and produce errors?
How do you protect your life and the lives of others using your software against potentially lethal errors?
- throwatdem12311 - 1409 seconds ago
I mean this in the nicest way possible.
But if someone dies because this thing hallucinates in its reporting - would you feel any sense of culpability?
“GPL says no warranty”
“People need to double check LLM output”
“You’re holding it wrong”
I really don’t know if we, collectively as a civilization, should be willing to accept this kind of hand-waving when it comes to creating things like this. Sure, tools make mistakes and people misinterpret reports without the help of LLMs - but LLMs are on a whole other level, where mistakes are part of how these things work at a fundamental level.
I don’t even trust AI scribes at my doctor’s office to transcribe my appointment due to errors. There is no way in hell I would ever use something like this that could just straight up lie about something that kills me if I get it wrong.
- darkhorse13 - 3224 seconds ago
This is quite possibly a horrible idea. Personal anecdote: ChatGPT once read a blood work report value as 40 when the actual report said 4.
- vsaravind007 - 4639 seconds ago
Looks interesting. Being a Whoop user for the last few years, I have seen for myself that their AI Coach / AI-based suggestions are hit or miss 3 out of 10 times, so I'm slightly concerned about how accurate this will be. Not a diabetic patient, but I do monitor my levels with a CGM from time to time - will definitely check it out!
- tornadofart - 8840 seconds ago
I'm a T1D and tbh it's not that hard to manage; I just wouldn't need that. But for kids or the elderly, I see a use case.
The hardest lesson to learn was that an unhealthy lifestyle resulted in diabetes that was harder to manage. Too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.
The most pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even when accounting for time lag. I've used several CGMs and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and have significantly improved their HbA1c after accounting for that.
Maybe that information is useful to you.
- AnthonBerg - 5994 seconds ago
Went through pregnancy with the mother having recently-diagnosed T1 diabetes – just barely not killed by grave neglect on the part of the healthcare system, due to how badly they missed the diagnosis to begin with.
On your work:
this is legit
it is appreciated
Hats off, I salute this, thank you
- axegon_ - 9618 seconds ago
"This will all end in tears, I just know it"
Marvin
- foo-bar-baz529 - 8836 seconds ago
What’s the limit on badges in a README?
- xyzal - 10311 seconds ago
This is THE ONE domain where you would want to use classical machine learning and not unreliable LLMs. Unless you want to kill yourself, that is.
- fnands - 12610 seconds ago
The alerting system and sharing with caregivers are a solved problem already (e.g. Dexcom's Follow, Abbott's LibreLinkUp).
Do you find the analytics actually help? I.e. won't a lot of this depend on what you ate and whether or not you logged it?
- mexicocitinluez - 1565 seconds ago
So, I'm in the medical field building an EMR, and LLMs have obviously been a really important topic in the industry over the last few years. We're still not even sure that giving LLM-assisted suggestions TO ACTUAL DOCTORS AND CLINICIANS will be helpful, let alone to the patients themselves.
It's breaking the golden rule of these tools, which is to have someone with enough knowledge to verify the accuracy of the data they spit out. Patients famously don't. Hell, even the actual staff don't really understand or know how these tools work (or the ways in which you can/can't trust them).
- MassiveOwl - 4439 seconds ago
I've done this with the Libre 2 sensor. I added Gemini to it. It gets about two weeks of readings at once, and the user can "chat to their data". I added a meals tool as well, where the user can photograph their meal and the AI estimates the impact on the readings.
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, every day. A person doesn't have their own blood glucose data in their head.
- andai - 7273 seconds ago
Life imitates comedy...
- maleldil - 5757 seconds ago
I'm just happy to see a GPL project.
- emsign - 5830 seconds ago
FDA approved?
- Jasssss - 6839 seconds ago
[flagged]