Google Gemma 4 Runs Natively on iPhone with Full Offline AI Inference
takumi123
278 points
168 comments
April 15, 2026
Related Discussions
Found 5 related stories in 62.1ms across 4,686 title embeddings via pgvector HNSW
- Gemma 4 on iPhone janandonly · 534 pts · April 05, 2026 · 73% similar
- Google releases Gemma 4 open models jeffmcjunkin · 1306 pts · April 02, 2026 · 64% similar
- Gemma 4: Byte for byte, the most capable open models meetpateltech · 21 pts · April 02, 2026 · 62% similar
- Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud ikessler · 39 pts · April 06, 2026 · 60% similar
- Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code vbtechguy · 232 pts · April 05, 2026 · 59% similar
Discussion Highlights (20 comments)
andsoitis
Is there a comparison of it running on iPhone vs. Android phones?
mistic92
It runs on Android too, with AICore or even with llama.cpp.
bossyTeacher
Is the output coherent, though? I have yet to see a local model running on consumer-grade hardware that is actually useful.
pabs3
> edge AI deployment
Isn't the "edge" meant to be computing near the user, but not on their devices?
codybontecou
Unfortunately, Apple appears to be blocking the use of these LLMs within apps on the App Store. I've been trying to ship an app that contains local LLMs and have hit a brick wall with guideline 2.5.2.
karimf
Related: Gemma 4 on iPhone (254 comments) - https://news.ycombinator.com/item?id=47652561
logicallee
For those who would like an example of its output: I'm currently working on a small, free (CC0, public domain) encyclopedia (just a couple of thousand entries) of core concepts in Biology and Health Sciences, Physical Sciences, and Technology. Each entry is written entirely by Gemma 4:e4b (the 10 GB model). I believe this may be slightly larger than the model that runs locally on phones, so this one is perhaps slightly better, but the output is similar. Here is an example entry: https://pastebin.com/ZfSKmfWp Seems pretty good to me!
usmanshaikh06
ESET is blocking this site, saying: "Threat found. This web page may contain dangerous content that can provide remote access to an infected device, leak sensitive data from the device, or harm the targeted device. Threat: JS/Agent.RDW trojan."
ValleZ
There are many apps to run local LLMs on both iOS and Android.
temp7000
Is it me, or does the article sound like LLM output? The pattern "It's not mere X — it's Y" occurs about four times in the text :v
bearjaws
Would love to see a showdown of performance on iPhone vs. Google's Tensor G5; in my experience, the G5 is two full generations behind performance-wise.
Chrisszz
I just installed Google AI Edge Gallery on my iPhone 16 Pro. Here are the results of the first benchmark with GPU (Prefill Tokens=256, Decode Tokens=256, 3 runs): Prefill Speed=231 t/s, Decode Speed=16 t/s, Time to First Token=1.16 s, First init time=20 s.
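For reference, driving the same runtime from your own app rather than the Gallery UI looks roughly like the sketch below, using MediaPipe's iOS LLM Inference API. The `gemma-4-e2b.task` filename is an assumption — substitute whichever task bundle you actually downloaded, and note that newer MediaPipe releases move sampling options onto a session object.

```swift
import Foundation
import MediaPipeTasksGenAI

// Minimal one-shot generation sketch. The "gemma-4-e2b.task" bundle name is
// hypothetical -- point this at whichever LiteRT task file you downloaded.
func runLocalGemma(prompt: String) throws -> String {
    guard let modelPath = Bundle.main.path(forResource: "gemma-4-e2b",
                                           ofType: "task") else {
        throw CocoaError(.fileNoSuchFile)
    }

    // maxTokens bounds prompt + response length, which keeps memory use
    // predictable on phone-class hardware.
    let options = LlmInference.Options(modelPath: modelPath)
    options.maxTokens = 512

    // Blocking call, comparable to one decode run in the Gallery benchmark.
    let llm = try LlmInference(options: options)
    return try llm.generateResponse(inputText: prompt)
}
```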
conception
I’m pretty excited about the Edge Gallery iOS app with Gemma 4 on it, but it seems like they hobbled it: no access to intents, and you have to write custom plugins for web search, etc. Does anyone have a favorite way to run these usefully? ChatMCP works pretty well but only supports models via API.
the_inspector
You are referring to the edge models, right? E2B and E4B, not the bigger ones (26B, 31B)...
grimmai143
Do you know of a way of running these models on Android? Also, what does the thermal throttling look like?
DoctorOetker
Does anyone know of a decent but low-memory or low-parameter-count multilingual model (as many languages as possible) that can faithfully produce a detailed IPA transcription given a word in a sentence in some language? I want to test a hypothesis for "uploading" neural network knowledge to a user's brain via a reaction-speed game.
mfro
Strangely, it is super fast on my 16 Plus, but with longer messages it can slow down a LOT, and not because of thermal throttling. I wish I could see some diagnostic data.
jimbokun
I feel like UX and API design are very underexplored here. What are the possibilities of an Android or iOS device where the OS is centered around a locally running LLM, with an API for accessing it from apps, along with tools the LLM can call to access data from locally running apps? What’s the equivalent of the original Mac OS? Do apps disappear, leaving just a running dialog with the LLM generating graphical displays on demand as needed?
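As a thought experiment only: no such framework exists on iOS or Android today, but an OS-level API in the spirit of this comment might look something like the sketch below, with every type name hypothetical.

```swift
import Foundation

// Hypothetical sketch of an OS-owned local-LLM API; none of these types ship
// in any current mobile OS.

// A capability an app registers with the system model, analogous to an intent.
protocol SystemModelTool {
    var name: String { get }        // e.g. "notes.search"
    var summary: String { get }     // natural-language hint for the model
    func invoke(arguments: [String: String]) async throws -> String
}

// The OS-owned session: apps get inference plus tool routing, never raw weights.
protocol SystemModelSession {
    func register(_ tool: SystemModelTool)
    func respond(to prompt: String) async throws -> String
}

// Example tool: a notes app exposing its local data to the on-device model.
struct NoteSearchTool: SystemModelTool {
    let name = "notes.search"
    let summary = "Search the user's local notes for a phrase."
    func invoke(arguments: [String: String]) async throws -> String {
        let query = arguments["query"] ?? ""
        // A real implementation would query the app's local store here.
        return "3 notes mention \"\(query)\""
    }
}
```

The register/invoke split loosely mirrors today's App Intents model, just with the local LLM rather than Siri doing the routing.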
blixt
I made an offline pocket vibe coder using Gemma 4 on an iPhone (it works offline once the model is downloaded). It can technically run the 4B model, but it defaults to 2B because of memory constraints. https://github.com/blixt/pucky It writes a single TypeScript file (I tried multiple files, but embedded Gemma 4 is just not smart enough) and compiles the code with oxc. You need to build it yourself in Xcode, because this probably wouldn't survive the App Store review process. Once you run it, there are two starting points included (React Native and Three.js); the UX is a bit obscure, but edge-swipe left/right to switch between views.
deckar01
They still don’t render the markdown (or LaTeX) it outputs.