Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud

ikessler 39 points 5 comments April 06, 2026
github.com

Gemma Gem is a Chrome extension that loads Google's Gemma 4 (2B) through WebGPU in an offscreen document and gives it tools to interact with any webpage: read content, take screenshots, click elements, type text, scroll, and run JavaScript. You get a small chat overlay on every page. Ask it about the page and it (usually) figures out which tools to call. It has a thinking mode that shows chain-of-thought reasoning as it works.

It's a 2B model in a browser. It works for simple page questions and running JavaScript, but multi-step tool chains are unreliable and it sometimes ignores its tools entirely. The agent loop has zero external dependencies and can be extracted as a standalone library if anyone wants to experiment with it.
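The post doesn't show the agent loop itself, but the shape it describes (model picks a tool, the extension runs it, the result is fed back in) can be sketched in a few lines. Everything here is illustrative: `callModel`, `TOOLS`, `parseToolCall`, and the JSON tool-call format are assumptions, not the extension's actual API.

```javascript
// Toy stand-ins for the extension's page tools. In the real extension these
// would message a content script to read the DOM, click, scroll, etc.
const TOOLS = {
  read_page: async () => "<page text would go here>",
  // eval() is only a placeholder for "run JavaScript on the page".
  run_js: async ({ code }) => String(eval(code)),
};

// Treat a bare JSON object with a "tool" field as a tool call;
// anything else is the model's final answer.
function parseToolCall(reply) {
  try {
    const call = JSON.parse(reply);
    if (call && typeof call.tool === "string") return call;
  } catch (_) {}
  return null;
}

// Minimal agent loop: call the model, dispatch any tool it asks for,
// append the result to the transcript, repeat until it answers in prose.
async function agentLoop(callModel, userMessage, maxSteps = 5) {
  const transcript = [{ role: "user", content: userMessage }];
  for (let i = 0; i < maxSteps; i++) {
    const reply = await callModel(transcript);
    const call = parseToolCall(reply);
    if (!call) return reply; // final answer, no tool requested
    const tool = TOOLS[call.tool];
    const result = tool
      ? await tool(call.args ?? {})
      : `unknown tool: ${call.tool}`;
    transcript.push(
      { role: "assistant", content: reply },
      { role: "tool", content: String(result) },
    );
  }
  return "(gave up after maxSteps tool calls)";
}
```

With a stubbed model that first emits `{"tool":"run_js","args":{"code":"2+2"}}` and then a prose answer, the loop runs the tool once and returns the prose. This mirrors why small models struggle here: one malformed JSON reply and the whole chain falls back to plain text.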

Discussion Highlights (3 comments)

avaer

There's also the Prompt API, currently in Origin Trial, which supports this API surface for sites: https://developer.chrome.com/docs/ai/prompt-api

I just checked the stats: Model Name: v3Nano, Version: 2025.06.30.1229, Backend Type: GPU (highest quality), Folder size: 4,072.13 MiB.

Different use case but a similar approach. I expect that at some point this will become a native web feature, but not anytime soon, since the model download is many multiples of the size of the browser itself. Maybe at some point these APIs could use LLMs built into the OS, like we do for graphics drivers.
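For comparison, a hedged sketch of using the Prompt API the comment links to. The global `LanguageModel` object only exists in Chrome builds with the Origin Trial (or flag) enabled, so this guards for its absence and returns `null` everywhere else; the exact surface may change while the API is in trial.

```javascript
// Attempt an on-device prompt via Chrome's Prompt API, falling back to null
// when the API isn't present (non-Chrome environments, trial not enabled).
async function promptOnDevice(text) {
  if (typeof LanguageModel === "undefined") return null;
  const availability = await LanguageModel.availability();
  if (availability === "unavailable") return null;
  const session = await LanguageModel.create(); // may trigger a model download
  return session.prompt(text); // runs on-device, no network round trip
}
```

The feature-detection guard is the point: sites can offer on-device inference opportunistically and degrade to a cloud call (or nothing) when the model isn't there.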

emregucerr

I would love to see someone build this as an SDK. App builders could use it as a local-LLM plugin when dealing with sensitive data. It's usually too much to ask users to set up a local LLM themselves, but I believe this could solve that problem.

montroser

Not sure if I actually want this (pretty sure I don't) -- but very cool that such a thing is now possible...
