Gemma 4: Byte for byte, the most capable open models

meetpateltech 21 points 2 comments April 02, 2026
blog.google

Discussion Highlights (1 comment)

virgildotcodes

Downloaded through LM Studio on an M1 Max 32GB, running the 26B A4B at Q4_K_M. First message: https://i.postimg.cc/yNZzmGMM/Screenshot-2026-04-03-at-12-44... Not sure if I'm doing something wrong? This more or less reflects my experience with most local models over the last couple of years (although admittedly most aren't anywhere near this bad). People keep saying they're useful, and yet I can't get them to be consistently useful at all.
