Gemma 4: Byte for byte, the most capable open models
meetpateltech
21 points
2 comments
April 02, 2026
Related Discussions
Found 5 related stories in 56.0ms across 3,471 title embeddings via pgvector HNSW
- Google releases Gemma 4 open models jeffmcjunkin · 1306 pts · April 02, 2026 · 78% similar
- Gemini 3.1 Flash-Lite: Built for intelligence at scale meetpateltech · 51 pts · March 03, 2026 · 67% similar
- Apple Can Create Smaller On-Device AI Models from Google's Gemini thm · 25 pts · March 25, 2026 · 61% similar
- Gemini 3.1 Flash Live: Making audio AI more natural and reliable meetpateltech · 12 pts · March 26, 2026 · 56% similar
- Gemini Embedding 2: natively multimodal embedding model panarky · 22 pts · March 10, 2026 · 53% similar
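The related-story lookup described above ranks stories by cosine similarity between title embeddings, with a pgvector HNSW index accelerating the nearest-neighbor search. A minimal sketch of the underlying ranking step, using toy hand-written vectors in place of real model-generated embeddings and a plain Python sort in place of the pgvector index:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy title embeddings. A production system would store model-generated
# vectors in Postgres and query them through a pgvector HNSW index
# instead of scanning in Python.
titles = {
    "Google releases Gemma 4 open models": [0.9, 0.1, 0.2],
    "Gemini Embedding 2: natively multimodal embedding model": [0.2, 0.8, 0.3],
}

# Embedding of the current story's title (also a toy vector).
query = [0.85, 0.15, 0.25]

# Rank candidate stories by similarity to the query, most similar first.
ranked = sorted(
    titles.items(),
    key=lambda item: cosine_similarity(query, item[1]),
    reverse=True,
)
```

An HNSW index gives the same ranking approximately but in sublinear time, which is how a 3,471-title corpus can be searched in tens of milliseconds.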
Discussion Highlights (1 comment)
virgildotcodes
Downloaded through LM Studio on an M1 Max (32 GB): the 26B A4B model at Q4_K_M. First message: https://i.postimg.cc/yNZzmGMM/Screenshot-2026-04-03-at-12-44... Not sure if I'm doing something wrong? This more or less reflects my experience with most local models over the last couple of years (although admittedly most aren't anywhere near this bad). People keep saying they're useful, and yet I can't get them to be consistently useful at all.