MiniMax M2.7 Is Now Open Source
steveharing1
82 points
35 comments
April 12, 2026
Related Discussions
Found 5 related stories in 46.3ms across 4,351 title embeddings via pgvector HNSW
- MiniMax Music 2.5 – AI Music Generation Model for Fast Song Creation cy20251210 · 15 pts · March 09, 2026 · 61% similar
- Small Models Are Smart Enough m-hodges · 15 pts · April 10, 2026 · 50% similar
- Tinybox – Offline AI device 120B parameters albelfio · 414 pts · March 21, 2026 · 48% similar
- OpenCode – Open source AI coding agent rbanffy · 607 pts · March 20, 2026 · 48% similar
- MacBook M5 Pro and Qwen3.5 = Local AI Security System aegis_camera · 158 pts · March 20, 2026 · 47% similar
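The "% similar" scores above are presumably cosine similarity between title embeddings, retrieved in order via pgvector's HNSW index. A minimal sketch of that metric in plain Python (the vectors below are made-up toy values, not real embeddings, which would have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" standing in for embedded story titles.
query = [0.9, 0.1, 0.3, 0.2]
candidate = [0.8, 0.2, 0.4, 0.1]

print(f"{cosine_similarity(query, candidate):.0%} similar")
```

In pgvector itself, the `<=>` operator returns cosine *distance* (1 minus this similarity), which an HNSW index can order by approximately.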
Discussion Highlights (9 comments)
steveharing1
Nvidia is providing free API to try Minimax M2.7
girvo
GGUFs are out too, well done Unsloth as usual! https://huggingface.co/unsloth/MiniMax-M2.7-GGUF I've been using M2.7 through the Alibaba coding plan for a bit now, and am quite impressed with its coding ability, and even more impressed when I see how small it is. Fascinating, really; it makes me wonder how big the frontier models are.
jbergqvist
"Helped build itself" is a bit of a stretch here; it makes it sound as if the model was making lasting self-improvements. What the article describes is that the model was able to tweak its own deployment harness (memory, skills, experimental loop, etc.) to improve performance on benchmarks. While impressive, it's not making any modifications to its own weights by, e.g., modifying the training code.
anonym29
In addition to this conversation already having been started at https://news.ycombinator.com/item?id=47735348 yesterday, MiniMax M2.7 is not open source. The open weights have been released, which is definitely good and follows some of the spirit of open source, but isn't the same thing.
simonw
Absolutely not "open source" - here's the license: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICE... > Non-commercial use permitted based on MIT-style terms; commercial use requires prior written authorization. And calling the non-commercial usage "MIT-style terms" is a stretch - they come with a bunch of extra restrictions about prohibited uses. It's open weights, not open source.
wg0
In my experience, even MiniMax M2.5 is a very capable model; with some hand-holding it can do a good investigation into an issue deep down through multiple layers of a software stack, provided you keep asking the right questions. I am pretty sure MiniMax M2.7 is much better.
fg137
What's people's experience using MiniMax for coding? I had a really bad time with it. I use (real) Claude Code for work, so I know what a good model feels like. MiniMax's token plan is nice, but the quality is really far from Claude models. I needed to constantly "remind" it to get things done. Even for a four-sentence prompt in a session that is well below the context window, MiniMax would ignore half of it. This happens all the time. (This is Claude Code + MiniMax API, set up using the official instructions.) Basically, if I say get A, B, and C done, it will only do A and B. I say, you still need to do C, so it does C but reverts the code for A. Things that Claude can usually one-shot take 5 iterations with MiniMax. I ended up switching to Claude to get one of my personal projects done.
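For anyone wanting to reproduce that setup: Claude Code can be pointed at a third-party Anthropic-compatible endpoint via environment variables. The base URL and model name below are assumptions based on MiniMax's published Claude Code instructions, so verify them against the official docs before relying on this:

```shell
# Point Claude Code at MiniMax's Anthropic-compatible endpoint.
# The URL and model name are assumptions -- check the official setup guide.
export ANTHROPIC_BASE_URL="https://api.minimax.io/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-minimax-api-key"   # placeholder, not a real key
export ANTHROPIC_MODEL="MiniMax-M2.7"
```

With these set, launching `claude` in the same shell routes requests to the configured endpoint instead of Anthropic's.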
helix278
> That is not a benchmark result. That is a different way of thinking about how AI models get built. tiresome
mr_johnson123
It seems not to be completely open source.