Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7
simonw
363 points
77 comments
April 16, 2026
Related Discussions
Found 5 related stories in 63.0ms across 4,783 title embeddings via pgvector HNSW (a query sketch follows the list below)
- Qwen3.5-Omni meetpateltech · 18 pts · March 30, 2026 · 59% similar
- Claude Opus 4.7 meetpateltech · 1621 pts · April 16, 2026 · 56% similar
- Claude Opus 4.7 AlphaWeaver · 186 pts · April 16, 2026 · 55% similar
- The Qwen 3.5 Small Model Series armcat · 11 pts · March 02, 2026 · 55% similar
- MacBook M5 Pro and Qwen3.5 = Local AI Security System aegis_camera · 158 pts · March 20, 2026 · 53% similar
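For readers curious how the related-story lookup above works, here is a minimal sketch of a pgvector HNSW nearest-neighbour query in Python. The `stories` schema, column names, and cosine-distance choice are assumptions for illustration; the page only tells us there is an HNSW index over title embeddings.

```python
# Minimal sketch of a pgvector HNSW similarity lookup (schema is hypothetical).
# Assumes a `stories` table with an `embedding vector(N)` column and an index:
#   CREATE INDEX ON stories USING hnsw (embedding vector_cosine_ops);
import psycopg2

conn = psycopg2.connect("dbname=hn_digest")  # placeholder connection string
cur = conn.cursor()

def related_stories(query_embedding, limit=5):
    """Return the nearest titles by cosine distance (the <=> operator)."""
    vec = str(list(query_embedding))  # pgvector accepts '[x, y, ...]' literals
    cur.execute(
        """
        SELECT title, points, posted_at,
               1 - (embedding <=> %s::vector) AS similarity
        FROM stories
        ORDER BY embedding <=> %s::vector
        LIMIT %s
        """,
        (vec, vec, limit),
    )
    return cur.fetchall()
```

With `vector_cosine_ops`, `embedding <=> x` is cosine distance, so `1 - distance` lines up with the percentage-style similarity scores shown above.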
Discussion Highlights (20 comments)
ericpauley
Going to have to disagree on the backup test. The Opus flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output. I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the pelican.
comandillos
I've been using Qwen3.5-35B-A3B for a bit via opencode and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.
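For anyone who wants to try a similar local setup, here is a minimal sketch using Apple's `mlx-lm` Python package. The quantized repo id is a placeholder assumption, not a confirmed release; substitute whatever conversion actually exists on the mlx-community Hugging Face org.

```python
# Sketch: run a local Qwen checkpoint with mlx-lm on Apple Silicon.
# The repo id below is a placeholder, not a confirmed model name.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3.5-35B-A3B-4bit")  # hypothetical id

messages = [
    {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}
]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=2048))
```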
mentalgear
I understand the 'fun factor', but at this point I really wonder what this pelican still proves. Providers certainly could have adapted to it if they wanted, and if you want to test how well a model handles potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activity types (a whale on a skateboard) than to always use the same one.
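mentalgear's idea is easy to operationalize: sample a fresh animal/activity pair per run, so there is no single fixed prompt to overfit. A toy sketch (the word lists are arbitrary choices, not from the article):

```python
# Toy sketch of mentalgear's suggestion: randomized out-of-distribution
# drawing prompts instead of the one fixed pelican-on-a-bicycle prompt.
import random

ANIMALS = ["whale", "iguana", "flamingo", "armadillo", "octopus", "heron"]
ACTIVITIES = ["riding a skateboard", "riding a unicycle", "rowing a kayak",
              "juggling torches", "walking a tightrope"]

def surprise_prompt(rng=random):
    """Build a drawing prompt from a random animal/activity combination."""
    return f"Generate an SVG of a {rng.choice(ANIMALS)} {rng.choice(ACTIVITIES)}"

print(surprise_prompt())  # e.g. "Generate an SVG of a whale riding a skateboard"
```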
jbellis
For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size qwen 3.5. So it's at best very slightly improved, and not at all in the class of qwen 3.5 27b dense (26 solved), let alone opus (95/98 solved, for 4.6).
19qUq
How about switching to MechaStalin on a tricycle? It gets kind of boring.
VHRanger
That's not surprising; in our testing, Opus & Sonnet have been regressing on many non-coding tasks since about the 4.1 release.
aliljet
I'm really curious: what competes with Claude Code for driving a local LLM like Qwen 3.6?
lofaszvanitt
That Qwen flamingo on the unicycle is actually quite good. A work of art.
jedisct1
I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews. It's pretty good at finding bugs, but not so good at writing patches to fix them.
throwuxiytayq
I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.
sailingcode
I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?
wood_spirit
Such a disconnect from the minutes I lost today before giving up on getting Gemini to update a diagram in a slide. The one-shot joke stuff is great, but trying to say "that is close, but just make this small change" seems impossible. It's the gap between toy and tool.
JaggerFoo
FYI: the laptop is a 128GB M5 MacBook Pro, per another article by the author.
bottlepalm
I really wish they spent some time training for computer use. This model is incapable of finding anywhere near the correct x,y coordinate of a simple object in a picture.
justinbaker84
I love this benchmark!
refulgentis
I liked both of Opus's better. It was very illuminating: in both cases I didn't see the errors Simon saw, and wondered why Simon skipped over the errors I saw. Pelican: saturated!
nba456_
Good reminder that these tests have always been useless, even before they started training on it.
f33d5173
I don't know what such a demo would prove in the first place. LLMs are good at things they have been trained on, or at analogues of things they have been trained on. SVG generation isn't really an analogue of any task we usually call on LLMs to do. Early models were bad at it because their training data only had poor examples of it. At a certain point model companies decided it would be good PR to be halfway decent at generating SVGs, added a bunch of examples to the finetuning, and voila.

They still aren't good enough to be useful for anything, and such improvements don't make them good at anything else - likely the opposite - but it makes for cute demos. I guess initially it would have been a silly way to demonstrate the effect of model size. But the size of the largest models stopped increasing a while ago; recent improvements are driven principally by optimizing for specific tasks. If you had some secret task that you knew they weren't training for, you could use it as a benchmark for how much the models are improving versus overfitting to their training set, but this is not that.
yieldcrv
All those models were just at version 1.x in 2024. That's so wild.
kburman
Looks like Opus has been nerfed from day 1.