Qwen3.6-35B-A3B: Agentic coding power, now open to all

cmitsakis 1009 points 438 comments April 16, 2026
qwen.ai

Discussion Highlights (20 comments)

incomingpain

Wowzers, we were worried Qwen was going to suffer after losing several high-profile people on the team, but this is a huge drop. Is it better than the 27B?

bertili

A relief to see the Qwen team still publishing open weights, after the kneecapping [1] and departures of Junyang Lin and others [2]! [1] https://news.ycombinator.com/item?id=47246746 [2] https://news.ycombinator.com/item?id=47249343

fred_is_fred

How does this compare to the commercial models like Sonnet 4.5 or GPT? Close enough that the price is right (free)?

fooblaster

Honestly, this is the AI software I actually look forward to seeing. No hype about it being too dangerous to release. No IPO pumping hype. No subscription fees. I am so pumped to try this!

adrian_b

Available for download: https://huggingface.co/Qwen/Qwen3.6-35B-A3B

abhikul0

I hope the other sizes are coming too (9B for me). Can't fit much context with this on a 36GB Mac.

amazingamazing

More benchmaxxing I see. Too bad there’s no rig with 256gb unified ram for under $1000

mtct88

Nice release from the Qwen team. Small open-weight coding models are, imho, the way to go for custom agents tailored to the specific needs of dev shops that are restricted from accessing public models. I'm thinking of banking and healthcare sector development agencies, for example. It's a shame this remains a market largely overlooked by Western players, with Mistral being the only one moving in that direction.

ghc

How does this compare to gpt-oss-120b? It seems weird to leave it out.

shevy-java

I don't want "Agentic Power". I want to reduce AI to zero. Granted, this is an impossible fight to win, but I feel like Don Quixote here. Rather than windmill-dragons, it's some Skynet 6.0 blob.

bossyTeacher

Does anyone have experience with Qwen or other non-Western LLMs? It's hard to get a feel out there with all the doomerists and grifters shouting. The only thing I need is a reasonable promise that my data won't be used for training, or at least that some of it won't. Being able to export conversations in bulk would also be helpful.

homebrewer

Already quantized/converted into a sane format by Unsloth: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF

armanj

I recall a Qwen exec posted a public poll on Twitter asking which Qwen3.6 model people wanted to see open-sourced, and the 27B variant was by far the most popular choice. Not sure why they ignored it lol.

zoobab

"Open source"? Then give me the training data.

jake-coworker

This is surprisingly close to Haiku quality, but open - and Haiku is quite a capable model (many of the Claude Code subagents use it).

rvnx

China won again in terms of openness

kombine

What kind of hardware (preferably non-Apple) can run this model? What about 122B?

dataflow

I'm a newbie here and lost as to how I'm supposed to use these models for coding. When I use them with Continue in VS Code and start typing basic C: #include <stdio.h> int m, I get nonsensical autocompletions like: #include <stdio.h> int m</fim_prefix>. What is going on?
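The stray </fim_prefix> in that completion usually means the editor's fill-in-the-middle (FIM) template doesn't match what the model was trained on, so the model echoes or invents template tokens instead of code. As a rough illustration of what a FIM prompt looks like under the hood, here is a minimal sketch, assuming this release follows the sentinel tokens Qwen's earlier coder models used (<|fim_prefix|>, <|fim_suffix|>, <|fim_middle|>); check the model card before relying on these exact strings.

```python
# Sketch: hand-building a fill-in-the-middle (FIM) prompt.
# ASSUMPTION: the model uses Qwen's historical FIM sentinels;
# verify against the actual model card/tokenizer config.

FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code between prefix and suffix."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

# Code before the cursor, and code after it (empty-ish here).
prompt = build_fim_prompt(
    prefix="#include <stdio.h>\nint m",
    suffix="\n",
)
print(prompt)
```

If the editor plugin sends raw text without these sentinels, or with mismatched ones, garbage completions like the </fim_prefix> above are the typical symptom; configuring the plugin's autocomplete template for the specific model usually fixes it.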

btbr403

Planning to deploy Qwen3.6-35B-A3B on an NVIDIA DGX Spark for multi-agent coding workflows. The 3B active params should help with concurrent agent density.
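The intuition behind that: with a mixture-of-experts model, weight memory is fixed by the total parameter count, but per-token compute scales with the active parameter count, which is what matters for serving many agents at once. A back-of-envelope sketch, using the 35B/3B figures from the model name (illustrative only; real footprints also depend on quantization, KV cache, and runtime overhead):

```python
# Back-of-envelope sketch for an MoE model's memory vs. compute.
# ASSUMPTION: 35B total params, 3B active per token, taken from the
# model name; real deployments add KV cache and runtime overhead.

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB at a given quantization."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

total_params_b = 35.0
active_params_b = 3.0

print(f"weights @ 4-bit: ~{weight_gb(total_params_b, 4):.1f} GB")
print(f"weights @ 8-bit: ~{weight_gb(total_params_b, 8):.1f} GB")

# Per-token decode FLOPs scale roughly with the *active* params
# (~2 * N_active per token), so each generated token costs about
# 35/3 ≈ 12x less compute than a dense 35B model would need --
# that headroom is what helps concurrent-agent throughput.
flops_ratio = total_params_b / active_params_b
print(f"dense-vs-MoE per-token compute ratio: ~{flops_ratio:.1f}x")
```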

zshn25

What do all the numbers in 3.6-35B-A3B mean?
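For readers wondering the same thing: the name follows Qwen's established convention, where "3.6" is the family version, "35B" is the total parameter count, and "A3B" means roughly 3B parameters are active per generated token (it's a mixture-of-experts model, as in Qwen's earlier Qwen3-30B-A3B). A small sketch that decodes the pattern:

```python
import re

# Sketch: parsing Qwen's model-name convention, e.g. "Qwen3.6-35B-A3B"
# -> family version 3.6, 35B total parameters, ~3B active per token
# (the "A" prefix marks active params in Qwen's MoE naming).

NAME_RE = re.compile(r"Qwen(?P<version>[\d.]+)-(?P<total>\d+)B-A(?P<active>\d+)B")

def parse_name(name: str) -> dict:
    m = NAME_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognized model name: {name}")
    return {
        "version": m["version"],
        "total_params_b": int(m["total"]),
        "active_params_b": int(m["active"]),
    }

info = parse_name("Qwen3.6-35B-A3B")
print(info)
```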
