GLM-5.1: Towards Long-Horizon Tasks

zixuanlimit · 481 points · 193 comments · April 07, 2026
z.ai

Discussion Highlights (20 comments)

dang

[stub for offtopicness] [[you guys, please don't post like this to HN - it will just irritate the community and get you flamed]]

bigyabai

It's an okay model. My biggest issue using GLM 5.1 in OpenCode is that it loses coherency over longer contexts. When you crest 128k tokens, there's a high chance that the model will start spouting gibberish until you compact the history. For short-term bugfixing and tweaks though, it does about what I'd expect from Sonnet for a pretty low price.

Yukonv

Unsloth quantizations are available on release as well. [0] The IQ4_XS is a massive 361 GB for the 754B parameters. This is definitely a model your average local LLM enthusiast is not going to be able to run, even with high-end hardware. [0] https://huggingface.co/unsloth/GLM-5.1-GGUF
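A back-of-envelope check of those numbers (a sketch only; GB vs GiB and GGUF metadata overhead are ignored) shows the file size works out to roughly 4 bits per weight, which is what you'd expect for an IQ4-class quant:

```python
# Rough bits-per-weight estimate from the figures in the comment above.
file_bytes = 361e9   # reported IQ4_XS GGUF size, ~361 GB
n_params = 754e9     # reported parameter count, 754B

bits_per_weight = file_bytes * 8 / n_params
print(f"{bits_per_weight:.2f} bits/weight")  # ≈ 3.83
```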

RickHull

I am on their "Coding Lite" plan, which I got a lot of use out of for a few months, but it has been seriously gimped now. Obvious quantization issues: going in circles, flipping from X to !X, injecting Chinese characters. It is useless now for any serious coding work.

alex7o

To be honest I am a bit sad, as GLM 5.1 is producing much better TypeScript than Opus or Codex IMO, but no matter what, it sometimes goes into schizo mode at some point over longer contexts. Not always though; I have had multiple sessions go over 200k and be fine.

kirby88

I wonder how that compares to harness methods like MAKER https://www.cognizant.com/us/en/ai-lab/blog/maker

DeathArrow

I am already subscribed to their GLM Coding Pro monthly plan and working with GLM 5.1 coupled with Open Code is such a pleasure! I will cancel my Cursor subscription.

jaggs

How does it compare to Kimi 2.5 or Qwen 3.6 Plus?

winterqt

Commenters here seem to be talking like they've used this model for longer than a few hours -- is that true, or are y'all just sharing your initial thoughts?

gavinray

I find the "8 hour Linux Desktop" bit disingenuous; in the fine print it's a browser page: > "build a Linux-style desktop environment as a web application". They claim "50 applications from scratch", but "Browser" and a bunch of the other apps are likely all <iframe> elements. We all know that building a spec-compliant browser alone is a herculean task.

johnfn

GLM-5.0 is the real deal as far as open source models go. In our internal benchmarks it consistently outperforms other open source models, and was on par with things like GPT-5.2. Note that we don't use it for coding - we use it for more fuzzy tasks.

minimaxir

The focus on the speed of the agent-generated code as a measure of model quality is unusual and interesting. I've been focusing on intentionally benchmaxxing agentic projects (e.g. "create benchmarks, get a baseline, then make the benchmarks 1.4x faster or better without cheating the benchmarks or causing any regression in output quality") and Opus 4.6 does it very well: in Rust, it can find enough low-level optimizations to make already-fast Rust code up to 6x faster while still passing all tests. It's a fun way to quantify the real-world performance between models that's more practical and actionable.

tgtweak

Share the harness for that browser linux OS task :)

maxdo

One of the benchmaxed models. Every time I tried it, it's not on par even with other open source models.

kamranjon

I'm crossing my fingers they release a flash version of this. GLM 4.7 Flash is the main model I use locally for agentic coding work, it's pretty incredible. Didn't find anything in the release about it - but hoping it's on the horizon.

epolanski

I was very satisfied with GLM5, I'm not gonna lie. Excited to test this.

mark_l_watson

I can’t wait to try it. I set up a new system this morning with OpenClaw and GLM-5, and I like GLM-5 as the backend for Claude Code. Excellent results.

simonw

Not only did this one draw me an excellent pelican... it also animated it! https://simonwillison.net/2026/Apr/7/glm-51/

blazespin

Anthropic's reply? A model you can't use.

dryarzeg

A bit off-topic, but even though I don't use LLMs for my job, my hobbies, or daily life frequently (and when I do, it's mostly some kind of "rubber duck brainstorm"), whenever I see open-weight releases like this one or the recent Gemma 4 (which is very good for a local model), there's always one song that comes to mind and I simply can't get rid of it no matter how hard I try. The first time was with DeepSeek-R1, which, despite being blamed for "censorship", was heavily censored only via the DeepSeek API; the local model (the full-weight 685B, not the distilled ones) was pretty much unhinged regarding censorship on any topic. "I am the storm that is approaching, provoking..." : )
