GPT-5.5
rd
1240 points
839 comments
April 23, 2026
Related Discussions
Found 5 related stories in 70.1ms across 5,406 title embeddings via pgvector HNSW
- GPT-5.4 mudkipdev · 739 pts · March 05, 2026 · 98% similar
- GPT-5.4 meetpateltech · 156 pts · March 05, 2026 · 98% similar
- GPT‑5.4 Mini and Nano meetpateltech · 217 pts · March 17, 2026 · 81% similar
- GPT‑5.3 Instant meetpateltech · 319 pts · March 03, 2026 · 79% similar
- GPT-5.4 Thinking and GPT-5.4 Pro denysvitali · 92 pts · March 05, 2026 · 79% similar
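The related-stories lookup above is described as a pgvector HNSW search over title embeddings. The site's actual query isn't shown; as a minimal sketch, what that index approximates is a plain cosine-similarity top-k scan. All names below (`related_stories`, the toy 3-dimensional "embeddings") are hypothetical illustrations, not the site's code:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def related_stories(query_vec, stories, k=5):
    # stories: list of (title, embedding) pairs.
    # An HNSW index (as in pgvector) returns an approximation of
    # exactly this exhaustive top-k ranking, in sublinear time.
    scored = [(title, cosine_similarity(query_vec, vec))
              for title, vec in stories]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]

# Toy 3-dimensional "embeddings" standing in for real title vectors.
corpus = [
    ("GPT-5.4", [0.9, 0.1, 0.0]),
    ("GPT-5.3 Instant", [0.8, 0.3, 0.1]),
    ("Unrelated story", [0.0, 0.1, 0.9]),
]
print(related_stories([1.0, 0.0, 0.0], corpus, k=2))
```

In production the scan is replaced by an index lookup, which is how 5,406 embeddings can be searched in ~70 ms.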
Discussion Highlights (20 comments)
luqtas
they are using ethical training weights this time!!! /j
meetpateltech
GPT-5.5 System Card: https://deploymentsafety.openai.com/gpt-5-5
applfanboysbgon
If there's a bingo card for model releases, "our [superlative] and [superlative] model yet" is surely the free space.
ZeroCool2u
Benchmarks are favorable enough that they're comparing to non-OpenAI models again. Interesting that tokens/second is similar to 5.4. Maybe there's some genuine innovation beyond "bigger model, better results" this time?
minimaxir
The more interesting part of the announcement than "it's better at benchmarks":

> To better utilize GPUs, Codex analyzed weeks’ worth of production traffic patterns and wrote custom heuristic algorithms to optimally partition and balance work. The effort had an outsized impact, increasing token generation speeds by over 20%.

The ability of agentic LLMs to improve computational efficiency/speed is a highly impactful domain I wish were tested more than with benchmarks. In my experience Opus is still much better than GPT/Codex in this respect, but given that OpenAI is getting material gains out of this type of performancemaxxing, and that they have an increasing incentive to keep doing so given cost/capacity issues, I wonder if OpenAI will continue optimizing for it.
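The announcement doesn't say what Codex's partitioning heuristics actually look like. As a hedged illustration of the general idea (balancing work across GPUs), here is a classic greedy longest-processing-time sketch; `partition_requests`, the request sizes, and the two-GPU setup are all hypothetical, not OpenAI's implementation:

```python
import heapq

def partition_requests(token_counts, num_gpus):
    """Greedy longest-processing-time partitioning: assign each request,
    largest first, to the currently least-loaded GPU. A toy stand-in for
    the traffic-aware heuristics described in the announcement."""
    # Min-heap of (current_load, gpu_index) so the least-loaded GPU pops first.
    loads = [(0, g) for g in range(num_gpus)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(num_gpus)]
    for tokens in sorted(token_counts, reverse=True):
        load, gpu = heapq.heappop(loads)
        assignment[gpu].append(tokens)
        heapq.heappush(loads, (load + tokens, gpu))
    return assignment

requests = [900, 700, 600, 400, 300, 100]  # hypothetical token counts
buckets = partition_requests(requests, num_gpus=2)
print([sum(b) for b in buckets])  # per-GPU load
```

Real schedulers would fold in traffic patterns (time of day, prompt-length distributions) rather than a single static batch, but the objective, minimizing the most-loaded worker, is the same.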
ativzzz
I like that they waited for opus 4.7 to come out first so they had a few days to find the benchmarks that gpt 5.5 is better at
nullbyte
82.7% on Terminal Bench is crazy
astlouis44
> A playable 3D dungeon arena prototype built with Codex and GPT models. Codex handled the game architecture, TypeScript/Three.js implementation, combat systems, enemy encounters, HUD feedback, and GPT‑generated environment textures. Character models, character textures, and animations were created with third-party asset-generation tools.

The game that this prompt generated looks pretty decent visually. A big part of this is likely due to the fact that the meshes were created using a separate tool (probably Meshy, tripo.ai, or similar) and not generated by 5.5 itself. It really seems like we could be at the dawn of a new era similar to Flash, where any gamer or hobbyist can generate game concepts quickly and instantly publish them to the web. Three.js in particular is really picking up as the primary way to design games with AI, despite the fact that it's not even a game engine, just a web rendering library.
objektif
Are there faster mini/nano versions as well?
jdw64
GPT is really great, but I wish the GPT desktop app supported MCP as well. You can kind of use connectors like MCP, but having to use ngrok every time just to expose a local filesystem for file editing is more cumbersome than expected.
jryio
Their 'Preparedness Framework'[1] is 20 pages and looks ChatGPT-generated; I don't feel prepared after reading it. https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbdde...
cmrdporcupine
Not rolled out to my Codex CLI yet, but some users on Reddit claiming it's on theirs.
cynicalpeace
It's possible that "smarter" AI won't lead to more productivity in the economy. Why? Because software and "information technology" generally didn't increase productivity over the past 30 years. This has long been known as Solow's productivity paradox. There are lots of theories about why this is observed, one of them being "mismeasurement" of productivity data. But my favorite theory is that information technology is mostly entertainment, and rather than making you more productive, it distracts you and makes you lazier. AI's main application so far has been in information space. If that continues, I doubt you will get more productivity from it. If you give AI a body... well, maybe that changes.
tedsanders
Just as a heads up, even though GPT-5.5 is releasing today, the rollout in ChatGPT and Codex will be gradual over many hours so that we can make sure service remains stable for everyone (same as our previous launches). You may not see it right away, and if you don't, try again later in the day. We usually start with Pro/Enterprise accounts and then work our way down to Plus. We know it's slightly annoying to have to wait a random amount of time, but we do it this way to keep service maximally stable. (I work at OpenAI.)
YmiYugy
So according to the benchmarks somewhere in between Opus 4.7 and Mythos
impulser_
What is the reason OpenAI is able to release new models so fast? Since February, when we got Gemini 3.1, Opus 4.6, and GPT-5.3-Codex, we have seen GPT-5.4 and GPT-5.5, but only Opus 4.7 and no new Gemini model. Both of these are pretty decent improvements.
baalimago
Worth the 100% price increase over GPT-5.4?
louiereederson
For a 56.7 score on the Artificial Intelligence Index, GPT-5.5 used 22M output tokens. For a score of 57, Opus 4.7 used 111M output tokens. The efficiency gap is enormous. Maybe it's the difference between a GB200 NVL72 and an Amazon Trainium chip?
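Working through the arithmetic implied by the comment above (the score and token figures are the commenter's; "tokens per score point" is just a rough efficiency proxy, not an official metric):

```python
# (score, output tokens) as quoted in the comment above.
gpt_55 = (56.7, 22_000_000)
opus_47 = (57.0, 111_000_000)

# Tokens spent per score point -- a crude efficiency proxy.
gpt_cost = gpt_55[1] / gpt_55[0]
opus_cost = opus_47[1] / opus_47[0]
ratio = opus_cost / gpt_cost
print(f"Opus 4.7 spends {ratio:.1f}x more tokens per score point")
```

On these numbers the gap comes out to roughly 5x, for a score difference of only 0.3 points.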
jumploops
> GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

This might be great if it translates to agentic engineering and not just benchmarks. It seems some of the gains from Opus 4.6 to 4.7 required more tokens, not less. Maybe more interesting is that they’ve used Codex to improve model inference latency. iirc this is a new (expectedly larger) pretrain, so it’s presumably slower to serve.
BrokenCogs
I'm here for the pelicans and I'm not leaving until I see one!