GPT‑5.3 Instant
meetpateltech
319 points
254 comments
March 03, 2026
Related Discussions
Found 5 related stories in 89.8ms across 3,471 title embeddings via pgvector HNSW
- GPT-5.4 meetpateltech · 156 pts · March 05, 2026 · 79% similar
- GPT-5.4 mudkipdev · 739 pts · March 05, 2026 · 79% similar
- GPT‑5.4 Mini and Nano meetpateltech · 217 pts · March 17, 2026 · 70% similar
- GPT 5.4 Thinking and Pro twtw99 · 64 pts · March 05, 2026 · 67% similar
- GPT-5.4 Thinking and GPT-5.4 Pro denysvitali · 92 pts · March 05, 2026 · 64% similar
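The related-stories lookup above is a nearest-neighbour search over title embeddings. A minimal sketch of the underlying idea in plain Python, assuming cosine similarity (the site presumably runs this in Postgres via pgvector's HNSW index; the toy corpus and function names here are hypothetical, and real title embeddings have hundreds of dimensions, not three):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def related_stories(query_vec, corpus, k=5):
    """Brute-force version of what an HNSW index approximates:
    return the k titles most similar to the query embedding."""
    scored = [(cosine_similarity(query_vec, vec), title)
              for title, vec in corpus.items()]
    scored.sort(reverse=True)
    return scored[:k]

# Toy 3-dimensional "embeddings" (hypothetical values).
corpus = {
    "GPT-5.4": [0.9, 0.1, 0.0],
    "GPT-5.4 Mini and Nano": [0.8, 0.3, 0.1],
    "Unrelated story": [0.0, 0.2, 0.9],
}
hits = related_stories([1.0, 0.0, 0.0], corpus, k=2)
```

In pgvector itself this is roughly `ORDER BY embedding <=> $query LIMIT 5` (`<=>` is pgvector's cosine-distance operator) over an HNSW index, which trades exact results for the sub-100ms lookups reported above.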
Discussion Highlights (20 comments)
aurareturn
How do I know if I'm using GPT-5.3 Instant on ChatGPT? I don't see it in the model selector.
mhitza
From one example:

> Many people in SF are:
> - Highly educated
> - Career-focused
> - Transplants
> - Used to independence

Is "transplants" San Francisco slang for relocators?
ViktorRay
> GPT‑5.2 Instant’s tone could sometimes feel “cringe,” coming across as overbearing or making unwarranted assumptions about user intent or emotions.

Strange way to write this. Why use the Gen Z slang "cringe" and put it in quotation marks? Wouldn't it be better to just use the word cringeworthy, which means the same thing? My guess is that the article was originally written by a Gen Z intern and then an older employee added the quotation marks around the slang.
empath75
GPT-5.2 has been such a terrible regression that I have cancelled my OpenAI account. It's possible I might not have noticed it if Claude wasn't so much better, though.
Flux159
I'm a bit confused by this branding (I never even noticed there was a 5.2 Instant). It's not a super-fast 1000 tok/s Cerebras-based model like the one they have for codex-spark; it's just 5.2 without the router, i.e. "non-thinking" mode? I feel like OpenAI is going to end up right back where they were pre-GPT-5, with a ton of different options and no one knowing which model to use for what.
nickandbro
Wonder when 5.3 thinking will be released?
ern_ave
Since the page mentions:

> Better judgment around refusals

Has any AI company ever addressed an instance of a model having different rules for different population groups? I've seen many examples of people asking questions like "make up a joke about <group>" and then iterating through the groups, only to find that some groups are seemingly protected/privileged from having jokes made about them. Has any AI company ever addressed studies like [1], which found that models value certain groups vastly more than others? For example, page 14 of that study shows that the exchange rate (their word, not mine) between Nigerians and US citizens is quite large.

[1] https://arxiv.org/pdf/2502.08640
EthanHeilman
How likely is it that they dropped this now to push the news story about quitGPT out of the headlines?
jpgreenall
Is nobody else unsettled by the example? Strange timing to talk about calculating trajectories for long-range projectiles.
jpgreenall
Unsettling that the example talks about long-range projectile trajectories, given recent events.
mmaunder
This kind of metalinguistic quotation from 5.2 right now drives me nuts!

> That kind of “make it work at distance” trajectory work can meaningfully increase weapon effectiveness, so I have to keep it to safe, non-actionable help.

I'm really hoping all their newer models stop doing this. It's massively overused.
ModernMech
> The clear answer to this question — both in scale and long-term importance — is:

Hmmm, I haven't seen AI use that kind of em-dash parenthetical construction before.
hallvard
Where are the performance specs? Or is this simply a guardrails release?
visarga
Looks like another bullet-point machine, the cheapest way to present a response.
upmind
I wonder when, or if, GPT will stop with the em dashes.
simlevesque
They want to be Claude so bad.
dainiusse
OpenAI creating confusion with model names again...
hmokiguess
> why can't i find love in san francisco

Amazing how that's where we are now, coming from https://en.wikipedia.org/wiki/I_Left_My_Heart_in_San_Francis... in the 60s.
butILoveLife
I unsubbed because ChatGPT was no longer SOTA. They def got cheap. Reminds me of that graph where late customers are abused. OpenAI is already abusing the late customers. Claude is pretty great.
sigbottle
Well needed, if the changes work as advertised.

I realized from talking with 5.2 that the issue is not that it's a yapper, or that it speaks too much about random factual tangents or its own opinions. That's easy to tune out, and sometimes it's even helpful. What's extremely frustrating is the subtle framings and assumptions about the user that are then treated as implicit truth and smuggled in. It's plain and simple narcissistic frame control.

Obviously I don't think GPT has a "desire" to be narcissistic or whatever, but it's genuinely exhausting talking to GPT because of this. You have to restart the conversation immediately if you get into this loop; I've never been able to dig myself out of that state. I feel like I've dealt with that kind of thing all my life, so I'm pretty sensitive to it.