Tell HN: OpenAI silently removed Study Mode from ChatGPT

smokel 168 points 70 comments April 12, 2026

Here's hoping that it will return soon, as I really liked it.

Discussion Highlights (20 comments)

brumar

After all, this "mode" was just a system prompt (last time I looked).
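If the claim above is right, replicating Study Mode is mostly a matter of supplying your own tutoring system message. A minimal sketch, assuming the standard chat-completions message format; the prompt wording and the `study_messages` helper are my own invention, not OpenAI's actual Study Mode prompt:

```python
# Hypothetical tutoring prompt, standing in for whatever Study Mode used.
STUDY_PROMPT = (
    "You are a patient study coach. Do not give answers outright. "
    "Ask guiding questions, check understanding after each step, "
    "and keep track of the syllabus you proposed."
)

def study_messages(user_question):
    """Build a chat message list that front-loads the tutoring system prompt."""
    return [
        {"role": "system", "content": STUDY_PROMPT},
        {"role": "user", "content": user_question},
    ]

# The resulting list can be passed as the `messages` argument to any
# chat-completions-style API call.
messages = study_messages("Help me learn the chain rule.")
```

Whether this matches the removed feature depends on whether Study Mode really was only a system prompt, as the comment suggests, or also carried extra scaffolding server-side.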

janpmz

I was concerned about big players offering the same functionality when building listendock.com, but maybe there is a place for specialized apps like that.

el_io

I haven't used 'Study Mode' in OpenAI, but can't you just ask it to act as a study coach or whatever you want it to be?

altmanaltman

I remember videos with titles like "OPENAI CHANGED STUDYING COMPLETELY WITH THIS ONE SUPER UPDATE!" and obnoxious thumbnails on YouTube when it was first launched. I guess studying changed it.

CatDeveloper_

They do it with other stuff too. I feel like they see how much users actually interact with those features and base their decisions on that, kinda like how Google would remove some features at random.

foundermodus

What was the Study Mode? I never saw it.

ok123456

Gemini still has its study mode.

jegudiel

I used to enjoy studying with ChatGPT too. I was on their Plus plan.

m-hodges

I tried it a few times and always found it disappointing. It typically started off like a structured "lesson," but as I chatted with it, it would forget the syllabus it had proposed, and we never "completed" the thing we set out to learn.

utopiah

Before this it was Sora, and before that, large government contracts. I don't think they care so much about the random consumer anymore. They use anything and everyone for PR, but as they get closer to an IPO they are focusing on what might actually make them profitable. TL;DR: bet on stuff being removed.

shivang2607

Did people even use that?

derrida

Has ChatGPT gotten worse over the past few months, or have I just seen other, higher-quality things, or have they stopped caring about users or something? All of a sudden it feels like it gives me boilerplate on top of boilerplate of PR and cheesy reasoning, and no actual answers. Worse, even: highly confident wrong answers that it then seeks to justify or explain. It doesn't seem humble enough to say "Actually, I got that wrong," and if challenged it just caves, accepts too readily the assumptions in what the user is asking, or blindly accepts the premise of the question. It's almost useless.

Before, it used to seem like you could get it to emulate the way a certain writer or discourse speaks. Now it seems like this derpy high-school kid who just wants to fit in and went into public relations, and the language, no matter the topic, is always the same. It feels really spammy. I could be asking it questions about how medieval monks talked about light and the breath in Latin, and it will reply like I'm interested in monetising or improving my lifestyle or some b.s. I don't think it used to be this way. It reminds me of circa 2003-06 WordPress sites, with that blackhat-SEO feel: Markov-generated content designed to push backlinks toward the actual human-written landing page. It's not like this on the other LLMs; something's up. Or maybe they have just found their niche, and it's a bunch of people who do think like that, like, I dunno, middle management the world over. That's scary... bonus ghastly incantations of the epistemology of middle management.

Marciplan

When it comes to sunsetting, they are better at being Google than Google is at being Google.

treetalker

FWIW, Kagi Assistant still has a Study mode / custom assistant. It works well and I use it a few times per week.

enejej

Another piece of evidence that OAI has no vision or taste re: project selection. That describes this whole LLM hype, really. It will be jarring if it ends up that the value created (in terms of revenue) is mostly around software production.

drivebyhooting

I've tried using it for working through AIME. It was OK, but significantly worse than a human teacher. It generally knew how to solve the questions, but it did not know how to properly scaffold the solution. It mostly just prompted simple calculations rather than guiding you toward the insight. What's worse, ChatGPT would occasionally disagree with my calculation because it can't do arithmetic!

trashface

They do stuff like that. They also killed the "Robot" personality last year, which was my favorite. They replaced it with "Efficient" or something, but it isn't the same. Robot was Terminator-esque, appropriate for the new age we are entering, IMO.

bko

I think it's probably enough to do a prompt. Isn't that what these things are? It probably had some extra scaffolding before, but now the engine is good enough that just saying "help me study" gives the same results. I personally don't want modes. It should be smart enough to infer my intention and act accordingly.

paulcole

How would they remove it loudly?

hbcondo714

They also removed Chat mode (from their Codex VSCode extension): https://github.com/openai/codex/issues/11007
