OpenAI says its new model GPT-2 is too dangerous to release (2019)
surprisetalk
274 points
80 comments
April 08, 2026
Related Discussions
Found 5 related stories in 58.3ms across 3,871 title embeddings via pgvector HNSW
- The Sudden Fall of OpenAI's Most Hyped Product Since ChatGPT fortran77 · 25 pts · March 30, 2026 · 61% similar
- OpenAI hit with lawsuit claiming ChatGPT acted as an unlicensed lawyer droidjj · 15 pts · March 08, 2026 · 58% similar
- GPT-5.4 meetpateltech · 156 pts · March 05, 2026 · 56% similar
- GPT-5.4 mudkipdev · 739 pts · March 05, 2026 · 56% similar
- Frequent ChatGPT users are accurate detectors of AI-generated text (2025) croemer · 11 pts · April 07, 2026 · 55% similar
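The "% similar" scores above come from nearest-neighbor search over title embeddings; in pgvector that is typically the `<=>` cosine-distance operator accelerated by an HNSW index. A minimal sketch of how such scores could be computed, using pure Python and made-up 3-dimensional vectors standing in for real embeddings (all names and values here are hypothetical):

```python
import math

def cosine_similarity(a, b):
    # pgvector's <=> operator returns cosine *distance* (1 - similarity);
    # an HNSW index over that operator makes the nearest-neighbor scan fast.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for real title embeddings (hypothetical values).
query = [0.9, 0.1, 0.4]
candidates = {
    "story-a": [0.8, 0.2, 0.5],
    "story-b": [0.1, 0.9, 0.1],
}

# Rank candidates by similarity, as the related-discussions list does.
ranked = sorted(candidates.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
for title, vec in ranked:
    print(f"{title}: {cosine_similarity(query, vec) * 100:.0f}% similar")
```

In SQL the equivalent ranking would be an `ORDER BY title_embedding <=> $query LIMIT 5` over an HNSW-indexed vector column; the sketch above only illustrates the arithmetic behind the percentage.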
Discussion Highlights (20 comments)
JumpCrisscross
Had a minor conniption until I saw the year. OpenAI just struggled to close a round. And the New Yorker just published an unflattering profile of Altman [1]. So it would make sense they'd go back to the PR strategy of "stop me from shooting grandma." [1] https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
johnfn
Wow! I totally remember reading the bit I'll quote below back in 2019 and having my mind utterly blown. What a blast from the past. If anything, I think this moment was even more astounding to me than GPT-3.5, 4, etc.

> For example, researchers fed the generator the following scenario:

> In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

> The GPT-2 algorithm produced a news article in response:

> The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow. Pérez and the others then ventured further into the valley. "By the time we reached the top of one peak, the water looked blue, with some crystals on top," said Pérez.
buremba
The playbook is that a model is too dangerous to release, until a competitor ships one that beats yours.
make3
the thing could barely produce full grammatical sentences; it's funny to see that even then they were overclaiming the fuck out of their models
cinkhangin
I think they were unintentionally right. The growing amount of low-quality content everywhere could become a real problem.
villgax
Very safe to use the outputs to make a better model, because scraping the internet for publicly accessible content means your publicly shared outputs just become part of the same training data lol
measurablefunc
I'm wondering when people are going to figure out the doom marketing playbook.
JackYoustra
AI systems far weaker than GPT-2 have had terrible effects. How information and power get distributed mostly flows along the lines of reward-hacking recommendation engines, powered by even weaker models. And yet, somehow, it's treated as not just disagreeable but unbelievable that other people may have reasonably believed, and may still believe, that these things are too dangerous for widespread release?
romanzubenko
I remember seeing this article and the example output text and thinking, what's the big deal? It wasn't until I got early access to GPT-3 that I thought something big was about to happen. At the time only a few companies/YC alums had access, and I remember showing the playground to people outside of tech, and my friend just kept asking, "How does it know about my [x] domain? Is it a trick?"
strangescript
Their concerns weren't completely off base; I think they just overestimated how much it would really matter in the grand scheme.
guessbest
Feels like from the before times.
Sunspark
The current "too dangerous" hype today is Anthropic's Mythos. They say it is so mighty that they will wall it off and only grant access to approved corporations.
subroutine
They did finally release GPT-2 under the MIT license. That was the last model (1.5 billion parameters) they would release as open source. GPT-3, for comparison, has 175 billion parameters.
bertmuthalaly
Now that I see this in the light of the recent sama article, I wonder whether the point of the "it's too dangerous" rhetoric is to enable "Open" AI to avoid open-sourcing the weights and process. A convenient pretext for maintaining a monetizable competitive advantage while claiming a benevolent purpose.
nsmog767
Zero mention of Sam Altman…interesting
ramoz
I fine-tuned GPT-2 on the FAR (Federal Acquisition Regulation) and demoed it to a CFO at a three-letter agency. This was shortly after the release, when we were building a templating system to automate RFP and RFI creation. I proclaimed that the customer soon wouldn't have to write any of the mad-libs parts themselves; they could use AI to do it. It sounded great until I demoed it and the model went off the rails with some rhetoric entangling "Trump", "Russia", "China", "CIA", "Voting" -- the demo was for a janitorial procurement at the agency.
october8140
"You don't want no part of this" | Walk Hard: The Dewey Cox Story https://youtu.be/CepW8wAuL_M
SpicyLemonZest
I'm somewhere between frustrated and baffled that people raise this as an example of overselling. This was clearly a reasonable call! Not all the experts quoted in the source article agreed that the model should have been held back, but they all agreed that the risks were real, and it's understandable why OpenAI would act on them.
SilverSlash
Someone needs to make a compilation of all these classic OpenAI moments, including hits like GPT-2 is too dangerous, the 64x64 image model DALL-E is too scary, "push the veil of ignorance back", AGI achieved internally, Q*/Strawberry can solve math and is making OpenAI researchers panic, etc.

I use Codex btw, and I really love it. But some of these companies have been overhyping the capabilities of these models for years now, and it's both funny to look back on and tiresome to still keep hearing.

Meanwhile, I am at my wits' end after NONE of Codex GPT-5.4 on Extra High, Claude Opus 4.6-1M on Max, Opus 4.6 on Max, and Gemini 3.1 Pro on High have been able to solve a very straightforward and basic UI bug I'm facing. To the point where, after wasting a day on this, I am just going to go through the (single file) of code and fix it myself.

Update: some 20 minutes later, I have fixed the bug, despite not knowing this particular programming language or framework.
apical_dendrite
I have a lot of trouble understanding the mindset of a person who thinks that what they're building is so dangerous that it must be locked away or it will cause untold harm, but also that they must build it as fast as possible. I can understand it in the context of the Manhattan project, where you're fighting a war for survival. I cannot understand how you can do it as a commercial enterprise.