The changing goalposts of AGI and timelines
skandium
356 points
304 comments
March 08, 2026
Related Discussions
Found 5 related stories in 53.6ms across 3,471 title embeddings via pgvector HNSW
- Measuring progress toward AGI: A cognitive framework surprisetalk · 114 pts · March 18, 2026 · 56% similar
- The first 40 months of the AI era jpmitchell · 156 pts · March 28, 2026 · 51% similar
- OpenAI Has New Focus (on the IPO) aamederen · 193 pts · March 18, 2026 · 51% similar
- Group Pushing Age Verification for AI Turns Out to Be Backed by OpenAI SilverElfin · 41 pts · April 02, 2026 · 50% similar
- OpenAI to Cut Back on Side Projects in Push to 'Nail' Core Business megacorp · 15 pts · March 17, 2026 · 50% similar
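The header above says the related stories were found by a nearest-neighbour search over title embeddings using pgvector's HNSW index. A minimal sketch of what such a lookup might look like, assuming a hypothetical `stories` table with an `embedding` column and cosine distance (none of these names come from the site itself):

```python
# Hedged illustration, not the site's actual code: a pgvector HNSW
# similarity query for "5 related stories" over title embeddings.
# Assumed one-time index setup in Postgres:
#   CREATE INDEX ON stories USING hnsw (embedding vector_cosine_ops);

RELATED_QUERY = """
SELECT title,
       1 - (embedding <=> %(q)s::vector) AS similarity  -- <=> is cosine distance
FROM stories
WHERE id <> %(id)s
ORDER BY embedding <=> %(q)s::vector
LIMIT 5;
"""

def to_pgvector_literal(vec):
    """Format a Python vector as pgvector's text input literal, e.g. '[0.1,0.2,0.3]'."""
    return "[" + ",".join(f"{x:g}" for x in vec) + "]"
```

With a driver such as psycopg this would be executed as `cur.execute(RELATED_QUERY, {"q": to_pgvector_literal(query_embedding), "id": story_id})`; the HNSW index makes the `ORDER BY ... LIMIT 5` an approximate nearest-neighbour scan rather than a full-table sort.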
Discussion Highlights (20 comments)
bilekas
Hah, can you imagine a world where OpenAI says to all the people who have dumped billions in: "well, we lost guys, sorry about that, we're just gonna help Google now"? I'll eat my hat after I sell you a bridge.
kirubakaran
""" I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together. """ - Caitlin Kalinowski, previously head of robotics at OpenAI https://www.linkedin.com/posts/ckalinowski_i-resigned-from-o...
rishabhaiover
> The impotence of naive idealism in the face of economic incentives

A great point. I saw blinding idealism during the early days of the GPT era.
croes
> Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do

> It can be debated whether arena.ai is a suitable metric for AGI; a strong case can probably be made for why it's not. However, that's irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.

No, the spirit is clearly meant for near-AGI, and we aren't near AGI.
choult
The writing was on the wall as soon as it went all-in on commercializing the tech. This will never happen; LLMs are already being used very unsafely, and if this HN headline stays where it is, OpenAI will quietly remove their charter from their website.
diabllicseagull
> The impotence of naive idealism in the face of economic incentives.

I don't think it was so much the naivety of idealism, but more an adoption of idealism and related language to help market what was actually being built: a profit-first organization that's taking its true form little by little.
dataflow
It's clever and funny, but nobody is legitimately near AGI, and their own AML Corp link shows Altman believes as much:

> Achieving AGI, he conceded, will require "a lot of medium-sized breakthroughs. I don't think we need a big one."

> At the Snowflake Summit in June 2025, Altman predicted that 2026 would mark a breakthrough when AI systems begin generating "novel insights" rather than simply recombining existing information. This represents a threshold he considers critical on the path to AGI.

I'm sure they'll try to change the charter before we get to that point, but yeah.
labrador
The way Sam Altman bungled the Pentagon deal by swooping in a few hours after Anthropic was fired should be grounds for OpenAI finding another CEO.
swingboy
Purely anecdotal, but GPT 5.4 has been better than Opus 4.6 this past week or so since it came out. It’s interesting to see it rank fairly low on that table. Opus “talks” better and produces nicer output (or, it renders better Markdown in OpenCode) than 5.4.
throwaw12
OpenAI:
- we are building Open AI
  - only if you have more than $10B net worth
- we are against using AI for military purposes
  - except when that case is allowed by government
- we are on a mission to help humanity
  - again, we define humanity as the set of people with more than $10B net worth
- surrender?
  - sure, sure, we will, only to people with more than $10B net worth; they can do whatever they want to our models, we will surrender to them
bluegatty
AI will be used wherever computers, silicon, RAM, software, GPUs and robots are today. And that's it. Everything beyond that is nuance. Nuance matters, but it's not the real story, it's the side show.
p-o
I think the brunt of the disruption from AI is already behind us, for LLMs at least. It's possible we'll see improvements over the following months/years, but governments will inevitably start to catch up to the level of disinformation and confusion that AI has brought to this world. Laws and regulations that need to be created to rein in AI will undoubtedly increase the opportunity cost of training LLMs. For some, it might feel similar to the early 2000s, but I think it's just a healthy rebalancing of what AI is and how society needs to implement this new, hardly controllable paradigm. From this perspective, OpenAI has a lot to lose, as it hasn't been able to create a moat for itself compared to, say, Anthropic.
dmix
This is taking Sam Altman's PR statements as proof of AGI? Even the quote they used questions the premise of the article:

> "We basically have built AGI" (later: "a spiritual statement, not a literal one")
0xbadcafebee
AGI isn't going to happen within the next 30 years, so this is moot. The actual researchers have said so many times. It's only the business people and laypeople whooping about AGI always being imminent.

You cannot get real, actual AGI (the same ability to perform tasks as a human) without a continuous cycle of learning and deep memory, which LLMs cannot do. The best LLM "memory" is a search engine and document summarizer stuffed into a context window (which is like having someone take an entire physics course and write down everything they learn on post-it notes; then you ask a different person a physics question, and that different person has to skim all the post-it notes and write a new post-it note to answer you). To learn, it would need RL (which requires specific novel inputs) and retraining (so that it can retain and compute answers with the learned input). This would all take too much time and careful input/engineering along with novel techniques. So AGI is too expensive, time-consuming, and difficult for us to achieve without radically different designs and a whole lot more effort.

Not only are LLMs not AGI, they're still not even that great at being LLMs. Sure, they can do a lot of cool things, like write working code and tests. But tell one "don't delete files in X/", and after a while it will delete all the files in "X/", whereas a human would likely remember it's not supposed to delete some files and go check first. It also does fun stuff like follow arbitrary instructions from an attacker found in random documents, which most humans wouldn't do. If they had real memory and real-time RL, they wouldn't have these problems. But we're a long way away from that.

LLMs are fine. They aren't AGI.
aleph_minus_one
I disagree with the headline: "Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."

I claim that currently no "value-aligned, safety-conscious project comes close to building AGI", failing on both counts:
- "value-aligned, safety-conscious"
- "close to building AGI"

So, based on this charter, OpenAI has no reason to surrender the race.
ozgung
"Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks." [Wikipedia]

One can argue that they have already achieved this, at least for short-term tasks. Humans are still better at organization, collaboration, and carrying out very long tasks like managing a project or a company.
Muhammad523
Two days from now and ClosedAI will remove their charter...
sreekanth850
He is the most terrible CEO among all of them.
m3kw9
"if a value-aligned, safety-conscious project" — and which project is that? Are you sure Anthropic isn't aware of this and angling for it? And are you sure what Anthropic says is really value-aligned and safety-conscious? The PR bit surely is working, right?
jimmydoe
Time to get rid of charter and be a normal member of this capitalism :)