AI (2014)

bjornroberg 69 points 68 comments March 20, 2026
blog.samaltman.com · View on Hacker News

Discussion Highlights (11 comments)

nik736

> (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct--they just search a gigantic solution space very quickly.)

Isn't that how LLMs are trained right now? Trying to predict the next word within a "gigantic solution space". Interesting.
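The "predict the next word within a gigantic solution space" picture can be sketched with a toy bigram model. This is purely illustrative: the table, probabilities, and function names below are made up, and a real LLM scores every token in a huge vocabulary with a neural network rather than looking up a tiny hand-written table. Only the greedy next-token loop resembles the real thing.

```python
# Toy stand-in for next-word prediction: pick the highest-probability
# continuation from a (hypothetical, hand-written) bigram table.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def greedy_next(word):
    """Return the most probable next word, or None if the word is unseen."""
    candidates = bigram_probs.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start, max_len=5):
    """Greedily extend a sequence one word at a time."""
    seq = [start]
    while len(seq) < max_len:
        nxt = greedy_next(seq[-1])
        if nxt is None:  # fell off the edge of the "solution space"
            break
        seq.append(nxt)
    return seq

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Real models sample from the predicted distribution rather than always taking the argmax, which is one reason their output feels less like rote lookup.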

jryio

> The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

Man will do nothing and machine will do everything. That's a bleak world no one is preparing for. How is that universal basic income scheme coming along?

Jensson

> And maybe we don't want to build machines that are conscious in this sense. The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

This is where LLMs are currently heading. Not really AGI, since they can't think like humans, but they can do a lot of things, and humans can train them on novel things. Human work then shifts to figuring out new things while the AI solves all the old things, which seems much more fun than most white-collar work today.

drcongo

Wait, so his keyboard has got a shift key?!

trilogic

Nailed it 12 years ago... damn it. After all, Sam is not just talk and money. I just got humbled. This makes me reconsider my whole POV on Sam Altman.

mpalmer

> The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking

Steve Yegge said on some podcast recently that AI is going to have to come up with a more visual medium for communicating, because people don't want to read several paragraphs. He shared this uncritically, seemingly without judgement or disappointment. Yegge himself is a former Googler and by all accounts was an impressive person at one point, now best known as the person who vibe-birthed the inanity that is GasTown.

At work I'm seeing colleagues I once considered formidable completely turning off their brains and letting the bot drive, and wholly missing the mark on work quality. It's like a sickness, like COVID brain fog people don't even notice they have. I see humans getting worse at reading, worse at writing, and worse at programming by themselves. It makes me angry and sad. We are getting dumber, people, and I fully believe Altman and friends are lying when they say they want it otherwise.

DeathArrow

In a sane world, the AI revolution would be driven by the likes of Andrew Ng, Andrej Karpathy, and Yann LeCun, and not by a brigade of Sam Altmans.

Alan_Writer

> If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

Even if AI can't (yet) reach that level of creativity, it performs well while trying, at least for now. Who knows about the near future? So far, the roadmap is clear. The AI push is causing major layoffs in the tech and crypto industries, but we have been getting the message: "adapt or pay the consequences." Right now, even management positions are being replaced by software. It may sound harsh, but it's also part of human nature and evolution. We created these machines, and now we have to deal with them.

On the other hand, it may seem odd to say at this early stage, but we (regular human beings) barely know how the brain really works. And AI has demonstrated that it can work very well in some roles (mostly operational, ofc), and it's also becoming indispensable. Even governments like Abu Dhabi's are pushing to run the emirate entirely by AI. So yeah, even if we don't like it, AI is silently replacing humans. The best you can do is learn how to leverage it and not be left behind.

archagon

Altman is a ruthless capitalist, not a thought leader in any way. Why are we sharing his writing while pretending that he is?

Frannky

I model LLMs as searchers: give them an input and they search for and match an output. The sheer scale of the parameters and training data lets them map data in a way that makes searching look like human thinking. They can also permute a little and still stay in a space that overlaps with reality. The human brain may be doing something very similar: search and permutation over learned rules, just in a more functional way, with a greater ability to search over massive data that has holes but gets filled in with synthetic data from mental subprocesses running on learned rules. I think machines can eventually get there, especially if we can figure out how to harness continuous models instead of discrete ones. And I have a feeling that functional analysis may be the key.
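The "LLMs as searchers" mental model can be sketched as nearest-neighbor matching in a vector space. This is a hypothetical illustration, not how any real model works: the stored vectors, labels, and function names are all made up, and real LLMs learn a continuous mapping rather than looking up a memory table. Only the "match the input to the closest thing you've seen" intuition carries over.

```python
import math

# Hypothetical mini "searcher": map a query vector to the label of the
# closest stored example, using cosine similarity as the match score.
memory = {
    (1.0, 0.0): "greeting",
    (0.0, 1.0): "question",
    (0.7, 0.7): "greeting-question",
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query):
    """Return the label of the stored vector most similar to the query."""
    best = max(memory, key=lambda v: cosine(query, v))
    return memory[best]

print(search((0.9, 0.1)))  # "greeting"
```

Note how a query that is slightly "permuted" away from a stored point, like `(0.9, 0.1)`, still lands on the nearest match, which is one way to picture the comment's idea of staying in a space that overlaps with reality.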

maxutility

I found Sam's early 2015 posts on machine superintelligence and regulation [1] [2] to be even more interesting in hindsight, given OpenAI's accelerationist bent of late, OpenAI president Greg Brockman's lobbying efforts against AI regulation, and frequent accusations of attempted regulatory capture.

Sam's recommendations at the time include:

1) Provide a framework to observe progress…

2) Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case. For example, beyond a certain checkpoint, we could require development happen only on airgapped computers…, require that certain parts of the software be subject to third-party code reviews, etc.

3) Require that the first SMI developed have as part of its operating rules that a) it can't cause any direct or indirect harm to humanity (i.e. Asimov's zeroeth law), b) it should detect other SMI being developed but take no action beyond detection, c) other than required for part b, have no effect on the world. …

4) Provide lots of funding for R+D for groups that comply with all of this, especially for groups doing safety research.

5) Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI…

Also, in his acknowledgments he gives the greatest thanks to onetime partner, now rival, Dario Amodei.

[1] https://blog.samaltman.com/machine-intelligence-part-1
[2] https://blog.samaltman.com/machine-intelligence-part-2
