I'm Getting a Whiff of Iain Banks' Culture
ibobev
41 points
49 comments
March 09, 2026
Related Discussions
Found 5 related stories in 48.9ms across 3,471 title embeddings via pgvector HNSW
- The Anti-Intellectualism of Silicon Valley Elites speckx · 88 pts · April 01, 2026 · 39% similar
- The anxiety driving AI's brutal work culture is a warning for all of us saikatsg · 19 pts · March 13, 2026 · 38% similar
- 'Our consciousness is under siege': On chatbots, social media and mental freedom billybuckwheat · 12 pts · March 05, 2026 · 38% similar
- I am definitely missing the pre-AI writing era joozio · 303 pts · March 30, 2026 · 37% similar
- 'Backrooms' and the Rise of the Institutional Gothic anarbadalov · 188 pts · April 02, 2026 · 37% similar
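The related-stories header above describes an approximate nearest-neighbour search over title embeddings using pgvector's HNSW index. As a rough sketch of what that ranking computes (the table and column names here are assumptions, not the site's actual schema), pgvector's `<=>` operator orders rows by cosine distance, and the underlying score can be reproduced in plain Python:

```python
import math

# pgvector's `<=>` operator returns cosine distance (1 - cosine similarity);
# an HNSW index turns the ORDER BY ... LIMIT k query into a fast approximate
# nearest-neighbour lookup instead of a full scan. Equivalent SQL, under an
# assumed schema (not the site's actual one):
#   SELECT title FROM stories ORDER BY title_embedding <=> %(query)s LIMIT 5;

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-d embeddings; real title embeddings are hundreds of dimensions.
query = [0.9, 0.1, 0.3]
stories = {
    "story-a": [0.8, 0.2, 0.4],
    "story-b": [0.1, 0.9, 0.0],
}

# Rank stories by similarity to the query title, best match first.
ranked = sorted(stories, key=lambda s: cosine_similarity(query, stories[s]),
                reverse=True)
```

The "37-39% similar" figures in the list above would then be cosine similarities between each stored title embedding and this story's embedding.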
Discussion Highlights (20 comments)
automatic6131
This is pure delusion. Completely devoid of data, based on vibes and a vague whiff that maybe a chatbot did all the hard work actually done by hardworking spooks. Effective operations just like this happened long before ChatGPT launched.
kaashif
> you just can’t play StarCraft that much better than the best humans

I could not disagree with this more. Just the perfect-micro part means that computers have a far higher ceiling than humans. No, it is not possible in theory for humans to have perfect micro with thousands of APM! We're talking about hundred-unit zergling swarms perfectly dodging tank shells. Hundreds of actions at multiple locations on the map. Perfect timing and placement for every order. This is like saying an aimbot wouldn't make a top CS pro much better.
Matl
I perhaps get where the author is coming from at a very surface level, but the US is acting like a drunk Culture where the Minds face credible accusations of all sorts of abuse, are named something like 'I Got Small Dick, Wannu Make Everyone Think Is Big', have no goal beyond self-enrichment and ships that dump their human passengers into empty space with the promise that if they somehow survive the next time they come onboard, everything is going to be even more BIG, GREAT and BEAUTIFUL! So not sure I buy the analogy.
recursivedoubts
"constantly put pressure on the human"

In my experience this is the big difference with AI vs humans. It's not superhuman intelligence (although it does have a massive working memory) but rather the ability to just grind on anything you throw at it, long past the point when any reasonable human would have taken a break or given up. "It can kind of be bargained with. It can kind of be reasoned with. It doesn't feel pity, or remorse, or fear, but it will fake them! And it absolutely will not stop, ever, until you are absolutely right!"
TimorousBestie
> I always thought of the Culture as closest to the European Union: Seemingly harmless but if anyone ever picked a fight with them, they’d find out that the EU can get its act together very quickly and can very quickly stand up the strongest army in the world.

This is either a misreading of the Culture (which for all its fictional foibles is not a federation of nation-states), a misunderstanding of what the European Union is, or both.
flowerbreeze
I understand the attempted analogy, but this is more like dealing with AIs built by the Ferengi than with one of the Culture's Minds.
zer00eyz
The failure here is not seeing that the "plan" has been on the books, and being refined, for well over 40 years (the Shah was deposed in 1979). This is the JOB of the military... and it has been for a long time. I would think there is even a modern version of "War Plan Red" (see: https://en.wikipedia.org/wiki/War_Plan_Red ) somewhere.
yacin
not sure i'd lump Iran in with Venezuela here. also far too early to say if either will lead to a "win", whatever that means.
pavlov
The prize for the most insane take on the Iran War has been awarded to this piece. Let's see how many days until something else tops it.
alistairSH
Or, the US military is just that good. I mean, we spend orders of magnitude more than our closest adversaries, let alone other smaller nations. It should be that good. No AI necessary. Maybe.
keybored
Uninformative title? I’m getting a whiff of AI cont— oh right, there it is. Today it’s how AI is a superpower for what is already by far the most powerful military in the world. Okay, sure, why not. In the case of Maduro, was that an amazing feat? Massacring the whole bodyguard entourage? Capturing a head of state who might have been a willing accomplice? How does this square with bombing civilian targets in Iran? Another superhuman stalker-micro move?
Apocryphon
"The Master Chief has gone rampant."
arethuza
A Culture Mind would at least have a clear set of objectives and a plan for how to achieve them? "Bomb everything forever" doesn't seem very like the Culture at all?
wildster
Except the Culture are the good guys.
MrOrelliOReilly
I have had the same hypothesis about the recent operational successes of US military interventions, but would agree with other comments here that this is more "vibes" than data. It's been reported that Maven (integrated with Claude) has been used extensively for Iran, but I haven't seen any hard evidence that this is directly contributing to greater US military efficiency. I do buy the general thesis that AI could support operational excellence and solve attention problems across concurrent actions. It would be good to see more reporting or combat analysis that tries to measure the contribution of AI (e.g., how many more concurrent aerial sorties are taking place vs. equivalent past interventions, how many more strikes are "successful" vs. the past, etc.).

EDIT: I see this post has been flagged. Why? I understand it’s political, but it seems very much within the site’s ethos. I didn’t get the impression it was AI-written either.
skybrian
We should be more specific about what’s surprising. The US being able to engage in a very one-sided air war is not surprising. The Gulf War went similarly well and so did the 2003 invasion of Iraq, at first. I think it’s surprising that attempting to capture or kill a foreign leader actually worked. But I’m not sure if US presidents other than Trump would have tried? Trump has a lot of “you can just do things” energy due to being largely unconstrained by legal or moral considerations, or larger strategic concerns. Israeli intelligence being able to so thoroughly hack the devices of their enemies clearly has a lot to do with this. What happened to Hezbollah was surprising.
tao_oat
This feels somewhat ahistorical. The US has nearly always been successful in terms of conventional firepower and individual operations. E.g., in 2003 the US overthrew Saddam's government in a matter of weeks, and the US won most battles in Vietnam. That doesn't change the fact that the strategic outcomes and long-term track record are poor. Trying to draw a link to AI or the current state of the US military feels flimsy. Anyway, the recurring Big Question throughout the Culture series is "how should a highly progressive, developed, and egalitarian society act when it meets others who are not?". The US is sliding further and further from that ideal, and you can argue whether it was ever close.
OkayPhysicist
I think the author is making a mistake in attributing the seemingly new competence of the US military to AI, rather than noticing that the US has spent the last half-century or so picking the kinds of fights we absolutely suck at. Force projection, targeted aerial strikes, intelligence gathering, and a nuclear deterrent play to the US military's strengths. Convincing the people whose leaders we just whacked to like us? Not at all. The US doesn't have the political will to commit the monstrous acts required to stomp out an insurgency, and we, as the big bad empire on the global stage, can't help but inspire insurgents. If you look at the boondoggles the US has gotten itself into post-Korea, they typically follow a pattern of "we show up, complete the key objectives in the first couple of days, then spend years occupying territory while trying to root out an insurgency, creating new insurgents at least as fast as we neutralize them, and eventually limp away with our tail between our legs." Lately, we've been doing just the first part, which is the part we've been good at for ages. No need to invoke AI; we just haven't gotten around to doing the part we suck at.
harperlee
A fundamental thing this misses, I think, is that the reinforcement-learning approach of AlphaGo does generate that sensation of narrative-free, everything-at-once alien thinking, whereas using an LLM as hypothesized would produce a clear tree-like plan with an overarching thesis, so the approach would feel more traditional and human-like.
rimeice
"I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side." Dario Amodei [1] [1] - https://www.darioamodei.com/essay/machines-of-loving-grace#5...