Taste in the age of AI and LLMs

speckx 233 points 192 comments April 07, 2026
rajnandan.com · View on Hacker News

Discussion Highlights (20 comments)

dk970

The new world order is what not to build...

furyofantares

Extremely ironic piece of slop.

gmaster1440

If you're properly bitter-lesson-pilled then why wouldn't better models continue to develop and improve taste and discernment when it comes to design, development, and just better thinking overall?

allears

And if anybody knows about good taste, it's techies, right?

ibero

https://x.com/netcapgirl/status/2024140332963705342?s=46 evergreen.

CharlieDigital

> One of the most useful things about AI is also one of the most humbling: it reveals how clear your own judgment actually is. If your critique stays vague, your taste is still underdeveloped. If your critique becomes precise, your judgment is stronger than the model output. You can then use the model well instead of being led by it.

Something I find that teams get wrong with agentic coding: they start by reverse-engineering docs from an existing codebase. This is a mistake. Instead, the right train of thought is: "what would perfect code look like?" Then meticulously describe to the LLM what "perfect" is, to shape every line that gets generated. This exercise is hard for some folks to grasp because they've never thought much about what well-constructed code or architecture looks like; they have no "taste" and thus no ability to precisely dictate the framework for "perfect" (yes, there is some subjectivity that reflects taste).
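The spec-first workflow the comment describes could be sketched roughly as follows. This is a hypothetical illustration, not anything from the comment itself: the spec contents and the `build_prompt` helper are invented, and a real setup would send the resulting prompt to whatever model client you use.

```python
# A minimal sketch of "describe what perfect looks like first":
# the architecture spec is written by a human before any code is
# generated, then prepended to every request sent to the model.
# Spec contents and function names here are illustrative assumptions.

ARCHITECTURE_SPEC = """\
- Domain logic lives in pure functions; no I/O inside the core package.
- Every public function has type hints and a one-line docstring.
- Errors are surfaced as typed results, never silently swallowed.
"""

def build_prompt(task: str, spec: str = ARCHITECTURE_SPEC) -> str:
    """Shape every generation request with the same definition of 'perfect'."""
    return (
        "You are generating code for this codebase. "
        "All output must satisfy the following standards:\n"
        f"{spec}\nTask: {task}"
    )

prompt = build_prompt("Add a function that parses ISO 8601 dates.")
```

The point is that the judgment lives in the spec, which the human owns; the model only ever sees tasks framed by it.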

rvz

Well, nope. There are three real moats left in software: Distribution, Data (Proprietary) and Iteration Speed. Very successful companies have all three: Stripe, Meta, Google, Amazon.

verdverm

Related: https://blog.kinglycrow.com/no-skill-no-taste/ Discussion: https://news.ycombinator.com/item?id=47089907

dlev_pika

Rick Rubin said it best. https://youtu.be/jg1WUOxY6Cg?si=0ajVvgKnyuSz0e2Y

everyone

I don't buy the author's argument. Not much has changed, imo. Mediocre slop has always been the easiest thing to generate.

dist-epoch

Ah, the classic "we'll ship production to China and just do design and marketing in US, because we have taste on what to build, and China doesn't". That worked really well...

anonzzzies

I use AI for code, and we review that code and write tests ourselves first, which the AI cannot touch. For writing we hardly ever use it, unless we know the requester is incompetent and will never read it anyway; then it is a waste of time to write anything ourselves, but they expect something substantial and nice-looking to tick a few boxes. It is great for that.

A large bank with 40 layers of management, all equally incompetent, asked for an 'all-encompassing technical document vault'. One of them sent an 'expectation document' that contained so much garbage it showed they did not even know what they were asking for, but thousands of pages was the expectation. So sure, Claude will write that in an hour, and NotebookLM will add 100 slide decks for juiciness. At first sight it looks amazing; it's probably mostly accurate as well, but who knows; they will never, ever read it; no one will. We got the 20m+ project (with many opportunities to grow much larger). Before, that was only in reach of the huge consultancies (where everyone in those management layers probably worked before), who we used to lose against. Slop has its purpose.

inerte

Ah, Steve Jobs vs Bill Gates. Designer vs 41 shades of blue. This is nothing new. There's space for everybody.

danielvaughn

Disagree with the overall argument. Human effort is still a moat. I've been spending the past couple of months creating a codebase that is almost entirely AI-generated. I've gotten way further at this pace than I would have otherwise, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.

There's some truth in there that judgement is as important as ever, though I'm not sure I'd call it taste. I'm finding that you have to have an extremely clear product vision, along with an extremely clear language used to describe that product, for AI to be used effectively. Know your terms, know how you want your features to be split up into modules, know what you want the interfaces of those modules to be.

Without the above, you run into the same issue devs would run into before AI: the codebase becomes an incoherent mess, and even AI can't untangle it, because the confusion gets embedded into its own context.
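One way to make "know the interfaces of your modules" concrete is to pin the module boundaries down in code before asking a model to fill in the implementations. A hedged sketch, with all names invented for illustration:

```python
from typing import Protocol

# Illustrative only: a human fixes the seams (the Protocols and how
# they compose); the implementations behind them are what you can
# safely delegate to an AI. Every name here is a made-up example.

class UserStore(Protocol):
    def get_email(self, user_id: int) -> str: ...

class Mailer(Protocol):
    def send(self, to: str, subject: str, body: str) -> None: ...

def notify(store: UserStore, mailer: Mailer,
           user_id: int, subject: str, body: str) -> None:
    """The composition is a human decision; implementations are fungible."""
    mailer.send(store.get_email(user_id), subject, body)

# Trivial in-memory implementations to show the seams hold together.
class DictStore:
    def __init__(self, emails: dict[int, str]) -> None:
        self.emails = emails
    def get_email(self, user_id: int) -> str:
        return self.emails[user_id]

class ListMailer:
    def __init__(self) -> None:
        self.sent: list[tuple[str, str, str]] = []
    def send(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject, body))

store = DictStore({1: "a@example.com"})
mailer = ListMailer()
notify(store, mailer, 1, "hi", "hello")
```

Because the interfaces are stable, a generated implementation that drifts from the agreed vocabulary fails type checking rather than quietly muddying the codebase.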

jrm4

The article buries the more important point, one tech hasn't learned yet. Taste may matter mainly because it helps toward the truly important thing, which is skin in the game. And with the right skin in the game, you don't even need "taste." You just need real-life consequences, which we don't impose enough in tech.

ben8bit

> A practical loop for training taste

Taste is cheap. Taste (or at least a rudimentary version of it) is something you start with at the beginning of your career. Taste is the thing that tells you "this is fucking cool," or "I don't know why, but this just looks right." LLMs are not going to replicate that, because an LLM is not a human, and taste isn't something you can manufacture. Now, MAKING something that "looks right" is hard, and because LLMs are churning out the middle, the middle is moving somewhere else. Just like rich people during the summer.

LurkandComment

Think about moats in the long term vs. the short term:

Speed and distribution aren't a long-run moat, because AI can cannibalize them on a platform. Eventually the platforms will coexist on your distribution base and offer it at a lower cost than you. It's a moat if it holds up until you exit at a high valuation... which a lot of companies are set up to do.

Taste: that's interesting. There is an argument there. It's hard to keep in the long run, and it requires a lot of reinvestment in new talent.

Proprietary data: yes, very much so.

Tradecraft: your new shiny system will still have to adhere to the methods of old, clunky, real-world systems. Examples: evidence for court, or methods for investigations. This is going to be industry-specific, but you'd be surprised how many there are. This is long-term. Those who have this moat should focus on short bursts of meaningful change, as they will rely heavily on gaining trust in established systems. In those places it's more about trusting what's going on than doing it faster and better, so you want trust plus faster and/or better.

tayo42

> That is why so much AI-generated work feels familiar:

This was already a complaint people had before AI. Like when logos and landing pages all used to look the same. Or coffee shops all looking the same.

boshalfoshal

I think "taste" is definitely an overused meme at this point, its like tech twitter discovered this word in 2024 and never stopped using it (same with "agency", "high leverage", etc). Having read the article, I think I see the author's argument (*). I think "taste" here in an engineering context basically just comes down to an innate feeling of what engineering or product directions are right or wrong. I think this is different from the type of "taste" most people here are talking about, though I'm sure product "taste" specifically is somewhat correlated with your overall "taste." Engineering "taste" seems more correlated with experience building systems and/or strong intuitions about the fundamentals. I think this is a little different from the totally subjective, "vibes based taste" that you might think of in the context of design or art. Now where I disagree is that 1. "taste" is a defensible moat 2. "taste" is "ai-proof" to some extent "Taste" is only defensible to the extent that knowing what to do and cutting off the _right_ cruft is essential to moving faster. Moving faster and out executing is the real "moat" there. And obviously any cognitive task, including something as nebulous as "taste," can in theory be done by a sufficiently good AI. Clarity of thought when communicating with AI is, imo, not "taste." Talking specifically about engineering - the article talks about product constraints and tradeoffs. I'd argue that these are actually _data_ problems, and once you solve those, tradeoffs and solving for constraints go from being a judgement call to being a "correct" solution. That is to say, if you provide more information to your AI about your business context, the less judgement _you_ as the implementer need to give. This thinking is in line with what other people here have already said (real moats are data, distribution, execution speed). 
I think there's something a bit more interesting to say about the user empathy part, since it could be difficult for LLMs to truly put themselves in users shows when designing some interactive surfaces. But I'm sure that can be "solved" too, or at least, it can be done with far less human labor than it already takes. In general though, tech people are some of the least tasteful people, so its always funny to see posts like this.

micromacrofoot

Taste isn't a moat at all, because it's so variable. In fact, this stuff will start dictating what taste is through broad proliferation. You already see it on Facebook with all the AI-generated meme sharing... taste is being eroded there.
