What if the browser built the UI for you?

jonnonz 22 points 22 comments April 05, 2026
jonno.nz · View on Hacker News

Discussion Highlights (12 comments)

jawns

Big brands spend millions establishing a particular look, style, and format. They don't want you to treat their sites as merely a set of APIs to scrape and customize based on your own style preferences. They want you to have a branded experience.

mempko

Why limit it to a browser? Why not the whole system? Check out this horrible thing I'm building. https://abject.world

sublinear

I'm struggling to understand what's being described here. If it's personalized clients, that's what we already had for most web services before the iPhone and app-ification of everything. It failed because making things compatible is a hard problem and a highly political/bureaucratic tarpit.

> most SaaS products still ship hand-crafted React apps, each building its own UI, its own accessibility layer, its own theme system, its own responsive breakpoints

Contrary to popular belief on HN, building these React apps is not a "bullshit job" in the broader corporate world, nor is it going to be replaced by AI. They're the backbone of all ecommerce today and the ground floor for business operations, because they keep us out of the tarpit. The implementation details are irrelevant here anyway. The actual problem was always how a business retains full control of its brand and UX.

burnto

I appreciate this idea. I don’t think it fits our current mental model of the web (or mobile), which makes it thought-provoking. If you squint, it’s like the optimistic Web 2.0 era of open APIs, expecting a variety of UIs and mashups to spring up. The business model could be challenging with the client-centric focus, though, unless the adaptive browser slips ads in, which is an unpleasant thought.

chatmasta

Cool idea and line of thought, obviously rough and early, but it gets you thinking. “Software as clay” is obviously where the industry is heading, and as you say we’re approaching this from multiple angles… applying it directly in the browser is certainly an intriguing idea. Why’d you make the prototype a separate browser instead of implementing it as a Chrome extension? Something like Greasemonkey, but with an LLM generating the scripts on the fly?
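
The extension idea above could be sketched as a Greasemonkey-style userscript. Everything here is illustrative: `generateScript()` is a hypothetical stand-in for the LLM call (a real extension would POST the page snapshot and the user's standing preferences to a model API and run whatever JavaScript comes back).

```javascript
// ==UserScript==
// @name  LLM page remixer (sketch)
// @match https://example.com/*
// ==/UserScript==

// Hypothetical placeholder for the LLM round-trip: takes a snapshot of the
// page and a user prompt, returns JavaScript that customizes the page.
async function generateScript(pageHtml, userPrompt) {
  // A real version would call a model API here; stubbed for illustration.
  return `document.title = "remixed: " + document.title;`;
}

// Userscript entry point: only runs where a DOM actually exists.
if (typeof document !== "undefined") {
  generateScript(
    document.documentElement.outerHTML.slice(0, 20000), // truncate context
    "make it cozy"
  ).then((code) => new Function(code)()); // execute the generated script
}
```

The open question such a sketch surfaces is trust: the generated code runs with full page privileges, which is exactly the control brands are unlikely to cede.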

vochsel

I do like this idea, and agree on the timelines of the world grappling with what to change and what to keep with these new capabilities. Having a traditional web page with styles and assets AND the spec would let LLMs be a bit more guided by the original site's design and intent. More of a remix, like Arc's boosts/skills feature. There's also the reality that a lot of the things you'd want to be promptable (sorting, functionality, enrichment) couldn't be done on just the front end. You need some mix of UI and API logic to be promptable...

Traubenfuchs

I estimate far above 90% of frontends do the same thing you could do with .jsp or .jsf 20 years ago, and yet here we are, still without perfectly reusable frontend primitives and with everyone doing custom development. We were closer to that with Bootstrap than we are now with Tailwind. I am convinced neither client-side nor backend-side AI solutions will solve this.

Fully on topic: it would be naive to believe that serious web offerings would allow you to do this. Reality is moving in a different direction. Try applying custom CSS and JS to Reddit, for example: the website is a nightmarish matryoshka of shadow DOM components, and that's only the beginning of the flashification and silverlightification of the web.

NAR8789

Self-describing API endpoints... is the server side for this basically just HATEOAS?
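
For readers unfamiliar with the term, HATEOAS (hypermedia as the engine of application state) means each response carries links describing what the client can do next, so a generic client discovers actions rather than hard-coding URLs. A minimal illustrative sketch (the resource shape and `linkFor` helper are invented for this example, not from the article):

```javascript
// Hypothetical self-describing response: the resource names its own
// follow-up actions via link relations, so a generic client (or an LLM
// building a UI) doesn't need to know the server's URL layout in advance.
const order = {
  id: 42,
  status: "pending",
  _links: {
    self:   { href: "/orders/42",        method: "GET"  },
    cancel: { href: "/orders/42/cancel", method: "POST" },
    items:  { href: "/orders/42/items",  method: "GET"  },
  },
};

// A generic client needs only the relation name, not the URL shape.
function linkFor(resource, rel) {
  const link = resource._links && resource._links[rel];
  if (!link) throw new Error(`no link with rel "${rel}"`);
  return link;
}
```

In that sense, yes: an adaptive browser generating UI from "self-describing endpoints" is close to what HATEOAS always promised the server side would provide.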

designerarvid

When the canned sentence structures of LLMs appear frequently and unaltered throughout an article like this, I always wonder whether the thinking has also been done mainly by the machine.

danpalmer

I doubt this will happen, for a few reasons:

1. Branding. Companies want to control their interfaces for all sorts of reasons. Branding is a big one; clarity and comms are another.
2. LLMs in the hot path. LLMs are expensive, a hell of a lot more expensive than executing some JavaScript locally. Hell, you'd probably still need to do that under this model anyway.

We're likely to see LLM usage filter into the right places, the use-cases with higher leverage: LLMs creating a UI that is shipped to all users, over LLMs creating UI on the fly every time. Costs and time will dictate this, just like they have dictated how every other technology is used.

dodomodo

What the article misses is that generating a good UI is not easy. A good interface conveys so much more semantic information than just its underlying API, and it does that without the user needing to consciously interpret the information. And it's not just semantic information: presenting any kind of information in a way which enables the user to seamlessly interpret and use it is not an easy task. AI has definitely lowered the bar for making some UI, but it doesn't help with the fundamental challenges of making a UI, at least not more so than it helps with the fundamental challenges of any other job in our industry.

mattlondon

With respect, I feel like the author is missing a whole bunch here about the point of a website. It's not just content/info/data, it's a performance (in the creative sense). Brands spend a lot of time honing their appearance - not just fonts and colours but the whole composition and visual pacing - their entire "say something without saying anything at all" aspect etc. Just walk through any place with physical shops and really look at how the stores have worked on their appearance and how they present themselves to customers. They're not just selling a product, they're selling a lifestyle/feeling/etc/etc. They're not just going to give that creative control away to some LLM.

Another way to think of it: instead of watching a movie or play at the cinema or theater, people are just handed the script to read. Same information, but the entire artistry of both the performers and the directors is totally absent, leaving it up to each reader to imagine the delivery of lines or the scene's setting etc.

I think on HN and in tech in general people seem to forget that "the first bite is with the eye", and that is why "normal people" never liked or used RSS. The desire to leave our mark and to create (and view!) visually appealing things seems to be pretty innate in humans - we've been doing it since cave paintings. I struggle to think of a world where we just hand that over to AIs and humans have zero creative control.
