Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build
I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can't tell if the layout is broken or if the console is throwing errors.

So I built a CLI that lets the agent open a browser, interact with the page, record what happens, and collect any errors. Then it bundles everything (video, screenshots, logs) into a self-contained HTML file I can review in seconds.

    proofshot start --run "npm run dev" --port 3000
    # agent navigates, clicks, takes screenshots
    proofshot stop

It works with whatever agent you use (Claude Code, Cursor, Codex, etc.); it's just shell commands. It's packaged as a skill, so your AI coding agent knows exactly how it works. It's built on agent-browser from Vercel Labs, which is far better and faster than Playwright MCP.

It's not a testing framework. The agent doesn't decide pass/fail. It just gives me the evidence so I don't have to open the browser myself every time.

Open source and completely free.

Website: https://proofshot.argil.io/
Discussion Highlights (19 comments)
Imustaskforhelp
Great to see this, but exe.dev (not sponsored, but they are pretty cool and I use them quite often; if they wish to sponsor me that would be awesome haha :-]) actually has this functionality natively built in. But it's great to see some other open source alternatives in this space as well.
Horos
What about an MCP over CDP (Chrome DevTools Protocol)? My Claude drives its own Brave browser autonomously, even for UI work.
VadimPR
Looks nice! Does it work for desktop applications as well, or is this only web dev?
zkmon
Taking screenshots and recording is not quite the same as "seeing". A camera doesn't see things. If the tool can identify issues and improvements to make by analyzing the screenshot, that, I think, is useful.
theshrike79
What does this do that playwright-cli doesn't? https://github.com/microsoft/playwright-cli
lastdong
This is basically what Antigravity (Google's Windsurf) ships with. Having more options to add this functionality to opencode / Claude Code for local models is really awesome. MIT license too!
can16358p
How would this play with mobile apps? I'd love to see an agent doing work, then launching app on iOS sim or Android emu to visually "use" the app to inspect whether things work as expected or not.
jofzar
I'm going the opposite direction of what everyone else is saying: this is sick, OP. Based on what's in the document, it looks really useful when you need to quickly fix something and validate that nothing has changed in the UI/workflow except what you asked for. It also looks useful for PRs: a before and after of what changed.
z3t4
I'm currently experimenting with running a web app "headless" in Node.js by implementing some of the DOM JS functions myself, then writing mocks for keyboard input, etc. The code agent then runs the headless client, which also starts the tests. In my experience, coding agents are very bad at detecting UX issues; they can, however, write the tests for me if I explain what's wrong. So I'm the eyes and it's my taste; the agent writes the tests and the code.
onion2k
> I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can't tell if the layout is broken or if the console is throwing errors.

I give the agent either a simple browser or Playwright access to proper browsers to do this. It works quite well, to the point where I can ask Claude to debug GLSL shaders running in WebGL with it.
m00dy
Gemini on Antigravity is already doing this.
boomskats
I find the official Chrome DevTools MCP excellent for this. Lighter than Playwright, the loop is shorter, and easy to jam into Electron too.
alkonaut
This would be _extremely_ valuable for desktop dev, where you don't have a DOM or an "accessibility" layer to interrogate. Think e.g. of a drawing application. You want to test that after the user starts the "draw circle" command and clicks two points, there is actually a circle on the screen. No matter how many abstractions you build over your domain model and rendering, you can't actually test that "the user sees a circle". You can verify your drawing contains a circle object. You can verify your renderer was told to draw a circle. But fifty things can go wrong before the user actually agrees he saw a circle (the color was set to transparent, the layer was hidden, the transform was incorrect, the renderer didn't swap buffers, ...).
dbdoskey
This is really cool. Have you thought of maybe accessing the screen through accessibility APIs? For Android mobile devices, I have a skill I created that accesses the screen's XML dump as part of feature development, and it seems to work much better than screenshots/videos. Is this scalable to other OSes?
sd9
I've always found screenshots on PRs incredibly helpful as a reviewer. Historically I've had mixed success getting my team to consistently add screenshots to PRs, so this tool would be helpful even for human-written code.

At work, we've integrated Claude Code with GitLab issues/merge requests, and we get it to screenshot anything it's done. We could use the same workflow to screenshot (or in this case, host a proofshot bundle of) _any_ open PR. You would just get the agent to check out any PR, get proofshot to play around with it, then add that as a comment. So not automated code reviews, which are tiresome, but more like a helpful comment with more context.

Going to try out proofshot this week; if it works like it does on the landing page, it looks great.
dude250711
That is not UI, that's just some web pages with JS.
nunodonato
I've been using playwright-cli (not the MCP) for this same purpose. It lacks the video feature, I guess, but at least it's local and without external dependencies on even more third parties (in your case, Vercel). Perhaps you could support a local solution as an alternative as well?
peter_retief
Thanks! I do take screenshots and paste them manually for front-end stuff; nice idea though.
grahammccain
This is really useful thank you!