The tool that won't let AI say anything it can't cite

volatilityfund 35 points 14 comments April 10, 2026
github.com · View on Hacker News

Discussion Highlights (11 comments)

nnevatie

Considering that Claude sometimes confuses the identities of itself and the user, this might as well cite the user - "you just said X".

4ndrewl

I tried it with the Car Wash question (it failed): all its claims were mostly fuel-consumption or emissions-related, plus this "factual (ai)": "Weather, traffic, and personal urgency are the only significant variables that could tilt the decision toward driving." My gut feeling is that if this could be done, it would be a core part of one of the model providers' output.

hdemmer

Used the demo app.
Q: Who directed Scarface?
A: - 1983 film (most commonly referred to): Directed by Brian De Palma. - 1932 original version: Directed by Michael Curtiz.
This is wrong. The 1932 movie is by Howard Hawks.

0x3f

Well, I would have tried it but the website kills Firefox. Hard to see how you could really make this work though. You might as well just add "fetch and re-read all sources explicitly to make sure they are correct" to a normal prompt.

jampekka

The HN title is quite a strong claim, but it's nowhere to be seen in the repo. It seems to be fully prompt-based, so the AI can still say anything it pleases. How well do these complicated prompt systems usually work? My strategy is to stick mostly to simple prompts, with potentially some deterministic tools and vendor harnesses, on the rationale that these are what the models are trained and evaluated with, and that LLMs still often get tripped up when their context is spammed with too much stuff.
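The "simple prompt plus deterministic tool" approach jampekka describes can be sketched roughly like this. Everything here is hypothetical (the fact table, the `answer` helper, the prompt wording are all invented for illustration), assuming the idea is to route questions a deterministic source can answer away from the model entirely:

```python
# Hypothetical sketch of "simple prompts + deterministic tools".
# KNOWN_FACTS and answer() are invented names, not from the project.

KNOWN_FACTS = {  # deterministic source of truth, e.g. loaded from a database
    "director of scarface (1932)": "Howard Hawks",
}

def answer(question, llm=None):
    """Try the deterministic path first; fall back to one plain prompt."""
    key = question.strip().lower().rstrip("?")
    if key in KNOWN_FACTS:  # no model involved, so no hallucination possible
        return KNOWN_FACTS[key]
    # Fallback: a single simple prompt, no elaborate scaffolding in context.
    prompt = f"Answer concisely and say 'unknown' if unsure: {question}"
    return llm(prompt) if llm else "unknown"

print(answer("Director of Scarface (1932)?"))  # → Howard Hawks
```

The deterministic branch never touches the model, which is the point: the complicated prompt machinery only ever runs on the residue of questions the tools cannot settle.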

Gijs4g

The website stutters to a halt. Managed to ask if Ali Khamenei is still alive. It answered "Yes, ..."

tomlockwood

I love how at the beginning of this boom people were talking about how heuristics applied to AI outputs were short-term gains disguised as real progress. Now it seems like almost every new tool is a series of heuristics applied to AI outputs.

pjmalandrino

Why are you building your own DAG system instead of just using LangGraph? You could cut complexity and focus on what actually matters: the claims, evidence tiers, and conflict detection. Also, embedding claims in the Chain of Thought instead of post-processing them might force rigor earlier in the pipeline. (Assuming the zero-deps constraint isn't a blocker?)
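The claims / evidence-tiers / conflict-detection core pjmalandrino points at can be sketched in a few lines. This is a minimal, hypothetical model (the `Claim` shape, tier names, and `detect_conflicts` helper are all assumptions, not the project's actual design), assuming conflicts mean two claims about the same subject with different statements, resolved in favor of the stronger evidence tier:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical tier ordering: lower rank = stronger evidence.
TIER_RANK = {"primary": 0, "secondary": 1, "model": 2}

@dataclass
class Claim:
    subject: str    # what the claim is about
    statement: str  # the asserted text
    tier: str       # evidence tier: "primary", "secondary", or "model"

def detect_conflicts(claims):
    """Group claims by subject; flag subjects whose statements disagree,
    sorting each conflicting group so the strongest-tier claim comes first."""
    by_subject = defaultdict(list)
    for c in claims:
        by_subject[c.subject].append(c)
    conflicts = {}
    for subject, group in by_subject.items():
        if len({c.statement for c in group}) > 1:
            conflicts[subject] = sorted(group, key=lambda c: TIER_RANK[c.tier])
    return conflicts

# The Scarface mix-up from the thread, as a conflict between a model
# assertion and a primary source:
claims = [
    Claim("scarface-1932-director", "Michael Curtiz", "model"),
    Claim("scarface-1932-director", "Howard Hawks", "primary"),
]
resolved = detect_conflicts(claims)
print(resolved["scarface-1932-director"][0].statement)  # → Howard Hawks
```

Whether this lives in a hand-rolled DAG or a LangGraph node is then mostly plumbing; the claim model itself is where the rigor has to come from.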

est

Looks like it just finds sources in Confluence to check against the bullshit Claude Code says? I thought it could search for online cites.

todotask2

The interactive app made my mouse movement really sluggish on macOS.

doginasuit

I'm positive there are use cases for this tool, but after several years of working with LLMs, hallucinations have become a non-issue. You start to get a sense of the likely gaps in their knowledge, just like you would with a person. Take questions about application settings, for example: where to find a particular setting in a particular app. The LLM has a sense of how application settings are generally structured, but the answer is almost never spot on. I just prefix these questions with "do a web search" or provide a link to documentation, and that is usually enough to get a decent response along with citations.
