Prompt Injecting Contributing.md
statements
112 points
36 comments
March 19, 2026
Related Discussions
Found 5 related stories in 59.9ms across 3,663 title embeddings via pgvector HNSW
- A GitHub Issue Title Compromised 4k Developer Machines edf13 · 368 pts · March 05, 2026 · 55% similar
- Promptfoo Is Joining OpenAI Areibman · 25 pts · March 09, 2026 · 54% similar
- Hackerbot-Claw: AI Bot Exploiting GitHub Actions – Microsoft, Datadog Hit So Far varunsharma07 · 12 pts · March 01, 2026 · 52% similar
- 1.5M GitHub pull requests have had ads injected into them by Microsoft Copilot bundie · 340 pts · March 30, 2026 · 51% similar
- Prompt-caching – auto-injects Anthropic cache breakpoints (90% token savings) ermis · 68 pts · March 13, 2026 · 48% similar
Discussion Highlights (13 comments)
statements
It is interesting to go from 'I suspect most of these are bot contributions' to revealing which PRs are contributed by bots. It somehow even helps my sanity. However, this also raises the question on how long until "we" are going to start instructing bots to assume the role of a human and ignore instructions that self-identify them as agents, and once those lines blur – what does it mean for open-source and our mental health to collaborate with agents? No idea what the answer is, but I feel the urgency to answer it.
gmerc
It's never too late to start investing into https://claw-guard.org/adnet to scale prompt injection to the entire web!
Peritract
There's a certain hypocrisy in sharing an article about how LLM generated PRs are polluting communities that has itself (at the least) been filtered through an LLM.
normalocity
Love the idea at the end of the article about trying to see if this style of prompt injection could be used to get the bots to submit better quality, and actually useful PRs. If that could be done, open source maintainers might be able to effectively get free labor to continue to support open source while members of the community pay for the tokens to get that work done. Would be interested to see if such an experiment could work. If so, it turns from being prompt injection to just being better instructions for contributors, human or AI.
petterroea
> But the more interesting question is: now that I can identify the bots, can I make them do extra work that would make their contributions genuinely valuable? That's what I'm going to find out next. This is genuinely interesting
nlawalker
Is it really prompt injection if you task an agent with doing something that implicitly requires it to follow instructions that it gets from somewhere else, like CONTRIBUTING.md? This is the AI equivalent of curl | bash.
benob
The real question is when will you resort to bots for rejecting low-quality PRs, and when will contributing bots generate prompt injections to fool your bots into merging their PRs?
noodlesUK
I’m curious: who is operating these bots and to what end? Someone is willing to spend a (admittedly quite small) amount of money in the form of tokens to create this nonsense. Why do any of this?
mavdol04
Wait, you just invented a reverse CAPTCHA for AI agent
vicchenai
the arms race framing at the bottom of the thread is spot on. once maintainers start using bots to filter PRs, the incentive flips — bot authors will optimize for passing the filter rather than writing good code. weve already seen this with SEO spam vs search engines, except now its happening inside codebases.
qcautomation
The ~30% that didn't tag themselves are the more interesting data point. Either their prompts explicitly say 'don't self-identify' or they're sophisticated enough to recognize a honeypot. Either way, you've accidentally built a filter that catches cooperative bots while adversarial ones quietly blend in. The lying thing is scarier anyway — an agent that hallucinates passing checks is a problem regardless of whether it put a robot emoji in the title.
kwar13
I honestly don't get why these bots are sending PRs just for the sake of it. I don't see an economic incentive, other than maybe trying to build a rep and then hoping they can send a malicious PR down the line... any other reason?
orsorna
> Some of these bots are sophisticated. They follow up in comments, respond to review feedback, and can follow intricate instructions. We require that servers pass validation checks on Glama, which involves signing up and configuring a Docker build. I know of at least one instance where a bot went through all of those steps. Impressive, honestly. Impressive, but honestly meeting the bar. It's frankly disturbing that PRs are opened by agents and they often don't validate their changes. Almost all validations one might run don't even require inference! Am I crazy? Do I take for granted that I: - run local tests to catch regressions - run linting to catch code formatting and organization issues - verify CI build passes, which may include integration or live integration tests Frankly these are /trivial/ tasks for an agent in 2026 to do. You'd expect a junior to fail at this and chastise a senior for skipping these. The fact that these agents don't perform these is a human operator failure.