GitHub's fake star economy
Liriel
760 points
358 comments
April 20, 2026
Related Discussions
Found 5 related stories in 76.0ms across 5,126 title embeddings via pgvector HNSW
- Show HN: GitAgent – An open standard that turns any Git repo into an AI agent sivasurend · 108 pts · March 14, 2026 · 52% similar
- My AI Agents Lie About Their Status, So I Built a Hidden Monitor kaylamathisen · 13 pts · March 04, 2026 · 51% similar
- GitHub's Historic Uptime todsacerdoti · 447 pts · March 31, 2026 · 50% similar
- Show HN: I tested 15 free AI models at building real software on a $25/year VPS j0rg3 · 17 pts · April 02, 2026 · 49% similar
- Show HN: AI agents run my one-person company on Gemini's free tier – $0/month ppcvote · 15 pts · March 08, 2026 · 49% similar
Discussion Highlights (20 comments)
talsania
Seen this firsthand: repos with hundreds of stars and zero meaningful commits or issues. In hardware/RTL projects it's less prominent.
dafi70
Honest question: how can VCs consider the 'star' system reliable? Users who star a project often stop following it, so poorly maintained projects can keep many stars while being effectively outdated. A better system, though certainly not the best, would be to look at how much "life" the issues have: how often they are opened, closed (not automatically), and how fast maintainers respond. My project has 200 stars, and I struggle like crazy to keep it updated regularly with more than simple version bumps.
Topfi
I don't know what is more, for lack of a better word, pathetic: buying stars/upvotes/platform equivalents, or thinking of oneself as a serious investor while using something like that as a metric to guide your decision-making. I'd give Microsoft and the GitHub team a lot of credit if they went on a major ban/star-removal wave across affected repos, akin to how Valve occasionally does a major sweep across CS2 banning verified cheaters.
Lapel2742
I do not look at the stars. I look at the list of contributors, their activity, and the bug reports / issues.
apples_oranges
I look at the stars when choosing dependencies; it's a first filter for sure. Good reminder that everything gets gamed given the incentives.
lkm0
We're this close to rediscovering pagerank
AKSF_Ackermann
So, if the star-to-fork ratio is the new signal, time to make an extra fake-star tier where the bot forks the repo, generates a commit with the cheapest LLM available, and pushes it to GitHub, right?
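The star-to-fork ratio signal mentioned above can be sketched as a simple heuristic. This is my own toy illustration, not anything from the article; the 40:1 threshold is an invented assumption, not a researched value:

```python
# Toy heuristic (illustration only): flag repos whose star-to-fork ratio
# is far above what organically popular projects typically show.
# The 40:1 threshold below is an assumption for demonstration purposes.

def star_fork_ratio(stars: int, forks: int) -> float:
    """Return stars per fork; treat zero forks as one to avoid division by zero."""
    return stars / max(forks, 1)

def looks_inflated(stars: int, forks: int, threshold: float = 40.0) -> bool:
    """Heuristic: many stars but almost no forks can hint at bought stars."""
    return stars >= 500 and star_fork_ratio(stars, forks) > threshold

print(looks_inflated(3000, 10))   # 300 stars per fork: suspicious
print(looks_inflated(3000, 400))  # 7.5 stars per fork: plausible
```

As the comment above points out, the heuristic is trivially defeated once bots start forking too, which is exactly the cat-and-mouse dynamic being described.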
anant-singhal
Seen this happen first-hand with mid-to-large open-source projects that sometimes "sponsor" hackathons, literally setting "star the repo" as a task for eligibility. The point is supposed to be getting people to actually try your product. If they like it, they star it. Simple. Forcing the action just inflates the numbers and strips them of any meaning. Gaming stars so you can showcase them as a positive signal for the product is just SHIT.
elashri
I usually use stars as a bookmark list for things to visit later (which I rarely do). I should probably stop doing that and use my self-hosted "Karkeep" instance for GitHub projects as well.
nryoo
The real metric is: does it solve my problem, and is the maintainer still responding to issues? Everything else is just noise.
aledevv
> VCs explicitly use stars as sourcing signals

In my opinion, nothing could be more wrong. GitHub's own ratings are easily manipulated and don't necessarily measure the quality of a project, only its popularity. The problem is that popularity is rarely directly proportional to quality. I'm building a product myself, and I'm seeing how much more distribution and communication matter than the development itself. Unfortunately, a project's popularity is often directly proportional to the communication "built" around it and inversely proportional to its actual quality. That isn't always the case, but it often is. Moreover, adopting effective and objective project-evaluation tools is quite expensive for VCs.
nottorp
Why is zero public repos a criterion? I paid GitHub for years to keep my repos private... But then I don't participate in the stars "economy" anyway: I don't star and I don't count stars, so I'm probably irrelevant for this study.
Oras
Would be nice to see the ratio of OpenClaw stars
spocchio
I think the reason is that investors are not IT experts and don't know better metrics to evaluate. I guess it's like fake followers on other social media platforms. To me, it just reflects a behaviour that is typical of humans: in many situations, we make decisions in fields we don't understand, so we evaluate things poorly.
m00dy
same here on HN as well
socketcluster
My project https://github.com/socketCluster/socketcluster has been accumulating stars slowly but steadily for about 13 years. It now has over 6k stars, but that doesn't seem to mean much as a metric nowadays. It sucks to have put in the effort and then see the project get lost in a sea of scams, with people doubting its authenticity. It does feel like everything is a scam nowadays, though. All the numbers seem fake, whether it's the number of users, likes, stars, the amount of money, retweets, shares issued, market cap... Maybe it's time we focused on qualitative metrics instead?
bjourne
> The CMU researchers recommended GitHub adopt a weighted popularity metric based on network centrality rather than raw star counts. A change that would structurally undermine the fake star economy. GitHub has not implemented it.

> As one commenter put it: "You can fake a star count, but you can't fake a bug fix that saves someone's weekend."

I'm curious what the research says here: can you actually structurally undermine the gamification of social-influence scores? And I'm pretty sure fake bug fixes are almost trivial to generate with LLMs.
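To make the "network centrality" idea concrete, here is a toy sketch of what centrality-weighted stars could look like. This is my own assumption of the general approach, not the CMU researchers' actual algorithm: weight each starring account by a PageRank-style score over a follower graph, so isolated throwaway accounts contribute almost nothing.

```python
# Toy illustration (assumed design, not the CMU proposal): weight each
# star by the PageRank of the account that gave it, computed over a
# follower graph. Bot accounts with no follower ties get tiny weight.

def pagerank(graph: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    """Plain power-iteration PageRank over an adjacency-list follower graph."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in graph.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank evenly over all nodes
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

def weighted_stars(stargazers: list[str], rank: dict[str, float]) -> float:
    """Sum the centrality score of every account that starred the repo."""
    return sum(rank.get(u, 0.0) for u in stargazers)

# Tiny invented follower graph: the bots follow nobody and have no followers.
follows = {
    "alice": ["bob"], "bob": ["alice", "carol"], "carol": ["alice"],
    "bot1": [], "bot2": [],
}
r = pagerank(follows)
# Stars from well-connected accounts count far more than stars from bots.
print(weighted_stars(["alice", "bob", "carol"], r))
print(weighted_stars(["bot1", "bot2"], r))
```

As the "rediscovering PageRank" quip upthread suggests, this is essentially the same trick search engines used against link farms, with the same arms-race caveat: bots can start following each other too.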
ozgrakkurt
> Jordan Segall, Partner at Redpoint Ventures, published an analysis of 80 developer tool companies showing that the median GitHub star count at seed financing was 2,850 and at Series A was 4,980. He confirmed: "Many VCs write internal scraping programs to identify fast growing github projects for sourcing, and the most common metric they look toward is stars."

> Runa Capital publishes the ROSS (Runa Open Source Startup) Index quarterly, ranking the 20 fastest-growing open-source startups by GitHub star growth rate. Per TechCrunch, 68% of ROSS Index startups that attracted investment did so at seed stage, with $169 million raised across tracked rounds. GitHub itself, through its GitHub Fund partnership with M12 (Microsoft's VC arm), commits $10 million annually to invest in 8-10 open-source companies at pre-seed/seed stages based partly on platform traction.

This all smells like BS. If you are going to do an analysis, you need to do some sound maths on the amount of investment a project gets in relation to its GitHub stars. All this says is that stars are considered in some way, which is very far from saying that you buy fake stars and then get investment. This smells like bait for hating on people who get investment.
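The ROSS-style ranking quoted above (fastest-growing projects by star growth rate) is mechanically simple. Here is a hypothetical sketch with made-up repo names and counts; the actual index methodology is not reproduced here:

```python
# Hypothetical sketch of ranking repos by star growth rate between two
# snapshots, in the spirit of the growth-rate index described above.
# All repo names and star counts below are invented for illustration.

def growth_rate(before: int, after: int) -> float:
    """Relative star growth over the period; a zero baseline counts as one."""
    return (after - before) / max(before, 1)

snapshots = {
    # repo: (stars at start of quarter, stars at end of quarter)
    "org/infra-tool": (1200, 1500),
    "org/ai-agent":   (300, 2700),
    "org/old-lib":    (9000, 9100),
}

ranked = sorted(snapshots, key=lambda r: growth_rate(*snapshots[r]), reverse=True)
print(ranked)  # fastest relative growth first
```

Note how the ranking rewards relative growth from a small base, which is exactly what makes it cheap to game: a few thousand bought stars on a young repo move the needle far more than they would on an established one.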
fontain
https://x.com/garrytan/status/2045404377226285538

“gstack is not a hypothetical. It’s a product with real users:
75,000+ GitHub stars in 5 weeks
14,965 unique installations (opt-in telemetry, so real number is at least 2x higher)
305,309 skill invocations recorded since January 2026
~7,000 weekly active users at peak”

GitHub stars are a meaningless metric, but I don’t think a high star count necessarily indicates bought stars. I don’t think Garry is buying stars for his project. People star things because they want to be seen as part of the in-crowd that knows about this magical futuristic technology, not because they intend to use it. Some companies are buying stars, sure, but this article's methodology for identifying them is bad.
ernst_klim
I think people expect the star system to be a cheap proxy for "this is a reliable piece of software with good quality and a lot of eyes on it". As a proxy it fails completely: astroturfing aside, stars don't guarantee popularity (and I bet the correlation is very weak; a lot of very fundamental system libraries have a small number of stars). Stars don't guarantee quality either. And given that you can read the code, stars seem to be a completely pointless proxy. I'm teaching myself to skip the stars, skim the code, and evaluate the quality of both architecture and implementation. Quite a few times I've found I prefer a less-"starry" alternative after looking directly at the repo content.