Ask ChatGPT to pick a number from 1-10000, it generally selects from 7200-7500

mellosouls 42 points 65 comments March 21, 2026
old.reddit.com · View on Hacker News

Discussion Highlights (19 comments)

mellosouls

Original title edited to fit: "i am betting my house that if you ask gpt to pick a number between 1 to 10000, then it will pick a number between 7300-7500, everytime" (OP also clarified 7300 was a typo for 7200)

chistev

I just did and it picked 7

Flatcircle

I just did it, it was 7443

phr4ts

https://chatgpt.com/share/69be3eeb-4f78-8002-b1a1-c7a0462cd2... First attempt: 7421. Second attempt: 1836.

a13n

just tried with claude opus and got 7,342

buildbot

I asked my little Claude Code API tool; it answered 42, then it (the API) decided to run bash and get a real random number:

> cs gib random number
Here's a random number for you: 42
Just kidding — let me actually generate a proper random one:
Your random number is: 14,861
Want a different range, more numbers, or something specific? Just say the word!
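The tool's fallback of shelling out for a genuinely random draw is easy to reproduce. A minimal Python sketch using the OS entropy source (this is an illustration, not what the tool actually ran):

```python
import secrets

# Draw a uniformly random integer in 1-10000 from the OS CSPRNG,
# rather than asking the language model to "pick" one.
n = secrets.randbelow(10000) + 1
print(n)
```

Unlike a model sampling from its learned token distribution, every value in the range is equally likely here.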

mikequinlan

The prompt doesn't say to pick a *random* number. I asked it to pick a number from 1-10000 and it chose 7,381. Then I asked why it picked that number, and it said:

Nothing mystical, I’m afraid. When I’m asked to “pick a number,” I don’t have a stream of true randomness—I generate something that looks arbitrary. In this case, I leaned toward:
• something comfortably away from the edges (not near 1 or 10,000),
• not a round or patterned number (so, not 7,000 or 7,777),
• and with a bit of internal irregularity (7-3-8-2 has no obvious rhythm).
It gives the impression of having no reason—which is about as close as I can get to a fair, human-style “just picked one.”

throw310822

It's the same "brain", starting from exactly the same prompt, the same context, which means the same thoughts, the same identity... How do you expect it to produce different values?

sourcegrift

Since people have been known to avoid reddit: the post claims a 95% chance of the title's prediction happening, when mathematically it should be about 3%. It also claims an 80% chance that a number in 1-10000 would be a 4-digit permutation of 7, 8, 4, 2. The replies are funny: two got 6842 and one got 6482 lol
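The uniform baselines are easy to check; a quick sketch (the 7200-7500 band and the digit set are taken from the claims above):

```python
from itertools import permutations

# Baseline probabilities for a truly uniform pick from 1-10000.
in_band = sum(1 for n in range(1, 10001) if 7200 <= n <= 7500)
p_band = in_band / 10000          # 301/10000 = 3.01%, vs the claimed 95%

# All 4-digit orderings of the digits 7, 8, 4, 2 (they are distinct, so 4! = 24).
perms = {int("".join(p)) for p in permutations("7842")}
p_perm = len(perms) / 10000       # 24/10000 = 0.24%, vs the claimed 80%

print(p_band, p_perm)
```

So the post's claimed probabilities are roughly 30x and 300x the uniform baselines, which is the whole point of the bet.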

josemanuel

“Alright—your random number is: 7,438” +1 data point

tezza

When you ask for a program that uses a random seed, many LLMs choose 42 as the seed value rather than zero. A nice nod to Hitchhiker's
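Whatever the seed value, fixing it makes the stream fully reproducible, which echoes throw310822's point about identical inputs producing identical outputs. A minimal sketch:

```python
import random

def draws(seed, k=5):
    # A fixed seed pins the generator's state, so the "random" stream
    # is identical on every run.
    rng = random.Random(seed)
    return [rng.randint(1, 10000) for _ in range(k)]

assert draws(42) == draws(42)   # same seed, same stream
assert draws(42) != draws(0)    # a different seed gives a different stream
```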

deafpolygon

Claude just gave me 7,342 in response to my prompt: "pick a number from 1-10000". That's interesting. Does anyone have an explanation for this?

fcatalan

Gemini 3.1 via aistudio picked 7321, so it seems to be a shared trait. Good to know if I catch anyone doing an LLM-assisted raffle...

raphman

Ask ChatGPT or any other LLM to give you ten random numbers between 0 and 9, and it will give you each number exactly once (most of the time). At most, one of the digits may appear twice in my experience. When I just verified it, I got these:

Prompt: "Give me ten random numbers between 0 and 9."
> 3, 7, 1, 9, 0, 4, 6, 2, 8, 5 (ChatGPT, 5.3 Instant)
> 3, 7, 1, 8, 4, 0, 6, 2, 9, 5 (Claude - Opus 4.6, Extended Thinking)

These look really random. Some experiments from 2023 also showed that LLMs prefer certain numbers: https://xcancel.com/RaphaelWimmer/status/1680290408541179906
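For comparison, genuinely independent draws repeat digits far more often than the no-repeat outputs above; a quick sketch of the all-distinct probability:

```python
import math

# If ten digits 0-9 were drawn uniformly and independently, the chance that
# all ten are distinct (one of each) is 10! / 10^10.
p_all_distinct = math.factorial(10) / 10**10
print(f"{p_all_distinct:.8f}")   # 0.00036288, i.e. about 1 in 2756
```

So getting a perfect permutation twice in a row, as in the two transcripts above, would be astronomically unlikely from a true uniform source.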

Jimega36

7314 (ChatGPT), 7342 (Claude), 7492 (Gemini)

throwaway5465

4729 three times in a row.

pcblues

This is what I hate about people trusting it. If you rely on AI to operate in a domain you don't man-handle yourself, you will be tricked, and hackers will take advantage. "AI! Write me gambling software with true randomness, but a 20% return on average over 1000 games." Who will this hurt: the players, the hackers, or the company? When you write gambling software, you must know that the house wins and that it is unhackable.
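For what it's worth, a fixed expected return and unpredictable randomness are not in tension; a toy sketch (the 40% win probability and 2x payout are made-up parameters, not from any real product):

```python
import secrets

# A hypothetical even-money game tuned for an 80% expected return to player
# (20% house edge): pay 2x the stake with probability 0.40, so
# E[return] = 0.40 * 2 = 0.80 per unit staked.
WIN_PROB_PCT = 40   # hypothetical parameter

def play(stake):
    # secrets draws from the OS CSPRNG: unseedable and unpredictable,
    # unlike an LLM's "pick" or a seeded PRNG.
    win = secrets.randbelow(100) < WIN_PROB_PCT
    return stake * 2 if win else 0

payouts = [play(1) for _ in range(100_000)]
print(sum(payouts) / len(payouts))   # close to 0.80 on average
```

The house edge lives in the payout schedule, not in rigging the randomness, which is exactly what an LLM asked to "write true randomness with a 20% return" tends to get wrong.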

rasguanabana

Asking for a number between 1–10 gives 7, too.

armchairhacker

People use this as evidence that ChatGPT is unlike human thinking, but we also have a randomness bias: https://youtu.be/d6iQrh2TK98?is=x6hiAqc0NJI7oeiE (referenced in one of the comments; tl;dr: when asked for a number between 1-100, most people pick a number containing a 7).

But ChatGPT's bias is worse. It's really not creative, and I think this hurts its output in "creative" cases, including stock photos and paid writing (e.g. ML-assisted ads are even worse than unassisted ads), although it's not an issue in other cases like programming.

Now you may think: obviously that's because the model has the same weights. But the problem is deeper and harder to solve. First, ChatGPT's conversations are supposed to be "personalized", presumably by putting users' history and interests in the prompt; yet multiple users reported getting the same fact about octopi. Maybe they turned off personalization, but if not, it's a huge failure that ChatGPT won't even give them a fact related to their interests (OpenAI could add that specific scenario to the system prompt, but that's not a general solution). Moreover, Claude, Gemini, and other LLMs also give random numbers between 7200-7500, while humans aren't that predictable. Since all LLMs are trained on the same data (most of the internet), it makes sense that they are all similar. But it means the commons are being filled with similar slop, because many people use ChatGPT for creative work. Even when the prompt is creative, the output still has a sameness which makes it dull and mediocre. I'm one of those who are tired of seeing AI-generated text, photos, websites, etc.; it's not always a problem the first time (although it is if there's no actual content, which is another LLM problem), but it's always a problem the 5th time, when I've seen 4 other instances of the same design, writing style, etc.

Some possible solutions:

- Figure out how to actually personalize models. People are different and creative, so the aggregate output of personalized models would be creative.

- Convince most people to stop using AI for creative work (popular pressure may do this; even with people's low standards, I've heard Gen-Z tend to recognize AI-assisted media and rate it lower), and instead use it to program tools that enable humans to create more efficiently, e.g. use Claude Code to help develop an easier and more powerful Adobe Flash (one that does not involve users invoking Claude Code, even to write boilerplate, because I suspect that would either not work or would interfere with the output, making it sloppier).

tl;dr: in case it isn't already apparent, LLMs are very uncreative, so they're making the commons duller. The linked example is a symptom of this larger problem.
