Uncovering the Magic Number: Why ChatGPT Consistently Chooses 73
Analyzed: Apr 17, 2026 23:04 · Published: Apr 17, 2026 19:57 · 1 min read · r/OpenAIAnalysis
It is fascinating to watch a Large Language Model (LLM) like ChatGPT fall into predictable patterns when asked to perform a seemingly random task. The consistent selection of 73 illustrates a basic property of generative AI: the model does not roll dice, it samples the next token from a probability distribution shaped by its training data, so numbers that were favored in that data end up favored at inference time. Studying these numerical biases gives researchers and enthusiasts a window into the hidden mechanics of model inference and token prediction.
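To see how a modest preference in the learned distribution can turn into a near-certain answer, here is a minimal, self-contained sketch. The logit values and the size of the bias toward 73 are assumptions for illustration, not measurements of ChatGPT:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the number tokens 0..100: every number gets a
# baseline score, but 73 is slightly favored (assumed bias; the real
# logit gap inside ChatGPT is unknown).
logits = [1.0] * 101
logits[73] = 4.0

for temp in (1.0, 0.7, 0.2):
    probs = softmax(logits, temperature=temp)
    samples = random.choices(range(101), weights=probs, k=10_000)
    share = samples.count(73) / len(samples)
    print(f"temperature={temp}: P(73) = {probs[73]:.3f}, sampled share = {share:.3f}")
```

At temperature 1.0 the favored number wins only a plurality; at low temperature it wins almost every time, which matches the "ALWAYS sends 73" behavior users report.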
Key Takeaways
- Generative AI relies on token probabilities rather than true randomness, so specific numbers such as 73 can be systematically favored by the training data.
- Users reproduced the pattern across multiple independent devices, confirming that the bias lives in the model rather than in any single session (a reproduction sketch follows this list).
- These informal experiments offer real insight into how Large Language Models turn prompts into tokens during inference.
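Reproducing the experiment takes only a few lines. The following is a sketch, assuming the official `openai` Python client (v1.x) and an `OPENAI_API_KEY` in the environment; the model name is a placeholder, not necessarily what the original posters used:

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
answers = Counter()

for _ in range(20):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": "Pick a number from 0 to 100. Reply with the number only.",
        }],
    )
    answers[resp.choices[0].message.content.strip()] += 1

# Expect one number (reportedly 73) to dominate the tally.
print(answers.most_common(5))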
Reference / Citation
Original post: "Everytime I ask ChatGPT to 'pick number from 0 to 100' it ALWAYS sends 73"