Reverse-engineering the Pump.fun trending algorithm.
Pump.fun does not publish how its trending board ranks tokens. From hundreds of measured launches, we have a working model — four observable signals, a 60–120 second sample window, a steep decay function, and a non-linear weight on watchlist velocity. Here is everything we have learned.
Pump.fun's trending ranks four signals — recent volume velocity, distinct-buyer count, native chat density, and watchlist velocity — sampled on roughly a 60–120 second cadence with a steep decay. The biggest under-exploited signal is watchlist velocity, which appears to be weighted non-linearly. The biggest mistake is misaligning your launch pulse with the sample window — a session that runs across the gap between two samples scores worse than a smaller session that lands the spike inside one sample.
What we can observe (and what we can't)
Pump.fun's trending algorithm is not open source. What we can observe is the input/output behavior — given a token's measurable on-chain and in-app state at time t, what is the token's rank on the trending board at time t+Δ? With enough observations across enough launches, the function becomes inferable.
Our dataset is a few hundred launches we either ran ourselves, instrumented, or scraped from public surfaces, with per-minute snapshots of: trade volume, distinct buyer count, chat message rate, watchlist count, holder count, market cap, time elapsed since launch, and trending-board position.
What we cannot observe: the algorithm's exact weights, its decay constants, or any private signals it might use (e.g., Pump.fun-side anti-spam scoring, account-quality heuristics, etc.). The model below is a best fit to the observable input/output behavior, not a leak.
The four signals the board samples
Across the dataset, four signals consistently predict trending-board placement with high explanatory power. In rough order of weight:
- Recent volume velocity — the derivative of cumulative volume over the last sample window.
- Distinct-buyer count in the window — count of unique addresses that traded in the window.
- Native chat density — message rate in the Pump.fun chat for the token.
- Watchlist velocity — rate of distinct accounts adding the token to their watchlist.
A fifth signal — holder count growth — appears in the model but with weak independent weight. It is largely captured by the distinct-buyer count above.
Critically: the algorithm does not appear to weight cumulative lifetime stats (total volume, total holders). Everything is windowed. A token that did 500 SOL in the last 90 seconds outranks a token that did 5,000 SOL spread across the last hour.
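The windowed-versus-cumulative distinction can be sketched in a few lines. Everything here is illustrative — the function name, the tuple shape, and the 90-second window are assumptions for the sketch, not the measured parameters:

```python
def windowed_volume(trades, now, window_s=90):
    """Sum SOL volume of trades inside the last `window_s` seconds.

    `trades` is a list of (timestamp, sol_amount) pairs. Lifetime volume
    never enters the score under this model; only the windowed sum does.
    """
    return sum(amt for ts, amt in trades if now - window_s <= ts <= now)

# Token A: 500 SOL, all of it inside the last 90 seconds.
token_a = [(3600 - 9 * i, 50.0) for i in range(10)]
# Token B: 5,000 SOL smeared evenly across the last hour.
token_b = [(36 * i, 50.0) for i in range(100)]
```

Evaluated at `now = 3600`, token A's window holds all 500 SOL while token B's holds only the sliver of its 5,000 SOL that happens to fall in the last 90 seconds — which is the ranking inversion described above.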
Volume velocity: the obvious one
Volume velocity is the headline signal. It is also the cheapest to fake at low quality and therefore the easiest for the algorithm to discount when paired with other low-quality signals (small wallet pool, no chat, no watchlist activity).
What the data suggests:
- The algorithm uses rolling-window SOL volume, not trade count. A single 5 SOL whale buy moves the score more than fifty 0.1 SOL retail buys.
- Window length is approximately 60–90 seconds — longer than a single Solana slot, short enough to react to a sudden push.
- The signal is concave in volume — diminishing returns above a certain threshold, which we estimate at 80–120 SOL/min depending on the token's curve position.
Implication: dumping 1,000 SOL into a 60-second window does not get you 10× the score of a 100 SOL push. The signal saturates. The strategy is to land enough volume to clear the threshold, not to outspend it.
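A saturating exponential is one simple way to model the concavity. The functional form and the scale constant `k` (set inside the estimated 80–120 SOL/min band) are modeling assumptions, not the measured curve:

```python
import math

def volume_score(sol_per_min, k=100.0):
    """Concave (saturating) volume-velocity term, normalized to [0, 1).

    Assumed form: score = 1 - exp(-v / k). Near zero it is roughly
    linear in volume; above ~k SOL/min the marginal score collapses.
    """
    return 1.0 - math.exp(-sol_per_min / k)
```

Under this sketch, `volume_score(100)` is about 0.63 while `volume_score(1000)` is about 0.99995 — a 10× spend buys well under 2× the score, which is the saturation argument in numbers.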
Distinct-buyer count: the wallet-pool tell
Distinct-buyer count is the signal that quietly punishes shallow wallet pools. Two scenarios that produce identical volume:
- 50 distinct wallets, 20 trades each = 1,000 trades, 50 distinct buyers.
- 1,000 distinct wallets, 1 trade each = 1,000 trades, 1,000 distinct buyers.
The first scores meaningfully lower than the second despite identical volume. The model suggests distinct-buyer count enters the score with a near-linear weight, which means the wallet-pool depth of your bot is a direct multiplier on your trending score.
This is the structural reason 50-wallet bot pools are obsolete: at identical volume, a 50-wallet pool earns a twentieth of the distinct-buyer score of a 1,000-wallet pool under a near-linear weight.
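The two scenarios above reduce to a set-cardinality computation over the sample window. The tuple shape and window length are assumptions for illustration:

```python
def distinct_buyers(trades, now, window_s=60):
    """Count unique buyer addresses trading inside the window.

    `trades` is (timestamp, address, sol) tuples. Repeat trades from the
    same address add volume but not buyers — the shallow-pool tell.
    """
    return len({addr for ts, addr, _ in trades if now - window_s <= ts <= now})

# Identical trade counts and volume, very different scores on this signal:
shallow = [(30, f"w{i % 50}", 0.5) for i in range(1000)]  # 50 wallets x 20 trades
deep    = [(30, f"w{i}", 0.5) for i in range(1000)]       # 1,000 wallets x 1 trade
```

With a near-linear weight, the `deep` pool's term is 20× the `shallow` pool's at the same trade count.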
Chat density and language diversity
Native chat density is harder to measure cleanly because Pump.fun's chat is rate-limited and human chat patterns are noisy. What the data supports:
- Chat density is logarithmic. The first few messages per minute matter a lot. The 50th message per minute matters less than the 5th.
- Language diversity adds weight. A chat with messages in 6 languages outscores a chat with the same density in 1 language. Our best guess: the algorithm is normalizing for global-launch credibility.
- Repeated near-identical messages decay fast. A comment database of 50 strings degrades quickly because the algorithm penalizes repetition (or the rate-limiter does — we cannot disentangle).
This is why serious bots maintain comment databases with four-figure entry counts, across a dozen languages, written in native dialect rather than machine-translated.
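The logarithmic shape of the chat-density term can be sketched with `log1p`; the choice of logarithm is an illustrative concave form, not the fitted curve:

```python
import math

def chat_score(msgs_per_min):
    """Logarithmic chat-density term: the earliest messages dominate.

    log1p(x) = log(1 + x) is an assumed concave form; marginal value
    falls off sharply as the message rate climbs.
    """
    return math.log1p(msgs_per_min)
```

Under this form the marginal value of the 5th message per minute is roughly nine times that of the 50th, matching the "first few messages matter a lot" observation.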
Watchlist velocity: the under-exploited signal
Watchlist velocity is the signal most operators leave on the table. Adding a token to a Pump.fun watchlist is a single click, costs no SOL, and produces a clear "this account is interested" signal.
Our data suggests watchlist velocity enters the trending score with a strongly non-linear weight — steep at low values, flattening at higher ones. Going from zero watchlist adds to ten in a 60-second window moves the score significantly more than going from 100 to 110 in the same window. The algorithm appears to treat the early watchlist signal as discovery validation.
The strategy: distribute watchlist adds across as many distinct accounts as possible, timed to the early portion of the session when the score signal is hottest. Twenty distinct accounts adding the token in the first two minutes outranks two hundred adds spread across an hour.
This is the most under-exploited signal in the bot market. Most bots ignore it. The ones that move it produce visibly better trending placement at the same volume cost.
The sample window and the decay function
The trending board does not update continuously. Our best estimate, from observing rank changes against measured input changes:
- The board re-samples every 60–120 seconds, with the cadence tightening during peak Solana hours.
- Each sample uses a windowed view of the last ~60–90 seconds for volume velocity and ~30–60 seconds for watchlist and chat signals.
- The score decays roughly exponentially with a half-life on the order of 4–6 minutes. A token that scores 100 at sample t and then stops contributing scores ~50 at t+5 minutes, ~25 at t+10 minutes, and so on.
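The half-life behavior is ordinary exponential decay. A minimal sketch, assuming a 5-minute half-life (inside the observed 4–6 minute range):

```python
def decayed(score, minutes, half_life_min=5.0):
    """Exponential decay of a sample's score contribution.

    half_life_min = 5 is an assumption picked from the fitted
    4-6 minute band; score halves every half-life.
    """
    return score * 0.5 ** (minutes / half_life_min)
```

So `decayed(100, 5)` gives 50.0 and `decayed(100, 10)` gives 25.0, reproducing the trajectory in the bullet above.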
The combination of windowing and decay is the structural reason a smaller, sharper push beats a larger, smeared push. A 100 SOL pulse landing inside a single sample window contributes the full 100 to that sample's score. The same 100 SOL spread across 10 minutes is sampled 5–10 times at 10 SOL each — and each sample's contribution decays before the next push arrives.
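The pulse-versus-smear argument can be checked with a toy simulation: each sample adds its windowed volume to a running score that decays between samples. The one-minute cadence and five-minute half-life are assumptions taken from the observed ranges:

```python
def peak_score(volume_per_sample, sample_min=1.0, half_life_min=5.0):
    """Peak board score under a toy model: each sample contributes its
    windowed volume, and the running score decays between samples.

    Cadence and half-life are assumptions from the observed fit.
    """
    decay = 0.5 ** (sample_min / half_life_min)  # per-sample decay factor
    score = peak = 0.0
    for v in volume_per_sample:
        score = score * decay + v
        peak = max(peak, score)
    return peak

pulse   = [100.0] + [0.0] * 9  # 100 SOL landed inside one sample window
smeared = [10.0] * 10          # the same 100 SOL spread over ten samples
```

The pulse peaks at the full 100 while the smeared schedule peaks near 58 — decay eats each 10 SOL contribution before the next one lands, which is the structural advantage of the sharp push.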
Optimization: landing the pulse
The optimization implication is concrete. To maximize trending placement at a given session size:
- Identify the sample window cadence at launch time (it varies by hour — peak hours are tighter).
- Front-load 30–50% of session volume into the first 60–90 seconds, aligned to a sample window edge so the spike falls fully inside one sample.
- Pair the volume spike with synchronized chat and watchlist activity in the same window.
- Maintain a sustained baseline above the saturation threshold for the rest of the session — enough to keep the decayed score from falling below the trending cutoff, no more.
- If the curve graduates mid-session, re-launch the pulse pattern on the new venue immediately — Pump.fun's trending board treats post-graduation tokens as new entries with a fresh window.
This is the substantive content of what a curve preset like Burst is doing internally. The "spike at 8–15% of session" is not arbitrary — it is the alignment to the sample window pattern.
Five myths we have ruled out
- "More holders = trending." Holder count is barely independent of distinct-buyer count, which is what actually drives the signal.
- "Long-running launches outrank short ones." The opposite — windowing and decay punish duration without sustained pulse.
- "The board punishes obvious bots." No — it punishes obvious signals (low wallet diversity, repeated comment text, no watchlist). A bot that addresses these is indistinguishable from organic.
- "Curve position matters for trending." Indirectly, via the saturation threshold. Tokens late on the curve need less SOL to clear the threshold, but the threshold-clearing matters more than the SOL number.
- "More languages = always better." Diminishing returns — the data supports a 4–6 language sweet spot for global launches; beyond that, the signal saturates.
Caveats and what we don't know
This is a model, not a leak. Some things we cannot observe and have not modeled:
- Anti-spam scoring. Pump.fun almost certainly runs a private spam-score on accounts, and that score likely modulates how chat and watchlist signals from those accounts are weighted. We cannot observe this directly.
- Editorial overrides. The team operating Pump.fun can manually feature or de-feature tokens. We have observed apparent overrides in a small fraction of launches in our dataset.
- A/B test variance. Pump.fun has visibly run multiple trending UI variants in 2025–2026; some signals may be weighted differently in different cohorts.
Treat the model as the best fit to the observable surface, not as gospel. The signals it identifies are the right ones to optimize. The exact weights are approximate.