Provably fair RNG — how do you actually verify a seed?

Rocky Mtn Rebecca

New Member
Joined
2024-05-18
Posts
67
Location
Boulder, CO

I've seen "provably fair" on probably 20 different operator landing pages now but every time I click into the explainer it's just a hand-wavy paragraph about hashing and "trust the math." Can someone walk me through what verifying a seed pair actually looks like, in concrete terms, for someone who has a stats background but is new to the crypto-game side?

Help me sanity-check my understanding: right now the variance is doing all the talking, and I'd like to know what the math is supposed to be doing first.

Provably Fair Fiona

Trusted Reviewer
Joined
2022-09-25
Posts
529
Location
Ottawa, ON

Rebecca, this is my favourite question. Walk-through:

Before each round, the operator generates a server seed and shows you only the SHA-256 hash of it. You provide (or auto-receive) a client seed. The round result is deterministically derived from: hash(server_seed + client_seed + nonce). After the round (or after you rotate seeds), the operator reveals the original server seed.

To verify: take the revealed server seed, hash it yourself with SHA-256, and confirm the result matches the hash you were shown beforehand. If it matches, the server couldn't have changed its seed mid-round. Then re-derive the round outcome from server_seed + client_seed + nonce and confirm it matches the result you got.
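The two checks above can be sketched in a few lines of Python. One hedged assumption: the outcome derivation shown here (HMAC-SHA256 keyed on the server seed over "client_seed:nonce", first 32 bits scaled to [0, 100)) is one common dice-style scheme, not a universal standard, so always check the exact algorithm your operator publishes.

```python
import hashlib
import hmac

def verify_commitment(revealed_server_seed: str, published_hash: str) -> bool:
    """Check 1: re-hash the revealed seed and compare with the pre-round commitment."""
    digest = hashlib.sha256(revealed_server_seed.encode()).hexdigest()
    return hmac.compare_digest(digest, published_hash)

def derive_roll(server_seed: str, client_seed: str, nonce: int) -> float:
    """Check 2: re-derive the round outcome (dice-style number in [0, 100))."""
    msg = f"{client_seed}:{nonce}".encode()
    h = hmac.new(server_seed.encode(), msg, hashlib.sha256).hexdigest()
    # Take the first 8 hex chars (32 bits of the digest) and scale to [0, 100).
    return int(h[:8], 16) / 2**32 * 100

# Example values (placeholders, not from any real operator):
server_seed = "example-server-seed"
commitment = hashlib.sha256(server_seed.encode()).hexdigest()

assert verify_commitment(server_seed, commitment)   # seed matches commitment
roll = derive_roll(server_seed, "my-client-seed", 0)  # compare against the result you saw
```

The nonce increments per round under one seed pair, so you can re-derive an entire session from a single revealed server seed.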

Most operators publish a verifier widget on their site. The trustless version is to run the calculation in a local tool (there are good open-source ones). Verify, don't trust.

Blockchain Bruno

Chain Watcher
Joined
2022-05-30
Posts
673
Location
Winnipeg, MB

Fiona nailed the mechanics. One footgun worth knowing: "provably fair" only proves the operator couldn't change the seed after committing to the hash. It doesn't prove the seed was generated fairly to begin with. A malicious operator could pre-compute a million server seeds, hash all of them, and only commit hashes whose outcomes favor the house against the typical client-seed range.

This is why client_seed control matters. If you can set your own client seed to something genuinely unpredictable (a Bitcoin block hash from after the commitment, ideally), the operator can't have pre-selected for it. On-chain or nothing.
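A minimal sketch of that idea, assuming you fetch the block hash yourself from a block mined after the operator's commitment was published (the function name and the local-entropy mix are my own illustration, not any operator's API):

```python
import secrets

def make_client_seed(post_commitment_block_hash: str) -> str:
    """Build a client seed the operator could not have pre-selected against.

    The block hash is unpredictable at commitment time; mixing in local
    randomness additionally guards against anyone steering which block
    hash you end up using.
    """
    local_entropy = secrets.token_hex(16)
    return f"{post_commitment_block_hash}:{local_entropy}"

seed = make_client_seed("00000000000000000001a2b3...")  # placeholder hash
```

The key ordering constraint: the block must be mined after you see the server-seed hash, otherwise the operator could have grinded seeds against it.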

Vault Analyst

Senior Member
Joined
2022-03-14
Posts
847
Location
Toronto, ON

Adding the statistical-significance angle, since Rebecca mentioned a stats background. Even if every individual round is provably fair, a small sample of rounds (say n=200) can deviate substantially from the published RTP purely by chance. Don't infer fairness from session-level variance. Infer it from: (a) hash verification of every round, (b) long-run cross-session frequency of outcomes against the published probability table, (c) seed rotation freshness.

For dice and crash specifically I'd want n > 5000 personal rounds before I trusted my variance-vs-expected as informative. Run the numbers before you run the deposit.
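For check (b), one straightforward tool is a chi-square goodness-of-fit test of observed outcome counts against the published probability table. A sketch with made-up placeholder numbers (not real operator data):

```python
def chi_square_stat(observed: list[int], expected_probs: list[float]) -> float:
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E per outcome."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed, expected_probs))

# Example: a coin-flip-style game published as 49.5% win / 50.5% loss.
observed = [2430, 2570]      # wins, losses over n = 5000 rounds (placeholder data)
published = [0.495, 0.505]

stat = chi_square_stat(observed, published)
# df = 1 here; the 5% critical value is ~3.841, so stat ≈ 1.62 is unremarkable.
suspicious = stat > 3.841
```

This is exactly why small n is uninformative: at n=200 the same observed win rate would produce an even smaller statistic, and large deviations clear the threshold only with long-run data.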

Brooklyn Benny

Veteran
Joined
2022-08-04
Posts
612
Location
Brooklyn, NY

The legal side that complements the math side: provably fair is not a substitute for a license. It proves the round was honest, it doesn't prove the operator will pay your withdrawal or honor your bonus terms. I've seen operators with bulletproof provably-fair implementations that withheld cashouts for "compliance review" for weeks.

So verify the seed AND verify the operator's payout history. Technically fair, practically broke is still broke.

Rocky Mtn Rebecca

New Member
Joined
2024-05-18
Posts
67
Location
Boulder, CO

This thread is exactly what I needed, thank you all. Going to spend a weekend running my own verifier on the next 500 rounds I play and post the comparison against the published probability table. Will tag everyone here when the data's clean, and ask for another sanity check then: same thread, different angle.

Dundas Danielle

Regular
Joined
2023-08-21
Posts
214
Location
Hamilton, ON

If you do that analysis, please post the spreadsheet. I genuinely want to see what a verified vs. published distribution comparison looks like at n=500. There's a real chance some of us learn something from your run.

The math is the math.