Bigger Blocks, Bigger Stakes: Solana’s Firedancer Gambit
Jump Crypto’s Firedancer proposes lifting Solana’s per‑block compute cap after Alpenglow, promising fewer failed trades, new MEV dynamics, and bigger bursts.
Breaking: a proposal to uncap Solana’s blocks
Firedancer, the team behind Solana’s alternative validator client at Jump Crypto, has submitted a developer proposal to remove Solana’s fixed per‑block compute limit once the network completes the Alpenglow upgrade. The pitch is simple and bold: stop capping every block at the same preset compute ceiling, let leaders produce blocks as large as the network can actually handle, and rely on the new consensus rules to skip any block that stragglers cannot execute in time. You can read the proposal, filed as SIMD‑0370, in the Solana Improvement Documents repository on GitHub. Read the SIMD‑0370 pull request.
In plain English, this is a call to let the hardware and software do the talking. Instead of a one‑size‑fits‑all block limit, the protocol would allow capacity to flex with validator performance. Firedancer argues that after Alpenglow, the network can safely tolerate oversized blocks because validators that cannot keep up will explicitly signal a skip, and the chain will move on.
Bottom line upfront: removing the static ceiling could turn congestion spikes into higher throughput, fewer failed trades, and more predictable user experience.
What a block cap is, and why it exists
On Solana, every transaction consumes compute units, a rough measure of how much work the network must do to execute it. Today there is a protocol parameter that caps the total compute units per block. That limit exists for two reasons. First, it prevents a leader from packing blocks so dense that propagation and execution blow past the 400 millisecond slot target. Second, it ensures that the median validator, not the most powerful one, sets the pace, which protects decentralization.
The cap also causes familiar pain. During a meme‑coin mint, a whale airdrop, or a hot nonfungible token launch, the network hits the ceiling. Leaders cannot fit more work into a block, even if their hardware could, so users face rising priority fees, retries, and failed trades. The static ceiling becomes a speed governor.
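To make the mechanics concrete, the cap can be thought of as a packing constraint on the leader. The sketch below is a toy Python model, not Solana's actual scheduler: the cap value, transaction costs, and the greedy fee‑priority ordering are all simplifying assumptions.

```python
# Toy model of a leader packing transactions into a block under a fixed
# compute-unit cap. The cap and transaction costs are illustrative numbers,
# not Solana's real parameters.

BLOCK_CU_CAP = 48_000_000  # hypothetical per-block compute-unit ceiling

def pack_block(pending, cap=BLOCK_CU_CAP):
    """Greedily include transactions (highest priority fee first) until the cap."""
    included, deferred, used = [], [], 0
    for tx in sorted(pending, key=lambda t: t["priority_fee"], reverse=True):
        if used + tx["cu"] <= cap:
            included.append(tx)
            used += tx["cu"]
        else:
            deferred.append(tx)  # bounced to a later slot, or dropped entirely
    return included, deferred, used
```

During a demand spike, the `deferred` list grows even when the leader's hardware could execute more work, which is exactly the "speed governor" effect described above.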
How Alpenglow changes the safety envelope
Alpenglow is Solana’s forthcoming overhaul of consensus and data flow. It replaces the existing vote transactions and heavy gossip with a direct vote engine, plus a redesigned propagation layer. Crucially for this debate, Alpenglow introduces an explicit skip signal for voters who cannot execute a proposed block within the slot time. That skip outcome is not a fault; it is part of the protocol’s normal path. The Firedancer proposal is timed for after Alpenglow precisely because this mechanism turns oversized blocks from a potential network‑wide stall into a single‑slot miss that the chain can tolerate.
If you want the canonical source, the Alpenglow specification was merged as SIMD‑0326. It describes the voting fast path, fallback, and the skip conditions that make dynamic block sizing thinkable. See the SIMD‑0326 pull request.
Think of Alpenglow as adding shock absorbers to the car. Hitting a pothole at speed no longer bends the axle; the suspension compresses and you keep rolling. Under that model, a large block is a pothole that some validators absorb by skipping, while the network continues forward.
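A minimal sketch of that skip logic, assuming a stake‑weighted supermajority threshold: the real vote rules, fallback paths, and thresholds live in SIMD‑0326, and every number below is invented for illustration.

```python
# Stylized Alpenglow-style skip model. Thresholds and throughput figures are
# illustrative assumptions, not values from the SIMD-0326 specification.

SLOT_MS = 400  # Solana's slot time target

def validator_vote(block_cu, cu_per_ms):
    """Vote to execute if this validator can finish inside the slot,
    otherwise signal an explicit skip."""
    return "execute" if block_cu / cu_per_ms <= SLOT_MS else "skip"

def block_outcome(block_cu, validators, quorum=2/3):
    """validators: list of (stake, cu_per_ms). The block lands if a
    stake-weighted supermajority can execute it in time; otherwise the
    slot is skipped and the chain moves on."""
    total = sum(stake for stake, _ in validators)
    executing = sum(stake for stake, speed in validators
                    if validator_vote(block_cu, speed) == "execute")
    return "finalized" if executing / total >= quorum else "skipped"
```

The point of the model is the failure mode: an oversized block costs one slot, not a stall, because slow validators vote "skip" instead of falling silent.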
Bigger blocks during bursts, fewer failed trades
What does uncapping mean in practice during the moments that matter? Consider three common spikes:
- Airdrops and token mints. If leaders can pack more transactions per block, the network’s burst capacity increases. Wallets that submit properly priced priority fees should see fewer rejections because the leader can include additional transactions rather than bouncing them into the next slot or discarding them.
- Decentralized exchange surges. During volatile price moves, a larger block lets more swaps clear in the same slot. That cuts the tail of failed or late trades, which currently show up as frustrated retries and slippage. Market makers can update quotes more frequently because the pipeline accepts more work per interval. For related design shifts on exchanges, see our Uniswap v4 hooks primer.
- On‑chain games and consumer apps. High‑fanout, low‑value interactions often get crowded out during congestion. Bigger blocks provide headroom for these micro‑transactions to ride alongside heavy DeFi flow instead of being starved.
The short version is simple. When demand arrives in sharp spikes rather than a smooth curve, bigger buckets catch more water.
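That intuition can be checked with a toy backlog model: demand arrives per slot, each slot clears up to the cap, and whatever is still queued when the burst window ends counts as failed work. The demand curve and cap values are hypothetical.

```python
# Toy burst model: a larger per-slot capacity drains a demand spike faster,
# leaving fewer transactions stranded. All numbers are illustrative.

def stranded_after_burst(demand_per_slot, cap):
    """Each slot clears up to `cap` transactions; unserved demand rolls over.
    Returns how much work is still stuck when the burst window ends."""
    backlog = 0
    for demand in demand_per_slot:
        backlog += demand
        backlog -= min(backlog, cap)
    return backlog
```

With a spiky demand curve, doubling the per‑slot capacity can take the stranded count to zero, even though average demand never changed; that is the "bigger buckets catch more water" claim in miniature.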
New MEV and fee dynamics
Uncapping blocks will not only smooth spikes; it will also alter incentives around fees and maximal extractable value.
- Priority fee markets could flatten. With more capacity in a slot, the fee required to get into the current block during normal conditions should decline, though the top of the block will still command a premium during bursts. Builders should expect a wider distribution of fees across the block rather than a sudden step function at the inclusion boundary.
- Bundle strategies evolve. Searchers and relays, especially those integrated with Jito‑powered infrastructure, may shift from ultra‑tight bundles to larger batch strategies, since leaders can admit more total compute without bumping into hard ceilings. That increases the appeal of batch auctions, time‑sliced liquidity updates, and multi‑transaction strategies that were previously brittle under strict caps.
- Backrun capacity expands. Larger blocks create more room for backruns and sandwich protection logic. Protocols will have to adapt slippage and cancellation policies accordingly, since the probability that an adversary finds room late in the block can rise even when overall throughput is improving. For cross‑ecosystem context on validator and protocol changes, see our Post‑Pectra Ethereum outlook.
The important takeaway for DeFi teams is to treat fee policy and transaction packaging as a first‑class product surface. Bigger blocks reward systems that bid intelligently, protect users from edge‑of‑block adversaries, and exploit batchable work.
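The flattening effect on the inclusion boundary can be sketched as a simple uniform‑price auction over block space. This is a stylized model, not Solana's actual fee mechanism: real inclusion depends on compute budgets and account contention, not a fixed slot count.

```python
# Stylized inclusion auction: the "clearing fee" is the fee of the marginal
# transaction that still fits. Bigger capacity pushes that marginal fee down.

def clearing_fee(bids, capacity):
    """bids: priority fees of competing transactions.
    capacity: how many transactions fit in the block.
    Returns the lowest fee that still gets included (0 if uncontested)."""
    ranked = sorted(bids, reverse=True)
    if len(ranked) <= capacity:
        return 0  # everyone fits; no inclusion premium
    return ranked[capacity - 1]  # fee of the last included transaction
```

Holding the bid distribution fixed, raising `capacity` walks the clearing fee down the bid curve until it hits zero, which is the step‑function‑to‑smooth‑curve shift described above.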
The validator hardware question, clear‑eyed
If blocks can scale with performance, do we invite a validator hardware arms race that squeezes out smaller operators? That risk is real and deserves an honest accounting.
- Economic pressure to upgrade. Under Alpenglow, a validator that regularly skips blocks loses rewards relative to peers who execute more often. Removing the compute ceiling would likely amplify that pressure by increasing the variance of block sizes. Operators who invest in better CPUs, faster networking, and tuned storage will capture more fees. Those who do not will skip more and earn less.
- Propagation limits still exist. Even if the compute limit is removed, other guardrails remain. The protocol enforces limits on the number and size of shreds, so leaders cannot arbitrarily bloat block data beyond what the network can disseminate inside a slot. In other words, bigger blocks are bounded by what can be broadcast and verified in time, not only by raw execution horsepower.
- Diversity through multiple clients. The rise of Firedancer matters here. A more efficient client reduces the hardware needed for a given level of performance and adds implementation diversity. That counterbalances centralization pressure by making high performance more accessible and by avoiding single‑client bugs.
A pragmatic frame is helpful. The network does not need every validator to keep up with the fastest leader in every slot; it needs enough stake to execute and vote quickly for safety and liveness. As long as propagation and verification constraints are respected and skip behavior is predictable, the marginal increase in hardware pressure could be an acceptable trade for a large capacity win.
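One way to reason about the economic pressure: given a distribution of block sizes and a validator's execution throughput, its expected skip rate, and therefore its relative reward loss, follows directly. The throughput and block‑size figures below are invented for illustration.

```python
# Toy skip-rate estimate: a validator skips any block it cannot execute
# inside the slot. Block sizes and throughputs here are made-up numbers.

SLOT_MS = 400

def expected_skip_rate(cu_per_ms, block_sizes, slot_ms=SLOT_MS):
    """Fraction of sampled blocks this validator cannot execute in time."""
    skips = sum(1 for cu in block_sizes if cu / cu_per_ms > slot_ms)
    return skips / len(block_sizes)
```

The model shows why variance matters: with a static cap every block is executable or none is, but with uncapped, variable block sizes, slower hardware skips the tail of large blocks and forfeits those rewards.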
A realistic path from talk to rollout
The road from an idea on GitHub to live mainnet traffic has several gates. Expect a sequence like this:
- Specification and review. The proposal will be refined in the Solana Improvement Documents process. The open discussion will stress test safety assumptions, especially around propagation limits, signature verification, and the interaction with Alpenglow’s skip rules.
- Safety experiments. Before any mainnet activation, engineers will simulate oversized blocks on devnet and testnet with mixed client sets, including Agave, Jito, and Firedancer variants. The focus will be on worst‑case slots that combine heavy compute, many writable accounts, and large data payloads. Operators should expect telemetry about block dissemination time, failed execution rates, skip rates, and leader fairness under varying hardware topologies.
- Client readiness and share. Firedancer adoption matters, but so does parity across clients. If one client is the only implementation that reliably keeps up with bigger blocks, the change will stall. A credible path requires both Firedancer and Rust clients to demonstrate stable performance at higher load.
- Governance and phased activation. After Alpenglow is live, removing the compute limit would likely be gated behind a feature flag and a governance vote or a clear social consensus among core teams and validators. An initial phase could raise limits slot‑by‑slot through leader hints or dynamic targets, then remove enforcement once telemetry shows safe margins.
- Post‑launch monitoring. If adopted, expect strict monitoring of skip rates, peer liveness, and block size distribution. The community can revert or adjust policy if the network trends toward unhealthy concentration or if smaller operators are being pushed out faster than anticipated.
Timelines will hinge on Alpenglow’s deployment and on client share. The most honest outlook is that uncapping is feasible after Alpenglow proves itself in production, not before.
Actions for builders, starting now
Bigger blocks only help if your software is ready to benefit. A checklist for teams:
- Wallets and consumer apps
- Surface priority fee guidance based on real time block occupancy and account contention, not static presets. Aim for adaptive bids that target inclusion within one or two slots during spikes.
- Show clear retry states. Under skip behavior, some submitted transactions will miss a slot even when the network is healthy. Present idempotent retries, prevent duplicate intents, and avoid user confusion.
- Preflight smarter. Use smaller compute budgets by default, reserve high budgets for paths that truly need them, and cache simulation results for common flows so users do not pay unnecessary fees.
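A hedged sketch of what adaptive fee bidding could look like, assuming the wallet can observe recent landed‑transaction fees and current block occupancy. The thresholds and the scaling rule are arbitrary illustrative choices, not a recommendation or any wallet's real logic.

```python
# Illustrative adaptive priority-fee heuristic. The 0.5 occupancy cutoff and
# the percentile scaling are invented parameters for demonstration only.

def adaptive_priority_fee(recent_fees, occupancy, floor=0, percentile=0.75):
    """Suggest a priority fee. Under light occupancy, bid the floor; as
    blocks fill, bid toward a high percentile of recently observed fees."""
    if not recent_fees or occupancy < 0.5:
        return floor
    ranked = sorted(recent_fees)
    # Fuller blocks push the target percentile higher (capped below 1.0).
    p = min(0.99, percentile * occupancy / 0.75)
    idx = min(len(ranked) - 1, int(p * len(ranked)))
    return max(floor, ranked[idx])
```

The design point is the shape, not the constants: bids should be a function of live occupancy and observed fees, replacing the static presets the bullet above warns against.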
- Exchanges and market makers
- Embrace batchable designs. Move from one‑by‑one swaps to micro‑batches per slot where possible, with matchers that can fill more orders if the block opens wider. This reduces price impact and failure rates when the leader has room.
- Tune quotes and slippage windows. If inclusion is more likely within the same slot, quotes can be tighter with shorter validity. During bursts, widen gracefully rather than failing outright.
- Integrate bundle flows thoughtfully. With more backrun capacity late in a block, protect users by simulating adversarial inserts. Consider batch auctions at the slot boundary for predictable ordering.
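The micro‑batch idea can be illustrated with a toy matcher that crosses one slot's worth of orders at midpoint prices. No real exchange engine works exactly like this; it is a sketch of batching at the slot boundary, not a production design.

```python
# Toy per-slot batch matcher: collect a slot's orders, then cross overlapping
# bids and asks at the midpoint. Purely illustrative.

def batch_match(buys, sells):
    """buys/sells: lists of (price, qty). Returns a list of (price, qty) fills
    produced by crossing the book once at the end of the slot."""
    buys = sorted(buys, reverse=True)   # best bid first
    sells = sorted(sells)               # best ask first
    fills, bi, si = [], 0, 0
    b_filled = s_filled = 0.0
    while bi < len(buys) and si < len(sells) and buys[bi][0] >= sells[si][0]:
        qty = min(buys[bi][1] - b_filled, sells[si][1] - s_filled)
        fills.append(((buys[bi][0] + sells[si][0]) / 2, qty))  # midpoint price
        b_filled += qty
        s_filled += qty
        if b_filled == buys[bi][1]:
            bi, b_filled = bi + 1, 0.0
        if s_filled == sells[si][1]:
            si, s_filled = si + 1, 0.0
    return fills
```

Because the whole batch lands as one unit of work per slot, a wider block admits a bigger batch without changing the matching logic, which is what makes this pattern attractive under dynamic capacity.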
- Protocol engineers
- Optimize account write patterns. Larger blocks do not eliminate write locks. Minimize hot account collisions to benefit from parallel execution when capacity expands.
- Instrument for skip awareness. Build metrics for slots where your transactions landed, skipped, or were delayed. Use this to adapt fee bidding and backoff heuristics.
- Test at higher compute envelopes. Run load tests that assume blocks can be substantially larger than today. Profile CPU, networking, and signature verification hotspots. For comparisons to modular stack rollouts, see U16a on the Superchain.
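A sketch of why hot writable accounts still bound throughput: a greedy scheduler that groups transactions with disjoint write sets into parallel waves. This mirrors the spirit of Solana's account‑lock parallelism but is not its actual scheduler, and the account names are made up.

```python
# Toy write-lock scheduler: transactions that write disjoint accounts can run
# in the same parallel wave; a shared writable account forces serialization.

def parallel_waves(txs):
    """txs: list of dicts with a 'writes' set of account keys.
    Returns waves of transactions that could execute concurrently."""
    waves = []
    for tx in txs:
        for wave in waves:
            if all(tx["writes"].isdisjoint(other["writes"]) for other in wave):
                wave.append(tx)
                break
        else:
            waves.append([tx])  # conflicts with every wave: start a new one
    return waves
```

The number of waves is a floor on execution time regardless of block size: a protocol whose transactions all write one hot account serializes into one wave per transaction, so spreading writes across accounts is what actually unlocks larger blocks.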
Signals for investors to watch
If Solana can safely lift per‑block ceilings after Alpenglow, three investment themes follow.
- Capacity driven fee revenue. Bigger blocks convert pent‑up demand into fees during spikes. Track network fee revenue per day, the share from priority tips, and the ratio of failed to successful transactions during high‑volatility windows. Rising throughput with steady or lower failure rates is the KPI that matters.
- App migration and new categories. High compute verticals like on‑chain order books, perps engines, parallel matching, restaking frameworks, and real time data services become more viable when block capacity expands. Watch for teams rebuilding latency‑sensitive systems natively on Solana rather than on appchains or specialized rollups.
- Validator economics and client share. Gains are more durable if multiple clients handle bigger blocks. Monitor the share of Firedancer versus Rust clients, the distribution of validator hardware, and stake decentralization. Healthy diversity reduces tail risks from single client bugs.
For portfolio positioning, prefer protocols that monetize throughput, not only asset prices. Builders that optimize user execution quality under congestion will capture flow as capacity expands.
The trade‑offs, without hand‑waving
Uncapping blocks is not a free lunch. There are concrete risks and mitigations to keep in view.
- Centralization pressure. Rewards will tilt toward well resourced operators. Mitigation comes from efficient clients that lower the hardware bar, from propagation limits that cap data size, and from social norms or economic programs that support smaller validators.
- Fairness and censorship. Larger blocks give leaders more latitude to shape ordering. Countermeasures include transparent bundle policies, public relay telemetry, and protocol‑level constraints on reordering for protected flows like liquidations.
- Hidden bottlenecks. Signature verification, account lock contention, and network bandwidth can all dominate before raw compute does. Load testing must target real‑world transaction mixes, not only synthetic transfer floods.
The right question is not whether bigger blocks are perfect; it is whether the net gain in capacity, resilience, and user experience outweighs the measured increase in validator demands. With Alpenglow’s skip semantics, that answer can be yes if the rollout is staged and data driven.
The bottom line
SIMD‑0370 reframes Solana’s scaling goal. Instead of asking how high a static block ceiling should be, it asks whether the ceiling is necessary at all once Alpenglow is in place. Firedancer is betting that dynamic capacity, enforced by what the network can actually process in a slot, will deliver smoother spikes, better execution, and more revenue for builders and validators. The governance path runs through specification, testing, and client readiness, and it will take discipline to ship without eroding decentralization. If Solana clears those gates, bigger blocks could become a feature, not a bug, and the chain would enter a new regime where performance is limited by engineering, not by a number.
When a network chooses to scale by removing constraints, it must trade certainty for opportunity. The opportunity here is tangible. More room per slot means fewer broken user moments during the exact minutes the world is watching. That is the kind of progress users notice and remember, and it is the standard Solana now has a shot to set.