Firedancer's plan to uncap Solana blocks after Alpenglow
Jump Crypto’s Firedancer team is pushing SIMD-0370, a proposal to drop Solana’s fixed per-block compute ceiling once Alpenglow lands. Here is what changes, why it could cut congestion and fees, and the tradeoffs validators and builders should prepare for.

Breaking: Solana’s speed limit is on the chopping block
Solana’s next gear clicked into view this week. Jump Crypto’s Firedancer team published a governance proposal to remove Solana’s per-block compute unit ceiling after the Alpenglow consensus upgrade goes live. In plain English, the protocol would stop enforcing a fixed block size and instead let leaders pack as many transactions as the network can actually handle. The draft, filed as SIMD-0370 on GitHub, argues that Alpenglow’s new skip vote mechanism makes static caps redundant and that market incentives will push capacity higher over time.
For traders who lived through Solana’s congestion spikes, creators who saw mints stall, and payment apps trying to guarantee instant checkout, this is big. If adopted, block space would expand and contract with real hardware and software performance rather than an arbitrary number. Fees should become less spiky. And builders could aim for designs that assume much higher consistent throughput, especially as onchain checkout finally feels real.
First, a quick primer: compute units and the current ceiling
Every Solana transaction comes with a compute budget measured in compute units. Think of compute units as the time slices a program needs on a validator’s CPU. Today, the protocol adds up the compute for all transactions in a block and enforces a per-block ceiling. In the past year, that ceiling has sat around 60 million compute units, with proposals to lift it to 100 million. While these limits helped tame worst-case blocks, they also created a hard cap that ignored how much faster the average validator has become.
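The ceiling is easy to picture as a greedy packing rule. Here is a minimal Python sketch of how a fixed per-block compute budget constrains what a leader can include; the function names are invented and the limits simply mirror the numbers discussed above, not real client code:

```python
# Illustrative sketch (not validator code): a fixed per-block compute
# ceiling acts as a greedy packing limit on the leader.

BLOCK_CU_LIMIT = 60_000_000   # per-block ceiling, per the article
TX_CU_LIMIT = 1_400_000       # per-transaction compute cap (assumed)

def pack_block(pending_tx_cus):
    """Greedily pack transactions until the block ceiling is hit."""
    block, used = [], 0
    for tx_cu in pending_tx_cus:
        if tx_cu > TX_CU_LIMIT:
            continue                      # over the per-tx cap: rejected
        if used + tx_cu > BLOCK_CU_LIMIT:
            break                         # ceiling reached; rest waits
        block.append(tx_cu)
        used += tx_cu
    return block, used

txs = [200_000] * 400             # a burst of similar transactions
block, used = pack_block(txs)
print(len(block), used)           # 300 txs fit; 100 spill to later slots
```

The point of SIMD-0370 is to delete the `BLOCK_CU_LIMIT` check in this picture, leaving only the per-transaction cap and the timing-based safety valve described below.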
Under Alpenglow, the ceiling becomes optional because the network gains a new way to protect itself from blocks that are too heavy to process in time.
How Alpenglow changes the rules
Alpenglow rewires Solana’s consensus and networking. Two key ideas matter for this story:
- Vote and skip certificates. Validators vote off-chain on each block. A block finalizes along one of two paths: a fast path if roughly 80 percent of stake signs in the first round, or a two-round path if at least 60 percent signs in each of two rounds. If a validator cannot execute a block within the slot time, it sends a SkipVote. When about 60 percent of stake sends SkipVotes for that slot, the chain issues a skip certificate and moves on.
- Faster propagation. Alpenglow’s Rotor streamlines today’s Turbine-style block propagation into a single layer of stake-weighted relays, reducing the time it takes to get data to everyone. The goal is sub-second finality at internet latency.
Put together, Alpenglow says the chain should not stall because some validators ran out of time. Instead, it either finalizes quickly or skips.
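The two finalization paths and the skip path can be sketched as a small decision function. The thresholds mirror the ones described above; the structure and names are illustrative, not client code:

```python
# Toy model of Alpenglow's slot outcomes, using the stake thresholds
# described above. Illustrative only.

FAST_FINALIZE = 0.80   # one-round fast path
NOTARIZE = 0.60        # per-round threshold on the two-round path
SKIP = 0.60            # stake share of SkipVotes that skips the slot

def slot_outcome(finalize_stake_r1, finalize_stake_r2, skip_stake):
    if skip_stake >= SKIP:
        return "skipped"
    if finalize_stake_r1 >= FAST_FINALIZE:
        return "finalized-fast"
    if finalize_stake_r1 >= NOTARIZE and finalize_stake_r2 >= NOTARIZE:
        return "finalized-two-round"
    return "pending"

print(slot_outcome(0.85, 0.0, 0.05))   # finalized-fast
print(slot_outcome(0.65, 0.70, 0.10))  # finalized-two-round
print(slot_outcome(0.30, 0.0, 0.62))   # skipped
```

Either a block gathers enough stake to finalize, or enough stake declares it too heavy and the slot is skipped; there is no third state where everyone waits.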
What SIMD-0370 actually proposes
The Firedancer draft takes Alpenglow’s logic to its natural conclusion. If blocks that are too heavy simply get skipped, then a fixed compute ceiling is unnecessary. The proposal requests removing the block-level compute check in validator software and leaving two important guardrails intact:
- Validators still enforce timeouts. Each client should stop executing a block if it exceeds a local deadline. This is the trigger for SkipVotes.
- Other safety limits continue to exist. For example, maximum shred counts and network-level constraints remain to prevent pathological blocks. Per-transaction compute limits also remain unless the community explicitly proposes to change them later.
The net change is conceptual: leaders would size blocks to what the network can absorb in time, rather than to an arbitrary cap. The protocol’s safety comes from the skip certificates and from the same voting thresholds that Alpenglow introduces.
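The guardrail that remains, a local execution deadline that turns into a SkipVote, might look roughly like this in Python. All names and the budget value are invented for illustration:

```python
# Sketch of the guardrail SIMD-0370 keeps: no block-level CU check, but a
# local wall-clock deadline that triggers a SkipVote. Names are invented.
import time

SLOT_BUDGET_S = 0.4   # hypothetical local execution deadline, in seconds

def replay_block(txs, execute, budget=SLOT_BUDGET_S):
    """Execute transactions; abort and vote to skip if over budget."""
    deadline = time.monotonic() + budget
    for tx in txs:
        if time.monotonic() > deadline:
            return "skip-vote"    # block too heavy for this validator
        execute(tx)
    return "vote"                 # executed in time: vote for the block
```

Note that the limit is now local and empirical: a faster validator keeps executing where a slower one gives up, and the network aggregates those individual judgments through the skip threshold.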
Why this could unlock the next wave of throughput
A fixed ceiling is like a city that never raises the speed limit even after it widens the highway and repaves the road. Removing the ceiling lets leaders use all the lanes the hardware and client software can safely support.
Here is what that could mean on the ground:
- DeFi. Liquidations, oracle updates, and orderbook cranks often bunch into the same slots during volatility. Bigger blocks reduce the odds that these bursts overflow capacity. That means fewer failed liquidations, tighter spreads on perps and spot order books, and more predictable fees when it matters most.
- Payments. A checkout flow that targets sub-second confirmation becomes easier to guarantee if leaders can pack larger micro-bursts during peak minutes. The experience starts to feel like tapping a contactless card, and it lines up with our coverage of how onchain checkout finally feels real.
- Gaming and real-time apps. On-chain tick updates, matchmaking, or item mints during live events frequently arrive in waves. Bigger blocks absorb the burst, reducing the tail of slow confirmations.
Basic economics helps here. When the supply of block space goes up to meet demand, the marginal price per unit of compute tends to fall. That does not mean fees crash across the board; it means the worst spikes compress because the system can soak up more transactions in the slots where demand peaks.
Skip votes and safety, explained simply
Skip votes sound scary until you model them. Imagine a leader tries to produce a heavy block. Each validator starts executing immediately. Validators that see they will miss their deadline send a SkipVote. If enough of the network is struggling, the SkipVotes cross the threshold and the slot is formally skipped. The chain keeps moving without waiting for stragglers.
This design has two safety benefits:
- It prefers progress over stalls. A few slow or overloaded validators cannot hold everyone else hostage.
- It pushes incentives in the right direction. Leaders that overstuff a block learn very quickly where the network’s limits are because skipped slots hurt their rewards. Followers that fall behind also have a cost in missed voting rewards, which encourages upgrades.
The risks are real and should be debated in the open. But the safety model is not hand-wavey; it relies on explicit voting thresholds and certificates, not vibes.
The tradeoffs: what gets harder
Removing a global ceiling does not come for free. Here are the main pressures it creates, along with how the ecosystem can respond.
- Hardware arms race and centralization pressure. If the most profitable strategy becomes buying ever faster hardware, small validators risk being pushed out. Mitigations include maintaining per-transaction limits, keeping maximum block data size caps, and designing rewards so that exotic hardware has diminishing returns. The community can also consider shorter epochs to adjust leader schedules faster as performance changes.
- Bandwidth and propagation. Even if CPUs can execute bigger blocks, the network still needs to ship those shreds to everyone. Rotor helps, but relay placement and bandwidth upgrades matter. Operators will need to monitor end-to-end propagation times and not just replay times.
- Geographic advantage. Proximity to leaders can matter because it allows earlier start on execution. This exists today and does not disappear with a fixed ceiling. Transparency around leader schedules and relay topology can reduce unfair advantage.
- Catch-up and snapshots. New or restarted validators must be able to catch up. As block sizes increase, snapshot generation, state download, and replay performance must stay comfortably ahead of live block production. Client teams are already cutting replay times, but operators should rehearse failure recovery under heavier loads.
None of these issues is a dealbreaker, but none can be hand-waved away. SIMD-0370’s own discussion thread raises several of them directly. That is healthy. The right answer is careful measurement on testnets and phased rollouts.
Why this could slash congestion and fees
Congestion on Solana often shows up as a queue of transactions that cannot all fit into a string of upcoming blocks. Local fee markets and priority fees help rank them, but when the queue grows faster than the rate of processing, users see retries and rising tips. Larger blocks attack the root cause by increasing the service rate in the busy moments.
There are still hard limits. Leaders must leave enough slack so that a supermajority can execute within the slot time. But a dynamic, market-sized block lets Solana use more of the true capacity that modern hardware and optimized clients deliver. Over time, that should flatten the tallest fee spikes and reduce the number of user-visible failures for high-throughput protocols.
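The service-rate argument can be made concrete with a back-of-envelope queue simulation. The demand and capacity numbers below are invented; the point is the qualitative behavior, not the magnitudes:

```python
# Back-of-envelope queueing model of the claim above: during bursty
# demand, raising per-slot capacity (bigger blocks) shrinks the backlog
# that users experience as retries and rising tips.

def simulate(arrivals_per_slot, capacity_per_slot, slots=20):
    """Return the worst post-slot backlog over a run of slots."""
    backlog, worst = 0, 0
    for s in range(slots):
        backlog += arrivals_per_slot[s % len(arrivals_per_slot)]
        backlog = max(0, backlog - capacity_per_slot)
        worst = max(worst, backlog)
    return worst

# Bursty demand: quiet slots punctuated by 3,000-transaction spikes.
demand = [800, 800, 3000, 800, 3000]

print(simulate(demand, capacity_per_slot=1500))  # capped: backlog compounds
print(simulate(demand, capacity_per_slot=2200))  # bigger blocks: spikes absorbed
```

With the smaller capacity, average demand exceeds the service rate and the queue grows without bound; with the larger one, each spike is cleared within a couple of slots. That gap is the congestion users feel.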
Timeline and the signals to watch
Here is how this likely plays out over the next few quarters, in order of dependency rather than strict dates:
- Alpenglow rollout. The community already approved the Alpenglow consensus change via governance, and client teams are working toward testnets in late 2025 with mainnet activation targeted in 2026. The key milestones to watch are public testnet gates for Rotor and the off-chain voting path, plus hard data on notarization, finalization, and skip rates under load.
- SIMD-0370 discussion and scoping. The current status is review. Expect clarifications on which limits remain, such as per-transaction compute caps and maximum block data size, and on how timeout aborts are implemented across clients.
- Client readiness across implementations. Agave and Firedancer need to implement the removal cleanly. For Agave, the change centers on eliminating the replay-time block cost check and ensuring robust execution aborts on timeout. Firedancer will make the equivalent change in its own code paths. Before mainnet, you want to see parity in behavior across implementations to avoid subtle consensus splits.
- Operator testing. Large validators and staking providers should publish replay and propagation benchmarks from devnets and testnets. Look for distributions, not just medians: p90 and p99 execution and data transport times tell you whether the long tail can keep up.
- Governance sequencing. Once Alpenglow’s core pieces are proven on testnet, SIMD-0370 can move to a formal vote. Expect an opt-in phase or activation epoch planning to give operators time to budget hardware and bandwidth upgrades.
How this contrasts with Ethereum’s path
Ethereum’s base layer is scaling data for rollups, not execution throughput for the main chain. Pectra shipped in 2025 with a set of improvements that doubled blob capacity and tightened worst-case block size, while pointing toward peer data availability sampling in future upgrades. The Ethereum Foundation’s own summary outlines the near-term blob scaling and the longer-term plan for sampling and statelessness, a path designed to keep the bar for home stakers reasonable while rollups do the heavy compute work. See the foundation’s Pectra mainnet announcement for the recap, and compare it with our look at Polygon's stateless speed in Rio and Starknet's decentralized sequencers reorg.
Solana is taking a different bet. With Alpenglow and proposals like SIMD-0370, Solana is trying to turn hardware progress and client engineering into immediate base-layer throughput. Both visions are coherent. One routes most computation to layer twos and scales data. The other keeps computation on the base chain and tries to make it fast enough for real-time apps. If SIMD-0370 passes, Solana is signaling an accelerationist curve into 2026, one where throughput tracks what the median validator can do in practice rather than what a static rule allows.
Practical guidance for builders and operators
Here are concrete steps teams can take now, lined up by role.
Validators and staking providers
- Measure your end-to-end performance. Do not just clock replay; instrument propagation, execution, and vote submission under synthetic heavy blocks. Track medians and p99s.
- Budget upgrades where they pay off. Faster disk and network uplinks often move the needle more than raw CPU. Plan staged upgrades that can be rolled back.
- Rehearse failure modes. Practice catching up from snapshots under load so you know your recovery envelope when block sizes grow.
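As a minimal example of the distributions-not-medians advice, here is a nearest-rank percentile helper over raw timing samples; the sample data is invented:

```python
# Minimal sketch of "distributions, not medians": compute p50/p90/p99
# from raw per-slot timing samples using the nearest-rank method.

def percentile(samples, p):
    """Nearest-rank percentile over a list of samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# e.g. replay times in milliseconds from a hypothetical testnet run
replay_ms = [210, 230, 250, 240, 260, 900, 255, 245, 238, 1200]

for p in (50, 90, 99):
    print(f"p{p}: {percentile(replay_ms, p)} ms")
```

In this toy sample the median looks healthy while the p90 and p99 are several times worse, which is exactly the long tail that decides whether a validator keeps up with larger blocks.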
DeFi protocols and market infrastructure
- Update transaction strategies. Calibrate compute budgets and priority fees for bursty conditions that include larger blocks. Design for more consistent inclusion rather than fighting for the front of the queue.
- Spread critical operations. For example, schedule oracle updates and crank calls across slots so they are less correlated. Larger blocks help, but even distribution still wins.
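One simple way to de-correlate scheduled work is to hash each task id to a stable slot offset instead of firing everything at the top of the same slot. A hedged sketch, with hypothetical task names and window size:

```python
# Sketch of the de-correlation idea above: hash each task id to a stable
# offset so recurring calls spread across a window of slots. Task names
# and the window size are invented.
import hashlib

SLOTS_PER_WINDOW = 10   # spread work across a 10-slot window (assumed)

def slot_offset(task_id: str) -> int:
    """Stable per-task offset derived from a hash of its id."""
    digest = hashlib.sha256(task_id.encode()).digest()
    return digest[0] % SLOTS_PER_WINDOW

tasks = ["oracle:SOL-USD", "oracle:BTC-USD", "crank:openbook", "crank:drift"]
schedule = {t: slot_offset(t) for t in tasks}
print(schedule)
```

Because the offset is deterministic per task, the spread survives restarts without any coordination between operators.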
Game studios and real-time apps
- Move more of the loop on-chain. If your tick budget was the bottleneck, revisit what can run on Solana when blocks can be bigger in peak windows.
- Build graceful degradation. Even with larger blocks, some slots will skip. Make sure clients handle skips without janky UX.
Wallets and relays
- Improve fee guidance. Teach fee estimators to read load conditions under Alpenglow. Inclusion probability under dynamic block sizes is a different curve than under a hard cap.
- Harden submission paths. Under bursty conditions, transaction gating and resubmission logic matters more. Test with larger, more variable blocks.
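A fee estimator under dynamic block sizes could start from empirical inclusion rates bucketed by priority fee. This is a deliberately simplified sketch with invented observations; a real estimator would also condition on current load:

```python
# Hedged sketch of the fee-guidance idea: estimate inclusion probability
# per fee bucket from recent (priority_fee, included?) observations.

def inclusion_curve(observations, buckets):
    """observations: list of (fee, included: bool). Returns rate per bucket."""
    curve = {}
    for lo, hi in buckets:
        hits = [inc for fee, inc in observations if lo <= fee < hi]
        curve[(lo, hi)] = sum(hits) / len(hits) if hits else None
    return curve

obs = [(100, False), (500, True), (800, True), (150, False),
       (600, True), (50, False), (700, False), (900, True)]
buckets = [(0, 300), (300, 1000)]
print(inclusion_curve(obs, buckets))
```

Under a hard cap, the curve is close to a step function around the marginal fee; under dynamic block sizes it should flatten, which is why estimators trained on today's data will need retuning.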
What to watch in the data once this ships
- Skip rate. How often do slots skip under realistic bursts, and is the rate trending down as operators upgrade hardware and clients?
- Inclusion latency. Median time to inclusion and the tail during spikes are the numbers that matter for users, not just raw transactions per second.
- Fee volatility. Are the worst spikes flattening during volatility events in DeFi and during hot mints in consumer apps?
- Client diversity. Are Agave and Firedancer both keeping pace under larger blocks, and are there meaningful differences in behavior that need tuning?
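The first metric on this list is straightforward to track as a rolling window over per-slot outcomes. A small sketch with invented data:

```python
# Rolling skip rate over the last N slots, from a stream of per-slot
# outcomes. The outcome labels and window size are invented.
from collections import deque

def rolling_skip_rate(outcomes, window=100):
    """Return the skip rate after each slot, over a sliding window."""
    recent = deque(maxlen=window)
    rates = []
    for o in outcomes:
        recent.append(1 if o == "skipped" else 0)
        rates.append(sum(recent) / len(recent))
    return rates

slots = ["finalized"] * 95 + ["skipped"] * 5
print(f"skip rate: {rolling_skip_rate(slots)[-1]:.1%}")
```

The interesting signal is the trend: a skip rate that decays after each operator upgrade cycle suggests the network is genuinely growing into larger blocks rather than thrashing.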
The bottom line
Alpenglow gives Solana a new safety valve in skip votes and a faster voting and propagation path. Firedancer’s SIMD-0370 leans into that, replacing a centrally chosen speed limit with a network that learns its limits from reality. If the ecosystem can keep centralization pressures in check and prove clean behavior across clients, the payoff is a chain that feels less congested, more predictable, and ready for the kinds of real-time apps that draw users without telling them they are on a blockchain at all. That is the acceleration curve Solana wants to ride into 2026.