After Alpenglow, Firedancer aims to uncap Solana blocks
Jump Crypto’s Firedancer team has proposed SIMD-0370 to lift Solana’s per-block compute cap once Alpenglow ships. Dynamic blocks could raise throughput, fee capture, and real-world user performance, while sharpening debates about propagation limits and decentralization pressure.

Breaking: a proposal to remove Solana’s block compute cap
In late September 2025, engineers from Jump Crypto’s Firedancer team filed a new Solana Improvement Document that lands like a gauntlet on the table. The proposal, known as SIMD-0370, would remove Solana’s fixed per-block compute limit once Alpenglow is live. In plain terms, it suggests that leaders should be free to pack blocks as large as their hardware and the network can handle, and that validators unable to keep up should simply skip those blocks and keep the chain moving. You can read the primary text in the official SIMD-0370 pull request.
Why this matters now: Solana’s throughput has historically been gated by a protocol cap on the total compute that fits in each block. SIMD-0370 argues that, in a post-Alpenglow world with explicit skip votes, the cap becomes redundant and blocks should scale dynamically with actual capacity.
What the cap does today
Solana measures execution work in compute units, or CUs. Each transaction consumes some amount of compute, and each block has a ceiling on the total CUs its transactions may consume. Today that ceiling is 60 million CUs per block, with a separate proposal on the table to raise it to 100 million CUs. These ceilings were designed as safety rails so the median validator could execute blocks inside the slot time.
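To make the ceiling concrete, here is a back-of-the-envelope sketch in TypeScript. The 200,000 CU cost per swap is a hypothetical round number chosen for illustration, not a measured figure for any real program:

```typescript
// Rough CU math for a single block. SWAP_CU_COST is a hypothetical
// round number, not a measured cost for any real program.
const BLOCK_CU_LIMIT = 60_000_000;   // current per-block ceiling
const PROPOSED_LIMIT = 100_000_000;  // the separate raise proposal
const SWAP_CU_COST = 200_000;        // illustrative per-swap cost

const swapsNow = Math.floor(BLOCK_CU_LIMIT / SWAP_CU_COST);    // 300
const swapsRaised = Math.floor(PROPOSED_LIMIT / SWAP_CU_COST); // 500

console.log(`~${swapsNow} swaps fit under the 60M cap`);
console.log(`~${swapsRaised} swaps fit under a 100M cap`);
```

Whatever the real per-transaction costs, the point is that a fixed numerator makes per-block capacity a constant rather than a function of demand or hardware.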
Think of a block like a cargo container on a conveyor belt. The conveyor moves at a fixed speed. A global cap means every container is the same size, no matter whether the forklift waiting at the next station is small or massive. If your forklift is oversized and could move a larger container, the cap still forces you to load the same fixed amount.
That fixed-size approach has virtues. It keeps hardware expectations predictable and reduces the chance that a slow validator stalls the network. But it also imposes a static ceiling on throughput and fee capture when demand spikes. In recent surges, caps rather than raw network capacity determined how many swaps, liquidations, or token mints could land per block.
Why Alpenglow changes the calculus
Alpenglow is a sweeping consensus redesign targeting much faster finality and a simpler control plane for future upgrades. It restructures voting and data flow into components that aim to make confirmation take roughly the time it takes to cross the network. The official explainer introduces Votor for voting and Rotor for data dissemination, and sets expectations for sub-second finality once deployed. For background, see Anza’s overview, Alpenglow: A New Consensus for Solana.
The part that matters for SIMD-0370 is the skip vote. In Alpenglow, voter nodes can explicitly signal that a block took too long to execute, so it is skipped and the network advances. That means the chain has a built-in brake that engages when a block exceeds what a significant fraction of validators can process in time. If slow blocks are skipped anyway, a separate fixed compute cap becomes less necessary as a guardrail.
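To make the mechanism’s shape concrete, here is a toy stake-weighted skip tally in TypeScript. The 60 percent threshold is a placeholder, not Alpenglow’s actual parameter, and real Votor logic is considerably more involved:

```typescript
// Toy stake-weighted skip decision. The threshold is illustrative
// only; consult the Alpenglow specification for the real rules.
interface Voter {
  stake: number;      // validator's stake weight
  votedSkip: boolean; // true if this voter signaled a skip
}

function blockIsSkipped(voters: Voter[], skipThreshold = 0.6): boolean {
  const totalStake = voters.reduce((sum, v) => sum + v.stake, 0);
  const skipStake = voters
    .filter(v => v.votedSkip)
    .reduce((sum, v) => sum + v.stake, 0);
  return skipStake / totalStake >= skipThreshold;
}

// A block that most stake could not execute in time gets skipped.
console.log(blockIsSkipped([
  { stake: 40, votedSkip: true },
  { stake: 35, votedSkip: true },
  { stake: 25, votedSkip: false },
])); // true
```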
Bigger, dynamic blocks in practice
If SIMD-0370 is adopted after Alpenglow, the leader’s job becomes more market-driven. A leader will try to pack as many profitable transactions as possible into the slot, subject to real-world limits like network bandwidth and how fast other validators can execute. If they push too far, more validators will skip, confirmation will slow or the block will not gather enough votes, and the leader will have wasted a slot. That feedback discourages reckless block stuffing and encourages leaders to find the profitable edge where most of the network can still keep up.
The proposal also clarifies that not all limits disappear. Even with the compute cap removed, protocol level constraints such as maximum shred counts and other size controls still apply. In other words, Solana would not allow arbitrarily huge blobs that cannot be sliced, propagated, and reassembled. The change targets the compute budget ceiling, not the transport envelope.
A thought experiment helps. Imagine a volatile DeFi hour where average blocks run at 85 percent fullness and peak blocks press against the 60 million CU ceiling. If a leader can safely execute and disseminate 90 million CUs in the same slot time because their hardware, kernel tuning, and network path are top tier, then they can try to pack that much. If most validators keep up, the chain accepts larger blocks, throughput goes up, and the leader earns more fees. If many validators cannot keep up, skip votes rise and the leader is punished economically. The market searches for the per-slot maximum that the actual network can sustain.
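A toy model of that search process might look like the sketch below. The numbers mirror the thought experiment and are purely illustrative; real packing decisions weigh fees, bandwidth, and execution scheduling, not a single scalar:

```typescript
// Toy leader that probes for the largest block the network sustains.
// Oversized blocks are "skipped" (the leader loses the slot), so the
// target backs off; successful blocks let it probe slightly higher.
function simulatePacking(
  networkCapacityCu: number, // what most validators can execute in time
  startTargetCu: number,
  slots: number,
): number {
  let target = startTargetCu;
  for (let i = 0; i < slots; i++) {
    if (target > networkCapacityCu) {
      target = Math.floor(target * 0.9);  // skipped: back off
    } else {
      target = Math.floor(target * 1.02); // landed: probe higher
    }
  }
  return target;
}

// A leader starting at 60M CUs against a network that sustains 90M
// converges to oscillate just under the true capacity.
console.log(simulatePacking(90_000_000, 60_000_000, 200));
```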
Where users feel the difference
- DeFi traders: Fewer failed swaps at peak minutes. When demand and priority fees spike, larger dynamic blocks absorb more of the backlog, so price impact from delays is smaller and slippage protection triggers less often. Order books can process more cancels and replaces without starving taker flow.
- Payments: If a payments rail wants sub-second finality for retail checkout or game items, dynamic blocks reduce the number of slots your transaction must wait in the queue when bursts hit. That translates into steadier p95 and p99 latencies for real users, complementing efforts like Visa’s stablecoin payout rails.
- Automated agents and market makers: Agents that post and pull quotes hundreds of times per second benefit when aggressive, parallelizable microtransactions fit into a single slot rather than spill over. More quote refreshes land on-chain, which strengthens on-chain price discovery.
- Builders of complex programs: Heavier instructions that previously fought for space underneath a fixed ceiling get more headroom. That includes protocols that do batched accounting, restaking flows with many cross-program invocations, or settlement engines that bundle dozens of updates per user action.
The fee and MEV picture
Fees are where protocol design, hardware, and incentives meet. Leaders who can safely pack more transactions will capture more fees per slot. That enriches block producers relative to slow validators and pushes the network toward higher performance. Searchers and builders who participate in Solana’s auction ecosystems should expect changes in how value is captured. Larger, more variable blocks increase the space in which priority fee bidding, backrun opportunities, and liquidations occur. The total pie can grow even if the marginal fee per transaction does not. For context on auction design tradeoffs, see Arbitrum’s auction experiments.
For validators, the important nuance is that fee revenue will correlate both with block production share and with the ability to pack at the profitable edge without triggering excessive skip votes. Over time, that drives investment into CPU, memory bandwidth, storage write performance, and network interfaces. In the short run, operators with tuned hardware and clients that execute faster will likely see a fee tailwind.
The grounded risks
Acceleration has costs. SIMD-0370 surfaces three key risks that must be handled with discipline.
- Centralization pressure from a hardware arms race
Dynamic blocks reward performance. That can concentrate returns in operators with the best data center locations, kernel tuning, and the funds to upgrade often. If left unchecked, stake and production could drift toward a smaller set of well-capitalized validators. That outcome is avoidable but not automatic.
What to do: treat validator economics as a product. Publish and maintain minimum hardware guidance that reflects the network’s evolving envelope. Track stake-weighted hardware diversity and client performance, and set soft gates in release cycles that only widen the envelope when key decentralization metrics stay healthy. Encourage client competitiveness so gains do not come from a single implementation. And consider targeted incentives, like crediting timely votes more heavily when network load is high, to reward responsive operators that run in more challenging geographies.
- Block propagation and execution bottlenecks
Bigger blocks must still move through the network and execute fast enough to meet Alpenglow’s voting windows. Even if a leader can produce a large block, it is not useful if many validators cannot receive and execute it in time. Alpenglow’s Rotor aims to improve data dissemination by leveraging stake-weighted bandwidth, but links can still saturate at chokepoints.
What to do: expand the envelope in phases, with real-time telemetry. Measure block propagation time distributions, per-region bandwidth saturation, and p95 execution times. Keep conservative shred and packet limits until data shows room to loosen. Invest in parallel execution improvements and disk IOPS budgeting within validator software. Require client maintainers to demonstrate headroom under adversarial traffic patterns before toggling wider defaults.
- Skip vote dynamics and user experience
Skip votes are a pressure valve, not a panacea. If leaders consistently aim too high, skip rates can climb and the chain can waste capacity on blocks that never gather sufficient votes. That could produce more variance in user latencies. There is also the strategic risk that a malicious or misconfigured leader repeatedly proposes oversized blocks, effectively rate-limiting a window of slots.
What to do: watch skip rates like a hawk and set alarms when the p95 skip rate stretches. Calibrate client heuristics so local execution time estimates inform how aggressively leaders pack the next block. Consider mitigations for pathological leaders, such as reputation-weighted soft constraints on block composition, without reintroducing a static cap.
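As a sketch of what watching skip rates can mean operationally, here is a minimal rolling-window monitor. The window size and 5 percent alert threshold are arbitrary illustrative choices, not recommended values:

```typescript
// Rolling-window skip-rate monitor. Thresholds are illustrative only.
class SkipRateMonitor {
  private outcomes: boolean[] = []; // true = slot was skipped

  constructor(
    private windowSize = 1000,
    private alertThreshold = 0.05,
  ) {}

  record(skipped: boolean): void {
    this.outcomes.push(skipped);
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
  }

  skipRate(): number {
    if (this.outcomes.length === 0) return 0;
    return this.outcomes.filter(v => v).length / this.outcomes.length;
  }

  // Alert only once the window is at least half full, to avoid noise.
  shouldAlert(): boolean {
    return this.outcomes.length >= this.windowSize / 2 &&
      this.skipRate() > this.alertThreshold;
  }
}
```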
One more caveat from the ongoing discussion: some core contributors argue that blocks are often not full today, so removing the limit might not deliver immediate user-level gains. That is a real possibility in off-peak hours. The case for SIMD-0370 is not that every block will suddenly be larger, but that when demand is there, the protocol will allow blocks to scale to match it rather than hit a fixed ceiling. The pull request discussion records both the enthusiasm and the skepticism.
What changes for builders
- Budget with headroom: stop designing right against the current ceiling. Assume that at peak, your transactions may land inside larger blocks with higher competition. Size priority fees based on the value at risk in the next few slots rather than yesterday’s cap (a fee-sizing and retry sketch follows this list).
- Make failure cheap: even with dynamic blocks, some bursts will trigger skips. Build idempotent transactions and retries that minimize user pain when a block is skipped.
- Test for burstiness: load test on devnets with transaction bursts rather than a steady stream. Watch how your program behaves when it is competing for space in blocks that grow and shrink with demand.
- Consider batchers: when more space is available, off-chain batchers that coalesce many tiny actions into single transactions have more opportunities. That can cut user fees without starving your program during quiet periods.
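For the fee-sizing and retry bullets above, here is a minimal sketch using @solana/web3.js. The CU limit, fee level, and retry count are illustrative, and sendWithPriority is a hypothetical helper for this article, not a library function:

```typescript
import {
  ComputeBudgetProgram,
  Connection,
  Keypair,
  Transaction,
  TransactionInstruction,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

// Attach an explicit CU limit and priority fee, then retry on failure.
// sendWithPriority is a hypothetical helper, shown for illustration.
async function sendWithPriority(
  connection: Connection,
  payer: Keypair,
  ix: TransactionInstruction,
  microLamports: number, // priority fee: size it from value at risk
  maxAttempts = 3,
): Promise<string> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const tx = new Transaction()
        .add(ComputeBudgetProgram.setComputeUnitLimit({ units: 200_000 }))
        .add(ComputeBudgetProgram.setComputeUnitPrice({ microLamports }))
        .add(ix);
      // A fresh blockhash per attempt maximizes the chance of landing,
      // but two attempts could in principle both land, which is why the
      // instruction itself should be idempotent (see the bullet above).
      const { blockhash, lastValidBlockHeight } =
        await connection.getLatestBlockhash();
      tx.recentBlockhash = blockhash;
      tx.lastValidBlockHeight = lastValidBlockHeight;
      tx.feePayer = payer.publicKey;
      return await sendAndConfirmTransaction(connection, tx, [payer]);
    } catch (err) {
      if (attempt === maxAttempts) throw err;
    }
  }
  throw new Error("unreachable");
}
```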
The validator playbook
- Profile your pipeline: measure CPU stalls, memory bandwidth, disk writes, and network interrupts under synthetic load. Fix the bottleneck that shows up first. Many operators find that kernel tuning and NIC offloads deliver low-hanging fruit.
- Upgrade deliberately: a step-function jump in RAM clock or storage write speed often matters more than a marginal CPU model bump. Track real p95 execution time per block against your slot time budget.
- Pick your client with intent: Firedancer, Agave, and other clients bring different execution engines and networking stacks. Diversity is a security benefit and a performance lever. Track your success rate and skip rate by client.
- Join testnets early: when Alpenglow testnets open, run in them. Treat skipped blocks and slow votes as bugs in your stack until proven otherwise.
Roadmap from idea to rollout
Where we stand:
- Proposal status: SIMD-0370 opened for discussion in late September 2025. It is explicitly intended for the post-Alpenglow world, and debates are underway in the open pull request.
- Alpenglow timing: the engineering goal is testnets in late 2025, with mainnet activation targeted after that. The exact cadence will depend on test results and validator readiness.
- Phasing plan: if Alpenglow lands smoothly, expect a staged approach to removing the cap. First, activate the feature behind a cluster-wide gate with conservative defaults. Next, widen parameters after on-chain telemetry shows safe propagation and execution. Finally, consider removing enforcement of a compute cap entirely while preserving transport limits like shreds.
- Compatibility with future designs: removing the cap now does not preclude multiple concurrent proposers or asynchronous execution. Some designs might reintroduce a form of dynamic limit for coordination, but today’s static ceiling is not a prerequisite for those paths. For a contrasting approach to scaling tradeoffs, see the Ethereum L1 zkEVM plan.
KPIs that tell the story
If this idea moves from discussion to testnets and then mainnet, these metrics will separate hype from progress:
- Skip rate by epoch and by region: rising skip rates signal overstuffed blocks, bad networking, or both. Watch p95 and p99 specifically.
- Block fullness distribution: not averages, distributions. The rightward tail in peak hours is where the user experience is won or lost.
- Fee capture per block: total fees and the share attributable to non-vote transactions. If dynamic blocks work, leaders should see a measurable lift under load.
- MEV capture and distribution: track the share between block producers, searchers, and protocols. The goal is more capacity without rent seeking that overwhelms priority fee markets.
- Client mix and performance: stake share and production share by client, plus their relative execution times and vote inclusion latencies. Healthy diversity is both a resilience and a performance KPI.
- Propagation time and shred loss: end-to-end dissemination metrics, not just leader outbound. Rotor’s promise shows up here first.
- Execution latency p95 and p99: the user-facing metric that wraps it all together. If these numbers fall in peak periods without rising variance, the UX story is working (a minimal percentile sketch follows this list).
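For the latency KPIs in the list above, a nearest-rank percentile helper is enough to get started; the sample values below are made up:

```typescript
// Nearest-rank percentile over raw latency samples (any unit).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

// Made-up confirmation latencies in milliseconds.
const latenciesMs = [120, 95, 430, 180, 150, 210, 99, 310, 140, 170];
console.log(`p95: ${percentile(latenciesMs, 95)} ms`); // 430 ms
console.log(`p99: ${percentile(latenciesMs, 99)} ms`); // 430 ms
```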
The bottom line
SIMD-0370 is a simple idea with complex consequences. If Solana trusts Alpenglow’s skip votes to act as a safety brake, then the network can let real capacity decide how big a block should be in any given slot. That opens the door to more throughput, better fee capture, and smoother peak-hour UX. It also raises the stakes for validator operators and client teams, who will now set the network’s envelope with their engineering.
The path forward is to prove the edges in public. Start on testnets, measure ruthlessly, and widen only when the metrics say it is safe. If the data shows that dynamic blocks raise peak capacity while holding skip rates and p99 latency within targets, Solana will have taken another meaningful step toward internet-speed finance. If not, the same data will tell the community where to adjust. Either way, the next few months will be more about measurement than slogans, and that is how breakthrough infrastructure gets built.