High Rate of Transaction Drops and Transaction Inclusion Delays

Incident Report for Base

Postmortem

Postmortem: Transaction Drops and Inclusion Delays

Summary

On Saturday, January 31, 2026 between 19:17 and 20:41 UTC (1 hour and 24 minutes), Base Mainnet experienced a period of high transaction inclusion latency and increased mempool evictions. During this time, only an estimated 20% of submitted transactions landed onchain.

User Impact

  • Dropped Transactions: A total of 2.1 million transactions were dropped from the transaction pool and required resubmission by users; by comparison, 512k transactions were included in blocks during the incident.
  • Block Delays: Spikes in incoming transactions, driven by the degraded inclusion times and the resulting resubmissions, caused block production delays. Approximately 2% of blocks in the incident window took longer than 2 seconds to build.

Root Cause

The root cause was a configuration change to the way transactions were propagated from our mempool nodes to the block builder. The change was intended to address an ongoing (but less severe) incident that was causing periodic delays in transaction inclusion.

The change increased the size of the queue of transactions waiting to be fetched from the mempool nodes by the block builder. We had determined that the previous, smaller queue size was causing the builder to drop transactions that could have been executed during high-activity periods, resulting in the periodic delays.

This had unintended effects: it intersected with unexpected behavior in the mempool clients that prevented proper management of the mempool broadcast queues. With the larger fetch size, the builder attempted to fetch a higher number of older transactions that were no longer executable due to rapidly rising base fees. Those fetch requests failed, the builder retried them, and the mempool nodes re-inserted the failed transactions at the top of the queue, creating a self-reinforcing feedback loop: the builder continuously requested older transactions instead of the latest ones, which were more likely to be executable.
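The re-insertion loop can be illustrated with a minimal, hypothetical queue model. The function, its parameters, and the fee values below are illustrative assumptions, not Base's actual implementation; the point is only to show how head-of-queue re-insertion of stale transactions starves newer, executable ones.

```python
from collections import deque

def drain_queue(queue, base_fee, fetch_size, max_rounds, reinsert_at_front=True):
    """Toy model of a builder fetching transactions from a mempool queue.

    Each transaction is a (tx_id, max_fee) pair. A fetch "fails" when the
    transaction's max fee is below the current base fee (it is no longer
    executable). On failure, the mempool re-inserts the transaction at the
    FRONT of the queue (the buggy behavior) or at the back (one possible fix).
    """
    included = []
    for _ in range(max_rounds):
        batch = [queue.popleft() for _ in range(min(fetch_size, len(queue)))]
        for tx_id, max_fee in batch:
            if max_fee >= base_fee:
                included.append(tx_id)  # executable: goes into a block
            elif reinsert_at_front:
                # Buggy behavior: the stale tx returns to the head of the
                # queue, so the next fetch retries it before newer txs.
                queue.appendleft((tx_id, max_fee))
            else:
                # Re-inserting at the tail lets newer txs be fetched first.
                queue.append((tx_id, max_fee))
    return included

# Five stale txs (max fee 1) sit ahead of three fresh txs (max fee 20);
# the base fee has risen to 10, so only the fresh txs are executable.
stale = [(f"stale{i}", 1) for i in range(5)]
fresh = [(f"fresh{i}", 20) for i in range(3)]

# Head-of-queue re-insertion starves the fresh txs entirely.
print(drain_queue(deque(stale + fresh), 10, 4, 10))  # → []

# Tail re-insertion lets the fresh txs through after one round.
print(drain_queue(deque(stale + fresh), 10, 4, 10, reinsert_at_front=False))
# → ['fresh0', 'fresh1', 'fresh2']
```

Because the number of stale transactions exceeds the fetch size, the buggy variant retries the same unexecutable transactions on every round and never reaches the fresh ones, mirroring the loop described above.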

When organic network traffic spiked on Saturday (peak transaction submission rate was 25,000 TPS), this cascaded into an inability for the builder to reliably source executable transactions from the mempool nodes.

Mitigation

We mitigated the incident by rolling back the configuration change, restoring the previous transaction propagation parameters.

Our Learnings and Next Steps

We are taking immediate and long-term actions to prevent this type of incident from recurring and to improve our response.

1. Technical Improvements

We are improving stability by optimizing the pipeline that moves transactions from ingress through the mempool nodes to the builder, while removing the unnecessary overhead of P2P gossip. Additionally, we are actively working to scale and tune the mempool to better absorb spikes in transaction volume.

This project is in progress, and estimated to take approximately one month, with stabilizing milestones along the way.

2. Alerting and change monitoring

We are focused on closing technical gaps related to monitoring:

  • Alert Tuning: We have updated our dashboards and alerts to better capture intermittent degradation, like this incident, so that our team is notified immediately.
  • Change Monitoring: We will establish active, temporary monitoring for major changes to mainnet infrastructure to catch even slight behavioral differences during rollouts.

We encourage all partner teams to subscribe to the Base Status Page to remain in the loop. Additional information is shared on X (buildonbase), and Discord.

Posted Feb 03, 2026 - 23:26 UTC

Resolved

We have validated that the fix restored overall network stability. We will be conducting a full RCA and will share a public postmortem in the coming days.

Periods of increased network congestion can still occasionally result in transactions being delayed or dropped; we are working on longer-term fixes to ameliorate this behavior and will share our plans and updates as we make progress.
Posted Jan 31, 2026 - 21:51 UTC

Monitoring

We have deployed a fix for the problem causing transactions to not propagate correctly to the block builder under load.

Transaction inclusion has improved; we are continuing to monitor the fix to ensure the problem is fully resolved.
Posted Jan 31, 2026 - 20:47 UTC

Identified

We are now seeing high failure rates in transaction processing by the txpool and block builder. Users will experience high rates of transaction drops from the txpool, as well as overall high transaction inclusion times.

We are actively troubleshooting this problem.
Posted Jan 31, 2026 - 20:35 UTC

Monitoring

We have identified an issue causing occasional dropped transactions during high-traffic periods. We have implemented a fix and are now monitoring the results.

Update 1/31: The fix is still being tested. We still have high rates of transaction distribution failures and continue to investigate solutions.
Posted Jan 30, 2026 - 23:05 UTC

Investigating

Base is experiencing intermittent transaction delays due to network congestion. Submitted transactions can occasionally take up to two minutes to be included in a block. We are actively working on a fix and expect this behavior to continue until it can be tested and deployed.
Posted Jan 22, 2026 - 22:19 UTC
This incident affected: Mainnet (Block production).