Understanding Attestation Packing Efficiencies

Analyzing the Attestation Packing Problem and estimating packing efficiencies of historical beacon blocks.

Before we Start

Hi all!

This will be a bit different from our normal posts and is largely targeted towards other core Proof-of-Stake consensus developers/contributors.

As such, this post assumes fairly detailed knowledge of the Proof-of-Stake upgrade coming to the Ethereum network and a strong grasp of the Ethereum Proof-of-Stake specification.

Familiarity with the concept of attestation packing would also be useful, but it will be introduced in this post.


| Term | Definition |
| --- | --- |
| Available Attestations | Attestations that were available to be included in a given block. |
| Included Attestations | Attestations that were actually included in a given block. |
| Inclusion Delay | The number of slots that passed before an attestation was included. |
| Participation Rate | The percentage of total staked ETH that participated in voting in a given epoch. |
| Attestation Packing | The method used to pack attestations into a block. |
| Block Efficiency | The ratio between Included Attestations and Available Attestations. |
| Normalized Efficiency | The Block Efficiency corrected with a 'best-effort' attempt at removing offline validators from the dataset. |


Even before genesis of the Proof-of-Stake-secured Beacon Chain in December 2020, client teams had been working hard to improve the performance of their software.

One such performance metric is validator reward efficiency: how much of the maximum possible reward is obtained.

Validator rewards are dependent on several different factors:

Inclusion Delay

Disclaimer: Rewards for certain attestation-related duties will be tweaked in the upcoming Altair hard-fork.

The first is the concept of inclusion delay. The longer your attestation takes to be included in a block, the smaller your reward will be. The minimum possible delay is 1 slot, which gives you the entire possible reward. For every additional slot of delay, you lose a portion of the reward: a delay of 2 slots earns 1/2 the possible reward, a delay of 3 slots earns 1/3, and so on. If your validator does not produce an attestation, or the attestation is not included within 32 slots, you receive no reward.
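The 1/delay scaling above can be sketched as follows. This is an illustration only, not the spec's exact reward math; `base_reward` is a hypothetical stand-in for the spec-derived per-validator base reward.

```python
def inclusion_delay_reward(base_reward: int, delay: int) -> int:
    """Reward scales as 1/delay; nothing is earned beyond 32 slots.

    Illustrative only: `base_reward` is a hypothetical stand-in for the
    spec-derived base reward, and integer division mirrors the spec's
    use of integer arithmetic.
    """
    if delay < 1 or delay > 32:
        return 0
    return base_reward // delay

# A delay of 1 slot earns the full reward, 2 slots half, and so on.
for delay in (1, 2, 3, 32):
    print(delay, inclusion_delay_reward(32_000, delay))
```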

Figure 1: The effect of inclusion delay on validator rewards.

Attestation Accuracy

The second is attestation accuracy. When attestations are made, they make a vote for the source, target and head of the beacon chain. If your validator attests with the majority, you will receive a reward proportional to the percentage of the validator set who also attest in the majority. If your validator fails to attest, or does not attest with the majority, you are penalized.

For example, if 99% of the validator set attests to the same source, target and head, those validators will receive 99% of the possible accuracy reward. The 1% of validators who did not attest with the majority are penalized accordingly.
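The worked example above can be expressed as a simplified model. This is a sketch of the proportionality only, not the spec's formula; the flat penalty for a non-matching vote is an illustrative assumption.

```python
def accuracy_reward(base_reward: float, matching_fraction: float, matched: bool) -> float:
    """Simplified model of a single vote component (source, target or head).

    A matching vote earns a reward scaled by the fraction of the validator
    set that also matched; a missing or non-matching vote is penalized.
    The flat penalty is an illustrative assumption, not the spec's value.
    """
    if matched:
        return base_reward * matching_fraction
    return -base_reward

# With 99% of the set matching, matching validators earn 99% of the reward.
print(accuracy_reward(100.0, 0.99, True))
print(accuracy_reward(100.0, 0.99, False))
```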

Block Reward

Finally, the block reward is the reward given to the block proposer and is proportional to the number of attestations included in the block. While there is no penalty for not proposing a block, the validator scheduled to propose the block misses out on the reward.

Helpful Links

For more information about rewards and penalties and how they are calculated, see the following resources:

The Attestation Packing Problem

While there is a maximum number of attestations allowed in any given block, BLS cryptography allows us to 'aggregate' many signatures into a single signature. When these signatures are over the same attestation data, we are able to aggregate all of those individual attestations together.

This dramatically reduces the impact of attestations on the block size without compromising on consensus security, and allows us to support hundreds of thousands of validators.

Certain validators are assigned to be 'aggregators', and are responsible for collecting individual attestations in the pool, aggregating them and propagating them around the network. While these 'aggregators' are assigned to the task, block proposers themselves are also able to aggregate any compatible attestations as needed.

We can even aggregate other aggregates, provided that they don't overlap, that is, a signature isn't aggregated twice.
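The disjointness requirement can be illustrated with aggregation bitfields modeled as Python integers. Only the bitfield bookkeeping is shown; the actual BLS signature aggregation is omitted.

```python
def can_aggregate(bits_a: int, bits_b: int) -> bool:
    """Two aggregates of the same attestation data can be merged only
    if no validator's signature appears in both (disjoint bitfields)."""
    return (bits_a & bits_b) == 0

def aggregate(bits_a: int, bits_b: int) -> int:
    """Merge two disjoint aggregation bitfields. The BLS signatures
    themselves would be combined separately; only the bitfield
    bookkeeping is sketched here."""
    assert can_aggregate(bits_a, bits_b), "overlapping aggregates"
    return bits_a | bits_b

a = 0b00110  # validators 1 and 2 signed
b = 0b01001  # validators 0 and 3 signed
assert can_aggregate(a, b)
assert aggregate(a, b) == 0b01111
assert not can_aggregate(a, 0b00010)  # validator 1 appears in both
```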

Since there is an upper limit on the number of aggregates which can be included, proposers are unable to simply include every valid individual attestation they receive; rather, they must decide which aggregates to include and which to ignore in order to maximize block rewards. This is known as attestation packing.

When block space is limited, it is possible that certain aggregates need to be excluded. This means that a scenario could exist where an attestation is valid and known, but due to limited block space, it is not included since the proposer is only aware of it in a suboptimal overlapping aggregate.

For example, consider the following aggregates:

Figure 2: Overlapping Aggregates.

Here each box represents a validator's attestation (or lack thereof) as part of an aggregate for a given committee. The red coloured attestations are unique to each aggregate.

When it comes time to pack blocks full of attestations, if there is enough space, these aggregates could both be included and so all attestations are accounted for.

However, if block space is limited, there could be situations where it is optimal to include only one of them (missing out on the 3 unique attestations in the other aggregate) so that a different, more valuable aggregate can be included (perhaps one with 4 new unique attestations).

Of course if the proposer had access to other aggregates which included those missing attestations and didn't overlap, or the unaggregated attestations themselves, they could be aggregated together to solve the problem.

Inefficiencies arise when the proposer does not have access to those attestations or if block space is limited and the packing algorithm isn't able to correctly determine which set of aggregates to include to ensure the densest possible packing.

The problem for the proposer, therefore, is to select a subset of the whole attestation set (aggregated and unaggregated) that they have access to, such that they maximize their block reward.

This problem is an example of an NP-hard optimization problem. It is very similar in structure to well-known optimization problems such as the Maximum Coverage and Set Cover problems.

A key feature of these types of problems is that they are widely believed to be unsolvable in polynomial time. In practice this necessitates the use of approximation algorithms, which run quickly but do not guarantee an optimal solution, or exact algorithms, which are only guaranteed to run reasonably quickly if the structure of the input is constrained.

Jacek Sieka from the Nimbus team has written a high-level explanation on Twitter regarding attestation packing and how Nimbus has implemented improvements in their own codebase.

A more detailed discussion of the attestation packing problem can be found in a write-up by Victor Farazdagi from Prysmatic Labs.

For those that are curious, the attestation packing algorithm used by Lighthouse is a greedy implementation of a weighted covering algorithm. A formal analysis of such can be found here. Michael Sproul from Sigma Prime also presented on this topic at Devcon5.
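To make the greedy approach concrete, here is a minimal sketch of greedy maximum coverage over aggregates, with each aggregate modeled as a set of validator indices. This is not Lighthouse's actual implementation: it ignores reward weights, per-slot limits and overlap splitting, and simply picks whichever aggregate covers the most not-yet-included attestations.

```python
def greedy_pack(aggregates: list[set[int]], limit: int) -> list[set[int]]:
    """Greedy maximum-coverage sketch: repeatedly pick the aggregate
    covering the most not-yet-covered attestations, up to `limit`
    aggregates per block. Greedy selection is a classic
    (1 - 1/e)-approximation for maximum coverage."""
    covered: set[int] = set()
    chosen: list[set[int]] = []
    for _ in range(limit):
        best = max(aggregates, key=lambda agg: len(agg - covered), default=None)
        if best is None or not (best - covered):
            break  # nothing left that adds new attestations
        chosen.append(best)
        covered |= best
    return chosen

# Three overlapping aggregates, but room for only two in the "block":
aggs = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}]
picked = greedy_pack(aggs, limit=2)
# Greedy takes {4, 5, 6, 7} first, then {1, 2, 3}: 7 attestations covered.
assert len(set().union(*picked)) == 7
```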

Throughout this post, we will not set out to find a new algorithm which is more efficient than the ones currently in use. Rather, we will attempt to find out how efficient the current Ethereum clients are at packing attestations into blocks, how they compare, and how much efficiency there is still to be gained.

Possible Sources of Inefficiency

First we should consider the possible reasons why an attestation was not included in a block. In a 200,000 validator network with an average participation rate of 99%, (the equivalent balance of) 2,000 validators miss an attestation every epoch. This could be for three different reasons:

  1. Validator is offline, either in the short term due to an outage, or in the long term, perhaps the operator lost their keys, forgot about their node, etc.
  2. Network inefficiencies such as attestations being insufficiently propagated such that they are never visible to the block proposer or only visible in a suboptimal aggregate.
  3. The block proposer excludes the attestation during the packing process due to a lack of block space or inefficiencies in its block packing algorithm.

In an ideal case, we would be able to minimize or completely eliminate the effects of (1) and (2), leaving us with an overall block packing efficiency.

However, distinguishing between these sources is difficult, so to start we will take as our measure of efficiency the ratio between the attestations that were included in a given block and the attestations that should have been available assuming perfect network conditions.

Note: This will not be a true measure of packing efficiency since it does not distinguish between the other sources of inefficiency. It will, however, give us a starting point from which to compare the implementations used by different clients: averaged over a large enough dataset, the effects of (1) and (2) should affect all clients equally, while any algorithmic differences will remain.


The method used to calculate the set of available attestations is as follows:

  • Query historical blocks using the Lighthouse Beacon Node API to get the committee data for each epoch.
  • Parse this data per slot to extract a list of the validators which are required to attest in each slot.
  • Pull historical blocks and extract the list of attestations which were packed into previous blocks.
  • Remove the attestations that were included in previous blocks up to 32 slots ago.

So for slot $n$:

\begin{equation} \label{eq1} A_n = \bigcup_{i=n-32}^{n-1} C_i \,-\, \bigcup_{i=n-32}^{n-1} I_i \end{equation}

$A_n$ = The set of available attestations for slot $n$.
$C_i$ = All attestations as defined by the committee selection at slot $i$.
$I_i$ = The attestations that were included in the block at slot $i$.

The attestation packing efficiency $E_n$ can then be calculated as:

\begin{equation} \label{eq2} E_n = \frac{|I_n|}{|A_n|} \end{equation}

Here, $A_n$ is an optimistic set of the available attestations for slot $n$. This assumes that all possible attestations were submitted and seen by the proposer.
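The calculation above can be sketched as follows, with attestations modeled as identifiers held in per-slot sets. This data layout is hypothetical, for illustration only; a real implementation (such as lcli's) works with committee indices and aggregation bits.

```python
def block_efficiency(committees: dict, included: dict, n: int) -> float:
    """Compute E_n = |I_n| / |A_n|, where A_n is every attestation
    scheduled in the previous 32 slots that has not yet been included.

    `committees[i]` / `included[i]` are per-slot sets of attestation
    identifiers (a hypothetical layout for illustration).
    """
    scheduled, done = set(), set()
    for i in range(max(0, n - 32), n):
        scheduled |= committees.get(i, set())
        done |= included.get(i, set())
    available = scheduled - done
    return len(included.get(n, set())) / len(available) if available else 1.0

# Three attestations scheduled at slot 0; "a" was included at slot 1,
# leaving {"b", "c"} available at slot 2, of which only "b" is packed.
committees = {0: {"a", "b", "c"}}
included = {1: {"a"}, 2: {"b"}}
print(block_efficiency(committees, included, 2))  # 1/2 = 0.5
```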

As discussed earlier, due to network inefficiencies and offline validators, the calculated efficiencies will be lower than the true efficiency. However, we will (naively) work on the assumption that, since Mainnet's participation rate can be as high as 99.7%, the error from these effects is small.

Using a Lighthouse tool called lcli, we can calculate the efficiencies of all the slots in a specific epoch range.

lcli etl-block-efficiency --output /path/to/output.csv --start-epoch 30000 --end-epoch 30999

For more info about lcli, see the usage/installation instructions.

This pulls the committee data and blocks from those epochs, calculates the available and included attestations as well as proposer info per slot and saves the data into output.csv.

Using graffiti analysis, the results can be separated into their various client implementations and staking pools. The vast majority of proposers will be unidentifiable, however this should still give a good baseline against which to compare efficiencies.

Since this analysis isn't about comparing specific clients against each other, client and pool identities have been obfuscated.


| Implementation | Efficiency |
| --- | --- |
| Client 1 | 75.50% |
| Client 2 | 73.75% |
| Client 3 | 76.99% |
| Client 4 | 78.81% |
| Pool 1 | 77.04% |
| Pool 2 | 82.92% |
| Pool 3 | 78.58% |
| Pool 4 | 78.29% |
| Pool 5 | 81.18% |
| Pool 6 | 79.84% |
| Pool 7 | 81.98% |
| Unknown | 79.77% |

Table 1: The average attestation packing efficiencies* of Mainnet across the entire chain history.

From these results we can see that the calculated efficiencies are significantly lower than the average participation rate of the network. However, by taking an average over the entire history of the chain, we begin to lose valuable information. We cannot see how clients improved over time, whether from client upgrades, better packing algorithms, or improved network conditions. In addition, some staking pools were not operating at Genesis, so a direct comparison doesn't make sense for them as they will have inflated efficiencies.

Let's take Client 1 and separate the averages for every 10,000 epochs. This will showcase how Client 1's performance has improved over time.

| Epochs | Efficiency |
| --- | --- |
| 1-10,000 | 70.72% |
| 10,001-20,000 | 75.52% |
| 20,001-30,000 | 75.42% |
| 30,001-40,000 | 84.20% |
| 40,001-50,000 | 84.86% |

Table 2: The average attestation packing efficiencies* of Client 1 across the entire chain history in 10,000 epoch intervals.

Clearly significant improvements have been made since Genesis. However these improvements could be on the network level or on the client level.

So rather than 10,000 epoch intervals, a better way to visualize the improvements over time is by fitting a LOWESS-smoothed trendline to the entire data set.

Figure 3: Trends of the attestation packing efficiency* for each of the 4 major clients across the entire chain history.

Likewise for some major staking pools:

Figure 4: Trends of the attestation packing efficiency* for some major staking pools across the entire chain history.

From the results above, we see a general upward trend across all clients. The reasons are likely twofold:

  1. As client teams push new releases, the overall performance of the software improves and any changes to the packing algorithm, or related parts of the stack, improve efficiency.
  2. As clients perform better on the network the overall network conditions improve allowing for better attestation visibility and thus more efficient packing.

However, despite the improvements, the efficiency values are still far below what we want to be achieving in the best case.

Long-Standing Offline Validators

*As some of you may have realized, the participation rate is actually going to have a significant effect on the way we are currently calculating packing efficiencies.

The reason is that offline validators have a disproportionate effect on the number of available attestations.

Consider the following example: there are 50,000 active validators on a network and 500 of the validators are perpetually offline. Assume attestations from online validators are being correctly submitted and propagated.

Using the above calculations from equation \ref{eq1}, one might expect a maximum possible attestation packing efficiency of 99%. However, while blocks occur every slot, validators only attest once per epoch.

So at each slot there is only 1/32 of the total active validator set (in this example ~1563 validators) available to make new attestations in that slot. However, all attestations that haven't been included in a block and have been submitted within 32 slots are also available to be included. This means the missing attestations from all 500 offline validators are 'available' in every slot and so take up a larger share of the available attestation set.

For the above example, assuming perfect attestation inclusion efficiency (discounting the offline validators), we'd expect calculated efficiencies equal to:

$$ E \approx \frac{Included}{Committees + Offline} \approx \frac{1563 * 0.99}{1563 + (\frac{31}{32} * 500)} \approx 76\% $$

More generally:

\begin{equation} \label{eq3} E_P \approx \frac{\frac{1}{32} * P}{\frac{1}{32} + (\frac{31}{32} * (1-P))} \end{equation}

$E_P$ = Estimate of Calculated Efficiency for P.
$P$ = Participation Rate.

Figure 5: Participation Rate vs Calculated Efficiencies.

Note: Here we are using participation rate as an estimate for the number of offline validators in a given epoch. In practice, other effects also impact the participation rate; however, offline validators are the largest contributor.

Running the numbers for Mainnet, using a participation rate of 99.7%:

$$ E_P \approx \frac{\frac{1}{32} * 0.997}{\frac{1}{32} + (\frac{31}{32} * (1-0.997))} \approx 91.2\% $$

For a testnet such as Pyrmont, which typically has a lower participation rate (around 89% at the time of writing), we get:

$$ E_P \approx \frac{\frac{1}{32} * 0.89}{\frac{1}{32} + (\frac{31}{32} * (1-0.89))} \approx 20.2\% $$
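Equation \ref{eq3} is easy to check numerically; the function below is a direct transcription of it, with nothing assumed beyond the equation itself.

```python
def expected_efficiency(p: float) -> float:
    """Estimate of the calculated efficiency for participation rate p.

    Offline validators linger in the available set for the full 32-slot
    window, so they appear in the denominator with weight 31/32.
    """
    return ((1 / 32) * p) / ((1 / 32) + (31 / 32) * (1 - p))

print(round(expected_efficiency(0.997), 3))  # Mainnet:  ~0.912
print(round(expected_efficiency(0.89), 3))   # Pyrmont: ~0.202
```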

We can plot all the calculated block efficiencies for a small section of Pyrmont which allows us to get a clearer view (be sure to note the axes values):

Figure 6: Attestation packing efficiencies for a small section of Pyrmont.

Note that the three bands (1st at ~20%, 2nd at ~32%, 3rd at ~42%) are due to skipped slots. If one slot is skipped, the next proposer is able to include additional attestations and thus the proportion of 'available' attestations made up of offline validators is reduced and the calculated 'efficiency' increases. The same occurs in the case of double skipped slots.

Likewise for Mainnet:

Figure 7: Attestation packing efficiencies for a small section of Mainnet.

No distinct bands exist here because there are far fewer skipped slots on Mainnet compared to Pyrmont.

Clearly offline validators are having a very large impact on the calculated efficiencies. However, remember that these calculated efficiencies do not represent a true measure of the attestation packing efficiency since the method does not differentiate between different sources of inefficiency. Additionally, when it comes to calculating packing efficiencies, it makes sense to only include attestations that the proposer actually had access to.

Over a large enough dataset, the results here are still valid for direct comparisons, however this data gives no indication of how well different attestation packing algorithms perform.

In order to get a clearer picture of the actual effect of the algorithms used in attestation packing, we need to minimize the effect of these offline validators.

Finding a Better Way

In order to correct for the presence of these offline validators, we can implement a 'best-effort' attempt at detecting them which will allow us to remove them from the available attestation set.

By maintaining a list of validators which have had attestations included in a window spanning the last 3 epochs, we can create an estimate for the number of long-standing offline validators at epoch $e$: $O_e$.

This allows us to calculate a new normalized efficiency $N_n$ where:

\begin{equation} \label{eq4} N_n = \frac{|I_n|}{|A_n - O_e|} \end{equation}

To understand why a 3 epoch window was selected, consider the effect of changing the window size. As the window size decreases (say, to 1 epoch), the size of $O_e$ increases, $|A_n - O_e|$ approaches $|I_n|$, and the normalized efficiencies converge to 100%. Likewise, as the window increases towards infinity, $|O_e|$ approaches 0 and the normalized efficiencies converge to the original calculated efficiencies. A window of 3 epochs was chosen as a reasonable middle ground.

Note that this will not be correct for nodes that go briefly offline for less than 3 epochs since that behaviour is indistinguishable from random network inefficiency (which also can't be corrected for using this method).
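The normalization can be sketched by extending the earlier efficiency calculation with a trailing 3-epoch inclusion window. As before, the per-slot sets of identifiers are a hypothetical data layout chosen for illustration, not lcli's actual representation.

```python
def normalized_efficiency(committees: dict, included: dict, n: int,
                          window: int = 3 * 32) -> float:
    """Compute N_n: block efficiency with long-standing offline
    validators removed from the available set.

    Validators scheduled during the trailing `window` slots with no
    inclusion at all form the offline estimate O_e.
    """
    # Estimate O_e: scheduled during the window but never included.
    scheduled_w, seen = set(), set()
    for i in range(max(0, n - window), n):
        scheduled_w |= committees.get(i, set())
        seen |= included.get(i, set())
    offline = scheduled_w - seen

    # Available set A_n over the last 32 slots, minus the offline estimate.
    scheduled, done = set(), set()
    for i in range(max(0, n - 32), n):
        scheduled |= committees.get(i, set())
        done |= included.get(i, set())
    available = (scheduled - done) - offline

    numerator = included.get(n, set()) - offline
    return len(numerator) / len(available) if available else 1.0
```

With this correction, a validator such as "v3" below, which never attests during the window, no longer drags the measured efficiency down: the raw calculation would report 1/2, while the normalized one reports 100%.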

After normalizing for long-standing offline validators:

Figure 8: Normalized attestation packing efficiencies for a small section of Pyrmont.

Compared to the original calculated efficiencies, the shape is maintained, but efficiencies have risen significantly indicating the majority of the effect was indeed from offline validators. Additionally, the 3 distinct bands no longer exist which further supports this hypothesis.

Figure 9: Normalized attestation packing efficiencies for a small section of Mainnet.

Mainnet remains similar, just with higher efficiencies. The trends below indicate the same effect, with normalized efficiencies being higher than the efficiencies originally calculated.

Figure 10: Trends of the normalized attestation packing efficiency for each of the 4 major clients across the entire chain history.

Figure 11: Trends of the normalized attestation packing efficiency for some major staking pools across the entire chain history.


In this post, we discussed the attestation packing problem and designed a naive method for estimating the packing efficiency of any given block. We found that this method was heavily impacted by offline validators, so much so that our calculations were reporting an efficiency of ~20% for an average participation rate of ~89%.

We attempted to correct for this error by removing long-standing offline validators from the available attestation set. We found that by doing so, we largely decoupled our results from the participation rate as our calculated efficiencies jumped to ~95%.

Our data will still be affected by other types of inefficiency such as briefly offline validators and network latency, since these are not being corrected for as they are very difficult to distinguish. Because of this, it is likely the true packing efficiencies are even higher than what's being reported here.

What the data does show is that attestation packing efficiencies are in general quite high: above 90% on average, with some reaching in excess of 97%.

Attestation Packing Efficiencies per Client

As seen in the various figures presented in this post, all clients and pools maintain similar levels of efficiency, all within a few percentage points of one another. Certain clients sit consistently ahead of the pack, indicating they could be using a different algorithm or additional optimizations to maximize their attestation coverage.

Overall though, from the data, there does not appear to be a significant discrepancy in attestation packing efficiencies between different clients.

The Limitations of Participation Rate

Another detail that these results show is the limitations of using Participation Rate as a network health metric.

Participation rate, in its simplest form, is just a count of the voting ETH across an epoch and is often used as a metric for network health.

Participation rate (when calculated this way) is only interested in whether attestations were included at all in a given epoch. This means for any given participation rate, a spectrum of scenarios exists between two extremes:

  1. Attestations are included as soon as possible ($delay = 1$)
    • In this case, the efficiencies of blocks are unaffected by late attestations.
  2. Attestations are included, but at the latest possible time ($delay = 32$)
    • In this case, the efficiency of every block has been impacted by late attestations.

In both cases, the participation rate is the same, since all of the attestations are eventually included. Yet in the second case, the health of the network is suffering due to late attestations.

Certainly, participation rate is a useful and important metric, however care should be taken when using it to make inferences about the overall health of the network.

The Future of Attestation Packing

Currently, most blocks are not full of attestations. This greatly simplifies the process of packing blocks since there are fewer instances where certain aggregates need to be excluded.

It remains to be seen whether the current efficiencies are sustainable as the number of attestations that must be packed into each block increases.

Thank you!

If you made it this far, thank you for reading and I hope it was interesting.

If you have any questions, feel free to reach out on the Lighthouse Discord server.

Happy Staking!