On the evening of Valentine's Day (February 14th, Beijing time), Ethereum founder Vitalik Buterin and Ethereum Foundation (EF) researcher Dankrad Feist held an educational seminar on the scaling solution "Danksharding". If you want to understand how a blockchain can scale massively while strengthening its "decentralization" and "security" properties, this seminar is a good starting point.

Note: For the significance of "Danksharding" to Ethereum, readers are advised to first read the article "Understanding Ethereum's 'Scaling Killer' Danksharding in One Article". The following content is based on the "Dude, what's the Danksharding situation?" slides provided by Dankrad Feist. The full seminar video is available on the Ethereum Foundation's official YouTube channel.

Overview

1. What is old: (1) data sharding; (2) data availability committed with KZG; (3) a separate proposal for sharding the data.

2. What's new: (1) Proposer-Builder Separation (PBS); (2) crList; (3) the 2D scheme; (4) the proposed architecture.

3. Summary of advantages and disadvantages.

What is old

Data sharding

- Provides data availability (DA) for Rollups and other scaling solutions;
- The meaning of the data is defined by the application layer;
- Goal: provide a data availability layer of roughly 1.3 MB/s with full sharding capability (about 10 times the current maximum data capacity and 200 times the normal capacity);
- Data sharding has been a goal of Ethereum since late 2019.

Data availability sampling (DA sampling)

- Goal: know that O(n) data is available while doing only O(1) work;
- Idea: distribute the data into n chunks; each node downloads k (randomly selected) chunks;
- Erasure coding: the chunks are extended so that the full data can be reconstructed from any sufficiently large subset; an unreconstructable block is therefore missing a large fraction of its chunks, which random sampling will catch (see the sketch below).
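As a back-of-the-envelope illustration of why a small, fixed number of samples is enough, here is a minimal sketch (not protocol code) assuming a rate-1/2 erasure code, so that a block that cannot be reconstructed has less than half of its extended chunks available:

```python
# Sketch: with a rate-1/2 erasure code, an unreconstructable block has fewer
# than half of its extended chunks available, so each independent random
# sample lands on an available chunk with probability below 1/2.

def false_accept_bound(k: int, available_fraction: float = 0.5) -> float:
    """Upper bound on the chance that k random samples all succeed even
    though the block cannot be reconstructed."""
    return available_fraction ** k

for k in (10, 20, 30):
    print(f"{k:2d} samples -> false-accept probability < {false_accept_bound(k):.1e}")
```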
KZG Commitments

- A polynomial commitment scheme: C(f) is a commitment to the polynomial f;
- An evaluation proof π(f, z) shows that y = f(z);
- C(f) and π(f, z) are each a single elliptic curve element (48 bytes each); the construction is sketched below.
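For reference, the textbook KZG construction behind these objects looks as follows (this is the standard scheme, not taken from the slides). Here s is the secret from the trusted setup, [x]_1 and [x]_2 denote x·G1 and x·G2, and e is the pairing:

$$C(f) = [f(s)]_1, \qquad \pi(f, z) = \left[\frac{f(s) - y}{s - z}\right]_1 \quad \text{with } y = f(z)$$

$$e\big(\pi(f, z),\ [s - z]_2\big) = e\big(C(f) - [y]_1,\ [1]_2\big)$$

On the BLS12-381 curve used by Ethereum, both C(f) and π(f, z) are compressed G1 points, which is where the 48 bytes per element come from.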
The KZG commitment as the data availability root

- Think of a "KZG root" as something like a Merkle root;
- The difference is that a "KZG root" commits to a polynomial (all points are guaranteed to lie on the same polynomial, which a Merkle root cannot guarantee).

The previous (separate) sharding proposal

What's new

Proposer-Builder Separation (PBS)

- Invented to counter the centralization trend caused by MEV;
- MEV means that more sophisticated participants can extract more value than regular validators, which gives large pools an advantage;
- PBS "contains" this complexity/centralization within a separate role, the block builder, which only requires an honest-minority assumption;
Censorship resistance scheme: crList
- crList (the "hybrid PBS" design)

The KZG 2D scheme

- Why not encode everything in a single KZG commitment?
- Goal: encode m shard blobs into d KZG commitments;
Properties of the KZG 2D scheme

- All samples can be verified directly against the commitments (no fraud proofs!);
- A constant number of samples ensures probabilistic data availability;
- If 75% + 1 of the samples are available, the full data can be reconstructed (see the toy sketch below).
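To make the 2D extension concrete, here is a toy sketch with made-up dimensions and a tiny prime field (the real scheme works over the BLS12-381 scalar field and carries KZG commitments alongside the data): a small data matrix is extended to twice its size in both rows and columns with a Reed-Solomon-style polynomial extension, so any half of a row or column is enough to rebuild it.

```python
P = 257  # tiny prime field, for illustration only

def lagrange_eval(xs, ys, x):
    """Evaluate, at x, the unique polynomial through the points (xs[i], ys[i]) mod P."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def extend(values):
    """Rate-1/2 extension: evaluate the interpolating polynomial of k values
    at k extra points, giving 2k values (any k of them recover the rest)."""
    k = len(values)
    xs = list(range(k))
    return list(values) + [lagrange_eval(xs, values, x) for x in range(k, 2 * k)]

# A 2x2 data block, extended to 4x4: rows first, then the columns
# of the row-extended matrix.
data = [[10, 20], [30, 40]]
rows_ext = [extend(r) for r in data]                # 2 x 4
cols_ext = [extend(col) for col in zip(*rows_ext)]  # 4 extended columns
full = [list(row) for row in zip(*cols_ext)]        # final 4 x 4 matrix
print(full)

# Any 2 entries of a row determine the whole row again (degree-1 polynomial):
xs, ys = [2, 3], [full[0][2], full[0][3]]
assert [lagrange_eval(xs, ys, x) for x in range(4)] == full[0]
```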
Putting it together: Danksharding

- The execution block and the shard blocks are built together;
- ⇒ Validation can be aggregated.

Danksharding: honest-majority validation

- Each validator chooses s = 2 random rows and columns;
- A validator only attests when its assigned rows/columns are available; the assignment holds for the entire epoch;
- An unavailable block (< 75% available) cannot receive more than a 2^(-2s) = 1/16 fraction of the attestations.

Danksharding: reconstruction

- Each validator should reconstruct any incomplete rows/columns it encounters;
- In doing so, it should transfer the recovered missing samples to the orthogonal lines (columns for a reconstructed row, and vice versa);
- Each validator can transfer 4 missing samples between rows and columns (about 55,000 online validators are enough to guarantee full reconstruction).

Danksharding: DA sampling (malicious-majority safe, planned as a future upgrade)

- Each full node checks 75 random samples of the block matrix;
- This keeps the probability that an unavailable block passes below 2^(-30) (a sketch of this arithmetic follows below);
- Bandwidth: 75 × 512 B / 16 s ≈ 2.5 kB/s.
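A small arithmetic sketch of the two probability claims above, assuming samples are treated as independent and an unavailable block has at most 75% of its samples present:

```python
# Honest-majority check: a validator samples s = 2 full rows and s = 2 full
# columns, and the bound above says an unavailable block gathers at most a
# 2^(-2s) fraction of attestations.
s = 2
print("validator attestation bound:", 2 ** (-2 * s))  # 0.0625 = 1/16

# Malicious-majority-safe DA sampling: a full node takes 75 random samples;
# for an unavailable block each sample is present with probability < 3/4.
samples = 75
bound = 0.75 ** samples
print(f"full-node false-accept bound: {bound:.2e} (below 2^-30 = {2**-30:.2e})")
```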
Summary of advantages and disadvantages

Advantages

- Simple design;
- Tight coupling between the execution chain and the shards:
  - The shards do not require a separate PBS;
- Increased resistance to bribery, since the data is immediately confirmed by 1/32 of the validator set (instead of 1/2048 in the old sharding scheme) and this grows to the full validator set within one epoch;
- Thanks to the 2D scheme, a full node (without running a validator) can ensure data availability with 75 samples (≈ 2.5 kB/s) instead of 30 × 64 = 1920 samples (≈ 60 kB/s); a quick check of these figures follows below.
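A quick arithmetic check of that comparison, assuming 512-byte samples spread over a 16-second window for both schemes (the per-sample size of the old scheme is an assumption that matches the quoted 60 kB/s figure):

```python
SAMPLE_BYTES = 512   # bytes per sample (assumed for both schemes)
WINDOW_SECONDS = 16  # period over which the samples are downloaded

def bandwidth_kB_s(num_samples: int) -> float:
    return num_samples * SAMPLE_BYTES / WINDOW_SECONDS / 1000

print("old sharding:", 30 * 64, "samples ->", bandwidth_kB_s(30 * 64), "kB/s")  # ~61 kB/s
print("danksharding:", 75, "samples ->", bandwidth_kB_s(75), "kB/s")            # ~2.4 kB/s
```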
New challenges

- Higher requirements on the block builder:
  - Gives more power to builders, since they act as service providers for both the execution layer and the data layer.