[RETRACTED] On an emergent block time
0. This entry was originally posted on 22.01.2025 and retracted on 08.03.2025, though I leave it up. The issue with the first design is that hash difficulty is exponential in the number of leading zeroes, so scoring chains by the total count of leading zeroes would simply incentivize one-zero hashes. You can rescue this by scoring a chain as the sum of exp(k) over all blocks, with k being each block's count of leading zeroes and the base of the exponent matching the hash alphabet (16 for hex digits), for a hopefully time-neutral score, and that would work. But then we're simply tracking the hashing time invested in any chain, so roughly the same (without controlling for computing power) as the VDF count system, at which point the critical issue with both designs emerges: they offer no protection against deep reorganizations. Nothing would stop a miner from accumulating VDF cycles on a stale block and potentially having that chain become equal to, or even surpass, the main one in terms of total cycles, at a catastrophic loss of information. Indeed, whoever runs their machine continuously will accumulate approximately equal cycle time, hence there would be many chains of equal score at any one time. You can come up with tie-breaks or clever scoring rules, but I can't seem to find any that doesn't just incentivize minimal-length blocks all the time, or else remain subject to deep reorganizations. Now, the modified exp(k)-score PoW is still viable, as any attacker would need to compete with the string of the single luckiest miners across the entire network to force a deep reorganization (under the exp(k) rule a miner would stop computing upon hitting a particularly lucky string of zeroes in an improbably short time, so the main chain would be the concatenation of the luckiest single miners), which I think is far less likely than a 51% attack on a classic PoW, but I'm still not sure, and I like my other systems better anyway. So, the modified PoW based on the sum of exp(k) may still be viable if anyone wants to try it.
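For concreteness, a minimal sketch of the two scoring rules, under my assumption that zeroes are counted in hexadecimal digits (so k zeroes take 16^k hash attempts on average); the function names are mine, not part of any protocol:

```python
# Toy comparison of the two chain-scoring rules discussed above.

def naive_score(zero_counts):
    """Sum of leading zeroes: broken, because easy blocks buy zeroes
    far more cheaply than hard ones."""
    return sum(zero_counts)

def work_score(zero_counts):
    """Sum of 16**k per block: each block scores its expected number of
    hash attempts, so the rule is neutral to the difficulty chosen."""
    return sum(16 ** k for k in zero_counts)

# Sixteen one-zero blocks cost the same expected work as one two-zero
# block (16 * 16 = 256 attempts either way), yet the naive rule pays
# eight times more for the string of easy blocks:
print(naive_score([1] * 16), naive_score([2]))  # 16 vs 2
print(work_score([1] * 16), work_score([2]))    # 256 vs 256
```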
1.1 Let’s generalise the standard PoW mechanism thus: suppose that, instead of adopting a common difficulty setting for the entire network, you let every miner solve as complex a puzzle as they wish, as measured by number of leading zeroes in the block’s hash.
1.2 The rules are also modified so that, instead of the longest blockchain always being adopted, all nodes must now go for the blockchain with the largest count of leading zeroes summed over all blocks.
1.3 The “standard” PoW mechanism is then just a special case of this general setup, with the number of zeroes fixed for all at any one time.
1.4 In actual practice you may wish to make it a bit less cacophonic by setting the actual selection metric to chain length plus the count of zeroes over the 100 most recent blocks, as in the sketch below. Anyway.
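A toy version of that fork-choice rule might look like this; representing a chain simply as a list of per-block leading-zero counts is my own simplification:

```python
def chain_score(chain, window=100):
    """Selection metric from 1.4: chain length plus leading zeroes summed
    over the `window` most recent blocks. A chain is represented here as
    a list of per-block leading-zero counts, oldest first."""
    return len(chain) + sum(chain[-window:])

def pick_chain(candidates):
    """Every node adopts whichever candidate chain scores highest."""
    return max(candidates, key=chain_score)
```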
1.5 When would a miner decide to stop hashing? For the same compute, the longer he hashes, the higher the chances of his block being included in the winning chain, but also the higher the chances of someone else mining two blocks in the meantime (blocks are still sequential, of course).
1.6 When deciding when to stop hashing, a miner would weigh the volume of incoming transactions, the odds of their block propagating, and probably other considerations that escape me; in other words, exactly what the network itself would have to weigh when setting an otherwise fixed block time.
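To make the trade-off in 1.5–1.6 concrete, here is a deliberately crude stopping heuristic; modelling rival blocks as a Poisson process, and the risk cap itself, are my own assumptions, not part of the design:

```python
import math

def expected_seconds_for_k_zeroes(k, hashrate):
    """Expected time to find a hash with k leading hex zeroes: 16**k
    attempts on average, divided by the miner's hashes per second."""
    return 16 ** k / hashrate

def p_two_rival_blocks(t, rival_rate):
    """Chance that the rest of the network produces two or more blocks
    within t seconds, treating rival blocks as a Poisson process."""
    lam = rival_rate * t
    return 1 - math.exp(-lam) * (1 + lam)

def keep_hashing(k, hashrate, rival_rate, risk_cap=0.5):
    """Push on toward k+1 zeroes only while the risk of being leapfrogged
    by two rival blocks in that time stays below risk_cap."""
    t_next = expected_seconds_for_k_zeroes(k + 1, hashrate)
    return p_two_rival_blocks(t_next, rival_rate) < risk_cap
```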
1.7 My claim is that such a setup would lead to the emergence of a responsive block time, one that varies to meet the needs of the network at any given moment.
1.8 Further, winning the block would no longer be a function of compute alone, but also of strategy and knowledge of the market, with an added dose of random chance thrown in.
1.9 Finality would be worse per block, but block time would likely be shorter on average, so finality in terms of time may well be better.
1.10 For what it’s worth, energy use may also improve a bit, though orphan blocks would explode, no question there.
2.1 Now substitute a Verifiable Delay Function for PoW, so that instead of counting zeroes we count cycles. A VDF is far less sensitive to one's rig, and takes a stable-ish time to compute (a naive sketch follows).
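For illustration only, a sequential delay function via iterated hashing. Note that a real VDF (e.g. the Wesolowski or Pietrzak constructions) comes with a succinct proof so verification is fast, whereas this naive sketch can only verify by recomputing:

```python
import hashlib

def delay(seed: bytes, cycles: int) -> bytes:
    """Iterate SHA-256 sequentially. Each step needs the previous output,
    so the computation cannot be parallelized and wall-clock time grows
    linearly with `cycles` on any one machine."""
    h = seed
    for _ in range(cycles):
        h = hashlib.sha256(h).digest()
    return h

def verify(seed: bytes, cycles: int, claimed: bytes) -> bool:
    """Naive check by full recomputation. Real VDF constructions attach a
    succinct proof so verification is cheap; this is illustration only."""
    return delay(seed, cycles) == claimed
```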
2.2 It still makes sense to put your best machine in the game, but there's no point in linking machines in parallel, since a VDF does not lend itself to parallelization. Larger miners would likely just run many independent blocks in parallel and hope one of them wins.
2.3 I still reckon that the sensitivity of the chance of winning the next block to compute would move closer to a linear (fair) relationship. Basically, the chance of success would be proportional to the count of one's machines above a certain capability threshold (you still need the power to check the block; you can't put your phone on it).
2.4 I think that this setup would make a 51% attack more challenging, between requiring the attacker to actually spend half the network's total time to have a chance of success, and success still being far from guaranteed, given the (likely) probabilistic link between number of machines and chance of success. Even if I control 51% of the machines in the system, the chance of having my block end up as part of the final chain would still be just 51%, I think; the toy simulation below gestures at this.
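A quick Monte Carlo sanity check of that intuition, under my own toy assumption that every qualifying machine's VDF run takes the same nominal time plus small independent jitter, and the first to finish wins:

```python
import random

def attack_success_rate(attacker_machines, honest_machines, trials=20_000):
    """Fraction of blocks won by the attacker when every machine's VDF run
    takes a nominal 1.0 time units plus small random jitter and the first
    machine to finish wins. The jitter scale is an arbitrary choice."""
    wins = 0
    for _ in range(trials):
        attacker_best = min(1.0 + random.gauss(0, 0.01)
                            for _ in range(attacker_machines))
        honest_best = min(1.0 + random.gauss(0, 0.01)
                          for _ in range(honest_machines))
        if attacker_best < honest_best:
            wins += 1
    return wins / trials

# With 51 machines out of 100, the per-block success rate hovers near 0.51:
# no superlinear advantage from controlling a majority of machines.
print(attack_success_rate(51, 49))
```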
2.5 And again, energy use may fall a bit again, but meh.
2.6 Note how we would have been unable to substitute VDF cycles for hashing difficulty without going through the intermediate step of a flexible block time. Standard PoW doesn't lend itself to translation into a VDF: with a single fixed cycle count, every capable machine would finish at essentially the same moment, and there would be no lottery left to pick a winner.