Revealing the Method in the Madness

The big advantages that Web3 and rollups gain from Syscoin

Demo

The method emerges from the madness as the solution comes to life!

https://www.youtube.com/watch?v=Mjfei_l9mpI

Bedrock optimistic rollup + PoDA + Pali Wallet 2.0 + Bitcoin merge-mined security + Finality

Optimistic Rollup with PoDA (Proof of Data Availability)

This represents a general design for a layer 2 using PoDA on Syscoin layer 1. The process can be applied to ZK-based rollups (we have prototyped this with Hermez zkEVM) just as we have applied it to Optimism Bedrock. Since optimistic rollups currently hold some advantages over zkEVMs due to the overhead of ZK proving, we chose to integrate with Bedrock first, and will likely introduce a hybrid solution along the lines of Bedrock/Hermez/zkSync once hardware-efficient ZK proving solutions emerge. We feel the Bedrock design is currently the cleanest, most secure, and most efficient of the many rollup designs available today.

  1. The sequencer (responsible for preserving the order of the unsafe blocks on the rollup, and enabling data availability) sends raw transactions to PoDA (on our UTXO chain), which confirms the blob via its Keccak hash. Data lookup can then be performed in the NEVM via a precompile, keyed by that Keccak hash. The sequencer currently creates just one blob per L2 block, but theoretically it can create multiple blobs at once.
  2. An ancillary service fetches the blob data from the UTXO chain and stores it in an indexer. Anyone can do this, and only one honest party needs to do so for the design to remain censorship resistant. We use Cloudflare R2, a distributed object store, for our initial Bedrock release.
  3. Upon confirmation of the blob, the batcher calls a smart contract for data lookup to ensure the blob is available on the network. In Bedrock’s case, we called this BatchInbox.sol. Simply put, it accepts an array of Keccak hashes passed in via standard calldata and loops over them, checking via a precompile that each exists on the network. Data availability is now preserved, and the hash of the data is stored in the calldata of the call to the smart contract that verified the data exists. Since the contract processes an array of hashes, multiple blobs can be handled here. Note that the UTXO chain can process up to 32 blobs of 2 MB each; to process more than that, you simply wait for all of the blobs to confirm before checking for confirmation of the data via BatchInbox.sol.
  4. The L2 node derives the state deterministically from the underlying chain by reading the BatchInbox call (filtered by the address of the batcher to ensure it is not responding to calls from other addresses). We fetch the Keccak hash of the blob and check that the contract executed successfully, which means the data existed and was sent by the sequencer at the time the L2 blocks were created.
  5. We then fetch the data from any indexer that has archived it. By default we run one that stores the data in the cloud via Cloudflare R2, but the data can be preserved in any way, including on Filecoin or other decentralized stores. Anyone can store this data and provide it as a service.
  6. Upon receiving the data from an indexer, we rehash it to check that it is consistent with what was given to BatchInbox. We then process it as normal by deserializing the rollup transactions and putting them into rollup blocks. (A sketch of steps 3 and 6 follows this list.)
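
To make steps 3 and 6 concrete, here is a minimal sketch assuming ethers v6 and Node 18+. The BatchInbox method name, contract address, and indexer endpoint are hypothetical stand-ins for illustration, not the deployed interface:

```typescript
import { Contract, JsonRpcProvider, Wallet, keccak256 } from "ethers";

// Hypothetical ABI: the deployed BatchInbox.sol interface may differ.
const BATCH_INBOX_ABI = ["function appendBatches(bytes32[] hashes)"];
const BATCH_INBOX_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

async function fetchFromIndexer(hash: string): Promise<Uint8Array> {
  // Hypothetical indexer endpoint; any archive that serves blobs by hash works.
  const res = await fetch(`https://indexer.example.com/blob/${hash}`);
  return new Uint8Array(await res.arrayBuffer());
}

async function submitAndVerifyBlob(blob: Uint8Array): Promise<Uint8Array> {
  const provider = new JsonRpcProvider("https://rpc.syscoin.org"); // NEVM endpoint
  const batcher = new Wallet(process.env.BATCHER_KEY!, provider);
  const inbox = new Contract(BATCH_INBOX_ADDRESS, BATCH_INBOX_ABI, batcher);

  // Step 3: pass the blob's Keccak hash in calldata; the contract loops over
  // the array, checking via the data-availability precompile that each exists.
  const blobHash = keccak256(blob);
  const tx = await inbox.appendBatches([blobHash]);
  await tx.wait(); // success implies the data was available on the network

  // Step 6: anyone deriving L2 state later fetches the raw blob from an
  // indexer and rehashes it to confirm it matches what BatchInbox verified.
  const fetched = await fetchFromIndexer(blobHash);
  if (keccak256(fetched) !== blobHash) {
    throw new Error("indexer data inconsistent with the committed hash");
  }
  return fetched; // safe to deserialize into rollup blocks
}
```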

Pali Wallet 2.0

The current Pali Wallet, version one, already has more than 30,000 monthly active users. The next major release of Pali Wallet will enable users to securely manage their cryptocurrencies across both EVM and Bitcoin Core-based blockchains, including Syscoin (SYS) and Syscoin tokenized assets, Bitcoin (BTC), Ethereum (ETH), and ERC-20, ERC-721, and ERC-1155 tokens, and will let users interact with Web3 decentralized applications (dApps). The design of Pali Wallet 2.0 focuses on simple UX, efficiency and speed compatible with one-second blocks, and privacy enforcement without data collection. Given that some crypto extensions and stores now track users, we will offer a very easy way to install the extension locally without relying on application stores and their discretionary censorship.

Overall System Design

[Diagram: the architecture of Syscoin]

Finality — Multi-Quorum Chainlocks

Recall that in an earlier post, Jagdeep Sidhu explained that Syscoin’s masternodes are validators that run as full nodes. This increases the Nakamoto Coefficient (a measurement of the degree of decentralization of disparate services/nodes). These full nodes do not trust others through consensus; rather, each reproduces its own local state in order to verify the chain. This aligns with our ideals, whereas approaches like sharding or relegating consensus to PoS do not. We are PoW at our foundation and know that in order to preserve long-term decentralized contracts with the world, we must tie the computational integrity of the ledger to the tangibility of a race for physical energy extraction. With that understood, the right questions are: how does Syscoin’s finality work in a decentralized way, and how does it avoid relegating consensus to a system that is effectively PoS?

[Diagram: multi-quorum chainlocks relative to blocks]

Proof of Data Availability (PoDA)

[Diagram: architecture of PoDA]
  • No trusted setup
  • Quantum safe
  • Very performant
  • Trust yourself only
  • Fewer attack vectors
  • Data is simple to reproduce and check

Hash-based Blobs

On point one, we promote the use of hash-based blobs. But importantly, why does Ethereum utilize a KZG polynomial commitment scheme in the first place instead of the more efficient approach using Keccak?
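
As a minimal illustration of what “more efficient” means here (a sketch assuming ethers v6; the precompile interface is not shown), a hash-based blob commitment is just a Keccak-256 digest, and verifying it requires nothing beyond recomputing that digest:

```typescript
import { keccak256 } from "ethers";

// The commitment to a blob is simply its Keccak-256 digest.
const blob = new Uint8Array(2 * 1024 * 1024).fill(0xab); // a 2 MB blob (contents arbitrary)
const commitment = keccak256(blob);

// Verification: no trusted setup, no pairing math. Any node holding the raw
// blob recomputes the digest and compares. Trust yourself only.
console.log(keccak256(blob) === commitment); // true
```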

Pruning in PoDA

The final piece to understanding PoDA is how the protocol removes raw data after network participants have had the opportunity to archive it. Upon receipt of a new chainlock, the protocol prunes data that is six hours of age or older as of the time of the previous chainlock. Since the previous chainlock is effectively the guaranteed finality of the chain, we depend upon it for data removal. The age of the data is the time difference between the last guaranteed finality and the median timestamp of the block that included the data blob.
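
A sketch of that pruning rule in TypeScript; the record shape and function names are illustrative, not taken from the node implementation:

```typescript
const PRUNE_WINDOW = 6 * 60 * 60; // six hours, in seconds

interface BlobRecord {
  keccakHash: string;      // commitment that remains on-chain
  medianBlockTime: number; // median timestamp of the block that included the blob
}

// Invoked upon receipt of a new chainlock. Age is measured against the
// previous chainlock, the chain's last guaranteed finality point.
function pruneOnChainlock(prevChainlockTime: number, stored: BlobRecord[]): BlobRecord[] {
  // Keep only blobs younger than six hours at the previous finality point;
  // raw data aged six hours or more is pruned, leaving just the Keccak hash.
  return stored.filter((b) => prevChainlockTime - b.medianBlockTime < PRUNE_WINDOW);
}
```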

Another World’s First — Inherited L1 Security with L2 Variable Gas Rate Dominance

Rollups today inherit the security of the base layer (Ethereum) by paying a variable fee based on the size of the batch of transactions settling from layer 2. With PoDA integrated into Bedrock, this fee drops drastically to just 1400 gas per batch (a batch itself being 2 MB of transactions). Since the BatchInbox smart contract can take a variable number of batches (called blobs) and verify them via the data precompile, this cost becomes negligible in the overall cost of a layer 2 transaction. The rollup sequencer may pay 2000 gas (including overhead for the contract) for every 2 MB of transactions (roughly 30,000 transactions at 70 bytes per compressed L2 transaction), or only about 1/15 of a gas per transaction.
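
The arithmetic behind those figures, as a quick sketch (actual gas usage depends on the deployed contract):

```typescript
// Figures quoted above; actual costs depend on the deployed contract.
const gasPerBatch = 2_000;          // BatchInbox call, including contract overhead
const batchBytes = 2 * 1024 * 1024; // one 2 MB blob
const bytesPerTx = 70;              // average compressed L2 transaction

const txPerBatch = Math.floor(batchBytes / bytesPerTx); // 29,959 ≈ 30,000
const gasPerTx = gasPerBatch / txPerBatch;              // ≈ 0.067 gas ≈ 1/15 gas

console.log({ txPerBatch, gasPerTx });
```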

Returning to the earlier question of why Ethereum uses KZG commitments: compared with hash-based blobs, that approach carries several drawbacks.

  1. A trusted setup is required, so a backdoor is technically possible that could completely censor the entire data availability protocol on the settlement layer. Not likely, but possible, and there is no way to prove that the trusted setup was not compromised or was performed correctly.
  2. It is not quantum resistant, due to the need for a polynomial commitment scheme with a pairing assumption.
  3. It requires new mechanics for working with data and validating it externally from the clients, instead of the simple tooling around Keccak hashing we use in PoDA.
  4. Sharding can theoretically improve throughput, but it requires a longer liveness period prior to pruning because of collusion possibilities, and it generally presents more edge cases for attack. It also introduces complexity risk and goes against our philosophy that you need only trust yourself and can validate the state of your chain without trusting anyone else (sharding assumes the data is generally available across half of the nodes, with each node validating only that its own shard is committed to the entire blob). In our case we prune only six hours after the previous chainlock (finality event), and hash-based blobs create no new edge cases, since the blockchain itself already assumes the security of the hash construction, with a great degree of Lindy effects.
  5. Proof time for KZG and verification of the batch are more expensive. When a block is formed the entire batch needs to be validated, and a KZG batch validation would check the entire blob (all shards included). In our tests we found Keccak hash-based blobs 2000x faster to verify, and immeasurably faster when creating the hash versus the KZG commitment of a blob (see the timing sketch below). This means less overhead in block propagation and validation, and generally better decentralization properties for the existing blockchain design inherited from Bitcoin.
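
For point five, the hash side of that comparison is easy to measure yourself (a sketch assuming ethers v6 and Node 18+; the KZG side would need a pairing library and is omitted, and timings vary by machine):

```typescript
import { keccak256 } from "ethers";

// Time Keccak-256 over a full 2 MB blob: in PoDA this is the entire per-blob
// verification cost, with no commitment opening or pairing check on top.
const blob = new Uint8Array(2 * 1024 * 1024).fill(0xab);
const t0 = performance.now();
const digest = keccak256(blob);
const t1 = performance.now();
console.log(`keccak256 over 2 MB: ${(t1 - t0).toFixed(2)} ms -> ${digest.slice(0, 10)}…`);
```
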
Dec 23, 2022 by Syscoin Foundation and Jagdeep Sidhu
