Layer-2 Interoperability in Blockchain: Towards a more Secure and Performant Interoperability Verification DAO

Note: This is a blog expanding on the proposal of the paper located here

Overview

My previous article on Layer 2 protocols echoed the need for addressing block throughput and scalability issues on blockchains and discussed off-chain based technologies that solve this particular dilemma. As more DeFi protocols and dApps (decentralized apps) deploy and integrate their infrastructure to L2, a new dilemma arises: how do we transfer funds between different L2s efficiently and inexpensively?

Hence the concept of L2 interoperability was introduced, i.e. accommodating transactions between L2s while minimizing the interactions with L1, hence incurring lesser gas fees while being performant all the same. This hackernoon article is a good primer on the existing L2 interoperability solutions, do give it a read.

One perceived limitation of such implemented solutions is that they do not offer a guarantee of transaction legitimacy and non-repudiation. Suppose a client wishes to transfer funds from a dApp hosted on a Loopring zkRollup (L21) to Polygon, another Layer 2 network (L22). The transaction then interacts with their interoperability solutions, Ethport and Hermez respectively. There are two patterns commonly implemented by such solutions for effecting an L2 <-> L2 transaction:

  • Using Liquidity Providers (LPs) on both L2 protocols (or a router, as in the case of Connext), by either:

    • withdrawal and rebalancing of funds by the LPs attached to both L21 and L22, OR

    • Using a state channel to lock in the funds from L21, and initiating a transfer to the appropriate router to receive the funds from another state channel tied to L22.

  • Executing a batch of common transactions from the former L2 (L21) on L1, and updating accounts on the latter L2 (L22) based on the outcome.

In all these cases, the truthfulness of transactions is ensured only by the L2 scaling solution, typically by issuing and resolving fraud or validity proofs. However, none of these technologies verifies the transactions of the latter L2 (L22) on the former (L21), or vice-versa. This added measure would give greater confidence that cross-transfers between L2s are secure.
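To make the first pattern concrete, here is a minimal Python sketch (all names and the balance bookkeeping are hypothetical, not any protocol's actual API) of a router-style LP transfer: funds are locked in a state channel on the source L2 while the LP fronts liquidity on the destination L2, to be rebalanced later off-band:

```python
from dataclasses import dataclass, field

@dataclass
class StateChannel:
    """Toy state channel holding locked balances keyed by transfer id."""
    locked: dict = field(default_factory=dict)

    def lock(self, transfer_id: str, amount: int) -> None:
        self.locked[transfer_id] = amount

def lp_transfer(src: StateChannel, dst_lp_balance: int,
                transfer_id: str, amount: int) -> int:
    """Lock funds on the source L2 (L21), then pay out from the LP's
    liquidity on the destination L2 (L22); returns the LP's remaining
    destination-side balance."""
    if dst_lp_balance < amount:
        raise ValueError("router LP lacks destination liquidity")
    src.lock(transfer_id, amount)      # funds locked on L21
    return dst_lp_balance - amount     # LP fronts the funds on L22

src = StateChannel()
remaining = lp_transfer(src, dst_lp_balance=100, transfer_id="tx1", amount=40)
print(remaining)           # 60
print(src.locked["tx1"])   # 40
```

Note that nothing in this flow proves to L21 that the payout on L22 actually happened, which is exactly the gap the proposal below targets.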

Proposal

With a view to minimizing verification complexity across transfers through disparate L2 channels, we (I) propose a layer of intervention in the form of an EVM-compatible, broker-based DAO system that verifies such interoperable transactions to provide confidence that

  • A transfer is legitimate between the two parties, and

  • The sender at L21 is assured that the receiver at L22 is verified.

Does it have to be a DAO?

  • Decentralization: For one, we're looking at minimizing the barrier to entry for stakeholders, peers and L2 entities by supporting coordination on a peer-to-peer basis. This enables multilateral movements (e.g. bridging zkEVMs for L2-based verification) and improves decision-making relative to a hierarchical power distribution.

  • Autonomy: The system governs itself. Same goes for the entities involved. A DAO can conveniently establish such governance through a set of rules codified in smart contracts.

  • Organization: With the privilege of autonomy, comes the requirement of consensus for a variety of DAO-related activities, including but not limited to:

    • voting on contract proposals

    • resolution of challenge(s) and disputes while verifying transaction authenticity

    • incentivizing (and discrediting) contributors, verifiers and other L2 entities in recognition of their activities within the system.

One possible solution is for the DAO to expose APIs for legitimacy verification (and checks) of L22, similar to what Ethereum exposes, which L21 consumes independently (and vice-versa). This is disadvantageous because

  • there is no guarantee that L21 will execute the L22_verification API (and vice-versa)

  • there is no guarantee of non-repudiation, i.e. L21 can deny the validity of a transaction as they see fit, regardless of whether the transaction in question is valid. We hence cannot guarantee data integrity and its proof of origin altogether.

To reinforce such guarantees of legitimacy, we (I) propose re-using the JSON-RPC communication channel to send the entire transaction(s) to the DAO through a different workflow that ensures transaction validity and integrity. The section below describes a typical workflow during L2 transfers. For more details on the architecture, you can peruse the paper here.
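As a rough sketch of what the submission side of that workflow could look like, the snippet below wraps an L2 -> L2 transfer in a standard JSON-RPC 2.0 envelope addressed to the DAO. The method name `dao_verifyTransfer` and the parameter shape are hypothetical; only the envelope format follows the JSON-RPC 2.0 specification:

```python
import json

def build_verification_request(tx: dict, request_id: int = 1) -> str:
    """Wrap an L2 -> L2 transfer in a JSON-RPC 2.0 envelope for the DAO.
    `dao_verifyTransfer` is a hypothetical method name."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "dao_verifyTransfer",
        "params": [tx],
    }
    return json.dumps(payload)

req = build_verification_request(
    {"from_l2": "L21", "to_l2": "L22", "value": "0x2a"})
print(req)
```

Because the full transaction travels to the DAO rather than a yes/no API being polled at the L2's discretion, neither side can simply skip or repudiate the verification step.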

Verification Workflow

Entities

Concentrating sovereignty in the hands of a few is firmly rejected. That said, certain entities take precedence over others, depending on their level of involvement, to ensure the stability of the system. These entities take the form of:

  • Primary Stakeholders: Those with an active and rather high degree of involvement fall into this category. This includes (but is not limited to) protocol developers, project and system maintainers, core members and document/proposal champions.

  • Secondary Stakeholders: Encompasses project contributors, transaction verifiers and validators in the system.

  • Layer 2 entities: Refers to external protocols that leverage our DAO solution. These entities also receive a stake depending on their level of involvement with the system.

The entities described above are incentivized with native tokens, proportionate to their level of involvement and contribution within the system.
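One simple way to realize "proportionate to involvement" is a pro-rata split of a reward pool over recorded involvement scores. The scoring scheme and entity names below are illustrative assumptions, not part of the proposal's specification:

```python
def distribute_rewards(pool: int, involvement: dict) -> dict:
    """Split a pool of native tokens pro rata to each entity's recorded
    involvement score. Integer arithmetic is used so token amounts stay
    whole; rounding dust simply remains in the pool."""
    total = sum(involvement.values())
    return {name: pool * score // total for name, score in involvement.items()}

rewards = distribute_rewards(
    1000, {"primary": 5, "secondary": 3, "l2_entity": 2})
print(rewards)   # {'primary': 500, 'secondary': 300, 'l2_entity': 200}
```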

Governance

Entities are ultimately responsible for upholding consensus within the DAO. To achieve this, the two core mechanisms that entities must use are:

  • End-to-end verifiable voting system(s) for supporting L2 integrations, approving smart contracts and system re-design(s).

  • Dispute resolution mechanism for challenging the demerits of a given verification contract

We now discuss each of these topics in depth.

Verifiable Voting

Let's assume an L2 provider wishes to integrate their protocol with our DAO system. We first need to establish consensus on the legitimacy and integrity of the L2. One option is to maintain an automated contract that verifies its authenticity and validates the functioning of the L2. This is a cumbersome approach since

  • Each L2 operates under different primitives, which drives up the code complexity of the validation contract; hard forks/revisions need to be accounted for as well.

  • Having a small number of contracts test a wide range of L2 solutions not only introduces program complexity, but is inefficient and can be severely error-prone as well.

For the reasons stipulated above, we (I) propose that only the primary stakeholders be involved in the vetting process, through means of electoral voting:

  • A primary stakeholder can incur penalties in this process if

    • He stays neutral by voting on both sides.

    • His vote goes against the majority in an unbalanced proportion (Ex. his vote counts toward >30% of the opposition).

    • He misses the opportunity for voting in the first place.

  • A similar voting system must be leveraged to accommodate other relevant scenarios, such as

    • to approve a verification contract for a given L2.

    • handling dispute resolutions

    • electing an improvement or a fork, etc. Secondary stakeholders can also be included in voting on some of these situations, as applicable.
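The three penalty conditions above can be encoded directly. In this sketch, `vote` is the stakeholder's cast ballot (`"for"`, `"against"`, `"both"`, or `None` for a missed vote) and `opposition_share` is the fraction of the losing side's weight this stakeholder represents; both representations are assumptions for illustration:

```python
def incurs_penalty(vote, majority_side, opposition_share):
    """Return True if a primary stakeholder's ballot triggers one of
    the three proposed penalty conditions."""
    if vote is None:
        return True                 # missed the opportunity to vote
    if vote == "both":
        return True                 # stayed neutral by voting on both sides
    if vote != majority_side and opposition_share > 0.30:
        return True                 # dominated the losing side (>30%)
    return False

print(incurs_penalty("against", "for", 0.40))  # True: >30% of opposition
print(incurs_penalty("for", "for", 0.0))       # False: voted with majority
```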

Dispute Resolution

This typically comes into play when challenging:

  • The correctness of a contract in verifying transaction legitimacy.

  • The compute and bandwidth that is optimal for executing the contract, without affecting the DAO as a whole.

  • The speed of execution, and gas fees incurred while executing such a contract against L2 transaction(s).

Given that every existing contract in the DAO has already been accepted through stakeholder consensus, there is good confidence that the contract will achieve its tasks in a logical manner. Nevertheless, disputes can arise for the reasons mentioned above, which demands that dispute resolution mechanisms take center stage. Let's discuss mechanisms that can be employed in such cases:

Case I

Let's assume a certain stakeholder S1 challenges the correctness of a verification contract developed by another stakeholder S2, and the contract has N instructions of execution. In this case, it would be useful to employ a recursive bisection algorithm on the contract code, similar to what Arbitrum uses in its dispute resolution process:

  • Both S1 and S2 withdraw native tokens as a deposit (B1 and B2).

  • Once S1 lodges a challenge on the contract L2x_verification, the asserter S2 bisects the code into floor(N/2) and ceil(N/2) steps such that the two assertions must combine to the single previous code assertion.

  • After this assertion, the challenger must challenge one of the assertions before this activity undergoes recursion on the bisected code section even further.

  • Care must also be taken that the challenger must be able to select steps that occur in between assertions since the challenge could encompass instructions residing in both sections of code.

  • Once a challenge for a code snippet is finalized, stakeholders vote on their decision, i.e. to accede or deny the challenge.

  • Depending on the outcome, the challenger is either provided with incentives from the deposit or penalized in the process otherwise by losing his share of the deposit.

  • The time complexity of this activity is O(log2N), excluding the time it takes for S1 to issue a new challenge to the respective code stub.
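The core narrowing loop of this bisection game can be sketched as follows. `challenger_picks` stands in for S1's choice at each round (a hypothetical interface; in practice this would be an on-chain challenge transaction), returning True when the disputed step lies in the lower half:

```python
def bisect_dispute(n_steps, challenger_picks):
    """Narrow a disputed contract of n_steps instructions to a single
    step: the asserter halves the current span (floor/ceil split) and
    the challenger selects the half containing the step they dispute.
    Returns (disputed_step, number_of_rounds)."""
    lo, hi = 0, n_steps
    rounds = 0
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2          # floor(N/2) / ceil(N/2) split
        if challenger_picks(lo, mid, hi):  # True -> dispute lower half
            hi = mid
        else:                              # False -> dispute upper half
            lo = mid
        rounds += 1
    return lo, rounds

# Challenger believes instruction 700 of 1024 is faulty.
step, rounds = bisect_dispute(1024, lambda lo, mid, hi: 700 < mid)
print(step, rounds)   # 700 10
```

The round count is ceil(log2 N), matching the O(log2 N) complexity noted above; the remaining cost per round is the stakeholder vote on the finalized single-step challenge.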

Case II

What if the contract was challenged for being inefficient, slow, and/or draining the system compute and resources? We have two options to solve this situation:

  • We perform the bisection as discussed above on the code snippet that is inefficient, and trigger a round of voting to decide consensus.

  • We execute the contract with the default runtime. As we execute the contract, we capture the following metrics:

    • the amount of CPU used

    • the amount of memory used

    • the amount of network bandwidth consumed.

Based on the metrics obtained, the challenger makes their case to the system if one of the readings is off-limits. We can run a pre-compiled contract that analyzes the metrics and decides whether they fall within the range of a suitable working environment. If not, the challenger is incentivized and the contract is retired.
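A minimal sketch of that profiling step, using only the Python standard library: it captures wall-clock time and peak memory for a contract run, then checks the readings against an agreed envelope. Network-bandwidth capture is omitted here since it needs runtime hooks the stdlib does not provide, and the limit values are illustrative assumptions:

```python
import time
import tracemalloc

def profile_contract(run) -> dict:
    """Execute a contract callable under the default runtime and record
    elapsed wall-clock time and peak memory allocated."""
    tracemalloc.start()
    t0 = time.perf_counter()
    run()
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"seconds": elapsed, "peak_bytes": peak}

def within_limits(metrics: dict, max_seconds: float, max_bytes: int) -> bool:
    """The pre-compiled check: are the readings inside the envelope?"""
    return (metrics["seconds"] <= max_seconds
            and metrics["peak_bytes"] <= max_bytes)

metrics = profile_contract(lambda: sum(range(10_000)))
print(within_limits(metrics, max_seconds=1.0, max_bytes=10_000_000))
```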

A Primer on Verification Contracts

A core functionality of the DAO is to pre-compile the verification contract(s) for a given L2 and maintain them, due to the following benefits:

  • Separation of concerns: It provides loose coupling between the contract's logic and the data that it depends on.

  • Performance: Contracts in C++ boast a performance improvement over their Solidity equivalents. Parallelism and multi-threading support can be leveraged for improved processing efficiency.

Instead of residing at a fixed address on the Ethereum chain, we (I) propose that the object file corresponding to the contract be cached within our DAO for off-chain execution, hence maintaining a pre-compiled contract. This executes the contract using the default node runtime instead of the less-performant EVM.

In addition to this, we (I) also propose to cache the source file(s) corresponding to the pre-compiled verification contracts in such a way that changes to the contracts are recorded. This is desirable since we can simply revert to the previous version of the contract if the execution of the currently deployed version is undesirable.

Since saving file copies at different modification times incurs a lot of I/O for writing and demands more storage, we can leverage a versioning file system instead. A versioning system ensures that the history of changes recorded for a file is captured in a space-efficient manner. In our case, the versioning system proposed by Xunjie Li is optimal, since it augments file inodes using a copy-on-write (CoW) mechanism to track file changes. This approach boasts space efficiency while achieving reasonable file storage and retrieval speeds as well. We could also utilize the concepts outlined in Versionfs, since it retains Unix properties such as ownership and permissions in a way that lets DAO stakeholders enforce policies dictating access control, file recovery and version retention as well.
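The copy-on-write idea can be illustrated at the application level with a content-addressed store: each save records only a content hash, so identical revisions share a single stored blob, and reverting is just dropping the newest hash from the version chain. This is a toy in-memory sketch of the CoW principle, not the inode-level mechanism of the file systems cited above:

```python
import hashlib

class VersionedStore:
    """Toy copy-on-write store for contract sources: blobs are
    deduplicated by SHA-256 content hash, and each file keeps an
    ordered chain of hashes as its version history."""

    def __init__(self):
        self.blobs = {}      # hash -> source text (each stored once)
        self.history = {}    # filename -> list of hashes

    def save(self, name: str, source: str) -> str:
        digest = hashlib.sha256(source.encode()).hexdigest()
        self.blobs.setdefault(digest, source)       # write only if new
        self.history.setdefault(name, []).append(digest)
        return digest

    def revert(self, name: str) -> str:
        """Drop the latest version and return the previous source."""
        self.history[name].pop()
        return self.blobs[self.history[name][-1]]

store = VersionedStore()
store.save("L2x_verification.cpp", "v1 source")
store.save("L2x_verification.cpp", "v2 source")
print(store.revert("L2x_verification.cpp"))   # v1 source
```

This mirrors the rollback scenario above: if the currently deployed verification contract misbehaves, the DAO reverts to the previous version without ever having stored duplicate copies of unchanged revisions.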