Bridging Databases (Part 2)
A world of databases interconnected by bridges
Thanks for reading Cryptocurrency and Friends! Subscribe for free to receive new posts and support my work.
There has been a substantial discussion on whether a validating bridge defines the rollup, or put another way:
rollup ==? bridge.
We need to step back and look at the problem from first principles.
What is a blockchain?
How do we bridge assets from one database to another database?
Short recap: Blockchain as a Database (Part 1)
Our previous blog post explored “what is a blockchain?” and it was a wonderful opportunity to appreciate the purpose of a blockchain.
So, what is a blockchain?
A blockchain is just a data structure for dictating the total ordering of data (transactions).
It only has a single purpose:
Anyone can fetch the blockchain, parse the data (transactions) in order, and compute a copy of the same database.
Now — one of the hardest problems in a blockchain network — is how all parties can verify they have a copy of the same database and the one true blockchain.
Put simply, this entire field of blockchain engineering is focused on:
Instantiating a single database.
The only way for all parties, on a global scale, to converge on the same blockchain is to incorporate a consensus protocol. It allows all parties to collectively agree upon a single decision: the latest block of data to append.
One interesting side note is that the block of data may not necessarily be a list of transactions and in fact there are two options:
Transaction history. An explicit list of transactions to execute.
State diffs. A list of “state diffs” that can be applied to update storage slots in the database.
It is up to the participants to decide on a rule set (“state transition function”) for how to parse the data into an update that can be applied to the database. The state diff approach is a very recent addition thanks to the rise of validity proofs.
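To make the two options concrete, here is a minimal sketch of how either form of block data can be replayed into the same database. All names here (apply_transaction, apply_state_diff, the transfer format) are illustrative, not taken from any real client:

```python
# Two ways to derive the same database from ordered block data.

def apply_transaction(db: dict, tx: dict) -> None:
    """State transition function: interpret a transfer and update balances."""
    db[tx["frm"]] = db.get(tx["frm"], 0) - tx["amount"]
    db[tx["to"]] = db.get(tx["to"], 0) + tx["amount"]

def apply_state_diff(db: dict, diff: dict) -> None:
    """State diff: overwrite storage slots directly, no re-execution needed."""
    db.update(diff)

# Option 1: replay the explicit transaction history.
db_from_history = {"alice": 10, "bob": 0}
for tx in [{"frm": "alice", "to": "bob", "amount": 3}]:
    apply_transaction(db_from_history, tx)

# Option 2: apply the equivalent state diff.
db_from_diffs = {"alice": 10, "bob": 0}
apply_state_diff(db_from_diffs, {"alice": 7, "bob": 3})

# Either route converges on the same database.
assert db_from_history == db_from_diffs
```

Note the trade-off the sketch exposes: replaying history requires re-executing every transaction, while a state diff can be applied directly but only becomes trustworthy once a validity proof vouches for it.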
So, in the end, all blockchain protocols are focused on guaranteeing that the right data is ordered correctly and publicly available to all. In fact, there is a very subtle assumption:
Recent broadcast. A consensus protocol only guarantees the publication of data at a point in time.
There is no guarantee that historically agreed decisions are available at all and most nodes who provide it do so altruistically.
Finally — there is a choice a project can make when deciding how to ensure the data for the database is publicly available:
Deploy a new consensus protocol (“layer-1”)?
Re-use an existing consensus protocol (“the rollup”)?
The former is a Herculean effort. It requires bootstrapping an entire ecosystem that is willing to step up and typically requires an honest majority of participants to protect the system.
The rollup approach admits that bootstrapping such an ecosystem is hard and there is little value in attempting to repeat it. It is much easier to leverage the layer-1 blockchain as a public bulletin board and allow it to guarantee the availability (and ordering) of all relevant data.
Whether a project decides to deploy a new layer-1 or a rollup, all approaches lead to a world of databases with varying degrees of trust assumptions and security.
We could easily categorise them as ‘closed’, ‘public’, or ‘open’ databases which focuses on whether the database is publicly accessible to read and the mechanism used to enable write-access to the database.
The above sounds theoretical, but there is empirical evidence that we now live in a world of hundreds, if not thousands, of databases that want to interact with crypto assets.
This brings us to the next question:
How can a user move their assets from one database to another database?
We need to study how a user can move their assets from one database to another database. Even more generally, how can they pass messages across different systems?
Let’s find out!
Bridging Across Databases
There is a single component that enables a world of blockchains:
Bridge: Provides passage over something that is otherwise difficult or impossible to cross.
The job of a bridge is very simple. It should facilitate passing a message from one database to another database. It can be viewed as a communication protocol where an entity (bridge) sits between a sender and receiver.
All communication protocols strive to provide the following properties:
Timely delivery. A message is available, at the earliest possible moment, to be consumed by the receiver.
Integrity. A message’s content should be delivered in its entirety.
Authentication. The receiver can verify that the message was indeed initiated by the sender.
One big difference is that the bridge, as a communication protocol, does not strive to protect the confidentiality of the message. Confidentiality is optional in the realm of bridges and, to date, no widely deployed bridge protocol protects the privacy of a message.
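As a sketch of the integrity and authentication checks a receiver performs, here is a toy send/receive pair. A real bridge would use public-key signatures; an HMAC over a shared key stands in for one here only so the example stays dependency-free:

```python
import hmac, hashlib

# Shared secret standing in for the sender's signing key (illustrative only).
KEY = b"sender-secret"

def send(message: bytes) -> tuple[bytes, bytes]:
    """Sender attaches an authentication tag to the message."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message, tag

def receive(message: bytes, tag: bytes) -> bytes:
    """Receiver checks integrity + authentication before consuming."""
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed authentication")
    return message

msg, tag = send(b"mint 5 coins for alice")
assert receive(msg, tag) == b"mint 5 coins for alice"
```

Any tampering in transit changes the message but not the tag, so the receiver rejects it; this is the integrity and authentication pair, with confidentiality deliberately left out.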
Now — the main problem that a bridge needs to solve — is the ability to convince the receiver about the message’s authenticity aka the attestation process.
In nearly all bridge designs, there is a set of authorities that sit between the sender (Database A) and receiver (Database B). Of course, the sender and receiver may be smart contracts that live on the respective database.
The authorities will:
Fetch. Pick up new messages from Database A.
Sign. Digitally sign all messages that should be sent to Database B.
Attest. Send the digital signature and message to Database B.
The attestation process requires the receiving database (Database B) to verify the message was indeed digitally signed by the trusted authorities. If so, it can be confident about the message’s content and origin.
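Putting the fetch/sign/attest flow together, here is a toy model of a human-operated bridge. All names are illustrative and the signature scheme is again faked with a shared secret; the point is that Database B can only check the authority's signature, never the message's true origin:

```python
import hmac, hashlib

AUTHORITY_KEY = b"authority-secret"  # illustrative stand-in for a signing key

def authority_attest(outbox_a: list[bytes]) -> list[tuple[bytes, bytes]]:
    """Fetch + Sign + Attest: forward every new message with a signature."""
    return [(m, hmac.new(AUTHORITY_KEY, m, hashlib.sha256).digest())
            for m in outbox_a]

def database_b_verify(message: bytes, sig: bytes) -> bool:
    """Database B only checks the authority's signature -- it must blindly
    trust that the message really originated on Database A."""
    expected = hmac.new(AUTHORITY_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

attested = authority_attest([b"release 5 coins to bob"])
assert all(database_b_verify(m, s) for m, s in attested)

# The trust problem: a corrupt authority can attest to a message that never
# existed on Database A, and Database B cannot tell the difference.
forged_sig = hmac.new(AUTHORITY_KEY, b"release 500 coins to mallory",
                      hashlib.sha256).digest()
assert database_b_verify(b"release 500 coins to mallory", forged_sig)
```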
The key issue: trusting the appointed authorities.
Going further, both the sender and receiver must blindly trust the appointed authorities to deliver the message and attest to its integrity. This is because the sender and receiver cannot see the outside world without assistance from a third party.
This type of bridge design can be called a human-operated bridge.
It relies on humans (server-side applications) to facilitate communication and enforce all rules on behalf of the sender/receiver. It is the easiest type of bridge to deploy and it can be applied to a wide range of different blockchains, but it is the hardest to secure and protect at scale.
Unfortunately, the entire point of cryptocurrency and the movement started by Satoshi Nakamoto was to remove all the trust required to make a system work and to replace it with cryptographic proof.
We need to do better.
Evolving Attestation Confidence
We should send cryptographic evidence, alongside the sender’s message, that will attest to the message’s authenticity.
If successful, the evidence should convince the receiving smart contract beyond all reasonable doubt about the message’s authenticity without having to trust the human operator.
The human operator’s role can be reduced: Assist with relaying messages (“relayer”) from Database A to Database B.
Let’s explore several approaches with varying degrees to what the evidence can actually prove.
Enforce Delivery of Correct Attestation (Nomad)
We ask the following question for the first approach:
Can we reduce trust in the operator by holding them accountable to their attestation?
In this approach, an authority still needs to attest to the authenticity of a message, but the authority can be immediately penalised if they attempt to cheat. For example, by attesting to a different and potentially conflicting message.
The Nomad Protocol is an example that implements the accountable approach.
It assumes there are two authorities:
Endorsement authority. Responsible for reading Database A, signing its proposed message, and submitting the digital signature to Database A.
Veto authority. Responsible for protecting Database B by cross-checking whether the signed message originated from Database A.
The endorsement authority should lock a considerable financial stake to deter malicious behaviour and potentially anyone can perform the role. On the other hand, the veto authority, must be appointed before the protocol is instantiated and it remains an open research problem on how to make the veto authority role permissionless.
The Nomad Protocol has three stages:
Endorse message. The endorser reads the message in Database A, signs the message, and submits the signature to Database A.
Relay message. Anyone can take the signed message and forward it onto Database B.
Opportunity to veto. There is a fixed time window (e.g., 30 minutes) for the veto authority to check whether the message sent to Database B is exactly the same as the message on Database A. If it is not the same, then the authority should veto and cancel the message.
It is a lightweight protocol as the only on-chain task is to verify a digital signature and the trusted authorities only need to monitor the relevant database that receives a message.
While our example assumes a single receiving database for illustrative purposes, the Nomad Protocol can be extended to allow multiple databases to receive the same signed message.
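The three stages can be sketched as a toy receiving contract with an optimistic delay. The window length, class, and method names here are illustrative only, not Nomad's actual interfaces:

```python
VETO_WINDOW = 30 * 60  # seconds; illustrative fraud-proof window

class ReceiverContract:
    """Toy model of the receiving contract on Database B."""

    def __init__(self):
        self.pending = {}   # message -> earliest time it can be consumed
        self.vetoed = set()

    def relay(self, message: str, now: int) -> None:
        # Stage 2: anyone relays the endorsed message; it is not yet final.
        self.pending[message] = now + VETO_WINDOW

    def veto(self, message: str) -> None:
        # Stage 3: the veto authority cancels a message that does not
        # match what was endorsed on Database A.
        self.vetoed.add(message)

    def consume(self, message: str, now: int) -> bool:
        # A message is only usable after the window, and only if not vetoed.
        ready = self.pending.get(message)
        return ready is not None and now >= ready and message not in self.vetoed

rc = ReceiverContract()
rc.relay("mint 5 for alice", now=0)
assert not rc.consume("mint 5 for alice", now=10)        # window still open
assert rc.consume("mint 5 for alice", now=VETO_WINDOW)   # finalised

rc.relay("mint 999 for mallory", now=0)
rc.veto("mint 999 for mallory")
assert not rc.consume("mint 999 for mallory", now=VETO_WINDOW)
```

The sketch also makes the cost of a malicious veto visible: an honest message that is vetoed simply never becomes consumable, and nothing in the protocol pushes back.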
We need to consider the case when an authority may attempt to cheat the bridge protocol:
Endorser signs a message not endorsed by Database A.
Veto authority vetoes a message that was endorsed by Database A.
In the first case, the endorser is forced to produce a digital signature that Database A will not recognise if they endeavour to deceive the receiver. Any party can take the signature, send it to Database A, and empower Database A to slash the endorser.
In the second case, it remains a fully trusted role and there is no in-protocol method to fight back. One caveat is that the protocol transcript is publicly available for all participants and the veto authority could be removed after the fact via a voting protocol. However, by then, the damage may already be done as vetoing an honest message can have significant implications for smart contracts that rely on time-based actions (e.g., liquidations).
Check Agreement of Consensus Protocol (SPV)
We ask the following question for the second approach:
Can we keep track of the consensus protocol for Database A?
As mentioned in the first post, all blockchain networks have a consensus protocol that periodically agrees to a new block at the tip of the chain. A block can be thought of as a batch update to the database.
In this approach, the goal is to allow the receiving smart contract on Database B to confidently track agreements produced by the consensus protocol of Database A.
To make this a reality, the consensus protocol must be light client friendly by default:
Light client: Requires the minimum computation and data to be confident about the current state of the database.
Put simply, a light client allows a user to learn the value for an entry in the database and have confidence that the consensus protocol agreed the entry should be in the database.
The notion of a light client is as old as Bitcoin.
It was called Simplified Payment Verification (SPV) mode in the Bitcoin white paper. SPV allows a user to verify that they have received coins on Bitcoin without the need to validate and hold a copy of the entire Bitcoin database.
The SPV process operates in the following manner.
Cryptographic commitments. Every block header contains a cryptographic commitment (tx_root) to all transactions included in that block. Additionally, the block header contains a link to the previous block and its own proof of work.
Check longest chain. A user can fetch the list of block headers and independently verify the proof of work represents the heaviest chain seen so far.
Inclusion proof. Once the user is confident that their list of block headers represents the canonical chain, anyone can then provide the user with an inclusion proof (Merkle tree) for a specific entry in the database.
Convinced. The user can cross-check the inclusion proof with the relevant block header and then be convinced about a specific entry in the database.
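Steps 3 and 4 can be sketched as follows. Real Bitcoin uses double SHA-256 and a specific byte ordering; this simplified version keeps only the structure of the Merkle inclusion check:

```python
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    """Hash a pair of sibling nodes into their parent."""
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Build the commitment (tx_root) over a list of transactions."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, str]],
                     root: bytes) -> bool:
    """Walk the proof from leaf to root; 'side' says where the sibling sits."""
    node = hashlib.sha256(leaf).digest()
    for sibling, side in proof:
        node = h(sibling, node) if side == "left" else h(node, sibling)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)

# Proof that tx1 is committed: hash of tx0, then the hash of (tx2, tx3).
h0 = hashlib.sha256(b"tx0").digest()
h2, h3 = hashlib.sha256(b"tx2").digest(), hashlib.sha256(b"tx3").digest()
proof = [(h0, "left"), (h(h2, h3), "right")]

assert verify_inclusion(b"tx1", proof, root)
assert not verify_inclusion(b"txX", proof, root)
```

Notice the proof is logarithmic in the number of transactions, which is exactly why a light client can stay light: it stores only block headers and checks short proofs on demand.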
Bitcoin SPV mode can be used as the basis for building a light client bridge that allows a smart contract on another blockchain to learn about content in the Bitcoin database.
An old project called BTCRelay implemented a light client bridge between Bitcoin (Database A) and Ethereum (Database B). It was a smart contract on Ethereum that could parse Bitcoin block headers, inclusion proofs, and follow the heaviest chain via the proof of work. In the end, its purpose was to verify whether a UTXO was indeed stored in the Bitcoin database and then allow another smart contract to act upon it.
Unfortunately, BTCRelay was a bit ahead of its time. While it was functionally complete, it failed to garner any significant traction.
There are a few lessons that we can learn from the project:
Accumulated gas cost. Tracking block headers requires many on-chain transactions and the financial cost accumulates over time.
Only works for forks of Bitcoin. BTCRelay was only compatible with blockchains that forked the Bitcoin codebase. A blockchain with a different consensus protocol required a dedicated implementation of its respective light client.
Still trusting consensus. BTCRelay could only check that the proof of work was correct, but not whether the block header was valid. If an adversarial miner crafted a malicious block header with valid proof of work, then the bridge could be broken. It was still essential to wait ~6 confirmations and lean on Bitcoin’s honest majority assumption (>51% of miners are honest).
Research has mostly focused on reducing the accumulated gas cost by aggregating several block headers into a single update; for example, NIPoPoWs and Glimpse, which focus on PoW-related light clients.
However — there is a shimmer of success — as many bridges on other blockchain networks follow the philosophy of building a light client bridge:
IBC on Cosmos
Rainbow bridge by Near
P-Chain on Ava
Both IBC and the P-Chain have a dedicated layer for keeping track of agreements from the various consensus protocols. They mostly follow the honest majority of signatures produced by the consensus protocol, but more often than not, there is no consequence if the consensus participants cheat the bridge by signing a conflicting message.
Still Trusting Participants — Can We Do Better?
Our article has covered:
Human-operated bridges,
Accountable bridges (Nomad),
Light client bridges (SPV).
In all cases, our reliance is consistently placed on a set of intermediaries who must vouch for the authenticity of a message sent across the bridge, even if it is simply vouching that a consensus protocol concluded on a single decision.
I suspect there is a fundamental requirement that a bridge will always require a set of intermediaries to help transport a message from a sending database to a receiving database.
If we accept this premise, then we could also assume it makes sense to track decisions made by the participants in a consensus protocol, since they are the source of truth for deciding the total ordering of updates that will be applied to the database.
This brings us to the final question:
Can we build a bridge that can independently check the validity of a decision made by the consensus protocol?
It is a subtle difference.
The bridge should not only check that a consensus protocol agreed to the total ordering of updates to the database, but also that each update itself is valid relative to all other updates to the database.
If this can be achieved — then the receiving smart contract (Database B) can be convinced that an update was applied to the database and convinced that an entry in the database is actually valid.
It can then read the database, extract the relevant message, and then perform an action on it.
Can it be done? The answer is Yes.
Rise of the Validating Bridge
Anyone who has followed the posts on this substack is no stranger to the concept of a validating bridge.
At its heart, it is about enabling a smart contract to go beyond checking whether a decision was made and to independently verify whether the content of the decision is correct.
The general idea is to go a few extra steps than a light client bridge:
Fraud vs Validity proof. Extending the capability of light clients to succinctly verify that a large computation is correct,
Data availability layer. Removal of a consensus protocol from Database B by re-using Database A for ordering data blobs,
Open membership. Allow parties to self-appoint and contribute towards the process of enforcing the validity of all database updates.
We have covered the topic in-depth several times with Deconstructing rollups, Where Is the One Honest Party for a Rollup? and A Better Mental Model for Rollups, Plasma, and Validating Bridges.
Now — to avoid rehashing a breakdown on validating bridges — there is one final insight to discuss.
This entire article has only focused on:
Ability to pass a message back and forth between the databases.
Enabling the receiver to have confidence about the authenticity of a message.
We have not discussed how a bridge can hold or transfer assets across the databases.
This is because, fundamentally, all bridges are responsible for passing messages in a timely manner and providing confidence about a message’s authenticity to the receiver.
Assets and Liabilities
We can now deploy smart contracts on each database and the smart contracts can interact with each other by sending messages across the bridge.
The most popular application is to enable moving assets from Database A to Database B (and vice versa). The two smart contracts include:
Vault. A smart contract on the sending database that holds assets in custody.
Issuance. A smart contract on the receiving database that has the right to issue an IOU for the assets held in the vault.
At a high level, the two smart contracts can enable a deposit/withdrawal flow that is very familiar for most users who want to move assets to another system:
Deposit process. A user can deposit coins into the vault on Database A and the vault can send a message via the bridge for the same quantity of assets to be issued at the user’s address on Database B.
Withdrawal process. A user can withdraw the coins by requesting the issuer to burn the coins on Database B and to send a message via the bridge for the same quantity of coins to be released by the vault to the user’s address.
One of the benefits of this design is that anyone with read access to the vault on Database A and read access to the issuer on Database B can verify that the assets cover the liabilities. Put another way, anyone can verify that the issued coins are backed up by a fully collateralised vault.
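A toy model of the vault/issuance pair makes the solvency check explicit. All class and method names, and the string message format, are illustrative only:

```python
class Vault:
    """On Database A: holds deposited coins in custody."""

    def __init__(self):
        self.locked = 0

    def deposit(self, amount: int) -> str:
        self.locked += amount
        return f"mint {amount}"        # message sent over the bridge

    def release(self, amount: int) -> None:
        assert amount <= self.locked   # vault never releases more than it holds
        self.locked -= amount

class Issuance:
    """On Database B: issues IOUs for coins locked in the vault."""

    def __init__(self):
        self.supply = 0

    def mint(self, bridge_message: str) -> None:
        self.supply += int(bridge_message.split()[1])

    def burn(self, amount: int) -> str:
        assert amount <= self.supply
        self.supply -= amount
        return f"release {amount}"     # message sent back over the bridge

vault, issuance = Vault(), Issuance()
issuance.mint(vault.deposit(10))                  # deposit flow
vault.release(int(issuance.burn(4).split()[1]))   # withdrawal flow

# Solvency check: anyone who can read both databases can verify that
# the issued coins never exceed the collateral locked in the vault.
assert vault.locked >= issuance.supply
```

The check is only as good as the bridge carrying the messages: if a forged "mint" message ever reaches the issuer, the IOUs stop being fully backed.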
Of course, if we want to enable moving assets across databases, and for this to work at scale, it all comes down to the security of the bridge and how much we can trust the integrity of the messages sent across.
We have historically failed to build secure bridges as they all relied on humans to protect the messages sent across. Most of the large exchanges including BitFinex, Binance, Bitstamp, and worst of all MtGox, have lost user funds due to an incident with their bridges. In addition to bridges that move funds across blockchains like PolyNetwork, Ronin Bridge, and Harmony.
As a community, we have experimented with various bridge designs over the years, some have simply relied on trusted intermediaries while others have tried to move trust from intermediaries to the consensus protocol of the respective blockchains.
Validating bridges are simply one step further — to not only trust the decision made by the consensus protocol, but to check that it is actually a valid decision.
Now — what about the question:
rollup ==? bridge
I hope the answer is obvious after reading both articles. If not, then I may follow up with a “Battle of the Bridges” as part 3 of the series.
Regardless, in the end, all we are doing is passing messages, and bits, into the ether :)