Contributors/Reviewers | Contact |
---|---|
Hart Montgomery | [email protected] |
Hugo Borne-Pons | [email protected] |
Jonathan Hamilton | [email protected] |
Mic Bowman | [email protected] |
Peter Somogyvari | [email protected] |
Shingo Fujimoto | [email protected] |
Takuma Takeuchi | [email protected] |
Tracy Kuhrt | [email protected] |
Rafael Belchior | [email protected] |
Date of Revision | Description of Changes Made |
---|---|
February 2020 | Initial draft |
- 1. Abstract
- 2. Introduction to Blockchain Interoperability
- 3. Example Use Cases
- 3.1 Ethereum to Quorum Asset Transfer
- 3.2 Escrowed Sale of Data for Coins
- 3.3 Money Exchanges
- 3.4 Stable Coin Pegged to Other Currency
- 3.5 Healthcare Data Sharing with Access Control Lists
- 3.6 Integrate Existing Food Traceability Solutions
- 3.7 End User Wallet Authentication/Authorization
- 3.8 Blockchain Migration
- 4. Software Design
- 4.1. Principles
- 4.1.1. Wide support
- 4.1.2. Plugin Architecture from all possible aspects
- 4.1.3. Prevent Double spending Where Possible
- 4.1.4 DLT Feature Inclusivity
- 4.1.5 Low impact
- 4.1.6 Transparency
- 4.1.7 Automated workflows
- 4.1.8 Default to Highest Security
- 4.1.9 Transaction Protocol Negotiation
- 4.1.10 Avoid modifying the total amount of digital assets on any blockchain whenever possible
- 4.1.11 Provide abstraction for common operations
- 4.1.12 Integration with Identity Frameworks (Moonshot)
- 4.2 Feature Requirements
- 4.3 Working Policies
- 5. Architecture
- 5.1 Integration patterns
- 5.2 System architecture and basic flow
- 5.3 APIs and communication protocols between Cactus components
- 5.4 Technical Architecture
- 5.4.1 Monorepo Packages
- 5.4.2 Deployment Diagram
- 5.4.3 Component Diagram
- 5.4.4 Class Diagram
- 5.4.5 Sequence Diagram - Transactions
- 5.5 Transaction Protocol Specification
- 5.6 Plugin Architecture
- 6. Identities, Authentication, Authorization
- 7. Terminology
- 8. References
Blockchain technologies are growing in usage, but fragmentation is a big problem that may hinder reaching critical levels of adoption in the future.
We propose a protocol and its implementation to connect as many of them as possible in an attempt to solve the fragmentation problem by creating a heterogeneous system architecture [1].
There are two inherent problems<sup>a</sup> that have to be solved when connecting different blockchains:
- How to provide a proof of the networkwide<sup>b</sup> ledger state of a connected blockchain to the outside?
- How can other entities verify a given proof of the state of a connected blockchain from the outside?
The Cactus consortium operates a group of validator nodes for each connected blockchain, which as a group provides the proofs of the state of the connected ledger. The group of validator nodes runs a consensus algorithm to agree on the state of the underlying blockchain. Since a proof of the state of the blockchain is produced and signed by several validator nodes<sup>c</sup> with respect to the rules of the consensus algorithm, the state of the underlying blockchain is evaluated networkwide.
The validator nodes are ledger-specific plug-ins, hence a smart contract on the connected blockchain can enable the ledger-specific functionalities necessary for a validator node to observe the ledger state and finalize a proof. The validator nodes are more easily discovered from the outside than the blockchain nodes. Hence, the benefit of operating the Cactus network to enable blockchain interoperability relies on the fact that the same type of validator node signature can be used for any cross-blockchain interaction. That means the cross-blockchain interaction can be done canonically with the validator node signatures in Cactus, rather than having to deal with many different ledger-specific types of blockchain node signatures.
Outside entities (verifier nodes) can request and register the public keys of the validator nodes of a blockchain network that they want to connect to. They can therefore verify the signed proofs of the state of that blockchain, since they have the public keys of the validator nodes. This implies that the verifier nodes trust the validator nodes and, as such, trust the Cactus consortium operating the validator nodes.
Figure description: V1, V2, and V3 are validator nodes which provide proofs of the underlying blockchain network through ledger-specific plug-ins. V1, V2, and V3 run a consensus algorithm which is independent of the consensus algorithm run by the blockchain network nodes N1, N2, N3, N4, and N5.
This section acts as a building block to describe the different flavors of blockchain interoperability. Business use cases will be built on these simple foundation blocks leveraging a mix of them simultaneously and even expanding to several blockchains interacting concurrently.
To describe typical interoperability patterns between different blockchains, the types of objects stored on a ledger have to be distinguished. The following three types of objects stored on a ledger are differentiated as follows:
- FA: Fungible asset (value token/coin) – cannot be duplicated on different ledgers
- NFA: Non-fungible asset – cannot be duplicated on different ledgers<sup>d</sup>
- D: Data – can be duplicated on different ledgers
Difference between a fungible (FA) and a non-fungible asset (NFA)
A fungible asset is an asset that can be used interchangeably with another asset of the same type, like a currency. For example, a 1 USD bill can be swapped for any other 1 USD bill. Cryptocurrencies, such as ETH (Ether) and BTC (Bitcoin), are FAs. A non-fungible asset is an asset that cannot be swapped as it is unique and has specific properties. For example, a car is a non-fungible asset as it has unique properties, such as color and price. CryptoKitties are NFAs as well. There are two standards for fungible and non-fungible assets on the Ethereum network (ERC-20 Fungible Token Standard and ERC-721 Non-Fungible Token Standard).
Difference between an asset (FA or NFA) and data (D)
Unicity applies to FAs and NFAs, meaning it guarantees that only one valid representation of a given asset exists in the system; it prevents double-spending of the same token/coin on different blockchains. The same data package can have several representations on different ledgers, while an asset (FA or NFA) can have only one active representation at any time, i.e., an asset exists on only one blockchain while it is locked/burned on all other blockchains. If fundamental disagreement persists in the community about the purpose or operational upgrades of a blockchain, a hard fork can split the blockchain, causing two representations of the same asset to coexist; for example, Bitcoin split into Bitcoin and Bitcoin Cash in 2017. Since forks do not address blockchain interoperability, the definition of unicity still applies in a blockchain interoperability context. A data package that was once created as a copy of another data package might diverge from the original over time, because different blockchains might execute different state changes on their respective data packages.
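To make the unicity invariant concrete, here is a minimal TypeScript sketch (illustrative names only, not part of Cactus) of an asset whose representation may exist on several ledgers but is active on at most one of them at any time:

```typescript
// Illustration of the unicity invariant: an asset (FA/NFA) has exactly one
// ACTIVE representation system-wide; all others are LOCKED or BURNED.
type RepresentationState = "ACTIVE" | "LOCKED" | "BURNED";

interface AssetRepresentation {
  ledgerId: string;
  state: RepresentationState;
}

function satisfiesUnicity(representations: AssetRepresentation[]): boolean {
  const active = representations.filter((r) => r.state === "ACTIVE");
  return active.length === 1; // exactly one live representation at any time
}

// Example: after a ledger transfer from Ledger A to Ledger B
const reps: AssetRepresentation[] = [
  { ledgerId: "ledger-A", state: "LOCKED" }, // locked on the source ledger
  { ledgerId: "ledger-B", state: "ACTIVE" }, // released on the destination
];
console.log(satisfiesUnicity(reps)); // true
```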
Blockchain interoperability implementations can be classified into the following types:
- Ledger transfer:
An asset gets locked/burned on one blockchain and then a representation of the same asset gets released on the other blockchain<sup>e</sup>. There are never two representations of the same asset alive at the same time. Data is an exception, since the same data can be transferred to several blockchains. Ledger transfers are one-way or two-way depending on whether assets can be transferred in only one direction, from a source blockchain to a destination blockchain, or in and out of both blockchains with no designated source and destination blockchain.
- Atomic swap<sup>f</sup>:
A write transaction is performed on Blockchain A concurrently with another write transaction on Blockchain B. No asset/data/coin leaves either blockchain environment. The two blockchain environments are isolated but, due to the blockchain interoperability implementation, both transactions are committed atomically: either both transactions are committed successfully, or neither is.
- Ledger interaction<sup>f</sup>:
An action<sup>g</sup> happening on Blockchain A causes an action on Blockchain B, i.e., the state of Blockchain A causes state changes on Blockchain B. Ledger interactions are one-way or two-way depending on whether only the state of one blockchain can affect the state of the other, or the states of both blockchains can affect state changes on the corresponding other blockchain.
- Ledger entry point coordination:
This blockchain interoperability type concerns end-user wallet authentication/authorization, enabling read and write operations on independent ledgers from one single entry point. Any read or write transaction submitted by the client is forwarded to the corresponding blockchain and then committed/executed as if the blockchain were operating on its own.
Ledger transfer entails a high degree of interference between the blockchains, since the liveness of a blockchain can be reduced if too many assets are locked/burned on a connected blockchain. Ledger interaction likewise entails a high degree of interference, since the state of one blockchain can affect the state of another. Atomic swaps entail a lesser degree of interference, because all assets/data stay in their respective blockchain environments. Ledger entry point coordination entails no interference between the blockchains, since all transactions are forwarded to and executed on the corresponding blockchain as if the blockchains were operated in isolation.
Figure description: One-way ledger transfer
Figure description: Two-way ledger transfer
Figure description: Atomic swap
Figure description: Ledger interaction
Figure description: Ledger entry point coordination
Legend:
To guarantee unicity, an asset (NFA or FA) has to be burned or locked before being transferred to another blockchain. Locked assets can be unlocked when the asset is transferred back to its original blockchain, whereas burning assets is an irreversible process. Note that locking/burning of assets happens during a ledger transfer but can be avoided in use cases where both parties have wallets/accounts on both ledgers, by using atomic swaps instead. Hence, most cryptocurrency exchange platforms rely on atomic swaps and do not burn FAs. For example, ordinary coins, such as Bitcoin or Ether, can only be generated by mining a block; therefore, Bitcoin or Ethereum exchanges have to rely on atomic swaps rather than two-way ledger transfers, because it is not possible to create BTC or ETH on the fly. In contrast, if the minting process of an FA token can be leveraged during a ledger transfer, burning/locking an asset becomes a possible implementation option, as in the ETH token ledger transfer from the old PoW chain (Ethereum 1.0) to the PoS chain (aka Beacon Chain in Ethereum 2.0). Burning of assets usually applies more to tokens/coins (FAs) and can be seen as a donation to the community, since the overall value of the cryptocurrency increases.
Burning of assets can be implemented as follows:

- Assets are sent to the address of the coinbase/generation transaction<sup>h</sup> in the genesis block. A coinbase/generation transaction is present in every block of blockchains that rely on mining; it is the address to which the reward for mining the block is sent. Hence, this burns the tokens/coins in the address of the miner that mined the genesis block. In many blockchain platforms, it is proven that nobody has the private key to this special address.

- Tokens/Coins are subtracted from the user account and, optionally, from the total token/coin supply value.
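The following TypeScript sketch illustrates the two burning options above for a simple account-model token ledger; the class and the unspendable address are hypothetical, not a Cactus API:

```typescript
// Assumed to be provably unspendable (e.g. the genesis coinbase address).
const UNSPENDABLE_ADDRESS = "0x0000000000000000000000000000000000000000";

class TokenLedger {
  totalSupply = 0;
  balances = new Map<string, number>();

  mint(account: string, amount: number): void {
    this.balances.set(account, (this.balances.get(account) ?? 0) + amount);
    this.totalSupply += amount;
  }

  // Option 1: send to a provably unspendable address; total supply is unchanged.
  burnToUnspendableAddress(account: string, amount: number): void {
    this.transfer(account, UNSPENDABLE_ADDRESS, amount);
  }

  // Option 2: subtract from the user account and (optionally) from total supply.
  burnBySupplyReduction(account: string, amount: number): void {
    const balance = this.balances.get(account) ?? 0;
    if (balance < amount) throw new Error("insufficient balance");
    this.balances.set(account, balance - amount);
    this.totalSupply -= amount;
  }

  private transfer(from: string, to: string, amount: number): void {
    const fromBalance = this.balances.get(from) ?? 0;
    if (fromBalance < amount) throw new Error("insufficient balance");
    this.balances.set(from, fromBalance - amount);
    this.balances.set(to, (this.balances.get(to) ?? 0) + amount);
  }
}
```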
a: There is an alternative approach for an outside entity A to verify the state of a connected blockchain if that blockchain uses Merkle Trees to store its state. Outside entity A can locally store the Merkle Tree roots from the headers of committed blocks of the connected blockchain in order to verify any state claims about it. Any untrusted entity can then provide a state of the connected blockchain, such as a specific account balance, because outside entity A can act as a lightweight client and use concepts like simple payment verification (SPV) to verify that the state claim provided by the untrusted entity is valid. SPV can be done without checking the entire blockchain history. Polkadot uses this approach in its Relay Chain, and the BTCRelay on the Ethereum blockchain uses this approach as well. Private blockchains do not always keep track of their state through Merkle trees, and signatures produced by nodes participating in such private blockchains are rarely understood by outside parties not participating in the network. For that reason, the design principle of Cactus is to rely on the canonical validator node signatures for verifying proofs of blockchain states. Since Cactus should be able to incorporate any type of blockchain in the future, Cactus cannot use the approach based on Merkle Trees.
b: A networkwide ledger view means that all network nodes have to be considered to derive the state of the blockchain, i.e., it is not the state of just one single blockchain node.
c: The validator nodes in Cactus have similarities with trusted third-party intermediaries. The terms trusted third-party intermediaries, federation schemes, and notary schemes are used when a blockchain can retrieve the state of another blockchain through such intermediaries. In contrast, the term relay is used when a chain can retrieve the state of another blockchain directly through reading, writing, or event-listening operations rather than relying on intermediaries. This terminology is used for the central Relay Chain in Polkadot and the BTCRelay on the Ethereum network.
d: There might be use cases where it is desired to duplicate an NFA on different ledgers. Nonetheless, we stick to the terminology that an NFA cannot be duplicated on a different ledger because, in such cases, the NFA can be represented as data packages on different ledgers. Data is a superset of NFAs.
e: In the case of data, the data can be copied from Blockchain A to Blockchain B. It is optional to lock/burn/delete the data object on Blockchain A after copying.
f: The process in Blockchain A and the process in Blockchain B happen concurrently in atomic swaps and consecutively in ledger interactions.
g: An action can be either a read transaction or a write transaction performed on Blockchain A or an event that is emitted by Blockchain A. Some examples of that type of ledger interoperability are as follows:
- Cross-chain oracles which are smart contracts that read the state of another blockchain before acting on it.
- Smart contracts that wait until an event happens on another blockchain before acting on it.
- Asset encumbrance smart contracts which are smart contracts that lock up assets on Blockchain A with unlocking conditions depending on actions happening in Blockchain B.
h: Alternatively, any address from which assets cannot be recovered can be used. A verifiable proof of the irreversibility property of that address should be given.
This section describes specific use cases that we intend to support. The core idea is to support as many use cases as possible by enabling interoperability between a large variety of ledgers specific to certain mainstream or exotic use cases.
The following table summarizes the use cases that will be explained in more detail in the following sections. FA, NFA, and D denote a fungible asset, a non-fungible asset, and data, respectively.
Object type of Blockchain A | Object type of Blockchain B | Ledger transfer (one-way) | Ledger transfer (two-way) | Atomic swap | Ledger interaction (one-way) | Ledger interaction (two-way) | Ledger entry point coordination |
---|---|---|---|---|---|---|---|
D | D | 3.5, 3.8 | 3.5, 3.6 | - | - | - | 3.7 |
FA | FA | 3.1 | 3.4 | 3.3 | 3.3 | 3.3 | 3.7 |
NFA | NFA | - | - | - | - | - | 3.7 |
FA | D | - | - | 3.2 | - | - | 3.7 |
D | FA | - | - | 3.2 | - | - | 3.7 |
NFA | D | - | - | - | - | - | 3.7 |
D | NFA | - | - | - | - | - | 3.7 |
FA | NFA | - | - | - | - | - | 3.7 |
NFA | FA | - | - | - | - | - | 3.7 |
Use Case Attribute Name | Use Case Attribute Value |
---|---|
Use Case Title | Ethereum to Quorum Escrowed Asset Transfer |
Use Case | 1. User A owns some assets on an Ethereum ledger. 2. User A asks Exchanger to exchange a specified amount of assets on the Ethereum ledger, and receives the exchanged assets on the Quorum ledger. |
Interworking patterns | Value transfer |
Type of Social Interaction | Escrowed Asset Transfer |
Narrative | A person (User A) has multiple accounts on different ledgers (Ethereum, Quorum) and wishes to send some assets from the Ethereum ledger to the Quorum ledger, taking the conversion rate into account. The assets sent on Ethereum will be received by the Exchanger only once User A has successfully received the converted assets on the Quorum ledger. |
Actors | 1. User A: The person or entity who has ownership of the assets associated with its accounts on the ledgers. |
Goals of Actors | User A loses ownership of the sent assets on Ethereum, but gains ownership of the exchanged asset value on Quorum. |
Success Scenario | Transfer succeeds without issues. Asset is available on both Ethereum and Quorum ledgers. |
Success Criteria | The asset transfer on Quorum succeeded. |
Failure Criteria | The asset transfer on Quorum failed. |
Prerequisites | 1. Ledgers are provisioned. 2. User A and Exchanger identities are established on both ledgers. 3. Exchanger authorized the business logic plugin to operate the account on the Quorum ledger. 4. User A has access to a Hyperledger Cactus deployment. |
Comments |
W3C Use Case Attribute Name | W3C Use Case Attribute Value |
---|---|
Use Case Title | Escrowed Sale of Data for Coins |
Use Case | 1. User A initiates (proposes) an escrowed transaction with User B. 2. User A places the funds and User B places the data into a digital escrow service. 3. They both observe each other's input to the escrow service and decide to proceed. 4. The escrow service releases the funds and the data to the parties in the exchange. |
Type of Social Interaction | Peer to Peer Exchange |
Narrative | Data in this context is any series of bits stored on a computer: * machine learning model * ad-tech database * digital/digitized art * proprietary source code or binaries of software * etc. User A and User B trade the data and the funds through a Hyperledger Cactus transaction in an atomic swap, with escrow securing both parties from fraud or unintended failures. Through the transaction protocol's handshake mechanism, A and B can agree (in advance) upon * the delivery addresses (which ledger, which wallet) * the provider of escrow that they both trust * the price and currency. Establishing trust (e.g. is that art original, or does that machine learning model have the advertised accuracy?) can be facilitated through the participating DLTs if they support it. Note that User A has no way of knowing the quality of the dataset; they rely entirely on User B's description of its quality (there are solutions to this problem, but discussing them is not within the scope of our use case). |
Actors | 1. User A: A person or business organization with the intent to purchase data. 2. User B: A person or business entity with data to sell. |
Goals of Actors | User A wants to have access to data for an arbitrary reason, such as having a business process that can be enhanced by it. User B is looking to generate income/profit from data they have obtained/created/etc. |
Success Scenario | Both parties have signaled to proceed with escrow and the swap happened as specified in advance. |
Success Criteria | User A has access to the data, User B has been provided with the funds. |
Failure Criteria | Either party did not hold up their end of the exchange/trade. |
Prerequisites | User A has the funds to make the purchase. User B has the data that User A wishes to purchase. User A and B can agree on a suitable currency to denominate the deal in, and there is also consensus on the provider of escrow. |
Comments | Hyperledger Private Data: https://hyperledger-fabric.readthedocs.io/en/release-1.4/private_data_tutorial.html Besu Privacy Groups: https://besu.hyperledger.org/en/stable/Concepts/Privacy/Privacy-Groups/ |
Enabling the trading of fiat and virtual currencies in any permutation of possible pairs.
On the technical level, this use case is the same as the one above and therefore the specific details were omitted.
W3C Use Case Attribute Name | W3C Use Case Attribute Value |
---|---|
Use Case Title | Stable Coin Pegged to Other Currency |
Use Case | 1. User A creates their own ledger. 2. User A deploys Hyperledger Cactus in an environment set up by them. 3. User A implements the necessary plugins for Hyperledger Cactus to interface with their ledger for transactions, token minting and burning. |
Type of Social Interaction | Software Implementation Project |
Narrative | Someone launches a highly scalable ledger with their own coin called ExampleCoin that can consistently sustain throughput levels of a million transactions per second reliably, but they struggle with adoption because nobody wants to buy into their coin fearing that it will lose its value. They choose to put in place a two-way peg with Bitcoin which guarantees to holders of their coin that it can always be redeemed for a fixed number of Bitcoins/USDs. |
Actors | User A : Owner and/or operator of a ledger and currency that they wish to stabilize (peg) to other currencies |
Goals of Actors | 1. Achieve credibility for their currency by backing it with funds. 2. Implement the necessary software with minimal boilerplate code (most of which should be provided by Hyperledger Cactus). |
Success Scenario | User A stood up a Hyperledger Cactus deployment with their self-authored plugins, and end user application development can start by leveraging the Hyperledger Cactus REST APIs, which now expose the functionality provided by the plugin authored by User A. |
Success Criteria | Success scenario was achieved without significant extra development effort apart from creating the Hyperledger Cactus plugins. |
Failure Criteria | Implementation complexity was high enough that it would've been easier to write something from scratch without the framework |
Prerequisites | * Operational ledger and currency * Technical knowledge for plugin implementation (software engineering) |
Comments |
Sequence diagram omitted as use case does not pertain to end users of Hyperledger Cactus itself.
A BTC holder can exchange their BTC for ExampleCoins by sending their BTC to the ExampleCoin Reserve Wallet, upon which the equivalent amount of coins gets minted to their ExampleCoin wallet on the other network.
An ExampleCoin holder can redeem their funds to BTC by receiving a Proof of Burn on the ExampleCoin ledger and getting sent the matching amount of BTC from the ExampleCoin Reserve Wallet to their BTC wallet.
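A sketch of this two-way peg flow in TypeScript; the `TwoWayPeg` class and its method names are hypothetical stand-ins for the actual reserve wallet logic:

```typescript
interface BtcDeposit { fromBtcAddress: string; amountBtc: number; exampleCoinAddress: string; }
interface ProofOfBurn { exampleCoinAddress: string; amountBurned: number; btcAddress: string; }

class TwoWayPeg {
  private reserveBtc = 0; // BTC held in the ExampleCoin Reserve Wallet

  // BTC arrives at the reserve wallet -> mint the equivalent ExampleCoins.
  onBtcDeposit(deposit: BtcDeposit): void {
    this.reserveBtc += deposit.amountBtc;
    this.mintExampleCoin(deposit.exampleCoinAddress, deposit.amountBtc);
  }

  // A verified Proof of Burn on the ExampleCoin ledger -> release matching BTC.
  onProofOfBurn(proof: ProofOfBurn): void {
    if (proof.amountBurned > this.reserveBtc) throw new Error("reserve underfunded");
    this.reserveBtc -= proof.amountBurned;
    this.sendBtc(proof.btcAddress, proof.amountBurned);
  }

  private mintExampleCoin(address: string, amount: number): void { /* ledger-specific */ }
  private sendBtc(address: string, amount: number): void { /* ledger-specific */ }
}
```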
Very similar idea as with pegging against BTC, but the BTC wallet used for reserves gets replaced by a traditional bank account holding USD.
W3C Use Case Attribute Name | W3C Use Case Attribute Value |
---|---|
Use Case Title | Healthcare Data Sharing with Access Control Lists |
Use Case | 1. User A (patient) engages in business with User B (healthcare provider). 2. User B requests permission to have read access to the digitally stored medical history of User A and write access to log new entries in said medical history. 3. User A receives a prompt to grant access and allows it. 4. User B is granted permission through ledger-specific access control/privacy features to the data of User A. |
Type of Social Interaction | Peer to Peer Data Sharing |
Narrative | Let's say that two healthcare providers have both implemented their own blockchain based patient data management systems and are looking to integrate with each other to provide patients with a seamless experience when being directed from one to another for certain treatments. The user is in control over their data on both platforms separately and with a Hyperledger Cactus backed integration they could also define fine grained access control lists consenting to the two healthcare providers to access each other's data that they collected about the patient. |
Actors | * User A : Patient engaging in business with a healthcare provider* User B : Healthcare provider offering services to User A . Some of said services depend on having access to prior medical history of User A . |
Goals of Actors | * User A: Wants to have fine grained access control in place when it comes to sharing their data, to ensure that it does not end up in the hands of hackers or on a grey data marketplace. * User B: Needs access to the prior medical history of User A, which some of their services depend on. |
Success Scenario | User B (healthcare provider) has access to exactly as much information as they need to and nothing more. |
Success Criteria | There's cryptographic proof for the integrity of the data. Data hasn't been compromised during the sharing process, e.g. other actors did not gain unauthorized access to the data by accident or through malicious actions. |
Failure Criteria | User B (healthcare provider) either does not have access to the required data or they have access to data that they are not supposed to. |
Prerequisites | User A and User B are registered on a ledger or two separate ledgers that support the concept of individual data ownership, access controls and sharing. |
Comments | It makes most sense for best privacy if User A and User B are both present with an identity on the same permissioned, privacy-enabled ledger rather than on two separate ones. This gives User A an additional layer of security since they can know that their data is still only stored on one ledger instead of two (albeit both being privacy-enabled) |
W3C Use Case Attribute Name | W3C Use Case Attribute Value |
---|---|
Use Case Title | Food Traceability Integration |
Use Case | 1. Consumer is evaluating a food item in a physical retail store. 2. Consumer queries the designated end user application designed to provide food traces. 3. Consumer makes a purchasing decision based on the food trace. |
Type of Social Interaction | Software Implementation Project |
Narrative | Both Organization A and Organization B have separate products/services for solving the problem of verifying the source of food products sold by retailers. A retailer has purchased the food traceability solution from Organization A, while a food manufacturer (whom the retailer is a customer of) has purchased their food traceability solution from Organization B. The retailer wants to provide end to end food traceability to their customers, but this is not possible since the chain of traceability breaks down at the manufacturer, who uses a different service or solution. Cactus is used as an architectural component to build an integration for the retailer which ensures that consumers have access to food tracing data regardless of whether the originating system for it is the product/service of Organization A or Organization B. |
Actors | Organization A, Organization B: Entities whose business has to do with food somewhere along the global chain from growing/manufacturing to the consumer retail shelves. Consumer: A private citizen who makes food purchases in a consumer retail goods store and wishes to trace the food end to end before purchasing decisions are finalized. |
Goals of Actors | Organization A , Organization B : Provide Consumer with a way to trace food items back to the source.Consumer : Consume food that's been ethically sourced, treated and transported. |
Success Scenario | Consumer satisfaction increases on account of the ability to verify food origins. |
Success Criteria | Consumer is able to verify food items' origins before making a purchasing decision. |
Failure Criteria | Consumer is unable to verify food items' origins partially or completely. |
Prerequisites | 1. Organization A and Organization B are both signed up for blockchain enabled software services that provide end to end food traceability solutions on their own, but require all participants in the chain to use a single solution in order to work. 2. The solutions of Organization A and B both have terms and conditions such that it is technically and legally possible to integrate the software with each other and with Cactus. |
Comments |
W3C Use Case Attribute Name | W3C Use Case Attribute Value |
---|---|
Use Case Title | Wallet Authentication/Authorization |
Use Case | 1. User A has separate identities on different permissioned and permissionless ledgers in the form of private/public key pairs (Public Key Infrastructure). 2. User A wishes to access/manage these identities through a single API or user interface, and opts to on-board the identities to a Cactus deployment. 3. User A performs the on-boarding of identities and is now able to interact with wallets attached to said identities through Cactus, or through end user applications that leverage Cactus under the hood (e.g. either by directly issuing API requests or by using an application that does so). |
Type of Social Interaction | Identity Management |
Narrative | End user facing applications can provide a seamless experience connecting multiple permissioned (or permissionless) networks for an end user who has a set of different identity proofs for wallets on different ledgers. |
Actors | User A : The person or entity whose identities get consolidated within a single Cactus deployment |
Goals of Actors | User A : Convenient way to manage an array of distinct identities with the trade-off that a Cactus deployment must be trusted with the private keys of the identities involved (an educated decision on the user's part). |
Success Scenario | User A is able to interact with their wallets without having to access each private key individually. |
Success Criteria | User A's credentials are safely stored in the Cactus keychain component, where they are the least likely to be compromised (note that compromise is never impossible, only made as unlikely as possible). |
Failure Criteria | User A is unable to import identities to Cactus for a number of different reasons such as key format incompatibilities. |
Prerequisites | 1. User A has to have the identities on the various ledgers set up prior to importing them and must have access to the private keys. |
Comments |
Use Case Attribute Name | Use Case Attribute Value |
---|---|
Use Case Title | Blockchain Migration |
Use Case | 1. Consortium A operates a set of services/use cases on a source blockchain. 2. Consortium A decides to use another blockchain infrastructure to support their use case. 3. Consortium A migrates the existing assets to the other blockchain. |
Interworking patterns | Value transfer |
Type of Social Interaction | Asset Transfer |
Narrative | A group of members (Consortium A) operating a source blockchain (e.g., a Hyperledger Fabric instance) would like to migrate the functionality to a target blockchain (e.g., Hyperledger Besu) in order to expand their reach. However, such a migration requires substantial resources and technical effort. The Blockchain Migration feature from Hyperledger Cactus can provide support for doing so, by connecting to the source and target blockchains and performing the migration task. |
Actors | 1. Consortium members composing the Consortium A : The group of entities operating the source blockchain, who collectively aim at performing a migration to a target blockchain. |
Goals of Actors | Consortium A wishes to be able to operate their use case on the target blockchain. The service is functional after the migration. |
Success Scenario | The consortium agrees on the migration, and it is performed in a decentralized way. Blockchain migration succeeds without issues. |
Success Criteria | Assets have been migrated. An identical history for those assets has been reconstructed on the target blockchain. |
Failure Criteria | 1. It was not possible to migrate the assets. 2. It was not possible to reconstruct the asset history on the target blockchain. |
Prerequisites | 1. All members belonging to Consortium A want to migrate the blockchain. 2. Consortium A controls the source blockchain. 3. Consortium A has write and execute permissions on the target blockchain. |
Comments | An asset is defined as data or smart contracts originating from the source blockchain. This use case relates to use cases implying asset portability (e.g., 2.1) This use case provides blockchain portability, thus reducing costs and fostering blockchain adoption. |
Motivation: The suitability of a blockchain solution for a use case depends on the underlying blockchain properties. As blockchain technologies mature at a fast pace, in particular private blockchains, those properties might change. Consequently, this creates an imbalance between users' expectations and the applicability of the solution. It is, therefore, desirable for an organization to be able to replace the blockchain providing the infrastructure for a certain service.
Currently, when a consortium wants to migrate their blockchain (e.g., the source blockchain became obsolete, its cryptographic algorithms are no longer secure, etc.), the solution is to re-implement the business logic using a different platform, requiring great effort. Data migrations have been performed before on public blockchains [2,3], both recent endeavors to render flexibility to blockchain-based solutions. In those works, the authors propose simple data migration capabilities for public, permissionless blockchains, in which a user can specify requirements for the blockchain infrastructure supporting their service.
Data migration corresponds to capturing the set or a subset of data assets (information, in the form of bytes) on a source blockchain and constructing a representation of those assets on a target blockchain. Note that the models underlying the two blockchains do not need to be the same (e.g., the world state model in Hyperledger Fabric vs. the account model in Ethereum). To migrate data, it should be possible to capture the necessary information from the source blockchain and to write it to the target blockchain. The history of the information should also be migrated (i.e., the updates over the elements considered information).
The task of migrating a smart contract comprises the task of migrating its data. Specifically, the information should be accessible and writeable on the other blockchain. Additionally, the target blockchain's virtual machine should support the computational complexity of the source blockchain (e.g., one cannot migrate all Ethereum smart contracts to Bitcoin, but the other way around is feasible).
Automatic smart contract migration yields risks for enterprise blockchain systems, and thus the solution is non-trivial.
By expressing my preferences in terms of functional and non-functional requirements, Hyperledger Cactus can recommend a set of suitable blockchains as the target of the migration. Firstly, I could know in real time the characteristics of the target blockchain that would influence my decision. For instance, the platform can analyze the cost of writing information to Ethereum, the US dollar–Ether exchange rate, the average time to mine a block, the transaction throughput, and the network hash rate [3]. Based on that, the framework proposes a migration, with indicators such as the predicted cost, the predicted time to complete the migration, and the likelihood of success. As Ethereum does not offer the desired throughput, I choose Polkadot's platform. As it yields higher throughput, I can then safely migrate my solution from Fabric to Polkadot without compromising the solution in production. This feature is most useful for public blockchains.
Interconnect as many ecosystems as possible regardless of technology limitations
Identities, DLTs, service discovery. Minimize how opinionated we are, to truly embrace interoperability rather than silos and lock-in. Closely monitor community feedback/PRs to determine points of contention where core Hyperledger Cactus code could be lifted into plugins. Limit friction for adding future use cases and protocols.
Two representations of the same asset do not exist across the ecosystems at the same time unless clearly labelled as such [As of Oct 30 limited to specific combinations of DLTs; e.g. not yet possible with Fabric + Bitcoin]
Each DLT has certain unique features that are partially or completely missing from other DLTs. Hyperledger Cactus - where possible - should be designed in a way so that these unique features are accessible even when interacting with a DLT through Hyperledger Cactus. A good example of this principle in practice would be Kubernetes CRDs and operators that allow the community to extend the Kubernetes core APIs in a reusable way.
Interoperability does not redefine ecosystems but adapts to them. Governance, trust model and workflows are preserved in each ecosystem. The trust model and consensus must be a mandatory part of the protocol handshake, so that any possible incompatibilities are revealed up front and in a transparent way, and both parties can "walk away" without unintended loss of assets/data. The idea comes from how traditional online payment processing APIs allow merchants to specify the acceptable level of guarantees before the transaction can be finalized (e.g. need pin, signed receipt, etc.). Following the same logic, we shall allow transacting parties to specify what sort of consensus and transaction finality they require. Consensus requirements must support predicates, e.g. "I am on Fabric, but will accept Bitcoin so long as X number of blocks were confirmed post-transaction". Requiring KYC (Know Your Customer) compliance could also be added to help foster adoption as much as possible.
Cross-ecosystem transfer participants are made aware of the local and global implications of the transfer. Rejection and errors are communicated in a timely fashion to all participants. Such transparency should be visible as trustworthy evidence.
Logic exists in each ecosystem to enable complex interoperability use cases. Cross-ecosystem transfers can be automatically triggered in response to a previous one. Automated procedures for error recovery and exception handling should be executed without any interruption.
Support less secure options, but strictly as opt-in, never opt-out.
Participants in the transaction must have a handshake mechanism where they agree on one of the supported protocols to use to execute the transaction. The algorithm looks for an intersection of the lists of protocols supported by the participants.
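A minimal sketch of such a negotiation, assuming each participant advertises a preference-ordered list of protocol identifiers (the identifiers below are illustrative, not a finalized registry):

```typescript
// Pick the first protocol in the initiator's preference-ordered list
// that the responder also supports; undefined means "walk away".
function negotiateProtocol(
  initiatorSupports: string[],
  responderSupports: string[],
): string | undefined {
  const responderSet = new Set(responderSupports);
  return initiatorSupports.find((p) => responderSet.has(p));
}

const agreed = negotiateProtocol(
  ["cactus/htlc-atomic-swap/2.0", "cactus/htlc-atomic-swap/1.0"],
  ["cactus/htlc-atomic-swap/1.0"],
);
console.log(agreed); // "cactus/htlc-atomic-swap/1.0"
```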
We believe that increasing or decreasing the total amount of digital assets might weaken the security of a blockchain, since adding or deleting assets is complicated. Instead, intermediate entities (e.g. an exchanger) can pool and/or relay the transfers.
Our communal modularity should extend to common mechanisms to operate and/or observe transactions on blockchains.
Do not impose opinions on identity frameworks; just allow users of Cactus to leverage the most common ones and allow for future expansion of the list of supported identity frameworks through the plugin architecture.
Allow consumers of Cactus to perform authentication, authorization and reading/writing of credentials.
Identity Frameworks to support/consider initially:
Adding new protocols must be possible as part of the plugin architecture allowing the community to propose, develop, test and release their own implementations at will.
Means for establishing bidirectional communication channels through proxies/firewalls/NAT wherever possible
Using a blockchain agnostic bidirectional communication channel for controlling and monitoring transactions on blockchains through proxies/firewalls/NAT wherever possible.
- Blockchains vary on their P2P communication protocols. It is better to build a modular method for sending/receiving generic transactions between trustworthy entities on blockchains.
Consortiums can be formed by cooperating entities (persons, organizations, etc.) who wish to contribute hardware/network resources to the operation of a Cactus cluster (a set of validator nodes, API servers, etc.).
After the consortium is formed with its initial set of members (one or more), it is possible to enroll new members or remove existing ones.
Cactus does not prescribe any specific consensus algorithm for the addition or removal of consortium members; rather, it focuses on the technical side of making it possible to operate a cluster of nodes under the ownership of separate entities without downtime, while also keeping it possible to add/remove members.
A newly joined consortium member does not have to participate in every component of Cactus: running a validator node is the only required action to participate; etcd and the API server can remain the same as prior to the new member joining.
- Participants can insist on a specific protocol by pretending that they support only said protocol.
- Protocols can be versioned as the specifications mature
- The two initially supported protocols shall be the ones that can satisfy the requirements for Fujitsu's and Accenture's implementations respectively
Hyperledger Cactus supports several integration patterns, as follows.
- Note: In the following description, Value (V) means numerical assets (e.g. money) and Data (D) means non-numerical assets (e.g. ownership proof). Ledger 1 is the source ledger; Ledger 2 is the destination ledger.
No. | Name | Pattern | Consistency |
---|---|---|---|
1. | value transfer | V -> V | check if V1 = V2 (where V1 is the value on Ledger 1 and V2 is the value on Ledger 2) |
2. | value-data transfer | V -> D | check if the data transfer is successful when the value is transferred |
3. | data-value transfer | D -> V | check if the value transfer is successful when the data is transferred |
4. | data transfer | D -> D | check if all of D1 is copied onto Ledger 2 (where D1 is the data on Ledger 1 and D2 is the data on Ledger 2) |
5. | data merge | D <-> D | check if D1 = D2 as a result (where D1 is the data on Ledger 1 and D2 is the data on Ledger 2) |
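As an illustration, the consistency conditions in the table could be expressed as predicates along these lines (a sketch, not part of the Cactus API):

```typescript
type Value = number;
type Data = Uint8Array;

// Pattern 1 (value transfer): the value credited on Ledger 2 must equal
// the value debited on Ledger 1.
const valueTransferConsistent = (v1: Value, v2: Value): boolean => v1 === v2;

// Patterns 4 and 5 (data transfer/merge): D1 must be fully replicated as D2.
const dataConsistent = (d1: Data, d2: Data): boolean =>
  d1.length === d2.length && d1.every((byte, i) => byte === d2[i]);
```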
Hyperledger Cactus will provide integrated services by executing ledger operations across multiple blockchain ledgers. The execution of operations is controlled by a module of Hyperledger Cactus that will be provided by vendors as the single Hyperledger Cactus Business Logic Plugin. Support for additional blockchain platforms can be added by implementing a new Hyperledger Cactus Ledger Plugin. Once a User requests an API call to the Hyperledger Cactus framework, the Business Logic Plugin determines which ledger operations should be executed, and it ensures that the requested integrated service completes as expected. The following diagram shows the architecture of Hyperledger Cactus, based on the discussions held at the Hyperledger Cactus project calls. The overall architecture is shown in the following figure.
Key components are defined as follows:
- Application user: The entity that submits API calls to the "Cactus Routing Interface". Note: this component exists outside of the Cactus service system.
- Business Logic Plugin: The entity that executes business logic and provides integration services spanning multiple blockchains. It is implemented as a web application or as a smart contract on a blockchain. It is a single plugin and is required for executing Hyperledger Cactus applications.
- Ledger Plugin: The entity that connects the Business Logic Plugin with each ledger. It is composed of a Validator and a Verifier, described below. The plugin(s) to use are chosen from the available plugins at configuration time.
- Validator: The entity that monitors transaction records of ledger operations and determines the result (success, failed, timed out) from those transaction records. The Validator attests to the determined result by attaching a digital signature created with the "Validator key", which can be verified by the "Verifier".
- Verifier: The entity that accepts only successfully verified operation results, by verifying the digital signatures of the Validator. Note that the "Validator" is separate from the "Verifier" and communicates with it over a bi-directional channel.
- Cactus Routing Interface: The entity that routes messages between the "Business Logic Plugin" and the "Ledger Plugin(s)", and also routes API calls from "Application user(s)" to the "Business Logic Plugin".
- Ledger-n: DLT platforms (e.g. Ethereum, Quorum, Hyperledger Fabric, ...)
The key components defined in 4.2.1 become ready to serve the Cactus application service after the following procedures:

- Start `Validator`: The `Validator` of the `Ledger Plugin` chosen for each `Ledger`, depending on the platform technology used (e.g. Fabric, Besu, etc.), is started by the administrator of the `Validator`. The `Validator` becomes ready to accept connections from a `Verifier` once its initialization process is done.
- Start the `Business Logic Plugin` implementation: The administrator of the Cactus application service starts the `Business Logic Plugin`, which is implemented to execute the business logic(s). The `Business Logic Plugin` implementation first checks the availability of the `Ledger Plugin(s)` it depends on, then tries to enable each `Ledger Plugin` with a customized profile for the actual integrated `Ledger`. This availability check also covers determining the status of the connectivity from the `Verifier` to the `Validator`. The availability of each `Ledger` is registered and maintained at the `Cactus Routing Interface`, which allows bi-directional message communication between the `Business Logic Plugin` and the `Ledger`.
A `Service API call` is processed as follows:
- Step 1: "Application user(s)" submit an API call to the "Cactus Routing Interface".
- Step 2: The API call is internally routed to the "Business Logic Plugin" by the "Cactus Routing Interface" to initiate the associated business logic. The "Business Logic Plugin" then determines the ledger operation(s) required to complete or abort the business logic.
- Step 3: The "Business Logic Plugin" submits API calls to request operations on the "Ledger(s)" wrapped by the "Ledger Plugin(s)". Each API call is routed to the designated "Ledger Plugin" by the "Routing Interface".
- Step 4: A "Ledger Plugin" sends an event notification to the "Business Logic Plugin" via the "Cactus Routing Interface" when its sub-component "Verifier" detects an event regarding the requested ledger operation on the "Ledger".
- Step 5: The "Business Logic Plugin" receives the message from the "Ledger Plugin" and determines whether the business logic is complete or must continue. If further ledger operations are required, the flow returns to Step 3; otherwise, the process ends.
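A sketch of how a Business Logic Plugin might drive this Step 2-5 loop; the event-stream abstraction below is an assumption standing in for the Verifier notifications described above:

```typescript
interface LedgerEvent { operationId: string; status: "success" | "failed" | "timed out"; }
interface VerifierLike {
  requestLedgerOperation(params: { apiType: string; data: unknown }): void;
  startMonitor(): AsyncIterable<LedgerEvent>; // assumed event-stream abstraction
}

async function executeBusinessLogic(verifier: VerifierLike): Promise<void> {
  // Step 2: determine the required ledger operations for this business logic.
  const operations = [
    { apiType: "escrow", data: {} },
    { apiType: "transfer", data: {} },
  ];

  const events = verifier.startMonitor()[Symbol.asyncIterator](); // Step 4 stream
  for (const op of operations) {
    verifier.requestLedgerOperation(op); // Step 3: request the operation
    const { value: event } = await events.next(); // Step 4: await the event
    // Step 5: continue with the next operation or abort the business logic.
    if (!event || event.status !== "success") {
      throw new Error("ledger operation failed or timed out");
    }
  }
}
```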
The API for Service Applications and the communication protocol for the business logic plugin to interact with the "Ledger Plugins" are described in this section.
The Cactus Service API is exposed to Application user(s). This API is used to request the initiation of a business logic implemented in a Business Logic Plugin. It is also used for making inquiries about the execution status and the final result once the business logic is completed.
Following RESTful API design conventions, each request can be mapped to one of the CRUD operations on the associated resource 'trade'.
The identity of the User Application is authenticated and checked against access control rule(s) implemented as part of the Business Logic Plugin.
NOTE: we are still open to considering other API design patterns, such as gRPC or GraphQL.
Open endpoints require no authentication:

- Login: `POST /api/v1/bl/login`

Restricted endpoints require a valid token to be included in the header of the request. A token can be acquired by calling Login.
- Request Execution of Trade (instance of business logic): `POST /api/v1/bl/trades/`
- Show Current Status of Trade: `GET /api/v1/bl/trades/(id)`
- Show Business Logics: `GET /api/v1/bl/logics/`
- Show Specification of Business Logic: `GET /api/v1/bl/logics/(id)`
- Register a Wallet: `POST /api/v1/bl/wallets/`
- Show Wallet List: `GET /api/v1/bl/wallets/`
- Update Existing Wallets: `PUT /api/v1/bl/wallets/(id)`
- Delete a Wallet: `DELETE /api/v1/bl/wallets/(id)`
NOTE: the resources `trade` and `logic` cannot be updated or deleted.
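A usage sketch of these endpoints from a NodeJS (18+) client; the request/response body shapes and the token header are illustrative assumptions, not a confirmed contract:

```typescript
const BASE_URL = "http://localhost:4000"; // assumed API server address

async function runTrade(): Promise<void> {
  // Acquire a token via the open Login endpoint.
  const loginRes = await fetch(`${BASE_URL}/api/v1/bl/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: "alice", password: "secret" }), // assumed shape
  });
  const { token } = await loginRes.json();

  // Request execution of a trade (restricted endpoint: token in the header).
  const tradeRes = await fetch(`${BASE_URL}/api/v1/bl/trades/`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify({ businessLogicId: "example-trade" }), // assumed shape
  });
  const { tradeId } = await tradeRes.json();

  // Poll the current status of the trade.
  const statusRes = await fetch(`${BASE_URL}/api/v1/bl/trades/${tradeId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  console.log(await statusRes.json());
}
```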
The Ledger Plugin API is designed to allow a Business Logic Plugin to operate and/or monitor a Ledger behind the Validator component.
Each Ledger Plugin can be implemented to provide a common API that absorbs the differences between the integrated blockchain platforms. Once a Ledger Plugin for a specific platform (e.g. HLF or Besu) has been implemented, the developer can focus on implementing the business logic of the Business Logic Plugin at the application level.
The APIs of the Verifier and Validator are described in the following table:
No. | Component | API Name | Description |
---|---|---|---|
1. | BLP -> Verifier | requestLedgerOperation | Request a verifier to execute a ledger operation |
2. | BLP -> Verifier | getApiList | Get the list of available APIs on the Verifier |
3. | BLP -> Verifier | startMonitor | Request a verifier to start monitoring a ledger |
4. | BLP -> Verifier | stopMonitor | Request a verifier to stop monitoring a ledger |
5. | Validator -> Verifier | connect | Request a validator to start a bi-directional communication channel |
6. | Validator -> Verifier | disconnect | Request a validator to stop a bi-directional communication channel |
7. | Validator -> Verifier | getVerifierInformation | Get the verifier information, including version, name, ID, and other data |
8. | Verifier -> Validator | getValidatorInformation | Get the validator information, including version, name, ID, and other data |
The details are described as follows:
package/ledger-plugin/ledger-plugin.js
- interface `Verifier`

  ```
  interface Verifier {
    // BLP -> Verifier
    getApiList(): List<ApiInfo>;
    requestLedgerOperation();
    startMonitor();
    stopMonitor();
    // Validator -> Verifier
    connect();
    disconnect();
    getVerifierInfo(): List<VerifierInfo>;
  }
  ```
- class `ApiInfo`, `RequestedData`

  ```
  class ApiInfo {
    apiType: string,
    requestedData: List<RequestedData>
  }
  class RequestedData {
    dataName: string,
    dataType: string {"int", "string", ...}
  }
  ```
- class `VerifierInfo`

  ```
  class VerifierInfo {
    version: string,
    name: string,
    ID: string,
    otherData: List<VerifierInfoOtherData>
  }
  class VerifierInfoOtherData {
    dataName: string,
    dataType: string {"int", "string", ...}
  }
  ```
- function `getApiList()`: `List<ApiInfo>`
  - description:
    - Get the list of available APIs on the Verifier
  - input parameter:
    - none
  - output sample:

    ```
    [
      {
        apiType: "sendSignedTransaction",
        requestedData: {
          signedTx: signedTx(string),
        }
      },
      {
        apiType: "getBalance",
        requestedData: {
          address: address(string),
        }
      }
    ]
    ```
- function `requestLedgerOperation()`
  - description:
    - Request a verifier to execute a ledger operation
  - input parameter:

    ```
    var params = {
      apiType: string,
      progress: string {"escrow", "transfer", ...},
      data: List<OperationData>
    }
    ```

- class `OperationData`

  ```
  class OperationData {
    dataName: dataType
  }
  ```
- function `getVerifierInformation()`: `List<VerifierInfo>`
  - description:
    - Get the verifier information, including version, name, ID, and other information
  - input parameter:
    - none
- function `startMonitor()`: `Promise<LedgerEvent>`
  - description:
    - Request a verifier to start monitoring a ledger
  - input parameter:
    - none
- function `stopMonitor()`
  - description:
    - Request a verifier to stop monitoring a ledger
  - input parameter:
    - none
- function `connect()`
  - description:
    - Request a verifier to start a bi-directional communication channel
  - input parameter:
    - none
  - connecting profile:
    - `validatorURL`
    - authentication credential
- function `disconnect()`
  - description:
    - Request a verifier to stop a bi-directional communication channel
  - input parameter:
    - none
  - connecting profile:
    - `validatorURL`
    - authentication credential
- interface `Validator`

  ```
  interface Validator {
    // Verifier -> Validator
    getValidatorInfo(): List<ValidatorInfo>
  }
  ```
- function `getValidatorInformation()`
  - description:
    - Get the validator information, including version, name, ID, and other information
  - input parameter:
    - `validatorURL`
- class `ValidatorInfo`

  ```
  class ValidatorInfo {
    version: string,
    name: string,
    ID: string,
    otherData: List<ValidatorInfoOtherData>
  }
  class ValidatorInfoOtherData {
    dataName: string,
    dataType: string {"int", "string", ...}
  }
  ```
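Putting the pieces above together, here is a sketch of how a Business Logic Plugin might discover and invoke a ledger operation through a Verifier; the document's `List<T>` is rendered as an array, and the concrete shapes are assumptions:

```typescript
interface ApiInfo { apiType: string; requestedData: { dataName: string; dataType: string }[]; }
interface VerifierApi {
  getApiList(): ApiInfo[];
  requestLedgerOperation(params: { apiType: string; progress: string; data: unknown[] }): void;
}

function escrowFunds(verifier: VerifierApi, signedTx: string): void {
  // Discover whether this ledger plugin supports sending signed transactions.
  const apis = verifier.getApiList();
  const canSend = apis.some((api) => api.apiType === "sendSignedTransaction");
  if (!canSend) throw new Error("ledger plugin does not support sendSignedTransaction");

  // Request the operation with the parameter shape documented above.
  verifier.requestLedgerOperation({
    apiType: "sendSignedTransaction",
    progress: "escrow", // one of {"escrow", "transfer", ...}
    data: [{ signedTx }],
  });
}
```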
The developer of the Business Logic Plugin can implement business logic(s) as code that interacts with the Ledger Plugins. The interaction between the Business Logic Plugin and a Ledger Plugin includes:
- Submitting a transaction request to the targeted Ledger Plugin
- Making an inquiry to the targeted Ledger Plugin (e.g. an account balance inquiry)
- Receiving an event message containing the transaction/inquiry result(s) or an error from the Ledger Plugin

NOTE: The transaction request is prepared by the Business Logic Plugin using a transaction template with given parameters.
The communication protocol between the Business Logic Plugin, Verifier, and Validator is as follows:
Hyperledger Cactus is divided into a set of npm packages that can be compiled separately or all at once.
All packages have a prefix of `cactus-*` to avoid potential naming conflicts with npm modules published by other Hyperledger projects. For example, if both Cactus and Aries were to publish a package named `common` under the shared `@hyperledger` npm scope, then the resulting fully qualified package name (without the prefix) would end up being `@hyperledger/common` for both, but with prefixes the conflict is resolved as `@hyperledger/cactus-common` and `@hyperledger/aries-common`. Aries is just an example here; we do not know if they plan on releasing packages under such names, but that does not matter for the purposes of this demonstration.
Naming conventions for packages:
- cmd-* for packages that ship their own executable
- sdk-* for packages designed to be used directly by application developers, except for the Javascript SDK which is named just `sdk` for simplicity.
- All other packages should preferably be named as a single English word suggesting the most important feature/responsibility of the package itself.
A command line application for running the API server that provides a unified REST based HTTP API for calling code. Contains the kernel of Hyperledger Cactus. Code that is strongly opinionated lives here, the rest is pushed to other packages that implement plugins or define their interfaces. Comes with Swagger API definitions, plugin loading built-in.
By design this is stateless and horizontally scalable.
The main responsibilities of this package are runtime configuration parsing/validation and plugin loading, described below.
The core package is responsible for parsing runtime configuration from the usual sources (shown in order of precedence):
- Explicit instructions via code (
config.setHttpPort(3000);
) - Command line arguments (
--http-port=3000
) - Operating system environment variables (
HTTP_PORT=3000
) - Static configuration files (config.json:
{ "httpPort": 3000 }
)
The Apache 2.0 licensed node-convict library is leveraged for the mechanical parts of the configuration parsing and validation: https://github.com/mozilla/node-convict
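A minimal sketch of a node-convict schema expressing the precedence above; the `httpPort` parameter mirrors the earlier examples and is not the actual Cactus ConfigService schema:

```typescript
import convict from "convict";

const config = convict({
  httpPort: {
    doc: "The HTTP port to bind to.",
    format: "port",
    default: 3000,    // lowest precedence
    env: "HTTP_PORT", // overrides the config file
    arg: "http-port", // CLI argument, highest precedence
  },
});

config.loadFile("./config.json"); // static configuration file
config.validate({ allowed: "strict" }); // throws if the configuration is invalid
console.log(config.get("httpPort"));
```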
To obtain the latest configuration options you can check out the latest source code of Cactus and then run this from the root folder of the project on a machine that has at least NodeJS 10 or newer installed:
$ date
Mon 18 May 2020 05:09:58 PM PDT
$ npx ts-node -e "import {ConfigService} from './packages/cactus-cmd-api-server/src/main/typescript/config/config-service'; console.log(ConfigService.getHelpText());"
Order of precedent for parameters in descdending order: CLI, Environment variables, Configuration file.
Passing "help" as the first argument prints this message and also dumps the effective configuration.
Configuration Parameters
========================
plugins:
Description: A collection of plugins to load at runtime.
Default:
Env: PLUGINS
CLI: --plugins
configFile:
Description: The path to a config file that holds the configuration itself which will be parsed and validated.
Default: Mandatory parameter without a default value.
Env: CONFIG_FILE
CLI: --config-file
cactusNodeId:
Description: Identifier of this particular Cactus node. Must be unique among the total set of Cactus nodes running in any given Cactus deployment. Can be any string of characters such as a UUID or an Int64
Default: Mandatory parameter without a default value.
Env: CACTUS_NODE_ID
CLI: --cactus-node-id
logLevel:
Description: The level at which loggers should be configured. Supported values include the following: error, warn, info, debug, trace
Default: warn
Env: LOG_LEVEL
CLI: --log-level
cockpitHost:
Description: The host to bind the Cockpit webserver to. Secure default is: 127.0.0.1. Use 0.0.0.0 to bind for any host.
Default: 127.0.0.1
Env: COCKPIT_HOST
CLI: --cockpit-host
cockpitPort:
Description: The HTTP port to bind the Cockpit webserver to.
Default: 3000
Env: COCKPIT_PORT
CLI: --cockpit-port
cockpitWwwRoot:
Description: The file-system path pointing to the static files of web application served as the cockpit by the API server.
Default: packages/cactus-cmd-api-server/node_modules/@hyperledger/cactus-cockpit/www/
Env: COCKPIT_WWW_ROOT
CLI: --cockpit-www-root
apiHost:
Description: The host to bind the API to. Secure default is: 127.0.0.1. Use 0.0.0.0 to bind for any host.
Default: 127.0.0.1
Env: API_HOST
CLI: --api-host
apiPort:
Description: The HTTP port to bind the API server endpoints to.
Default: 4000
Env: API_PORT
CLI: --api-port
apiCorsDomainCsv:
Description: The comma separated list of domains to allow Cross Origin Resource Sharing from when serving API requests. The wildcard (*) character is supported to allow CORS for any and all domains; however, using it is not recommended unless you are developing or demonstrating something with Cactus.
Default: Mandatory parameter without a default value.
Env: API_CORS_DOMAIN_CSV
CLI: --api-cors-domain-csv
publicKey:
Description: Public key of this Cactus node (the API server)
Default: Mandatory parameter without a default value.
Env: PUBLIC_KEY
CLI: --public-key
privateKey:
Description: Private key of this Cactus node (the API server)
Default: Mandatory parameter without a default value.
Env: PRIVATE_KEY
CLI: --private-key
keychainSuffixPrivateKey:
Description: The key under which to store/retrieve the private key from the keychain of this Cactus node (API server). The complete lookup key is constructed from the ${CACTUS_NODE_ID}${KEYCHAIN_SUFFIX_PRIVATE_KEY} template.
Default: CACTUS_NODE_PRIVATE_KEY
Env: KEYCHAIN_SUFFIX_PRIVATE_KEY
CLI: --keychain-suffix-private-key
keychainSuffixPublicKey:
Description: The key under which to store/retrieve the public key from the keychain of this Cactus node (API server). The complete lookup key is constructed from the ${CACTUS_NODE_ID}${KEYCHAIN_SUFFIX_PUBLIC_KEY} template.
Default: CACTUS_NODE_PUBLIC_KEY
Env: KEYCHAIN_SUFFIX_PUBLIC_KEY
CLI: --keychain-suffix-public-key
```
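For illustration, the keychain lookup keys documented above are produced by plain string concatenation of the node ID and the suffix (the helper function below is hypothetical):

```typescript
// Illustrative only: derives the keychain lookup key from the
// ${CACTUS_NODE_ID}${KEYCHAIN_SUFFIX_PRIVATE_KEY} template documented above.
function getPrivateKeyLookupKey(
  cactusNodeId: string,
  suffix: string = "CACTUS_NODE_PRIVATE_KEY",
): string {
  return `${cactusNodeId}${suffix}`;
}

// e.g. getPrivateKeyLookupKey("node-1") === "node-1CACTUS_NODE_PRIVATE_KEY"
```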
Plugin loading happens through NodeJS's built-in module loader, and validation is performed by the Node Package Manager tool (npm), which verifies the byte-level integrity of all installed modules.
Contains interface definitions for the plugin architecture and other system level components that are to be shared among many other packages. `core-api` is intended to be a leaf package, meaning that it shouldn't depend on other packages, in order to make it safe for any and all packages to depend on `core-api` without having to deal with circular dependency issues.
Javascript SDK (bindings) for the RESTful HTTP API provided by `cmd-api-server`. Compatible with both NodeJS and Web Browser (HTML 5 DOM + ES6) environments.
Responsible for persistently storing highly sensitive data (e.g. private keys) in an encrypted format. For further details on the API surface, see the relevant section under `Plugin Architecture`.
Contains components for tracing, logging and application performance management (APM) of code written for the rest of the Hyperledger Cactus packages.
Components useful for writing and reading audit records that must be archived long term and remain immutable. The latter properties are what differentiate audit logs from tracing/logging messages, which are designed to be ephemeral and to support the troubleshooting of technical issues rather than regulatory/compliance/governance issues.
Provides structured or unstructured document storage and analytics capabilities for other packages such as `audit` and `tracing`. Comes with its own API surface that serves as an adapter for different storage backends via plugins.
By default, `Open Distro for ElasticSearch` is used as the storage backend: https://aws.amazon.com/blogs/aws/new-open-distro-for-elasticsearch/
The API surface provided by this package is kept intentionally simple and feature-poor so that different underlying storage backends remain an option long term through the plugin architecture of `Cactus`.
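As a sketch only (the method names below are assumptions, not the actual package API), such an adapter surface could be as small as:

```typescript
// Hypothetical minimal adapter surface for pluggable document storage
// backends (Open Distro for ElasticSearch being the default one).
export interface DocumentStoragePlugin {
  insert(collection: string, doc: object): Promise<string>; // returns the new document's id
  getById(collection: string, id: string): Promise<object | undefined>;
  search(collection: string, query: object): Promise<object[]>;
}
```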
Contains components responsible for providing access to standard SQL compliant persistent storage. The API surface provided by this package is kept intentionally simple and feature-poor so that different underlying storage backends remain an option long term through the plugin architecture of `Cactus`.
Contains components responsible for providing access to immutable storage, i.e. a distributed ledger with append-only semantics, such as a blockchain network (e.g. Hyperledger Fabric). The API surface provided by this package is kept intentionally simple and feature-poor so that different underlying storage backends remain an option long term through the plugin architecture of `Cactus`.
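A sketch of what such an intentionally feature-poor, append-only surface could look like (hypothetical names, not the actual package API):

```typescript
// Hypothetical minimal surface for immutable storage backends: records can
// be appended and read, but never updated or deleted, mirroring the
// append-only semantics of the backing ledger.
export interface ImmutableStoragePlugin {
  append(record: object): Promise<string>; // returns the record's id/hash
  read(id: string): Promise<object | undefined>;
}
```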
Source file: ./docs/architecture/deployment-diagram.puml
TBD
TBD
Participants in the transaction must have a handshake mechanism through which they agree on one of the supported protocols to use to execute the transaction. The negotiation algorithm looks for an intersection of the lists of protocols supported by the participants; an illustrative sketch follows below.
Participants can insist on a specific protocol by advertising support for that protocol only. Protocols can be versioned as the specifications mature. Adding new protocols must be possible as part of the plugin architecture, allowing the community to propose, develop, test and release their own implementations at will. The two initially supported protocols shall be the ones that satisfy the requirements of Fujitsu’s and Accenture’s implementations respectively. Wherever possible, means for establishing bi-directional communication channels through proxies/firewalls/NAT must be provided.
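As an illustration of the intersection step (names and protocol identifiers are hypothetical, not part of the specification):

```typescript
// Hypothetical sketch of protocol negotiation: each participant advertises
// the (versioned) protocols it supports and the handshake selects the first
// mutually supported one in the initiator's order of preference. A party can
// force a specific protocol by advertising only that protocol.
function negotiateProtocol(
  initiatorSupports: string[], // e.g. ["escrow/2.0", "escrow/1.0"]
  responderSupports: string[],
): string {
  const mutual = initiatorSupports.filter((p) => responderSupports.includes(p));
  if (mutual.length === 0) {
    throw new Error("Handshake failed: no mutually supported protocol");
  }
  return mutual[0];
}
```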
Since our goal is integration, it is critical that `Cactus` has the flexibility to support most ledgers, even those that don't exist today.
A plugin is a self-contained piece of code that implements a predefined interface pertaining to a specific functionality of `Cactus`, such as transaction execution.
Plugins are an abstraction layer on top of the core components that allows operators of `Cactus` to swap out implementations at will.
Backward compatibility is important, but versioning of the plugins still follows the semantic versioning convention, meaning that major upgrades can have breaking changes.
Plugins are implemented as ES6 modules (source code) that can be loaded at runtime from the persistent data store. The core package is responsible for validating code signatures to guarantee source code integrity.
An overarching theme for all aspects that are covered by the plugin architecture is that there should be a dummy implementation for each aspect to allow the simplest possible deployments to happen on a single, consumer grade machine rather than requiring costly hardware and specialized knowledge.
Ideally, a fully testable/operational (but not production ready) `Cactus` deployment could be spun up on a developer laptop with a single command (an npm script, for example).
Success is defined as:
- Adding support in `Cactus` for a ledger invented in the future requires no `core` code changes; instead it can be implemented by simply adding a corresponding connector plugin to deal with said newly invented ledger.
- Client applications using the REST API and leveraging the feature checks can remain 100% functional regardless of the number and nature of deployed connector plugins in `Cactus`. For example: a generic money sending application does not have to hardcode the set of ledgers it supports, because the unified REST API interface (fed by the ledger connector plugins) guarantees that supported features will be operational.
Because the features of different ledgers can be very diverse, the plugin interface has feature checks built in, allowing callers/client applications to determine programmatically, at runtime, whether a certain feature is supported on a given ledger.
```typescript
export interface LedgerConnector {
  // Verifies a signature coming from the ledger that this connector
  // is responsible for connecting to.
  verifySignature(message: any, signature: any): Promise<boolean>;

  // Used to call methods on smart contracts or to move assets between wallets.
  transact(transactions: Transaction[]): Promise<void>;

  getPermissionScheme(): Promise<PermissionScheme>;

  getTransactionFinality(): Promise<TransactionFinality>;

  addForeignValidator(): Promise<void>;
}

export enum TransactionFinality {
  GUARANTEED = "GUARANTEED",
  NOT_GUARANTEED = "NOT_GUARANTEED",
}

export enum PermissionScheme {
  PERMISSIONED = "PERMISSIONED",
  PERMISSIONLESS = "PERMISSIONLESS",
}
```
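As a usage sketch (the surrounding wiring and the `Transaction` values are assumed, not defined here), a client application could branch on the feature checks at runtime:

```typescript
// Illustrative caller-side feature check: consult the connector's declared
// finality characteristics before deciding how to treat a transaction.
async function submitWithFinalityCheck(
  connector: LedgerConnector,
  txs: Transaction[],
): Promise<void> {
  const finality = await connector.getTransactionFinality();
  await connector.transact(txs);
  if (finality === TransactionFinality.NOT_GUARANTEED) {
    // e.g. a proof-of-work ledger: the caller applies its own confirmation
    // policy (such as waiting N blocks) before treating the result as final.
  }
}
```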
Identity federation plugins operate inside the API Server and need to implement the interface of a common PassportJS Strategy: https://github.com/jaredhanson/passport-strategy#implement-authentication
```typescript
abstract class IdentityFederationPlugin {
  constructor(options: any) { /* strategy-specific initialization */ }
  abstract authenticate(req: ExpressRequest, options: any): void;
  abstract success(user: any, info?: any): void;
  abstract fail(challenge: any, status?: number): void;
  abstract redirect(url: string, status?: number): void;
  abstract pass(): void;
  abstract error(err: Error): void;
}
```
The X.509 Certificate plugin facilitates client authentication by allowing clients to present a certificate instead of operating with authentication tokens. Technically, this allows calling clients to assume the identities of the validator nodes through the REST API without having access to the signing private key of said validator node.
PassportJS already has plugins written for client certificate validation, but we go one step further with this plugin by providing the option to obtain CA certificates from the validator nodes themselves at runtime.
Key/Value Storage plugins allow the higher-level packages to store and retrieve configuration metadata for a `Cactus` cluster such as:
- Who are the active validators and what are the hosts where said validators are accessible over a network?
- What public keys belong to which validator nodes?
- What transactions have been scheduled, started, completed?
```typescript
interface KeyValueStoragePlugin {
  // Note: interface method signatures cannot carry the `async` keyword;
  // asynchrony is expressed through the Promise return types instead.
  get<T>(key: string): Promise<T>;
  set<T>(key: string, value: T): Promise<void>;
  delete(key: string): Promise<void>;
}
```
The API surface of keychain plugins is roughly equivalent to that of the key/value storage plugins, but under the hood these are of course guaranteed to encrypt the stored data at rest by way of leveraging storage backends purpose built for storing and managing secrets.
Possible storage backends include self hosted software [1] and cloud native services [2][3][4] as well. The goal of the keychain plugins (and the plugin architecture at large) is to make `Cactus` deployable in different environments with different backing services, such as an on-premise data center or a cloud provider that sells its own secret management services/APIs.
There should be a dummy implementation as well that stores secrets in-memory and unencrypted (strictly for development purposes, of course). The latter will decrease the barrier to entry for new users and would-be contributors alike.
Direct support for HSM (Hardware Security Modules) is also something the keychain plugins could enable, but this is lower priority since any serious storage backend with secret management in mind will have built-in support for dealing with HSMs transparently.
By design, the keychain plugin can only be used by authenticated users with an active `Cactus` session. Users' secrets are isolated from each other on the keychain via namespacing that is internal to the keychain plugin implementations (e.g. users cannot query other users' namespaces whatsoever).
```typescript
interface KeychainPlugin extends KeyValueStoragePlugin {
}
```
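A minimal sketch of the development-only in-memory dummy implementation mentioned above (the class name is illustrative):

```typescript
// Development-only dummy keychain: stores secrets unencrypted in memory.
// Per-user isolation would be achieved by namespacing keys internally,
// as described above.
class InMemoryDummyKeychain implements KeychainPlugin {
  private readonly store = new Map<string, unknown>();

  async get<T>(key: string): Promise<T> {
    if (!this.store.has(key)) {
      throw new Error(`No keychain entry found for key: ${key}`);
    }
    return this.store.get(key) as T;
  }

  async set<T>(key: string, value: T): Promise<void> {
    this.store.set(key, value);
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```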
[1] https://www.vaultproject.io/
[2] https://aws.amazon.com/secrets-manager/
[3] https://aws.amazon.com/kms/
[4] https://azure.microsoft.com/en-us/services/key-vault/
`Cactus` aims to provide a unified API surface for managing the identities of an identity owner. Developers using the `Cactus` Service API for their applications can support one or both of the below requirements:
- Applications with a focus on access control and business process efficiency (usually in the enterprise)
- Applications with a focus on individual privacy (usually consumer-based applications)

The following sections outline the high-level features of `Cactus` that make the above vision a reality.
An end user (through a user interface) can issue API requests to
- register a username+password account (with optional MFA) within `Cactus`;
- associate their wallets to their `Cactus` account and execute transactions involving those registered wallets (transaction signatures performed either locally or remotely, as explained above);
- execute a trade which executes a set of transactions across integrated Ledgers. `Cactus` may also execute recovery transaction(s) when the trade fails for some reason. For example, recovery transactions may be executed to reverse the result of an executed transaction by using an intermediate account that provides an escrow trading service.
Various identities are used at the Cactus Service API.

`Cactus user ID`
- ID for the user (behind a web service application) to execute a Service API call.
- The service provider assigns the role(s) and access right(s) of the user in the integrated service as part of the `Business Logic Plugin`.
- The user can add Wallet(s) which are associated with an account address and/or key.

`Wallet ID`
- ID for the user identity which is associated with an authentication credential at an integrated `Ledger`.
- It is recommended to store a temporary credential here allowing minimal access to operate the `Ledger`, instead of giving full access with the master secret.
- The Service API enables the user to add/update/delete authentication credentials for the Wallet.

`Ledger ID`
- ID for the `Ledger Plugin` which is used at the Wallet. The `Ledger ID` is assigned by the administrator of the integrated service and provided for the user to configure their own Wallet settings.
- The connectivity settings associated with the `Ledger ID` are also configured at the `Ledger Plugin` by the administrator.

`Business Logic ID`
- ID for the business logic to be invoked by the Cactus user.
- Each business logic should be implemented to execute the necessary transactions on the integrated `Ledgers` without any interaction with the user during its execution.
- Business logic may require the user to set up access permissions, storing credentials before executing the business logic call.
An application developer using `Cactus` can choose to enable users to sign their transactions locally on their user agent device without disclosing their private keys to `Cactus`, or remotely, where `Cactus` stores private keys server-side, encrypted at rest, made decryptable through authenticating with their `Cactus` account.
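For illustration (a sketch only; the key type and payload shape are assumptions, and ed25519 key generation requires NodeJS 12+), local signing means the signature is produced on the user agent and only the signature, never the private key, travels to `Cactus`:

```typescript
import { generateKeyPairSync, sign } from "crypto";

// The key pair lives on the user agent device; Cactus only ever sees the
// public key and the signatures produced with the private key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const payload = Buffer.from(JSON.stringify({ action: "transfer", amount: 1 }));
const signature = sign(null, payload, privateKey); // ed25519 takes a null algorithm
// Send { payload, signature } to the Cactus API server, which can verify the
// signature against the wallet's registered public key.
```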
Each mode comes with its own pros and cons that need to be carefully considered at design time.
Usually a better fit for consumer-based applications where end users have higher expectation of individual privacy.
Pros
- Keys are not compromised when a `Cactus` deployment is compromised
- The operator of a `Cactus` deployment is not liable for breach of keys (same as above)
- Reduced server-side complexity (no need to manage keys centrally)
Cons
- User experience is sub-optimal compared to server-side transaction signing
- Users can lose access permanently if they lose the key (not acceptable in most enterprise/professional use cases)
Usually a better fit for enterprise applications where end users have most likely lowered their expectations of individual privacy due to the hard requirements of compliance, governance, internal or external policy enforcement.
Pros
- Frees end users from the burden of managing keys themselves (better user experience)
- Improved compliance, governance
Cons
- Server-side breach can expose encrypted keys stored in the keychain
`Cactus` can authenticate users against third party Identity Providers or serve as an Identity Provider itself.
Everything follows the well-established industry standards of Open ID Connect to maximize information security and reduce the probability of data breaches.
There is a gap between traditional web/mobile applications and blockchain applications (web 2.0 and 3.0, if you will) in terms of authentication protocols: blockchain networks rely on private keys belonging to a Public Key Infrastructure (PKI) to authenticate users, while traditional web/mobile applications mostly rely on a centralized authority storing hashed passwords and the issuance of ephemeral tokens upon successful authentication (e.g. a successful login with a password). Traditional (Web 2.0) applications (that adhere to security best practices) use server-side sessions (web) or secure keychains provided by the operating system (iOS, Android, etc.). The current industry standard and state of the art authentication protocol in the enterprise application development industry is Open ID Connect (OIDC).
To successfully close the gap between the two worlds, `Cactus` comes equipped with an OIDC identity provider and a server-side keychain that can be leveraged by end user applications to authenticate once against Hyperledger Cactus and manage identities on other blockchains through that single Hyperledger Cactus identity. This feature is important for web applications which do not have secure offline storage APIs (HTML localStorage is not secure).
Example: A user can register for a Hyperledger Cactus account, import their private keys from their Fabric/Ethereum wallets and then have access to all of those identities by authenticating once only against `Cactus`, which will result in a server-side session (HTTP cookie) containing a JSON Web Token (JWT).
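As a sketch of the resulting session token (the secret and claims are hypothetical; a real deployment would use asymmetric keys and proper OIDC claims):

```typescript
import jwt from "jsonwebtoken";

// Illustrative only: the kind of JWT a server-side session cookie could carry
// after a single successful authentication against Cactus.
const token = jwt.sign({ sub: "cactus-user-1" }, "dev-secret", { expiresIn: "1h" });
const claims = jwt.verify(token, "dev-secret"); // throws if expired or tampered with
```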
Native mobile applications may not need to use the server-side keychain, since they usually come equipped with an OS-provided one (as Android and iOS do).
In web 2.0 applications the prevalent authentication/authorization solution is Open ID Connect which bases authentication on passwords and tokens which are derived from the passwords. Web 3.0 applications (decentralized apps or DApps) which interact with blockchain networks rely on private keys instead of passwords.
Application user: The user who requests an API call to a Hyperledger Cactus application or smart contract. The API call triggers the sending of the transaction to the remote ledger.
Hyperledger Cactus Web application or Smart contract on a blockchain: The entity that executes business logic and provides integration services involving multiple blockchains.
Tx verifier: The entity that verifies the signature of the transaction data transmitted over the secure bidirectional channel. Validated transactions are processed by the Hyperledger Cactus Web application or Smart Contract to execute the integrated business logic.
Tx submitter: The entity that submits the remote transaction to the API server plug-in on one of the ledgers.
API Server: A module of Hyperledger Cactus which provides a unified interface to control/monitor the blockchain ledgers behind it.
Validator: A module of Hyperledger Cactus which verifies the validity of transactions to be sent out to the blockchain application.
Lock asset: An operation on an asset managed on a blockchain ledger which disables further operations on the targeted asset. The lock can target the whole asset or part of it, depending on the type of asset.
Abort: A state of Hyperledger Cactus in which an integrated ledger operation is determined to have failed and Hyperledger Cactus will execute recovery operations.
Integrated ledger operation: A series of blockchain ledger operations which will be triggered by Hyperledger Cactus. Hyperledger Cactus is responsible for executing 'recovery operations' when an 'Abort' occurs.
Restore operation(s): Single or multiple ledger operations which are executed by Hyperledger Cactus to restore the state of the integrated service to what it was before the start of the integrated operation.
End User: A person (private citizen or a corporate employee) who interacts with Hyperledger Cactus and other ledger-related systems to achieve a specific goal or complete a task such as to send/receive/exchange money or data.
Business Organization: A for-profit or non-profit entity formed by one or more people to achieve financial gain or achieve a specific (non-financial) goal. For brevity, business organization may be shortened to organization throughout the document.
Identity Owner: A person or organization who is in control of one or more identities. For example, owning two separate email accounts by one person means that said person is the identity owner of two separate identities (the email accounts). Owning cryptocurrency wallets (their private keys) also makes one an identity owner.
Identity Secret: A private key or a password that - by design - is only ever known by the identity owner (unless stolen).
Credentials: Could mean user authentication credentials/identity proofs in an IT application, or any other credentials in the traditional sense of the word, such as proof that a person obtained a master's or PhD degree.
Ledger/Network/Chain: Synonymous terms referring largely to the same thing in this paper.
OIDC: Open ID Connect authentication protocol
PKI: Public Key Infrastructure
MFA: Multi Factor Authentication
1: Heterogeneous System Architecture - Wikipedia, Retrieved at: 11th of December 2019
2: E. Scheid, B. Rodrigues, and B. Stiller. 2019. Toward a policy-based blockchain agnostic framework. 16th IFIP/IEEE International Symposium on Integrated Network Management (IM 2019) (2019)
3: Philipp Frauenthaler, Michael Borkowski, and Stefan Schulte. 2019. A Framework for Blockchain Interoperability and Runtime Selection.