Quick Introduction

A Short Overview of Arcology's Core Technologies

This document is a brief overview of the Arcology Network. It consolidates information that is otherwise scattered across multiple sources into a single, concise summary of the key features. For a deeper dive into Arcology, please refer to our detailed documentation here.

On L2s, rollup chains process their transactions off-chain and independently of one another, so running multiple L2s simultaneously provides a form of inter-L2 parallel processing. But this comes at the cost of composability: all contracts involved in a transaction must be deployed on the same L2, and cross-L2 communication isn't much simpler than communication between separate blockchains.

Compounding the issue, each L2 is still "single-threaded" and can only process one transaction at a time, much like Ethereum L1. To scale while maintaining composability, individual L2s need fundamental upgrades to their existing architectures.

Parallel Execution Is the Solution

In a sequential processing environment, a single EVM instance processes transactions one by one. This severely underutilizes the computational power of modern hardware and limits throughput.

In contrast, parallel execution leverages modern multicore computer architecture to run multiple tasks at once. It is a proven and effective scaling approach in computing: all supercomputers rely on parallel execution to scale, and blockchain scaling should be no different.

Challenges

For a parallel blockchain to work to its full potential, the following intertwined issues in the current blockchain design must be addressed with comprehensive solutions:

  • Deterministic concurrency control: Transaction execution must produce deterministic results, a fundamental requirement for any blockchain. Unfortunately, most of the concurrency control tools commonly used in centralized systems aren't designed for this, so a blockchain-native concurrency solution is key.

  • Low-performance StateDB: Even with parallel execution, overall throughput won't improve much if other bottlenecks persist, and Ethereum's original StateDB design is the single biggest one. This module needs a substantial upgrade to keep up with parallel execution.

  • Parallel contract development: It's a common misconception that a blockchain with a parallel execution engine alone is enough to fully realize the benefits of parallelism. This is not true. Contracts must be designed and built for parallelization to unlock the full potential of the underlying parallel execution architecture.


What is Arcology

Arcology is a parallel rollup network capable of processing multiple transactions simultaneously. It can handle tens of thousands of transactions per second, outperforming all existing high-performance blockchains.

Arcology has a scalable, parallel-processing-focused architecture that redesigns traditional, sequential-centric blockchain systems. It addresses not only the need for parallel execution but also other key bottlenecks in the original Ethereum, including slow storage access. Key features include:

  • EVM equivalent & composable

  • Deterministic parallel transaction execution

  • Dual storage (parallelized, asynchronous MPT with dedicated execution storage)

  • Event-driven, microservice-based architecture

Optimistic Concurrency Control

Parallel execution relies on an effective concurrency control system to maintain data consistency and integrity, and Arcology is no exception. Generally, there are two main strategies: Pessimistic Concurrency Control and Optimistic Concurrency Control.

Arcology uses a hybrid control strategy, primarily focusing on an STM-based optimistic approach. Given that optimistic concurrency control is sensitive to conflicts, Arcology also includes a concurrent library in Solidity to help developers write conflict-free contracts.
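
As a quick illustration of how the library helps, the minimal sketch below replaces a plain counter with a cumulative, concurrently updatable one. The import path, the U256Cumulative type, and its add/get methods are assumptions made for illustration only; please consult the concurrent library documentation for the exact API.

pragma solidity ^0.8.0;

// Hypothetical import path; the actual location and type names in
// Arcology's concurrent library may differ.
import "@arcologynetwork/concurrentlib/lib/commutative/U256Cum.sol";

contract VisitCounter {
    // A cumulative counter records only deltas, which are merged at
    // commit time, so many transactions can increment it in parallel
    // without producing read/write conflicts.
    U256Cumulative private visits = new U256Cumulative(0, type(uint256).max);

    function visit() public {
        visits.add(1); // conflict-free concurrent increment (assumed API)
    }

    function total() public returns (uint256) {
        return visits.get(); // read the merged value (assumed API)
    }
}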

Adding EVM to the Parallel Framework

The EVM is a single-threaded system by design. To add parallel execution capability, it must either be reimplemented to include the necessary enhancements or wrapped in an external parallel execution framework as a unit of execution. Arcology chose the latter because it doesn't alter the EVM's core structure.

Arcology inserts an intermediate layer between the EVM and the Ethereum StateDB to intercept and record state accesses. These records are later examined by a dedicated conflict detection module to protect state consistency. Arcology's concurrency control system includes:

  1. An RW Cache that acts as an intermediary between the EVM and the StateDB.

  2. A conflict detector to identify potential conflicts in state access.

  3. A state committer that writes the conflict-free state transitions to storage.

  4. A feedback loop that feeds the conflict history back to the scheduler to prevent future conflicts.

Storage Variables Only

In the EVM, there are memory variables and storage variables. Arcology's concurrency control only tracks storage variables, treating memory variables as local to their transactions, since they don't directly affect the overall blockchain state. In Arcology's concurrent execution design, the system keeps track of three types of state operation during transaction processing.
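
To illustrate the distinction, consider the small sketch below (an illustrative contract, not taken from Arcology's documentation). Only the write to the storage mapping totals is visible to the concurrency control layer; the memory array and the local accumulator disappear when the call ends and are never tracked.

pragma solidity ^0.8.0;

contract TrackingExample {
    // Storage variable: accesses to this mapping are intercepted and
    // recorded for conflict detection.
    mapping (address => uint) public totals;

    function accumulate(uint[] memory values) public {
        // 'values' lives in memory and 'sum' on the stack; both are
        // local to this transaction and are not tracked.
        uint sum = 0;
        for (uint i = 0; i < values.length; i++) {
            sum += values[i];
        }
        // Only this storage write changes the blockchain state.
        totals[msg.sender] += sum;
    }
}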

For more information on the conflict rules, please check out this link.

Workflow

Here’s how it works:

  1. The system starts multiple EVM instances, each with its own cache. More instances generally mean better performance, but the number should not exceed the available processors.

  2. Each EVM processes a transaction, temporarily saving read and written data in its cache. The EVMs run independently without communicating during processing.

  3. After execution, the data in the caches is sent to the conflict detection module. Transactions with conflicting state access are discarded.

  4. Finally, the valid changes are committed to the StateDB.


Concurrent Data Structures

Optimistic concurrency control is effective when conflict likelihood is low. However, many contracts designed for sequential execution have contention points that ultimately lead to conflicts, preventing them from fully benefiting from parallel execution. To fully harness the advantages of parallelism, developers need mechanisms to mitigate contention in their code.

The Concurrent Data Structures are a set of CRDTs (conflict-free replicated data types) that help developers write contention-free smart contracts and make the best use of Arcology's parallel processing design.

Vending Machine Example

To better understand this, consider the following example from the Ethereum developer docs:

In the original implementation, the cupcakeBalances mapping is a shared state variable that is accessed and modified by multiple functions (refill and purchase) and by every user interacting with the contract.

pragma solidity 0.8.7;

contract VendingMachine {

    // Declare state variables of the contract
    address public owner;
    mapping (address => uint) public cupcakeBalances;

    // When 'VendingMachine' contract is deployed:
    // 1. set the deploying address as the owner of the contract
    // 2. set the deployed smart contract's cupcake balance to 100
    constructor() {
        owner = msg.sender;
        cupcakeBalances[address(this)] = 100;
    }

    // Allow the owner to increase the smart contract's cupcake balance
    function refill(uint amount) public {
        require(msg.sender == owner, "Only the owner can refill.");
        cupcakeBalances[address(this)] += amount;
    }

    // Allow anyone to purchase cupcakes
    function purchase(uint amount) public payable {
        require(msg.value >= amount * 1 ether, "You must pay at least 1 ETH per cupcake");
        require(cupcakeBalances[address(this)] >= amount, "Not enough cupcakes in stock to complete this purchase");
        cupcakeBalances[address(this)] -= amount;
        cupcakeBalances[msg.sender] += amount;
    }
}
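
For comparison, a parallelized version could look like the sketch below. Per-user balances stay in the mapping, where different buyers write to different keys, while the machine's shared stock moves into a cumulative counter from Arcology's concurrent library. This is only one way to remove the contention point, and the import path, the U256Cumulative type, and its add/sub methods are assumptions made for illustration; check the concurrent library documentation for the exact API.

pragma solidity ^0.8.0;

// Hypothetical import path; the actual location and type names in
// Arcology's concurrent library may differ.
import "@arcologynetwork/concurrentlib/lib/commutative/U256Cum.sol";

contract ParallelVendingMachine {

    address public owner;
    mapping (address => uint) public cupcakeBalances;

    // The machine's stock was the contention point in the original
    // contract. A cumulative counter records only deltas, so concurrent
    // refills and purchases no longer conflict. The lower bound of 0
    // makes any over-selling decrement fail (assumed behavior).
    U256Cumulative private stock = new U256Cumulative(0, type(uint256).max);

    constructor() {
        owner = msg.sender;
        stock.add(100);
    }

    // Allow the owner to increase the machine's cupcake stock
    function refill(uint amount) public {
        require(msg.sender == owner, "Only the owner can refill.");
        stock.add(amount);
    }

    // Allow anyone to purchase cupcakes
    function purchase(uint amount) public payable {
        require(msg.value >= amount * 1 ether, "You must pay at least 1 ETH per cupcake");
        // Fails if the stock would drop below 0 (assumed behavior).
        stock.sub(amount);
        // Writes to different keys (one per buyer) do not conflict.
        cupcakeBalances[msg.sender] += amount;
    }
}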

Analysis

If two users, Alice and Bob, attempt to purchase their own cupcakes and the transactions are processed in parallel, the two versions will behave differently.

  • Original Version: Only one transaction will go through, due to the conflict caused by concurrent accesses to the shared stock entry cupcakeBalances[address(this)]. Parallel execution of transactions interacting with this contract won't bring any performance benefit.

  • Parallelized Version: Both transactions will go through because the shared cupcake stock is held in a concurrently updatable data structure. Alice and Bob will both receive their cupcakes as long as there are enough in stock.
