Quick Introduction

A Short Overview of Arcology's Core Technologies


This document is a brief overview of the Arcology Network, consolidating information that is otherwise scattered across multiple sources into a single, concise summary of its key features. For a deeper dive into Arcology, please refer to our detailed documentation.

Parallel processing is the most effective way to scale in computer science. All supercomputers are parallel machines, and parallelism shows up in AI, quantum computing, and pretty much everywhere else. Whenever the need for processing power exceeds what a single thread or processor can handle, parallel processing is the first choice. Blockchain should be no exception.

Single-threaded blockchains fall short of meeting high-performance demands; their underutilization of hardware resources leads to significant scalability limitations. Arcology is a parallel network capable of processing multiple transactions simultaneously. It can handle tens of thousands of transactions per second, outperforming existing high-performance blockchains.

Comparison With Other L2s

Layer 2 solutions aim to overcome Ethereum’s scalability and cost challenges, but many still face limitations in throughput, fee stability, and security. Arcology redefines these benchmarks, delivering unmatched performance and efficiency.

Unprecedented Gas Limit

Arbitrum has a targeted gas limit of 7 million gas per second (see https://docs.arbitrum.io/build-decentralized-apps/reference/chain-params). With each coin transfer costing 21,000 gas, the network can handle roughly 333 of the simplest transactions per second, and that is before accounting for complex smart contract calls, which consume far more gas and reduce capacity further.

Optimism, by comparison, is even slower, with a gas limit of 5 million per block and a block time of 2 seconds (see https://docs.optimism.io/stack/differences). This translates to just 2.5 million gas per second, allowing for only about 119 simple transactions per second, and even fewer for gas-intensive operations.

Arcology Network stands out with its massive gas limit of 1 billion gas per second, targeting 2 billion at mainnet launch, vastly surpassing the 5–10 million targeted gas limits of L2 solutions like Optimism and Arbitrum. This allows Arcology to achieve 10,000–15,000 TPS with 16 cores, with more processor cores enabling even higher throughput. The result is support for complex applications and transaction costs reduced by up to 3x, making Arcology a scalable and cost-efficient choice for developers.
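As a quick sanity check on these figures (a back-of-the-envelope calculation assuming the standard 21,000 gas cost of a plain transfer), the maximum rate of simple transfers is just the sustained gas throughput divided by 21,000:

$$
\text{TPS}_{\text{simple}} = \frac{\text{gas per second}}{21{,}000}:\qquad \frac{7{,}000{,}000}{21{,}000} \approx 333,\qquad \frac{5{,}000{,}000 / 2\,\text{s}}{21{,}000} \approx 119,\qquad \frac{1{,}000{,}000{,}000}{21{,}000} \approx 47{,}600
$$

The last figure is only a theoretical ceiling for plain transfers; contract calls consume far more gas per transaction, which is consistent with the more conservative 10,000–15,000 TPS range quoted above.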

Stable Fee Under Load

One of the key issues with Ethereum and its L2s is unstable fees during periods of high network activity. Arcology's architecture is designed to prevent fee spikes, maintaining low and stable transaction costs even under heavy load. This contrasts with some L2 solutions where increased demand can lead to significant fee increases.

Enhanced Security

Arcology's parallel execution model eliminates MEV (Maximal Extractable Value) opportunities by processing transactions concurrently, preventing the ordering manipulations that enable sandwich attacks. This architecture allows the construction of DEXs where users can trade securely without the risk of front-running or other MEV-related exploits.

New Applications

Arcology’s architecture fosters innovation by enabling applications that were previously impractical on traditional blockchain platforms.

  • High-Performance DEXs: Enables decentralized exchanges to process tens of thousands of transactions per second, supporting high-frequency trading and complex operations effortlessly.

  • Scalable Gaming: Supports large-scale blockchain games with real-time updates and interactive gameplay, thanks to its massive throughput.

  • Cost-Efficient Operations: Reduces transaction costs for complex smart contracts, making blockchain more accessible for developers and users.


What is Arcology

Arcology has a scalable, parallel-processing-focused architecture, a redesign of traditional, sequentially-oriented blockchain systems. It addresses not only the need for parallel execution but also other key bottlenecks in the original Ethereum design, including slow storage access. Key features include:

  • EVM equivalent & composable

  • Deterministic parallel transaction execution

  • Dual storage (parallelized, asynchronous MPT with dedicated execution storage)

  • Event-driven, microservice-based architecture

Parallel Execution

Parallel execution is a technique that leverages modern multicore computer architectures to run multiple tasks at once. The workload is split into independent tasks and allocated to individual cores for more efficient processing. It is an effective scaling solution throughout computing: all supercomputers rely on parallel execution to scale, and blockchain scaling should be no different.

Where sequential execution processes one transaction at a time, in order, a parallel blockchain processes multiple transactions simultaneously and can handle significantly more of them in a fraction of the time. For blockchain, this means:

  • Significantly higher scalability.

  • Significantly reduced transaction costs.

Parallelize EVM

The EVM is a single-threaded system by design. To add parallel execution capability, it must either be reimplemented to include the necessary enhancements or wrapped in an external parallel execution framework as a unit of execution. This presents a choice between intra-VM parallelization and inter-VM parallelization.

  • Intra-VM parallelization involves restructuring the execution model to allow simultaneous processing of independent tasks within the same virtual environment. This approach can enhance performance but may introduce complexities due to compatibility challenges with the Ethereum L1 EVM in the long run.

  • Inter-VM parallelization employs multiple VMs to execute tasks concurrently. While it is less efficient due to potential overhead from managing multiple instances, it easily facilitates horizontal scaling and avoids compatibility issues typically associated with modifying the core structure of existing VMs.

Arcology chose the latter because it doesn't alter the EVM's core structure. The EVM is constantly evolving, and rewriting it from the ground up would cause compatibility headaches in the long run.

Concurrency Control

For a concurrency control mechanism to work with blockchains, the following intertwined issues in the current blockchain design must be addressed:

  • Deterministic concurrency control: Transaction execution must produce deterministic results, a fundamental requirement for blockchain. Unfortunately, most concurrency control tools commonly used in centralized systems aren't designed for this, so a blockchain-native concurrency solution is key.

  • Low-performance StateDB: Even with parallel execution, overall throughput won't improve much if other bottlenecks persist, and Ethereum's original StateDB design is the single biggest one. The module needs a substantial upgrade to keep up with parallel execution.

  • Parallel contract development: It's a common misconception that a parallel execution engine alone is enough to fully realize the benefits of parallelism. Contracts must be designed and built for parallelization to unlock the full potential of the underlying parallel execution architecture.

Hybrid (Pessimistic + Optimistic)

The parallel design relies on an effective concurrency control system to maintain data consistency and integrity, and Arcology is no exception. Generally, there are two main concurrency control strategies: pessimistic and optimistic.

Arcology uses a hybrid control strategy (pessimistic + optimistic), primarily focusing on the optimistic approach. Optimistic concurrency control is only effective in low-contention environments, and many contracts designed for sequential execution have contention points that ultimately result in conflicts, preventing them from fully benefiting from parallel execution. Since the optimistic approach is sensitive to conflicts, Arcology also offers a concurrent library in Solidity, built around CRDT-style data types, to help developers write conflict-free contracts. For more information on conflict rules, see the Conflict Rules section later in this document.
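To make the contention-point idea concrete, here is a minimal, hypothetical sketch (the Fundraiser contract and its variable names are illustrative, not from Arcology's codebase). Every donate() transaction reads and rewrites the same totalRaised slot, so concurrent calls conflict with each other, while the per-donor mapping entries live in distinct storage slots and do not contend:

```solidity
pragma solidity 0.8.7;

// Hypothetical example of a contention point in an otherwise ordinary contract.
contract Fundraiser {
    // Shared hot spot: every donation rewrites this single storage slot,
    // so concurrent donate() transactions conflict with each other.
    uint256 public totalRaised;

    // Per-donor entries occupy distinct storage slots, so two different
    // donors updating their own balances do not conflict.
    mapping(address => uint256) public donated;

    function donate() external payable {
        donated[msg.sender] += msg.value; // parallel-friendly: distinct keys per donor
        totalRaised += msg.value;         // contention point: one shared slot
    }
}
```

Replacing totalRaised with a cumulative variable from Arcology's concurrent library, as in the vending machine example later in this document, removes the contention point.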


Storage

Storage is the single biggest performance bottleneck in the original Ethereum design.

Ethereum Issues

In the original Ethereum L1, the main storage is a Merkle Patricia Trie (MPT) stored in a key-value database. It is simple and straightforward, but it suffers from several issues due to its design.

1. Read & Write Amplification

  • A single state update modifies multiple trie nodes along the way, all requiring new writes to the LevelDB.

  • The root hash must be recalculated, leading to many redundant disk writes.

  • Fetching account data requires traversing multiple trie levels, each stored separately in the DB.

  • This results in multiple random disk reads, slowing down state access.

2. Sequential Insertion Only

While concurrent reads work fine on an MPT, updates must be sequential, which is another challenge: nodes have to be inserted one by one, creating a huge bottleneck. The problem only gets worse under the much heavier update workload produced by parallel execution.

Arcology's Dual Storage Design

Arcology's solution utilizes a dual storage system to optimize execution and state management. It consists of:

  1. Flattened Key-Value Datastore: Used for state access during execution. This database is optimized for fast, efficient retrieval of state data and is updated synchronously with execution.

  2. StateDB with a Parallelized Merkle Patricia Trie (MPT): Maintained separately to support root hash calculation and Ethereum RPC queries. The StateDB is updated asynchronously, decoupled from execution to enhance performance.

EVM Integration

A challenge with using the sequential EVM for parallel processing is keeping track of the state accesses that occur between the EVM and its StateDB. To capture them all, Arcology inserts an intermediate caching layer between the EVM and the storage layer to intercept state accesses. The records are later assessed by a dedicated conflict detection module to protect state consistency, and only conflict-free transactions get the chance to persist their state transitions.

During execution, the cache intercepts and records all state access attempts. Once execution is complete, these records are sent to the conflict detector, which checks for potential conflicts before finalizing the updates. Only the state transitions from conflict-free transactions are persisted to storage.


Conflict Rules

In the EVM, there are memory variables and storage variables. Arcology’s concurrency control only tracks storage, treating memory variables as local to their transactions since they don’t affect the overall blockchain state.

In Arcology, the system tracks three types of state operations during transaction processing:

  • Read: Retrieves data only; conflicts with everything except other reads.

  • Full Write: Rewrites a storage slot; conflicts with everything.

  • Delta Write: Adds a difference to the original value; conflicts with everything except other delta writes.
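From a contract author's perspective, the three operation types map onto ordinary Solidity code as in the minimal sketch below. This is a hypothetical illustration, not part of Arcology's library: it reuses the U256Cumulative type and its add() call as shown in the vending machine example later in this document, and assumes its constructor takes the lower and upper bounds, as in that example.

```solidity
pragma solidity 0.8.7;

import "@arcologynetwork/concurrentlib/lib/commutative/U256Cum.sol";

// Hypothetical contract showing which operation type each access produces.
contract OperationKinds {
    uint256 public config;   // ordinary storage slot
    U256Cumulative counter;  // cumulative (delta-write) variable

    constructor() {
        counter = new U256Cumulative(0, 1000000); // bounds: [0, 1,000,000]
    }

    // Read: only retrieves data; conflicts with concurrent writes to the
    // same slot, but not with other reads.
    function current() external view returns (uint256) {
        return config;
    }

    // Full write: rewrites the slot; conflicts with any concurrent read,
    // full write, or delta write of the same slot.
    function setConfig(uint256 v) external {
        config = v;
    }

    // Delta write: adds a difference to the original value; commutes with
    // other delta writes, so concurrent bump() calls do not conflict.
    function bump(uint256 v) external {
        counter.add(v);
    }
}
```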


Conflict Mitigation

To fully harness the advantages of parallelism, developers need mechanisms to mitigate the impact of contention in their contract source code. This is what conflict mitigation is all about.

Arcology's concurrent library provides the following tools for developers to eliminate contention in their contracts completely:

  • Cumulative Variables: Lazily evaluated variables that enable concurrent delta updates within defined upper and lower bounds, ideal for keeping track of amounts.

  • Concurrent Containers: Data structures that support simultaneous push, read, and delete operations across different transactions.

  • Multiprocessor: Programmatically spawns EVM instances, with limits on depth and the number of instances.

  • Runtime Tools: Includes a pseudo-random number generator and various utility functions.


How Arcology's Parallel Execution Works

  • The system starts multiple sequential EVM instances, each with its own cache. More instances generally mean better performance, but the number should not exceed the available processors.

  • Each EVM processes a transaction, temporarily storing the written data in its cache. The EVMs run independently without communicating during processing.

  • An RW cache intercepts the communication between each EVM and the storage layer.

  • A conflict detector identifies potential conflicts in state access.

  • A state committer writes conflict-free state transitions to the execution storage synchronously, while writing to the original Ethereum storage asynchronously.

  • The conflict history is fed back to the scheduler to prevent future occurrences.

Examples

To follow along with the examples below, install Arcology's concurrent library:

npm install @arcologynetwork/concurrentlib

1. Vending Machine Example with Cumulative Variables

To better understand how cumulative variables work, consider the following example, adapted from the Ethereum developer docs. In the original implementation, the cupcakeBalances mapping is a shared variable accessed and modified by multiple functions (refill and purchase), and it is shared among different users interacting with the contract.

Conflict Analysis: Only one of the concurrent transactions will go through, due to the conflict caused by concurrent updates to cupcakeBalances[address(this)]. Parallel execution of transactions interacting with this contract therefore brings no performance benefit.

pragma solidity 0.8.7;

contract VendingMachine {

    // Declare state variables of the contract
    address public owner;
    mapping (address => uint) public cupcakeBalances;

    // When 'VendingMachine' contract is deployed:
    // 1. set the deploying address as the owner of the contract
    // 2. set the deployed smart contract's cupcake balance to 100
    constructor() {
        owner = msg.sender;
        cupcakeBalances[address(this)] = 100;
    }

    // Allow the owner to increase the smart contract's cupcake balance
    function refill(uint amount) public {
        require(msg.sender == owner, "Only the owner can refill.");
        cupcakeBalances[address(this)] += amount;
    }

    // Allow anyone to purchase cupcakes
    function purchase(uint amount) public payable {
        require(msg.value >= amount * 1 ether, "You must pay at least 1 ETH per cupcake");
        require(cupcakeBalances[address(this)] >= amount, "Not enough cupcakes in stock to complete this purchase");
        cupcakeBalances[address(this)] -= amount;
        cupcakeBalances[msg.sender] += amount;
    }
}

Parallelized Version: In the parallelized version below, the shared stock previously held in cupcakeBalances[address(this)] is replaced with a U256Cumulative counter provided by Arcology's concurrent library, enabling concurrent updates. Both transactions will go through; Alice and Bob will both receive their cupcakes as long as there are enough in stock.

pragma solidity 0.8.7;

import "@arcologynetwork/concurrentlib/lib/commutative/U256Cum.sol";

contract VendingMachine {

    // Declare state variables of the contract
    address public owner;
    U256Cumulative cupcakeStock;
    mapping (address => uint) public cupcakeBalances;

    // When 'VendingMachine' contract is deployed:
    // 1. set the deploying address as the owner of the contract
    // 2. create the cupcake stock as a cumulative counter bounded between 0 and 100
    constructor() {
        owner = msg.sender;
        cupcakeStock = new U256Cumulative(0, 100); // bounds: [0, 100]
    }

    // Allow the owner to increase the smart contract's cupcake balance
    function refill(uint amount) public {
        require(msg.sender == owner, "Only the owner can refill.");
        cupcakeStock.add(amount);
    }

    // Allow anyone to purchase cupcakes
    function purchase(uint amount) public payable {
        require(msg.value >= amount * 1 ether, "You must pay at least 1 ETH per cupcake");
        cupcakeStock.sub(amount);              // concurrent delta update of the shared stock
        cupcakeBalances[msg.sender] += amount; // per-buyer slot, no contention between buyers
    }
}

Benchmarking

  • Contract Name: Vending Machine (Parallelized)

  • Arcology Version: v1.9.0

  • Deployment Mode: Standalone

  • Operating System: Ubuntu 22.04

  • CPU: AMD Ryzen Threadripper 2950X 16-Core Processor

  • RAM: 128 GB

  • Storage: 2 TB M.2 SSD

  • Average gas burned/s: 712,118,171.6

  • Max gas burned/s: 1,005,206,703

