Core Technologies
A Short Overview of Arcology's Core Technologies
This is a brief overview of the Arcology Network. For a quick hands‑on example, please check out the link. To take a deeper dive into Arcology, please refer to our detailed documentation here.
What is Arcology
Parallel Processing is the most effective way to scale in computer science. Supercomputers, AI, quantum computing—every field scales with parallel processing. Blockchain should too. Arcology runs multiple transactions at once, delivering tens of thousands of TPS and leaving single‑threaded chains behind.
Arcology is a parallel Ethereum rollup network capable of processing multiple transactions simultaneously. It can handle tens of thousands of transactions per second, outperforming all existing high-performance blockchains.
Arcology has a scalable, parallel processing-focused architecture, a redesign of traditional, sequential-centric blockchain systems. It not only delivers deterministic parallel execution but also removes key Ethereum bottlenecks, such as slow storage access.
Key features include:
EVM equivalent & composable
Deterministic parallel transaction execution
Hybrid Concurrency Control (Pessimistic + Optimistic)
Dual storage (parallelized, asynchronous MPT with dedicated execution storage)
Concurrent library with CRDTs to help developers build parallel contracts.
Event-driven, microservice-based architecture
Comparison With Other L2s
Layer 2 solutions aim to overcome Ethereum’s scalability and cost challenges, but many still face limitations in throughput, fee stability, and security. Arcology redefines these benchmarks, delivering unmatched performance and efficiency.
| Rollup | Gas throughput (per second) | TPS | Execution model |
| --- | --- | --- | --- |
| Optimism | ~2.5M | ~119 | Single‑threaded, fixed block limit |
| Arbitrum | ~7M | ~333 | Single‑threaded, dynamic block limit |
| Arcology | 2B+ @ 16 cores | 10k–15k+ | Parallel, multi‑core & multi‑machine |
Comparison With Solana
Both achieve high throughput via parallel execution, but Arcology is Ethereum‑native, cluster‑scalable, and runs unmodified Solidity with tools to eliminate contention entirely.
| Aspect | Arcology | Solana |
| --- | --- | --- |
| Ethereum Compatibility | 100% EVM‑equivalent – works with Ethereum tooling and libraries. | Not EVM‑compatible; requires a rewrite to Solana's model. |
| Execution Model | Optimistic concurrency – multiple unmodified EVM instances run in parallel; conflicts are detected after execution. | Pessimistic concurrency – accounts are declared and locked before execution. |
| Conflict Handling | Detects conflicts post‑execution; rolls back and retries/reschedules only the conflicting transactions. | Aborts immediately if an undeclared account is accessed; no retry. |
| Granularity of Conflict Tracking | Storage‑slot level. With the Concurrent Library, even concurrent writes to the same variable can be safe. | Account level – any overlap in declared accounts causes a conflict. |
| Determinism Guarantee | Guaranteed | Guaranteed |
| Programming Model | EVM‑equivalent – existing Solidity contracts run without changes; using the Concurrent Library can eliminate all contention. | Solana VM – requires programs to explicitly declare all accounts they will access; undeclared accesses fail. |
| Developer Overhead | Low – no need to predict access patterns; the optional Concurrent Library lets developers eliminate contention entirely. | High – incorrect declarations fail execution and limit parallelism. |
| Storage Architecture | Dual storage – parallelized execution storage plus an asynchronously updated Ethereum MPT. | Account‑based storage model. |
| Horizontal Scaling | Multi‑core and multi‑machine – each node can scale across a cluster of machines. | Single‑machine execution per validator. |
| MEV Protection | Parallel execution removes transaction‑ordering manipulation, eliminating sandwich MEV. | MEV possible if ordering is not neutralized before execution. |
Why Parallel Execution
Parallel execution uses multicore processors to run many tasks at once, splitting a workload into independent tasks that complete faster and deliver far higher transaction throughput.

Sequential Execution processes one transaction at a time in order.

Parallel Execution processes multiple transactions simultaneously.
Parallelize EVM
The original EVM is single‑threaded by design. Parallel execution can be added in one of two ways:
Intra-VM – modify the EVM itself for multithreading; potentially more efficient, but risks breaking Ethereum compatibility.
Inter-VM – run multiple unmodified EVM instances in parallel; scales easily and preserves full compatibility.
Arcology uses inter-VM to keep the EVM unmodified and ensure long-term compatibility.
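To make the inter-VM idea concrete, here is a minimal Go sketch. The evmInstance and Tx types are illustrative stand-ins, not Arcology's actual code: several unmodified, single-threaded interpreters run side by side, each on its own goroutine with its own private state cache.

```go
// Inter-VM parallelism sketch: run several sequential "EVM instances" at once,
// each with a private cache, instead of modifying the EVM internals.
package main

import (
	"fmt"
	"sync"
)

// Tx is a placeholder for a transaction assigned to one EVM instance.
type Tx struct{ ID int }

// evmInstance stands in for one unmodified, sequential EVM interpreter
// paired with an instance-local write cache.
type evmInstance struct {
	id    int
	cache map[string]string // merged into global state only after conflict checks
}

func (e *evmInstance) executeSequentially(txs []Tx) {
	for _, tx := range txs {
		// A real instance would interpret EVM bytecode here; the sketch just
		// records a fake state write into the instance-local cache.
		e.cache[fmt.Sprintf("tx-%d", tx.ID)] = "done"
	}
}

func main() {
	batches := [][]Tx{
		{{ID: 1}, {ID: 2}},
		{{ID: 3}, {ID: 4}},
		{{ID: 5}},
	}

	var wg sync.WaitGroup
	for i, batch := range batches {
		inst := &evmInstance{id: i, cache: map[string]string{}}
		wg.Add(1)
		go func(b []Tx) { // each EVM instance runs on its own goroutine
			defer wg.Done()
			inst.executeSequentially(b)
		}(batch)
	}
	wg.Wait() // all instances finish before results are merged and conflict-checked
}
```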
Concurrency Control
Any parallel design relies on an effective concurrency control system to maintain data consistency and integrity, and Arcology is no exception. Generally, there are two main concurrency control strategies: pessimistic concurrency control and optimistic concurrency control.
For a concurrency control mechanism to work in a blockchain, it must address the following intertwined issues in current blockchain designs:
Deterministic concurrency control: Transaction execution must produce deterministic results, a fundamental requirement for blockchains. Unfortunately, most concurrency control tools commonly used in centralized systems aren't designed for this, so a blockchain-native concurrency solution is key.
Low-performance StateDB: Even with parallel execution, overall throughput won't improve much if other bottlenecks persist, and Ethereum's original StateDB design is the single biggest one. The storage module needs a substantial upgrade to keep up with parallel execution.
Parallel contract development: It's a common misconception that a parallel execution engine alone is enough to fully realize the benefits of parallelism. Contracts must also be designed and built for parallelization to unlock the full potential of the underlying parallel execution architecture.
Arcology uses a hybrid control strategy (Pessimistic + Optimistic), primarily focusing on an optimistic approach. Given that optimistic concurrency control is sensitive to conflicts, Arcology also offers a concurrent library in Solidity to help developers write conflict-free contracts.
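The contrast between the two strategies can be sketched in a few lines of Go. This is an illustration under simplified assumptions, not Arcology's implementation: the pessimistic path locks the declared keys before executing, while the optimistic path executes against a private buffer and validates its recorded read set before committing.

```go
// Pessimistic vs. optimistic concurrency control, reduced to toy form.
package main

import "sync"

type tx struct {
	keys  []string              // keys the transaction touches
	apply func(map[string]int)  // state transition over those keys
}

// pessimistic: acquire per-key locks up front, then execute.
func pessimistic(t tx, state map[string]int, locks map[string]*sync.Mutex) {
	for _, k := range t.keys {
		locks[k].Lock()
		defer locks[k].Unlock()
	}
	t.apply(state)
}

// optimistic: execute against a private copy while recording what was read,
// then commit only if none of the read values changed in the meantime.
func optimistic(t tx, state map[string]int) bool {
	readSet := map[string]int{}
	buffer := map[string]int{}
	for _, k := range t.keys {
		readSet[k] = state[k] // record the version seen during execution
		buffer[k] = state[k]
	}
	t.apply(buffer)
	for k, v := range readSet { // validation phase
		if state[k] != v {
			return false // conflict: caller re-executes the transaction
		}
	}
	for k, v := range buffer {
		state[k] = v // commit the buffered writes
	}
	return true
}

func main() {
	state := map[string]int{"a": 1}
	locks := map[string]*sync.Mutex{"a": {}}
	pessimistic(tx{keys: []string{"a"}, apply: func(s map[string]int) { s["a"]++ }}, state, locks)
	optimistic(tx{keys: []string{"a"}, apply: func(s map[string]int) { s["a"]++ }}, state)
}
```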
Storage
Storage is the single biggest performance bottleneck in the original Ethereum design.
Ethereum Issues
In the original Ethereum L1, the main state store is a Merkle Patricia Trie (MPT) kept in a key-value database. The design is simple and straightforward, but it suffers from several issues.
1. Read & Write Amplification
A single state update modifies multiple trie nodes along the way, all requiring new writes to the LevelDB.
The root hash must be recalculated, leading to many redundant disk writes.
Fetching account data requires traversing multiple trie levels, each stored separately in the DB.
This results in multiple random disk reads, slowing down state access.
2. Sequential Insertion Only
While concurrent reads on an MPT are fine, updates must be sequential: trie nodes have to be inserted one by one, which creates a major bottleneck. The problem only gets worse under the heavier write loads that parallel execution generates.
Dual Storage
Arcology's solution utilizes a dual storage system to optimize execution and state management. It consists of:
Flattened Key-Value Datastore: Used for state access during execution. This database is optimized for fast and efficient retrieval of state data. The flattened database is updated synchronously.
StateDB with a Parallelized Merkle Patricia Trie (MPT): Maintained separately to support root-hash calculation and Ethereum RPC queries. The StateDB is updated asynchronously, decoupling it from execution to improve performance.
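A rough sketch of the dual-storage write path, with a plain map standing in for the flattened datastore and a placeholder type for the parallelized MPT (both hypothetical, for illustration only): execution touches the flat store synchronously, while trie and root-hash maintenance is handed to a background goroutine.

```go
// Dual storage sketch: synchronous flat KV access for execution,
// asynchronous trie updates for root hashes and RPC queries.
package main

import "sync"

type update struct{ key, value string }

// flatStore is the execution-facing store: single-hop lookups, updated in
// step with transaction execution.
type flatStore struct {
	mu   sync.RWMutex
	data map[string]string
}

func (s *flatStore) set(k, v string) { s.mu.Lock(); s.data[k] = v; s.mu.Unlock() }
func (s *flatStore) get(k string) string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.data[k]
}

// asyncTrie stands in for the parallelized MPT that serves root-hash
// calculation and Ethereum RPC queries; it consumes updates asynchronously.
type asyncTrie struct{ updates chan update }

func newAsyncTrie() *asyncTrie {
	t := &asyncTrie{updates: make(chan update, 1024)}
	go func() {
		for u := range t.updates {
			_ = u // a real trie would insert the node and refresh hashes here
		}
	}()
	return t
}

func main() {
	flat := &flatStore{data: map[string]string{}}
	trie := newAsyncTrie()

	// A committed state transition hits the flat store synchronously and the
	// trie asynchronously, so execution never waits on trie maintenance.
	flat.set("balance/alice", "100")
	trie.updates <- update{key: "balance/alice", value: "100"}

	_ = flat.get("balance/alice")
	close(trie.updates)
}
```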
EVM Integration
A challenge with using the sequential EVM for parallel processing is keeping track of the state accesses that flow between the EVM and its StateDB. To do so, Arcology inserts an intermediate layer between the EVM and the storage layer to intercept state accesses. The records are later assessed by a dedicated conflict detection module to protect state consistency. Only conflict-free transactions get the chance to persist their state transitions.
During execution, the cache intercepts and records all state access attempts. Once execution is complete, these records are sent to the conflict detector, which checks for potential conflicts before finalizing the updates. Only state transitions from conflict-free transactions are persisted to storage.
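The interception layer might look roughly like the following Go sketch. The stateReader and rwCache names are hypothetical; the point is that every read and write goes through a per-transaction cache that buffers writes and keeps an access log for the conflict detector.

```go
// Read/write interception sketch: the cache sits between the EVM and storage.
package main

// access is one recorded state operation: who touched which key and how.
type access struct {
	txID  int
	key   string
	write bool
}

// stateReader is whatever backing store the EVM would normally read from.
type stateReader interface{ Get(key string) string }

// rwCache intercepts every read and write issued by one transaction.
type rwCache struct {
	txID    int
	backend stateReader
	writes  map[string]string // buffered writes, not yet persisted
	log     []access          // handed to the conflict detector afterwards
}

func (c *rwCache) Get(key string) string {
	c.log = append(c.log, access{txID: c.txID, key: key, write: false})
	if v, ok := c.writes[key]; ok { // read-your-own-write
		return v
	}
	return c.backend.Get(key)
}

func (c *rwCache) Set(key, value string) {
	c.log = append(c.log, access{txID: c.txID, key: key, write: true})
	c.writes[key] = value // persisted only if the transaction is conflict-free
}

type mapStore map[string]string

func (m mapStore) Get(key string) string { return m[key] }

func main() {
	cache := &rwCache{txID: 1, backend: mapStore{"x": "1"}, writes: map[string]string{}}
	_ = cache.Get("x")
	cache.Set("x", "2")
	// cache.log now holds the read/write records for the conflict detector.
}
```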
Conflict Detection & Migration
In concurrency control, a conflict occurs when two transactions access the same data simultaneously and at least one modifies it. Such conflicts should be prevented when possible, and must always be detected and handled correctly to ensure state consistency.
Detection
Arcology tracks only storage variables, as memory is local to each transaction. During transaction processing, the system records three types of state operations. Once a generation of transactions is processed, these state access records are fed to the conflict detector for analysis. Transactions causing conflicts will be reverted to protect the state.
Arcology keeps track of the following types of operations and determines conflict status accordingly (a small sketch of these rules follows the list).
Read – retrieves data only; conflicts with everything except other reads.
Full Write – rewrites a storage slot; conflicts with all other operations.
Delta Write – adds a difference to the original value; conflicts with everything except other delta writes.
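These rules boil down to a small commutativity table. The sketch below encodes them in Go with illustrative names; it is not Arcology's detector, just the decision logic described above.

```go
// Conflict rules: reads commute with reads, delta writes commute with
// delta writes, and full writes conflict with everything.
package main

import "fmt"

type op int

const (
	read op = iota
	fullWrite
	deltaWrite
)

// conflicts reports whether two operations on the same storage slot, issued
// by different transactions, conflict with each other.
func conflicts(a, b op) bool {
	switch {
	case a == read && b == read:
		return false // reads never invalidate each other
	case a == deltaWrite && b == deltaWrite:
		return false // delta writes commute, so the order does not matter
	default:
		return true // any combination involving a full write, or read vs. write
	}
}

func main() {
	fmt.Println(conflicts(read, read))             // false
	fmt.Println(conflicts(deltaWrite, deltaWrite)) // false
	fmt.Println(conflicts(read, fullWrite))        // true
	fmt.Println(conflicts(fullWrite, deltaWrite))  // true
}
```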
Concurrent Library for Conflict Mitigation
Optimistic concurrency control is effective in low-contention environments, but many sequentially designed contracts have contention points that cause conflicts when run concurrently.
To fully leverage parallelism, developers must reduce contention in their contract code. Arcology’s concurrent library is designed for this purpose, providing tools to eliminate contention entirely:
Cumulative Variables: lazily evaluated variables that enable concurrent delta updates within defined upper and lower bounds, ideal for keeping track of amounts (see the sketch after this list).
Multiprocessor: Spawn EVM instances programmatically with controlled depth and instance limits.
Runtime Tools: Pseudo‑random number generator and utility functions.
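As an illustration of why cumulative variables remove contention, here is a small Go sketch of a bounded, delta-based counter. The type and its bounds handling are assumptions for illustration, not the concurrent library's actual API: transactions submit deltas rather than overwriting the value, and the deltas are folded in at commit time.

```go
// Bounded cumulative counter: concurrent updates become commutative deltas.
package main

import "fmt"

// cumulative is a counter with declared lower and upper bounds. Transactions
// submit deltas instead of overwriting the value, so two writers no longer
// conflict on the same storage slot.
type cumulative struct {
	value, min, max int64
}

// apply folds one delta into the committed value; a delta that would violate
// the bounds is rejected (its transaction would be reverted).
func (c *cumulative) apply(delta int64) bool {
	next := c.value + delta
	if next < c.min || next > c.max {
		return false
	}
	c.value = next
	return true
}

func main() {
	total := &cumulative{value: 0, min: 0, max: 1_000_000}

	// Deltas collected from transactions in the same generation; because the
	// updates are deltas, concurrent increments no longer overwrite each other.
	for _, d := range []int64{+100, +250, -50} {
		fmt.Println(total.apply(d), total.value)
	}
}
```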
How Arcology's Parallel Execution Works
The scheduler analyzes incoming transactions for dependencies, conflict history, resource needs, and priorities. Developers may define dependencies to reduce conflicts. Transactions are grouped into generations (sets of transactions expected to commute, and thus to run safely in parallel), and the generations themselves are processed sequentially.
Arcology executes multiple sequential EVM instances in parallel, each with its own cache. An RW cache intercepts storage access, a conflict detector identifies state conflicts, and a state committer writes confirmed changes to execution storage synchronously while updating Ethereum storage asynchronously. Conflict history is fed back to the scheduler to minimize future contention.
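The overall flow can be summarized in a short Go sketch with toy types standing in for the scheduler, executor, conflict detector, and committer. It shows the shape of the generation loop, not Arcology's actual code: execute a generation in parallel, commit the conflict-free transactions, and reschedule the rest.

```go
// Generation loop sketch: parallel execution, conflict detection, commit,
// and rescheduling of conflicting transactions.
package main

import (
	"fmt"
	"sync"
)

type txn struct {
	id  int
	key string // the single storage slot this toy transaction writes
}

// executeGeneration runs every transaction of one generation in parallel and
// returns the key each of them wrote (its access record).
func executeGeneration(gen []txn) map[int]string {
	var mu sync.Mutex
	records := map[int]string{}
	var wg sync.WaitGroup
	for _, t := range gen {
		wg.Add(1)
		go func(t txn) {
			defer wg.Done()
			mu.Lock()
			records[t.id] = t.key // stand-in for real EVM execution + RW cache
			mu.Unlock()
		}(t)
	}
	wg.Wait()
	return records
}

// detectConflicts keeps the first writer of each slot and defers the rest to
// the next generation.
func detectConflicts(gen []txn, records map[int]string) (commit, retry []txn) {
	seen := map[string]bool{}
	for _, t := range gen {
		if seen[records[t.id]] {
			retry = append(retry, t) // conflicting: rescheduled, not lost
			continue
		}
		seen[records[t.id]] = true
		commit = append(commit, t)
	}
	return commit, retry
}

func main() {
	pending := []txn{{1, "a"}, {2, "b"}, {3, "a"}} // tx 1 and 3 contend on "a"
	for len(pending) > 0 {
		records := executeGeneration(pending)
		commit, retry := detectConflicts(pending, records)
		fmt.Printf("committed %d, rescheduled %d\n", len(commit), len(retry))
		pending = retry // conflicting transactions form the next generation
	}
}
```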
