Scheduler

In optimistic concurrency control (OCC), transactions proceed without acquiring locks. Conflicts are detected during a validation phase before commit, and a transaction that causes a conflict is reverted and re-executed to maintain data consistency. This approach assumes conflicts are infrequent, so the effectiveness of OCC depends largely on keeping conflicts to a minimum.
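The read/validate/commit cycle described above can be sketched as follows. This is a minimal illustration of OCC in general, not Arcology's actual implementation; the `Store`, `execute`, and `try_commit` names and the version-per-key scheme are assumptions made for the example.

```python
class Store:
    """A toy versioned key-value store (hypothetical, for illustration)."""
    def __init__(self):
        self.data = {}      # key -> value
        self.version = {}   # key -> version number, bumped on every write

    def read(self, key):
        return self.data.get(key), self.version.get(key, 0)

def execute(store, tx):
    """Run a transaction optimistically, recording the version of every read."""
    read_set = {}
    for key in tx["reads"]:
        _, ver = store.read(key)
        read_set[key] = ver
    return read_set, dict(tx["writes"])

def try_commit(store, read_set, writes):
    """Validation phase: commit only if no read key changed since execution."""
    for key, ver in read_set.items():
        if store.version.get(key, 0) != ver:
            return False  # conflict detected: caller reverts and re-executes
    for key, value in writes.items():
        store.data[key] = value
        store.version[key] = store.version.get(key, 0) + 1
    return True
```

When `try_commit` returns `False`, the transaction is simply re-executed against the current state, which is the revert-and-retry behavior described above.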

Additionally, in a sequential execution design, transaction ordering has little effect on execution efficiency. In parallel execution, however, all other factors being equal, efficiency depends heavily on how transactions are organized and planned. In general, the goal is to pack as many parallelizable transactions as possible into a single generation to maximize execution efficiency.

In Arcology, the scheduler optimizes transaction processing by balancing two goals: minimizing conflicts and rollbacks while maximizing parallel execution. It significantly reduces the chance of conflicts and the need for rollbacks, and is an integral part of the concurrency control system.

Workflow

When a list of transactions enters the scheduler, the scheduler first performs a static analysis to assess transaction characteristics such as dependencies, conflict history, resource requirements, and priorities. Contract developers can also manually specify dependency statuses to avoid unnecessary conflicts and rollbacks.

The scheduler takes into account several factors to create a scheduling plan:

  • Resources Available: The availability of system resources, such as CPU cores and memory, directly impacts the scheduling plan. By allocating resources efficiently, the scheduler maximizes utilization and throughput while preventing resource contention, bottlenecks, and performance degradation.

  • Conflict History: Interactions with contracts may have caused conflicts in the past. The scheduler maintains this history and assigns transactions involving potentially conflicting contract interactions to different generations to minimize conflicts.

  • Transaction Characteristics: Different transactions have varying characteristics that influence their scheduling. For example:

    1. Transaction Priority: High-priority transactions might need to be scheduled before lower-priority ones.

    2. Resource Requirements: Some transactions require more resources than others. The scheduler needs to allocate resources appropriately to ensure fair treatment.

Eventually, the scheduler produces a series of transaction sets, called generations, for the transaction processors to execute.
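The factors above can be combined into a simple greedy grouping pass. The sketch below uses only conflict history; resource limits and priorities are omitted, and the data shapes (`(tx_id, contract)` tuples, a set of conflicting contract pairs) are assumptions for illustration, not Arcology's internal representation.

```python
def schedule(txs, conflict_pairs):
    """Greedily group transactions into generations.

    txs: list of (tx_id, contract) tuples.
    conflict_pairs: set of frozensets, each holding two contracts that
        have conflicted in the past.
    Returns a list of generations (lists of transactions).
    """
    generations = []
    for tx_id, contract in txs:
        placed = False
        for gen in generations:
            # A transaction joins a generation only if it has no known
            # conflict with any contract already scheduled in it.
            if all(frozenset((contract, other)) not in conflict_pairs
                   for _, other in gen):
                gen.append((tx_id, contract))
                placed = True
                break
        if not placed:
            # No compatible generation: open a new one.
            generations.append([(tx_id, contract)])
    return generations
```

For example, with a recorded conflict between contracts A and B, transactions touching A and C share one generation while the B transaction is pushed into the next.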

Generation

A generation is a set of commutative transactions whose final state is invariant with respect to execution order. Transactions within the same generation are eligible for full parallel execution, while generations are processed sequentially. All transactions in one generation must finish before the next generation begins.
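The execution contract implied above — full parallelism inside a generation, a hard barrier between generations — can be sketched like this. The thread pool and `apply_tx` callback are illustrative assumptions, not Arcology's transaction-processor API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_block(generations, apply_tx):
    """Execute a block's generations in order.

    generations: list of generations, each a list of transactions.
    apply_tx: callable executing a single transaction.
    """
    results = []
    for gen in generations:
        # All transactions of one generation may run concurrently...
        with ThreadPoolExecutor(max_workers=len(gen)) as pool:
            results.extend(pool.map(apply_tx, gen))
        # ...and leaving the `with` block waits for every task, acting
        # as the barrier before the next generation starts.
    return results
```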

A block consists of 1 to N generations. In the best case, all transactions in the block fit into a single generation, fully leveraging parallel execution. In the worst case, each generation contains only one transaction, which is equivalent to serial execution.

Deferred Transaction

In parallel computing, a common issue is that not all tasks can run safely in parallel; many contain serial portions or shared-resource dependencies that require synchronization. Well-designed solutions often follow the same principles as divide-and-conquer algorithms.

Arcology’s Deferred Execution mechanism falls into this category. Because the design targets a blockchain environment with full EVM compatibility, it must also respect the natural boundaries of transaction independence and determinism. This is exactly what Deferred Execution is designed to achieve.

A Deferred Transaction serves as the controlled synchronization point for the invocations of the same function made in the previous generation. It acts as the explicit join step, executing only after all relevant prior calls have completed, and is used to aggregate or finalize their effects.

What Happens When Deferred Execution Is Enabled

Contract developers can enable deferred execution in the contract constructor, instructing the scheduler to create a deferred transaction whenever possible. At the beginning of each block, the scheduler scans all transactions in the current generation and identifies calls to contracts and functions with deferred execution enabled. If more than one such call to the same function appears in the same generation, the scheduler inserts an additional generation and moves one of these calls to it as a deferred transaction.

Within a generation, EOA transactions calling the same (contract, selector) with no known conflicts run in parallel; one call is deterministically deferred to the next generation, and the generation boundary serves as the controlled synchronization point. The deferred transaction acts as the join that finalizes and aggregates the prior results.
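The scan-and-defer step can be sketched as below. The `(tx_id, contract, selector)` tuples and the `deferred_enabled` set are illustrative assumptions; the rule shown is the one described above: more than one call to the same deferred-enabled target in a generation causes one call to be moved into a following generation.

```python
def insert_deferred(generation, deferred_enabled):
    """Split a generation, deferring one call per deferred-enabled target.

    generation: list of (tx_id, contract, selector) tuples.
    deferred_enabled: set of (contract, selector) pairs with deferred
        execution enabled.
    Returns (kept, deferred): the trimmed generation and the extra
        generation of deferred transactions (possibly empty).
    """
    by_target = {}
    for tx in generation:
        _, contract, selector = tx
        by_target.setdefault((contract, selector), []).append(tx)

    kept, deferred = [], []
    for target, calls in by_target.items():
        if target in deferred_enabled and len(calls) > 1:
            # Keep all but one call; deterministically defer the last.
            kept.extend(calls[:-1])
            deferred.append(calls[-1])
        else:
            kept.extend(calls)
    return kept, deferred
```

On the four-transaction example below, this yields three calls in the first generation and one deferred call that aggregates their effects.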

Example:

There are four transactions calling the same contract and function. The scheduler has no prior knowledge of any conflicts between them.

  • Without deferred execution enabled: All four transactions are placed in a single generation, allowing full parallel execution.

  • With deferred execution enabled: Only three transactions remain in the first generation, while one is deferred to the next generation. This creates an aggregation point for the results of the previous generation.


Conflict Feedback

The scheduler keeps track of all conflict information reported by the conflict detector to optimize its execution scheduling for future blocks. Conflict history provides insights into how transactions have interacted in the past. By analyzing past conflicts and their resolutions, the scheduler can better predict potential conflicts and place conflict-prone transactions into serial execution sequences.
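The feedback loop described above amounts to recording conflicting pairs and consulting them when building future generations. The structure below is a sketch under that assumption; the class and method names are hypothetical, not Arcology's API.

```python
class ConflictHistory:
    """Records contract pairs reported by the conflict detector."""

    def __init__(self):
        self.pairs = set()  # frozensets of two conflicting contracts

    def report(self, contract_a, contract_b):
        """Called when the conflict detector observes a rollback
        involving these two contracts."""
        self.pairs.add(frozenset((contract_a, contract_b)))

    def may_parallelize(self, contract_a, contract_b):
        """Scheduler query: is it safe to co-schedule transactions
        touching these contracts in the same generation?"""
        return frozenset((contract_a, contract_b)) not in self.pairs
```

Transactions whose contracts fail the `may_parallelize` check are then placed into separate generations, i.e. a serial execution sequence.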
