Class ConcurrentList<E>

java.lang.Object
com.cedarsoftware.util.ConcurrentList<E>
Type Parameters:
E - the type of elements held in this list
All Implemented Interfaces:
Serializable, Iterable<E>, Collection<E>, Deque<E>, List<E>, Queue<E>, RandomAccess

public final class ConcurrentList<E> extends Object implements List<E>, Deque<E>, RandomAccess, Serializable
A thread-safe implementation of the List, Deque, and RandomAccess interfaces, designed for highly concurrent environments.

This implementation uses a bucket-based architecture with chunked AtomicReferenceArray storage and atomic head/tail counters, delivering lock-free performance for the most common operations: indexed reads and writes, and insertion and removal at the head or tail.

Architecture Overview

The list is structured as a series of fixed-size buckets (1024 elements each), managed through a ConcurrentHashMap. Each bucket is an AtomicReferenceArray that never moves once allocated, ensuring stable memory layout and eliminating costly array copying operations.
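
The bucket arithmetic implied above can be sketched as follows. This is a minimal model, not the library's actual internals: BUCKET_SIZE, read, and write are illustrative names, and floor division is used so that negative physical indices (head-side growth) map cleanly onto buckets.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class BucketMathSketch {
    static final int BUCKET_SIZE = 1024;

    // Buckets never move once allocated; the map only grows.
    final ConcurrentHashMap<Integer, AtomicReferenceArray<Object>> buckets =
            new ConcurrentHashMap<>();

    // Map a physical index to (bucket, slot), allocating the bucket lazily.
    // floorDiv/floorMod keep the math correct for negative indices.
    Object read(long physicalIndex) {
        int bucketId = (int) Math.floorDiv(physicalIndex, (long) BUCKET_SIZE);
        int slot = (int) Math.floorMod(physicalIndex, (long) BUCKET_SIZE);
        AtomicReferenceArray<Object> bucket = buckets.computeIfAbsent(
                bucketId, id -> new AtomicReferenceArray<>(BUCKET_SIZE));
        return bucket.get(slot);
    }

    void write(long physicalIndex, Object value) {
        int bucketId = (int) Math.floorDiv(physicalIndex, (long) BUCKET_SIZE);
        int slot = (int) Math.floorMod(physicalIndex, (long) BUCKET_SIZE);
        buckets.computeIfAbsent(bucketId, id -> new AtomicReferenceArray<>(BUCKET_SIZE))
               .set(slot, value);
    }

    public static void main(String[] args) {
        BucketMathSketch s = new BucketMathSketch();
        s.write(-1, "head side");  // negative indices land in bucket -1, slot 1023
        s.write(0, "tail side");   // bucket 0, slot 0
        System.out.println(s.read(-1) + " / " + s.read(0));
    }
}
```

Because computeIfAbsent is atomic on ConcurrentHashMap, two threads racing to create the same bucket observe a single shared AtomicReferenceArray.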

Performance Characteristics

Operation Performance Comparison
| Operation            | ArrayList + External Sync | CopyOnWriteArrayList   | Vector                    | This Implementation        |
|----------------------|---------------------------|------------------------|---------------------------|----------------------------|
| get(index)           | 🔴 O(1) but serialized    | 🟡 O(1) no locks       | 🔴 O(1) but synchronized  | 🟢 O(1) lock-free          |
| set(index, val)      | 🔴 O(1) but serialized    | 🔴 O(n) copy array     | 🔴 O(1) but synchronized  | 🟢 O(1) lock-free          |
| add(element)         | 🔴 O(1)* but serialized   | 🔴 O(n) copy array     | 🔴 O(1)* but synchronized | 🟢 O(1) lock-free          |
| addFirst(element)    | 🔴 O(n) + serialized      | 🔴 O(n) copy array     | 🔴 O(n) + synchronized    | 🟢 O(1) lock-free          |
| addLast(element)     | 🔴 O(1)* but serialized   | 🔴 O(n) copy array     | 🔴 O(1)* but synchronized | 🟢 O(1) lock-free          |
| removeFirst()        | 🔴 O(n) + serialized      | 🔴 O(n) copy array     | 🔴 O(n) + synchronized    | 🟢 O(1) lock-free          |
| removeLast()         | 🔴 O(1) but serialized    | 🔴 O(n) copy array     | 🔴 O(1) but synchronized  | 🟢 O(1) lock-free          |
| add(middle, element) | 🔴 O(n) + serialized      | 🔴 O(n) copy array     | 🔴 O(n) + synchronized    | 🟡 O(n) + write lock       |
| remove(middle)       | 🔴 O(n) + serialized      | 🔴 O(n) copy array     | 🔴 O(n) + synchronized    | 🟡 O(n) + write lock       |
| Concurrent reads     | ❌ Serialized             | 🟢 Fully parallel      | ❌ Serialized             | 🟢 Fully parallel          |
| Concurrent writes    | ❌ Serialized             | ❌ Serialized (copy)   | ❌ Serialized             | 🟢 Parallel head/tail ops  |
| Memory efficiency    | 🟡 Resizing overhead      | 🔴 Constant copying    | 🟡 Resizing overhead      | 🟢 Granular allocation     |

* O(1) amortized, may trigger O(n) array resize

Key Advantages

  • Lock-free deque operations: addFirst, addLast, removeFirst, removeLast use atomic CAS operations
  • Lock-free random access: get() and set() operations require no synchronization
  • Optimal memory usage: No wasted capacity from exponential growth strategies
  • Stable memory layout: Buckets never move, reducing GC pressure and improving cache locality
  • Scalable concurrency: Read operations scale linearly with CPU cores
  • Minimal contention: Only middle insertion/removal requires write locking
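
As a rough illustration of the lock-free tail append, the sketch below claims a slot with a single atomic increment before publishing the element. It is a simplified model under stated assumptions, not the library's actual code: a real implementation must also allocate buckets lazily and coordinate with removal, which this fixed-capacity sketch omits.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class TailAppendSketch {
    final AtomicLong tail = new AtomicLong(0);
    final AtomicReferenceArray<String> slots = new AtomicReferenceArray<>(1 << 16);

    // Claim a slot with one atomic increment, then publish the element.
    // No lock is taken; concurrent callers always claim distinct slots,
    // so writers never contend on the same array cell.
    void addLast(String e) {
        long index = tail.getAndIncrement();
        slots.set((int) index, e);
    }

    String get(int i) { return slots.get(i); }
    int size() { return (int) tail.get(); }

    public static void main(String[] args) {
        TailAppendSketch q = new TailAppendSketch();
        q.addLast("a");
        q.addLast("b");
        System.out.println(q.size() + " elements, first = " + q.get(0));
    }
}
```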

Use Case Recommendations

  • 🟢 Excellent for: Queue/stack patterns, append-heavy workloads, high-concurrency read access, producer-consumer scenarios, work-stealing algorithms
  • 🟢 Very good for: Random access patterns, bulk operations, frequent size queries
  • 🟡 Acceptable for: Moderate middle insertion/deletion (requires the write lock and an O(n) rebuild, but still outperforms the alternatives above)
  • ❌ Consider alternatives for: Frequent middle insertion/deletion with single-threaded access

Thread Safety

This implementation provides full thread safety with minimal performance overhead:

  • Lock-free reads: All get operations and iterations are completely lock-free
  • Lock-free head/tail operations: Deque operations use atomic CAS for maximum throughput
  • Minimal locking: Only middle insertion/removal requires a write lock
  • Consistent iteration: Iterators provide a consistent snapshot view
  • ABA-safe: Atomic operations prevent ABA problems in concurrent scenarios
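
The snapshot guarantee can be modeled as an iterator that pins the tail counter at creation time, so elements appended afterwards are invisible to it. All names here are illustrative assumptions, not the library's internals:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class SnapshotIterSketch {
    final AtomicLong tail = new AtomicLong(0);
    final AtomicReferenceArray<Object> slots = new AtomicReferenceArray<>(4096);

    void addLast(Object e) { slots.set((int) tail.getAndIncrement(), e); }

    // The iterator captures the tail index once; concurrent appends after
    // this point do not affect the iteration, so no lock is needed.
    Iterator<Object> iterator() {
        final long end = tail.get();
        return new Iterator<Object>() {
            long i = 0;
            public boolean hasNext() { return i < end; }
            public Object next() {
                if (i >= end) throw new NoSuchElementException();
                return slots.get((int) i++);
            }
        };
    }

    public static void main(String[] args) {
        SnapshotIterSketch s = new SnapshotIterSketch();
        s.addLast("a");
        s.addLast("b");
        Iterator<Object> it = s.iterator();
        s.addLast("c");                 // appended after the snapshot
        while (it.hasNext()) {
            System.out.println(it.next()); // "c" is never printed
        }
    }
}
```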

Implementation Details

  • Bucket size: 1024 elements per bucket, balancing cache locality against allocation granularity
  • Storage: ConcurrentHashMap of AtomicReferenceArray buckets
  • Indexing: Atomic head/tail counters with negative indexing support
  • Memory management: Lazy bucket allocation, automatic garbage collection of unused buckets
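
The negative-indexing scheme above can be sketched with two counters: addFirst decrements a head counter (which may go below zero) while addLast increments a tail counter, and logical index i maps to physical index head + i. The names below are illustrative assumptions, not the library's actual fields:

```java
import java.util.concurrent.atomic.AtomicLong;

public class HeadTailSketch {
    final AtomicLong head = new AtomicLong(0); // first occupied physical index
    final AtomicLong tail = new AtomicLong(0); // one past the last occupied index

    // addFirst claims the slot just before head; the index may go negative.
    long claimFirst() { return head.decrementAndGet(); }

    // addLast claims the slot at tail and advances it.
    long claimLast()  { return tail.getAndIncrement(); }

    // Logical index 0 is always the current head element.
    long physicalIndex(int logical) { return head.get() + logical; }

    int size() { return (int) (tail.get() - head.get()); }

    public static void main(String[] args) {
        HeadTailSketch d = new HeadTailSketch();
        d.claimFirst();  // physical index -1
        d.claimLast();   // physical index 0
        System.out.println("size=" + d.size()); // two elements spanning -1..0
    }
}
```

Because size is derived from the two counters rather than a separate field, size() stays O(1) without any additional synchronization.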

Usage Examples


 // High-performance concurrent queue
 ConcurrentList<Task> taskQueue = new ConcurrentList<>();
 
 // Producer threads
 taskQueue.addLast(new Task());     // O(1) lock-free
 
 // Consumer threads  
 Task task = taskQueue.pollFirst(); // O(1) lock-free
 
 // Stack operations
 ConcurrentList<String> stack = new ConcurrentList<>();
 stack.addFirst("item");            // O(1) lock-free push
 String item = stack.removeFirst(); // O(1) lock-free pop
 
 // Random access
 String value = stack.get(index);   // O(1) lock-free
 stack.set(index, "new value");     // O(1) lock-free
 
Author:
John DeRegnaucourt ([email protected])
Copyright (c) Cedar Software LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.