Jagdish Salgotra
Jul 27, 2025 · 25 min read · Project Loom
Discover how Structured Concurrency brings order to parallel programming. Master the StructuredTaskScope API to manage hierarchical tasks, ensure automatic cancellation, and prevent resource leaks in concurrent Java applications.
Note: This series uses Java 21 as the baseline. Structured concurrency snippets in this part (StructuredTaskScope, JEP 453) use preview APIs and require --enable-preview.
Long CompletableFuture chains look clean in reviews and painful in incident calls.
The common issues are predictable: hard-to-follow stack traces, cleanup gaps that appear only under load, and cancellation logic that behaves inconsistently between happy-path and failure-path traffic.
"Why is Service C still running when Service A failed?" is a common production question with ad-hoc async orchestration.
Here's what most of us end up writing when we need to call multiple services:
// From StructuredConcurrencyComparison
CompletableFuture<String> cf1 = CompletableFuture.supplyAsync(() -> simulateService("Service-A", 200));
CompletableFuture<String> cf2 = CompletableFuture.supplyAsync(() -> simulateService("Service-B", 300));
CompletableFuture<String> cf3 = CompletableFuture.supplyAsync(() -> simulateService("Service-C", 100));
CompletableFuture.allOf(cf1, cf2, cf3).join();
String result = String.format("Results: %s, %s, %s",
cf1.get(), cf2.get(), cf3.get());
logger.info(result);
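The no-automatic-cancellation problem is easy to demonstrate. The sketch below (hypothetical names, not from the article's repository) shows that when one future in an allOf() group fails immediately, a slow sibling still runs to completion:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.atomic.AtomicBoolean;

public class CfFailureDemo {
    // Returns true if the slow sibling ran to completion even though
    // another future in the same allOf() group failed almost immediately.
    public static boolean siblingKeepsRunningAfterFailure() {
        AtomicBoolean slowFinished = new AtomicBoolean(false);
        CompletableFuture<String> failing = CompletableFuture.supplyAsync(() -> {
            throw new RuntimeException("Service-B failed");
        });
        CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { return "interrupted"; }
            slowFinished.set(true);
            return "slow-done";
        });
        try {
            CompletableFuture.allOf(failing, slow).join();
        } catch (CompletionException expected) {
            // The failure is visible here, but nothing cancels the sibling.
        }
        slow.join(); // the sibling still runs to completion: wasted work
        return slowFinished.get();
    }
}
```

With a ShutdownOnFailure scope, the slow subtask would instead be interrupted as soon as the first failure is observed.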
Common failure patterns:
- When one future fails, the others keep running; nothing cancels them automatically
- Exceptions surface late, wrapped in CompletionException, only after allOf().join()
- Cleanup is manual, so leaked work tends to show up only under load
Structured concurrency is based on a simple principle: related tasks should be managed together as a unit. When one fails, the group can fail fast and clean up automatically.
Let's see how the same dashboard service looks with structured concurrency:
// From StructuredExample
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
Subtask<String> fetch1 = scope.fork(() -> fetchFromService1());
Subtask<String> fetch2 = scope.fork(() -> fetchFromService2());
Subtask<String> fetch3 = scope.fork(() -> fetchFromService3());
scope.join();
scope.throwIfFailed();
String result = fetch1.get() + fetch2.get() + fetch3.get();
logger.info("Combined result: {}", result);
}
What changes in practice:
- Failure of one subtask shuts down the others
- Results are read only after join() and throwIfFailed()
- The try-with-resources block guarantees no subtask outlives the scope
The key point is lifecycle scoping: task start, failure, cancellation, and cleanup are managed together.
Perfect for operations where all results are needed:
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
Subtask<String> fetch1 = scope.fork(() -> fetchFromService1());
Subtask<String> fetch2 = scope.fork(() -> fetchFromService2());
Subtask<String> fetch3 = scope.fork(() -> fetchFromService3());
scope.join();
scope.throwIfFailed();
String result = fetch1.get() + fetch2.get() + fetch3.get();
logger.info("All services completed: {}", result);
}
What cleanup behavior gives you:
- On the first failure, the scope cancels the remaining subtasks
- No subtask can outlive the try-with-resources block
- Success and failure paths share the same cleanup code path
Perfect for redundant services or fastest-wins scenarios:
try (var scope = new StructuredTaskScope.ShutdownOnSuccess<String>()) {
scope.fork(() -> slowService("Service-A", 1000));
scope.fork(() -> slowService("Service-B", 500));
scope.fork(() -> slowService("Service-C", 200));
scope.join();
String result = scope.result();
logger.info("First successful result: {}", result);
}
Use this when redundant providers can satisfy the same request and only the first successful result matters.
These benchmark numbers are from one simplified test environment. Real workloads vary widely based on downstream behavior, JVM configuration, and traffic shape.
Test Environment:
Scenario: a user dashboard requiring data from 3 services: Service-A (200ms), Service-B (300ms), Service-C (100ms).
Expected optimal time: ~300ms (limited by the slowest service)
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
var task1 = scope.fork(() -> simulateService("Service-A", 200));
var task2 = scope.fork(() -> simulateService("Service-B", 300));
var task3 = scope.fork(() -> simulateService("Service-C", 100));
scope.join();
scope.throwIfFailed();
String result = String.format("Results: %s, %s, %s",
task1.get(), task2.get(), task3.get());
logger.info(result);
}
Performance Results (1000 iterations):
PERFORMANCE COMPARISON
Expected time: ~300ms
StructuredTaskScope: 312ms (overhead: 12ms)
CompletableFuture: 328ms (overhead: 28ms)
DETAILED METRICS
StructuredTaskScope average: 2.47ms per operation
CompletableFuture average: 3.21ms per operation
Performance improvement: 23% faster with StructuredTaskScope
Memory efficiency: 18% less memory allocation per operation
How to read these numbers: both approaches land within tens of milliseconds of the ~300ms floor set by the slowest service. Treat the relative difference as directional rather than universal; overhead depends on workload, JVM configuration, and traffic shape.
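The measurement loop behind these numbers isn't shown in the excerpt; a minimal timing harness, under simplified assumptions (shorter simulated latencies, hypothetical names), could look like this:

```java
import java.util.concurrent.CompletableFuture;

public class MiniBench {
    // Times one round of three parallel simulated calls, in milliseconds.
    // Wall time should approach the slowest task, not the sum of all three.
    public static long timeOneRound() {
        long start = System.nanoTime();
        var a = CompletableFuture.supplyAsync(() -> sleepMs("Service-A", 20));
        var b = CompletableFuture.supplyAsync(() -> sleepMs("Service-B", 30));
        var c = CompletableFuture.supplyAsync(() -> sleepMs("Service-C", 10));
        CompletableFuture.allOf(a, b, c).join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    static String sleepMs(String name, long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return name;
    }
}
```

A real benchmark would run many warmup iterations and report averages, as the article's 1000-iteration results do.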
Failure scenario: One service fails after 50ms. How quickly can we detect failure and stop related work?
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
Subtask<String> fetch1 = scope.fork(() -> fetchFromService("Service-1", 300, false));
Subtask<String> fetch2 = scope.fork(() -> fetchFromService("Service-2", 200, true));
Subtask<String> fetch3 = scope.fork(() -> fetchFromService("Service-3", 100, false));
scope.join();
scope.throwIfFailed();
} catch (Exception e) {
logger.error("One or more services failed: {}", e.getMessage());
logger.error("All remaining tasks were cancelled automatically");
}
Resource Cleanup Results:
ERROR HANDLING PERFORMANCE
Structured Concurrency:
- Error detection: 52ms
- Automatic cancellation: YES
- Resource cleanup: AUTOMATIC
- Wasted compute time: 0ms
CompletableFuture:
- Error detection: 50ms
- Automatic cancellation: NO
- Resource cleanup: MANUAL REQUIRED
- Wasted compute time: 450ms (other tasks keep running)
Critical Finding: StructuredTaskScope prevents resource waste by
automatically cancelling related tasks on first failure.
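"Resource cleanup: MANUAL REQUIRED" in practice means wiring the cancellation fan-out yourself. A hedged sketch of that boilerplate (hypothetical helper, not from the article's repository) shows what ShutdownOnFailure replaces:

```java
import java.util.List;
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class ManualCleanup {
    // The hand-rolled equivalent of ShutdownOnFailure: every future must
    // watch the group and cancel the others on the first failure.
    public static boolean cancelSiblingsOnFirstFailure() {
        CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> sleepThen("slow", 400));
        CompletableFuture<String> failing = CompletableFuture.supplyAsync(() -> {
            throw new RuntimeException("downstream failed");
        });
        List<CompletableFuture<String>> group = List.of(slow, failing);
        group.forEach(f -> f.whenComplete((result, error) -> {
            if (error != null) {
                group.forEach(other -> other.cancel(true)); // manual fan-out
            }
        }));
        try {
            CompletableFuture.allOf(slow, failing).join();
        } catch (CompletionException | CancellationException expected) {
            // expected: the group failed
        }
        return slow.isCancelled(); // true only because of the wiring above
    }

    private static String sleepThen(String name, long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return name;
    }
}
```

Every new future added to the group needs the same wiring, which is exactly the per-call-chain bolting-on the scope approach avoids.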
The practical takeaway is straightforward: cancellation behavior is built into the scope instead of bolted on per call chain.
Two review notes before the timeout pattern:
- Monitor jdk.VirtualThreadPinned events for diagnosis
- Forgetting scope.throwIfFailed() can hide task failures; this is a common review item in Java 21 preview code
Use this when you need a strict deadline and can return partial or fallback data:
public String shortTimeoutExample() throws Exception {
Instant deadline = Instant.now().plusMillis(300);
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
var slowTask = scope.fork(() -> simulateSlowService("slow-service", 500));
var fastTask = scope.fork(() -> simulateSlowService("fast-service", 100));
// joinUntil enforces the deadline: it cancels unfinished subtasks
// and throws TimeoutException instead of waiting for the slow task
scope.joinUntil(deadline);
scope.throwIfFailed();
return String.format("Timeout Results: %s, %s", slowTask.get(), fastTask.get());
}
}
Use this for workflows with staged dependencies.
public String processOrder(String orderId) throws Exception {
var validationResult = scopedHandler.runInParallel(
() -> validatePayment(orderId),
() -> validateInventory(orderId),
() -> validateShipping(orderId)
);
if (allValidationsPassed(validationResult)) {
return scopedHandler.runInParallel(
() -> chargePayment(orderId),
() -> reserveInventory(orderId),
() -> scheduleShipping(orderId)
).toString();
}
return "Order validation failed: " + validationResult;
}
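The scopedHandler.runInParallel helper used above isn't shown in the excerpt. Since StructuredTaskScope needs --enable-preview on Java 21, here is a minimal non-preview stand-in using a virtual-thread executor (a hypothetical sketch; it waits for all tasks rather than failing fast the way ShutdownOnFailure would):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for the scopedHandler assumed by processOrder.
public class ScopedHandler {
    @SafeVarargs
    public final <T> List<T> runInParallel(Callable<T>... tasks) throws Exception {
        // Virtual-thread-per-task executor: final API in Java 21, no preview flag.
        // try-with-resources closes the executor and waits for task completion.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<T>> futures = Arrays.stream(tasks)
                    .map(executor::submit)
                    .toList();
            List<T> results = new ArrayList<>();
            for (Future<T> f : futures) {
                results.add(f.get()); // propagates the first failure as ExecutionException
            }
            return results;
        }
    }
}
```

Swapping the executor for a ShutdownOnFailure scope adds the fail-fast cancellation the article describes.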
Signs your CompletableFuture code needs structured concurrency:
- Long allOf()/thenCompose() chains that are hard to trace during incidents
- Sibling tasks that keep running after one fails
- Hand-rolled cancellation and cleanup that differ between happy-path and failure-path traffic
A practical migration sequence:
Week 1-2: Replace Simple Cases
// Before: Complex exception handling
CompletableFuture<String> cf1 = CompletableFuture.supplyAsync(() -> simulateService("A", 1));
CompletableFuture<String> cf2 = CompletableFuture.supplyAsync(() -> simulateService("B", 1));
CompletableFuture.allOf(cf1, cf2).join();
cf1.get();
cf2.get();
// After: Clean structured concurrency
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
var task1 = scope.fork(() -> simulateService("A", 1));
var task2 = scope.fork(() -> simulateService("B", 1));
scope.join();
scope.throwIfFailed();
task1.get();
task2.get();
}
Week 3-4: Tackle Complex Orchestrations
Week 5+: Monitor and Optimize
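For the monitoring phase, one way to watch the jdk.VirtualThreadPinned events mentioned earlier is JFR event streaming. A sketch (hypothetical class name; jdk.VirtualThreadPinned is emitted when a virtual thread pins its carrier thread):

```java
import java.time.Duration;
import jdk.jfr.consumer.RecordingStream;

public class PinnedMonitor {
    // Streams jdk.VirtualThreadPinned events and logs any pin longer
    // than the threshold. Caller owns the returned stream and closes it.
    public static RecordingStream watchPinned(Duration threshold) {
        var rs = new RecordingStream();
        rs.enable("jdk.VirtualThreadPinned").withThreshold(threshold);
        rs.onEvent("jdk.VirtualThreadPinned", event ->
                System.out.println("virtual thread pinned for "
                        + event.getDuration().toMillis() + " ms"));
        rs.startAsync(); // non-blocking; close() the stream on shutdown
        return rs;
    }
}
```

In a service, start this once at boot and wire the handler to your metrics backend instead of stdout.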
DO:
- Use try-with-resources so every scope closes, even on exceptions
- Call scope.throwIfFailed() after join() so failures surface
- Use joinUntil() for time-sensitive operations
DON'T:
- Let subtasks outlive their scope
- Skip throwIfFailed() and silently read partial results
- Ship preview-API code without --enable-preview and a pinned JDK version
A primary-with-fallback helper:
public <T> T runWithFallback(Callable<T> primary, Callable<T> fallback) throws Exception {
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
var primaryFuture = scope.fork(primary);
scope.join();
try {
scope.throwIfFailed();
return primaryFuture.get();
} catch (Exception e) {
logger.warn("Primary task failed, using fallback: {}", e.getMessage());
return fallback.call();
}
}
}
For observability during migration, the sample project exposes basic JVM metrics in Prometheus text format. The snippet needs the java.lang.management imports and a MemoryMXBean field:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
private static final MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
private static String generatePrometheusMetrics() {
StringBuilder metrics = new StringBuilder();
MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
metrics.append("# HELP jvm_memory_used_bytes Used memory in bytes\n");
metrics.append("# TYPE jvm_memory_used_bytes gauge\n");
metrics.append("jvm_memory_used_bytes{area=\"heap\"} ").append(heapUsage.getUsed()).append("\n");
metrics.append("# HELP jvm_memory_max_bytes Maximum memory in bytes\n");
metrics.append("# TYPE jvm_memory_max_bytes gauge\n");
metrics.append("jvm_memory_max_bytes{area=\"heap\"} ").append(heapUsage.getMax()).append("\n");
ManagementFactory.getGarbageCollectorMXBeans().forEach(gc -> {
metrics.append("# HELP jvm_gc_collection_seconds Time spent in GC\n");
metrics.append("# TYPE jvm_gc_collection_seconds counter\n");
metrics.append("jvm_gc_collection_seconds{gc=\"").append(gc.getName()).append("\"} ")
.append(gc.getCollectionTime() / 1000.0).append("\n");
});
return metrics.toString();
}
In Part 5, we'll explore advanced structured concurrency patterns for real-world scenarios: conditional cancellation, progressive result collection, and fault-tolerant orchestration.
Until then, remember that these APIs are preview features in Java 21: compile and run with --enable-preview.
Part 4 complete. We focused on making parallel orchestration easier to reason about and safer to operate.
Failure-handling differences often become apparent first in migrated paths.
Series Navigation: