Structured Concurrency, AOT, WASM, and GraalVM: Conference-spree Intermezzo - JVM Weekly vol. 129
Darn - since I'm hopping from one conference to another (I have a love-hate relationship with this May), I planned to focus this edition on them, but then I realized you'd get bored, and so would I.
So, we’ll summarize the conferences collectively next week, and today I’ll return to what tigers (at least one tiger) love the most.
I hope I read the room properly 😉.
The main reason I decided to interrupt the plan of sharing conference updates was the new preview of Structured Concurrency. Well, this caught me by surprise – OpenJDK did something unexpected.
Usually, by the time we get to late-stage preview APIs in Java, we’re just seeing cosmetic changes, which is why JEP 505: Structured Concurrency (Fifth Preview) stands out. It brings some noticeable improvements to structured concurrency, a key component of Project Loom (you probably haven’t heard that name in a while, right?) and of working with virtual threads.
In previous versions (the incubators JEP 428 and 437, plus four earlier previews – JEPs 453, 462, 480, and 499), the main structure was StructuredTaskScope, traditionally created through a constructor. The API already offered consistent task life cycle management, cancellation propagation, and better observability. However, configuring completion policies was limited, and code readability suffered in more complex cases.
Preview 5 changes this in several ways. StructuredTaskScope is now opened via static factories (StructuredTaskScope.open()), and the default call creates a “fail-fast” scope – when a task throws an exception, the others are cancelled, and join() propagates it. A key addition is the Joiner interface, allowing the definition of custom joining policies (e.g., “first success cancels the rest” anySuccessfulResultOrThrow() or “all must succeed” allSuccessfulOrThrow()).
A simple sample from the JEP itself:
Response handle() throws InterruptedException {
    try (var scope = StructuredTaskScope.open()) {
        Subtask<String> user = scope.fork(() -> findUser());
        Subtask<Integer> order = scope.fork(() -> fetchOrder());
        scope.join();   // Join subtasks, propagating exceptions
        // Both subtasks have succeeded, so compose their results
        return new Response(user.get(), order.get());
    }
}
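The default open() above gives the fail-fast behaviour; custom policies go through Joiner. Here's a minimal sketch of the "first success cancels the rest" variant - fetchFrom() is a hypothetical helper standing in for real I/O:

String fetchFastest() throws InterruptedException {
    // anySuccessfulResultOrThrow(): the first subtask to succeed wins,
    // the remaining ones are cancelled, and join() returns that result
    try (var scope = StructuredTaskScope.open(
            StructuredTaskScope.Joiner.<String>anySuccessfulResultOrThrow())) {
        scope.fork(() -> fetchFrom("https://mirror-a.example.com"));
        scope.fork(() -> fetchFrom("https://mirror-b.example.com"));
        return scope.join();
    }
}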
Child tasks now also inherit the parent’s ScopedValue bindings, and an extended thread dump in JSON format shows the hierarchy of StructuredTaskScope, making diagnostics easier.
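A quick sketch of that inheritance - log() is a hypothetical helper:

static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

void handle() {
    ScopedValue.where(REQUEST_ID, "req-42").run(() -> {
        try (var scope = StructuredTaskScope.open()) {
            scope.fork(() -> {
                // the forked subtask sees the binding made by its parent
                log("handling " + REQUEST_ID.get());
                return null;
            });
            scope.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}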
Perhaps it’s this change - unconventional for a late preview - that has made structured concurrency a topic of discussion lately, but it’s not the only interesting JEP that has come up recently. We also have JEP 515: Ahead-of-Time Method Profiling by Igor Veresov & John Rose.
For years, Java has suffered from the so-called cold start: HotSpot needs to gather method profiles before JIT can start aggressive optimizations, meaning the application runs slower right after startup. In cloud environments, where microservice instances are constantly rebooted, this "warm-up" process becomes a significant problem. The Leyden project thus proposed shifting expensive operations to pre-startup time, and JEP 515 is one of the solutions in line with this strategy: it suggests collecting profiling data in a special training run of the application and storing it for later in an AOT-cache, so that the next JVM start has a "hot" method map ready.
The mechanism is simple: in a training run, we add the flag -XX:AOTMode=record to capture an AOT configuration, and an -XX:AOTMode=create step then assembles both the loaded classes (via JEP 483) and the new method profiles into the cache. On the next run, we provide the file via the -XX:AOTCache=app.aot option, and the JVM loads the profiles before the compiler starts, allowing it to generate code at a higher optimization level right away, reducing the warm-up time (according to the JEP’s demo example) by about 20%.
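In concrete terms, the flag dance from JEP 483 that JEP 515 piggybacks on looks roughly like this - app.jar and com.example.App are placeholders:

# Training run: record an AOT configuration (classes + method profiles)
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App
# Assemble the cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar
# Production run: the JVM starts with a "hot" method map already in hand
java -XX:AOTCache=app.aot -cp app.jar com.example.App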
JEP 515 fits with other pieces of the Leyden puzzle: AOT class caching (JEP 483) has already reduced startup time, and the planned JEP 516 will make the cached object graphs work with any collector, speeding up applications that use ZGC. While eliminating the warm-up issue has been a recurring theme in Graal Native Image presentations for years, moving profiles to the cache could be the "missing puzzle piece" in the fight against the cold start, as highlighted in discussions on Hacker News and Reddit (another topic that’s gained attention outside the Java bubble). Thanks to JEP 515, Java 25 moves closer to the ideal of being "hot from the first millisecond," which in practice means cheaper auto-scaling (without constantly running, pre-warmed environments kept Just-in-Case) and less patience required at each redeploy.
And since we’re digging deep into JVM optimization topics, it would be a sin not to check what’s happening in Project Valhalla.
It’s been a while since we last described "raw" feature sketches from mailing lists in JVM Weekly, but when John Rose (author of the aforementioned JEP 515) shares a fresh document on the Atomic Value Access API, it’s hard to ignore – this is the missing piece between VarHandle and the upcoming value types from Valhalla. His note explains why the classic pair of VarHandle + Unsafe::getValue isn’t enough once flat entities start flying around the heap like pancakes.
Let me quickly recap the problem: VarHandle debuted in JDK 9 (precisely in JEP 193) as a sleek, safe successor to Unsafe, providing uniform access to fields, statics, and arrays in all memory modes. The API worked well with Panama (off-heap segments) and virtual threads, but Valhalla brought a problem – VarHandle can’t distinguish whether the memory holds a reference, a partially flattened 64-bit value, or a fully flattened one; and without this knowledge, the JIT and GC have to guess.
The Valhalla Atomic Value‑Access API introduces a LayoutKind enum that lets the JVM know whether a field holds a normal object reference, a flattened value that fits in one 64‑bit atom (ATOMIC_FLAT), a larger flattened value requiring non‑atomic access (NON_ATOMIC_FLAT), or a nullable/buffered variant, so it can immediately choose the correct load‑store strategy.
Three helper queries - isFlat, isConsistent, and isPortableAtomic - let library code ask whether the layout is flat, whether the entire value is a single, consistent 64‑bit atom, and whether that atom can be handled safely on every platform. When all checks pass, the primitive copyConsistentValue copies those 64 bits from one variable to another with the same layout and, if a hidden reference is involved, triggers an internal GC hook so the pointer remains valid. That gives libraries a race‑free way to implement fast getters and setters for value classes without sacrificing portability or garbage‑collector correctness.
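To make the shape of this more concrete, here is a purely hypothetical sketch assembled from the vocabulary of Rose's note - none of these types or methods exist in any JDK yet:

// Hypothetical illustration only - reconstructed from the note's names,
// not a real JDK API.
enum LayoutKind {
    REFERENCE,             // ordinary object reference
    ATOMIC_FLAT,           // flattened value fitting in one 64-bit atom
    NON_ATOMIC_FLAT,       // larger flattened value, accessed non-atomically
    NULLABLE_ATOMIC_FLAT;  // nullable/buffered variant

    boolean isFlat()       { return this != REFERENCE; }
    boolean isConsistent() { return this == ATOMIC_FLAT || this == NULLABLE_ATOMIC_FLAT; }
    // stand-in for a real per-platform capability check
    boolean isPortableAtomic() { return isConsistent(); }
}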
In other words, another step so that Java 26 (or whatever version adopts Valhalla – I want to believe) can truly say "you write a class, it runs like a primitive," without descending into the dark alleys of Unsafe.
While we're on the topic of Valhalla, here’s another, slightly older treat I came across at 2 a.m. (true story - RSS feeds + insomnia + a new dog licking my face at random hours = problem). Brian Goetz's document In Defense of Erasure is a brief history of why Java still trusts type erasure and why Valhalla isn’t planning to abandon it in favor of full reification.
Goetz reminds us that erasure, which debuted with generics in Java 1.5, preserved backward compatibility by requiring neither JVM changes nor the retesting of hundreds of thousands of existing classes - and that's why it remains a "layer of hygiene" between the rich language model and the reduced bytecode.
However, Valhalla adds selective specialization to the old mechanism: where it makes sense (e.g., List<int> or new value types), the compiler will generate a specialized class, but elsewhere, a lightweight erased version will be used to avoid bloating the code and breaking binary compatibility. This balance is essential, and Goetz contrasts it with the "reify everything" approach known from C# or C++.
Not the newest, but still a very good read.
Since I'm in the mood for internals-related news, I’m adding another gem from the GraalVM drawer - SkipFlow, recently described by David Kozák on the project blog. This experimental extension to Native Image appeared in GraalVM for JDK 24, but only now have I had the time to dig through the paper explaining how it works.
What does it bring? Lighter container images without changing a single line of code and without lengthening build times – in some cases, the compilations are even slightly faster.
What’s the magic (of course, in Arthur C. Clarke’s definition)? SkipFlow teaches classic points-to-analysis (static code analysis that figures out which objects each variable or reference can point to at runtime) to also track primitive values and branch conditions (predicate edges), so it can eliminate methods earlier that, in the final binary, will never run. The whole trick happens before the Graal compilation phase, so the compiler gets a smaller call graph, and the analysis overhead usually decreases instead of increasing.
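Purely for illustration (this is not SkipFlow's API, just the kind of shape it prunes): because the analysis now tracks primitive values along branches, it can prove the else path below is dead and drop it - and everything reachable only from it - from the image:

class Dispatch {
    // in this particular image, the analysis can prove mode() always returns 1
    static int mode() { return 1; }

    static void handle() {
        if (mode() == 1) {
            fastPath();
        } else {
            legacyPath(); // provably unreachable, so it never reaches the binary
        }
    }

    static void fastPath()   { System.out.println("fast"); }
    static void legacyPath() { System.out.println("legacy"); }
}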
The numbers look promising: tests on Spring/Micronaut microservices, as well as the popular Renaissance and DaCapo benchmark suites, show a reduction of reachable methods by ~9% and of binary size by up to 7% (Renaissance) or over 11% (DaCapo). More importantly, the overall analysis time dropped by about 4.4%, as SkipFlow removes excess code before the main compilation. In practice, this translates into less memory in containers and faster rollouts – every extra MB counts when CI/CD pushes hundreds of images daily.
So if your native microservices are getting bloated after each release, SkipFlow might be a free win before you start manually cutting dependencies. The mechanism is disabled by default in GraalVM for JDK 24, but Oracle has announced that it will be enabled by default in GraalVM for JDK 25.
And still more GraalVM news.
Back in March, GraalVM 24 introduced the first wave of WebAssembly features, including stable .wasm loading via import in GraalJS – but we only saw its true potential on March 27 at Wasm I/O 2025 in Barcelona, where Patrick Ziegler and Fabio Niephaus from Oracle Labs presented the fresh Native Image → Wasm backend and called it "the next step toward Write Once, Run Anywhere for the browser."
On stage, they showed how a regular JDK 25 EA project can be compiled with a single command (native-image --tool:svm-wasm) into a .wasm file (~1 MB without optimization) and then run in a JS shell with full garbage collection and exception handling. The creators highlighted three key benefits of the new backend:
First, full Wasm GC support: the Graal compiler emits memory structures based on the WebAssembly GC proposal, so no additional GC runtime is needed in JavaScript, nor is manual resource management required.
Second, the project enables interop between Java and JavaScript – the built module exposes Java methods as regular JS functions, and from JVM code, host functions can be called via GraalWasm, opening the door to extending existing web applications with Java-written logic.
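On the JVM side, that embedding goes through the standard Polyglot API. A hedged sketch under stated assumptions - app.wasm and its addTwo export are placeholders, and the exact member-lookup path has shifted between GraalWasm releases:

import java.nio.file.Files;
import java.nio.file.Path;
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;
import org.graalvm.polyglot.io.ByteSequence;

class WasmEmbedding {
    public static void main(String[] args) throws Exception {
        byte[] binary = Files.readAllBytes(Path.of("app.wasm")); // placeholder module
        try (Context context = Context.newBuilder("wasm").build()) {
            Source source = Source.newBuilder("wasm",
                    ByteSequence.create(binary), "app").build();
            Value module = context.eval(source);        // instantiate the module
            Value addTwo = module.getMember("addTwo");  // hypothetical export
            System.out.println(addTwo.execute(40, 2).asInt());
        }
    }
}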
Third, the backend can "swap" JDK I/O-dependent parts for Wasm-appropriate methods, so only the real application logic, not the entire Java ecosystem, reaches Wasm.
From a developer’s perspective, the biggest win is gaining a new distribution method: the same Helidon service compiled on Linux x64 can now run in a wasmtime container, in the browser via ES-module-import, or in an embedded JVM as a GraalWasm module – all from the same artifact. What’s more, the March release of GraalVM 24 stabilized the js.webassembly flag, so the same binary code can be loaded in both Node-style JS and GraalJS on the JVM without experimental switches. This allows companies to move part of their Java logic closer to the client (edge computing, offline PWAs) without rewriting it in TypeScript or Rust.
Oracle Labs also outlined a roadmap for the second half of 2025: enabling WASI Preview 1 by default, support for the Relaxed SIMD proposal (already merged in 24.2), and continued work on a thread- and network-aware runtime so that Wasm modules can take advantage of java.net and java.util.concurrent without a heavy sandbox. When you add the ability to generate native images both as executables (like .exe) and as .wasm from a single source, GraalVM becomes the most universal Java compiler out there.
To wrap up – since we’re surfing the runtime news, let’s take a look at the recent post Present and Future of Kotlin for Web by Artem Kobzar and Zalim Bashorov. In it, JetBrains declares its full support for Kotlin Multiplatform on the front end and lays out five priorities – better IDE tooling, pushing Kotlin/Wasm + Compose Multiplatform to Beta status, a compatibility mode for older browsers, "civilizing" interoperability in Kotlin/JS, and jumping to the latest ECMAScript standard. Do we have a "Year of WASM" on our hands?
The second thread covers concrete improvements on the compiler side - incremental builds, which according to the post already shorten build times by up to 2x, plus better Wasm & JS debugging support in the browser. Thanks to this, Compose Web is catching up to the other platforms, and the team also highlights that the Wasm runtime now relies on the previously mentioned WasmGC support across all major engines (Safari joined in December 2024).
The next set of changes is a cleanup of @JsExport – after the restrictions are removed, most classes will be exportable "with a click," and where we don’t control the sources, we’ll get a Gradle DSL that generates the necessary wrappers. Meanwhile, Kotlin/JS is switching by default to ES-Latest with a Babel fallback.
The roadmap for the coming months already shows multithreading in Wasm, per-module compilation (faster dev-loop, lazy loading), support for modern bundlers (esbuild/Vite/bun), and an automatic generator for wrappers from TypeScript types. If these plans succeed, Kotlin will no longer be "a JVM curiosity on the front end," but a real contender in the JS/Wasm world, connecting backend, mobile, and browser UI with a single codebase.
And get ready for more Kotlin news because next week I’m heading to KotlinConf, so I’m expecting even more announcements.
And I’ll leave you with a song I can’t get out of my head.
I even finally started playing Undertale on my long-neglected PS Vita yesterday. I hope it is as good as people say.