Back to the Roots: Let's Talk About Leyden, Loom, and Valhalla - JVM Weekly vol. 96
Today, we're returning to what I hope JVM Weekly readers enjoy the most - the JVM.
First, a small explanation, as my plans for this edition got a bit disrupted. I had hoped that by today we would have most of the videos from the JVM Language Summit, especially those from the Valhalla and Leyden presentations, since those projects are currently among the most important. However, the videos are still trickling in, so instead of putting out another "waiting" edition, I decided to tackle some pending JVM-related topics (new releases may be cool, but there's only so much of them one can handle).
Therefore, I suspect this topic will spill over into two editions: today we'll cover Leyden, Loom, and Valhalla, and next week we'll at least touch on Lilliput and Babylon. I think we'll also provide a broader summary of the JVM Language Summit and its most interesting announcements. So, after this long introduction, enjoy!
1. What's New in Project Leyden
Alright, let's start by reminding ourselves what the hero of this section is.
Project Leyden is an initiative under the OpenJDK project aimed at improving Java application performance by reducing startup time, decreasing resource usage, and speeding up the achievement of optimal performance. It's a rather broad "umbrella" that could probably cover a lot, but the project focuses on shifting some operations that are usually performed dynamically during the application's runtime to the preparation stage – the so-called Ahead-of-Time (AOT) phase – which has been a hot topic lately.
By shifting some operations to the AOT stage, Leyden reduces the burden associated with dynamic loading and class compilation during application startup. Importantly, Leyden is not a replacement for native GraalVM images but rather an evolution of the JVM itself, offering an alternative path to various optimizations, e.g., through the development of the Class Data Sharing (CDS) mechanism or improvements in AOT compilation handling.
Why am I writing about this now (besides the fact that Leyden is always worth writing about)? JEP 483: Ahead-of-Time Class Loading & Linking has recently undergone a significant overhaul. The dynamic nature of the Java platform (loading and linking classes and executing static initializers at runtime) provides flexibility but significantly burdens the startup process. This causes applications that use extensive frameworks, like Spring, to take seconds or even minutes to start (although I haven't seen minutes in a long time), which is undesirable in production environments.
The proposed solution involves preloading and linking classes in a process called "training run." During this run, the JVM records information about the loaded classes in a special cache, which can then be used to instantly load those classes in subsequent application launches. As a result, typical startup operations, such as scanning JAR files or executing static code blocks, are performed earlier, reducing application startup time by up to about 42% (at least that's what the JEP authors reported in two tested cases mentioned in the proposal).
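In practice, per JEP 483, the workflow splits into three steps: a training run that records which classes get loaded and linked, an assembly step that turns that recording into an AOT cache, and the production run that consumes the cache. A sketch of the command sequence, where app.jar and com.example.Main stand in for your application:

```shell
# 1. Training run: record which classes are loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -cp app.jar com.example.Main

# 2. Assembly: turn the recorded configuration into an AOT cache
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp app.jar

# 3. Production run: start with classes already loaded and linked
java -XX:AOTCache=app.aot -cp app.jar com.example.Main
```

Note that the training run should exercise representative code paths: only what the JVM actually observes during training ends up in the cache.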
This solution is fully compatible with existing applications, requiring no changes to the code or the process of launching them, making it an interesting alternative to the different toolchain brought by GraalVM. Moreover, JEP 483 lays the foundation for further improvements, including future work on improving JVM warm-up time, which could allow applications to start instantly in an optimized state.
Further improvements are planned for the future, such as simplifying the cache creation process, supporting more advanced training tools, and integrating with various garbage collectors, including ZGC. Long-term development directions also include the possibility of supporting more dynamic class-loading scenarios.
How does this translate into practice? The Quarkus team (specifically Maria Arias de Reyna Dominguez, Andrew Dinn, and Sanne Grinovero) recently shared their impressions in a publication with the very to-the-point title Project Leyden. Although Quarkus already significantly speeds up application startup through build-time optimizations, with Leyden these optimizations can become even more effective, as the JVM gains, among other things, the new class preloading and linking capabilities mentioned above. The combination of the two technologies can reduce application startup time to an absolute minimum while maintaining the advantages of a dynamic JVM. Leyden can also open the door to better integration with functionalities like native images in GraalVM, offering developers more optimization options depending on the project's needs. Overall, the publication touches on topics I mostly skimmed over today, so if you're still hungry for more, I refer you to it.
The teams of both projects (Quarkus and the OpenJDK team working on Project Leyden) declare a willingness to collaborate, which could bring new solutions and tools that automate and improve processes related to application startup. As a result, Quarkus (and other frameworks) could become even more efficient and flexible tools.
2. What's New in Virtual Threads
Performance, however, comes in many forms. Although Loom has delivered its "core," and it's been a bit quieter around it, that doesn't mean the development has stopped. We have another big JEP related to the topic.
But before we get to it, let's look at a case that will give us some context about the challenges faced by those wanting to use virtual threads themselves. The Netflix JVM Ecosystem team recently published the results of their research on the benefits of adopting new Java features, particularly the virtual threads introduced in Java 21, on a scale as large as that of Netflix. These results are quite interesting.
Virtual threads are designed to reduce the complexity and burden of managing high-throughput applications, allowing threads to suspend and resume efficiently during blocking operations. However, during the migration to virtual threads in their Spring Boot and Tomcat-based services, Netflix engineers encountered an issue where instances stopped handling traffic. The problem was attributed to the pinning of virtual threads to OS threads during synchronized operations, causing the Tomcat server to stop processing requests because all available OS threads were occupied, resulting in delays in processing requests.
The investigation revealed that virtual threads were getting blocked while waiting to acquire locks, particularly when performing operations in synchronized blocks or methods. In one case, all OS threads were occupied by pinned virtual threads, preventing further progress. This situation led to a deadlock-like state, where the application could not proceed despite signals to continue. Analyzing thread and heap dumps, the team identified that this deadlock was caused by the inability to unpin virtual threads due to a lack of available OS threads in the fork-join pool. This issue highlights the complexity of integrating virtual threads with existing synchronization mechanisms, which is a challenge Netflix hopes to address in future Java releases. Despite these challenges, Netflix remains optimistic about the potential performance improvements virtual threads can bring and continues to refine their use in production environments.
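The pattern behind this failure mode is easy to reproduce. Below is a minimal sketch (class and method names are my own, not Netflix's code): a virtual thread that blocks while inside a synchronized block is pinned to its carrier thread on JDK 21, so the carrier cannot be reused for other virtual threads in the meantime. On JDK 21–23 you can surface such events by running with -Djdk.tracePinnedThreads=full.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class PinningDemo {
    static final Object LOCK = new Object();

    // Blocking while holding a monitor pins the virtual thread to its
    // carrier: the JVM cannot unmount it until the synchronized block exits.
    static void blockWhileHoldingLock() throws InterruptedException {
        synchronized (LOCK) {
            Thread.sleep(100); // the carrier OS thread stays occupied here
        }
    }

    // Runs the blocking operation on a virtual thread and waits for it.
    static boolean runOnce() throws InterruptedException {
        var done = new CountDownLatch(1);
        Thread.ofVirtual().start(() -> {
            try {
                blockWhileHoldingLock();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            done.countDown();
        });
        return done.await(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + runOnce());
    }
}
```

One such pinned thread is harmless; the Netflix incident arose when all carrier threads in the fork-join pool were simultaneously occupied by pinned virtual threads, so nothing could make progress.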
And rightly so, as the JDK developers are already working on fixes to the existing mechanism. JEP draft: Adapt Object Monitors for Virtual Threads introduces an update to the HotSpot virtual machine. The main goal is to eliminate issues where virtual threads remain "pinned" to their platform carrier threads, so that virtual threads can detach while blocking in a synchronized block or method, or while waiting in Object.wait(). By allowing virtual threads to detach from carrier threads in such situations, the proposal aims to make more Java libraries compatible with virtual threads without requiring extensive code changes, and to improve diagnostic capabilities for identifying cases of pinning. The JEP also discusses the impact on diagnostics, particularly how the jdk.VirtualThreadPinned event in JDK Flight Recorder (JFR) will be extended to capture more cases where virtual threads block and pin their carrier threads. I had the opportunity to describe this topic before, but we've finally got a full JEP, so it was worth revisiting.
Let's return once more to community topics, as a very interesting case study has appeared on InfoQ. The article Java Virtual Threads: a Case Study by the IBM team (Laura Cowen, Rich Hagarty, Vijay Sundaresan, Gary DeVal) presents a detailed analysis of the performance of Java's virtual threads compared to the existing thread pool management system used by Open Liberty. The assessment, which focused on typical cloud-native workloads, found that while virtual threads offer faster startup times and a reduced memory footprint per thread, they generally perform worse in computationally intensive tasks compared to Liberty's autonomous thread pool. Moreover, the study highlighted unexpected performance issues with virtual threads, particularly in some Linux kernel environments, due to interactions between the Linux scheduler and Java's ForkJoinPool.
Despite the potential benefits of virtual threads in handling high concurrency levels, the article concludes that Liberty's thread pool remains more efficient for most typical workloads in Liberty environments. As a result, the authors decided not to replace the existing thread pool with virtual threads in Liberty. The findings suggest that while virtual threads may simplify the development of multithreaded applications in Java, developers need to be aware of their limitations and consider their specific use cases before choosing virtual threads in production environments.
Okay, let's give Virtual Threads a break, as I always try to remind everyone that Loom brings more than just new mechanisms. One of them is Structured Concurrency – a concurrency management paradigm popularized by Kotlin, which has gained popularity due to the great control it offers developers, its expressiveness, and the ease of understanding subsequent events in the code. Java recently adopted this approach as a preview feature available in Java 21 and 22. However, although the foundations of the API for structured concurrency are already being laid, and we shouldn't expect major changes here, the current implementation in Java is considered low-level, which opens the door for more advanced, user-friendly libraries based on these primitives.
The Jox library created by Adam Warski from SoftwareMill is an example of such an initiative, offering a high-level API focused specifically on structured concurrency in Java. Jox bases its API on Channels, once again inspired by Kotlin, specifically the paper Fast and Scalable Channels in Kotlin Coroutines by Nikita Koval, Dan Alistarh, and Roman Elizarov. The library introduces the concept of "supervised scopes," which ensure that all concurrent operations (forks) within the scope are properly managed, especially when exceptions occur.
import jox.Channel;

class Demo {
    public static void main(String[] args) throws InterruptedException {
        // creates a rendezvous channel (buffer of size 0—a sender &
        // receiver must meet to pass a value)
        var ch = new Channel<Integer>();

        Thread.ofVirtual().start(() -> {
            try {
                // send() will block, until there's a matching receive()
                ch.send(1);
                System.out.println("Sent 1");
                ch.send(2);
                System.out.println("Sent 2");
                ch.done();
                System.out.println("Done");
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        });

        System.out.println("Received: " + ch.receive());
        System.out.println("Received: " + ch.receive());
        // we're using the "safe" variant which returns a value when the
        // channel is closed, instead of throwing a ChannelDoneException
        System.out.println("Received: " + ch.receiveSafe());
    }
}
This approach simplifies concurrent programming by ensuring that once a scope ends, all associated threads have also been terminated, preventing issues such as abandoned threads. The library also offers various combinators, such as parallel execution and races, which further enhance the utility of structured concurrency. Although Jox is based on Java's StructuredTaskScope, it offers a more accessible and less error-prone interface, making it easier for developers to write robust concurrent applications.
The development of Jox is inspired by its predecessor, Ox, a Scala library that explores advanced structured concurrency features, such as error handling with the Either data type. The Ox project influenced Jox, and although Jox is still in its early stages, it aims to introduce similar capabilities to Java. By leveraging structured concurrency, Jox and similar libraries promise to make concurrent programming in Java not only more efficient but also more intuitive – something we should all wish for. Looking at the Coroutines API, Loom still has a lot to offer in terms of available abstractions.
And since we're comparing Java to Kotlin, I'll take this opportunity to remind you of the blog Dave Leeds on Kotlin. The author doesn't reinvent the wheel in his publications – it's rather a fairly standard introduction to Kotlin, focusing on topics such as collections, nullability, or regular expressions. What sets the author apart not only from other bloggers but also from the very good documentation from JetBrains is the fact that he decided to explain individual concepts using cartoonish graphics. As a longtime fan of comic art, I must admit that this makes a very good impression and should be interesting not only for beginners but also for seasoned Kotlin developers.
The author's latest text is about Coroutines and explains how they work from a programmer's perspective in a very accessible way. While seasoned developers may not find much new here (although they might be surprised, and I think it's worth checking out just to see how to explain technical concepts in an accessible way), if I wanted to introduce, say, a Java programmer to the world of Kotlin, it would be hard for me to find a more accessible source than Dave's blog.
3. What's New in Project Valhalla
Finally, let's return to Project Valhalla, especially in the context of null.
Our hero is the JEP draft: Null-Restricted and Nullable Types (Preview), which aims to introduce the ability to add nullability markers in Java, allowing developers to express whether reference types can accept a null value. It introduces two new markers: Foo!, which indicates a null-restricted type, and Foo?, which indicates a nullable type. By default, types in Java remain undefined in terms of nullability, but the new syntax allows for precisely specifying the programmer's intent.
class Box<T> {
    boolean set;
    T? val; // nullable field

    public Box() { set = false; }

    public void set(T val) { this.val = val; set = true; }

    public T? getOrNull() { // nullable result
        return set ? val : null;
    }

    public T! getNonNull(T! alt) { // null-restricted result
        return (set && val != null) ? (T!) val : alt;
    }
}
New null-restricted types must always be initialized before their first use, which is enforced by the compiler. Additionally, nullable types (Foo?) and null-restricted types (Foo!) can be used in complex data structures, such as arrays or parameterized generic types, increasing flexibility and code safety.
This solution allows for detecting and handling potential null-related issues at both compile-time and runtime. Conversions between nullable types and null-restricted types are automatically checked during compilation and execution. For example (there are more similar cases), when converting a nullable type to a null-restricted type, and the value is indeed null, the JVM will generate a NullPointerException.
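To make that runtime check concrete, here is a small sketch in the draft's proposed syntax. This is not valid Java on any released JDK, and lookup() is a hypothetical method of mine that may return null:

```java
String? maybe = lookup();     // nullable: may legally hold null
String! always = "fallback";  // null-restricted: must never hold null

// Converting a nullable value to a null-restricted type inserts a
// runtime check: if maybe is null here, a NullPointerException is thrown.
always = maybe;
```

The compile-time side of the same mechanism is a warning on such narrowing conversions, much like the unchecked warnings we know from generics.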
The JEP also introduces changes to developer tools, such as the javac compiler, which now supports the new types and warns about potential issues related to type conversions. The impact of these changes also extends to reflection and serialization mechanisms, where new APIs have been introduced to handle null-restricted types.
Here are my two cents. While the above changes are indeed a nice step toward safer, more precise code, they also present new challenges for developers. The ability to specify precisely whether a type can accept a null value is undoubtedly beneficial, but the need to maintain backward compatibility means that Java must unfortunately continue to support the default, undefined nullability of types. Better something than nothing, though this may make it difficult to eliminate null-related issues completely. Additionally, developers will now have to consciously opt into the new types and learn new patterns – and before these become second nature, there will likely be some added cognitive load.
Moreover, automatic conversions between nullable types and null-restricted types, while necessary for flexibility, introduce the risk of unexpected runtime exceptions. This means that developers must be particularly careful when working with these types to avoid errors that may be difficult to detect during the compilation phase. I fear that after the first burn, many will think twice before using the new mechanism.
Although it didn't make it into today's edition (and I learned about it just before publication), a video on the latest Valhalla news – Valhalla - Where Are We? – is scheduled to premiere tomorrow afternoon. If there are any interesting tidbits, I'll make an errata next week.
The latest early Valhalla version is available at jdk.java.net/valhalla. It doesn't include the above null features, but it introduces functionality previously announced in JEP 401: Value Classes and Objects (Preview). This allows the declaration of value classes and records in Java that lack object identity and have only final fields, enabling instance comparison using the == operator based on the values of the fields.
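For illustration, a minimal sketch of that preview syntax – it compiles only against the Valhalla EA build with preview features enabled, and Point is my own example, not taken from the JEP:

```java
// A value record: instances have no identity, only their field values
value record Point(int x, int y) {}

class ValueDemo {
    public static void main(String[] args) {
        // With value classes, == compares field values rather than
        // object identity, so two "equal" points compare as ==
        System.out.println(new Point(1, 2) == new Point(1, 2)); // true on the EA build
    }
}
```

On today's JDK the same comparison with an ordinary record would print false, since == compares object identity there.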
And if you don't feel like playing with Early Access Build, you can use run.mccue.dev to test code from the mentioned EA Valhalla version in your browser. I think it's a cool option to play around with the new APIs.