Make Final Mean Final... and the other JEPs - JVM Weekly vol. 125
It’s been a while since we’ve seen JEPs! But you know, not the fully baked ones – not the ones that have advanced far in maturity.
No, I’m talking about JEPs fresh out of the pan, almost drafts. And since a few of these have popped up recently, I wanted to dedicate today’s edition to them: projects that are just at the beginning of development but are already quite interesting.
JEP draft: HTTP/3 for the HTTP Client API
First up is JEP draft: HTTP/3 for the HTTP Client API, a proposal by Daniel Fuchs to extend our familiar HttpClient with support for the cool new kid of the protocol world – HTTP/3. Technically it’s been “new” since 2022, but you know – the world of protocols doesn’t evolve at the same breakneck speed as AI models.
HTTP/3 is the latest version of the HTTP protocol, standardized by the IETF. HTTP/3 is based on the QUIC protocol, which operates over UDP rather than TCP, marking a fundamental shift in the transport layer.
QUIC combines the functionality of TCP and TLS (with TLS 1.3 encryption built into the protocol) and enables connections to be established without the traditional handshake, greatly reducing the time required to start data transmission. Unlike HTTP/2, HTTP/3 also eliminates the head-of-line blocking problem by allowing independent data streams to be sent concurrently and independently – delays in one stream don’t affect the others. QUIC also implements modern congestion control and retransmission mechanisms at the stream level, improving connection reliability in unstable environments like mobile networks.
The result? Faster and more responsive content loading, especially in high-latency or variable-quality network conditions.
The goal of this proposal is to add HTTP/3 support to the existing Java HttpClient with minimal changes to the API. Developers will be able to choose HTTP/3 as the preferred protocol version by setting an option on HttpClient or HttpRequest. By default, HttpClient will still use HTTP/2, but switching to HTTP/3 will be simple and smooth.
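In today’s API that preference is expressed through the HttpClient.Version enum, and the draft would presumably add an HTTP_3 constant alongside the existing ones. A minimal sketch of how the selection already works – the HTTP_3 constant name is an assumption, since the API is not final:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class Http3Preference {
    public static void main(String[] args) {
        // Today HTTP_2 is the highest selectable version. Under the draft,
        // an analogous HttpClient.Version.HTTP_3 constant (name assumed)
        // would be chosen the same way.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                // .version(HttpClient.Version.HTTP_3)  // hypothetical per-request override
                .build();

        System.out.println(client.version());
    }
}
```

As with HTTP/2 today, the configured version would be a preference, not a guarantee – the client negotiates down when the server doesn’t support it.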
Now, let’s talk about changes to the Java Flight Recorder...
JEP draft: JFR Method Timing & Tracing
As a reminder, Java Flight Recorder (JFR) is a diagnostic tool built into the JVM that records detailed information about the behavior of a Java application with minimal performance overhead. It originated in the JRockit JVM (which Oracle picked up with the BEA acquisition) and has been an integral part of OpenJDK since version 11.
Application profiling involves analyzing which components consume certain resources, such as memory or CPU time. JEP draft: JFR Method Timing & Tracing by Erik Gahlin extends JFR capabilities with a mechanism for precise timing and method invocation tracing at the bytecode level – without needing to modify the source code. Until now, developers had to rely on tools like JMH, debuggers based on the Java Debugger Interface, or ad-hoc logs – which are inconvenient or even impossible in some production environments. Sampling profilers (such as async-profiler) were alternatives, but they didn’t allow observation of every individual method.
The new JFR feature will enable precise tracking of specific methods – both from the application and external libraries or JDK classes – helping with diagnosing issues like application startup problems, resource leaks (e.g., database connections), or optimizing performance bottlenecks.
Two new JFR events will be introduced: jdk.MethodTrace and jdk.MethodTiming, configurable by a filter specifying which methods to track or measure. These filters are flexible: you can specify particular methods (Class::method), entire classes, or even annotations (e.g., @jakarta.ws.rs.GET), allowing you to measure, for instance, all REST endpoints. The new events can also be easily correlated with others (e.g., contention, I/O), providing a fuller picture of the application’s behavior.
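Because these are ordinary JFR events, they should also be consumable in-process through the existing JFR streaming API. A sketch under that assumption – jdk.MethodTrace does not exist in current JDKs, so the handler below never fires today, and the launch flag in the comment is taken from the draft and may still change:

```java
import jdk.jfr.consumer.RecordingStream;

public class MethodTraceDemo {
    public static void main(String[] args) {
        // Hypothetical launch-time filter from the draft (syntax may change):
        // java -XX:StartFlightRecording:jdk.MethodTrace#filter=@jakarta.ws.rs.GET,filename=rec.jfr ...
        //
        // In-process consumption via the existing JFR streaming API:
        try (RecordingStream rs = new RecordingStream()) {
            rs.onEvent("jdk.MethodTrace", event ->
                    System.out.println(event.getEventType().getName()
                            + ": " + event.getStackTrace()));
            rs.startAsync();
            System.out.println("streaming JFR events...");
        } // closing the stream stops the recording
    }
}
```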
Although this functionality may add some CPU overhead, it’s ideal for short-term analyses and works natively in the JVM without the need for an agent.
Cause you know, Java had agents before it was cool.
BTW: If you’re looking for good materials on async-profiler, you won’t find anyone better than Krzysztof Ślusarski and his blog.
JEP Draft: JFR CPU-Time Profiling (Experimental)
The second JFR-related JEP is JFR CPU-Time Profiling (Experimental), which proposes adding CPU-time profiling capabilities to JDK Flight Recorder on Linux systems. The authors of this project are Jaroslav Bachorik, Johannes Bechberger, and Ron Pressler. The goal is to give developers a tool for precisely monitoring CPU usage by Java applications, enabling better performance optimization.
Currently, JFR allows profiling of heap memory allocations and execution time (i.e., "wall-clock" time). However, execution time doesn’t always reflect actual CPU usage, especially in applications that heavily rely on I/O operations. Hence, adding CPU time profiling will provide a better understanding of which methods are taxing the CPU, which is key for optimizing application throughput.
The proposed extension introduces a new JFR event jdk.CPUTimeSample, which uses the CPU timer mechanism available in Linux kernels since version 2.6.12. This allows for regular sampling of the call stack of each thread executing Java code at specified CPU time intervals. The user can configure the sampling frequency via a throttle property. With proper configuration, CPU time profiling can add a low enough overhead to be used in a production environment.
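Configuration-wise, this would presumably be toggled like other throttled JFR events. A hypothetical invocation – the event name comes from the draft, but the exact setting names and option syntax are assumptions and may change:

```shell
# Sample each thread's stack at most 500 times per second of consumed CPU time
java -XX:StartFlightRecording:jdk.CPUTimeSample#enabled=true,jdk.CPUTimeSample#throttle=500/s,filename=profile.jfr -jar app.jar
```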
Now, time for my favorite new draft, which also gave the title to this edition.
JEP Draft: Prepare to Make Final Mean Final
Prepare to Make Final Mean Final. Sounds serious, right? But don’t worry, let’s go through this together.

The authors of this proposal – Ron Pressler & Alex B. – want us to stop treating the final keyword as just a decoration. The plan is to introduce warnings when trying to modify fields marked as final via deep reflection. Eventually, in future versions of Java, such actions will be blocked by default. Why? To increase application security and give the JVM more optimization opportunities. However, if someone truly needs this functionality, they will have to explicitly enable it when starting the application.
Why does this matter? Final fields are like promises of immutability. Developers rely on them, assuming that once a value is assigned, it will remain unchanged. Unfortunately, reflection currently allows these promises to be broken, which can lead to hard-to-detect bugs and performance issues. The new proposal aims to restore integrity to these declarations.
Additionally, from a performance perspective, "expecting that a final field cannot be reassigned is also significant." The more the JVM knows about class behavior, the more optimizations it can apply. For instance, being able to trust that final fields are never reassigned allows the JVM to perform constant folding, an optimization that eliminates the need to load values from memory, as the value can be embedded directly in the machine code generated by the JIT compiler. Constant folding is often the first step in a chain of optimizations that can together provide significant speed-ups.
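To see what the proposal targets, here is the kind of deep-reflection mutation that works today and would first trigger a warning, and eventually be blocked by default (a minimal sketch with a made-up Config class):

```java
import java.lang.reflect.Field;

public class FinalMutation {
    static final class Config {
        private final int port;
        Config(int port) { this.port = port; }
        int port() { return port; }
    }

    public static void main(String[] args) throws Exception {
        Config config = new Config(8080);

        // Deep reflection: mutating a final instance field. This is exactly
        // the "broken promise" the draft wants to warn about and, later,
        // block unless explicitly enabled at startup.
        Field f = Config.class.getDeclaredField("port");
        f.setAccessible(true);
        f.setInt(config, 9090);

        System.out.println("port after reflective write: " + config.port());
    }
}
```

Under the proposal, keeping code like this working would require explicitly enabling final-field mutation when launching the application.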
And now for something even more technical:
JEP draft: G1: Improve Application Throughput with a More Efficient Write-Barrier
The G1 Garbage Collector in the HotSpot VM has long balanced low pause times against reasonable throughput, but in some cases it loses up to 20% performance compared to collectors like Parallel GC. The primary cause is the complex write barrier responsible for marking memory cards when references in the heap are modified. Cards are small, fixed-size blocks (e.g., 512 bytes) into which the entire heap is divided – each card has a corresponding byte in a special table (the card table) indicating whether it contains modified references that may be relevant to GC. The G1 write barrier consists of numerous instructions (around 50 on x86-64) that not only mark the appropriate card as "dirty" (i.e., potentially containing relevant references) but also synchronize access to shared memory with GC threads and buffer card locations for later reanalysis. The high cost, fragmented operation, and poor reference locality result in excessive CPU usage and limit machine-code optimization.
In the new approach, presented in JEP draft: G1: Improve Application Throughput with a More Efficient Write-Barrier, costly synchronization is removed by using dual card tables. Application and GC threads now operate on separate structures: the application marks changes on one table (application card table), while GC checks the previous one (refinement table). When G1 heuristics detect an excess of marked cards that could overload the next GC pause, an atomic switch occurs – the application continues marking on a new (previously empty) table, and GC works asynchronously on the previous one without synchronization. Additionally, reference data to the young generation (always collected) is stored directly in the card table, eliminating the need to duplicate it in remembered set structures.
The new write barrier in G1 has been greatly simplified – it only performs a few quick checks (whether the stored value is not null, whether the write doesn’t stay within the same region, and whether the card is not already marked), and then marks the card as "dirty." As a result, its size shrinks by about 60%, improving code performance and memory locality. In effect, many applications – especially those that heavily modify objects – gain from 5% to 15% more throughput. CPU usage also drops, GC operation is simplified, and pause times are shortened. Despite the addition of a second card table (about 2 MB per 1 GB of heap), some applications even use less memory thanks to the removal of older, more complex structures.
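The mechanics are easier to see in a toy model. The sketch below is emphatically not HotSpot code – it is a single-threaded illustration of the two ideas: a cheap few-check barrier on the application side, and a table swap that lets refinement proceed on a frozen snapshot (in the real collector the switch is performed atomically; card size, region size, and addresses here are made up):

```java
import java.util.Arrays;

public class DualCardTables {
    static final int CARD_SIZE = 512;          // one byte in the table per card
    static final int REGION_SIZE = 64 * 1024;  // toy G1 region size
    static final byte CLEAN = 0, DIRTY = 1;

    byte[] appTable;         // marked by application threads
    byte[] refinementTable;  // scanned by GC threads

    DualCardTables(int heapBytes) {
        appTable = new byte[heapBytes / CARD_SIZE];
        refinementTable = new byte[heapBytes / CARD_SIZE];
    }

    // Simplified write barrier: a few cheap checks, then mark the card.
    void writeBarrier(int fieldAddr, int newValueAddr) {
        if (newValueAddr == 0) return;                                      // storing null
        if (fieldAddr / REGION_SIZE == newValueAddr / REGION_SIZE) return;  // same region
        int card = fieldAddr / CARD_SIZE;
        if (appTable[card] != DIRTY) appTable[card] = DIRTY;                // already marked?
    }

    // GC side: swap the roles of the two tables (atomically in the real
    // collector), then refine the frozen old table without synchronization.
    void swapAndRefine() {
        byte[] frozen = appTable;
        appTable = refinementTable;          // previously refined, so all CLEAN
        refinementTable = frozen;
        // ...scan refinementTable for dirty cards here...
        Arrays.fill(refinementTable, CLEAN); // reset after refinement
    }

    public static void main(String[] args) {
        DualCardTables t = new DualCardTables(1 << 20);  // 1 MB "heap"
        t.writeBarrier(1024, 200_000);                   // cross-region store
        System.out.println("card dirty after store: "
                + (t.appTable[1024 / CARD_SIZE] == DIRTY));
        t.swapAndRefine();
        System.out.println("app table clean after swap: "
                + (t.appTable[1024 / CARD_SIZE] == CLEAN));
    }
}
```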
The above is a simplified description – a much better and more accurate one comes from Thomas Schatzl (who, along with Ivan Walulya, co-authored the original JEP) in his post New Write Barriers for G1.
We can also learn some interesting things about existing JEPs. Primitive Types in Patterns, instanceof, and switch will likely receive its third preview, and Structured Concurrency its fifth. The fact that the Vector API has reached its tenth incubation can probably only be commented on in one way.

There are positives, though: we will probably see stabilization of Scoped Values and Module Import Declarations soon!
And since we’re talking JEPs, let me end with one from JDK 24.
In his latest post, Gunnar Morling looks at JEP 483, which might not stand out among the new features of Java 24 but has the potential to significantly improve JVM application startup times – without modifying any code. JEP 483 introduces ahead-of-time (AOT) class loading and linking, meaning classes can be loaded and linked ahead of time rather than at application startup. It extends the Application CDS mechanism, moving more work from run time to build time, and is part of the broader OpenJDK initiative Project Leyden, which aims to shorten application startup times and improve runtime performance.
In the post, Gunnar builds an AOT cache for Apache Kafka 4.0, measuring real gains. By generating a configuration (kafka.aotconf) and creating an archive (kafka.aot) with JMX disabled, Kafka starts 59% faster (690 ms → 285 ms). A similar experiment with Apache Flink (mini-cluster) showed a 51% reduction in time to first message. In both cases, even class analysis without linking (a feature similar to AppCDS) provided noticeable speedups. Although the current form requires a "training" run of the application to generate the class list, simplifications are planned.
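The workflow itself is a three-step affair. The flags below are the ones JEP 483 defines; the jar name, main class, and arguments are placeholders standing in for Gunnar’s actual setup:

```shell
# 1. Training run: record which classes get loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=kafka.aotconf -cp kafka.jar kafka.Kafka server.properties

# 2. Create the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=kafka.aotconf -XX:AOTCache=kafka.aot -cp kafka.jar

# 3. Subsequent runs: start with the cache
java -XX:AOTCache=kafka.aot -cp kafka.jar kafka.Kafka server.properties
```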
Morling also compares Leyden with GraalVM Native Image. While GraalVM offers even better startup times and lower memory usage, it requires deep configuration and limits JVM dynamics. Leyden offers a less invasive alternative, with lower entry cost and better support for the traditional Java application model. JEP 483 is just the beginning – more proposals are on the way, including those for AOT compilation of methods and code.
I hope you enjoyed this return to the classic JEP format.
And for now....