Kotlin Roadmap update and JDK 23 Release Followups - JVM Weekly vol. 101
This week - we have Kotlin!
1. Kotlin Roadmap Update
Last week we talked about the present of the JDK in the form of its new release, but now let's look to the future. The Kotlin team has published another iteration of its roadmap, giving us a glimpse into where the language is headed. This is the first update after the release of Kotlin 2.0, making it all the more interesting.
Let's start with the achievements compared to the previous roadmap iteration. Since the last update, a lot has been checked off the task list. The K2 compiler is now stable, and Kotlin/Wasm has been optimized for standalone WebAssembly virtual machines. In the multiplatform domain, memory management in Kotlin/Native has been enhanced, while the older approach has been deprecated – out with the old, in with the more robust one. Additionally, cross-platform klib compilation has been stabilized, simplifying binary compatibility for library authors.
The tools have seen the public release of the K2 plugin for IntelliJ IDEA, significantly improving code analysis performance and stability. New priorities on the roadmap include enhancing compiler diagnostics, generating default JVM methods, public release of export to Swift, CMS GC, and support for Xcode 16. On the other hand, some elements have been dropped from the roadmap, such as support for SwiftPM and the stable release of Dokka (as the developers first need to address memory issues).
Starting with language changes, the roadmap includes adding static extensions, collection literals, union types for error handling, named destructuring, and immutability support. Also under discussion are improvements to Kotlin Multiplatform, KDoc documentation, UUID support, type resolution, explicit backing fields, context parameters, safeguards in 'when' expressions, multiline string interpolation, handling break and continue outside the local context, and references to synthetic properties in Java. An important step is also changing the way default methods for JVM interfaces are generated: new compiler flags replace the older ones, with -Xjvm-default=all-compatibility or -Xjvm-default=all recommended for managing default methods in interfaces.
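Since Kotlin interfaces compile down to JVM interfaces, the effect of the flag is easiest to picture from the Java side. A minimal sketch (the Greeter interface is a made-up example): under -Xjvm-default=all, a Kotlin interface method with a body becomes an ordinary JVM default method, which even a plain Java class can inherit without overriding:

```java
// What the Kotlin interface
//   interface Greeter { fun greet(name: String) = "Hello, $name" }
// roughly compiles to under -Xjvm-default=all: a regular JVM
// default method instead of a synthetic DefaultImpls holder class.
interface Greeter {
    default String greet(String name) {
        return "Hello, " + name;
    }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        // A Java-side implementor inherits the default body as-is.
        Greeter g = new Greeter() {};
        System.out.println(g.greet("JVM")); // Hello, JVM
    }
}
```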
There is also a plan to create guidelines for the structure of compiler messages, improve type mismatch diagnostics, and introduce unique identifiers (KT-59109) for diagnostic issues, making them easier to search for and understand. The team is also reviewing diagnostics that highlight entire declarations (KT-70752), to avoid unnecessarily highlighting large segments of code in the editor. The UX of compiler messages significantly affects the overall tooling experience, so improving their readability and presentation in the IDE is crucial.
Background changes are also happening to allow Kotlin to better integrate with the JVM platform and other ecosystems. The introduction of default JVM methods will allow for more flexible interface implementation, making it easier to share code between Kotlin and Java. Additionally, enhancements in function inlining and more optimized bytecode generation will enable Kotlin to align even better with the JVM, minimizing performance overhead.
The toolset is also undergoing a solid modernization. For instance, the team is preparing for the stable release of the K2 mode in IntelliJ IDEA, which can already be tested in some editions. There is also a plan for a new Build Tools API, since most current build tool integrations (plugins) share little common code, and what they do share doesn't always work across all build systems. As a result, many features are available only for specific systems (e.g., Kotlin Multiplatform and Gradle) or behave differently in various plugins, leading to inefficiencies and complexities when supporting new systems like Bazel or Buck. The new API aims to change this.
Of course, there are more changes – these are just my personal picks. If you're interested, I encourage you to review the entire roadmap.
2. Some Follow-Ups to JDK 23 Release
This is a classic pattern – after every new JDK release, several texts emerge that missed the initial wave of JEP coverage but are still worth mentioning.
The first is a regular publication on security by Sean Mullan: JDK 23 Security Enhancements. It's important to remember that every new JDK, in addition to JEPs, also brings incremental enhancements and bug fixes, which are rarely mentioned – including in the area of security. This time, we get new debugging tools, cryptographic optimizations, and improvements in certificate management. One of the most significant additions is a new KeyStore for macOS, enabling access to root certificates stored in the system KeyChain. This resolves issues with establishing HTTPS connections caused by the lack of trusted certificates in the certificate chain on this system. Just set the system property javax.net.ssl.trustStoreType to KeychainStore-ROOT to improve TLS connection handling.
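Enabling the new KeyStore really does come down to that one property – a minimal sketch (the property only takes effect on macOS with JDK 23+; elsewhere it is simply ignored):

```java
public class TrustStoreConfig {
    public static void main(String[] args) {
        // Point the default trust store at the root certificates in the
        // macOS system KeyChain. Equivalent to passing
        // -Djavax.net.ssl.trustStoreType=KeychainStore-ROOT on the
        // command line; must be set before the first TLS handshake.
        System.setProperty("javax.net.ssl.trustStoreType", "KeychainStore-ROOT");
        System.out.println(System.getProperty("javax.net.ssl.trustStoreType"));
    }
}
```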
Other noteworthy enhancements include improved performance of the SecureRandom class and a new configuration attribute for PKCS#11 (an open standard defining a platform-independent API for cryptographic devices) that allows applications to bypass restrictions on the use of older algorithms – at their own risk, of course. Additionally, there are improvements in Kerberos debugging – Kerberos being a network authentication protocol that securely identifies users and services using a ticketing system, eliminating the need to transmit passwords.
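The nice part of this kind of incremental improvement is that no code changes are needed to benefit from it – the standard API stays the same. A trivial sketch of the usage that got faster:

```java
import java.security.SecureRandom;

public class SecureRandomDemo {
    public static void main(String[] args) {
        // The default SecureRandom implementation is what received the
        // performance work in JDK 23; the API itself is unchanged.
        SecureRandom rng = new SecureRandom();
        byte[] nonce = new byte[16];
        rng.nextBytes(nonce); // fills the array with random bytes
        System.out.println("generated " + nonce.length + " random bytes");
    }
}
```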
Continuing with regular publications, Thomas Schatzl published JDK 23 G1/Parallel/Serial GC changes. And there's a lot to write about, as JDK 23 introduces several improvements to the "stop-the-world" collectors in OpenJDK, even if they aren't as extensive as in previous releases. The biggest change in Parallel GC is replacing the old Full GC algorithm with a more traditional one based on the Mark-Sweep-Compact method. The new algorithm eliminates the issue of scanning objects during compaction – the process of moving objects in memory to create a single continuous block of free space, reducing heap fragmentation and improving memory management – which was slow in the previous version and caused significant performance drops. As a result, performance has improved and cache usage has been reduced. In the case of G1 GC, a longstanding issue with incorrect buffer management during reference processing has been fixed, and pause times and memory usage have been optimized. Serial GC, meanwhile, saw continued code cleanup and refactoring.
Thomas also looks to the future, emphasizing that a key focus for future work is reducing the amount of memory used by remembered sets in G1 GC, which store information about references (pointers) between objects located in different memory regions. The team is working on merging remembered sets for multiple regions to remove unnecessary, duplicate entries and save memory. The first step is to combine the remembered sets for the young generation regions, which already brings noticeable savings. Further plans include more advanced changes, such as JEP 475: Late Barrier Expansion for G1, which has already been announced as a (highly likely) part of JDK 24.
And speaking of Garbage Collectors, I can't help but recommend Michał Piotrowski's article Current state of GC in Java 23. This is a holistic overview that provides a broader picture of the current memory management landscape in Java 23. The author guides us through the evolution of GC mechanisms since Java 8, showing how we moved from simple, single-threaded solutions to today's advanced algorithms, such as G1, Shenandoah, and ZGC. You'll find descriptions of the latest changes and the challenges facing Java, e.g., how to meet the growing demands of containers, cloud, and microservices. The text also shows how to practically use and fine-tune the GC in your applications to squeeze maximum performance out of the JVM. The author suggests which settings are worth modifying and which are best left alone to avoid turning the wrong knobs. I highly recommend it if you feel you need a bit of background.
Continuing the topic of typically overlooked changes, this might be the first time I'll draw your attention to a bug fix. This one (JDK-8180450 - secondary_super_cache does not scale well) is exceptionally interesting, so it's worth dedicating some space to appreciate those who do the tedious work in JDK.
Some context: When an object calls a method, the JVM must check which class (or interface) the method belongs to in the inheritance hierarchy. In complex class hierarchies or intensive use of interfaces, such lookup can be costly in terms of performance. And as engineers, we use cache when we have a problem, hence secondary_super_cache was placed in the Klass structure – a cache that stores information about the superclasses and interfaces a given class belongs to. Instead of searching the entire class hierarchy each time, the JVM can use the previously stored information in this cache.
However, the cache needs to be filled. Under certain workloads, updates to the Klass::secondary_super_cache field cause excessive cache line invalidation traffic, leading to system slowdowns. The problem arises when the cache becomes unstable (which is typical for single-element caches) and can occur in multithreaded applications, such as application servers, database systems, or applications handling large volumes of queries and processing diverse objects – for example, an e-commerce application searching a product catalog or an online game managing many objects in the game world. In such cases, various threads simultaneously test object classes against multiple interfaces, e.g., verifying whether a given object meets specific conditions or has certain properties. This causes continuous, rapid changes in the cache used by the JVM, leading to thread conflicts and excessive CPU memory traffic, slowing down the entire application. It clashed especially badly with newer language features like pattern matching.
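The problematic access pattern is easy to reproduce in miniature. A hedged sketch (the Shape/Drawable/Printable types are invented for illustration): two threads repeatedly test the same class against different interfaces, which is exactly the workload where, before the fix, each successful check could overwrite the one-element secondary_super_cache and invalidate the other thread's cache line:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SuperCacheStress {
    interface Drawable {}
    interface Printable {}
    static class Shape implements Drawable, Printable {}

    public static void main(String[] args) throws Exception {
        Object shape = new Shape();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each thread checks the SAME class against a DIFFERENT interface,
        // so the single-element cache keeps flip-flopping between the two.
        Callable<Integer> checkDrawable = () -> {
            int hits = 0;
            for (int i = 0; i < 1_000_000; i++) if (shape instanceof Drawable) hits++;
            return hits;
        };
        Callable<Integer> checkPrintable = () -> {
            int hits = 0;
            for (int i = 0; i < 1_000_000; i++) if (shape instanceof Printable) hits++;
            return hits;
        };
        Future<Integer> a = pool.submit(checkDrawable);
        Future<Integer> b = pool.submit(checkPrintable);
        System.out.println("total successful checks: " + (a.get() + b.get()));
        pool.shutdown();
    }
}
```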
The solution was to introduce a mechanism limiting the frequency of updates to the secondary_super_cache field. A counter was added next to the problematic field. Before each cache update, the counter is checked – if it reaches a certain threshold (meaning that many threads are trying to update the cache simultaneously), the update is skipped. This avoids the continuous changes that cause conflicts. The counter is designed to "fade" after a while, allowing normal operation to resume when the intense load subsides. Alternative solutions, such as thread-local counters or placing the counter in the MethodData object, were considered too complex compared to the problem this bug caused.
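To make the idea concrete, here is a conceptual model of the fix in plain Java – emphatically NOT the actual HotSpot code (which lives in C++ and assembly stubs), just a sketch of "a one-element cache whose updates are skipped once an attempt counter passes a threshold, with the counter fading over time":

```java
public class ThrottledCache {
    private static final int THRESHOLD = 8; // made-up value for illustration
    private volatile Class<?> cached;       // the one-element "super cache"
    private int updateAttempts;             // decays back toward zero

    boolean isInstanceOf(Object o, Class<?> type) {
        if (cached == type) {
            return type.isInstance(o);      // fast path: cache hit
        }
        boolean result = type.isInstance(o); // slow path: full check
        if (updateAttempts < THRESHOLD) {
            updateAttempts++;
            cached = type;                  // contention still low: update
        }
        // else: too many concurrent updates, leave the cache alone
        return result;
    }

    // Called periodically so the cache resumes updating once load subsides.
    void decay() { if (updateAttempts > 0) updateAttempts--; }

    public static void main(String[] args) {
        ThrottledCache c = new ThrottledCache();
        System.out.println(c.isInstanceOf("hi", CharSequence.class)); // slow path
        System.out.println(c.isInstanceOf("hi", CharSequence.class)); // cache hit
    }
}
```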
And finally, let's leave the JDK and move on to GraalVM, as Micronaut introduces support for Micronaut Python, based on the GraalPy project – a Python implementation for the JVM built on GraalVM. This allows developers to use Python modules as Java Beans with simple annotations. Thanks to this, methods in Java interfaces are automatically mapped to their corresponding Python functions, facilitating the creation of multilingual applications. Integration involves adding the micronaut-graalpy dependency to the project and using a dedicated Maven plugin; the feature will debut in the upcoming Micronaut Framework 4.7.0.
3. And Finally – Returning to the Topic of Sustainability
To conclude this rather extensive edition, let's touch on one of my favorite topics: sustainability.
The article Measuring Java Energy Consumption by Mirko Stocker shows how to use tools like Java Microbenchmark Harness (JMH) and JoularJX to measure performance and energy consumption of individual methods. The study compares various Java collection data structures, such as ArrayList, LinkedList, and those from external libraries.
Although JMH (a widely used benchmarking tool) does not directly measure energy consumption, it prepares the Java Virtual Machine (JVM) before actual measurements, allowing for more accurate results.
The author then combines JMH and JoularJX to analyze the energy consumption of various data structures, varying benchmark parameters such as collection size to observe how performance and energy consumption correlate. JoularJX, in turn, measures the energy consumption of Java applications at a fine-grained level, down to individual methods: by integrating with the JVM, it can break energy consumption down and report which methods consume the most.
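To illustrate the kind of comparison the article makes, here is a deliberately crude stand-in for a real JMH benchmark (no JMH dependency, no warm-up, no dead-code elimination handling – real measurements should use JMH, plus JoularJX for the energy side). It times indexed access into an ArrayList vs a LinkedList, where the LinkedList's O(n) get() dominates:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class CrudeListBenchmark {
    // Sum the list via get(i) and return the elapsed time in nanoseconds.
    static long timeGets(List<Integer> list) {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < list.size(); i++) sink += list.get(i);
        if (sink == 42) System.out.print(""); // keep the loop from being optimized away
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 10_000;
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) { array.add(i); linked.add(i); }
        long tArray = timeGets(array);   // O(1) per get
        long tLinked = timeGets(linked); // O(i) per get: traverses nodes
        // On the same platform, longer runtime generally means more energy.
        System.out.println("ArrayList faster: " + (tArray < tLinked));
    }
}
```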
The results show a direct correlation between a method's execution time and its energy consumption – which, although it seems natural, is not a given, as papers like Energy Efficiency across Programming Languages: How Do Energy, Time, and Memory Relate? have shown. The correlation is particularly evident in code that runs without interruption. It suggests that, within a single platform, performance can serve as a proxy for energy efficiency, even if direct energy measurement provides more detailed information.
So if you've been looking for a reason to write efficient code, the answer is simple – for the planet 😃.