JVM Weekly #2 - Loom, Loom, Loom... And Cleaners. And Micronaut
1. The community tests Project Loom
As expected, we received a veritable flood of articles about Project Loom. Many of them are genuinely interesting, so we will start with a small selection.
Let's start with a publication by Gunnar Morling. The piece was inspired by a discussion between Ron Pressler (Loom's creator) and Tim Fox of Vert.x, from which we learn about some non-obvious aspects of Loom's implementation. The Loom scheduler has no idea whether a method is blocking or not - instead, the JDK's classic blocking APIs have been "taught" to release the underlying operating system thread when a Loom virtual thread asks them to. However, doubts were raised about the fairness of this design, connected with the risk of "starving" specific threads. The problem can arise with tasks that are not I/O-intensive but CPU-intensive: there is currently no mechanism to preempt such a long-running, CPU-bound task.
In his article, Gunnar presents graphs that show very clearly how this problem manifests in practice. In simple terms: if we start more long-running tasks than there are carrier threads available, classic platform threads tend to finish more or less at the same time. With Loom, the distribution becomes uneven - some virtual threads finish quickly, but many have to wait much longer. Ron Pressler argues that this won't be much of a problem in practice, and code to address it already exists - it just hasn't been merged yet. In general, the problem is solvable - Go, for example, deals with it by allowing a goroutine that runs longer than 10 ms to be preempted. Well, that's what the Preview stage is for - to catch exactly such cases.
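To make the problematic workload concrete, here is a minimal sketch of CPU-bound tasks running on virtual threads. It assumes a JDK where virtual threads are available (19+ with preview enabled, or 21+ where they are final); the class and method names are ours, purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class CpuBoundVirtualThreads {

    // Starts n virtual threads running pure CPU work and returns how many
    // completed. A CPU-bound loop never reaches a blocking call, so each
    // virtual thread holds on to its carrier thread until the loop ends -
    // exactly the kind of workload the fairness discussion is about.
    static int runWorkers(int n) throws InterruptedException {
        AtomicInteger finished = new AtomicInteger();
        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            workers.add(Thread.ofVirtual().start(() -> {
                long sum = 0;
                for (long j = 0; j < 10_000_000L; j++) sum += j; // CPU work, no I/O
                if (sum != 0) finished.incrementAndGet();
            }));
        }
        for (Thread t : workers) t.join();
        return finished.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(4) + " workers finished");
    }
}
```

With more such workers than carrier threads, some of them will monopolize the scheduler until their loops complete - which is the uneven distribution Gunnar's graphs illustrate.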
In addition to CPU-related constraints (or rather, the lack of them?), one would also expect limits on memory usage. After all, every thread is a resource that consumes (even if minimal) system resources. However, experiments by Heinz M. Kabutz of The Java Specialists' Newsletter show that the limits are much further away than common sense would suggest. In his little experiment, the "physical" limit (based on memory usage) on the number of virtual threads created should be around 8 million - yet his snippet (which you can find under the Gazillion Virtual Threads link) was able to crank out 36 billion! How is this possible? It turns out that parked (suspended) threads occupy hardly any memory at all, so you can create as many of them as you like. The catch is that they can be swept up by the GC at any time.
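A scaled-down sketch of the same idea - not Kabutz's actual snippet, just an illustration with names of our own choosing, assuming a JDK with virtual threads available: start a batch of virtual threads that immediately park on a latch, then release them all.

```java
import java.util.concurrent.CountDownLatch;

public class ParkedThreads {

    // Starts `count` virtual threads that immediately park on a latch,
    // then releases them and waits for all of them to finish. Parked
    // virtual threads need almost no memory, which is why experiments
    // like the one above can go far beyond any per-thread stack budget.
    static int parkAndRelease(int count) throws InterruptedException {
        CountDownLatch gate = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            Thread.ofVirtual().start(() -> {
                try {
                    gate.await();       // parks the virtual thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        gate.countDown();               // unpark everyone
        done.await();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parkAndRelease(100_000) + " virtual threads parked and released");
    }
}
```

Try raising the count by a few orders of magnitude - the heap cost of the parked threads grows far slower than the same experiment with platform threads would.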
Continuing the Loom theme: beyond creating cosmic amounts of threads, the community is already inventing new, sometimes very creative ways to use this technology. Fans of distributed systems will certainly be familiar with Jepsen - a tool for verifying the guarantees that database and system vendors make about consistency (in the sense of the "C" in the CAP theorem - because that is not obvious at all). In this article by James Baker (Engineering Group Lead at Palantir Technologies) you will learn not only how Jepsen works (with a fascinating detour into FoundationDB), but also why Loom can be a very good tool for anyone who needs to simulate nasty edge cases in the ordering of writes and reads. It turns out that the flow control we get with the new toy on the JVM is deterministic enough to be used in this context as well. Using Java's Project Loom to build more reliable distributed systems is red meat for anyone interested in topics like Paxos or logical clocks.
Finally, since we are already covering texts on Loom, let me remind you of the classic (well, August 2020 is already a classic) article On the performance of user-mode threads and coroutines by Ron Pressler. For people unfamiliar with the topic, it is a (quite accessible, despite the large number of equations) foundation for any discussion of Loom's performance.
Oh, and Structured Concurrency will appear in incubation in JDK 19 after all. That release promises to be even more interesting than the last.
Sources
Dining philosophers problem - Wikipedia, the free encyclopedia
Using Java's Project Loom to build more reliable distributed systems - James Baker
On the performance of user-mode threads and coroutines - Inside.java
2. Alternative to Finalizers - JDK 9 and Cleaners
Well, Loom is the future of the JVM. But do you know what its past is? Finalizers. And Oracle has been trying hard to remind us of this over the last week, publishing two articles about their alternative: Cleaners.
What are finalizers? Most of our readers probably know, but to get everyone on the same page: we're talking about special blocks of code within an object that fire when the Garbage Collector collects that object. They allowed, among other things, cleaning up open file handles or database connections - generally, things living outside the JVM. Sounds good in theory, but over the years finalizers have (deservedly) earned a reputation for being untrustworthy, because in many cases they simply... didn't run. This, plus the level of complexity involved in implementing them correctly (which is probably related), made the JDK developers finally decide to get rid of them: in JDK 18 they were deprecated under JEP 421: Deprecate Finalization for Removal. This is nothing to cry about, as over the years the JVM has gained many alternative, more specialized solutions, such as try-with-resources or just Cleaners.
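For completeness, the try-with-resources alternative looks like this - a minimal sketch with names of our own choosing, showing that cleanup happens deterministically at the end of the block, with no help from the garbage collector:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class TryWithResources {

    // Reads the first line of some text; the reader is closed the moment
    // the try block exits, even if readLine() throws.
    static String firstLine(String text) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        } // reader.close() runs here, deterministically
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("hello\nworld"));
    }
}
```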
Cleaners appeared in JDK 9 and are another approach to cleaning up external state after the Garbage Collector has collected an object. They solve a major problem that plagued finalizers: the behavior to run after an object was cleaned up was defined as... part of the object being cleaned. This led to charming race conditions around whether the cleanup code would run before the object was finally removed by the GC.
Cleaners solve this problem in a clever way - instead of being part of the object, the cleanup action is a separate object (typically a lambda, which crucially must not capture a reference to the object itself), registered alongside the object it cleans up. Thanks to this, no odd race condition occurs.
import java.lang.ref.Cleaner;
import java.util.Arrays;

class Foo {
    private static final Cleaner cleaner = Cleaner.create();
    private final char[] data;

    Foo(char[] chars) {
        final char[] array = chars.clone();
        // The lambda captures only `array`, never `this` - otherwise
        // Foo could never become unreachable and the cleaner would never run.
        cleaner.register(this, () -> Arrays.fill(array, (char) 0));
        this.data = array;
    }
}
If you want to learn more, the official inside.java blog has republished two excellent pieces that are very good studies of the topic: Replacing finalizers with cleaners and Testing cleaner cleanup, both originally published on the Musing on Java Core Libraries blog. In them you will find more details, as well as examples of what cleaners can be used for, if you don't yet have ideas for exploring this functionality.
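One pattern those articles revolve around is combining a Cleaner with AutoCloseable: explicit, deterministic cleanup via try-with-resources, with the Cleaner as a safety net if close() is never called. Here is a minimal sketch of that pattern - the class names are ours, only the java.lang.ref.Cleaner API is real:

```java
import java.lang.ref.Cleaner;
import java.util.Arrays;

public class SecretHolder implements AutoCloseable {

    private static final Cleaner CLEANER = Cleaner.create();

    // The state to clean up lives in a separate object that must not
    // reference the holder - otherwise the holder could never become
    // unreachable and the cleaner would never fire.
    private static final class State implements Runnable {
        final char[] secret;
        State(char[] secret) { this.secret = secret; }
        @Override public void run() { Arrays.fill(secret, '\0'); } // wipe the secret
    }

    private final State state;
    private final Cleaner.Cleanable cleanable;

    public SecretHolder(char[] secret) {
        this.state = new State(secret.clone());
        this.cleanable = CLEANER.register(this, state);
    }

    char[] secret() { return state.secret; }

    // Deterministic cleanup; Cleanable.clean() runs the action at most once,
    // so the GC-triggered path becomes a no-op afterwards.
    @Override public void close() { cleanable.clean(); }
}
```

Typical usage is `try (var holder = new SecretHolder(password)) { ... }`, so the secret is wiped as soon as the block exits.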
Sources
3. Release Radar: Micronaut 3.5
New week, new Radar. A rather modest one, with only one new version.
Micronaut 3.5 was released last week. Apart from dependency upgrades and compatibility improvements (including a new GraalVM version), it brings three major new features for developers. Owners of large Micronaut projects (though smaller ones should benefit too) will appreciate the introduction of incremental compilation for Gradle builds, which should cut build times and improve the developer experience.
Some readers will probably be interested in the support for Turbo in Micronaut Views. As Turbo's creators themselves write:
Turbo is a collection of techniques for creating fast, progressively enhanced web applications without writing much JavaScript. All the logic lives on the server, and the browser deals only with the final HTML.
Cliché aside, this set of tools has some interesting aces up its sleeve, such as "pushing" HTML to the browser over WebSockets. If you are tired of building SPAs, Turbo could be an interesting alternative. The people behind the concept are the original contributors to Ruby on Rails - and there are reasons why that framework was once considered the pinnacle of programmer productivity.
The last of the big additions is a new module: Micronaut MicroStream. MicroStream is a kind of "serialization on steroids" - a persistence engine for storing Java object graphs in a wide variety of formats and storage backends, such as S3 or MongoDB. Already integrated with another framework, Helidon, it is now also available in Micronaut.
A little bonus: last week spring.io published Preparing for Spring Boot 3.0, where you can find steps worth taking before the big release this autumn. Apart from the obvious ones, like upgrading to Java 17 and checking that the project still works, there are some less obvious suggestions. It's worth having a look today - maybe it's time to slowly start filling the backlog?