Best of Foojay.io August 2024 Edition – JVM Weekly vol. 98
A month has passed, and it’s time for another review of Foojay articles.
In May, I announced that JVM Weekly joined the Foojay.io family. Foojay.io is a dynamic, community-driven platform for OpenJDK users, primarily Java and Kotlin enthusiasts. As a hub for “Friends of OpenJDK,” Foojay.io gathers a rich collection of articles written by industry experts and active community members, providing valuable insights into the latest trends, tools, and practices in the OpenJDK ecosystem.
Instead of selecting specific articles and dedicating entire sections to them in the weekly newsletter, I focus once a month on choosing a few interesting articles that might be useful or at least broaden your horizons by presenting intriguing practices or tools.
Unsupported OpenJDK Distributions are at Risk of Non-Compliance with DORA
We are definitely running out of acronyms in this industry. Many of you are probably familiar with DORA as the DevOps Research and Assessment metrics: a set of key performance indicators used to measure the effectiveness of software delivery, focusing on deployment frequency, lead time for changes, mean time to recovery, and change failure rate. However, it turns out that soon (at least in Europe) the industry will be talking about yet another DORA, and we are in for a repeat of GDPR: a regulation many of you have probably not heard of yet but will soon have to hastily implement. Since I work for a consulting company and some partners have already approached us for help with the topic, I have firsthand experience of what the coming months will bring.
DORA, the Digital Operational Resilience Act, is a regulation introduced by the European Union aimed at increasing the operational resilience of financial institutions in the digital domain. It aims to ensure that financial institutions are capable of surviving, responding to, and recovering from any disruptions and threats related to information and communication technologies (ICT). By establishing comprehensive risk management frameworks, incident reporting mechanisms, and regular resilience testing, DORA seeks to protect the stability and integrity of the financial sector against cyber threats and other ICT disruptions.
The article by Geertjan Wielenga Unsupported OpenJDK Distributions are at Risk of Non-Compliance with DORA highlights the risks of using unsupported OpenJDK distributions in the context of DORA compliance. Unsupported software (including the JDK) may lack critical updates and security patches, leading to increased vulnerabilities and operational risks. Even if we, as an engineering organization, can accept this in our risk assessment, it may turn out that the company as a whole has no right to do so. Such recklessness stands in direct conflict with DORA's requirements, which demand that financial institutions have solid IT risk management practices... and that they can defend those practices in the event of an audit. Using unsupported JDKs may hinder effective incident detection and reporting, compromise resilience tests, and weaken the overall cybersecurity posture, leading to non-compliance with the rather strict DORA requirements.
Fearmongering, some of you might say, but I've dealt with corporations often enough to assure you that at some point someone from the Legal team will come and start asking uncomfortable questions. On the other hand, if you've been looking for an excuse to finally upgrade to a newer JDK, you might not get a better opportunity than this transition window to move to an officially supported OpenJDK distribution. Regulations are there to be used... also for the greater good of the development team 😃.
If you want to learn more, Geertjan and Simon Ritter, Deputy CTO of Azul, have prepared an entire series of educational articles about DORA. I think they will help you better understand whether and how DORA may potentially affect your company (if it's part of your responsibilities, of course, and trust me: it's definitely somebody's responsibility in the company you work for 😉).
OpenTelemetry Tracing on Spring Boot, Java Agent vs. Micrometer Tracing
Metrics (and the standards and frameworks for collecting them) are a surprisingly fascinating topic, and the dynamics of this area, with its ever-evolving approaches, never cease to amaze me. A little spoiler: next week, we'll have MicroProfile 7.0 in the Release Radar, which touches on this topic quite a bit. Today, however, we'll take a look at various approaches to tracing in a Spring application.
In the latest article OpenTelemetry Tracing on Spring Boot, Java Agent vs. Micrometer Tracing, Nicolas Frankel delves into three different methods for implementing tracing in a Spring Boot application: using OpenTelemetry Java Agent version 1.x, OpenTelemetry Java Agent version 2.x, and Micrometer Tracing. The author uses a simple Spring Boot application with one endpoint to demonstrate the differences. By analyzing the different versions of the OpenTelemetry agent and comparing them to Micrometer Tracing, the article highlights the advantages and disadvantages of each method, paying particular attention to ease of integration, customization options, and impact on application code.
Micrometer Tracing, a vendor-neutral observability facade, allows developers to integrate tracing into JVM-based applications with minimal overhead and great flexibility when it comes to choosing the specific format—similar to how SLF4J works for logging. Micrometer requires additional dependencies and environment configuration but offers the advantage of using the same API to handle both metrics and traces. On the other hand, the OpenTelemetry Java Agent simplifies configuration as it requires no code modifications; being an agent, it is attached to the runtime environment transparently to the application code. Interestingly, the transition from version 1 to version 2 of the agent reveals differences in handling Spring-specific annotations, highlighting the need for clearly defining traces to avoid unnecessary noise.
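To make the contrast concrete: the agent route requires no code at all, just a JVM flag along the lines of `java -javaagent:opentelemetry-javaagent.jar -jar app.jar`, while Micrometer Tracing gives you a `Tracer` bean to inject. Below is a minimal sketch of the Micrometer side, assuming `micrometer-tracing` plus a bridge such as `micrometer-tracing-bridge-otel` are on the classpath; the endpoint and span names are my own, not Nicolas's:

```java
import io.micrometer.tracing.Span;
import io.micrometer.tracing.Tracer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class GreetingController {

    private final Tracer tracer;

    GreetingController(Tracer tracer) {
        // Spring Boot auto-configures a Tracer bean when a tracing bridge is present
        this.tracer = tracer;
    }

    @GetMapping("/hello")
    String hello() {
        // Manually open a child span around the "business logic"
        Span span = tracer.nextSpan().name("compute-greeting").start();
        try (Tracer.SpanInScope ws = tracer.withSpan(span)) {
            return "Hello, traced world!";
        } finally {
            span.end();
        }
    }
}
```

In a typical Spring Boot 3 setup with Actuator, the framework already creates a span per incoming request; the manual API shown here is only needed for custom child spans.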
If you're wondering which tracing solution to choose—this is a must-read. If not, save it for later. A very practical overview with some interesting tips.
How does it feel to test a compiler?
Have you ever wondered what it’s like to test a compiler?
In the article How does it feel to test a compiler?, Alexander Zakharenko, a QA engineer from the Kotlin/Native team, presents the unique challenges of compiler testing. Unlike traditional web or mobile application testing, testing a compiler means testing a complex program that translates high-level language code into machine-readable code. Alex emphasizes the importance of understanding both the frontend (responsible for analyzing code) and the backend (responsible for generating code) of the compiler; the work involves testing language constructs, ensuring proper integration with libraries, and handling various compilation parameters, often in multiple combinations. Compiler testing is not only about "functional" compilation results but also about non-functional aspects such as memory management (garbage collection) and compatibility with the various Kotlin/Native targets, including iOS, WebAssembly, and embedded systems.
Working on a niche product like Kotlin/Native offers unique challenges, including a unique testing pyramid (from unit tests to end-to-end tests) as well as exploratory testing for features that are difficult to automate. Therefore, the tools and methods used in compiler testing are both diverse and highly specialized. Alex uses basic tools like `bash` scripts and `vim`, as well as IntelliJ IDEA, but also advanced debugging tools like `lldb`, which allow for deeper problem analysis. From my perspective, the most unique aspect is the use of pairwise testing, facilitated by the `pict` tool, to effectively manage the huge number of compiler parameter combinations.
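If pairwise testing is new to you, the underlying observation is that most bugs are triggered by the interaction of at most two parameters, so instead of running every full combination it is enough to cover every pair of parameter values. Here's a toy illustration with made-up Kotlin/Native-flavored parameters (a real model would be fed to `pict`, which computes such covering sets automatically):

```java
import java.util.HashSet;
import java.util.Set;

// Tiny illustration of why pairwise testing helps: 3 x 2 x 2 parameter
// values give 12 exhaustive combinations, but every *pair* of values is
// already covered by the 6 rows below.
public class PairwiseDemo {

    // Hypothetical compiler parameters, chosen purely for illustration
    static final String[][] TESTS = {
            {"ios",   "gc-on",  "debug"},
            {"ios",   "gc-off", "release"},
            {"wasm",  "gc-on",  "release"},
            {"wasm",  "gc-off", "debug"},
            {"linux", "gc-off", "release"},
            {"linux", "gc-on",  "debug"},
    };

    public static void main(String[] args) {
        Set<String> covered = new HashSet<>();
        for (String[] row : TESTS) {
            // Record every pair of parameter values appearing in this row
            for (int i = 0; i < row.length; i++) {
                for (int j = i + 1; j < row.length; j++) {
                    covered.add(i + ":" + row[i] + "|" + j + ":" + row[j]);
                }
            }
        }
        // 3*2 + 3*2 + 2*2 = 16 distinct pairs must all be present
        System.out.println("Covered pairs: " + covered.size() + " of 16");
        System.out.println("Rows used: " + TESTS.length + " instead of 12 exhaustive");
    }
}
```

Six rows instead of twelve is a modest win here, but with dozens of parameters the exhaustive product explodes combinatorially while the pairwise covering set grows only very slowly.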
I highly recommend it if you'd like to get a taste of challenges from a different part of the industry (unless, of course, some compiler QA engineers are reading this).
Running JavaFX applications on ARM with Azul Zulu & JavaFX Nodes versus Canvas
Did you know that JavaFX is still alive? Every time I hear about it, I’m surprised, only to forget and be surprised again later.
But jokes aside, JavaFX is a project that still has a lot of life in it, and its biggest "prophet" is undoubtedly Frank Delporte (whom you might also know as the host of the Foojay.io podcast and the maintainer of Pi4J, the Raspberry Pi peripheral support project for Java). In the past month, he has shared two articles about JavaFX.
The first article, Running JavaFX applications on ARM with Azul Zulu, highlights that the Raspberry Pi platform is an excellent choice for testing Java and JavaFX applications on ARM. The example demonstrated in the article is a simple JavaFX application that displays the Java and JavaFX versions as well as a frames-per-second counter. Testing such applications on ARM allows for quick prototyping and simulating edge applications, where the app must connect sensors to cloud services while providing information to the operator on-site. From this perspective (personal aside: I studied Automation and Robotics, so I encountered more than one "industrial" screen during my studies), JavaFX does indeed seem to be an interesting option for creating stable user interfaces.
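The demo described in the article boils down to something like the following minimal sketch; this is my own reconstruction of the idea, not Frank's exact code:

```java
import javafx.animation.AnimationTimer;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;

public class VersionFpsApp extends Application {

    @Override
    public void start(Stage stage) {
        // Show the runtime versions, as in the article's demo
        Label versions = new Label("Java: " + System.getProperty("java.version")
                + "  JavaFX: " + System.getProperty("javafx.version"));
        Label fps = new Label("FPS: -");

        // Count frames: handle() is called once per rendered pulse
        AnimationTimer timer = new AnimationTimer() {
            private long lastSecond = 0;
            private int frames = 0;

            @Override
            public void handle(long now) {
                if (lastSecond == 0) {
                    lastSecond = now;
                    return;
                }
                frames++;
                if (now - lastSecond >= 1_000_000_000L) { // one second in nanoseconds
                    fps.setText("FPS: " + frames);
                    frames = 0;
                    lastSecond = now;
                }
            }
        };
        timer.start();

        stage.setScene(new Scene(new VBox(10, versions, fps), 320, 120));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```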
The second article focuses much more on the technical details of the framework: it compares using Nodes versus Canvas in JavaFX. Frank conducted an experiment comparing the two approaches for displaying large numbers of elements. The results show that Nodes are ideal for static content, such as forms or tables, whereas Canvas offers more flexibility for generating dynamic or custom content. A benchmark revealed that Canvas can display about ten times more objects than Nodes before the frame rate drops, suggesting that Canvas is more efficient for animations and dynamic drawing.
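The difference between the two approaches looks roughly like this (a simplified sketch of the idea, not the article's benchmark code): with Nodes, every shape is a retained object in the scene graph, while with Canvas you issue immediate-mode draw calls against a single node:

```java
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;

class NodesVsCanvas {

    // Scene-graph approach: every circle is a retained Node that JavaFX
    // tracks, lays out, and can attach events or CSS to individually.
    static Pane buildWithNodes(int count) {
        Pane pane = new Pane();
        for (int i = 0; i < count; i++) {
            pane.getChildren().add(
                    new Circle(Math.random() * 400, Math.random() * 400, 2, Color.STEELBLUE));
        }
        return pane;
    }

    // Canvas approach: one Node, immediate-mode drawing; the circles are
    // just pixels, so there is far less per-object overhead.
    static Canvas buildWithCanvas(int count) {
        Canvas canvas = new Canvas(400, 400);
        GraphicsContext gc = canvas.getGraphicsContext2D();
        gc.setFill(Color.STEELBLUE);
        for (int i = 0; i < count; i++) {
            gc.fillOval(Math.random() * 400, Math.random() * 400, 4, 4);
        }
        return canvas;
    }
}
```

This is also the trade-off Frank measured: Nodes give you per-element events and styling, while Canvas gives you raw throughput.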
Lastly, I wanted to mention that Frank also compiles JavaFX Monthly, in which he gathers all the interesting links on the topic of JavaFX. If you're among JavaFX users, it's a must-follow.
Creating cloud-native Java applications with the 12-factor app methodology
Ah, I almost shed a tear for the era when 12-Factor Apps were the hottest trend everyone was talking about. Defined by developers at Heroku (another relic from the past), the approach provides a set of best practices for building cloud-native applications that are robust, scalable, and easy to maintain. The 12-factor app is based on key principles grouped around three main areas: configuration management, scalability, and deployment flexibility. Applications should be easily configurable, with configuration separated from source code, allowing for differentiation between environments (e.g., production, staging). It is also important to minimize dependence on local infrastructure, making applications independent and easily portable. Scalability is achieved through stateless application design and dynamic resource management. Additionally, applications should be easy to monitor, treat logs as event streams, and deploy continuously, thanks to the automation of the deployment process.
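As a quick refresher on the config factor, the point is simply that anything that differs between environments lives outside the artifact. A minimal plain-Java sketch (the variable names are hypothetical, chosen for illustration):

```java
// Factor III, "Config": read deployment-specific values from the
// environment instead of baking them into the artifact.
public class AppConfig {

    // Hypothetical environment variable names, for illustration only
    private static final String DB_URL =
            getenvOrDefault("APP_DB_URL", "jdbc:postgresql://localhost:5432/dev");
    private static final int PORT =
            Integer.parseInt(getenvOrDefault("APP_PORT", "8080"));

    private static String getenvOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        // The same jar runs unchanged in dev, staging, and production;
        // only the environment differs.
        System.out.println("Connecting to " + DB_URL + " on port " + PORT);
    }
}
```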
The article Creating cloud-native Java applications with the 12-factor app methodology presents a practical application of these 12 factors, demonstrated using a demo application that leverages various open-source technologies (including Open Liberty, Quarkus, MicroProfile, Vert.x, and Kubernetes) to illustrate how each of the 12 factors can be effectively implemented to build resilient, cloud-native Java applications. By analyzing this demo, developers can get a practical, isolated example of how these individual principles can be applied to create scalable and portable applications, increasing their ability to develop solutions tailored to cloud environments.
But that's not all. The original 12-factor methodology provides good basic guidelines for building and deploying cloud-native applications. However, with the advances in cloud technologies since its inception over a decade ago, there has been a need to extend these principles with several new aspects. That's why the original 12 factors evolved into 15 factors, developed by Kevin Hoffman in Beyond the Twelve-Factor App. This is discussed in a follow-up to the featured text, Beyond the 12 factors: 15-factor cloud-native Java applications.
The 15-factor methodology builds on the original concepts, introducing three new factors: API First, Telemetry, and Authentication and Authorization. The API First factor emphasizes the importance of designing clear and reusable APIs, which are key to seamless integration in a distributed cloud services ecosystem. Telemetry expands on the original logging factor, focusing on real-time data collection and monitoring, providing essential insights into application performance and user interactions in a distributed environment. Finally, the Authentication and Authorization factor underscores the need for robust security measures to protect cloud-native applications, ensuring secure access and compliance with stringent data protection requirements.
I must admit, the article was the first time I encountered the 15-factor methodology, and I'll need to form a practical opinion on how its various aspects hold up.
And that's all for today. See you next week in a slightly more standard edition!