Feature Freeze for JDK 25: What Will the New Edition Bring? - JVM Weekly vol. 133
Last Thursday, JDK 25 entered the Rampdown phase!
This means the feature list has been frozen and no further new features are to be expected in this release.
As a long-term support (LTS) version (yeah, I know that strictly speaking OpenJDK itself has no LTS versions), JDK 25 will receive at least five years of Premier Support from Oracle. The production release is scheduled for September 16, following a second rampdown phase starting July 17, with two release candidates expected in August.
Now, we will go through the complete list of changes in the new edition.
Exegi Monumentum - New Stable APIs
JEP 506: Scoped Values
JEP 506 completes the series of preview versions starting with Java 20 and introduces Scoped Values as a stable feature of the platform. A Scoped Value is a container for an immutable value visible only within the dynamic scope of calls: a method establishes the binding, and the entire chain of its direct and indirect calls - including newly created child threads - can access that value. This mechanism is lighter than ThreadLocal, uses less memory (critical when dealing with millions of virtual threads), and eliminates side-effects related to forgotten cleanup.
Problem it solves: ThreadLocal is often overused to pass context across application layers. Its values are copied into child threads, causing memory overhead, and the lack of a clear “end of life” forces manual remove() calls, risking leaks. In the era of virtual threads, where each request can get its own lightweight thread, the cost of these copies becomes prohibitive. What was needed was a way to pass immutable data with a clearly visible lifetime and safe inheritance by child threads.
The solution: Scoped Values are typically declared as static final. The method ScopedValue.where(key, value).run(runnable) binds the value for the duration of the given block. Within this block, any call to key.get() returns the value, and once the block exits, the binding disappears, preventing leaks. Bindings are automatically inherited by threads created within the scope (e.g., via StructuredTaskScope), so child code sees the same context without memory copying.
import static java.lang.ScopedValue.where;

public class Example {
    private static final ScopedValue<String> USER_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        where(USER_ID, "42").run(() -> { // establish binding
            greet();                     // within the scope
        });                              // calling USER_ID.get() here, outside the scope, throws (the value is unbound)
    }

    private static void greet() {
        System.out.println("Hello, user " + USER_ID.get());
    }
}
What changed since the fourth preview (JEP 487): The only functional change is the removal of support for null in ScopedValue.orElse; it now always requires a non-null fallback value. Other than that, the feature simply leaves preview status, completing its long refinement cycle and allowing both users and library authors to adopt it confidently.
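For completeness, a minimal sketch of the orElse behavior described above, reusing the USER_ID scoped value from the earlier example:
// With no binding in effect, orElse returns the (non-null) fallback instead of throwing
String id = USER_ID.orElse("anonymous");
// Passing null as the fallback is no longer supported in the final API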
JEP 511: Module Import Declarations
JEP 511 finalizes the module import declaration, which allows a single statement to import all packages exported by a given module, as well as those exported by the modules it requires transitively. This simplifies the use of modular libraries in non-modular code and shortens .java files by eliminating long lists of imports.
Problem it solves: Since the module system was introduced, non-modular code had to rely on regular imports and possibly define a module to access full APIs. In practice, this led to duplicate import lists, discouraging use of modular libraries and thus hindering adoption. Wildcard package imports (import p.*;) didn’t help because a library might expose many packages from multiple dependent modules with non-obvious names.
The solution: The new syntax:
import module M;
acts like a wildcard import at the module level. After this declaration, the compiler sees all public classes from packages exported by module M, as well as those from modules transitively required by M. The declaration works in both modular and non-modular source files and does not require module-info.java. It was previously tested under JEP 476 (JDK 23) and JEP 494 (JDK 24) and is now stabilized without changes.
Usage example
// Grants access to all exported classes from java.sql
import module java.sql;

public class Demo {
    public static void main(String[] args) throws Exception {
        // DriverManager resolves by its simple name thanks to the module import
        var conn = DriverManager.getConnection("jdbc:sqlite:test.db");
        // ...
    }
}
After compilation, the code behaves as if it contained a regular import for every package exported by java.sql, without requiring the project to be modular.
JEP 512: Compact Source Files and Instance Main Methods
JEP 512 finalizes (after four preview versions) the "simplified path" for small programs. It enables compact source files without class declarations, with an instance main() method, and a new java.lang.IO class for basic input/output. The goal is to reduce boilerplate for beginners and small CLI tools, while maintaining compatibility with existing tools and a smooth upgrade path to production code.
Problem it solves: The traditional public static void main(String[] args) and mandatory class/package/import declarations introduce newcomers to complex concepts (access modifiers, static, arrays, System.out) before they can do anything useful. They also make writing quick scripts and CLI tools harder. Previous workarounds, like single-file execution, still required full class syntax.
The solution: A "compact source file" may contain only fields and methods. The compiler generates an invisible, final class in the unnamed package and requires an executable main method.
main can be non-static and parameterless. If such a method exists, the launcher creates an instance via the default constructor and calls it. The java.lang.IO class (previously in java.io) offers five static print/println/readln methods, removing the need to know about System.out or BufferedReader. Packages from java.base are automatically imported, so List.of() and others are available without import.
// File: HelloWorld.java
void main() {
    IO.println("Hello, World!");
}
You can run the program like any single-file Java source:
java HelloWorld.java
or compile and run it in the usual two-step flow.
What has changed since the preview version? The name was changed from “simple source file” to “compact source file”, and static imports of the IO methods are no longer implicit, to avoid "magic" behavior.
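Putting the finalized shape together, a small sketch of a compact source file (the file name is a placeholder): java.base is imported implicitly, so List is available without an import, while the IO methods now need the explicit IO. prefix:
// File: Greeter.java - a compact source file
void main() {
    var name = IO.readln("What is your name? ");        // java.lang.IO, explicit prefix
    var greetings = List.of("Hello", "Hi", "Welcome");  // java.base is auto-imported
    IO.println(greetings.getFirst() + ", " + name + "!");
}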
JEP 513: Flexible Constructor Bodies
JEP 513 allows constructors to contain regular statements before calling another constructor (super(…) or this(…)). These statements may not use the still-unfinished object (i.e., call methods on this) but can assign fields, validate arguments, calculate intermediate values, or log events.
This change closes a long-standing gap between the JVM - which has always allowed this - and the language, which required super(…) or this(…) to be the constructor’s first statement.
Problem it solves: Until now, you couldn’t even validate constructor parameters or initialize fields before calling another constructor. This was especially frustrating for developers moving between Java and Kotlin, which has more flexible constructor semantics. It led to awkward code (duplicated validation, factories instead of constructors, static helper methods) and created artificial limitations not enforced by the JVM.
The solution: The language spec is relaxed: constructor bodies may contain any statements before an explicit super(…)/this(…) call, as long as:
Every execution path contains exactly one such call (so the superclass constructor runs exactly once),
Code before the call does not access the incomplete instance except by assigning its fields,
The call is not inside a try block.
The JVM’s rules on safe object initialization (e.g., superclass constructor is called once) remain unchanged - no VM changes are needed.
class Point {
    private final int x, y;

    Point(int x, int y) {
        if (x < 0 || y < 0)                       // validation before super()
            throw new IllegalArgumentException();
        this.x = x;                               // early field assignment is allowed
        this.y = y;
        super();                                  // explicit call after the prior statements
    }
}

class ColoredPoint extends Point {
    private final Color color;

    ColoredPoint(int x, int y, Color c) {
        Objects.requireNonNull(c);                // validate before super(...)
        this.color = c;                           // assign own field early
        super(x, y);                              // explicit call after prior statements
    }
}
With flexible constructors, code becomes more natural: validation and initialization logically precede base constructor calls, and developers don’t need workarounds or code duplication.
JEP 510: Key Derivation Function API
JEP 510 stabilizes - without further changes - the interface introduced in JDK 24 as a preview: a small but critical component of the cryptography library that allows implementation-agnostic use of KDFs (Key Derivation Functions), such as HKDF or Argon2. These functions derive new keys from a base secret key, salt, and context data.
Problem it solves: Java had a rich set of cryptographic primitives but lacked a unified API for key derivation functions (KDFs). Each project implemented HKDF or Argon2 differently, with inconsistent conventions and no standard integration with Security Providers. This hindered the use of post-quantum schemes like Hybrid Public Key Encryption, which rely on KDFs in their key establishment.
The solution: A new factory class KDF.getInstance("HKDF-SHA256") provides a ready-made HKDF implementation. Future algorithms (e.g., Argon2) can be added by external providers, written in Java or native code. Parameters are passed via specialized classes like HKDFParameterSpec, and the result is returned as a SecretKey or byte array. This lets TLS libraries, KEM modules (JEP 452), and PKCS#11 drivers rely on a shared API rather than duplicating code.
// Create an HKDF object and derive an AES-256 key
KDF kdf = KDF.getInstance("HKDF-SHA256");
AlgorithmParameterSpec params =
        HKDFParameterSpec.ofExtract()
                .addIKM(initialKeyMaterial)
                .addSalt(salt)
                .thenExpand(info, 32);
SecretKey aesKey = kdf.deriveKey("AES", params);
Changes since preview (JEP 478): The only change is the removal of the "preview" status - the API signature, KDF/KDFSpi classes, and parameter objects remain the same.
Performance - VM Internals
JEP 519: Compact Object Headers
JEP 519 promotes compact object headers (introduced in JDK 24 as an experimental feature) to a fully supported production option in JDK 25. When the flag -XX:+UseCompactObjectHeaders is enabled, each object in HotSpot uses only one machine word for its header instead of two, reducing memory requirements and improving data locality. Compact headers remain disabled by default, but no longer require -XX:+UnlockExperimentalVMOptions.
Problem it solves: The standard 64-bit HotSpot object header consists of two words: a mark word and a class pointer. With millions of objects, this results in tens of megabytes of pure overhead. The previous -XX:+UseCompressedClassPointers option only partially reduced that cost; full 64-bit headers still doubled GC traffic and hurt CPU cache efficiency.
The solution: The compact format combines the mark word, a compressed class pointer, and a few bits of meta-information into a single 64-bit value. The JVM retains full header functionality (e.g., identity hash codes, locking, GC state) while freeing up the second word for application data. The option has undergone large-scale production testing (e.g., at Amazon across hundreds of services) and is ready for widespread use. Users enable it with a single flag:
java -XX:+UseCompactObjectHeaders -jar app.jar
Four extra bits are reserved for future projects (like Valhalla), allowing the format to evolve without resizing the header again.
JEP 521: Generational Shenandoah
JEP 521 promotes the generational mode of the Shenandoah GC from experimental (introduced in JEP 404 in JDK 24) to a fully supported production option. You can now run the generational GC with just -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational, without unlocking experimental flags.
Problem it solves: Shenandoah's single-generation mode eliminates long stop-the-world pauses but loses the benefits of generational heap design: most objects die young, so it's cheaper to reclaim them in a young generation and avoid frequent scans of long-lived data. ZGC already has a generational mode in production (JEP 439), leaving Shenandoah users to choose between low latency and efficient young object collection.
The solution: After a year of stabilization and testing, the generational mode reached production quality: known bugs were resolved, and memory/CPU usage now matches or outperforms the single-generation version. In JDK 25, the -XX:+UnlockExperimentalVMOptions requirement is removed, and all other Shenandoah options remain unchanged, making migration as simple as flipping a flag. The default mode is still single-generation, but users can now safely enable the generational variant in production environments.
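A minimal invocation, using only the flags mentioned above, might look like this:
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational -jar app.jar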
JEP 503: Remove the 32-bit x86 Port
JEP 503 removes all source code and build support for the 32-bit x86 architecture from the JDK mainline. The port was already marked for removal in JDK 24 (JEP 501), and now, in JDK 25, its elimination simplifies building, testing, and future development by removing the need to maintain 32-bit-specific code paths.
Problem it solves: Maintaining the 32-bit x86 code required additional testing, CI setup, and separate branches in HotSpot, slowing down development and increasing regression risk. The server market switched to 64-bit over a decade ago, and even the last major 32-bit client system - Windows 10 32-bit - loses support in October 2025. New features (e.g., advanced virtual thread scheduling) had to either ignore 32-bit or implement costly workarounds, offering no real value to users.
The solution: All code, build scripts, and tests dependent on x86_32 were removed. The build system no longer generates 32-bit x86 artifacts. Documentation and supported platform lists were updated to clearly state the minimum hardware requirements: Intel/AMD x86-64 or other supported 64-bit architectures.
Users still wishing to run the JDK on 32-bit x86 systems can do so using the interpreter-only Zero port.
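For reference, the Zero variant is selected at JDK build time; a rough sketch of the configure step (toolchain details omitted, and exact options may vary by distribution):
bash configure --with-jvm-variants=zero
make images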
JEP 514: Ahead-of-Time Command-Line Ergonomics
JEP 514 simplifies use of Ahead-of-Time (AOT) class caches by adding a single flag that combines training the application and generating the cache in one java invocation.
Problem it solves: AOT caches (introduced in JEP 483) speed up application startup by using pre-loaded and pre-linked classes stored in .aot files. Until now, using them required two separate steps: first recording workload behavior with -XX:AOTMode=record, then building the cache with -XX:AOTMode=create. This discouraged regular use and left behind intermediate config files.
The solution: The new option -XX:AOTCacheOutput=<file> instructs the java launcher to handle both steps internally: it runs the app in training mode, then automatically builds the cache. The temporary configuration file is created and deleted behind the scenes, so users only see a single artifact: the .aot file. A new environment variable JDK_AOT_VM_OPTIONS allows specifying options solely for the cache creation phase (e.g., memory limits) without affecting the training run.
The old two-step flags remain supported for advanced use cases.
# One-step cache creation
java -XX:AOTCacheOutput=app.aot -cp app.jar com.example.App
# Run with an existing cache (as before)
java -XX:AOTCache=app.aot -cp app.jar com.example.App
This makes daily AOT cache generation as easy as adding a flag - likely boosting adoption and laying the groundwork for future optimizations under Project Leyden.
JEP 515: Ahead-of-Time Method Profiling
JEP 515 extends AOT caches by allowing method execution profiles to be saved during the application’s “training” run. On subsequent (production) runs, HotSpot can load these profiles immediately, allowing the JIT compiler to generate highly optimized native code from the very first second. This helps the application reach peak performance much faster.
Problem it solves: Currently, the JVM spends the first few seconds (or minutes) collecting stats to identify “hot” methods. This warm-up period delays peak performance and, in serverless or short-lived jobs (e.g., cronjobs), may account for most of the process’s lifetime. The AOT cache from JEP 483 eliminates class loading and linking delays, but doesn’t help the JIT, which still waits for profile data.
The solution: During the training run, the JVM saves not just class data to the AOT cache but also complete method execution profiles. This same .aot file can then be reused in production, where profiles are available immediately - allowing the JIT to compile hot methods right away without guessing. This shorter warm-up translates to reduced CPU spikes and better response times.
The workflow and flags remain identical to those introduced in JEP 514.
Observability
JEP 509: JFR CPU-Time Profiling (Experimental)
JEP 509 extends JDK Flight Recorder with support for sampling CPU time per thread on Linux. Unlike traditional "execution-time" profiling, the new jdk.CPUTimeSample event precisely shows how much CPU time individual methods consume, making it easier to identify true throughput bottlenecks.
Problem it solves: Existing "execution-time" profiling mixes actual computation with idle time (e.g., blocking I/O). As a result, a method that spends 90% of its time waiting may look just as “hot” as a compute-heavy method, even though it barely uses the CPU. Without CPU-time metrics, it’s hard to determine what’s really taxing the processor, especially in I/O-heavy applications.
The solution: HotSpot uses the Linux perf_event_open syscall to generate signals after each specified increment of per-thread CPU time. Each signal triggers a jdk.CPUTimeSample event with a stack trace, which is recorded alongside the existing jdk.ExecutionSample events in the JFR file. Sampling frequency is configurable (e.g., throttle=10ms or 500/s), and any lost samples are reported via a new event jdk.CPUTimeSamplesLost.
This feature is marked as @Experimental, but doesn’t require any special JVM flags - just enable the event.
# Start recording with CPU-time sampling (500 samples/sec) and save to file
java -XX:StartFlightRecording=jdk.CPUTimeSample#enabled=true,\
jdk.CPUTimeSample#throttle=500/s,filename=cpu.jfr MyApp
You can then use jfr view cpu-time-hot-methods cpu.jfr or generate flame graphs in JDK Mission Control to instantly see which methods burn the most CPU time.
You can learn more details in Java 25’s new CPU-Time Profiler by the JEP's author, Johannes Bechberger. Thanks for the correction to this part, Johannes - much appreciated!
JEP 520: JFR Method Timing & Tracing
JEP 520 enhances JDK Flight Recorder in JDK 25 with two new events for each method call: jdk.MethodTrace (who calls whom) and jdk.MethodTiming (precise duration), implemented using lightweight bytecode instrumentation at runtime. This enables full method-level tracing without custom logging or the sampling inaccuracy common to traditional profilers.
Problem it solves: Until now, full “call tracing” required manually adding logs or using Java Agents, which could be cumbersome. JFR only offered stack sampling, which didn’t provide method durations or call counts - making it hard to pinpoint small but frequently invoked hot spots.
The solution: HotSpot introduces an optional instrumentation pass: when the new events are enabled, it inserts lightweight timing and method ID recording into the prologue and epilogue of methods.
Each method call produces a pair of events: MethodTrace → MethodTiming. JFR aggregates them into comprehensive reports showing number of calls, total time, and mean time per call. The filter=package|class|method option can be used to narrow the scope to selected packages, minimizing overhead.
# Run the app and trace methods from the com.example.service package
java -XX:StartFlightRecording=method-trace,settings=default,\
jdk.MethodTrace#filter=com/example/service/**,filename=trace.jfr -jar app.jar
In JDK Mission Control, you get a table of methods with columns like "Calls", "Total ns", and "Mean ns", plus a call tree, allowing you to quickly identify the most expensive hot paths in just one short recording session.
New Frontier - New APIs
JEP 502: Stable Values (Preview)
JEP 502 introduces the StableValue<T> class, enabling deferred immutability. A StableValue starts out “empty,” but can be initialized exactly once on first access. After being set, the JVM treats it as a constant - offering the same optimizations as a final field - without requiring eager initialization. This allows large applications to start faster by deferring the creation of expensive objects until they’re truly needed.
Problem it solves: Final fields guarantee immutability and enable optimizations like constant folding, but must be initialized immediately - in a constructor or static initializer. For heavyweight components (loggers, network connections, repositories), this slows down startup and often initializes resources that may never be used. Alternatives (lazy holders, double-checked locking, concurrent maps) either give up on optimizations or require complex, error-prone synchronization code.
The solution: StableValue offers the best of both worlds: thread-safe, one-time initialization and deferred evaluation. Methods like orElseSet(…), supplier(…), and list(…) support common usage patterns - from single values to lazy suppliers and on-demand lists. If the StableValue field is itself final, the JVM can optimize access as if the field were always constant, with no need to check for null or synchronize.
// Example: lazy, thread-safe logger initialization
import java.util.logging.Logger;

class OrderController {
    // StableValue lives in java.lang (preview API), so no import is needed
    private final StableValue<Logger> logger = StableValue.of();

    private Logger getLogger() {
        return logger.orElseSet(() -> Logger.getLogger(OrderController.class.getName()));
    }

    void submitOrder() {
        getLogger().info("order started");
        // …
        getLogger().info("order submitted");
    }
}
After the first call to getLogger(), the Logger is created and stored; later calls reuse the same instance. The JVM can optimize access just like with a final field, with no null checks or synchronization overhead.
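The supplier(…) factory mentioned above wraps the same pattern in a plain Supplier; a short sketch, assuming the factory signatures described in the JEP (PaymentService is a made-up class):
import java.util.function.Supplier;
import java.util.logging.Logger;

class PaymentService {
    // The lambda runs at most once, on the first get(); later calls return the cached Logger
    private final Supplier<Logger> logger =
            StableValue.supplier(() -> Logger.getLogger(PaymentService.class.getName()));

    void pay() {
        logger.get().info("payment started");
    }
}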
JEP 470: PEM Encodings of Cryptographic Objects (Preview)
JEP 470 adds small, immutable classes PEMEncoder and PEMDecoder for converting X.509 keys, certificates, or CRLs between binary DER format and text-based PEM, simplifying cryptographic workflows in Java.
Problem it solves: DER (Distinguished Encoding Rules) is a binary format for ASN.1 structures - great for machines but unreadable and hard to transmit. PEM (Privacy-Enhanced Mail) wraps DER in Base64 and adds -----BEGIN …----- headers, making it easier to email, store in Git, or include in server configs.
Until now, Java lacked an official API for DER ↔ PEM conversion. Developers had to write custom parsers or rely on third-party libraries, increasing the risk of bugs and complicating security-related code.
The solution: JEP 470 introduces two static entry points (PEMEncoder.of() and PEMDecoder.of()) with an API style similar to Base64. The method encodeToString(obj) produces PEM-formatted output, optionally encrypting private keys with a password. The method decode(text, type) reconstructs the object and validates headers.
For rare object types, there's a generic PEMRecord class, and extensibility is ensured via the marker interface DEREncodable.
// Minimal example - converting a private key to PEM and back
String pem = PEMEncoder.of().encodeToString(privateKey); // DER → PEM
PrivateKey key = PEMDecoder.of().decode(pem, PrivateKey.class); // PEM → DER
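The password-protected variant mentioned above looks roughly like this - a sketch assuming the withEncryption method described in the JEP:
// Encrypt the private key (as PKCS#8 EncryptedPrivateKeyInfo) before PEM-encoding it
char[] password = "changeit".toCharArray();
String encryptedPem = PEMEncoder.of().withEncryption(password).encodeToString(privateKey);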
JEP 505: Structured Concurrency (Fifth Preview)
JEP 505 significantly revamps the Structured Concurrency API, now in its fifth preview. The API treats groups of concurrent tasks as a single "family unit" of work, enabling unified cancellation, exception propagation, and observability. This makes concurrent code more readable and helps prevent thread leaks.
The problem it solves: Traditional concurrency using ExecutorService and Future allows launching threads anywhere, but does not enforce parent-child relationships. If one task fails, others may keep running, leading to resource leaks, slow cancellation, and confusing thread dumps. In servers using lightweight virtual threads, this is especially problematic due to the high concurrency.
The solution: The StructuredTaskScope class creates a scope (now initialized via the static method open()) in which you fork subtasks and wait for their completion or failure. The default policy is “all succeed or first failure,” but you can customize this by passing a Joiner strategy. The code clearly models parent-child task relationships: if one task throws, others are canceled, and the scope ends with an exception - no leaks.
What changed in the 5th preview?
New way to create scopes: Instead of using constructors like new StructuredTaskScope.ShutdownOnFailure(), you now use the static factory StructuredTaskScope.open(...). The default version waits for all subtasks to succeed or stops on the first failure.
Joiner instead of subclasses: The logic for deciding when the scope is "done" has been moved into the Joiner interface. Built-in factories include anySuccessfulResultOrThrow() (first success wins) and allSuccessfulOrThrow() (all must succeed). You can also write your own strategies without subclassing.
join() returns results directly: join() now returns a result based on the Joiner (e.g., Stream<Subtask<?>> or a single result) or throws FailedException. The old two-step join().throwIfFailed() is gone.
Builder-style configuration: The open() method accepts a lambda configurator to set the scope name, timeouts (withTimeout(Duration)), or custom thread factories - making it easy to assign readable thread names or deadlines.
New exceptions and stricter rules: Invalid usage (e.g., calling fork() outside the owning thread) throws StructureViolationException, and timeouts throw TimeoutException. The scope enforces correct usage via try-with-resources.
Improved observability: The thread dump’s JSON format now includes StructuredTaskScope hierarchy, so tools can visualize the full task tree in one place.
Quick example with the new API
Response handle() throws InterruptedException {
    try (var scope = StructuredTaskScope.open()) {      // default policy
        var user  = scope.fork(() -> findUser());       // subtask 1
        var order = scope.fork(() -> fetchOrder());     // subtask 2
        scope.join();                                    // waits & propagates
        return new Response(user.get(), order.get());   // combine if successful
    }                                                    // auto-cancel on failure
}
The code clearly defines thread lifetime boundaries: everything happens inside a try block; if one subtask fails, the other is canceled automatically, and the exception propagates to the parent thread.
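To show a non-default Joiner in action, here is a hedged sketch of a "first result wins" scope (fetchFromMirrorA/fetchFromMirrorB are hypothetical helpers):
String fetchFromFastestMirror() throws InterruptedException {
    // anySuccessfulResultOrThrow(): the scope completes as soon as one subtask succeeds,
    // the remaining subtasks are cancelled, and join() returns that first result
    try (var scope = StructuredTaskScope.open(
            StructuredTaskScope.Joiner.<String>anySuccessfulResultOrThrow())) {
        scope.fork(() -> fetchFromMirrorA());
        scope.fork(() -> fetchFromMirrorB());
        return scope.join();
    }
}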
Nihil Novi Sub Sole - (Forever) Preview Features
JEP 507: Primitive Types in Patterns, instanceof and switch (Third Preview)
JEP 507 brings back - for the third time - the preview feature that extends pattern matching to primitive types (int, long, double, etc.) across all pattern contexts. This means that both instanceof and switch can now match and safely cast primitive values without boxing, unifying data handling in Java.
Problem it solves: Previous versions of pattern matching only supported reference types. Differentiating between int, double, or boolean required verbose if statements or traditional switch cases, followed by manual casting. This led to repetitive, noisy code and often triggered autoboxing when working with collections or streams. The lack of a unified syntax made it harder to write generic tools that operate on both primitive and object types.
The solution: JEP 507 extends compiler rules so that primitive types can be used anywhere a class type pattern was previously allowed:
instanceof now supports patterns like obj instanceof int i, which automatically casts and binds to i.
switch expressions and statements accept primitive patterns in all branches, allowing elegant dispatch for both reference and primitive values.
Patterns can be nested (e.g., inside records or arrays), enabling a consistent model for data deconstruction.
// instanceof with a primitive pattern
Object o = 42;
if (o instanceof int n) {
    System.out.println("Double the value: " + (n * 2));
}
// switch using both primitive and reference patterns
static String describe(Number num) {
    return switch (num) {
        case int i    -> "int %d".formatted(i);
        case long l   -> "long %d".formatted(l);
        case double d -> "double %.2f".formatted(d);
        case Float f  -> "Float %.2f".formatted(f);
        default       -> "other number";
    };
}
The third preview introduces no semantic changes compared to the second (JEP 488). The authors simply seek more feedback from real-world projects before finalizing the feature.
JEP 508: Vector API (Tenth Incubation)
JDK 25 delivers the tenth incubation release of the Vector API, found in the jdk.incubator.vector module. This API lets developers write explicit vectorized algorithms in plain Java. The JIT compiler (C2) maps the code at runtime to SIMD instructions (e.g., AVX, SSE, NEON, SVE), allowing applications to achieve performance far beyond traditional scalar loops.
Problem it solves: HotSpot's automatic vectorization is fragile: it supports a limited set of operations, depends heavily on loop shapes, and can break with minor code changes. Developers who rely on SIMD power (for ML, signal processing, cryptography) often resort to JNI or specialized C/C++ libraries. The Vector API removes this barrier - offering a high-level abstraction that guarantees Java expressions will be translated into vector instructions without writing native code and while retaining portability.
The solution: The Vector API centers around three key classes: VectorSpecies<E> describes the shape (e.g., 256 bits as 8 ints), the abstract Vector<E> represents the data, and operations are selected from VectorOperators.
The tenth incubation brings one API change and two notable implementation improvements:
VectorShuffle + MemorySegment - Shuffle tables can now directly read/write off-heap data using the Foreign Memory API.
FFM-based linkage – instead of HotSpot-specific stubs, the API now calls SVML/SLEEF native math functions via the Foreign Function & Memory API, reducing C++ complexity and improving maintainability.
Auto-vectorization for Float16 – basic operations (add, mul, div, sqrt, fma) on half-precision floats are now automatically mapped to SIMD instructions on supported x86-64 processors.
The API remains in incubator status: to compile and run code that uses it, you must add the module explicitly (--add-modules jdk.incubator.vector). It’s expected to move to preview only once Project Valhalla delivers value classes.
// Simple SIMD loop: c[i] = -(a[i]^2 + b[i]^2)
static final VectorSpecies<Float> S = FloatVector.SPECIES_PREFERRED;

void vec(float[] a, float[] b, float[] c) {
    for (int i = 0; i < S.loopBound(a.length); i += S.length()) {
        var va = FloatVector.fromArray(S, a, i);
        var vb = FloatVector.fromArray(S, b, i);
        va.mul(va).add(vb.mul(vb)).neg().intoArray(c, i);
    }
    // scalar fallback for the array tail
}
This example is directly translated by HotSpot into AVX/NEON instructions, delivering throughput far exceeding that of an equivalent scalar loop.
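As noted above, the incubator module has to be added explicitly at both compile and run time; a typical sketch (file and class names are placeholders):
javac --add-modules jdk.incubator.vector VecDemo.java
java  --add-modules jdk.incubator.vector VecDemo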
And one last announcement at the end:
🚨 Call for Papers - Confitura 2025
Interested in speaking in front of one of Poland’s longest-running and most respected software communities?
Confitura 2025, a software conference organized by and for the community, will take place on September 19–20, 2025 at ADN Conference Centre in Warsaw.
The conference is organized by a dedicated team of Java developers, architects, and industry professionals, and features multiple tracks covering everything from advanced coding practices to emerging trends in software engineering. Known for its inclusive atmosphere, affordable pricing, and strong emphasis on networking, Confitura traditionally gathers over 1,300 attendees, 32 speakers, and 30 presentations.
⏰ CFP deadline: July 31, 2025. Don’t miss your chance - Submit your talk proposal before the end of July!
👉 More info: 2025.confitura.pl