- Core
- HotSpot
- JEP 404: Generational Shenandoah (Experimental)
- JEP 450: Compact Object Headers (Experimental)
- JEP 479: Remove the Windows 32-bit x86 Port
- JEP 483: Ahead-of-Time Class Loading & Linking
- JEP 475: Late Barrier Expansion for G1
- JEP 490: ZGC: Remove the Non-Generational Mode
- JEP 491: Synchronize Virtual Threads without Pinning
- JEP 501: Deprecate the 32-bit x86 Port for Removal
- Language specification
- Security
- Tools
- General
- Thoughts
- Lookahead
- Resources
Java 24 will be released on the 18th of March 2025, and the full list of enhancements is now available, tallying up to an astounding 24 changes. That certainly raises the bar for release 25!
Let us take a closer look at what’s awaiting us!
The core Java library has once again been enriched with a multitude of new features, including several preview APIs, driving the language ever forward to ensure it can meet today’s needs. As expected, some enhancements like the Vector API remain in incubation, while others, such as the Gatherer API, have now landed.
Let’s take a look at the exciting new features!
This JEP ensures that warnings are issued in a consistent manner when JNI or FFM usage goes against integrity by default. These warnings, and the stricter restrictions planned for future releases, can be selectively lifted by explicitly enabling those specific interfaces where needed.
When using a native function we'll get a WARNING at startup unless we use the JVM --enable-native-access option to authorize it, or add the Enable-Native-Access manifest entry.
Using native functions without authorization counts as illegal native access, hence the aptly named --illegal-native-access option, which allows us to control the JVM's behaviour, with the default being a warning.
The options are:
- warn (default): a warning will be emitted in the JVM logs
- allow: use is authorized without restriction
- deny: use is refused, and an IllegalCallerException will be thrown
Violating this will result in a warning being emitted by the JVM in the following form:
WARNING: A restricted method in java.lang.System has been called
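To see the warning in practice, here is a minimal sketch (assuming a Linux/glibc setup where strlen is resolvable through the default lookup) of an FFM downcall. Run it plainly and the JVM prints the warning above; run it with java --enable-native-access=ALL-UNNAMED and it stays silent:

import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class NativeAccessDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // downcallHandle is a restricted method, so this call triggers the warning
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        try (Arena arena = Arena.ofConfined()) {
            MemorySegment str = arena.allocateFrom("Hello, Java 24!");
            System.out.println((long) strlen.invokeExact(str)); // 15
        }
    }
}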
This JEP, which was initially a preview in JDK 22 and 23, aims to provide a standard API for parsing, generating, and transforming Java class files.
It is geared towards helping developers handle the swifter release cadence, and thus class-file format changes. The API evolves alongside the class-file format, which reduces the reliance of library and framework developers on third-party class-file libraries being updated and tested in time. Frameworks and libraries making use of the standard API will support class files from the latest JDK automatically.
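As a small illustration, here is a minimal sketch that parses a compiled class with the new java.lang.classfile API and lists its methods (DemoApp.class is just a placeholder file name):

import java.lang.classfile.ClassFile;
import java.lang.classfile.ClassModel;
import java.lang.classfile.MethodModel;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClassFileDemo {
    public static void main(String[] args) throws Exception {
        byte[] bytes = Files.readAllBytes(Path.of("DemoApp.class"));
        ClassModel classModel = ClassFile.of().parse(bytes);

        System.out.println("Class: " + classModel.thisClass().asInternalName());
        for (MethodModel method : classModel.methods()) {
            // methodName() exposes the name as a constant-pool UTF-8 entry
            System.out.println("  method: " + method.methodName().stringValue());
        }
    }
}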
This JEP enhances the Stream API with a new intermediate operation that enables data transformations which previously weren't easily achievable. Rather than adding a raft of new built-in methods for every requested operation, it lets us define custom intermediate operations (gatherers) to manipulate streams.
We’re already seeing some interesting libraries pop up with useful gatherers such as Gatherers4j
I’ve already experimented quite a bit with API during the latest Advent of Code, and it certainly made some challenges a lot easier..
For more details, Inside Java Newscast number 57 is certainly also worth a watch.
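To give a small taste of the now-final API, here is a sketch using two of the built-in gatherers from java.util.stream.Gatherers:

import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GatherersDemo {
    public static void main(String[] args) {
        // Group elements into fixed-size windows of 3
        List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5, 6, 7)
                .gather(Gatherers.windowFixed(3))
                .toList();
        System.out.println(windows); // [[1, 2, 3], [4, 5, 6], [7]]

        // A running total using the scan gatherer
        List<Integer> totals = Stream.of(1, 2, 3, 4)
                .gather(Gatherers.scan(() -> 0, Integer::sum))
                .toList();
        System.out.println(totals); // [1, 3, 6, 10]
    }
}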
Scoped values are being introduced as a preview API for the fourth time. They enable methods to share immutable data both within a thread and with child threads, offering better efficiency and clarity than thread-local variables, especially when used with virtual threads and structured concurrency.
The fourth iteration mostly remains the same; the only change is the removal of callWhere() and runWhere() from ScopedValue, leaving us with a fluent API. After this change we can only use bound scoped values through the call() and run() methods of ScopedValue.Carrier.
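A minimal sketch of the resulting fluent API (REQUEST_ID is just an example name, and --enable-preview is still required):

public class ScopedValueDemo {
    // A scoped value holds an immutable binding for the current thread and its children
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        // where(...) returns a ScopedValue.Carrier; the binding only exists inside run()/call()
        ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValueDemo::handleRequest);
    }

    static void handleRequest() {
        // Readable anywhere down the call chain while the binding is in scope
        System.out.println("Handling " + REQUEST_ID.get());
    }
}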
The ninth incubation of an API for vector computations. It aims to be a way to express these in a manner that compiles reliably to optimal vector instructions on supported CPU architectures, achieving better performance than equivalent scalar computations.
This iteration adds a new Float16 type, which supports 16-bit floating-point values in the IEEE 754 binary16 format within the Vector API.
The developers have confirmed that the API will remain in incubation until project Valhalla enters preview itself so that the API can leverage the expected performance and in-memory representation enhancements.
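A minimal sketch of a vectorized fused multiply-add over float arrays; since the API is still incubating it needs --add-modules jdk.incubator.vector:

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorDemo {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // c[i] = a[i] * b[i] + c[i], processed in hardware-sized lanes where possible
    static void fma(float[] a, float[] b, float[] c) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            FloatVector vc = FloatVector.fromArray(SPECIES, c, i);
            va.fma(vb, vc).intoArray(c, i);
        }
        for (; i < a.length; i++) { // scalar tail for the remaining elements
            c[i] = a[i] * b[i] + c[i];
        }
    }
}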
A lot of low-level frameworks use the Unsafe class for faster memory access; however, this class was always an internal class and is officially unsupported and undocumented.
In "recent" releases we've received alternative APIs that achieve the same thing, and as communicated when the memory-access methods were deprecated in JDK 23, they are slowly being phased out.
As of JDK 24 using them will result in a warning at run time, and in a future release the invocations will by default escalate to an exception.
The better approaches are:
- VarHandle, which landed in Java 9 and gives us a modern, type-safe way to perform variable access
- the Foreign Function & Memory (FFM) API, which landed in JDK 22 through JEP 454 and allows us to safely interact with native memory and functions
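A rough sketch of what those replacements look like in practice; the Counter class is just an invented example for the VarHandle part, and the FFM part allocates off-heap memory that is freed when the arena closes:

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class UnsafeAlternativesDemo {
    static class Counter {
        volatile int value;
    }

    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup().findVarHandle(Counter.class, "value", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) {
        // VarHandle: type-safe, atomic field access instead of Unsafe field offsets
        Counter counter = new Counter();
        VALUE.getAndAdd(counter, 1);
        System.out.println(counter.value); // 1

        // FFM: safe off-heap memory instead of Unsafe.allocateMemory
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment segment = arena.allocate(ValueLayout.JAVA_LONG);
            segment.set(ValueLayout.JAVA_LONG, 0, 42L);
            System.out.println(segment.get(ValueLayout.JAVA_LONG, 0)); // 42
        }
    }
}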
This proposal aims to address some of the frequently encountered challenges in concurrent programming by introducing an API that treats groups of related tasks running in different threads as a single unit of work.
The proposed approach, also known as structured concurrency will help us streamline error handling, cancellation, and observability, making concurrent code more reliable and easier to manage.
The main target is promoting a style of concurrent programming that eliminates common risks such as thread leaks and cancellation delays, while also improving the observability of concurrent code. The API itself is centered around StructuredTaskScope, which allows us to define a clear hierarchy of tasks and subtasks, ensuring that all subtasks are completed or cancelled before the parent task exits. This enforces a structured flow of execution, similar to single-threaded code, where subtasks are confined to the lexical scope of their parent task. The API also provides built-in shutdown policies (e.g., ShutdownOnFailure and ShutdownOnSuccess) to handle common concurrency patterns, such as cancelling all subtasks if one fails or succeeds. We can also define our own shutdown policies.
By reifying the task-subtask relationship at runtime, structured concurrency makes it easier for us to reason about and debug concurrent programs. It also integrates well with observability tools, such as thread dumps, which can now display the hierarchical structure of tasks and subtasks.
This enhancement does not aim to replace existing concurrency constructs like ExecutorService or Future, but rather to complement them by offering a more structured and reliable alternative for managing concurrent tasks.
JEP 499 builds on previous previews (JEP 428, 437, 453, 462, and 480) and is designed to work seamlessly with virtual threads (JEP 444), which enables lightweight, high-throughput concurrency. There are no changes in this preview.
For now, it’ll remain in preview to gather further feedback, so please do try it out and share your feedback! It continues in JEP Draft 8340343, Structured Concurrency (Fifth Preview).
Proper attention has been paid to making sure HotSpot remains efficient and maintainable, addressing key challenges such as the pinning issue plaguing virtual threads, further garbage collection enhancements, and the deprecation and removal of several legacy components.
Let us take a look at what was changed.
This aims to enhance the Shenandoah GC with an experimental generational mode, without impacting the existing non-generational mode.
By non-generational we mean that the heap is not split into multiple zones containing objects of different ages. The generational approach is based upon the weak generational hypothesis, which states that most objects die young. Collecting dead objects is very cheap, so by separating young objects from old ones we can target the young generation and clean the heap more efficiently.
The intent is to reduce the sustained memory footprint and resource consumption. Initially it will only support x64 and AArch64, with more instruction sets being supported later.
It was originally intended to land in JDK 21, but it was dropped at the time due to identified risks (see for reference this jdk-dev mailing list entry) to make sure that when it landed it would deliver the best possible result.
We can enable it through the following JVM options: -XX:+UnlockExperimentalVMOptions -XX:ShenandoahGCMode=generational.
This enhancement was inspired by the experiments done as part of Project Lilliput, which found that many workloads have an average object size of 32 to 64 bytes.
It proposes to reduce the object header size in the HotSpot JVM from between 96 and 128 bits down to 64 bits on 64-bit architectures, while keeping an eye on not introducing any major throughput or latency overhead. Given its experimental nature it is disabled by default, to avoid unintended consequences; it can be switched on with -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders.
As planned after the deprecation for removal in JDK 21 (JEP 449), the Windows 32-bit x86 port's source code and build support have been removed.
This feature aims to enhance Java application startup performance by monitoring an application during one run and creating a cache of preloaded and pre-linked classes for subsequent runs.
It seeks to improve startup time by leveraging the typically consistent startup process of applications, without requiring changes to application code, command-line usage, or build tools.
This approach provides a foundation for future improvements in startup and warmup time, with the current implementation focusing on caching classes loaded by built-in class loaders from the class path, module path, and JDK itself.
One major differentiator from GraalVM, which offers similar functionality, is that all JVM functionality is preserved: classes not present in the cache will still be dynamically loaded by the JVM.
At the moment this process involves three steps (there are plans to streamline the process of cache creation):
- Perform a training run to record its AOT configuration (into a file called app.aotconf) ⇒ java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar dev.simonverhoeven.DemoApp
- Use the configuration to create a cache (into a file called app.aot) ⇒ java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar
- Run our application with the cache (note that in case it can't be used the JVM will issue a warning, and then continue) ⇒ java -XX:AOTCache=app.aot -cp app.jar dev.simonverhoeven.DemoApp
On my local machine, Spring PetClinic startup with AOT cache took 2.7 seconds versus 4.6 without.
Simplifies the G1 garbage collector's barrier implementation, which records information about application memory accesses, by moving the expansion of those barriers to later in the C2 JIT's compilation pipeline.
This makes the G1 barriers more comprehensible and reduces C2 compilation time when using the G1 collector, while guaranteeing that C2's invariants and the quality of the generated code are preserved.
The non-generational mode of the Z Garbage Collector (ZGC) will be removed to reduce the maintenance cost of supporting two different modes and to speed up the development of new features.
To provide some context for this JEP: Virtual Threads are lightweight threads managed by the JVM, designed to have minimal overhead compared to traditional platform threads. They are particularly well-suited for I/O-bound or highly concurrent applications, as they enable efficient scaling without the resource constraints associated with platform threads. When executed, virtual threads are mounted onto platform threads (also called carrier threads), with the JVM handling scheduling and context switching. This abstraction simplifies concurrent programming by reducing the need for complex thread-pooling or asynchronous programming constructs. Virtual threads were introduced as part of Project Loom and formalized in JEP 444.
An issue up until now was that blocking inside Java synchronization did not unmount the virtual thread, so the virtual thread stayed pinned to its carrier platform thread, which negatively impacted the scalability of virtual threads. For example, if too many virtual threads pin the platform threads available to the JVM we can run into starvation, or even deadlock issues.
This JEP resolves this by making it possible for virtual threads that block in such cases to release their underlying platform threads. This almost fully eliminates cases of virtual threads being pinned to platform threads and resolves one of the most frequently encountered performance issues when adopting them.
Netflix also shared an interesting writeup on this issue on their TechBlog called Java 21 Virtual Threads - Dude, Where’s My Lock?.
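To make this concrete, here is a toy sketch of the kind of code that pinned its carrier thread on earlier releases (the sleep stands in for real blocking I/O); on JDK 24 the virtual threads simply unmount and the run completes far faster:

import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class PinningDemo {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                Object lock = new Object();
                synchronized (lock) {
                    // Blocking while holding a monitor pinned the carrier thread before JEP 491
                    try {
                        Thread.sleep(Duration.ofMillis(50));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }));
        } // close() waits for all submitted tasks to finish
        System.out.println("Took " + (System.currentTimeMillis() - start) + " ms");
    }
}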
The final remaining 32-bit x86 port, the Linux one, is being deprecated, and with it all downstream ports. After the 32-bit x86 port is removed, the only way to run Java programs on 32-bit x86 processors will be the architecture-agnostic Zero port of the JDK.
The Java language continues to evolve with various enhancements to make it more flexible, expressive and convenient to use. There are enhancements for people of all skill levels geared towards making sure it can meet today’s and tomorrow’s needs.
Let’s take a look at the latest preview features, including updates to pattern matching, flexible constructors, and simplified module imports, and how they’ve evolved based on community feedback.
This JEP, first introduced as JEP 455, returns without any changes. It aims to enhance pattern matching by allowing primitive type patterns in all pattern contexts, and extends instanceof and switch to work with all primitive types.
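A small sketch of what that enables (the status-code mapping is just an invented example; --enable-preview is required):

public class PrimitivePatternsDemo {
    // A switch over an int mixing constant labels with primitive type patterns and guards
    static String describe(int statusCode) {
        return switch (statusCode) {
            case 200 -> "OK";
            case 404 -> "Not found";
            case int i when i >= 500 -> "Server error (" + i + ")";
            case int i -> "Other status: " + i;
        };
    }

    // instanceof with a primitive type pattern checks that the conversion is lossless
    static void printIfItFits(long value) {
        if (value instanceof int i) {
            System.out.println("Fits in an int: " + i);
        } else {
            System.out.println(value + " does not fit in an int");
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(503)); // Server error (503)
        printIfItFits(42L);                // Fits in an int: 42
        printIfItFits(10_000_000_000L);    // does not fit in an int
    }
}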
This proposed Java language feature allows statements before explicit constructor invocations, enabling more natural field initialization. Previewed in JDK 22 and 23, it splits a constructor body into two phases, a prologue and an epilogue, to help developers place initialization logic more intuitively while preserving the existing instantiation safeguards. This proposal has not changed compared to the second preview.
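A small sketch of the prologue in action (requires --enable-preview); validating the argument before super() runs lets us fail fast without triggering the superclass's work:

public class FlexibleConstructorsDemo {
    static class Resource {
        Resource() {
            System.out.println("expensive superclass setup");
        }
    }

    static class NamedResource extends Resource {
        private final String name;

        NamedResource(String name) {
            // Prologue: statements may now run before super(), as long as they
            // don't use the instance under construction
            if (name == null || name.isBlank()) {
                throw new IllegalArgumentException("name must not be blank");
            }
            super();
            // Epilogue: 'this' is fully usable from here on
            this.name = name;
        }
    }

    public static void main(String[] args) {
        new NamedResource("primary"); // prints the setup message
        new NamedResource("  ");      // fails fast, before the expensive setup runs
    }
}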
This will allow us to easily import all packages exported by a module, which facilitates the reuse of modular libraries without requiring the importing code to be in a module itself. It will also allow beginners to more easily use third-party libraries and core Java classes without needing to know their exact location within the package hierarchy.
Compared to the first revision there are two additions:
- the restriction that no module is able to declare a transitive dependency on the java.base module has been lifted, and the java.se module now transitively requires the java.base module
- type-import-on-demand declarations are now allowed to shadow module import declarations
For example: import module java.base;
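A small sketch of how this reads in practice (requires --enable-preview); the explicit java.sql.Date import resolves the ambiguity between the two Date classes pulled in by the module imports:

import module java.base;   // java.util.List, java.util.Map, java.util.Date, ... become visible
import module java.sql;    // also exports a Date class...
import java.sql.Date;      // ...so a single-type import picks the one we want

public class ModuleImportDemo {
    public static void main(String[] args) {
        List<String> names = List.of("ada", "grace");   // no java.util import needed
        Map<String, Integer> ages = Map.of("ada", 36);
        Date birthday = Date.valueOf("1815-12-10");     // java.sql.Date
        System.out.println(names + " " + ages + " " + birthday);
    }
}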
This preview, which hasn't changed from its previous iteration (where it was known as Implicitly Declared Classes and Instance Main Methods), enables simplified programs by allowing them to be defined in an implicit class with an instance method void main().
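The canonical example, as a complete source file (run it with java --enable-preview Hello.java):

// Hello.java - no class declaration, no String[] args, no static
void main() {
    System.out.println("Hello, Java 24!");
}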
Java’s security continues to evolve with enhancements such as quantum-resistant algorithms and the removal of outdated features. These changes help ensure that Java remains a secure and future-proof platform. Not only are today’s security needs tackled, but the language is also preparing for tomorrow’s challenges such as the rise of quantum computing.
Let’s dive into the key security updates!
This proposal aims to introduce an API to derive additional keys from a secret key and other data through cryptographic algorithms known as Key Derivation Functions (KDFs). KDFs are part of the PKCS #11 cryptographic standard and are one of the key building blocks needed to implement Hybrid Public Key Encryption (HPKE), which is expected to play an important role in the transition to quantum-resistant cryptography.
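A rough sketch based on the HKDF flow shown in the JEP (the key material, salt, and info bytes are placeholders); since this is a preview API it needs --enable-preview:

import javax.crypto.KDF;
import javax.crypto.SecretKey;
import javax.crypto.spec.HKDFParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class KdfDemo {
    public static void main(String[] args) throws Exception {
        byte[] inputKeyMaterial = new byte[32];
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(inputKeyMaterial);
        new SecureRandom().nextBytes(salt);

        // Extract-then-expand HKDF derivation of a 32-byte AES key
        KDF hkdf = KDF.getInstance("HKDF-SHA256");
        var params = HKDFParameterSpec.ofExtract()
                .addIKM(new SecretKeySpec(inputKeyMaterial, "RAW"))
                .addSalt(salt)
                .thenExpand("session info".getBytes(), 32);

        SecretKey aesKey = hkdf.deriveKey("AES", params);
        System.out.println(aesKey.getAlgorithm() + " key of "
                + aesKey.getEncoded().length + " bytes derived");
    }
}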
The Security Manager, deprecated in Java 17, has now been permanently disabled in JDK 24 and is slated for complete removal in a future release. Originally designed to secure untrusted code (e.g., applets), the Security Manager provided a set of checks for actions like thread creation and file access. However, it was complex to maintain and had a significant performance footprint when enabled. Its removal has led to the deletion of over 14,000 lines of code, simplifying the JDK.
Opinions on the removal are divided. While most developers are unaffected, some platforms—particularly those with plugin systems like Kafka Connect, Elasticsearch, and Kestra—face challenges. These systems rely on executing untrusted code, and the Security Manager provided a built-in mechanism for enforcing security policies. Without it, developers must now implement alternatives such as sandboxing (e.g., Docker, GraalVM) or custom access controls using Java agents.
While modern security needs are better addressed by tools like containers and OS-level sandboxing, the transition away from the Security Manager requires effort and may not fully replicate its functionality for all use cases.
An interesting read on this topic is also: Stuart Marks - Detoxifying the JDK Source Code.
JEP 496 introduces an implementation of a key encapsulation mechanism based on the quantum-resistant Module-Lattice-Based Key Encapsulation Mechanism (ML-KEM), standardized by NIST in FIPS 203. Key encapsulation mechanisms are used to secure symmetric keys over unsecured communication channels using public key cryptography.
ML-KEM is designed to withstand attacks from future quantum computers, which could break current algorithms like RSA and Diffie-Hellman using Shor's algorithm. While quantum computers capable of such attacks are still far off (requiring thousands of qubits, compared to today's ~64-qubit systems), the transition to quantum-resistant algorithms is urgent to protect against "harvest now, decrypt later" threats.
This proposal integrates ML-KEM into Java’s security APIs, thus providing implementations for:
- KeyPairGenerator
- KEM
- KeyFactory
And supporting three parameter sets:
- ML-KEM-512
- ML-KEM-768
- ML-KEM-1024
It also facilitates key generation and certificate signing via the keytool command.
This integration directly into the JDK enables a smooth adoption of quantum-resistant cryptography across all supported platforms, which will help future-proof our applications against quantum computing threats. This aligns with NIST's recommendation to transition to post-quantum algorithms within the next decade.
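A rough sketch of how this fits together with the existing KEM API from JEP 452; the algorithm and parameter-set names follow the JEP, but treat the details as illustrative:

import javax.crypto.KEM;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.spec.NamedParameterSpec;
import java.util.Arrays;

public class MlKemDemo {
    public static void main(String[] args) throws Exception {
        // The receiver generates an ML-KEM key pair
        KeyPairGenerator generator = KeyPairGenerator.getInstance("ML-KEM");
        generator.initialize(new NamedParameterSpec("ML-KEM-768"));
        KeyPair keyPair = generator.generateKeyPair();

        // The sender encapsulates a shared secret against the receiver's public key
        KEM kem = KEM.getInstance("ML-KEM");
        KEM.Encapsulator encapsulator = kem.newEncapsulator(keyPair.getPublic());
        KEM.Encapsulated encapsulated = encapsulator.encapsulate();
        SecretKey senderSecret = encapsulated.key();

        // The receiver recovers the same secret with the private key
        SecretKey receiverSecret = kem.newDecapsulator(keyPair.getPrivate())
                                      .decapsulate(encapsulated.encapsulation());

        System.out.println(Arrays.equals(senderSecret.getEncoded(),
                                         receiverSecret.getEncoded())); // true
    }
}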
JEP 497 introduces the Module-Lattice-Based Digital Signature Algorithm (ML-DSA), a quantum-resistant algorithm standardized by NIST in FIPS 204, to enhance the security of Java applications. It is designed to withstand attacks from future quantum computers, which could break current algorithms like RSA and Diffie-Hellman using Shor's algorithm. While quantum computers capable of such attacks are still far off (requiring thousands of qubits, compared to today's ~64-qubit systems), the transition to quantum-resistant algorithms is urgent to protect against "harvest now, decrypt later" threats.
This proposal integrates ML-DSA into Java's security APIs, thus providing implementations for:
- KeyPairGenerator
- Signature
- KeyFactory
And supporting three parameter sets:
- ML-DSA-44
- ML-DSA-65
- ML-DSA-87
This integration directly into the JDK enables a smooth adoption of quantum-resistant cryptography across all supported platforms, which will help future-proof our applications against quantum computing threats. This aligns with NIST's recommendation to transition to post-quantum algorithms within the next decade.
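A rough sketch of signing and verifying with the standard Signature API; the algorithm names follow the JEP, and the message is just a placeholder:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class MlDsaDemo {
    public static void main(String[] args) throws Exception {
        // Generate an ML-DSA key pair using the middle parameter set
        KeyPairGenerator generator = KeyPairGenerator.getInstance("ML-DSA-65");
        KeyPair keyPair = generator.generateKeyPair();

        byte[] message = "hello, post-quantum world".getBytes(StandardCharsets.UTF_8);

        // Sign with the private key
        Signature signer = Signature.getInstance("ML-DSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Verify with the public key
        Signature verifier = Signature.getInstance("ML-DSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println(verifier.verify(signature)); // true
    }
}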
The tooling ecosystem is evolving to better meet today’s deployment and image creation needs, helping developers address the demands of modern development workflows.
This enhancement reduces the JDK's size by roughly 25% by enabling the jlink tool to create custom run-time images without relying on the JDK's JMOD files. Note: this feature is not enabled by default, and not all vendors may choose to ship it; the JDK needs to be built with the --enable-linkable-runtime option, the resulting JDK omits JMOD files, and jlink can extract modules directly from the run-time image itself. This capability is particularly advantageous in cloud environments, where smaller container images improve deployment efficiency.
The jlink tool in such a JDK can consume modules from the run-time image, modular JARs, or JMOD files, preferring the latter if available. For example, creating a run-time image with only java.base and java.xml works as usual:
$ jlink --add-modules java.xml --output image
$ image/bin/java --list-modules
java.base@24
java.xml@24
The resulting image is about 60% smaller than a full JDK run-time image. For more complex cases, such as linking an application module (app) and its dependency (lib), the process remains straightforward:
$ jlink --module-path mlib --add-modules app --output app
However, there are some restrictions: jlink cannot create images containing itself (jdk.jlink), fails if user-editable configuration files (e.g., java.security) are modified, and does not support cross-linking or --patch-module.
Beyond the major enhancement proposals, Java 24 includes a variety of smaller updates, removals, and deprecations that further improve the language.
Some examples:
Additions:
- Unicode 16 support, which enables better handling of modern text and emojis
- New Java Flight Recorder events to further enhance observability
- A new MXBean to monitor and manage the virtual thread scheduler, complementing the enhancements from Project Loom
- Support for including security properties files, so security properties can more easily be managed in a centralized manner
Removals:
- Linux desktop GTK2 support, as all Linux distributions supported by JDK 24 provide GTK3 support
- JDK 1.1-compatible behavior for the "EST", "MST", and "HST" time zones; the appropriate zone IDs listed in ZoneId.SHORT_IDS should be used instead
Deprecations:
- The jstatd tool, to further reduce dependencies on Remote Method Invocation; the monitoring of local VMs using the Attach API isn't impacted
- The debugd subcommand of the jhsdb tool, as this remote debug server is neither widely used nor documented, and dependencies on Remote Method Invocation are being reduced as much as possible
- The jrunscript tool, as it is no longer functional given the removal of the JavaScript engine in Java 15 as part of JEP 372
Issues:
- Undefined type variables no longer result in null, but rather throw a TypeNotPresentException
- Single-line leading /// dangling comments will no longer trigger a warning
As always, I recommend checking out the release notes for the full list of changes.
Once again, an impressive list of enhancements has been delivered, and significant progress has been made on long-running projects like Loom.
We’re seeing some powerful new features such as Stream Gatherers which open up new possibilities for data processing, AOT class loading which may significantly reduce startup times for our applications, and quantum-resistant algorithms which help ensure Java remains secure in the face of future threats.
Java 24 offers improvements for both newcomers and seasoned developers, underscoring the language’s enduring relevance. The strong focus on security addresses today’s critical challenges, while reductions in resource consumption help lower Java’s ecological footprint and increase the cost-effectiveness for cloud and enterprise environments.
Given what we've seen in this release, and what's already in the works for Java 25, I'm certainly optimistic about the future!
You can find the latest version of this article, and code examples in my java24-demo repository.
General availability of Java 25 is planned for September 2025, and while at the time of writing there are no JEPs targeted at it yet, we can already make some guesses based upon the submitted candidates and drafts.
Some of the ones I hope and expect to see land are:
- JEP 495: Simple Source Files and Instance Main Methods (Fourth Preview), which would make the language more accessible to new developers.
- JEP 502: Stable Values (Preview), which would bring us immutable value holders that are initialized at most once, helping us move towards deferred immutability through StableValue and StableSupplier.
- JEP Draft 8340343: Structured Concurrency (Fifth Preview): structured concurrency has received quite a bit of feedback so far, so I hope to see it land, but we'll have to see. It will certainly help out when writing multithreaded applications, and it'll be nice to see Project Loom progress.
- JEP Draft 8326035: CDS Object Streaming, which proposes to add an object archiving mechanism for Class-Data Sharing (CDS) in the Z Garbage Collector (ZGC), enhancing the usage of the AOT cache from Project Leyden.
- JEP Draft 8300911: PEM API (Preview), which introduces an easy-to-use API for encoding and decoding the Privacy-Enhanced Mail (PEM) format, underscoring Java's commitment to enabling highly secure applications and addressing one of the pain points expressed in the Java Cryptographic Extensions Survey of April 2022.
While some of these are still in draft form and subject to change, they already give us a nice glimpse of what the architects are looking into. Furthermore, these changes help highlight Java's roadmap and continuous evolution. If you're as excited as I am about all these changes, and want to provide feedback, I highly recommend trying out the preview builds!
Some useful resources to dive deeper into the Java ecosystem, and stay up-to-date are:
- The release notes - The official source for all changes, including new features, bug fixes, and deprecations
- The Java version almanac - A great resource with details on distributions, and API differences between various releases
- Foojay - A magnificent Java community offering articles, tutorials, and discussions on the latest in the Java ecosystem
- SDKman! - a great tool to manage the installation of various tools and languages
- Inside Java - News updates by Java team members at Oracle
- Java Community Process - the place where people can propose, discuss, and approve new features through a Java Specification Request (JSR)