Matthew Tyson
Contributing writer

Project Loom: Understand the new Java concurrency model

Nov 15, 2023

Project Loom massively increases resource efficiency while preserving backward compatibility with Java threads. Here's a look at Loom and the roadmap ahead.

Loom is a newer project in the Java and JVM ecosystem. Hosted by OpenJDK, the Loom project addresses limitations in the traditional Java concurrency model. In particular, it offers a lighter alternative to threads, along with new language constructs for managing them. Already the most momentous portion of Loom, virtual threads are part of the JDK as of Java 21.

Read on for an overview of Project Loom and how it proposes to modernize Java concurrency.

Virtual threads in Java

Traditional Java concurrency is managed with the Thread class and the Runnable interface, as shown in Listing 1.

Listing 1. Launching a thread with traditional Java


Thread thread = new Thread("My Thread") {
    @Override
    public void run() {
        System.out.println("run by: " + getName());
    }
};
thread.start();
System.out.println(thread.getName());

Traditional Java concurrency is fairly easy to understand in simple cases, and Java offers a wealth of support for working with threads.

The downside is that Java threads are mapped directly to threads in the operating system (OS). This places a hard limit on the scalability of concurrent Java applications. Not only does each application thread consume a relatively heavyweight OS thread in a one-to-one relationship, but there is no mechanism for organizing threads for optimal arrangement. For instance, threads that work closely together may wind up scheduled on different processor cores, when they could benefit from sharing cached data on the same core.

To give you a sense of how ambitious the changes in Loom are, current Java threading, even on hefty servers, is counted in the thousands of threads at most. Loom moves this limit toward millions of threads. The implications for Java server scalability are breathtaking, since the standard thread-per-request model marries throughput to thread count.

The solution is to introduce some kind of virtual threading, where the Java thread is abstracted from the underlying OS thread, and the JVM can more effectively manage the relationship between the two. Project Loom sets out to do this by introducing a new VirtualThread class. Because virtual threads present the same API surface as conventional threads, migrating to them is easy.
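To get a feel for that scale, consider the sketch below, which uses the Executors.newVirtualThreadPerTaskExecutor() factory that shipped alongside virtual threads in Java 21 to give each task its own virtual thread. The task count and sleep duration are arbitrary values chosen for illustration:

import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadScaleDemo {
    public static void main(String[] args) {
        // Each submitted task gets its own virtual thread. Creating
        // 100,000 of them is cheap; 100,000 platform threads would not be.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 100_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1)); // blocking is cheap here
                    return i;
                }));
        } // the try-with-resources close() waits for all tasks to finish
    }
}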

Continuations and structured concurrency

Continuations are a low-level feature that underlies virtual threading. Essentially, a continuation allows the JVM to park and restart execution flow.

As the Project Loom proposal states:

The main technical mission in implementing continuations—and indeed, of this entire project—is adding to HotSpot the ability to capture, store, and resume callstacks not as part of kernel threads.

Another feature of Loom, structured concurrency, offers an alternative to thread semantics for concurrency. The main idea of structured concurrency is to give you synchronous-looking syntax to address asynchronous flows (something akin to JavaScript’s async and await keywords). This would be quite a boon to Java developers, making simple concurrent tasks easier to express.

If you were ever exposed to Quasar, which brought lightweight threading to Java via bytecode manipulation, you might remember the tech lead, Ron Pressler. Pressler, who now heads up Loom for Oracle, explained structured concurrency this way:

Structured concurrency is a paradigm that brings the principles of structured programming to concurrent code, and makes it easier to write concurrent code that cleanly deals with some of the thorniest chronic issues in concurrent programming: error handling and cancellation. In JDK 21, we delivered StructuredTaskScope, a preview API that brings structured programming to the JDK. Because virtual threads mean that every concurrent task in a program gets its own thread, virtual threads and StructuredTaskScope are a match made in heaven. In addition to making concurrent code easier to write correctly, StructuredTaskScope brings structured observation: a thread dump that captures the relationships among threads.
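To make Pressler's description concrete, here is a minimal sketch using StructuredTaskScope as it appears in JDK 21. Because it is a preview API, it requires the --enable-preview flag, and its shape may still change; the findUser and fetchOrder methods are hypothetical stand-ins for your own blocking calls:

import java.util.concurrent.StructuredTaskScope;

String handle() throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        // Each fork runs in its own virtual thread.
        StructuredTaskScope.Subtask<String> user  = scope.fork(this::findUser);
        StructuredTaskScope.Subtask<String> order = scope.fork(this::fetchOrder);

        scope.join()            // wait for both subtasks
             .throwIfFailed();  // propagate the first failure, if any

        return user.get() + " / " + order.get();
    } // leaving the scope cancels any subtasks still running
}

// Hypothetical stand-ins for real blocking work.
String findUser()   { return "user-42"; }
String fetchOrder() { return "order-7"; }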

Alternatives to virtual threads

Before looking more closely at Loom, let’s note that a variety of approaches have been proposed for concurrency in Java. In general, these amount to asynchronous programming models. Some, like CompletableFuture and non-blocking IO, work around the edges by improving the efficiency of thread usage. Others, like RxJava (the Java implementation of ReactiveX), are wholesale asynchronous alternatives.

Although RxJava is a powerful and potentially high-performance approach to concurrency, it has drawbacks. In particular, it is quite different from the conceptual models that Java developers have traditionally used. Also, RxJava can’t match the theoretical performance achievable by managing virtual threads at the virtual machine layer.
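That difference in conceptual model is easiest to see side by side. Below is a small sketch of the same work expressed first as an asynchronous CompletableFuture pipeline and then as straight-line blocking code on a virtual thread; fetchPrice is a hypothetical stand-in for a blocking lookup:

import java.util.concurrent.CompletableFuture;

public class StyleContrast {
    // Hypothetical stand-in for a blocking call such as a database read.
    static double fetchPrice(String sku) { return 10.0; }

    public static void main(String[] args) throws InterruptedException {
        // Asynchronous style: the flow is split across callback stages.
        CompletableFuture.supplyAsync(() -> fetchPrice("widget"))
                         .thenApply(price -> price * 1.2)
                         .thenAccept(total -> System.out.println("Total: " + total))
                         .join();

        // Blocking style: the same logic reads top to bottom, and a
        // virtual thread makes the blocking cheap.
        Thread t = Thread.startVirtualThread(() -> {
            double total = fetchPrice("widget") * 1.2;
            System.out.println("Total: " + total);
        });
        t.join();
    }
}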

Java’s new VirtualThread class

As mentioned, the new VirtualThread class represents a virtual thread. Under the hood, the JVM performs asynchronous acrobatics to multiplex many virtual threads onto a few OS threads. Why go to this trouble, instead of just adopting something like ReactiveX at the language level? The answer is twofold: to make concurrency easier for developers to understand, and to make it easier to migrate the universe of existing code. For example, data store drivers can be more easily transitioned to the new model.

A simple example of using virtual threads is shown in Listing 2. Notice it is very similar to existing Thread code. (This code snippet comes from Oracle’s introduction to Loom and virtual threads.)

Listing 2. Creating a virtual thread


Thread.startVirtualThread(
  () -> {
    System.out.println("Hello World");
  }
);

Beyond this very simple example is a wide range of considerations for scheduling. These mechanisms are not set in stone yet, and the Loom proposal gives a good overview of the ideas involved.

An important note about Loom’s virtual threads is that whatever changes are required to the entire Java system, they must not break existing code. Existing threading code will be fully compatible going forward. You can use virtual threads, but you don’t have to. Achieving this backward compatibility is a fairly Herculean task, and accounts for much of the time spent by the team working on Loom.
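That opt-in quality is visible in the API itself. In Java 21, the Thread.ofPlatform() and Thread.ofVirtual() builders produce threads with the same interface, so switching between them is a one-line decision. A minimal sketch (a fragment, with InterruptedException handling left to the caller):

Runnable task = () -> System.out.println("run by: " + Thread.currentThread());

// Existing behavior: a conventional platform (OS) thread.
Thread platform = Thread.ofPlatform().name("platform-worker").start(task);

// Opt-in behavior: a virtual thread, with the same Thread API.
Thread virtual = Thread.ofVirtual().name("virtual-worker").start(task);

platform.join();
virtual.join();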

Lower-level async with continuations

Now that we’ve seen virtual threads, let’s take a look at the continuations feature, which is still in development. Continuations underpin both virtual threads and structured concurrency in Loom. There is also talk of continuations eventually becoming a public API for developers to use. So, what is a continuation?

At a high level, a continuation is a representation in code of the execution flow in a program. In other words, a continuation allows the developer to manipulate the execution flow by calling functions. The Loom documentation offers the example in Listing 3, which provides a good mental picture of how continuations work.

Listing 3. Example of a continuation


foo() { // (2)
  ...
  bar()
  ...
}
bar() {
  ...
  suspend // (3)
  ... // (5)
}
main() {
  c = continuation(foo) // (0)
  c.continue() // (1)
  c.continue() // (4)
}

Consider the flow of execution as described by each commented number:

  • (0) A continuation is created, beginning at the foo function
  • (1) It passes control to the entry point of the continuation
  • (2) It executes until the next suspension point, which is at (3)
  • (3) It releases control back to the origination, at (1)
  • (4) The caller resumes the continuation by calling continue again, and flow returns to where it was suspended, at (5)
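There is no supported public API for continuations yet, but the JDK's internal implementation can make this flow tangible. The sketch below uses jdk.internal.vm.Continuation, an internal and unsupported class (compiling and running it requires --add-exports java.base/jdk.internal.vm=ALL-UNNAMED), so treat it purely as an illustration of the numbered steps above:

import jdk.internal.vm.Continuation;
import jdk.internal.vm.ContinuationScope;

public class ContinuationDemo {
    public static void main(String[] args) {
        ContinuationScope scope = new ContinuationScope("demo");
        Continuation c = new Continuation(scope, () -> {
            System.out.println("started");   // foo() begins executing
            Continuation.yield(scope);       // the suspend at (3)
            System.out.println("resumed");   // execution continues at (5)
        });
        c.run(); // (1) runs until the yield
        c.run(); // (4) resumes after the yield
        System.out.println("done: " + c.isDone());
    }
}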

Tail-call elimination

Another stated goal of Loom is tail-call elimination (also called tail-call optimization). This is a fairly esoteric element of the proposed system. The core idea is that the system will be able to avoid allocating new stacks for continuations wherever possible. In such cases, the amount of memory required to execute the continuation remains constant rather than continually growing, since each step in the process would otherwise require the previous stack to be saved and made available when the call stack is unwound.
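To see what a tail call is, consider the sketch below. The recursive call is the last action in the method, so in principle its stack frame could be reused rather than piled up; today's JVM does not perform this optimization, so a deep enough recursion still throws StackOverflowError:

// The recursive call is in tail position: nothing remains to be done
// in this frame after it returns, so the frame could be reused.
static long sumTo(long n, long acc) {
    if (n == 0) {
        return acc;
    }
    return sumTo(n - 1, acc + n); // tail call
}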

What’s next for Loom

Although there is already quite a lot to explore in what has been delivered by Loom, even more is planned. I asked Ron Pressler about the roadmap ahead:

In the short term, we’re working on fixing what is probably the biggest hurdle to a completely transparent adoption of virtual threads: pinning due to synchronized. Currently, inside synchronized blocks or methods, IO operations that would normally release the underlying OS thread block it instead. That is called pinning, and if it happens very frequently and for a long duration it can harm the scalability benefit of virtual threads. The workaround today is to identify those instances with observation tools in the JDK, and to replace them with java.util.concurrent locks, which don’t suffer from pinning. We’re working to stop synchronized from pinning so that this work won’t be needed. Additionally, we’re working on improving the efficiency of the scheduling of IO operations by virtual threads, improving their performance further.
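In practice, the workaround Pressler describes means replacing a synchronized block that wraps blocking IO with a java.util.concurrent.locks.ReentrantLock, which does not pin the carrier thread. A minimal sketch, where fetchRemote is a hypothetical stand-in for a blocking IO call:

import java.util.concurrent.locks.ReentrantLock;

private final ReentrantLock lock = new ReentrantLock();

// Before: synchronized (this) { return fetchRemote(); }
// The blocking IO inside the synchronized block pins the OS thread.

// After: with a ReentrantLock, the virtual thread can unmount while blocked.
String guardedFetch() {
    lock.lock();
    try {
        return fetchRemote(); // hypothetical blocking IO call
    } finally {
        lock.unlock();
    }
}

String fetchRemote() { return "response"; } // stand-in for real IO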

In the medium term we’d like to incorporate io_uring, where available, to offer scaling for filesystem operations in addition to networking operations. We also want to offer custom schedulers: Virtual threads are currently scheduled by a scheduler that’s a good fit for general-purpose servers, but more exotic uses may require other scheduling algorithms, so we’d like to support pluggable custom schedulers.

Further down the line, we would like to add channels (which are like blocking queues but with additional operations, such as explicit closing), and possibly generators, like in Python, that make it easy to write iterators.

Loom and the future of Java

Loom and Java in general are prominently devoted to building web applications. Obviously, Java is used in many other areas, and the ideas introduced by Loom may be useful in a variety of applications. It’s easy to see how massively increasing thread efficiency and dramatically reducing the resource requirements for handling multiple competing needs will result in greater throughput for servers. Better handling of requests and responses is a bottom-line win for a whole universe of existing and future Java applications.

Like any ambitious new project, Loom is not without challenges. Dealing with sophisticated interleaving of threads (virtual or otherwise) is always going to be complex, and we’ll have to wait to see exactly what library support and design patterns emerge to deal with Loom’s concurrency model.

It will be fascinating to watch as Project Loom’s remaining pieces land in the JDK and evolve in response to real-world use. As this plays out, and the advantages inherent in the new system are adopted into the infrastructure that developers rely on (think Java application servers like Jetty and Tomcat), we could witness a sea change in the Java ecosystem.

Already, Java and its primary server-side competitor Node.js are neck and neck in performance. An order-of-magnitude boost to Java performance in typical web application use cases could alter the landscape for years to come.