Compared to EJB, CDI still lacks an @Asynchronous API. However, since CDI 2.0 there is a way to implement it without any external dependency or custom threading strategy (a custom ExecutorService), which would break the CDI integration with the container (you would lose security context inheritance, etc.).

CDI 2.0 didn't bring @Asynchronous, but it did bring asynchronous events. With them you can implement any asynchronous processing through an event whose firing returns a handle (a CompletionStage):

@ApplicationScoped
public class AsciidoctorService {
    @Inject // <1>
    private Event<PreComputeEvent> event;

    // <2>
    private static class PreComputeEvent {
        private LongStateToCompute value;
    }

    // <3>
    public CompletableFuture<LongStateToCompute> doAsync() {
        return event
            .fireAsync(new PreComputeEvent())
            .thenApply(i -> i.value)
            .toCompletableFuture();
    }

    // <4>
    void compute(@ObservesAsync final PreComputeEvent toCompute) {
        toCompute.value = someLongComputingToImplementSynchronouslyOrNot();
    }
}
  • We need an event, so we inject it to start.
  • To make it work we need a typed event holding our computation; it is just a wrapper around the returned type.
  • Instead of returning a Future<T> as in the EJB specification, we return a CompletableFuture, the more powerful and reactive Java 8 addition. In the implementation we fire our event with fireAsync to make it asynchronous and unwrap the event to get the actual result. Finally we convert it to a CompletableFuture - generally a no-op for CDI - to expose a more powerful API than CompletionStage, which can't be used as a Future whereas CompletableFuture can.
  • Finally, to make it work, we write an asynchronous observer which executes the computation asynchronously (in another "thread").

When it comes to asynchronous computing in applications, there is one particular case worth discussing: slow initializations at startup. If you have a resource which is slow to compute but which you can cache, you trigger its initialization at startup and then cache its value (or values if you want a small pool).

To do that with CDI you can observe when the application scope is initialized and trigger 1 (or N) event(s) to fill this cache. In this case the Event can even be injected as a parameter of the initialization observer - a synchronous observer triggering an asynchronous one ;):

@ApplicationScoped
public class AsciidoctorService {
    private CompletableFuture<LongStateToCompute> preComputedState;

    void init(@Observes @Initialized(ApplicationScoped.class) final ServletContext onStart,
              final Event<PreComputeEvent> initEvent) {
        preComputedState = initEvent
            .fireAsync(new PreComputeEvent())
            .thenApply(i -> i.value)
            .toCompletableFuture();
    }
}

We observe when the application context is initialized - generally at application deployment - then trigger the asynchronous processing and keep a reference to the result.

Tip: if you want N instances, mix it with IntStream ;).
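A minimal sketch of that tip, using IntStream to fan out N asynchronous initializations. To keep it runnable standalone, a plain CompletableFuture.supplyAsync stands in for the CDI fireAsync call; PoolInit and computeAsync are illustrative names, not part of the CDI API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PoolInit {
    // stand-in for the CDI call: in the real service each entry would come from
    // initEvent.fireAsync(new PreComputeEvent()).thenApply(i -> i.value)
    static CompletableFuture<String> computeAsync(final int id) {
        return CompletableFuture.supplyAsync(() -> "instance-" + id);
    }

    public static void main(final String[] args) {
        // fan out N asynchronous initializations and keep all the handles
        final List<CompletableFuture<String>> pool = IntStream.range(0, 4)
                .mapToObj(PoolInit::computeAsync)
                .collect(Collectors.toList());
        pool.forEach(f -> System.out.println(f.join()));
    }
}
```

The list of futures is your "pool": each entry can later be consumed by blocking or by chaining, exactly as for the single-instance case.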

Then you have two options depending on how your application is developed. The first one is to encapsulate the asynchronous computation and simply block when it is used:

public LongStateToCompute syncUsage() {
    try {
        return preComputedState.get();
    } catch (final InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IllegalStateException(e);
    } catch (final ExecutionException e) {
        throw new IllegalStateException(e.getCause());
    }
}

This is very useful when you initialize a pool or an in-memory cache of instances which are slow to create. For instance, if you use asciidoctorj to render some asciidoc documents, you will create an instance at startup and cache it, to avoid creating it at runtime which is very slow and costly. This means that the get() call will generally not wait, except maybe for the first calls if they occur very shortly after the application startup.

However, if you developed your application in a reactive way, you can expose the CompletionStage directly and just let it be chained. In this case you return preComputedState directly and let the caller(s) manage the exception handling. With some frameworks you can let it propagate all the way up to the front layer (JAX-RS for instance)!
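The reactive variant can be sketched like this - again with a plain CompletableFuture standing in for the service's precomputed state so the snippet runs standalone; ReactiveUsage and the HTML strings are purely illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class ReactiveUsage {
    // stand-in for the service's precomputed state
    static final CompletableFuture<String> preComputedState =
            CompletableFuture.supplyAsync(() -> "rendered");

    // reactive variant: expose the stage directly instead of blocking on get()
    static CompletionStage<String> reactiveUsage() {
        return preComputedState;
    }

    public static void main(final String[] args) {
        // the caller chains onto the stage and owns the error handling
        reactiveUsage()
                .thenApply(value -> "<html>" + value + "</html>")
                .exceptionally(error -> "<html>failed</html>")
                .thenAccept(System.out::println)
                .toCompletableFuture()
                .join();
    }
}
```

Note that no thread blocks until the final join(), which is only there to keep the demo alive; in a container the framework would subscribe to the stage instead.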

This solution is simple and quite efficient, but it has one drawback: it uses the default asynchronous threading. In most containers this is the common ForkJoin pool, which means you may need to configure it globally on the JVM to tune its size, and you can't segregate the different asynchronous services of your application.

To solve that and own the threading model, you can pass to fireAsync some NotificationOptions with an executor you get from somewhere else - such as the EE Concurrency Utilities in Java EE, or a custom thread pool you configured yourself if you run standalone:

event.fireAsync(myEvent, NotificationOptions.ofExecutor(getExecutor()));
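Here is a runnable sketch of owning that thread pool. Since the CDI container is out of scope, CompletableFuture.supplyAsync(…, executor) stands in for the fireAsync call with NotificationOptions; the pool name and OwnThreading class are illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OwnThreading {
    public static void main(final String[] args) {
        // dedicated pool for this service; in Java EE you would rather inject
        // a ManagedExecutorService to keep the container context propagation
        final ExecutorService executor = Executors.newFixedThreadPool(2,
                r -> new Thread(r, "asciidoctor-pool"));
        try {
            // stand-in for event.fireAsync(myEvent, NotificationOptions.ofExecutor(executor)):
            // the supplied executor is the one running the asynchronous work
            final CompletableFuture<String> threadName = CompletableFuture
                    .supplyAsync(() -> Thread.currentThread().getName(), executor);
            System.out.println(threadName.join());
        } finally {
            executor.shutdown();
        }
    }
}
```

Printing the thread name shows the work really runs on your pool rather than on the common ForkJoin pool, which is the whole point of passing the executor.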

With all these tricks, no more excuse to pay any initialization cost at runtime in a CDI application ;).
