The Microprofile initiative was created as a place to innovate before contributing specifications to JakartaEE (formerly JavaEE).

In other words, the specifications are developed, implemented and validated before being contributed. It is quite a good idea by itself but it hits some issues today.

To dig into it, let's take some of the specifications and their flaws.

Microprofile Config

Microprofile Config is quite a good specification. Its main goal is to abstract the access (read) of the configuration and the way it is provisioned (written). In other words: your application gets the configuration values without knowing where they come from. It does have some more advanced features but that's the main goal.
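As an illustration, here is what that abstraction typically looks like for an application (a minimal sketch assuming a MicroProfile Config implementation is on the classpath; the `greeting.name` key is made up):

```java
import javax.inject.Inject;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;
import org.eclipse.microprofile.config.inject.ConfigProperty;

public class GreetingService {

    // CDI injection: the value can come from system properties,
    // environment variables or a microprofile-config.properties file,
    // the bean doesn't know and doesn't care
    @Inject
    @ConfigProperty(name = "greeting.name", defaultValue = "world")
    private String name;

    public String greet() {
        return "hello " + name;
    }

    // Programmatic lookup, usable outside CDI beans
    public static String lookup() {
        Config config = ConfigProvider.getConfig();
        return config.getOptionalValue("greeting.name", String.class).orElse("world");
    }
}
```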

Overall it is probably the best specification of the MP bucket but it has one issue: the "context" of the configuration is NOT based on CDI. This means that depending on your environment you can end up using a configuration from another context and therefore get quite some surprises with the values you read, if they are not global or if the config implementation is not well set up.

This pitfall has an explanation: being able to configure CDI extensions themselves. A CDI extension is code executed before the container is able to access beans, which means it requires an alternative way to look up a configuration. The choice made was based on classloaders, which works for wars and flat classpath applications but is easy to break in all other environments (OSGi, ears, custom setups, ...).
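To illustrate why a ClassLoader key is fragile, here is a minimal, hypothetical sketch of a registry keyed by classloader (not the actual implementation, just the mechanism): two loaders that were supposed to share a configuration silently see different values:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical sketch of a config registry keyed by ClassLoader,
// similar in spirit to the lookup strategy the specification picked
public class ClassLoaderKeyedConfig {
    private static final Map<ClassLoader, Properties> CONFIGS = new HashMap<>();

    public static void register(ClassLoader loader, Properties props) {
        CONFIGS.put(loader, props);
    }

    public static String get(ClassLoader loader, String key) {
        // No fallback across loaders: the "wrong" loader just misses the config
        return CONFIGS.getOrDefault(loader, new Properties()).getProperty(key);
    }

    public static void main(String[] args) {
        // Two "deployments" with their own loaders, as in an ear or OSGi setup
        ClassLoader app1 = new ClassLoader() {};
        ClassLoader app2 = new ClassLoader() {};

        Properties p1 = new Properties();
        p1.setProperty("db.url", "jdbc:postgresql://app1");
        register(app1, p1);

        System.out.println(get(app1, "db.url")); // prints jdbc:postgresql://app1
        System.out.println(get(app2, "db.url")); // prints null
    }
}
```

In a flat classpath there is a single loader so the lookup always succeeds; as soon as the loader hierarchy gets richer, the key no longer matches the caller's expectation.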

To defend config a bit, it is important to keep in mind that CDI doesn't have the notion of "eager beans", so there is no way to solve it properly. The cleanest alternative would be to use CDI extension events, which would allow communicating the Config instance to extensions through an event, but it means the BeforeBeanDiscovery phase couldn't use the config.

To summarize that pitfall: it is probably fine to keep an alternate lookup solution, but not hardcoding a ClassLoader as the key would probably have been better.

Microprofile Metrics

This one is pretty interesting: it is a plain copy of the Codahale/Dropwizard Metrics API. Exactly the same, with some light CDI integration to get automatic registration of metric instances and built-in interceptors for each of them.
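The CDI integration boils down to annotations like these (a sketch assuming an MP Metrics implementation is available; the metric names and business methods are made up):

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
public class OrderService {

    // The container registers a counter and wraps the method in an
    // interceptor incrementing it on each invocation
    @Counted(name = "orders", monotonic = true)
    public void placeOrder() {
        // business logic
    }

    // Same idea with a timer aggregating call durations
    @Timed(name = "orderTimer")
    public void processOrder() {
        // business logic
    }
}
```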

By itself it is fine, but it is important to step back for a minute and realize that this is a monolith solution, whereas Microprofile as a platform is about cloud and microservices (I'll detail that in the last part).

This means you want more tracking: not aggregated data, but events you can aggregate in another system like ELK or Splunk.

We can also be surprised that the Prometheus format has been integrated on the /metrics endpoint but no others. Rephrased: why integrate this format into the platform, privileging one vendor and not the others?

Microprofile OpenAPI

OpenAPI is the standardization initiative of Swagger. If you missed that part of API design, it allows you to define in JSON or YAML the specification of your web services (REST; it doesn't fit RPC/SOAP-like services well). Then you often add a nice web UI on top of it to test your web services online. Restlet has a very good solution and a Chrome plugin on that topic if you are interested.

What the specification did was to:

  • define a builder API letting you create an OpenAPI model programmatically
  • copy all that API as annotations to let you describe your specification inline ("à la EE")
  • make the container automatically find services and expose them on the /openapi endpoint in JSON and YAML
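The builder side looks roughly like this (a sketch against the MP OpenAPI `OASFactory`; the title and version are made up):

```java
import org.eclipse.microprofile.openapi.OASFactory;
import org.eclipse.microprofile.openapi.models.OpenAPI;
import org.eclipse.microprofile.openapi.models.info.Info;

public class UserApiModel {
    public static OpenAPI build() {
        // Every model element is created through the factory and chained fluently
        return OASFactory.createObject(OpenAPI.class)
                .info(OASFactory.createObject(Info.class)
                        .title("User API")
                        .version("1.0"));
    }
}
```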

However this specification forgot some very basic points which are key to making it usable for end users:

  • Microprofile's built-in serialization is JSON, so supporting YAML should be optional and a potential vendor extension; otherwise it breaks the platform's consistency. You can object that the OpenAPI initiative supports YAML first, but this is not a strong point, since YAML->JSON conversion is trivial and supported by most tools, and since the Microprofile specification is not about design (YAML) but only runtime (so JSON).
  • The builder model is very close to a POJO (with getters/setters); however, using the EE stack (JSON-B) you can't serialize it in a valid OpenAPI format. This means that if you use it anywhere other than through the /openapi endpoint, you don't have a valid payload and must reimplement a serializer the platform already has. One solution is to add a (de)serializer bean, but a far stronger and easier solution for end users would be to provide a built-in JSON-B mapping in the model. Also note that the builder uses interfaces... whereas it is almost just a plain POJO-backed implementation, so there is no real advantage here (no vendor logic to plug in, so the abstraction is pointless).
  • Having both annotations and the builder API is not that great for the end user because it requires learning two APIs instead of one. Technically it is trivial to merge them: drop the annotations and just fire a CDI event with the OpenAPI instance and the CDI Annotated model. This way the user can enrich the specification metadata at startup and doesn't need to rely on two sets of APIs. The other advantage of dropping the annotations is that the code becomes readable (check JAXRSApp to understand what I mean).
  • OpenAPI uses "references" in a lot of places to avoid defining the exact same definition/model N times. For example a User payload schema will be referenced from all the endpoints of the UserResource. This is great, and the builder (actually a fluent model) API must use that to ensure the serialization matches the user requirement, but the annotation modelling doesn't need it at all... however it was copied 1-1, so you have @Schema(ref="/user") in the API. This goes against CDI, which would rather define an @OpenAPIBinding, a kind of CDI stereotype allowing you to define a metamodel and use it directly. You would therefore get a @UserSchema decorated with the binding annotation and the schema one (not a reference but the actual schema), and the serialization would replace the references as usual (this process is generally called canonicalization or normalization of the model).
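The event-based alternative sketched above could look like this (purely hypothetical: this observer does not exist in the specification, it only illustrates the idea of replacing the annotations with a single CDI entry point):

```java
import javax.enterprise.event.Observes;
import org.eclipse.microprofile.openapi.OASFactory;
import org.eclipse.microprofile.openapi.models.OpenAPI;
import org.eclipse.microprofile.openapi.models.info.Info;

// Hypothetical: the container would fire the built OpenAPI model at startup
// and the application would enrich it, with no annotation scanning needed
public class OpenAPICustomizer {
    void enrich(@Observes OpenAPI api) {
        api.info(OASFactory.createObject(Info.class)
                .title("User API")
                .description("Documented at startup through a single API"));
    }
}
```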

Finally the Microprofile specification allows you to provide an already built OpenAPI file. After realizing it is quite pointless, because a plain servlet or JAX-RS endpoint supports that without anything new, you will be surprised that it is specified to be put in META-INF. Said like that it is not shocking, because we are thinking about the classpath, but for a war it is really the META-INF of the war, not WEB-INF/classes/META-INF; the war's META-INF is a vendor-specific location and is not specified as being portable. Some servers can even forbid writing into it, or use it for server-specific descriptors.
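Concretely, the location in question is the archive's own META-INF, not the classpath one applications would expect:

```
myapp.war
├── META-INF
│   └── openapi.yaml        <- where the specification puts the static file
└── WEB-INF
    └── classes
        └── META-INF        <- the classpath META-INF, NOT the specified location
```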

Microprofile OpenTracing

OpenTracing is the initiative standardizing Zipkin and friends. The goal is to be able to track a "business transaction" end to end. Concretely it means the first caller creates a transaction identifier and propagates it to the next services (a remote service or a database, for instance), and each call appends a span ("something done somewhere") which contains the timing information and some metadata about the current context.
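In code, the mechanics described above look like this with the opentracing.io API (a sketch: the Tracer instance comes from the implementation, Jaeger or a Zipkin bridge for example, and the operation names are made up):

```java
import io.opentracing.Span;
import io.opentracing.Tracer;

public class UserService {
    private final Tracer tracer; // provided by the tracing implementation

    public UserService(Tracer tracer) {
        this.tracer = tracer;
    }

    public void findUser(String id) {
        // Each unit of work becomes a span attached to the current trace
        Span span = tracer.buildSpan("find-user").start();
        try {
            span.setTag("user.id", id); // metadata about the current context
            // call the database or a remote service here
        } finally {
            span.finish(); // records the timing information
        }
    }
}
```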

This is a very important specification for the cloud and it is not badly done. The only regret you will have when trying to embrace Microprofile is that there is no link with the actual monitoring. In other words, there is no global event solution which would integrate with OpenTracing, instead of just importing the opentracing.io API and integrating it with JAX-RS and CDI.

Microprofile REST Client

Just mentioning it for completeness, but this one is pretty good. In short, it allows you to use a proxy-based approach to create HTTP clients on top of the JAX-RS Client.
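For illustration, the proxy approach looks like this (a sketch assuming an MP REST Client implementation; the endpoint URL and resource are made up):

```java
import java.net.URL;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.eclipse.microprofile.rest.client.RestClientBuilder;

public class UserClientFactory {

    // The interface mirrors the remote JAX-RS resource
    @Path("/users")
    public interface UserClient {
        @GET
        @Path("/{id}")
        String findById(@PathParam("id") String id);
    }

    // The builder generates an implementation issuing the HTTP calls
    public static UserClient create() throws Exception {
        return RestClientBuilder.newBuilder()
                .baseUrl(new URL("http://example.com/api")) // hypothetical endpoint
                .build(UserClient.class);
    }
}
```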

Conclusion

The previous parts covered Microprofile 1.3. The overall goal of this post is to highlight that the specifications are not yet ready to hit JakartaEE, since they don't think about the platform and therefore (transitively) about the users.

Microprofile's core is CDI and JSON; therefore all specifications must embrace the paradigm of these two and not just import what has been done elsewhere.

The next key aspect of Microprofile is the cloud, and here we can regret that Microprofile Health, Metrics and OpenTracing have not been merged into a CDI bus-like solution which would provide a built-in monitoring backbone for the whole platform. Then events could be wired to ELK, Splunk, Zipkin/Jaeger, or turned into aggregates to get metrics back. But this is a sink/collector concern and not really an API concern.

Finally, the fact that specifications sometimes tend to import a technical stack which is not natural in the platform is worrying too, since it makes the platform inconsistent and doesn't bring real features to end users, only the promise of potential library conflicts or headaches in their interaction with the platform.

However, there is still a lot of hope for Microprofile, in particular with the upcoming reactive specification which was created recently: we need some work on that topic to make JakartaEE fully embrace the cloud environment, avoid some boilerplate in our applications, and make them more robust. However it must really be approached as a transversal specification used by all the others, and integrated as deep as JAX-RS and the Servlet API (and, to go a bit outside Microprofile's natural stack, even JPA, since JDBC drivers can now be asynchronous!).

My wish would be a stronger transversal management of Microprofile as a platform, instead of isolating each specification from the others, which leads to the current state and to duplicated configuration and tuning on the user side (in particular for monitoring). A strong JavaEE background is probably required to do that, but there is still hope it gets rethought before hitting JakartaEE, so don't hesitate to report the issues you see on the bug trackers or the Microprofile lists if you want it to converge to something better.
