The Problem with Annotation Processors

Matthias Ngeo (Pante)
Mar 19, 2021 · 10 min read


For reasons unknown, broaching the subject of annotation processors seems to elicit some primordial fear in developers. People tend to associate annotation processing with borderline witchcraft and sorcery, performable only by the most adept of basement wizards. It doesn’t have to be that way. Annotation processing doesn’t have to be the big scary monster hiding under the bed.

[Image: a monster under the bed, taken from https://sourcesofinsight.com/monsters-under-the-bed/]

No doubt, problems with annotation processing do exist, but so do solutions to those problems. One problem stands out in particular: the difficulty of unit testing annotation processors. It is a problem that Elementary, a suite of JUnit 5 extensions, solves.

What’s this Annotation Processing Thingamajig?

For the uninitiated, an annotation processor is similar to a compiler plug-in. Like its namesake, it can be called by the compiler to process annotations, e.g. @Nullable, during compilation. What that processing entails covers an extremely broad and vague expanse: everything from simple value validation to a full-blown pluggable type system like the Checker Framework, from a simple @Builder annotation to full-blown dependency injection via code generation like Dagger.
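
To give a feel for its shape, here is a minimal, hypothetical processor that does nothing but report every element annotated with a made-up @Nullable annotation; the annotation’s package is an assumption:

    import java.util.Set;

    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    // A deliberately tiny processor: the compiler instantiates it, calls init(...) with the
    // processing environment, and then calls process(...) once per round with the elements
    // annotated by the supported annotations.
    @SupportedAnnotationTypes("com.example.Nullable") // hypothetical annotation
    @SupportedSourceVersion(SourceVersion.RELEASE_11)
    public class NullableProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment round) {
            for (TypeElement annotation : annotations) {
                for (Element element : round.getElementsAnnotatedWith(annotation)) {
                    // Diagnostics are reported through the environment's Messager, not System.out.
                    processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE, "Found an element annotated with @Nullable", element);
                }
            }
            return false; // don't claim the annotation; other processors may still want it
        }
    }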

Post Java 9, the annotation processing API resides inside the java.compiler module. Inside an annotation processor lies the fabled domain of Elements and TypeMirrors, abstract syntax tree (AST) representations of the Java language and counterparts to the reflection framework found in Javaland. Elements represent syntactical constructs such as packages, classes and methods, while TypeMirrors represent, well, types, such as reference types (classes), arrays and primitives. But we digress.
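
To make the distinction a little more concrete, here is a small, hypothetical helper built only on the standard javax.lang.model API; it crosses from the element side (a declaration) to the type side (a use of that declaration):

    import javax.annotation.processing.ProcessingEnvironment;
    import javax.lang.model.element.TypeElement;
    import javax.lang.model.type.TypeMirror;
    import javax.lang.model.util.Elements;
    import javax.lang.model.util.Types;

    // Hypothetical helper contrasting the two halves of the language model:
    // Elements model declarations, TypeMirrors model uses of types.
    class LanguageModelExample {

        static boolean isCharSequence(ProcessingEnvironment environment, TypeMirror type) {
            Elements elements = environment.getElementUtils();
            Types types = environment.getTypeUtils();

            // getTypeElement(...) returns an Element: the declaration of java.lang.CharSequence.
            TypeElement charSequence = elements.getTypeElement("java.lang.CharSequence");

            // asType() crosses over to the type side: the TypeMirror for that declaration.
            // Types then answers questions about types, such as assignability.
            return types.isAssignable(type, charSequence.asType());
        }
    }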

Why So Difficult?

So what makes testing annotation processors so difficult? In our opinion, everything about the annotation processing environment. We’re not claiming that the environment is some evil, grotesque being; it’s actually surprisingly well-designed. The problem lies squarely with the unavailability of the environment outside the compiler. Without its environment, testing an annotation processor is a lost cause.
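
Consider, for example, a hypothetical check that verifies an annotated method returns a java.util.Collection; nearly every line below reaches for something handed out by the environment:

    import javax.annotation.processing.ProcessingEnvironment;
    import javax.lang.model.element.ExecutableElement;
    import javax.lang.model.util.Elements;
    import javax.lang.model.util.Types;
    import javax.tools.Diagnostic;

    // Hypothetical check: does an annotated method return some kind of java.util.Collection?
    class ReturnsCollectionCheck {

        private final Elements elements;   // from the environment
        private final Types types;         // from the environment
        private final ProcessingEnvironment environment;

        ReturnsCollectionCheck(ProcessingEnvironment environment) {
            this.environment = environment;
            this.elements = environment.getElementUtils();
            this.types = environment.getTypeUtils();
        }

        boolean check(ExecutableElement method) {
            // Looking up java.util.Collection, erasing generics and testing subtyping all go
            // through utilities that only the environment can provide.
            var collection = types.erasure(elements.getTypeElement("java.util.Collection").asType());
            if (!types.isSubtype(types.erasure(method.getReturnType()), collection)) {
                // Even reporting the error needs the environment's Messager.
                environment.getMessager().printMessage(
                    Diagnostic.Kind.ERROR, "Method must return a java.util.Collection", method);
                return false;
            }
            return true;
        }
    }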

A good drinking game is taking a shot for each method call in an annotation processor that requires an annotation processing environment.

Pretty much everything requires an annotation processing environment as illustrated above.

At this juncture, we had four options for getting out of this pickle:

  • Don’t bother with unit testing
  • Wait for something, anything to happen
  • Mock/re-implement the annotation processing environment
  • Smuggle the annotation processing environment out of the compiler

To keep a long story short, we ended up becoming smugglers.

Smuggler’s Discovery

While trawling the web, we discovered Google’s compile-testing project, a hidden gem buried beneath the swathes of GitHub projects. Through some clever hacks, the project manages to provide an annotation processing environment for unit tests, albeit a slightly lackluster and limited one. Exploring the project, it became obvious that it wasn’t the panacea we had hoped for. It suffered from a few limitations that we weren’t able to stomach:

  • Supports only JUnit 4. The annotation processing environment is only available through a JUnit rule, something that is no longer supported in JUnit 5. We have been using JUnit 5 for the longest time and don’t intend to downgrade anytime soon.
  • The utilities for working with the annotation processing environment are limited. They work, but they could be significantly more ergonomic.
  • Inability to traverse the Elements and TypeMirrors of compiled files in a test. This is essential to allow compiled files to be used as test cases.
  • Scope limitation of the annotation processing environment. The annotation processing environment is limited to the scope of a test method. This is inconvenient as initialization of test state cannot be shared between multiple tests. Furthermore, the design lends itself to unexpected behaviour.

This isn’t to say that the project is bad, just that our objectives are different. In fact, some parts of Elementary are based on compile-testing. As its name implies, compile-testing focuses on testing the compilation of code, not annotation processing. That’s not our objective. Our objective is to simplify the unit testing of annotation processors. Thus, after a healthy dose of “Hold my beer” and Not Invented Here syndrome, the Elementary project was conceived.

Elementary, My Dear Watson

With compile-testing as a foundation, we embarked on a quest to bring Elementary to life. Starting with a clean slate blessed us with the freedom to make decisions that would otherwise incite an angry mob with pitchforks and torches:

  • Support only Java 11 & above. The module system in Java 9 introduced some breaking changes to the jdk.compiler module and ClassLoaders. We don't want to deal with that.
  • Support only JUnit 5. We do not want to support a JUnit 4 equivalent that we do not use.

Our experience working on the Chimera code generation tool told us that tests for annotation processors fall into the classic black-box and white-box testing categories. For small and/or simple annotation processors, it is more efficient to invoke the annotation processor inside a compiler against sample Java source files. As the complexity and size of an annotation processor increase, running it against sample files yields diminishing returns; it becomes far less tedious to isolate and test the individual logical components. Two different categories, with two completely different sets of requirements.

Box of Fun Things

Black-box testing annotation processors can be fun. It doesn’t have to be a myriad of set-up, tear-down and configuration. Not according to JavacExtension, at least. For each test, JavacExtension compiles a suite of test cases with the given annotation processor(s). The results of the compilation are then funneled to the test method for subsequent assertions. All configuration is handled via annotations, with no additional set-up or tear-down required.

They say seeing is believing so let’s get on with the seeing.

Our imaginary annotation processor is fairly straightforward. All it does is check whether an element that is annotated with @Case is also a String field. If an annotated element isn’t a String field, an error message is printed. Since it’s that straightforward, black-box testing our annotation processor is enough.
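
Sketched out, it might look something like the following; this is a simplified sketch, and the @Case annotation’s package and the exact error message are assumptions:

    import java.util.Set;

    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.ElementKind;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    // Sketch of the imaginary processor: every element annotated with @Case must be a String field.
    @SupportedAnnotationTypes("com.example.Case") // assumed location of @Case
    @SupportedSourceVersion(SourceVersion.RELEASE_11)
    public class CaseProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment round) {
            var string = processingEnv.getElementUtils().getTypeElement("java.lang.String").asType();
            for (TypeElement annotation : annotations) {
                for (Element element : round.getElementsAnnotatedWith(annotation)) {
                    var isStringField = element.getKind() == ElementKind.FIELD
                                     && processingEnv.getTypeUtils().isSameType(element.asType(), string);
                    if (!isStringField) {
                        processingEnv.getMessager().printMessage(
                            Diagnostic.Kind.ERROR, "Element annotated with @Case must be a String field", element);
                    }
                }
            }
            return false;
        }
    }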

Testing our imaginary annotation processor isn’t too difficult either. All we need to do is to sprinkle a few annotations on the test class, create some test cases, check the compilation results, and Voila! We’re done.
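
A reconstruction of that snippet could look roughly like this; Elementary’s package names, the test cases’ packages and the placeholder assertions are assumptions based on the breakdown below:

    import com.karuslabs.elementary.Results;
    import com.karuslabs.elementary.junit.JavacExtension;
    import com.karuslabs.elementary.junit.annotations.Classpath;
    import com.karuslabs.elementary.junit.annotations.Options;
    import com.karuslabs.elementary.junit.annotations.Processors;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;

    import static org.junit.jupiter.api.Assertions.assertNotNull;

    @ExtendWith(JavacExtension.class)
    @Options("-Werror")                       // compiler flags for every test in this class
    @Processors(CaseProcessor.class)          // the imaginary processor under test
    @Classpath("com.example.cases.ValidCase") // compiled for every test in this class
    class CaseProcessorTest {

        @Test
        void process_string_field(Results results) {
            // Only ValidCase was compiled; no errors are expected. Real assertions would
            // inspect the diagnostics carried by Results (exact accessors omitted here).
            assertNotNull(results);
        }

        @Test
        @Classpath("com.example.cases.InvalidCase") // compiled in addition to ValidCase, for this test only
        void process_int_field(Results results) {
            // Both ValidCase and InvalidCase were compiled; an error is expected for InvalidCase.
            assertNotNull(results);
        }
    }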

Let’s break down the code snippet.

  • By annotating the test class with @Options, we can specify the compiler flags used when compiling the test cases. In this snippet, -Werror indicates that all warnings will be treated as errors.
  • To specify which annotation processor(s) are to be invoked with the compiler, we can annotate the test class with @Processors. No prizes for correctly guessing which annotation processor is used in this snippet.
  • Test cases can be included for compilation by annotating the test class with either @Classpath or @Inline. Java source files on the classpath can be included using @Classpath, while strings inside @Inline are transformed into an inline source file for compilation. In this snippet, both ValidCase and InvalidCase are included for compilation (both are sketched after this list).
  • An annotation’s scope is tied to its target’s scope. If a test class is annotated, the annotation will be applied for all test methods in that class. On the same note, an annotation on a test method will only be applied on said method.
  • Results represents the results of a compilation. We can declare Results as a parameter of a test method to obtain the compilation results. In this snippet, process_string_field(...) will receive the results for only ValidCase, while process_int_field(...) will receive the results for both ValidCase and InvalidCase.
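
For completeness, the two test cases referenced above could be as simple as the following; their contents are hypothetical, inferred from the rule the processor enforces, and the packages are assumptions:

    // ValidCase.java: satisfies the rule, @Case is applied to a String field.
    package com.example.cases;

    import com.example.Case;

    class ValidCase {
        @Case String field = "valid";
    }

    // InvalidCase.java: violates the rule, @Case is applied to an int field,
    // so the processor is expected to report an error during compilation.
    package com.example.cases;

    import com.example.Case;

    class InvalidCase {
        @Case int field = 0;
    }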

Pandora’s Box

This is where things become really interesting. White-box testing isn’t as simple as invoking an annotation processor, since the possibilities of what a test is trying to prove are unlimited. In a black-box test, we only need to prove that the compilation results of a known annotation processor against a fixed set of files match certain criteria. In a white-box test, by contrast, we do not know why, what and how a component is being tested. The best we can do is make the annotation processing environment accessible inside the test class.

“It can’t be that difficult to allow class scoped annotation processing environments, compile-testing already does that.”

We too initially felt that way, and boy, were we wrong. While compile-testing does provide an annotation processing environment, it is limited to the scope of a test method. Not being able to access said environment outside of methods means repetitive and verbose initialization code, which blows. Sadly, we couldn’t just tweak compile-testing’s trick either, as it turned out to be incompatible with our objective.

The secret sauce behind compile-testing is actually pretty straightforward. Each test method is intercepted by a JUnit rule and wrapped in an annotation processor that invokes the method during processing. The test is subsequently executed inside a compiler that the JUnit rule invokes. Unfortunately, with this technique, an annotation processing environment is available only while a test method is executing. Nor is it possible to tweak the technique to intercept the creation of a test instance and inject the instance into an annotation processor, due to the constraints of the JUnit lifecycle.

A great deal of time spent at the drawing board later, we succeeded in creating the ToolsExtension. This extension exploits the fact that a test only needs access to an annotation processing environment; tests don’t need to be executed inside an annotation processor. Once we established that, our trick was to run a compiler with a blocking annotation processor on a daemon thread before each test instance was created. With compilation suspended inside the processor, the environment is made accessible to the test instance on the main thread. Only after all tests have been executed does compilation resume.
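
In code, the gist of the trick could look something like this; it is a deliberately simplified sketch of the idea, not Elementary’s actual implementation:

    import java.util.Set;
    import java.util.concurrent.CountDownLatch;

    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.ProcessingEnvironment;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.lang.model.element.TypeElement;

    // Simplified sketch: a processor that hands its environment to the test thread and
    // then parks, keeping the compiler (and hence the environment) alive.
    @SupportedAnnotationTypes("*")
    class BlockingProcessor extends AbstractProcessor {

        final CountDownLatch initialized = new CountDownLatch(1); // signalled once the environment is ready
        final CountDownLatch completed = new CountDownLatch(1);   // released after the last test has run
        volatile ProcessingEnvironment environment;

        @Override
        public synchronized void init(ProcessingEnvironment environment) {
            super.init(environment);
            this.environment = environment;
            initialized.countDown(); // the test thread may now use the environment
        }

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment round) {
            try {
                completed.await(); // suspend compilation until all tests have been executed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return false;
        }
    }

The extension would then start javac with this processor on a daemon thread, await initialized before creating the test instance, hand environment to it, and count down completed once the last test finishes.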

[Figure: a poorly drawn MS Paint diagram illustrating the entire process]

Let’s pretend that the imaginary processor we described in Box of Fun Things has grown in scope and size, and was consequently refactored into multiple components, one of which checks whether an element is a String field, just like the original annotation processor did.
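
For the sake of the example, assume the extracted component looks something like the class below; the name Lint and its constructor are ours, not Elementary’s:

    import javax.lang.model.element.Element;
    import javax.lang.model.element.ElementKind;
    import javax.lang.model.type.TypeMirror;
    import javax.lang.model.util.Elements;
    import javax.lang.model.util.Types;

    // Hypothetical component carved out of the imaginary processor: it only answers
    // whether an element is a String field, and leaves the error reporting to its caller.
    class Lint {

        private final Types types;
        private final TypeMirror string;

        Lint(Elements elements, Types types) {
            this.types = types;
            this.string = elements.getTypeElement("java.lang.String").asType();
        }

        boolean lint(Element element) {
            return element.getKind() == ElementKind.FIELD
                && types.isSameType(element.asType(), string);
        }
    }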

Using the ToolsExtension to test this component yields the following code snippet:
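
A rough reconstruction follows; Elementary’s package names, the attributes of @Inline, the static accessors on Tools and the lookup method on Cases are all assumptions drawn from the breakdown below:

    import com.karuslabs.elementary.junit.Cases;
    import com.karuslabs.elementary.junit.Tools;
    import com.karuslabs.elementary.junit.ToolsExtension;
    import com.karuslabs.elementary.junit.annotations.Inline;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    @ExtendWith(ToolsExtension.class)
    @Inline(name = "com.example.Samples", source = {
        "package com.example;",
        "",
        "import com.karuslabs.elementary.junit.annotations.Case;",
        "",
        "class Samples {",
        "    @Case(\"string_field\") String first;",
        "    @Case(\"int_field\") int second;",
        "}"
    })
    class LintTest {

        // The article mentions a TypeMirrors utility reachable through a static method on Tools;
        // this sketch sticks to the standard Elements and Types (accessor names assumed).
        Lint lint = new Lint(Tools.elements(), Tools.types());

        @Test
        void lints_string_field(Cases cases) {
            assertTrue(lint.lint(cases.one("string_field")));
        }

        @Test
        void lints_int_field(Cases cases) {
            assertFalse(lint.lint(cases.one("int_field")));
        }
    }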

Let’s break down the code snippet:

  • By annotating the class with @Inline, we can specify an inline Java source file which ToolsExtension includes for compilation.
  • The annotation processing environment can be accessed via either the Tools class or dependency injection into the test class's constructor or test methods. In this case, we access the current TypeMirrors using the static method on Tools.
  • An in-depth explanation of both @Case and Cases will be provided in the following section. For now, they’re just the mechanism used to find elements in compiled files.

The Case for Cases

With the completion of ToolsExtension, we succeeded in our quest to smuggle an annotation processing environment out of the compiler. Yet one final piece of the puzzle still remains: how do we create the elements to test our code against? The jdk.compiler module doesn’t provide a way to create elements. While mocking an Element is possible, it is far from developer-friendly. Not only is the initialization verbose, unwieldy and convoluted, it is also difficult to guarantee that a mocked element’s behaviour matches its actual counterpart’s. We can’t look to compile-testing for guidance either, since it doesn’t provide anything of the sort.

After much headache, we managed to find the missing piece. Let’s have the compiler transform our test cases written in idiomatic Java into elements for us. That way, we avoid the mess surrounding the initialization of elements and the resultant code is far easier to understand. To achieve that, we required some way to fetch elements from the compiler. After further refinement of the concept, we eventually developed the Cases class and corresponding @Case annotation.

Returning to our code snippet from Pandora’s Box, let’s analyze it in greater detail.

  • By annotating a test case with @Case inside a Java source file, we can fetch its corresponding element from Cases. A @Case may also contain a label to simplify retrieval.
  • Through Cases, we can fetch elements by either the label or the index of the case. We can obtain an instance of Cases via Tools.cases() or, like in this code snippet, through dependency injection.
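
The same elements can also be fetched without parameter injection, along these lines; apart from Tools.cases() itself, the lookup method names are assumptions:

    import javax.lang.model.element.Element;

    import com.karuslabs.elementary.junit.Cases;
    import com.karuslabs.elementary.junit.Tools;

    class CasesUsage {
        static Element stringFieldCase() {
            // Obtain Cases through the static accessor instead of parameter injection.
            Cases cases = Tools.cases();
            // Retrieval by label; retrieval by the case's index is also described above
            // (the exact method names may differ in Elementary's actual API).
            return cases.one("string_field");
        }
    }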

Idea Graveyard

As mentioned at the beginning of this article, we explored a few other avenues that eventually led to dead ends. We thought them interesting enough to discuss in the following sections. Most of them ended up getting shelved due to impracticality and unacceptable trade-offs.

It goes without saying that not testing annotation processors is a terrible choice. Just because testing them is difficult doesn’t give us the liberty of skipping it. The problems will only worsen over time if we take the easy route out. Furthermore, most annotation processors do code generation and static type analysis, both of which are extremely difficult to troubleshoot.

“Good things come to those who wait. But better things come to those who work for it.”

Had JEP 119: javax.lang.model Implementation Backed by Core Reflection shipped with JDK 8, I highly doubt Elementary would have even been conceived. It would have solved the issue of accessing an annotation processing environment outside of a compiler by providing a standard implementation. Sadly, it was shelved, and future efforts seem to have stalled. A wait-and-see approach to unit testing annotation processors is thus unfeasible, as there isn’t anything to wait on.

A problem even more difficult than testing annotation processors is trying to mock or re-implement the annotation processing environment. Since elements represent an AST of the Java language, we would need to be intimately familiar with the language specification to guarantee that the behaviour of mocked or re-implemented elements does not deviate from the original. This honestly makes testing annotation processors seem like a Disney fairy-tale; we don’t want to touch it even with a ten-foot pole. A few re-implementations do exist, but they seem to have been abandoned for years. In the end, the troubles outweighed the benefits, which led us to abandon this avenue.

Final Thoughts

We’ve reached the end of our journey to simplify the testing of annotation processors. Looking back, it has been an absolute blast working on the project. How widely this project will be adopted remains to be seen. But if anything, I hope that this article has encouraged you to start playing around with annotation processors.

In summary, Elementary introduces:

  • The JavacExtension for black-box testing and testing of simple annotation processors.
  • A class-scoped annotation processing environment for test classes extended with ToolsExtension.
  • Utilities for fetching elements from the compiler into the test class.

That said, this is only the beginning of yet another journey, one that I am hopeful will bring many new features and improvements to Elementary in the time to come. Until next time, happy coding!

*Shameless advertising*: This article is based on Elementary, https://github.com/Pante/elementary

*Even more shameless advertising*: Read the article on DZone, https://dzone.com/articles/the-problem-with-annotation-processors
