Continuous testing with JUnit and Gradle

One of the features I like when working on a small library is continuous testing. I used to set up my IDE to re-run my tests whenever a file changed. This article describes how to set up this kind of testing with Gradle and JUnit.

Continuous tests

Gradle can turn any build into a continuous build. Just run any task you want with the --continuous flag:

gradle test --continuous

Whenever a test file changes, Gradle will re-compile the whole project and run all tests. With the recent Gradle improvements to incremental builds and compilation avoidance, the compile step is usually very fast. This switch is usually enough for a small project.

Re-running only changed tests

However, the test run time can be long, and running only the changed tests would improve productivity. In that case, Gradle AFAIK doesn’t offer anything out of the box. But it’s quite easy to add this functionality using a custom incremental task.

First, let’s start by extending Gradle’s Test task and adding the incremental task inputs argument:

class TestWatcher extends Test {
    @TaskAction
    void executeTests(IncrementalTaskInputs inputs) {
        super.executeTests()
    }
}

Well, this doesn’t do anything but run the tests. But it shows that it’s possible to override the original @TaskAction. The inputs argument now gives us the ability to distinguish between two states:

  1. Something before the test task changed, therefore the task can’t be run incrementally.
  2. Only the test classes changed, and in that case it will give us the list of changed files.

It also gives us the files which have been removed, but we don’t care about them. So let’s enhance the task:

class TestWatcher extends Test {
    @TaskAction
    void executeTests(IncrementalTaskInputs inputs) {
        if (inputs.incremental) {
            inputs.outOfDate { InputFileDetails change ->
                // setup only the changed files for testing
            }
        }
        super.executeTests()
    }
}

In case the build is incremental at this point, we get our list of changes. To set them up for testing, we can use Gradle’s ability to run only the tests that pass a filter. The filter takes a class name in package.class format, but the incremental inputs give us files. We’ll have to turn the file names into class names and strip the absolute path:

class TestWatcher extends Test {
    @TaskAction
    void executeTests(IncrementalTaskInputs inputs) {
        if (inputs.incremental) {
            def outputDir = this.project.sourceSets['test'].output.classesDir.absolutePath
            this.filter.includePatterns = []
            inputs.outOfDate { InputFileDetails change ->
                def candidate = change.file.absolutePath
                if (candidate.endsWith('.class')) {
                    candidate = candidate
                            .replace('.class', '')
                            .replace(outputDir, '')
                            .substring(1)
                            .replace(File.separator, '.')
                    this.filter.includePatterns += candidate
                }
            }
        }
        super.executeTests()
    }
}

The first replace strips the file extension. The second removes the path to the test output folder. The substring removes the leading file separator, and the last replace changes the remaining file separators into dots. The resulting class names then populate the set of filter patterns in filter.includePatterns.
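As a concrete illustration, here is a minimal, standalone sketch of that conversion on a Unix-like system; the output directory and file path are made-up examples, not taken from a real build:

// Hypothetical paths used only to illustrate the file-name-to-class-name conversion.
def outputDir = '/project/build/classes/test'
def candidate = '/project/build/classes/test/com/example/FooTest.class'

candidate = candidate
        .replace('.class', '')        // -> /project/build/classes/test/com/example/FooTest
        .replace(outputDir, '')       // -> /com/example/FooTest
        .substring(1)                 // -> com/example/FooTest
        .replace(File.separator, '.') // -> com.example.FooTest (with '/' separators)

assert candidate == 'com.example.FooTest'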

And that’s it. Add the task to the main Gradle build file:

task testWatcher(type: TestWatcher) {
    testLogging {
        showStandardStreams = true
    }
}
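The original post doesn’t say where the TestWatcher class lives. One option (my assumption, not from the article) is to declare it directly in build.gradle, where Gradle’s default imports apply; another is to put it under buildSrc/src/main/groovy, in which case the imports have to be spelled out, roughly like this:

// Hypothetical buildSrc/src/main/groovy/TestWatcher.groovy header (assumption):
// outside build.gradle, Gradle's implicit imports are not available.
import org.gradle.api.tasks.TaskAction
import org.gradle.api.tasks.incremental.IncrementalTaskInputs
import org.gradle.api.tasks.incremental.InputFileDetails
import org.gradle.api.tasks.testing.Test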

And run the build with:

gradle testWatcher --continuous

When you change any of the classes, the tests will be re-run. If you change only a single test class, only that class will be run.

See also

Vue Nightwatch e2e tests on Travis CI in Chrome

Travis CI is a great continuous integration tool with seamless GitHub integration, available for free for open source projects. It can test various kinds of projects thanks to the different virtual machine images available. However, if you need to mix languages, things get a bit more complex.

If you have a JavaScript project with e2e tests using Nightwatch, you won’t be able to run them on the Node.js Travis image, because Nightwatch depends on Java to run the Selenium server. Fortunately, the Java image contains node, npm and nvm. So to make it work, I used the following Travis configuration in .travis.yml:

sudo: required
dist: trusty
language: java
addons:
  apt:
    sources:
      - google-chrome
    packages:
      - google-chrome-stable
jdk:
  - oraclejdk8
node_js:
  - 6
before_install:
  - export CHROME_BIN=chromium-browser
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - nvm install 6.9.5
  - npm install -g yarn
install:
  - yarn install
script:
  - yarn test

The sudo is required to install Chrome. The latest Nightwatch requires Java 8. I am not sure whether the node_js entry has any effect; it may be worth trying to remove it, since the image seems to ship only an older version of node anyway. The before_install part sets environment variables pointing to Chrome and to a virtual display, and xvfb then creates that virtual display for Chrome to use. Next, the node version I am using for development is installed. Finally, yarn is installed and the tests are run through yarn.

Gradle Release Plugin in a Multi-Project Gradle Build

The typical software release at the end of an iteration includes incrementing the software version, tagging the release in version control and publishing the production artifacts. The Gradle Release Plugin does all that and more, and it makes releasing a Gradle project very easy. Until you need to release a multi-project Gradle build. The official workaround did not work for me, so I have created my own.

The problem

When simply adding the release plugin to the root project, tasks that should run only once (such as VCS tagging) are run multiple times, which fails the build. When adding the release plugin to the subprojects, there are issues with the separate versions and with the fact that the VCS repository root is in the parent directory.

Official workaround

The workaround described on the plugin homepage recommends adding the release plugin to the root project and then running the release tasks separately for the sub-projects. In my opinion, this creates confusion between versions (i.e. each sub-project is released on its own, increasing its version every time).

My workaround

My workaround is based on the official one, but it automates the release process and avoids creating multiple versions. The idea is to disable the release task in the sub-projects, so that no release tasks execute on the wrong level, while still publishing all sub-project artifacts. Note that I am using the Gradle wrapper, so I have added it to the script example.

plugins {
    id 'net.researchgate.release' version '2.3.5'
}

allprojects {
    apply plugin: 'maven-publish'
}

subprojects {
    task release(overwrite: true) {
        // overwrite the release task in the subprojects
    }
}

afterReleaseBuild.dependsOn(':moduleA:publish', ':moduleB:publish')

task wrapper(type: Wrapper) {
    gradleVersion = '2.11'
}

To disable the release task in all sub-modules, it’s possible to overwrite it with an empty task in the subprojects block of the main script. To enable publishing of the sub-module artifacts, a dependency on the sub-project’s publish task must be defined, as in the example, for each sub-project that should have its artifacts published.

To release the project, simply run:

gradlew release

Hosting a Maven Repository in Amazon S3

We had a simple requirement to host our internal artifacts in the Amazon cloud. We have migrated most of our builds to Gradle, and since version 2.4, Gradle supports repositories hosted in Amazon S3.

Ivy vs. Maven

Gradle supports both Ivy and Maven repositories in S3. But since Ivy is trying to mimic Maven and we already had a Maven repo, I picked Maven.

Setup

I’ve created an S3 bucket with a simple structure inside:

maven
|- snapshots
|- internal

Getting dependencies

The following code in build.gradle enables download of the dependencies:

apply plugin: 'maven'

repositories {
    maven {
        name "s3snapshots"
        url "s3://bucket-name/maven/snapshots"
        credentials(AwsCredentials) {
            accessKey "aws access key"
            secretKey "aws secret key"
        }
    }
    maven {
        name "s3internal"
        url "s3://bucket-name/maven/internal"
        credentials(AwsCredentials) {
            accessKey "aws access key"
            secretKey "aws secret key"
        }
    }
}
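Hard-coding the keys is only for illustration. A common alternative (my addition, not part of the original post; the awsAccessKey and awsSecretKey property names are made up) is to read them from gradle.properties or from environment variables:

// Sketch: resolve the AWS keys from project properties or the environment
// instead of hard-coding them in build.gradle.
credentials(AwsCredentials) {
    accessKey = project.hasProperty('awsAccessKey') ? project.awsAccessKey : System.getenv('AWS_ACCESS_KEY_ID')
    secretKey = project.hasProperty('awsSecretKey') ? project.awsSecretKey : System.getenv('AWS_SECRET_ACCESS_KEY')
}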

Publishing dependencies

I played with the uploadArchives Gradle task, but it was creating the Ivy structure, so I decided to go with the newer maven-publish plugin.

The following shows how to switch between two repositories and how to reference previously defined repositories. The switching works very well with the Gradle Release Plugin that we are now using.

apply plugin: 'maven-publish'

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
    repositories {
        // the repository names must match those defined in the repositories block above
        add project.version.endsWith('-SNAPSHOT') ? project.repositories.s3snapshots : project.repositories.s3internal
    }
}

To include source jars, test jars or other archives, e.g. from the distribution plugin, add them to the publication. The group ID and artifact ID can be set here as well.

apply plugin: 'maven-publish'

publishing {
    publications {
        maven(MavenPublication) {
            groupId 'group'
            artifactId 'artifact'
            from components.java
            artifact sourceJar {
                classifier "sources"
            }
            artifact testJar {
                classifier "test"
            }
            artifact distZip {
                classifier "zip"
            }
        }
    }
}
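The sourceJar and testJar tasks used above are not defined in the snippet itself. A minimal sketch of how they might look, assuming the Java plugin is applied (these task definitions are my assumption, not from the original):

// Hypothetical helper tasks packaging the main sources and the compiled test classes.
task sourceJar(type: Jar) {
    from sourceSets.main.allJava
}

task testJar(type: Jar) {
    from sourceSets.test.output
}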

Migration from other Maven repositories

I have successfully migrated Apache Archiva to S3 simply by copying the whole directory structure from Archiva’s data folder to S3. It should be possible to migrate any Maven repository the same way.
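For example, assuming the AWS CLI is configured with credentials for the bucket (the local path and bucket name below are placeholders, not taken from the original setup), the copy can be done with a single sync command:

aws s3 sync ./archiva-data/repositories s3://bucket-name/maven/internal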

Missing features

The main disadvantage of this solution is the lack of an administration interface and the inability to prune old snapshots. I am planning to create a utility that will take care of that.

Fortunately, the S3 cost is very low, so this will do for now.

Example

You can find an example implementation on GitHub.