In this article we'll set up automated SonarQube analysis that runs every time we push to the remote. To keep the article simple, let's say we have a pipeline job with two steps. If you followed the previous two blog posts in this series (links at the bottom), you know we are working with a Maven project, so the pipeline here will:
- build the project with Maven,
- analyze the code so that we don't end up with bugs or security vulnerabilities on the master branch.
We will also apply some caching, and save time by passing artifacts from one pipeline step to the next.
Why do we need code analysis automation?
We could run the analysis manually on our machines, but it's better to set it up automatically, for several reasons. I'll name only a few. First, a machine won't forget to do it. And if we agree on some level of quality, the automated check will prevent us from submitting bad code to production. Of course, it also frees our machines from doing the work, and in the end we can centralize the output reports so everyone on the team has easy access.
Bitbucket pipeline setup
As stated above, we will be using a pipeline with two steps. Here is the first one:
```yaml
definitions:
  steps:
    - step: &build
        name: Maven Build
        caches:
          - maven
        script:
          - mvn clean install
        artifacts:
          - target/site/**
          - target/classes/**
```
A few things to mention here:
- `&build` is a YAML anchor: we can name a step and reference it in several places in the yml file. The build step is a nice example, since it usually appears in both staging and production pipelines. Note that this sits under `definitions -> steps`, which is where it's possible to define a step, give it a name and reuse it later. We'll see the pipeline part at the bottom of the article.
- `artifacts -> target/site/**`: artifacts can be stored and reused in other steps within the same pipeline. We put everything below `target/site` for one reason: this is the location of our line coverage report, which Sonar will use in the next step.
- `target/classes/**`: Sonar will run its analysis against the classes in this directory, which are created by `mvn clean install`. Consider this and the previous artifact as something we will reuse in the next step.
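As a side note, the coverage report under `target/site` is typically produced by a coverage tool such as JaCoCo. If that's what you use, a minimal `pom.xml` sketch might look like this (the plugin version is an assumption; pick whatever is current):

```xml
<build>
  <plugins>
    <!-- JaCoCo writes its line coverage report under target/site/jacoco -->
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.8.11</version>
      <executions>
        <execution>
          <goals>
            <!-- attaches the coverage agent while tests run -->
            <goal>prepare-agent</goal>
          </goals>
        </execution>
        <execution>
          <id>report</id>
          <phase>verify</phase>
          <goals>
            <!-- generates the report during mvn clean install -->
            <goal>report</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```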
```yaml
    # the &sonar anchor lets us reference this step later, same as &build
    - step: &sonar
        name: Analyze code with Sonarqube
        caches:
          - maven
        script:
          - mvn sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_API_TOKEN
```
- We're using `caches -> maven` here, Bitbucket's built-in cache that speeds up subsequent Maven builds, same as in the previous step.
- The second thing we're using here are Bitbucket repository variables, so you can save your token (encrypted) at the repository level and use it in the pipeline. In our case these are `$SONAR_HOST_URL` and `$SONAR_API_TOKEN`.
I want to show something from the pipeline log that might not be obvious from this step alone:
```
Artifact "target/site/**": Downloading
Artifact "target/site/**": Downloaded 132.9 KiB in 0 seconds
Artifact "target/site/**": Extracting
Artifact "target/site/**": Extracted in 0 seconds
Artifact "target/classes/**": Downloading
Artifact "target/classes/**": Downloaded 9.4 KiB in 0 seconds
Artifact "target/classes/**": Extracting
Artifact "target/classes/**": Extracted in 0 seconds
Cache "maven": Downloading
Cache "maven": Downloaded 207.8 MiB in 3 seconds
Cache "maven": Extracting
Cache "maven": Extracted in 1 seconds
```
Artifacts and the cache are downloaded during the step setup phase, at the very beginning. What does this mean for our `mvn sonar:sonar`? Well, we don't have to run `mvn install` or `mvn test` one more time for the sonar step. We already have the files generated in the first step, and we saved ourselves some time on the Bitbucket pipeline execution.
Additional sonar settings in pom.xml
```xml
<properties>
    ...
    <sonar.projectKey>hashnode-blog-showcase</sonar.projectKey>
    <sonar.java.binaries>target/classes</sonar.java.binaries>
</properties>
```
We can define sonar properties in the `pom.xml` file. Here we provided the project key, which is optional; if it's missing, Maven will use the default `groupId:artifactId`.
But the other one is important: `sonar.java.binaries`. Remember, we don't run `mvn install` in our second step. This property tells Sonar where to look for the Java binaries to analyze. Without it, your sonar step will fail, complaining that it doesn't know what to analyze.
So this works together with the artifact from step one, `target/classes/**`.
Setting up SonarQube to work with Bitbucket is easy, since we already have Maven: the Sonar plugin executes the analysis with a one-line command. However, we do need to tell it where the binaries are, and we also set up a few things that reduce the amount of time the pipeline runs. Communication with the Sonar server is configured via Bitbucket repository variables.

This configuration executes sonar analysis on every push to a remote branch, which can then give you incremental reports on your code. I won't go into too much detail about setting up the Bitbucket Pipeline itself, as that can be a topic of its own.
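As a sketch, if you'd rather not analyze every branch, you could keep the build in the `default` pipeline and run the sonar step only on pushes to master (the branch name and the `*build`/`*sonar` step aliases here are assumptions matching the setup above):

```yaml
pipelines:
  # default runs on every push to any branch without its own pipeline
  default:
    - step: *build
  # branch-specific pipelines take precedence for matching branches
  branches:
    master:
      - step: *build
      - step: *sonar
```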
Finally, here is the pipeline part of the yml file, as promised above:
```yaml
pipelines:
  default:
    - step: *build
    - step: *sonar
```
We can see how clear and simple it is, as we only need to reference the steps.
Alright, thanks for reading!