Creating a RESTful web service with SparkJava, Dagger2, and Gradle
In this post we are going to learn how to create a RESTful web service using SparkJava, Dagger2, and Gradle.
If you're an Android developer like I am, then you probably recognize Dagger2 and Gradle, but let's go over what each of these pieces of technology will give us.
- SparkJava will be our web service's core framework. It comes with an embedded Jetty server and makes it very easy to create an API or even a full-blown MVC web app. It is easy to start hacking around in and even easier to deploy, since it executes as a jar file.
- Dagger2 is a lightweight but fully featured DI framework from Google. We will use it to handle all of our dependency injection needs inside of our web service.
- Gradle is our build tool. We will use it to build and bundle our app.
The first thing we need to do is install Gradle if it is not already on your development machine. You can find installation instructions here.
I will be creating a Gradle Java project using the IntelliJ IDEA IDE. You can find the community edition of the IDE here. If you wish to work without an IDE, simply make a new directory somewhere and ensure that it has a src/ folder inside of it. Navigate into that directory on your command line and execute gradle wrapper to get the Gradle wrapper set up for this project.
Let's get started by setting up our build.gradle file. This file will provide instructions to Gradle on how to build our project.
Here is what our initial build.gradle file is going to look like:
Project link
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.4'
    }
}

group 'com.abnormallydriven.daggerspark'
version '1.0-SNAPSHOT'

apply plugin: 'java'
apply plugin: 'com.github.johnrengelman.shadow'

sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

jar {
    manifest {
        attributes 'Main-Class': 'com.abnormallydriven.daggerspark.Application'
    }
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
    compile 'com.sparkjava:spark-core:2.5.5'
}
Let's walk through what each part of the build file is giving us.
The initial buildscript block adds the shadow plugin to the classpath and tells Gradle where it can find it. We need the shadow plugin so that we can tell Gradle to build a "fat jar", which bundles all of our dependencies into a single, easily executable jar file.
After the buildscript block we give our project a group and a version.
Then we apply the java and shadow plugins so that they can do their job when we tell Gradle to build.
sourceCompatibility lets Gradle know what Java version to expect when compiling our source files.
We then declare mavenCentral in our repositories block so that we can pull our dependencies down.
Our jar block gives the minimum necessary for producing an executable jar: the Main-Class manifest attribute.
Finally, our dependencies block is where we add all of the libraries our project will depend upon. We will just add Spark for now and get to Dagger a little later.
Now that we have Gradle capable of building our project and Spark added as a dependency, let's build the simplest possible Spark API:
package com.abnormallydriven.daggerspark;

import static spark.Spark.*;

public class Application {

    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}
This simple class defines a single route at the /hello endpoint, and when we make a request against that endpoint we should see the string Hello World come back in the response. We can test this by executing:
./gradlew clean shadowJar
Gradle should then build a jar file for us and package Spark into it, so we can simply navigate to build/libs and run it from the command line:
java -jar dagger-spark-1.0-SNAPSHOT-all.jar
If we then navigate to localhost:4567/hello we should see "Hello World" in our browser window. That's all there is to it. Spark makes it really easy to get a Java web service up and running quickly.
You may have noticed in your command line output that Spark mentioned it couldn't find SLF4J and so it was going to default to the no-op logger. This is because Spark will use SLF4J as its logger if you provide it as a dependency in your project. This gives you logging for free without any work on your end except of course adding the dependency. We can do that now and then rebuild and rerun our jar to see what that looks like.
We will just need to open our build.gradle file and add this line to our dependencies block
compile group: 'org.slf4j', name: 'slf4j-simple', version: '1.7.23'
Then execute our shadowJar Gradle task and java -jar again to see Spark start our web service, this time with logging enabled.
Project Link
While we're modifying dependencies, let's go ahead and add Dagger and Gson to the project. Add these lines to the dependencies block in your build.gradle file:
// https://mvnrepository.com/artifact/com.google.code.gson/gson
compile group: 'com.google.code.gson', name: 'gson', version: '2.8.0'
// https://mvnrepository.com/artifact/com.google.dagger/dagger
compile group: 'com.google.dagger', name: 'dagger', version: '2.9'
// https://mvnrepository.com/artifact/com.google.dagger/dagger-compiler
apt group: 'com.google.dagger', name: 'dagger-compiler', version: '2.9'
Then in our buildscript block we will need to add the Gradle plugin portal to our repositories block and a new classpath dependency to the dependencies block within the buildscript:
buildscript {
    repositories {
        jcenter()
        maven {
            url "https://plugins.gradle.org/m2/"
        }
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.4'
        classpath "net.ltgt.gradle:gradle-apt-plugin:0.9"
    }
}
If you are using IntelliJ IDEA you will need to apply two plugins below our other plugins. If you are not using IntelliJ IDEA you can skip the idea plugin.
apply plugin: "net.ltgt.apt"
apply plugin: 'idea'
So what did we just do? We added a plugin that works with the Dagger compiler, allowing us to add it as an apt dependency. Now at compile time Dagger will be able to generate all of the behind-the-scenes classes that make dependency injection possible. The idea plugin makes sure the IDE recognizes these generated classes so that we can use them in the source editor without it telling us that they do not exist.
Project Link
Now that we've had our crash course in using Gradle and Dagger together, let's get back to the fun stuff and write some code! We are going to add a 'people' resource to our API. We will imagine that our API is the back end for a contacts app where users can add the first and last name of people that they know.
First we will need our Person entity:
public class Person {

    private long id;
    private String firstName;
    private String lastName;

    public Person(long id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public long getId() {
        return id;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
}
Then let's add an in-memory people repository. We'll stick to an in-memory repository so that we don't add the overhead of a database and ORM to our simple example.
public class PeopleRepository {

    private final AtomicInteger idSequence;
    private final Map<Long, Person> personMap;

    public PeopleRepository() {
        idSequence = new AtomicInteger(0);
        personMap = new ConcurrentHashMap<>(10);
    }

    public Person createPerson(String firstName, String lastName) {
        Person newPerson = new Person(idSequence.incrementAndGet(), firstName, lastName);
        personMap.put(newPerson.getId(), newPerson);
        return newPerson;
    }

    public List<Person> getAllPeople() {
        return personMap.entrySet()
                .stream()
                .map(Map.Entry::getValue)
                .collect(Collectors.toList());
    }

    public Person getPersonById(long personId) {
        return personMap.get(personId);
    }

    public Person updatePerson(Person updatedPerson) {
        personMap.replace(updatedPerson.getId(), updatedPerson);
        return updatedPerson;
    }

    public Person deletePerson(long personId) {
        return personMap.remove(personId);
    }
}
Next let's add the resource class that we will register with Spark to handle requests against our people endpoint.
public class PeopleResource {

    private PeopleRepository peopleRepository;
    private Gson gson;

    public PeopleResource(PeopleRepository peopleRepository, Gson gson) {
        this.peopleRepository = peopleRepository;
        this.gson = gson;
    }

    public Person createPerson(Request request, Response response) {
        JsonObject requestDto = gson.fromJson(request.body(), JsonObject.class);
        return peopleRepository.createPerson(
                requestDto.get("firstName").getAsString(),
                requestDto.get("lastName").getAsString());
    }

    public Person getPerson(Request request, Response response) {
        Long personId = Long.valueOf(request.params(":id"));
        return peopleRepository.getPersonById(personId);
    }

    public List<Person> getAllPeople(Request request, Response response) {
        return peopleRepository.getAllPeople();
    }

    public Person updatePerson(Request request, Response response) {
        Long personId = Long.valueOf(request.params(":id"));
        JsonObject requestDto = gson.fromJson(request.body(), JsonObject.class);
        Person person = peopleRepository.getPersonById(personId);
        person.setFirstName(requestDto.get("firstName").getAsString());
        person.setLastName(requestDto.get("lastName").getAsString());
        return peopleRepository.updatePerson(person);
    }

    public Person deletePerson(Request request, Response response) {
        Long personId = Long.valueOf(request.params(":id"));
        return peopleRepository.deletePerson(personId);
    }
}
Each method in our PeopleResource takes a Request and a Response object; Spark requires this for any method we wish to register as an HTTP request handler. So now let's look at how we can use this class with Spark to get requests routed correctly. Open up our Application class and replace our hello world route with our PeopleResource:
public static void main(String[] args) {
    Gson gson = new GsonBuilder().setPrettyPrinting().create();
    PeopleRepository peopleRepository = new PeopleRepository();
    GsonTransformer transformer = new GsonTransformer(gson);
    PeopleResource peopleResource = new PeopleResource(peopleRepository, gson);

    post("/people", "application/json", peopleResource::createPerson, transformer);
    get("/people/:id", "application/json", peopleResource::getPerson, transformer);
    get("/people", "application/json", peopleResource::getAllPeople, transformer);
    put("/people/:id", "application/json", peopleResource::updatePerson, transformer);
    delete("/people/:id", "application/json", peopleResource::deletePerson, transformer);
}
The first few lines create and set up the objects that we need. In a little bit we will see how we can use Dagger to separate that kind of setup and object construction code from our application's logic. Once we have all of the objects that we need, we make calls to Spark to register our various URL paths for the people resource.
post("/people", "application/json", peopleResource::createPerson, transformer);
Our call to post simply lets Spark know that if a POST request comes in on the /people path and its content type is application/json, then it should execute the createPerson method on our PeopleResource object. The last argument says the response should pass through our Gson transformer. We haven't looked at that class yet, so let's do that now.
public class GsonTransformer implements ResponseTransformer {

    private final Gson gson;

    public GsonTransformer(Gson gson) {
        this.gson = gson;
    }

    @Override
    public String render(Object model) throws Exception {
        return gson.toJson(model);
    }
}
Our GsonTransformer implements the ResponseTransformer interface. This interface is defined by Spark, and any object that implements it can be registered on a path to transform the objects returned by our resource methods before they are written to the response body.
Let's get rid of all that manual object creation and setup by putting Dagger to work. First we will need to create a Dagger module that can produce our Gson object and our response transformer.
@Module
@Singleton
public class ApplicationModule {

    @Provides
    public Gson provideGson() {
        return new GsonBuilder().setPrettyPrinting().create();
    }

    @Provides
    public ResponseTransformer provideResponseTransformer(GsonTransformer gsonTransformer) {
        return gsonTransformer;
    }
}
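One detail worth calling out: for Dagger to hand a GsonTransformer to provideResponseTransformer() (and, later, to build our PeopleResource and PeopleRepository), those classes need constructors annotated with @Inject. As a sketch, here is what that change looks like for GsonTransformer:

public class GsonTransformer implements ResponseTransformer {

    private final Gson gson;

    @Inject // lets Dagger construct this class and hand it the Gson binding from our module
    public GsonTransformer(Gson gson) {
        this.gson = gson;
    }

    // render() stays exactly as before
}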
After we've defined our module we will create a component that can use it.
@Singleton
@Component(modules = {ApplicationModule.class})
public interface ApplicationComponent {

    ResourceRegistry resourceRegistry();
}
Our component will produce a ResourceRegistry object that we will cover in just a moment. Before we look at the ResourceRegistry, let's look at what our Application class looks like now that we have Dagger handling object creation for us.
public class Application {

    private ApplicationComponent applicationComponent;

    private void start() {
        //initialize dagger
        initializeDagger();

        //register routes
        registerRoutes();
    }

    private void initializeDagger() {
        applicationComponent = DaggerApplicationComponent.create();
    }

    private void registerRoutes() {
        applicationComponent.resourceRegistry().registerRoutes();
    }

    public static void main(String[] args) {
        new Application().start();
    }
}
In our static main method we now instantiate an Application object and call start(). Our start method then creates our Dagger component, and we use that component to create a ResourceRegistry and register our routes with Spark.
Here is what our ResourceRegistry looks like:
@Singleton
public class ResourceRegistry {

    private PeopleResource peopleResource;
    private ResponseTransformer responseTransformer;

    @Inject
    ResourceRegistry(PeopleResource peopleResource,
                     ResponseTransformer responseTransformer) {
        this.peopleResource = peopleResource;
        this.responseTransformer = responseTransformer;
    }

    public void registerRoutes() {
        //Routes for our people resource
        post("/people", "application/json", peopleResource::createPerson, responseTransformer);
        get("/people/:id", "application/json", peopleResource::getPerson, responseTransformer);
        get("/people", "application/json", peopleResource::getAllPeople, responseTransformer);
        put("/people/:id", "application/json", peopleResource::updatePerson, responseTransformer);
        delete("/people/:id", "application/json", peopleResource::deletePerson, responseTransformer);
    }
}
ResourceRegistry is given an instance of our PeopleResource and our ResponseTransformer, and then uses them to register the routes that we want Spark to handle.
At this point we have a fully functioning RESTful web service making use of Spark and Dagger. Before we say job well done and move on, let's look at a few more features of Spark that any real-world web service is going to want to make use of.
First, what if we wanted to do something before a request is passed off to a resource method? Maybe our service requires that clients provide an API key before it will process their requests. Ideally we wouldn't have to implement that key-checking logic in every resource method of every resource class, and thankfully Spark provides us with Filter to handle just such a use case. Let's see what a really simple Filter might look like and how we can register it with Spark to handle any incoming request.
@Singleton
public class AuthorizationFilter implements Filter {

    private String apiKey;
    private Gson gson;

    @Inject
    public AuthorizationFilter(@Named("api_key") String apiKey, Gson gson) {
        this.apiKey = apiKey;
        this.gson = gson;
    }

    @Override
    public void handle(Request request, Response response) throws Exception {
        if (!apiKey.equals(request.headers("Authorization"))) {
            halt(401, gson.toJson(new ErrorMessage("You must provide a valid api key")));
        }
    }
}
The Filter interface defines a single method, handle(Request req, Response res), which will be executed whenever a request comes in that matches the path on which we registered the filter. We are going to make our filter run on every incoming request, so we will register it on the root path. Once we have done this, our web service will return a 401 status code and a simple JSON-wrapped error message to any calling client that does not provide a valid API key.
Here is how we register our filter with Spark
public void registerRoutes() {
    before(authorizationFilter);

    //Routes for our people resource
    post("/people", "application/json", peopleResource::createPerson, responseTransformer);
    get("/people/:id", "application/json", peopleResource::getPerson, responseTransformer);
    get("/people", "application/json", peopleResource::getAllPeople, responseTransformer);
    put("/people/:id", "application/json", peopleResource::updatePerson, responseTransformer);
    delete("/people/:id", "application/json", peopleResource::deletePerson, responseTransformer);
}
The call to before() without any path argument lets Spark know that this filter should execute on every incoming request before it is passed to a matching route. We can of course register filters for specific paths if needed, as in the sketch below, and the order of filter registrations also affects the order in which the filters are executed.
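For example, if we only wanted the authorization check on our people routes, Spark's before() also accepts a path pattern (a hypothetical variation, not something this project does):

before("/people", authorizationFilter);   // the collection route
before("/people/*", authorizationFilter); // the /people/:id routes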
Let's look at how we can use a before filter and Dagger to create a subcomponent that is scoped to just the current request.
The first step in creating a new Dagger scope is creating the scope annotation.
@Scope
@Retention(RetentionPolicy.RUNTIME)
public @interface RequestScope {
}
Once we have our request scope annotation we will need to create the subcomponent
@RequestScope
@Subcomponent()
public interface RequestComponent {

    String REQUEST_COMPONENT_ATTR_NAME = "requestComponent";

    RequestStatistics requestStatistics();
}
We will also need an object that "exists" in request scope. Ours will store request statistics for us.
@RequestScope
public class RequestStatistics {

    static AtomicInteger requestCount = new AtomicInteger(0);

    private long requestStartTime;
    private long requestEndTime;

    @Inject
    public RequestStatistics() {
        requestCount.incrementAndGet();
    }

    public void setRequestStartTime(long requestStartTime) {
        this.requestStartTime = requestStartTime;
    }

    public void setRequestEndTime(long requestEndTime) {
        this.requestEndTime = requestEndTime;
    }

    public long getTotalRequestTime() {
        return requestEndTime - requestStartTime;
    }
}
The last step in creating a Dagger subcomponent is adding a way for its parent component to instantiate it, so we will need to add this method signature to our ApplicationComponent interface:
RequestComponent requestComponent();
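With that addition, our ApplicationComponent interface now looks like this:

@Singleton
@Component(modules = {ApplicationModule.class})
public interface ApplicationComponent {

    ResourceRegistry resourceRegistry();

    // factory method for our request-scoped subcomponent
    RequestComponent requestComponent();
}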
Now we will create a new filter, which we will register as a before filter, to create the request-scoped component and store it inside of the request itself.
@Singleton
public class RequestScopeInjectionFilter implements Filter {

    @Inject
    public RequestScopeInjectionFilter() {
    }

    @Override
    public void handle(Request request, Response response) throws Exception {
        RequestComponent requestComponent =
                Application.getApplicationComponent().requestComponent();

        request.attribute(RequestComponent.REQUEST_COMPONENT_ATTR_NAME, requestComponent);

        requestComponent.requestStatistics()
                .setRequestStartTime(System.currentTimeMillis());
    }
}
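One thing to notice: the filter reaches our component through a static getter on Application, which we have not added yet. A minimal sketch of that change, assuming we keep the component in a static field:

public class Application {

    private static ApplicationComponent applicationComponent;

    public static ApplicationComponent getApplicationComponent() {
        return applicationComponent;
    }

    private void initializeDagger() {
        applicationComponent = DaggerApplicationComponent.create();
    }

    // start(), registerRoutes(), and main() are unchanged
}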
Let's talk about what this filter is doing. It grabs our application component and creates a new instance of our request-scoped component. Then it saves that RequestComponent instance inside of the SparkJava request as a request attribute. Finally, it gets the request-scoped instance of RequestStatistics and sets the request start time. Now that we have recorded the start of our request, we will need to record when it finishes. SparkJava can help us out again, since it also allows you to register filters that will execute after a request is over. It provides two registration methods for doing so: after() and afterAfter(). They execute in the order you would expect, and we will make use of them to finish up our work with our request scope.
First our After filter:
@Singleton
public class StatisticsAfterFilter implements Filter {

    @Inject
    public StatisticsAfterFilter() {
    }

    @Override
    public void handle(Request request, Response response) throws Exception {
        ((RequestComponent)
                request.attribute(RequestComponent.REQUEST_COMPONENT_ATTR_NAME))
                .requestStatistics()
                .setRequestEndTime(System.currentTimeMillis());
    }
}
This will record the end time of our request. Finally we will implement our afterAfter filter where we will calculate all of the request statistics and apply them as response headers.
@Singleton
public class StatisticsAfterAfterFilter implements Filter {

    @Inject
    public StatisticsAfterAfterFilter() {
    }

    @Override
    public void handle(Request request, Response response) throws Exception {
        long requestTotalTime = ((RequestComponent)
                request.attribute(RequestComponent.REQUEST_COMPONENT_ATTR_NAME))
                .requestStatistics()
                .getTotalRequestTime();

        response.header("requestTime", String.valueOf(requestTotalTime));
        response.header("requestCount", String.valueOf(RequestStatistics.requestCount.get()));
    }
}
Now that we have been introduced to filters, there is one last piece of the Spark framework that I think is important to know about as you build web services: ExceptionHandler. We can register classes that will handle exceptions thrown during the lifetime of a request and convert those exceptions into a proper response.
Let's see an example by modifying our PeopleRepository to throw a special EntityNotFoundException if a request for a nonexistent person comes in.
public Person getPersonById(long personId) throws EntityNotFoundException {
    Person person = personMap.get(personId);
    if (null == person) {
        throw new EntityNotFoundException("No person found for personId=" + personId);
    }
    return person;
}
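The post never shows EntityNotFoundException itself. A minimal sketch, assuming a plain checked exception (which means getPerson() in our PeopleResource also picks up a matching throws clause; Spark route handlers are allowed to throw):

public class EntityNotFoundException extends Exception {

    public EntityNotFoundException(String message) {
        super(message);
    }
}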
Then we create the ExceptionHandler to handle this scenario.
@Singleton
public class EntityNotFoundExceptionHandler implements ExceptionHandler {

    private Gson gson;

    @Inject
    public EntityNotFoundExceptionHandler(Gson gson) {
        this.gson = gson;
    }

    @Override
    public void handle(Exception exception, Request request, Response response) {
        ErrorMessage errorMessage = new ErrorMessage(exception.getMessage());
        response.status(404);
        response.type("application/json");
        response.body(gson.toJson(errorMessage));
    }
}
The handle() method of the ExceptionHandler will execute whenever we throw an EntityNotFoundException, and our response will be transformed into a 404 with a descriptive message.
We register this handler with Spark like this, using an instance that Dagger has injected for us (its constructor needs a Gson, so we don't construct it by hand):
exception(EntityNotFoundException.class, entityNotFoundExceptionHandler);
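One way to wire that up with Dagger, assuming we register the handler from our ResourceRegistry alongside the routes, is to add EntityNotFoundExceptionHandler as another @Inject-ed constructor parameter and register it in registerRoutes():

public void registerRoutes() {
    before(authorizationFilter);
    exception(EntityNotFoundException.class, entityNotFoundExceptionHandler);

    // ...people routes registered exactly as before
}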
Now whenever our people repository throws an EntityNotFoundException, Spark will route the request to our exception handler, and we can be sure that our client will get the response we want instead of a generic 500. It also means we don't need to pollute our request handling logic with potentially dozens of little try-catch blocks for all of the exception cases we may run into.
That's it! You should now have all the tools you need to begin creating web services using Spark, Dagger2, and Gradle. Go make something awesome!