Technical Blog – Future Processing
12.01.2016
Microservices with minimum overhead using Spring Cloud, Docker and AWS ECS

This post describes how to minimize overhead when building systems with a microservices architecture, especially when it comes to development and deployment.

Some time ago, we published a similar post about solving this problem in the Microsoft Azure world. This time we will focus on AWS, Java, Spring Boot, Docker and some useful open-source libraries from Netflix.

Pros and cons of Microservices

Nowadays, users expect applications to be ultra-fast, reliable and available on multiple platforms, including mobile devices. Microservices come to the rescue because, in comparison to big monolithic systems, they are easier to:

  • scale (each service can be scaled independently)
  • maintain (small code base is easier to understand)
  • deploy (avoiding downtime is easier)
  • experiment with other technologies (each service can be written in a different technology stack)

On the other hand, microservices come at an additional price. What’s the problem with microservices? In a nutshell:

Monolithic vs Microservices - Future Processing

Source: https://twitter.com/dqo/status/584057756958728193

Joking aside, instead of 1, you have N:

  • deployment pipelines
  • apps to monitor
  • independent databases

And all the problems that come with distributed systems…

Solutions and patterns

Fortunately, we can leverage some technologies and patterns that have emerged in recent years. We can use tools like Docker (along with container services like AWS ECS) and some mature open-source libraries. Let’s go through a simple example. The source code can be found on GitHub.

Overview

Our weather application is simple. It consists of two independent services:

  • a webapp, responsible for rendering a web page
  • a weather service, which exposes an API consumed by the webapp

How does the webapp know where to look for the weather service?

Because of the ephemeral nature of microservices (especially in the cloud), it’s not a good idea to hardcode any URLs in the source code. Services can be redeployed to any other instance, scaled up or scaled down. This means that every service should discover the services it relies on. In our example, there is a separate registry based on Netflix Eureka. All services register themselves in the registry so that other services can discover them by name. How do we consume the weather service from the webapp? Let’s have a look at the code:

@FeignClient(name = "weather-service")
public interface WeatherServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/weather")
    Map<String, String> getWeather();
}

Feign is a declarative HTTP client. That’s all we need to consume a service discovered via Eureka and spread requests across all instances of weather-service. Under the hood, Feign integrates with other Netflix libraries: Ribbon (a client-side load balancer) and the aforementioned Eureka.

OK, but how does the webapp know where to find the registry? The registry itself cannot be discovered in the same way (because there is no registry to ask yet). It’s most likely the only service that needs an absolutely static address and must always be up and running. That sounds scary, but in an AWS environment it can be solved using an ELB and the server-side discovery pattern. The same mechanism is used to expose the webapp to the world. Technically, it’s configured using an ECS Service, which will be described later in the article.
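In practice, each service then only needs one stable URL: the registry’s. A minimal Spring Cloud client configuration could look like the sketch below (the ELB DNS name is a hypothetical placeholder, not taken from the example project):

```yaml
# application.yml (sketch): point the Eureka client at the registry behind an ELB
eureka:
  client:
    serviceUrl:
      # hypothetical DNS name of the ELB fronting the Eureka registry
      defaultZone: http://registry-elb-123456789.eu-west-1.elb.amazonaws.com:8761/eureka/
```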

How do I run an individual service locally?

Just enter the service’s source code directory and run mvn spring-boot:run. If the service requires other services to run, it’s convenient to implement some mocks and turn off service discovery using Spring profiles.
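As a sketch, a local profile could switch discovery off entirely; the property names below come from Spring Cloud Netflix, while the profile name itself is our assumption:

```yaml
# application-local.yml (hypothetical "local" profile)
eureka:
  client:
    registerWithEureka: false   # don't register this service in Eureka
    fetchRegistry: false        # don't look up other services either
ribbon:
  eureka:
    enabled: false              # Ribbon falls back to a static server list
```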

How can I run several dependent services locally?

It wouldn’t be convenient to run each service manually every time we need to do some integration testing across them. Each service is dockerized, so we can use Docker Compose to run several services easily:

  1. mvn clean package
  2. docker-compose --file docker-compose-deps.yml up
  3. docker-compose --file docker-compose.yml up

As you can see, we use two docker compose configuration files:

  • docker-compose-deps.yml defines containers responsible for service discovery and configuration management
  • docker-compose.yml defines our actual services: webapp and weather service

Both files are a recipe for running, configuring and linking all the containers defined inside.
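For illustration, docker-compose.yml could look roughly like this; the service names, ports and the EUREKA_URL variable are assumptions for the sketch, not copied from the repository:

```yaml
# docker-compose.yml (sketch): the two application services
webapp:
  image: kanicz/microservices-webapp
  ports:
    - "8080:8080"
  environment:
    # hypothetical variable telling the app where the Eureka registry runs
    - EUREKA_URL=http://registry:8761/eureka/
weather-service:
  image: kanicz/microservices-weather
  environment:
    - EUREKA_URL=http://registry:8761/eureka/
```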

How to create a production environment on AWS?

EC2 Container Service consists of the following bits and pieces:

  • Cluster is a group of EC2 instances that are managed by ECS.
  • Task is an individual instance of a given Docker container.
  • Service allows you to keep a fixed number of tasks up and running. Optionally, it can attach them to an ELB.

Having all bits and pieces configured, ECS Scheduler will take care of deploying our services on the cluster. In our example we use CloudFormation template to run the entire environment:

ECS Cluster - Future Processing
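To make the Service piece concrete, a CloudFormation resource for the webapp service might look like the sketch below; the resource and container names mirror the article’s template, but the exact properties and values are our assumptions:

```json
{
  "WebappService": {
    "Type": "AWS::ECS::Service",
    "Properties": {
      "Cluster": { "Ref": "ECSCluster" },
      "TaskDefinition": { "Ref": "WebappTask" },
      "DesiredCount": 2,
      "Role": { "Ref": "ECSServiceRole" },
      "LoadBalancers": [{
        "ContainerName": "webapp",
        "ContainerPort": 8080,
        "LoadBalancerName": { "Ref": "WebappELB" }
      }]
    }
  }
}
```

The LoadBalancers section is what implements the server-side discovery pattern mentioned earlier: the ELB keeps a static address while tasks come and go behind it.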

How to create a low-cost staging environment on AWS?

It’s a good practice to perform tests on environments that are as similar to production as possible, but most of the time we don’t need as many compute and storage resources. The picture above shows a cluster that contains 6 EC2 instances, but it’s possible to change this number at any time (the instances are managed by an Auto Scaling group). In order to save money from the beginning, the CloudFormation template has a parameter, “ClusterSize”. It lets us create a small cluster for testing purposes, even with only one EC2 instance. Assuming that this instance has enough resources, the ECS Scheduler will run all services on this machine.
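The parameter itself could be declared along these lines (a sketch in standard CloudFormation syntax; the real template may use a different default or add constraints):

```json
{
  "Parameters": {
    "ClusterSize": {
      "Type": "Number",
      "Default": "1",
      "Description": "Number of EC2 instances in the ECS cluster"
    }
  }
}
```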

How do I deploy a new version of one service without downtime?

We have our environment deployed on AWS. Let’s deploy a new version (2.0) of our webapp. Firstly, we need to build the Docker image and push it to a Docker registry:

  1. cd webapp
  2. mvn clean package
  3. docker build --tag="kanicz/microservices-webapp:2.0" .
  4. docker push kanicz/microservices-webapp:2.0

Having the new version pushed to the Docker registry, we can deploy it:

  1. Modify our template, changing WebappTask so that it points to the newest version in the registry:
    {
    	"WebappTask": {
    		"Type": "AWS::ECS::TaskDefinition",
    		"Properties": {
    			"ContainerDefinitions": [{
    				"Name": "webapp",
    				"Image": "kanicz/microservices-webapp:2.0",
    [...]
    			}]
    		}
    	}
    }
    
  2. Update the CloudFormation stack with the updated template.

CloudFormation will detect that WebappTask has changed, so WebappService will be updated. Updating a service triggers a deployment: the ECS scheduler will create a new WebappTask, wait until it’s healthy and then shut down the old version. It’s important to mention that the cluster needs some spare resources (memory and CPU), because for a while two versions of the task will be running at the same time.
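How much overlap ECS allows during a deployment can be tuned on the service resource. As a sketch, using the AWS::ECS::Service deployment configuration properties (the values here are our assumption):

```json
{
  "DeploymentConfiguration": {
    "MinimumHealthyPercent": 100,
    "MaximumPercent": 200
  }
}
```

With these values, ECS keeps the full desired count of old tasks healthy while starting the new ones, which is exactly why the cluster temporarily needs room for both versions.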

Conclusion

Using the proper tools and platforms makes microservices less painful. Containers (in our case Docker) are especially useful, and amazing things can be built on top of them. EC2 Container Service is only one example; there are other exciting alternatives, such as Kubernetes or Mesos.

