DevExtreme - Real World Patterns - Orchestrating Microservices

This post is part of a series describing a demo project that employs various real-world patterns and tools to provide access to data in a MongoDB database for the DevExtreme grid widgets. You can find the introduction and overview of the series by following this link.

At this point, the time has come to describe the overall structure and architecture of the demo application, which employs Microservices. Microservices are an architectural pattern (or, if you agree with those who think it shouldn’t be called a pattern, feel free to use different terminology) that structures an application as a network of services. These services are expected to be quite small (hence the name), autonomous, and individually scalable. Teams working on services are given a lot of freedom in their implementation decisions, which ideally enables each service to take the greatest advantage of platforms and libraries and thereby fulfill its purpose as efficiently as possible. One goal of the architectural approach is to reduce the cost of changing the system or parts of it, enabling the overall product to stay on top of the fast-paced technological development that makes life so difficult for architects of monolithic systems. In some environments, a DevOps philosophy assigns responsibility for deployment and runtime maintenance to the development team.

If you are interested in additional information about the Microservices idea, I recommend reading Martin Fowler’s article as a starting point.

Obviously, services need to communicate. Generally, Microservices architects prefer simple and open solutions that don’t place a great burden on the communication layer itself. REST is commonly used for HTTP APIs and JSON for data exchange, in place of the more complicated XML-based mechanisms popular around the time of the original SOA movement. In real-world systems, it is possible to define an affinity between services that form a natural group, in which case programmers can decide to use arbitrary protocols or data formats for efficient communication. However, services are meant to be as self-contained as possible, which introduces the requirement for public APIs that are easily accessible from any platform.

Seneca

Services can communicate directly, but of course that limits scalability. Brokers can be used, but quite commonly communication is handled using message queue systems. For this project, I decided to rely on a package called Seneca, which uses a message-passing API to implement services and abstracts the communication layer. In any running Seneca service application, services can be local or remote, addressable using any of the supported transport channels, with routing applied to transfer messages between them. The demo currently uses Seneca’s own transport implementation as well as a custom REST service. I have previously used Seneca with RabbitMQ as a message queue, and I will integrate this into the demo in the future.

Here is an example of a service message handler to illustrate how Seneca works (from query-values.js):

  this.add("role:entitiesQuery, domain:values, cmd:fetch", (m, r) => {
    db(async (db) => {
      try {
        const res = await db.collection("values").findOne({ _id: new ObjectID(m.id) });
        if (res) r(null, replaceId(res));
        else r(null, { err$: "unknownid" });
      }
      catch(err) {
        r(null, { err$: err });
      }
    });
  });

The first line of the code is a Seneca API call that uses jsonic syntax to define the parameters of a message accepted by this handler. The handler is called when a message matching the parameter pattern is routed to this module, and the return handler (called r for brevity) is called with the result after processing. As you can see, the service implementation doesn’t concern itself with the source of the message or the target for the result.
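
For context, here is a minimal sketch of what a caller of this handler could look like, using seneca.act with the same jsonic pattern. The seneca instance, the id value and the logging are placeholders for illustration, not code from the project:

// Hypothetical caller, assuming a configured seneca instance is in scope;
// the id shown is just a placeholder value.
seneca.act(
  "role:entitiesQuery, domain:values, cmd:fetch",
  { id: "58d3a7c2f36d2812345678ab" },
  (err, result) => {
    if (err) return console.error(err);
    if (result.err$) console.log("No entity found:", result.err$);
    else console.log("Fetched:", result);
  }
);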

The configuration of a service sets up Seneca with information about other services that messages will need to be routed to. Here is an example of that (from command-service/index.js):

seneca.
  client({
    type: "tcp",
    host: process.env.VALSRVC_HOST || "validator",
    port: process.env.VALSRVC_PORT || 3003,
    pin: "role:validation"
  }).
  use("command-values").
  listen({
    type: "tcp",
    port: process.env.CMDSRVC_PORT || 3002,
    pin: "role:entitiesCommand"
  });

In this code, the first part of the chained instructions configures a connection to another host where messages matching role:validation will be processed. As a result, if any message matching that pattern is generated within the service, it will be transferred to that other host (in this case using the tcp transport). On the other end, the listen call defines that any services defined in command-values will be accessible through a tcp transport as well, as long as they match the pattern role:entitiesCommand.
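
To make the routing more tangible, a handler inside command-values might consult the validator with a call along the lines of the following sketch. The extra pattern fields (domain, cmd) and the payload shape are assumptions for illustration, not taken from the project:

// Hypothetical sketch from inside a command-values handler. Because of the
// client() configuration above, any message matching role:validation is
// routed to the "validator" host over the tcp transport.
// newValue is a placeholder for the data to be validated.
this.act(
  "role:validation, domain:values, cmd:validate",
  { instance: newValue },
  (err, validationResult) => {
    // continue with the command once the validator has replied
  }
);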

If you are curious to learn more about Seneca, there are several books available, and I recommend the docs on the site as well as the Getting Started tutorial.

Docker

In a Microservices structure, each service application is expected to run in its own dedicated, encapsulated environment. It would be possible to set up individual physical machines for services, but more commonly virtual machines are used. These can reside anywhere, but deploying them to the cloud is an obvious idea that also makes scaling very simple when required. Docker is a system that creates packages (called images) for complete application runtime environments, which can then be started locally, orchestrated as a scalable network of services, and uploaded directly to cloud environments.

Note that Docker is not a virtual machine system in itself. In its original Linux incarnation, Docker images contain Linux environments, and when an image is run on a Linux machine, the processes inside the image run directly on the host machine, albeit in a cleverly isolated way that guarantees separation from the host. In the end, the host decides how to run the images, and when a Linux image is run in Docker for Windows, a Hyper-V virtual machine environment is used.

The demo application uses one Docker configuration file per service application. These files define how the Docker image is built and they look like scripts (from Dockerfile-command-service):

FROM node:latest

RUN mkdir -p /usr/src/app/command-service
RUN mkdir -p /usr/src/app/db

COPY command-service /usr/src/app/command-service
COPY db /usr/src/app/db
COPY message-utils /usr/src/app/message-utils
WORKDIR /usr/src/app/db
RUN npm install
WORKDIR /usr/src/app/command-service
RUN npm install
RUN npm install -g node-inspector@0.7.5

EXPOSE 3002 57575

RUN chmod 755 start-service.sh
CMD [ "./start-service.sh" ]

This file specifies that the new image will be based on a standard image called node:latest, which contains a current version of Node.js. On top of that, files are copied to the image from the local directory and npm packages are installed. The EXPOSE line specifies two network ports to be made available by the running image and the final CMD tells Docker what should happen when the image is started.

The Makefile shows the instruction used to build the Docker image, for instance for the command service:

docker build -t sturm/cqrs-grid-demo/command-service -f Dockerfile-command-service .

The parameter following -t is the name of the image, which is registered locally with Docker. After an image has been built, it can be run like this:

> docker run sturm/cqrs-grid-demo/command-service

IP Address: 172.17.0.2
Node Inspector v0.7.5
Visit http://172.17.0.2:57575/debug?port=5858 to start debugging.
[nodemon] 1.11.0
[nodemon] reading config /usr/src/app/command-service/nodemon.json
[nodemon] to restart at any time, enter `rs`
[nodemon] ignoring: .git .nyc_output .sass-cache bower_components coverage /usr/src/app/command-service/node_modules/**/* Dockerfile package.json
[nodemon] watching: *.*
[nodemon] watching extensions: js,json
[nodemon] starting `node --harmony --debug index.js`
[nodemon] child pid: 32
[nodemon] watching 6 files
Debugger listening on 127.0.0.1:5858
{"kind":"notice","notice":"hello seneca ybhfsznkliwe/1490688205371/33/3.2.2/-","level":"info","when":1490688205390}
(node:33) DeprecationWarning: 'root' is deprecated, use 'global'

The output lines you see are generated by the running image. Further on in this post, I’ll explain the startup process in some more detail.

You can now see that the image is running on the system:

> docker ps

CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                 NAMES
8d902a245f4b        sturm/cqrs-grid-demo/command-service   "./start-service.sh"   17 seconds ago      Up 16 seconds       3002/tcp, 57575/tcp   gigantic_banach

Running images can be controlled externally, for instance to stop this one using the (partial) container ID shown by docker ps:

> docker stop 8d902

Orchestration

Since there are a number of Docker images created for the demo application, the process of “running the application” involves starting all of them. That gets boring quickly during development, and for runtime maintenance it would also introduce a point of failure, since it would require an admin to keep track of a large number of services manually. The Docker toolchain comes with a tool called docker-compose that can automatically spin up and take down networks of services on the basis of its own configuration file.

For example, docker-compose.yml contains the following block for the command service:

  command-service:
    image: sturm/cqrs-grid-demo/command-service
    volumes:
      - ./command-service:/usr/src/app/command-service
      - ./db:/usr/src/app/db
      - ./message-utils:/usr/src/app/message-utils
    ports:
      - "3002:3002"
    links:
      - mongo
      - validator

The block specifies the name of the image and the main network port of the service. But docker-compose goes further than that, and this configuration uses two additional features. The volumes block mounts local directories into the image (on top of, or in place of, the directories that were copied into the image when it was created), so that the service application inside the image works directly with the local sources. This is useful because the image doesn’t need to be rebuilt every time a local source file changes.

The links block defines which other images this one depends on, and docker-compose starts images in the correct order automatically. The names of the defined links are also made available as network host names to the running application, so that the command-service can talk to the mongo and validator services using these names. If you remember, the startup file for command-service uses this link to configure the connection to the validator:

host: process.env.VALSRVC_HOST || "validator",

Unless overridden via an environment variable, the host name defaults to validator, and docker-compose makes that name resolvable through the link.
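
The same mechanism applies to the mongo link. Here is a hypothetical sketch of how a service could reach the database through that host name; the environment variable name, the database name and the mongodb 2.x callback style are assumptions, not code from the project:

// Hypothetical sketch only: connect to the linked "mongo" host by name.
// MONGO_HOST and the "demo" database name are illustrative assumptions,
// and the callback follows the mongodb 2.x driver style.
const { MongoClient } = require("mongodb");

const host = process.env.MONGO_HOST || "mongo";
MongoClient.connect("mongodb://" + host + ":27017/demo", (err, db) => {
  // db is the connected database handle
});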

docker-compose.yml contains a configuration block for each of the services in the demo application, plus an additional one that runs a standard image from the public Docker repository:

  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"

Starting the entire setup is now as simple as using one command: docker-compose up. For brevity, this is wrapped by the Makefile target dcup.

docker-compose service networks can easily be deployed to cloud infrastructure. Here is a video of how it can be done in Azure, and here’s a tutorial for Amazon’s ECS CLI that shows how to do it on AWS.

Services in the demo application

The demo application consists of six of its own services, plus the MongoDB one. Here they are:

Service          Purpose
query-service    Data querying
validator        Validation
command-service  Data modification
testing          Test data generation
web-proxy        REST service for external API
webapp           Web application integrating DevExtreme widgets

Each of these services is configured in the same way as described above, with its own Docker configuration file and a block in docker-compose.yml.

Development and Debugging

Each of the services also follows the same startup procedure. You’ll remember that the Docker configuration file contains the line CMD [ "./start-service.sh" ]. The shell script start-service.sh looks like this:

#!/bin/bash

terminate() {
    kill -TERM "$child" 2>/dev/null
}

trap terminate TERM

IP=`ip address | grep global | sed -r 's/.*inet (.*)\/.*/\1/'`

echo "IP Address: $IP"

node-inspector --no-preload --web-host=$IP --web-port=57575 &

sleep 1

node_modules/.bin/nodemon -I -V -- --harmony --debug index.js &
child=$!
wait "$child"

The handling of the TERM signal in the script helps to optimize work with the running docker-compose service network during development, because it speeds up the process of quitting the running images with Ctrl-C. It also means that each image is terminated with a TERM signal, which is friendlier than being “killed” by docker-compose.

Starting from the bottom of the file, you can see that the file index.js is eventually started to run the service, but this happens with the help of nodemon. nodemon is a tool that monitors the source code files of an application and restarts the application when source code changes. Since docker-compose.yml sets up volumes to be mounted into the image at runtime, nodemon can see any changes you make to your local source code files and react to them by restarting the service application inside the Docker image. As a result, you can simply run make dcup to start your service network and then work on your code, with changes being picked up automatically by the running services.

In addition, each service is configured to run node-inspector. As part of service startup, a line like this is shown on the console:

Visit http://172.17.0.2:57575/debug?port=5858 to start debugging.

You can connect to the URL shown in this line, for each service, to start debugging the service at any time. Of course it is possible to debug several services at once by opening several URLs in different browser windows.

Testing

Each service has a number of tests, which are written using Mocha and Chai. You can run them using make test, either for an individual service or for the complete project directory.

The test setups take advantage of the service architecture. In the Makefile, you can see how tests are run, using the standard MongoDB Docker image:

test:
    ID=`docker run -p=27017:27017 -d mongo:latest`; \
    sleep 2; \
    mocha --harmony $(TESTS) ; \
    docker kill $$ID

For those services that depend on others, the flexible structure of Seneca is used to run automated tests in a stable local environment by pulling in the required service projects for local evaluation. While the standard service application startup code configures Seneca to pass messages with certain patterns through remote connections, the test setup simply uses use calls to pull additional service definitions into the local Seneca “instance” (from web-proxy/tests/tests.js, which includes three other services apart from the service’s own code):

seneca.test(tdone).
  use(proxy).
  use(queryService, { connectedDb: db }).
  use(commandService, { connectedDb: db }).
  use(validator).
  use(web, config).
  ready(() => {
    const server = seneca.export('web/context')();
    test(server, () => {
      seneca.close();
      tdone();
    });
  });

Tests are executed in completely separate environments, where each test run has its own fresh database and clean Seneca setup, and even a new web server in the case of web-proxy.
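
To give an impression of what an individual test case built on this setup might look like, here is a minimal sketch in the Mocha/Chai style. The pattern, the id value and the assertions are illustrative assumptions rather than code from the project:

// Hypothetical Mocha/Chai test, assuming a local seneca instance has been
// assembled with use() calls as shown above. The id and the expected err$
// property follow the demo's conventions but are illustrative only.
const { expect } = require("chai");

describe("values query", () => {
  it("reports an unknown id", (done) => {
    seneca.act(
      "role:entitiesQuery, domain:values, cmd:fetch",
      { id: "000000000000000000000000" },
      (err, result) => {
        expect(err).to.be.null;
        expect(result).to.have.property("err$");
        done();
      }
    );
  });
});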

