Channel: davidvivash – Think in software

Simple Micro-service Web APIs


Microservice architectures need very little introduction (hopefully). At its simplest, a microservice architecture is just a collection of many small services communicating over HTTP(S).

What I’m interested in is what a single simple service within a microservice architecture might look like.

Creating a simple Web API using MongoDB and Node.js

My very simple (i.e. not production-ready) microservice will be a web API which handles user roles. This service will allow me to assign roles to users, and to query the roles a user is in. The endpoints I want to create are:

  • GET http://localhost/api/roles/:user_id
    Lists the roles a user is in
  • PUT http://localhost/api/roles/
    Body: { user_id: 1, roles: [{role_name:"admin"}, {role_name:"user"}] }
    Assigns the specified roles to the specified user.
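For concreteness, the PUT body above can be written out as a plain JavaScript object, together with a small helper (hypothetical, not part of the service) that pulls the role names out of a stored document:

```javascript
// Sample PUT body for /api/roles/ (matches the shape described above).
const putBody = {
  user_id: 1,
  roles: [{ role_name: 'admin' }, { role_name: 'user' }]
};

// Hypothetical helper: extract the role names from a UserRoles document,
// e.g. one returned by GET /api/roles/:user_id.
function roleNames(userRoles) {
  return (userRoles.roles || []).map(r => r.role_name);
}
```

So `roleNames(putBody)` gives `['admin', 'user']` – the same shape the GET endpoint will hand back.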

The main technologies I will use are:

  1. Node.js
    • This will be running the web server.
  2. Express JS
    • This is a Node.js-based framework for creating websites
  3. MongoDB
    • This will house the data
  4. Mongoose
    • This is a JS library that I’ll use to access the Mongo DB

Now, if you’ve not used these technologies before, hopefully the following will guide you through it. I’m going to do this in Visual Studio 2017.

Creating a Node.js Express 4 application in Visual Studio

The code developed in this section is available at: https://github.com/DavidVivash/NodeJs.MicroServicesExample

Visual Studio 2017 includes a template out-of-the box for this, so the amount of code you actually need to write is minimal.

Firstly, start a new project using the Express 4 template:

[Screenshot: the Express 4 project template in Visual Studio]

This creates a basic web application that runs on Node.js.

Secondly, as our web application is going to talk to MongoDB using Mongoose, we need to add this dependency. Also, as we want to decode the JSON that'll be sent to the PUT endpoint, we'll include a dependency on "body-parser" – a library for parsing incoming request bodies in Node.js. To specify these dependencies, we'll add them to package.json:

{
  "name": "roles.web-api",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "description": "Roles.WebAPI",
  "author": {
    "name": "David Vivash"
  },
  "main": "app.js",
  "dependencies": {
    "body-parser": "^1.17.1",
    "debug": "^2.2.0",
    "express": "^4.15.2",
    "mongoose": "^4.9.4",
    "pug": "^2.0.0-beta11"
  }
}
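To make it concrete what the body-parser dependency buys us, here is a rough happy-path sketch of what bodyParser.json() does per request – the real middleware also checks the Content-Type header, charset and size limits, so treat this purely as an illustration:

```javascript
// Rough sketch of what bodyParser.json() does for each request:
// read the raw request body and attach the parsed result as req.body.
// (The real middleware also validates Content-Type and enforces limits.)
function parseJsonBody(rawBody) {
  return JSON.parse(rawBody);
}

const req = {};
req.body = parseJsonBody('{"user_id":1,"roles":[{"role_name":"admin"}]}');
```

After the middleware runs, route handlers can read `req.body.user_id` and `req.body.roles` directly.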

Once these packages are installed, we're only going to create or modify 3 files:

[Screenshot: the three main files – app.ts, routes/roles.ts and models/userRoles.ts – in Solution Explorer]

  1. app.ts – this is the main startup file; it'll set up the connection to Mongo, set up the API routes and set up the basic infrastructure
  2. routes/roles.ts – this is used to define the GET and PUT actions of our API.
  3. models/userRoles.ts – this will be used to model the data that will be added into Mongo. This is basically to ensure we are using the correct properties of the correct types.

So, from the top: app.ts

import express = require('express');
import path = require('path');
import bodyParser = require('body-parser');
import mongoose = require('mongoose');

import rolesRoute from './routes/roles';

var app = express();

// pug view engine
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'pug');

// body parser
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

// routes
app.use('/api/roles', rolesRoute);

// connect to mongo
mongoose.connect('mongodb://localhost:27017/UserRoles');

// catch 404 and forward to error handler
app.use(function (req, res, next) {
  var err = new Error('Not Found');
  err['status'] = 404;
  next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
  app.use((err: any, req, res, next) => {
    res.status(err['status'] || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
// no stacktraces leaked to user
app.use((err: any, req, res, next) => {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});

module.exports = app;

The main lines of interest are the body-parser registration, the '/api/roles' route registration and the mongoose.connect call; it should be fairly self-explanatory what each is for.

Now looking at the model, userRoles.ts:


import { Schema, model } from "mongoose";

var RoleSchema = new Schema({
  role_name: String
});

var UserRolesSchema = new Schema({
  user_id: { type: Number, unique: true, index: true },
  roles: [RoleSchema]
});

export = model('UserRoles', UserRolesSchema);

The above exports a model backed by the "userroles" collection in MongoDB (Mongoose lower-cases and pluralises the model name), using the UserRolesSchema defined above.
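That model-name-to-collection-name mapping can be approximated like this – Mongoose's actual pluralisation rules are more involved, so this is just a sketch of the common case:

```javascript
// Simplified approximation of how Mongoose derives a collection name
// from a model name: lower-case it and make sure it's plural.
// (Mongoose's real pluraliser handles irregular nouns and more.)
function toCollectionName(modelName) {
  const lower = modelName.toLowerCase();
  return lower.endsWith('s') ? lower : lower + 's';
}
```

So the 'UserRoles' model reads and writes the "userroles" collection – which is exactly the collection name the .NET Core example later in this post targets.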

Finally, the actual endpoints that are exposed are defined in the roles.ts file:


import { Router, Request, Response } from 'express';
var UserRoles = require('../models/userRoles');

const router = Router();

router.get('/:user_id', (req: Request, res: Response) => {
  UserRoles.find({ user_id: req.params.user_id }, (err, user) => {
    if (err) return res.send(err);

    res.json(user);
  });
});

router.put('/', (req: Request, res: Response) => {
  UserRoles.findOneAndUpdate({ user_id: req.body.user_id }, req.body, { new: true, upsert: true, setDefaultsOnInsert: true }, (err, user) => {
    if (err) return res.send(err);

    res.json(user);
  });
});

export default router;

I’m not describing these files in too much detail, as hopefully they’re fairly straightforward to understand.
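The one behaviour worth spelling out is the PUT route's reliance on findOneAndUpdate with upsert: true. Its semantics can be sketched against a plain in-memory array (a hypothetical helper, for illustration only):

```javascript
// In-memory sketch of the upsert semantics the PUT route relies on:
// replace the matching document if one exists, otherwise insert it,
// and return the resulting document (mirroring { new: true, upsert: true }).
function upsertUserRoles(collection, doc) {
  const i = collection.findIndex(d => d.user_id === doc.user_id);
  if (i >= 0) {
    collection[i] = doc;   // replace the existing document
  } else {
    collection.push(doc);  // insert a new one
  }
  return doc;
}
```

Running it twice with the same user_id leaves a single document, which is exactly the property the API needs: PUT is repeatable.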

Testing

Now that I have a simple API, I want to test it in a simple way too. A very good tool for testing web APIs is Postman; I use the Chrome app. Postman makes it easy to issue HTTP requests to servers and analyse the responses. For the PUT endpoint, for example:

[Screenshot: the PUT request and its JSON body in Postman]

This will simply send the PUT request with the specified body.

This is a good way of issuing single commands to your web API, and you can save these requests into collections so that each API has a set of requests you can issue against it. But Postman allows you to go further – using the Postman "Runner" you can specify particular expectations for each of your requests: for example, that a 200 response will be returned, or that specific content will be present. This is essentially integration testing, which is essential if you plan to use a web API in production.
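The kind of expectation a Runner test encodes can be written down as a plain predicate over a { status, body } response object – a hypothetical shape, just to illustrate what an integration test for the GET endpoint asserts:

```javascript
// What an integration test for GET /api/roles/:user_id asserts,
// expressed as a plain predicate: a 200 status and an array of
// documents, each carrying a numeric user_id.
function checkGetRolesResponse(response) {
  return response.status === 200 &&
    Array.isArray(response.body) &&
    response.body.every(d => typeof d.user_id === 'number');
}
```

Crucially, a check like this says nothing about how the service is implemented – which is what lets the same test suite validate any replacement service.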

Unit Testing?

I’ll come back to this in another blog post, but for the simple web API I have presented, unit testing would be a waste of time – I certainly cannot recommend it. Integration testing, using Postman for example (or its command-line companion Newman), is the only kind of testing for a microservice this simple that will offer any real value.

Creating a simple Web API using MongoDB and .NET Core

Now I have my simple web API, and an integration test suite, I want to replace my web API with one written in a different language.

I want to do this to demonstrate the core tenet of microservice architectures:

Within a microservice architecture, each service is a language agnostic component.

Components are building blocks, which are:

  • Reusable
  • Isolated
  • Replaceable

It is very important in a microservice architecture that we can replace our services; in many cases it is preferable to replace an existing microservice than to add additional features to it. The Node.js example which I presented before was about 30 minutes' worth of development effort; to get it production-ready would probably take a few more hours, but for the job it's meant for, it's pretty much there.

So with this in mind, and considering the same GET and PUT endpoints I described before, I am going to re-implement the API as an ASP.NET Core Web API, still using Mongo DB for the backend.

Creating a .NET Core Web API in Visual Studio

The code developed in this section is available at: https://github.com/DavidVivash/DNX.MicroServicesExample

.NET is obviously what Visual Studio does best, and the out-of-the-box template for a Web API makes creating a .NET Core Web API very simple indeed. Firstly, you’ll want to select the ASP.NET Core Web Application (.NET Core) template:

[Screenshot: the ASP.NET Core Web Application (.NET Core) template in Visual Studio]

You’ll then get to tell Visual Studio you want to create a Web API:

[Screenshot: selecting the Web API project type]

This creates a basic web API that runs on .NET Core.

As this API will be communicating with MongoDB, the .NET driver needs to be added as a dependency – this can be done from the NuGet package manager within Visual Studio, searching for MongoDB.Driver:

[Screenshot: adding MongoDB.Driver in the NuGet package manager]

Now we have a web API capable of communicating with MongoDB, the only remaining steps are to create a controller and some models. The template web API that Visual Studio creates includes a ValuesController – we’ll replace this with a "RolesController". And in a similar way to how the Mongoose example above used a UserRoles model to describe the objects, we’ll create similar models within the .NET example. So we will create a RolesController, a UserRoles model, and a Role model:

[Screenshot: RolesController.cs, UserRoles.cs and Role.cs in Solution Explorer]

If you’ve written any .NET web APIs, the above will be fairly familiar; additionally, the models mimic what we created before for the Mongoose example.

The Role.cs model:

using MongoDB.Bson.Serialization.Attributes;
using Newtonsoft.Json;

namespace Roles.WebAPI.Models
{
    [BsonIgnoreExtraElements]
    public class Role
    {
        [BsonElement("role_name")]
        [JsonProperty("role_name")]
        public string RoleName { get; set; }
    }
}

… and the UserRoles.cs model:

using System.Collections.Generic;
using MongoDB.Bson.Serialization.Attributes;
using Newtonsoft.Json;

namespace Roles.WebAPI.Models
{
    [BsonIgnoreExtraElements]
    public class UserRoles
    {
        [BsonElement("user_id")]
        [JsonProperty("user_id")]
        public int UserId { get; set; }

        [BsonElement("roles")]
        [JsonProperty("roles")]
        public IEnumerable<Role> Roles { get; set; }
    }
}

Whilst both of these files are very straightforward, there’s some clear friction when compared to the Node.js example. C# property names are, by convention, Pascal cased, and I’ve kept that convention above. However, because my property names no longer match the ones in the previous example, I need to add attributes to each of my properties – not just 1 attribute, but 2. I need the BsonElement attribute to describe how the property needs to be mapped into Mongo DB, and a JsonProperty to control how the properties will be mapped to incoming and outgoing JSON.

Note: I could have just named my properties to match the JSON, and forgone the attributes above. This is a web API that expects and responds with JSON, and talks to a database that stores JSON-like structures, so maybe naming the properties to match the JSON would be best. I’ve kept with the convention of the language, however.
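Whichever naming convention each implementation uses internally, the two services have to agree on the wire format. That shared contract can be captured as a simple shape check over a serialised UserRoles document (a hypothetical helper, just to make the contract explicit):

```javascript
// The JSON contract both implementations must honour, expressed as
// a shape check over a serialised UserRoles document: a numeric
// user_id and an array of { role_name } objects.
function hasWireShape(doc) {
  return typeof doc.user_id === 'number' &&
    Array.isArray(doc.roles) &&
    doc.roles.every(r => typeof r.role_name === 'string');
}
```

A Pascal-cased document like { UserId: 1, Roles: [] } fails this check – which is precisely why the C# properties need the BsonElement and JsonProperty attributes.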

Anyway, with these two models in place, all we need now is RolesController.cs:

using System.Threading.Tasks;

using Microsoft.AspNetCore.Mvc;
using MongoDB.Driver;
using Roles.WebAPI.Models;

namespace Roles.WebAPI.Controllers
{
    [Route("api/[controller]")]
    public class RolesController : Controller
    {
        private static readonly MongoClient client;

        static RolesController()
        {
            client = new MongoClient("mongodb://localhost:27017");
        }

        [HttpGet("{user_id}")]
        public async Task<UserRoles> Get(int user_id)
        {
            var userRoles = await client.GetDatabase("UserRoles")
                .GetCollection<UserRoles>("userroles")
                .Find(Builders<UserRoles>.Filter.Eq("user_id", user_id))
                .FirstOrDefaultAsync();

            return userRoles;
        }

        [HttpPut]
        public async Task<UserRoles> Put([FromBody]UserRoles roles)
        {
            var result = await client.GetDatabase("UserRoles")
                .GetCollection<UserRoles>("userroles")
                .ReplaceOneAsync(Builders<UserRoles>.Filter.Eq("user_id", roles.UserId), roles, new UpdateOptions { IsUpsert = true });

            return roles;
        }
    }
}

There are perhaps a couple of things to note:

  • I’m storing a single static copy of the MongoClient instance. This instance performs all necessary connection pooling, and it’s correct to just have one instance.
  • I’m not injecting MongoClient. If I were to unit test this, maybe that would be worthwhile (there’s an IMongoClient to depend upon). There’s no value in doing it for this simple example, however – see the Testing section for why I’ve resisted making simple web APIs unit testable.
  • I’ve hardcoded the mongodb connection string. Well, yes, this should really be in a config file. I’ll leave that as an exercise; injecting options in .NET core has moved on a fair bit from .NET framework.
  • Lastly, though, this code is very simple.

So we now have a web API which completely mimics the web API that was developed in Node.js. This behaviour can be verified by running the two web APIs side-by-side, and comparing the endpoints by calling them in Postman.

So, why create the same service twice?

As mentioned previously, individual services within a microservice architecture should behave as components, and be swappable with alternative implementations. But the alternative implementation must still support the same operations. The reason for creating an alternative implementation in this case was just for reasons of comparison, but in real systems the reasons are numerous, for example:

  • Moving to an alternative OS, on which the existing service cannot run
  • A more performant service is needed, and a different technology might be better suited
  • The existing service might be written in a legacy or poorly supported language / framework
  • New features are needed, and writing them in a newer / different technology might make more sense

Creating integration tests in Postman, or a similar tool, is absolutely essential to test the behaviour of web APIs, and having a suite of integration tests which can be run against each service is much more valuable than (for example) spending time writing unit tests which are language and implementation specific.

Comparing the two implementations

So, we started off wanting just two simple endpoints (GET and PUT), and created a Node.js application and a .NET Core application to satisfy this. Which is better?

  1. Complexity of the solutions is roughly the same: the Node.js example created models, and the routes which satisfied the endpoints. This mirrors the .NET Core example, which had very similar models, and a controller to handle the routes.
  2. Performance of the solutions is roughly the same: you could spend time performance-tuning each example, and maybe one would win, but choosing between Node.js and .NET Core purely for reasons of performance probably won’t prove a particularly fruitful exercise. In my tests of these two examples, .NET Core was slightly faster, running on Windows 10. Maybe running on Linux would give a different result.
  3. Development time was roughly the same: the amount of code written was similar between the two.

So the only things I can see to choose between the solutions are maintainability, and having the infrastructure to support the technologies.

In terms of maintainability, I’d suggest this is a non-issue. The idea of a component is that it’s replaced when it reaches the end of its useful lifetime, so it should be more common for a microservice to be replaced than maintained. Yes, I did write “should” in italics.

So then we’re just left with infrastructure concerns – specifically, can your infrastructure support mixed technologies, and can the team responsible for ensuring production remains up and running adequately monitor the services you intend to release? Also, does your build / release pipeline support mixed technologies? If you are moving towards a true microservice architecture, the answer to all of the questions in this paragraph should be “yes”.

