Auto-incrementing IDs for MongoDB

If you’re familiar with relational databases like MySQL or PostgreSQL, you’re probably also familiar with auto-incrementing IDs. You select a primary key for a table and make it auto-incrementing. Every row you insert afterwards automatically gets a new ID, incremented from the last one. We don’t have to keep track of what number comes next or ensure the atomicity of this operation (what happens if two different clients want to insert a new row at the very same time? do they both get the same ID?). This can be very useful where sequential, numeric IDs are essential. For example, let’s say we’re building a url shortener. We can base62 encode the numeric ID of a url to quickly generate a short slug for that long url.
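To make that concrete, here’s a minimal base62 sketch in Python (the alphabet and function name are my own illustrative choices, not from any particular library):

```python
# Illustrative base62 alphabet: digits, lowercase, uppercase (62 chars).
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"


def base62_encode(number):
    """Encode a non-negative integer as a base62 string."""
    if number == 0:
        return ALPHABET[0]
    digits = []
    while number > 0:
        number, remainder = divmod(number, 62)
        digits.append(ALPHABET[remainder])
    return "".join(reversed(digits))


print(base62_encode(125))  # the 126th url gets the 2-character slug "21"
```

Because 62^5 is over 900 million, even the billionth url would need only a six-character slug.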

Fast forward to MongoDB: the popular NoSQL database doesn’t have any equivalent to sequential IDs. It’s true that you can insert anything unique as the required _id field of a MongoDB document, so you can take matters into your own hands and generate unique IDs yourself. But then you have to ensure the uniqueness and atomicity of the operation.

A very popular workaround is to create a separate MongoDB collection and maintain documents there with a numeric value to keep track of your auto-incrementing IDs. Every time we want to insert a new document that needs a unique ID, we come back to this collection, use the $inc operator to atomically increment the stored number, and then use the incremented number as the unique ID for our new document.

Let me give an example. Say we have a messages collection, and each new message needs a new, sequential ID. We create a new collection named sequences. Each document in this sequences collection will hold the last used ID for one collection. So, for tracking the unique ID in the messages collection, we create a new document in the sequences collection like this:

{
    "_id" : "messages",
    "value" : 0
}

Next, we will write a function that can give us the next sequential ID for a collection by its name. The code is in Python, using the PyMongo library.

from pymongo import ReturnDocument


def get_sequence(name):
    collection = db.sequences
    # Atomically increment the counter and return the *updated* document
    document = collection.find_one_and_update(
        {"_id": name}, {"$inc": {"value": 1}}, return_document=ReturnDocument.AFTER
    )
    return document["value"]

If we need the next auto-incrementing ID for the messages collection, we can call it like this:

{"_id": get_sequence("messages")}
Find and Modify – Deprecated

If you have searched on Google, you might have come across many StackOverflow answers as well as individual blog posts which refer to the findAndModify() call (find_and_modify in PyMongo). This used to be the way to do things, but it’s deprecated now, so please use the newer find_one_and_update function instead.

(How) Does this scale?

We would only call the get_sequence function before inserting a new mongo document. The function uses the $inc operator, which Mongo guarantees to be atomic. So even if hundreds of different clients try to increment the value of the same document at the same time, the increments will be applied one after another, and each client will get a unique, new ID.

I personally haven’t been able to test this strategy at a larger scale, but according to reports on StackOverflow and other forums, people have scaled it to millions of users. So I’d say it’s pretty safe.

API Star: Python 3 API Framework

For building quick APIs in Python, I have mostly depended on Flask. Recently I came across a new API framework for Python 3 named “API Star” which seemed really interesting to me for several reasons. Firstly, the framework embraces modern Python features like type hints and asyncio, and then it uses these features to provide an awesome development experience for us, the developers. We will get into those features soon, but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.

Now back to API Star – I feel very productive in the framework. I can choose to write async code based on asyncio or I can choose a traditional backend like WSGI. It comes with a command line tool – apistar – to help us get things done faster. There’s (optional) support for both the Django ORM and SQLAlchemy. There’s a brilliant type system that enables us to define constraints on our input and output, and from these API Star can auto generate API schemas (and docs), provide validation and serialization features and a lot more. Although API Star is heavily focused on building APIs, you can also build web applications on top of it fairly easily. All this might not make proper sense until we build something all by ourselves.

Getting Started

We will start by installing API Star. It would be a good idea to create a virtual environment for this exercise. If you don’t know how to create a virtualenv, don’t worry – just go ahead without one.

pip install apistar

If you’re not using a virtual environment or the pip command for your Python 3 is called pip3, then please use pip3 install apistar instead.

Once we have the package installed, we should have access to the apistar command line tool. We can use it to create a new project. Let’s create a new project in our current directory.

apistar new .

Now we should have two files created – app.py, which contains the main application, and test.py for our tests. Let’s examine our app.py file:

from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar.handlers import docs_urls, static_urls


def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}


routes = [
    Route('/', 'GET', welcome),
    Include('/docs', docs_urls),
    Include('/static', static_urls)
]

app = App(routes=routes)


if __name__ == '__main__':
    app.main()

Before we dive into the code, let’s run the app with python app.py and see if it works. If we navigate to http://127.0.0.1:8080/ we will get the following response:

{"message": "Welcome to API Star!"}

And if we navigate to: http://127.0.0.1:8080/?name=masnun

{"message": "Welcome to API Star, masnun!"}

Similarly if we navigate to: http://127.0.0.1:8080/docs/, we will see auto generated docs for our API.

Now let’s look at the code. We have a welcome function that takes a parameter named name which has a default value of None. API Star is a smart API framework. It will try to find the name key in the url path or query string and pass it to our function. It also generates the API docs based on it. Pretty nice, no?

We then create a list of Route and Include instances and pass the list to the App instance. Route objects are used to define custom user routing. Include, as the name suggests, includes/embeds other routes under the path provided to it.

Routing

Routing is simple. When constructing the App instance, we need to pass a list as the routes argument. This list should consist of Route or Include objects, as we just saw above. For a Route, we pass a url path, an HTTP method name and the request handler callable (function or otherwise). For an Include instance, we pass a url path and a list of Route instances.

Path Parameters

We can put a name inside curly braces to declare a url path parameter. For example, /user/{user_id} defines a path where user_id is a path parameter – a variable which will be injected into the handler function (actually, any callable). Here’s a quick example:

from apistar import Route
from apistar.frameworks.wsgi import WSGIApp as App


def user_profile(user_id: int):
    return {'message': 'Your profile id is: {}'.format(user_id)}


routes = [
    Route('/user/{user_id}', 'GET', user_profile),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()

If we visit http://127.0.0.1:8080/user/23 we will get a response like this:

{"message": "Your profile id is: 23"}

But if we try to visit http://127.0.0.1:8080/user/some_string – it will not match, because in the user_profile function we defined, we added a type hint for the user_id parameter. If it’s not an integer, the path doesn’t match. But if we go ahead and delete the type hint and just use user_profile(user_id), it will match this url. This is, again, API Star being smart and taking advantage of typing.

Including / Grouping Routes

Sometimes it might make sense to group certain urls together. Say we have a user module that deals with user related functionality. It might be better to group all the user related endpoints under the /user path – for example, /user/new, /user/1, /user/1/update and what not. We can easily create our handlers and routes in a separate module (or even a package) and then include them in our own routes.

Let’s create a new module named user; the file name would be user.py. Let’s put this code in the file:

from apistar import Route


def user_new():
    return {"message": "Create a new user"}


def user_update(user_id: int):
    return {"message": "Update user #{}".format(user_id)}


def user_profile(user_id: int):
    return {"message": "User Profile for: {}".format(user_id)}


user_routes = [
    Route("/new", "GET", user_new),
    Route("/{user_id}/update", "GET", user_update),
    Route("/{user_id}/profile", "GET", user_profile),
]

Now we can import our user_routes from within our main app file and use it like this:

from apistar import Include
from apistar.frameworks.wsgi import WSGIApp as App

from user import user_routes

routes = [
    Include("/user", user_routes)
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()

Now /user/new will delegate to the user_new function.

Accessing Query String / Query Parameters

Any parameters passed in the query string can be injected directly into the handler function. Say for the url /call?phone=1234, the handler function can define a phone parameter and it will receive the value from the query string / query parameters. If the url query string doesn’t include a value for phone, it will get None instead. We can also set a default value for the parameter like this:

def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}

In the above example, we set a default value of None for name, which is what it would receive anyway.

Injecting Objects

By type hinting a request handler, we can have different objects injected into our views. Injecting request related objects can be helpful for accessing them directly from inside the handler. There are several built in objects in the http package from API Star itself. We can also use its type system to create our own custom objects and have them injected into our functions. API Star also does data validation based on the constraints specified.

Let’s define our own User type and have it injected in our request handler:

from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar import typesystem


class User(typesystem.Object):
    properties = {
        'name': typesystem.string(max_length=100),
        'email': typesystem.string(max_length=100),
        'age': typesystem.integer(maximum=100, minimum=18)
    }

    required = ["name", "age", "email"]


def new_user(user: User):
    return user


routes = [
    Route('/', 'POST', new_user),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()

Now if we send this request:

curl -X POST \
  http://127.0.0.1:8080/ \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{"name": "masnun", "email": "[email protected]", "age": 12}'

Guess what happens? We get an error saying age must be equal to or greater than 18. The type system is enabling intelligent data validation for us. If we enable the docs url, we will also get these parameters automatically documented there.

Sending a Response

As you may have noticed so far, we can just return a dictionary and it will be JSON encoded and returned by default. However, we can set the status code and any additional headers by using the Response class from apistar. Here’s a quick example:

from apistar import Route, Response
from apistar.frameworks.wsgi import WSGIApp as App


def hello():
    return Response(
        content="Hello".encode("utf-8"),
        status=200,
        headers={"X-API-Framework": "API Star"},
        content_type="text/plain"
    )


routes = [
    Route('/', 'GET', hello),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()

It should send a plain text response along with a custom header. Please note that the content should be bytes, not string. That’s why I encoded it.

Moving On

I just walked through some of the features of API Star. There’s a lot more cool stuff in API Star. I do recommend going through the Github Readme to learn more about the different features offered by this excellent framework. I shall also try to publish short, focused tutorials on API Star in the coming days.

Getting Started with Pipenv

If you’re a Python developer, you probably know about pip and the different environment management solutions like virtualenv or venv. The pip tool is currently the standard way to install a Python package. Virtualenv has been a popular way of isolating Python environments for a long time. Pipenv combines the very best of these tools and brings us the one true way to install packages while keeping the dependencies of each project isolated. It claims to have brought the very best of all other packaging worlds (the package managers of other languages / runtimes / frameworks) to the Python world. From what I have seen so far, that claim is quite valid. And it supports Windows pretty well too.

How does Pipenv work?

Pipenv works by creating and managing a virtualenv and a Pipfile for your project. When we install / remove Python packages, the changes are reflected in the Pipfile. It also generates a lock file named Pipfile.lock which is used to lock the versions of dependencies and help us produce deterministic builds when needed. In a typical virtualenv setup, we usually create and manage the virtualenv ourselves. Then we activate the environment and pip just installs / uninstalls from that particular virtual environment. Packages like virtualenvwrapper help us easily create and activate virtualenvs with some of their handy features. But pipenv takes things further by automating a large part of that. The Pipfile will also make more sense if you have used other packaging systems like Composer, npm, bundler etc.

Getting Started

We need to start by installing pipenv globally. We can install it using pip from PyPI:

pip install pipenv

Now let’s switch to our project directory and try installing a package:

pipenv install flask

When you first run the pipenv install command, you will notice it creates a virtualenv, a Pipfile and a Pipfile.lock for you. Feel free to go ahead and inspect their contents. If you’re using an IDE like PyCharm and want to configure your project interpreter, it would be a good idea to note down the virtualenv path.
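For reference, the generated Pipfile is a TOML file that might look roughly like this – the exact source URL, package version markers and Python version depend on your setup, so treat this as an illustrative sample rather than exact output:

```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
flask = "*"

[dev-packages]

[requires]
python_version = "3.6"
```

The [packages] section tracks your runtime dependencies, while [dev-packages] holds development-only tools; the exact pinned versions live in Pipfile.lock.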

Since we have installed Flask, let’s try running a sample app. Here’s my super simple REST API built with Flask:

from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/')
def hello_world():
    return jsonify({"message": "Hello World!"})

Assuming that you have the FLASK_APP environment variable set to app.py (which contains the above code), we can just run the app like this:

pipenv run flask run

Any executable in the current environment can be run using the pipenv run command. But I know what you might be thinking – we want to do just flask run, not type the entire, longer command. That’s easy too. We just need to activate the virtualenv with this command:

pipenv shell

Now you can just do flask run – or in fact run any executable – the way we’re used to.

Handling Dependencies

We can install and uninstall packages using the install and uninstall commands. When we install a new package, it’s added to our Pipfile and the lock file is updated as well. When we uninstall a package, the Pipfile and the lock file are again updated to reflect the change. The update command uninstalls the packages and installs them again so we have the latest updates.

If you would like to check your dependency graph, just use the graph command which will print out the dependencies in a nice format, kind of like this:

PS C:\Users\Masnun\Documents\Python\pipenvtest> pipenv graph
celery==4.1.0
  - billiard [required: >=3.5.0.2,<3.6.0, installed: 3.5.0.3]
  - kombu [required: >=4.0.2,<5.0, installed: 4.1.0]
    - amqp [required: >=2.1.4,<3.0, installed: 2.2.2]
      - vine [required: >=1.1.3, installed: 1.1.4]
  - pytz [required: >dev, installed: 2017.3]
Flask==0.12.2
  - click [required: >=2.0, installed: 6.7]
  - itsdangerous [required: >=0.21, installed: 0.24]
  - Jinja2 [required: >=2.4, installed: 2.10]
    - MarkupSafe [required: >=0.23, installed: 1.0]
  - Werkzeug [required: >=0.7, installed: 0.12.2]

Pipenv is Awesome!

Trust me, it is! It packs a lot of cool and useful features that can help gear up your Python development workflow. There are just too many to cover in a blog post. I would recommend checking out the Pipenv Docs to get familiar with it more.

Deploying A Flask based REST API to AWS Lambda (Serverless) using Zappa

I have heard about AWS Lambda and all the cool things happening in the serverless world. I have also deployed Go functions using the Apex framework for serverless deployment. But recently I have started working on some Python projects again and decided to see how well the Python community is adapting to the serverless era. Not to my surprise, the Python community is doing great as usual. I quickly found an awesome framework named Zappa which makes deploying Python code to AWS Lambda very easy. Python is already natively supported on the AWS Lambda platform, but with the native support you need to configure the API Gateway, S3 bucket and other stuff on your own. Thanks to Zappa, these things are now automated for our convenience. We can easily deploy WSGI apps as well. That means we can now take our Flask / Django / API Star apps and deploy them to AWS Lambda – with ease and simplicity. In this blog post, I will quickly walk through how to deploy a Flask based REST API to the serverless cloud.

Setup Flask App

Before we can get started, we need to create a simple Flask app. Here’s a quick REST API (that doesn’t do much):

from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/')
def hello_world():
    return jsonify({"message": "Hello World!"})

Let’s save the above code in app.py. We can now install Flask using pip:

pip install flask

And then run the code:

FLASK_APP=app.py flask run

This should run our app and we should be able to visit http://127.0.0.1:5000/ to see the output.

Setting Up AWS

Please make sure you have an AWS account with your credit card added and the sign up process completed. AWS Lambda offers 1 million free requests per month, which is promised to be always free (not just for the first 12 months or anything). When you add your card, you will also be eligible for a free tier of S3 for 12 months. So you won’t be charged for trying out a sample app deployment – don’t worry about adding a card. In fact, adding a card is a requirement for getting the free tier.

Once you have your AWS account set up, click on your name (top right) and click “My Security Credentials”. From there, choose the “Access Keys” section and generate a new pair of key and secret. Store them in your ~/.aws/credentials file. The AWS CLI (and Zappa) will use these to connect to AWS services and perform the required actions. The file should look like this:

[default]
aws_access_key_id=[...]
aws_secret_access_key=[...]

[masnun]
aws_access_key_id=[...]
aws_secret_access_key=[...]

I have created two profiles here. Named profiles are useful if you have more than one account/project/environment to work with. After adding the credentials, add the region information in ~/.aws/config:

[default]
region=us-west-2
output=json

[profile masnun]
region=us-east-2
output=text

This will mostly help with choosing the default region for your app. Once you have these AWS settings configured, you can get started with Zappa.

Install and Configure Zappa

First install Zappa:

pip install zappa

Now cd into the project directory (where our flask app is). Then run:

zappa init

Zappa should guide you through the settings it needs. It should also detect the Flask app and auto complete the app path (app.app) for you. Once the wizard finishes, you’re ready to deploy your API.

zappa deploy dev

This should now deploy the app to the dev stage. You can configure different stages for your app in Zappa settings. Once you make some code changes, you can update the app:

zappa update

Both of these commands should print out the url for the app. In my case, the url is: https://1gc1f80kb5.execute-api.us-east-2.amazonaws.com/dev 🙂

What’s Next?

Congratulations, you just deployed a Flask REST API to AWS Lambda using Zappa. You can make the url shorter by pointing a domain to it from your AWS console. To learn more about Zappa and all the great things it can do, please check out Zappa on Github.

Golang: Interface

In Go or Golang, declaring an interface is pretty simple and easy.

type Printer interface {
	Print(string)
}

We just defined an interface named Printer that requires an implementer to have a method named Print which takes a string parameter and returns nothing. Interfaces are implemented implicitly in Go: any type that has the Print(string) method implements the interface. There is no need to use an implements keyword or anything of that sort.

type Terminal struct {
}

func (t Terminal) Print(message string) {
	fmt.Println(message)
}

In the above example, the Terminal type implements the Printer interface because it implements the methods required by the interface. Here’s a runnable, full code example:

package main

import (
	"fmt"
)

type Printer interface {
	Print(string)
}

type Terminal struct {
}

func (t Terminal) Print(message string) {
	fmt.Println(message)
}

func main() {
	var printer Printer
	printer = Terminal{}

	printer.Print("Hello World!")
}

We declared our printer variable to be of type Printer which is the interface. Since the Terminal type implements the Printer interface, we can pass Terminal{} to the printer variable and later call the Print method on it.

Interface and the Method sets

As you can understand, a method set is the set of methods on a type. The method set of an interface type (for example Printer here) is its interface – that is, the Print method in this example. The method set of a type T (for example Terminal) contains the methods which take a T type receiver. In our above code, the Print method takes the type Terminal, so it’s included in Terminal‘s method set. The corresponding pointer type, *T, has a method set that includes all methods with a receiver type of *T as well as the methods defined on the receiver type T. So the *Terminal type contains any method that takes either Terminal or *Terminal as a receiver type. So the Print method is also in the method set for *Terminal.

Method Set of T includes all methods receiving just T.
Method Set of *T includes all methods receiving either T or *T.

So the method set of *T includes the method set of T anyway. But by now, you might be wondering why this is so important. It is very important to understand the method set of a type because whether it implements an interface or not depends on the method set. To understand things further, let’s take a quick look at the following example:

package main

import (
	"fmt"
)

type Printer interface {
	Print(string)
}

type Terminal struct {
}

func (t *Terminal) Print(message string) {
	fmt.Println(message)
}

func main() {
	var printer Printer
	printer = Terminal{}

	printer.Print("Hello World!")
}

If you try to run this code on the go playground or try to run/compile it on your machine, you shall get an error message like this:

.\main.go:20:10: cannot use Terminal literal (type Terminal) as type Printer in assignment:
	Terminal does not implement Printer (Print method has pointer receiver)

Can you guess what’s happening? Well, our Print method has a receiver type of *Terminal, however we are trying to assign the type Terminal to printer. The Print method falls in the method set of *Terminal, not Terminal. So in this particular example, the *Terminal type actually implements the interface, not the base Terminal type. We can just assign &Terminal{} to printer and it will work fine. Try the code here – https://play.golang.org/p/MvyD0Ls8xb 🙂
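In case the playground link is unavailable, here is the corrected program in full – the only change from the failing example above is taking the address of the struct literal:

```go
package main

import (
	"fmt"
)

type Printer interface {
	Print(string)
}

type Terminal struct {
}

// Print has a pointer receiver, so it belongs to *Terminal's method set.
func (t *Terminal) Print(message string) {
	fmt.Println(message)
}

func main() {
	var printer Printer
	printer = &Terminal{} // a *Terminal value satisfies Printer

	printer.Print("Hello World!")
}
```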

Another interesting thing: since *Terminal also includes the method set defined on Terminal, this code would work just fine – https://play.golang.org/p/xDmNGBcwsM. This is why understanding the method set of a type is important to understand which interfaces it implements.

The Curious Case of Method Calls

We have seen how the method set of *T includes methods receiving both T and *T, but the method set of T is confined to methods that only take T and not *T. Now you might be thinking – I have seen code like the following snippet:

package main

import (
	"fmt"
)

type Printer interface {
	Print(string)
}

type Terminal struct {
}

func (t *Terminal) Print(message string) {
	fmt.Println(message)
}

func main() {
	var terminal Terminal
	terminal = Terminal{}
	terminal.Print("Hello!")
}

Here, the Print method receives a *Terminal type, but how are we calling it on a Terminal value? From what we have seen before, the method set of Terminal should not include methods that take a *Terminal receiver – so how is this call being made?

Well, the code x.m() works fine if the method m takes the type of x as receiver. That is fine with us. But if the method m takes the type *x and we try to call x.m() – that shouldn’t work, right? The proper call should be (&x).m() – no? Yes, correct. But Go provides us a shortcut here. If the method m is defined to take a *x type as receiver and x is addressable, x.m() works as a shortcut for (&x).m(). Go provides this shortcut to keep things simpler. So whether you have a pointer or a value, it doesn’t matter – as long as the value is addressable, you can call the method set of *x on x using the very same syntax. However, please remember that this shortcut is not available while working with interfaces.

The Empty Interface

The type interface{} has zero methods defined on it. And every type in Go implements zero or more methods, so every type’s method set satisfies the empty interface, aka interface{}. So if a variable is of type interface{}, we can assign a value of any type to it.

package main

import (
	"fmt"
)

func main() {
	var x interface{}

	x = 2
	fmt.Println(x)

	x = "masnun"
	fmt.Println(x)

}

We want to store different types in the same slice? Map values can be of different types? Just use interface{}.

package main

import "fmt"

func main() {
	things := []interface{}{}

	things = append(things, "masnun")
	things = append(things, 42)

	fmt.Println(things)

	unKnownMap := map[string]interface{}{}

	unKnownMap["name"] = "masnun"
	unKnownMap["life"] = 42

	fmt.Println(unKnownMap)

}

So, when we’re not sure of a type, or we need the type to be flexible / dynamic, we can use interface{} to store the values.

Type Assertion

While we can store any type in an interface{} value, not all types are the same. For example, you can not use the string functions on an integer type. Go will not accept it if you blindly pass an interface{} in an operation where a very specific type is expected. Take a look:

package main

import (
	"strings"
)

func main() {
	unKnownMap := map[string]interface{}{}

	unKnownMap["name"] = "masnun"
	unKnownMap["life"] = 42

	// compile error: cannot use unKnownMap["name"] (type interface {}) as type string
	strings.ToUpper(unKnownMap["name"])

}

Even though we have a string value stored against the name key, Go actually stores it as type interface{} and thus won’t allow us to use it like a string. Luckily, interface values do store the underlying value and its type. So we can use a type assertion to assert that the underlying value can behave like a certain type.

This works:

package main

import (
	"strings"
	"fmt"
)

func main() {
	unKnownMap := map[string]interface{}{}

	unKnownMap["name"] = "masnun"
	unKnownMap["life"] = 42

	fmt.Println(strings.ToUpper(unKnownMap["name"].(string)))

}

The unKnownMap["name"].(string) part – we’re doing the type assertion here. If the type assertion succeeds, we can use the value as a string. If it does not succeed, we will get a panic.
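To avoid the panic, Go also offers the “comma ok” form of type assertion, which reports success through a second boolean return value instead of panicking. A quick sketch (the asString helper is my own illustrative wrapper):

```go
package main

import (
	"fmt"
)

// asString uses the "comma ok" assertion: if the underlying value is
// not a string, ok is false and s is the zero value ("") -- no panic.
func asString(v interface{}) (s string, ok bool) {
	s, ok = v.(string)
	return s, ok
}

func main() {
	if s, ok := asString("masnun"); ok {
		fmt.Println("got a string:", s)
	}
	if _, ok := asString(42); !ok {
		fmt.Println("not a string, but no panic either")
	}
}
```

The comma ok form is the safe choice whenever you are not certain of the underlying type.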

Getting Type of an interface{}

If you have an interface{} and want to know what it holds underneath, you can use the %T verb with the Printf family of calls.

package main

import (
	"fmt"
)

func main() {
	unKnownMap := map[string]interface{}{}

	unKnownMap["name"] = "masnun"
	unKnownMap["life"] = 42

	fmt.Printf("%T \n", unKnownMap["name"])
	fmt.Printf("%T \n", unKnownMap["life"])

}

Type Switch

You can also use a switch statement with an interface{} to deal with different possible types.

package main

import (
	"fmt"
)

func main() {
	unKnownMap := map[string]interface{}{}

	unKnownMap["name"] = "masnun"
	unKnownMap["life"] = 42

	TypeSwitch(unKnownMap["name"])
	TypeSwitch(unKnownMap["life"])

}

func TypeSwitch(i interface{}) {
	switch i.(type) {
	case string:
		fmt.Println("String Value: ", i.(string))

	case int:
		fmt.Println("Integer Value: ", i.(int))

	}

}

The i.(type) gets you the type of the variable. Please remember it only works with a switch statement.
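There is also a binding form of the type switch that saves the extra assertion inside each case: the variable declared in the switch header holds the already-asserted value. A small sketch (the Describe function is my own illustrative helper):

```go
package main

import (
	"fmt"
)

// Describe uses the binding form of a type switch: inside each case,
// v already has the concrete type, so no further assertion is needed.
func Describe(i interface{}) string {
	switch v := i.(type) {
	case string:
		return fmt.Sprintf("string of length %d", len(v))
	case int:
		return fmt.Sprintf("int with value %d", v)
	default:
		return fmt.Sprintf("unhandled type %T", v)
	}
}

func main() {
	fmt.Println(Describe("masnun")) // string of length 6
	fmt.Println(Describe(42))       // int with value 42
	fmt.Println(Describe(3.14))     // unhandled type float64
}
```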

Golang: Making HTTP Requests

Go aka Golang is a very promising programming language with a lot of potential. It’s very performant, easy to grasp and maintain, productive and backed by Google. In our earlier posts, we have tried to provide guidelines to learn Go, and later we saw how to work with JSON in Go. In this blog post, we’re going to see how we can make http requests using Go. We shall make use of the net/http package, which provides all the stuff we need to make http requests or create new http servers. That is, this package would help you do all things “http”. To check / verify that we made correct requests, we will be using httpbin, which is a nice service for testing our http client requests.

A Simple HTTP Request

Let’s make a very simple GET request and see how we can read the response. We will send a simple HTTP GET request to https://httpbin.org/get and read the response. For that, we can just import the net/http package and use the http.Get function call. Let’s see an example:

package main

import (
	"net/http"
	"log"
	"io/ioutil"
)

func main() {
	MakeRequest()
}

func MakeRequest() {
	resp, err := http.Get("https://httpbin.org/get")
	if err != nil {
		log.Fatalln(err)
	}

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}

	log.Println(string(body))
}

We have created a separate MakeRequest function and called it from our main function. Going ahead, we will just see the changes inside this function and won’t need to think about the entire program. Inside this function, we have passed the url to http.Get and received two values – the response object and any error that might have happened during the operation. We did a check to see if there was any error; if there wasn’t, err would be nil. Please note that err is set only if there was an issue connecting to the server and getting a response back. It is not concerned with the http status code the server sent. For example, if the server sends an http 500 (which is internal server error), you will get that status code and error message on the resp object, not on err.

Next, we read resp.Body, which implements the io.ReadCloser interface, and we can use ioutil.ReadAll to fully read the response. This function also returns two values – a byte slice ([]byte) and err. Again, we check for any potential errors in reading the response body. If there were no errors, we print out the body. Please note the string(body) part. Here, we’re converting the byte slice to a string. If we don’t do it, log.Println will print out a representation of the byte slice – a list of all the bytes in the slice, individually. But we want a string representation, so we go ahead and make the conversion.
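To see the difference the conversion makes, here’s a tiny standalone sketch:

```go
package main

import (
	"fmt"
)

func main() {
	body := []byte("hi")
	fmt.Println(body)         // the raw byte slice: [104 105]
	fmt.Println(string(body)) // the text it represents: hi
}
```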

The printed output is a JSON string – the httpbin service responds with JSON messages. So in the next example, we will see how we can send and read JSON messages.

JSON Requests and Responses

Now let’s send a JSON message. How do we do that? If you’re coming from Python / Node / Ruby, you might be used to passing a dictionary-like structure to your favorite requests library and just mentioning that it should be sent as JSON. The library does the conversion for you and sends the request with the required headers. In Go, however, things are more explicit – which is in fact a good thing: you will know exactly what you’re doing and how you’re doing it. If the JSON related functionality is new to you, please do check our blog post – Golang: Working with JSON.

In Go, we would first convert our data structure to a byte slice containing the JSON representation of the data. Then we pass it to the request with the proper content type. Let’s see a code example:

func MakeRequest() {

	message := map[string]interface{}{
		"hello": "world",
		"life":  42,
		"embedded": map[string]string{
			"yes": "of course!",
		},
	}

	bytesRepresentation, err := json.Marshal(message)
	if err != nil {
		log.Fatalln(err)
	}

	resp, err := http.Post("https://httpbin.org/post", "application/json", bytes.NewBuffer(bytesRepresentation))
	if err != nil {
		log.Fatalln(err)
	}

	var result map[string]interface{}

	json.NewDecoder(resp.Body).Decode(&result)

	log.Println(result)
	log.Println(result["data"])
}

We first created message, which is a map containing a string value, an integer value and another embedded map. Then we json.Marshal it to get the []byte out of it, checking for any errors that might happen during marshalling. Next, we make a POST request using the http.Post function. We pass the url, our content type (JSON) and a new bytes.Buffer object created from the bytes representation. Why do we need to create a buffer here? The http.Post function expects an implementation of io.Reader – which is a brilliant design: anything that implements io.Reader can be passed here, so we could even read this part from disk, the network or any custom readers we want to implement. In our case, we just create a bytes buffer, which implements the io.Reader interface. We send the request and check for errors.

Next we declare another result variable (also a map type) to store the results returned from the request. We could read the full body first (like in the previous example) and then do json.Unmarshal on it. However, since resp.Body is an io.Reader, we can just pass it to json.NewDecoder and then call Decode on it. Remember, we have to pass a pointer to our map, so we passed &result instead of just result. The Decode function returns an error too, which we didn’t check here – best practice would be to handle it as well. We logged result and result["data"]. The httpbin service sends back various information about the request as the response; you can see it in the result map. The data you sent will be under the data key of the result map.

Posting Form

In our last example, we submitted a JSON payload. What if we wanted to submit form values? We have the handy http.PostForm function for that. This function takes the url and url.Values from the net/url package. The url.Values type is internally a map[string][]string – that is, a map of string keys where each key can have multiple string values ([]string). In a form request, you can actually submit multiple values against one field name. That’s the reason it’s a slice of strings instead of just a key-to-value mapping.
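As a quick sketch of that multi-value behavior (the field names here are made up for illustration):

```go
package main

import (
	"fmt"
	"net/url"
)

// BuildForm builds form data where one field carries several values.
func BuildForm() url.Values {
	formData := url.Values{}
	formData.Add("name", "masnun")
	// Adding the same field twice keeps both values.
	formData.Add("tag", "go")
	formData.Add("tag", "http")
	return formData
}

func main() {
	// Encode renders the form as an application/x-www-form-urlencoded
	// body, with the keys sorted alphabetically.
	fmt.Println(BuildForm().Encode()) // name=masnun&tag=go&tag=http
}
```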

Here’s an example code snippet:

func MakeRequest() {

	formData := url.Values{
		"name": {"masnun"},
	}

	resp, err := http.PostForm("https://httpbin.org/post", formData)
	if err != nil {
		log.Fatalln(err)
	}

	var result map[string]interface{}

	json.NewDecoder(resp.Body).Decode(&result)

	log.Println(result["form"])
}

We would be reading the form key from the result map to retrieve our form values. We have seen how we can easily send form values using the net/http package. Next we would like to send a file along with typical form fields. For that we would also need to learn how to customize http requests on our own.

Custom Clients / Requests

The http.Get, http.Post and http.PostForm calls we have seen so far use a default client that is already created for us. Now we are going to see how we can initialize our own Client instances and use them to make our own Requests. Let’s first see how we can create our own clients and requests to do the same requests we have made before. A quick example follows:

func MakeRequest() {

	client := http.Client{}
	request, err := http.NewRequest("GET", "https://httpbin.org/get", nil)
	if err != nil {
		log.Fatalln(err)
	}

	resp, err := client.Do(request)
	if err != nil {
		log.Fatalln(err)
	}

	var result map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&result)
	log.Println(result)
}

As you can see, we just take a new instance of http.Client and then create a new request by calling the http.NewRequest function. It takes the http method, the url and the request body. In our case, it’s a plain GET request, so we pass nil for the body. We then call the Do method on the client and parse the response body. So that’s it – create a client, create a request and then let the client Do the request. Interestingly, the client also has convenient methods like Get, Post and PostForm, so we can use them directly. That’s what http.Get, http.Post, http.PostForm and the other top level functions actually do – they call these methods on the DefaultClient, which is created beforehand. In effect, we could just do:

func MakeRequest() {

	client := http.Client{}
	resp, err := client.Get("https://httpbin.org/get")
	if err != nil {
		log.Fatalln(err)
	}

	var result map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&result)
	log.Println(result)
}

And it would work similarly. Now you might be wondering – why not just use the DefaultClient, why create our own? What is the benefit?

Customizing the Client

If we look at the definition of the http.Client structure, it has these fields:

type Client struct {
	Transport RoundTripper
	CheckRedirect func(req *Request, via []*Request) error
	Jar CookieJar
	Timeout time.Duration
}

If we want, we can set our own transport implementation, control how redirection is handled, pass a cookie jar to save cookies and send them with subsequent requests, or simply set a timeout. The timeout part is often very significant when making http requests. The DefaultClient does not set a timeout by default, so a slow or malicious service can block your requests (and your goroutines) indefinitely, causing havoc in your application. Customizing the client gives us more control over how requests are sent.

File Upload

For uploading files with an http request, we need to use the mime/multipart package together with the net/http package. We will first see the code example and then walk through it to understand what we’re doing. The code might seem long (it includes a lot of error handling) and complex, but please bear with me – once you go through it and understand what’s happening, it will seem much simpler 🙂

func MakeRequest() {

	// Open the file
	file, err := os.Open("name.txt")
	if err != nil {
		log.Fatalln(err)
	}
	// Close the file later
	defer file.Close()

	// Buffer to store our request body as bytes
	var requestBody bytes.Buffer

	// Create a multipart writer
	multiPartWriter := multipart.NewWriter(&requestBody)

	// Initialize the file field
	fileWriter, err := multiPartWriter.CreateFormFile("file_field", "name.txt")
	if err != nil {
		log.Fatalln(err)
	}

	// Copy the actual file content to the file field's writer
	_, err = io.Copy(fileWriter, file)
	if err != nil {
		log.Fatalln(err)
	}

	// Populate other fields
	fieldWriter, err := multiPartWriter.CreateFormField("normal_field")
	if err != nil {
		log.Fatalln(err)
	}

	_, err = fieldWriter.Write([]byte("Value"))
	if err != nil {
		log.Fatalln(err)
	}

	// We completed adding the file and the fields, let's close the multipart writer
	// So it writes the ending boundary
	multiPartWriter.Close()

	// By now our original request body should have been populated, so let's just use it with our custom request
	req, err := http.NewRequest("POST", "https://httpbin.org/post", &requestBody)
	if err != nil {
		log.Fatalln(err)
	}
	// We need to set the content type from the writer, it includes necessary boundary as well
	req.Header.Set("Content-Type", multiPartWriter.FormDataContentType())

	// Do the request
	client := &http.Client{}
	response, err := client.Do(req)
	if err != nil {
		log.Fatalln(err)
	}

	var result map[string]interface{}

	json.NewDecoder(response.Body).Decode(&result)

	log.Println(result)
}

So what are we doing here?

  • First we are opening the file we want to upload. In our case, I have created a file named “name.txt” that just contains my name.
  • We create a bytes.Buffer to hold the request body we will be passing with our http.Request later on.
  • We create a multipart.Writer object and pass a pointer to our bytes.Buffer object so the multipart writer can write necessary bytes to it.
  • The multipart writer has convenient methods to create a form file or a form field. It gives us back a writer to which we can write our file content or the field values. We create a file field and copy our file contents to it. Then we create a normal field and write “Value” to it.
  • Once we have written our file and normal form field, we call the Close method on the multipart writer object. Closing it writes the final, ending boundary to the underlying bytes.Buffer object we passed to it. This is necessary, otherwise the request body may remain incomplete.
  • We create a new post request like we saw before. We passed the bytes.Buffer we created as the request body. The body now contains the multipart form data written with the help of the mime/multipart package.
  • We send the request as before. But we set the content type by calling multiPartWriter.FormDataContentType() – which ensures the correct content type and boundary are set.
  • We decode the response from httpbin and check the output.

If everything goes well, we will see the form field and the file name in the response we received from httpbin. The concept here is simple. We are sending a http request with a custom body. We could construct the request body ourselves but we just took the help of the mime/multipart package to construct it in a relatively easier fashion.

Always Close The Response Body

Here’s a lesson I learned the hard way. When we make an http request, we get a response and an error back. We may feel lazy and decide not to check for errors or close the response body (just like in the examples above). And from that laziness comes disaster. If we do not close the response body, the connection may remain open and cause a resource leak. But when the error is not nil, the response can be nil. So we can’t just do a defer resp.Body.Close() unconditionally – we have to check the error first and then close the response body.

client := http.DefaultClient
resp, err := client.Do(req)
if err != nil {
    return nil, err
}
defer resp.Body.Close()

Always Use a Timeout

Try to use your own http client and set a timeout. Not setting a timeout can block the connection and the goroutine and thus cause havoc. So do something like this:

client := http.Client{
    Timeout: 5 * time.Second,
}
client.Get(url)

 

Proxy in JavaScript

As we can already guess from the name, a Proxy object works as a “proxy” to another object and allows us to customize the behavior of that object in certain ways. Let’s say you have an object named awesomeAPI which has some properties and methods, and you want to “trap” any calls to the object. Maybe you want to debug something and log every time a property is read or set on the object, or when a method is called? Since the object has “API” in its name, let’s assume it makes an HTTP call to an external API. We may want to cache the response instead of hitting the resource every time a method is called – we can use a proxy to do that. In fact, there are many useful use cases for Proxy in JavaScript, as we will see.

A Basic Proxy

To create a new proxy, we need two things – a target object and a handler. The target object is the object we want to proxy. The handler is an object which defines certain methods to control what happens when an operation is requested on the target object through the proxy. The proxy traps the requests: instead of performing the requested operation directly on the target object, the proxy first checks if there’s a handler method defined for that operation in our handler object. If such a method is available, it’s called. Otherwise the operation is forwarded to the target object directly.

The description alone might not be clear enough. Let’s go ahead and see a basic example.

const obj = {
  hello() {
    console.log("Hello");
  }
}

const handler = {
  get(target, propKey, receiver) {
    console.log("Trapping GET for " + propKey);
    return target[propKey]
  }
}

const proxiedObject = new Proxy(obj, handler);
proxiedObject.hello();

If you run the code example, you will notice a message on your console – “Trapping GET for hello” – followed by the actual “Hello” printed by the target object. What’s happening here? We are creating a new Proxy object for the obj object. We have a handler which sets a trap for get. Now any time we access a property on proxiedObject, the get method on the handler is called with the target object, the name of the property and the receiver. We will focus on the property name and the target arguments for now. In our handler code, we just logged the name of the property to the console and then returned the actual value from the target object. We could of course return any value we wish – maybe a transformed value? A cached value? Anything we want.

You may be wondering Рwe made a function call, hello() but why would that get trapped by get which is for property access? Methods in JavaScript are actually properties. A method call happens in two stages Рget the method (read the property) and then an apply call. So when we called proxiedObject.hello(), it looked up the hello property first. Then called it.

Traps in Our Proxies

The methods we define on the handler object correspond to certain operations. For example, get is called during property lookup, has is called when the in operator is used, and set is called when setting property values. These methods are the “traps” for the corresponding operations. Traps are optional – you can define just the ones you need. If a trap is not set for a particular operation, it’s forwarded to the target object directly.

Here’s an example:

const numberStorage = {
  number: 0
}

const handler = {
  set(target, propKey, value, receiver) {
    target[propKey] = value * value;
    return true;
  }
}

const squaredNumberStorage = new Proxy(numberStorage, handler);

squaredNumberStorage.number = 2;
console.log(squaredNumberStorage.number);

In this example, we have trapped the property set operation and instead of storing the number as it is, we are squaring it and saving the squared value. However, we have defined no trap for the get operation, so when we access the number, the operation is forwarded to the target object and we get the actual value without any changes.
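The has trap mentioned above works the same way for the in operator. Here is a small sketch (the object and key names are made up) that hides certain keys from in checks while leaving reads untouched:

```javascript
const secrets = {
  apiKey: "abc123",
  name: "demo"
};

const handler = {
  // Trap the `in` operator: pretend keys starting with "api" don't exist.
  has(target, propKey) {
    if (propKey.startsWith("api")) {
      return false;
    }
    return propKey in target;
  }
};

const hidden = new Proxy(secrets, handler);

console.log("name" in hidden);   // true
console.log("apiKey" in hidden); // false - hidden by the trap
console.log(hidden.apiKey);      // "abc123" - no get trap, so reads pass through
```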

Now that you know how to trap operations on an object using a Proxy, you can check out the full list of available traps in the MDN documentation.

Going Ahead

There is an excellent chapter on metaprogramming with proxies in the Exploring ES6 book. The MDN docs on the Proxy object are also pretty nice, with adequate examples and complete API references.

 

 

Promises in JavaScript

We encounter promises in real life every now and then. You have promised to deliver that project within the next week. Your friend has promised to play Overwatch with you tonight. If you think about it, promises are everywhere around us. Promises in JavaScript play similar roles. A Promise object in JS is an object that promises to come up with a value – or, in the case of an error, a reason for the failure. But we don’t know when the promise will complete, so we attach callbacks to the promise object, which get called with the value or the error.

Why are Promises useful?

Of course, before we dive into the technicalities of promises, you will have this question: why do Promises matter in the first place? What is the use case? If you have written any JS code that fetches data over the internet, you are probably used to the fact that JavaScript is single threaded and relies on asynchronous operations in many places. To deal with the asynchronous parts of JS, we have long used callbacks and event listeners.

function downloadImage(imageURL, callback) {
    // Some network requests here which takes time,  let's use setTimeout as an example

    const error = null;
    const result = "image data";

    setTimeout(() => callback(error, result), 3000);


}

// How we pass callbacks
downloadImage("some url", (error, result) => {
    if (error) {
        console.log("Error")
    }
    else {
        console.log(result)
    }

});

This looks good, but if you have written a few levels of nested callbacks, you will soon find out that callback hell is real. You may also architect a few Pyramids of Doom.

Pyramid of Doom

Promises are one of the cleaner ways to solve this problem. Without getting into the technical parts yet, we can rewrite the download image example using Promises like this:

function downloadImageWithPromise(imageURL) {
    // Some network requests here which takes time,  let's use setTimeout as an example

    const error = null;
    const result = "image data";

    return new Promise((resolve, reject) => {
        setTimeout(() => {
            if (error) {
                reject(error);
            }
            else {
                resolve(result)
            }
        }, 3000)


    })


}

downloadImageWithPromise("some url").then(console.log).catch(console.error);

In most cases, we will be consumers of promises, so don’t worry if downloadImageWithPromise doesn’t immediately make sense. We will dive into promise creation soon. For now, take a look at how easy it is to consume a promise. No more callbacks or headaches. The code is clean, easy to reason about and should be easy to maintain in the long run.

With the latest JS changes, some of the important APIs are also based on promises. So it’s essential that we understand the basics of Promises beforehand.

Making a Promise

If you were a little confused about the downloadImageWithPromise function, worry no more – we will break it down now, and hopefully it will no longer remain confusing. The basic idea of making promises in JavaScript is that when we don’t immediately have a value to return from our function (for example, in an async operation), we should return a promise instead. The user / consumer can then rely on that promise and retrieve the value from it in the future. In our code, when the async operation completes, we should “settle” the promise object we returned. Settling means either resolving / fulfilling the promise with a value or rejecting it with an error.

Creating a new promise object is simple. We use the new keyword with the Promise constructor and pass it an “executor” function. The executor function takes two arguments – a resolve callback and a reject callback. We run our async operations within this executor function, and when the operation completes, we call either the resolve or the reject callback with the appropriate value.

function iPromiseValue() {
    return new Promise(executor);
}

function executor(resolve, reject) {
    setTimeout(() => resolve("Here's your value!"), 3000)
}

To make things clearer, we separated out our executor function. This is basically how a promise works. The Promise constructor takes an executor function. The executor function takes two callbacks – resolve and reject. As soon as one of these callbacks is called, the promise is settled and the value (or the error) is made available to the consumer.

Consuming a Promise

Promise objects have two convenient methods – then and catch – to which we can pass callbacks. These callbacks are called later, in order, with the value (or the error) passed to them. Let’s take a quick example:

function getValueAfterDelay(delay, value) {
    return new Promise((resolve, reject) => {
        setTimeout(() => resolve(value), delay);
    })
}

getValueAfterDelay(3000, "the value")
    .then((value) => {
        console.log("Got value: " + value)
    });

And here’s an example with rejection:

function rejectAfterDelay(delay) {
    return new Promise((resolve, reject) => {
        setTimeout(() => reject("No!"), delay)
    })
}

rejectAfterDelay(3000)
    .then((value) => console.log("Did we get a value? :o"))
    .catch(console.error);

Chaining Callbacks

The then method returns a promise, so we can keep chaining multiple promises one after another.

function getMePromise(value) {
    return Promise.resolve(value);
}

getMePromise(2)
    .then((value) => 2 * value)
    .then((value) => value + 1)
    .then(console.log);

First things first – the Promise.resolve and Promise.reject methods immediately return a resolved or rejected promise with the value we pass. This is convenient when we need to return a promise but don’t need any delay – no need for an executor function and the separate callbacks. We can just use Promise.resolve or Promise.reject and be done with it.

We can see how we chained multiple then methods and passed multiple callbacks to gradually transform the value. If we return a promise from one of these callbacks, it will be settled before its value is passed on to the next callback.
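Here is a small sketch of that behavior, reusing the getValueAfterDelay helper from earlier – the callback returns a promise, and the next then receives its settled value rather than the promise object:

```javascript
function getValueAfterDelay(delay, value) {
    return new Promise((resolve) => {
        setTimeout(() => resolve(value), delay);
    });
}

Promise.resolve(2)
    .then((value) => getValueAfterDelay(100, value * 10)) // returns a promise
    .then((value) => {
        // The returned promise was settled first, so we get 20 here,
        // not a Promise object.
        console.log(value); // 20
    });
```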

Please note: if you look at the Promise related docs on MDN or elsewhere, you will find that the then method can take two callbacks – one for success and one for failure. The catch method is simply then(undefined, errorHandler) in disguise. But there’s a problem with passing two callbacks to the then method. Take a look at this example:

function getMePromise(value) {
    return Promise.resolve(value);
}

function successCallback(value) {
    if (value < 10) {
        throw new Error("Less than 10");
    }
}

function failureCallback(err) {
    console.log("Error:" + err);
}

getMePromise(2).then(successCallback, failureCallback)

Running the code will get us an error:  UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Less than 10. 

So what’s happening here? The error callback in a then method is called only if there’s an error in the previous step (in this case, the getMePromise function). It cannot handle an error raised at the same level (from within the successCallback function). This is why a then().catch() chain works better than passing two callbacks to the then method itself.
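Rewriting the failing example as a then().catch() chain shows the difference – the catch sits one step later in the chain, so it also sees the error thrown inside the success callback (a sketch based on the example above):

```javascript
function getMePromise(value) {
    return Promise.resolve(value);
}

getMePromise(2)
    .then((value) => {
        if (value < 10) {
            throw new Error("Less than 10");
        }
        return value;
    })
    .catch((err) => {
        // Unlike the two-callback form, this handler also catches
        // errors thrown by the success callback above.
        console.log("Handled: " + err.message);
    });
```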

Promise in Real Life

We have some basic ideas about Promises now, so let’s see a real life example. We will use the axios npm package to fetch a web page’s content. This package has a nice promise based API. Let’s install it first:

npm i -S axios

Now we can use the package.

const axios = require("axios");

axios.get("http://google.com")
    .then((resp) => console.log(resp.data.length))
    .catch((error) => console.error(error));

The axios.get function makes an HTTP GET request to the URL provided and returns a promise. We then attach our success and error callbacks.

Multiple Promises

Sometimes we need to deal with multiple promises. We can do that using Promise.all. It takes a list (iterable) of promises and returns a single promise that we can track. This single promise resolves when all the promises in the list have resolved, and rejects as soon as one of them fails. On success, it returns the results of the resolved promises in the same order. Let’s see an example:

const axios = require("axios");

const googlePromise = axios.get("http://google.com");
const facebookPromise = axios.get("http://facebook.com");

const allPromises = Promise.all([googlePromise, facebookPromise]);

allPromises
    .then(([googleRes, fbRes]) => console.log(googleRes.data.length, fbRes.data.length))
    .catch((error) => console.error(error));

Here we create two promises separately, put them in a list and pass it to Promise.all(). Then we attach our callbacks to the new promise we got. Note how we got two results in the same order inside the callback to the then method.

There is another convenient method – Promise.race() – which settles as soon as one of the passed promises resolves or rejects.
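A quick sketch of Promise.race() with two timed promises – whichever settles first wins:

```javascript
function getValueAfterDelay(delay, value) {
    return new Promise((resolve) => {
        setTimeout(() => resolve(value), delay);
    });
}

Promise.race([
    getValueAfterDelay(100, "fast"),
    getValueAfterDelay(500, "slow")
]).then((winner) => console.log(winner)); // "fast"
```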

Migrating from Callback

In most cases, you can use various utilities to convert callback based APIs into promise based ones. Where that’s not possible, just wrap the API in your own promise. For example, the popular request package on npm has a slightly different callback syntax, so we can wrap it in our own promise like this:

const request = require("request");


function makeRequest(url) {

    return new Promise((resolve, reject) => {
        request(url, function (error, response, body) {
            if (error) {
                reject(error);
            } else {
                resolve(body);
            }
        });
    });


}

makeRequest('http://www.google.com').then(console.log);

In this case, we make the request call from inside an executor function and return the promise. We pass a callback to the request function just like it expects. Inside the callback, we use our resolve / reject callbacks to settle the promise.


REST API with KoaJS and MongoDB (Part – 3)

In Part – 1 of this series, we saw how we can get started with KoaJS, and in Part – 2 we built CRUD endpoints with MongoDB. In this part, we’re going to work with authentication. We will be using JSON Web Tokens, aka JWT, for the auth part. We have written detailed pieces on JWT before. You can read Understanding JWT to check out the basics, and read our tutorials on JWT with Flask or JWT with Django to see how other frameworks use JWT.

JWT with KoaJS

To implement JSON Web Tokens with KoaJS, we will use two packages – koa-jwt and jsonwebtoken. The jsonwebtoken package provides useful helper functions to generate and verify JWTs, whereas koa-jwt provides an easy to use middleware for KoaJS.

Let’s go ahead and install these packages:

npm i -S koa-jwt jsonwebtoken

That should install the dependencies and save them in our package.json.

Securing Routes with JWT

We have the required packages installed, so we can now start securing our routes with JWT. We could just require the koa-jwt package directly and use it, but we want to customize some aspects. For that we will create our own module named jwt.js and put the custom stuff there.

const jwt = require("koa-jwt");
const SECRET = "S3cRET~!";
const jwtInstance = jwt({secret: SECRET});

module.exports = jwtInstance;

Now in our index.js file, we would add the middleware to the app.

app.use(require("./jwt"));

If we try to visit http://localhost:3000/, we will get a plain text error message saying “Authentication Error”. While the message is clear and concise, we want to output JSON, not a plain text error message. For that, we will write a custom middleware.

function JWTErrorHandler(ctx, next) {
    return next().catch((err) => {
        if (401 == err.status) {
            ctx.status = 401;
            ctx.body = {
                "error": "Not authorized"
            };
        } else {
            throw err;
        }
    });
};

The code for this middleware is pretty simple. It invokes the next middleware and catches any error. If the error status is 401, it sets a nice JSON error message as the output; otherwise it rethrows the error. If you’re familiar with how middlewares work in Express / Koa, this should make sense. If it doesn’t, don’t worry – you will get it over time.
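You can see the control flow without running a Koa server at all. Here is a sketch that feeds the middleware a mock ctx object and a next() that rejects with a 401, the way the JWT middleware would:

```javascript
// The error-handling middleware from above.
function JWTErrorHandler(ctx, next) {
    return next().catch((err) => {
        if (401 == err.status) {
            ctx.status = 401;
            ctx.body = {
                "error": "Not authorized"
            };
        } else {
            throw err;
        }
    });
}

// A mock context and a next() that fails like the JWT middleware would.
const ctx = {};
const failingNext = () => {
    const err = new Error("Authentication Error");
    err.status = 401;
    return Promise.reject(err);
};

JWTErrorHandler(ctx, failingNext).then(() => {
    console.log(ctx.status, ctx.body); // 401 { error: 'Not authorized' }
});
```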

Now we need to export this function from our jwt.js module. Let’s change the exports a little bit.

module.exports.jwt = () => jwtInstance;
module.exports.errorHandler = () => JWTErrorHandler;

Now we’re exporting two functions which, when called, will return the specific middlewares. We also need to change our imports in index.js –

const jwt = require("./jwt");
app.use(jwt.errorHandler()).use(jwt.jwt());

Please note the order of the middleware we used. The error handler must come before the JWT middleware itself, so it can call next() and check for the 401 error.

If we try to browse the API now, we should get a nice JSON like this:

{"error":"Not authorized"}

Secured Routes and Router

We used the middleware directly on the koa app, which means all our routes are now secured. Every route would check the Authorization header and try to verify its value as a JSON Web Token. That’s good, but there’s a slight problem: if we can’t access any route without a token, which route do we access to get the token in the first place? And what token do we use for that? We need at least one route that is not secured with JWT, which will accept login details and issue JWTs to the users. Besides, there could be other API endpoints we want to keep open to everyone – we don’t need authentication on those routes. How do we achieve that?

Luckily, Koa allows us to use multiple routers, and each router can have its own set of middlewares. We will keep our current router open and add to it the routes for obtaining the JWT. We will create a separate router which will use the JWT middleware and thus be secured. We will call this one the “secured router” and its routes the “secured routes”.

// Create a new securedRouter
const router = new Router();
const securedRouter = new Router();

// Add the securedRouter to our app as well
app.use(router.routes()).use(router.allowedMethods());
app.use(securedRouter.routes()).use(securedRouter.allowedMethods());

We modified our existing codes. We now have two routers and we added them both to the app. Let’s now move our old CRUD routes to the secured router and apply the JWT middleware to just the secured router.

// Apply JWT middleware to secured router only
securedRouter.use(jwt.errorHandler()).use(jwt.jwt());

// List all people
securedRouter.get("/people", async (ctx) => {
    ctx.body = await ctx.app.people.find().toArray();
});

// Create new person
securedRouter.post("/people", async (ctx) => {
    ctx.body = await ctx.app.people.insert(ctx.request.body);
});

// Get one
securedRouter.get("/people/:id", async (ctx) => {
    ctx.body = await ctx.app.people.findOne({"_id": ObjectID(ctx.params.id)});
});

// Update one
securedRouter.put("/people/:id", async (ctx) => {
    let documentQuery = {"_id": ObjectID(ctx.params.id)}; // Used to find the document
    let valuesToUpdate = ctx.request.body;
    ctx.body = await ctx.app.people.updateOne(documentQuery, valuesToUpdate);
});

// Delete one
securedRouter.delete("/people/:id", async (ctx) => {
    let documentQuery = {"_id": ObjectID(ctx.params.id)}; // Used to find the document
    ctx.body = await ctx.app.people.deleteOne(documentQuery);
});


We removed the previously set up JWT middleware from the app and used it on securedRouter instead. Remember, the JWT middleware must be set up before the routes themselves – the ordering of middleware matters.

If we visit “http://localhost:3000/”, we will no longer get the auth error – rather, we will see “Not Found” (we didn’t define any route for the root url). However, if we visit “http://localhost:3000/people”, we will get the authentication error again. Exactly what we wanted.

Issuing JWTs

We now need to create the route to issue JWTs to our users. We will be accepting their login (username and password) and if they’re valid, we will issue them JWTs which they can use to further access our APIs.

The koa-jwt package no longer supports issuing tokens. We have to use the jsonwebtoken package for that instead. Personally, I like to create a helper function in my custom jwt.js module like this:

// Import jsonwebtoken
const jsonwebtoken = require("jsonwebtoken");

// helper function
// SECRET is the same secret string we configured the JWT middleware with
module.exports.issue = (payload) => {
    return jsonwebtoken.sign(payload, SECRET);
};

Then we can write a new route on our public router like this:

router.post("/auth", async (ctx) => {
    let username = ctx.request.body.username;
    let password = ctx.request.body.password;

    if (username === "user" && password === "pwd") {
        ctx.body = {
            token: jwt.issue({
                user: "user",
                role: "admin"
            })
        }
    } else {
        ctx.status = 401;
        ctx.body = {error: "Invalid login"}
    }
});

We have hardcoded the username and password here. In a production environment, we would store the details in a database, and we would hash the password. No one in their right mind should store passwords in plain text.

In this route handler, we are accepting a JSON payload and checking the username and password. If the details match, we issue the token. To test that it’s working, we can make a curl request and check out the response:

curl -X POST \
  http://localhost:3000/auth \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{"username": "user", "password": "pwd"}'

If it worked, we will get a JSON back with a `token` value containing the JWT.

Using the JWT

We made a request and got the following response:

{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoidXNlciIsInJvbGUiOiJhZG1pbiIsImlhdCI6MTUwMjI3MTM0Nn0.GWtjeECIHFQr7vI_MphfUle06Pav_zx4sLmSrd3HE8g"}

That is our token. Now we can start using it in the Authorization header. The format should be: Authorization: Bearer <Token>. We can make a request to our secured “/people” resource using curl with this header:

curl -X GET \
  http://localhost:3000/people \
  -H 'authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoidXNlciIsInJvbGUiOiJhZG1pbiIsImlhdCI6MTUwMjI2OTg4MX0.Ugbh4UwN9tRwhIQEQUHoo-affUf5CAsCztzAXncBYt4' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json'

We will now get back the list of people we have stored in our MongoDB.
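One thing worth noting: a JWT is signed, not encrypted, so anyone holding the token can read its payload. We can see that with nothing but Node’s Buffer; this inspects the token we received earlier without verifying its signature:

```javascript
// The three dot-separated segments are header.payload.signature
const token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoidXNlciIsInJvbGUiOiJhZG1pbiIsImlhdCI6MTUwMjI3MTM0Nn0.GWtjeECIHFQr7vI_MphfUle06Pav_zx4sLmSrd3HE8g";

// Decode the middle (payload) segment; Node's base64 decoder tolerates
// the base64url alphabet and missing padding.
const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64").toString("utf8")
);
console.log(payload); // { user: 'user', role: 'admin', iat: 1502271346 }
```

This is why a JWT should never carry secrets like passwords, and why it should always travel over HTTPS.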

REST API with KoaJS and MongoDB (Part – 2)

In our last post about REST API with KoaJS and MongoDB, we got started with KoaJS and learned to create simple views. We also saw how we can access the query strings and incoming JSON payloads. In this tutorial, we are going to go ahead and implement the RESTful routes and CRUD operations with MongoDB.

In case you’re new to REST API development, you might also want to check out the REST API Concepts and our REST API Tutorials with Flask and Django REST Framework.

Installing MongoDB and NodeJS Driver

I am assuming you have installed MongoDB already on your system. You can install MongoDB using their official installer on Windows or Homebrew on OS X. On Linux systems, you can use the package manager that ships with your distro.

If you don’t have MongoDB installed locally, don’t worry, you can also use a free third party mongo hosting service like mLab.

Once we have MongoDB setup and available, we’re going to install the NodeJS driver for MongoDB next.

npm i -S mongodb

Connecting to MongoDB

We will create a separate file named mongo.js and put the following code in it:

const MongoClient = require("mongodb").MongoClient;
const MONGO_URL = "mongodb://localhost:27017/polyglot_ninja";

module.exports = function (app) {
    // Note: with the 2.x mongodb driver, connect() resolves to the Db itself;
    // from 3.x it resolves to a client and you would call client.db() instead.
    MongoClient.connect(MONGO_URL)
        .then((connection) => {
            app.people = connection.collection("people");
            console.log("Database connection established");
        })
        .catch((err) => console.error(err));
};

Our module exports just one function which takes the app object as its only parameter. Once called, the function connects to our MongoDB instance and, once connected, sets the people property on our app instance. The people property is actually a reference to the people collection in our database. So whenever we have access to the app instance, we can simply use the app.people property to access the collection from within our app. If the connection fails, the error message is printed on our terminal.

We have used promises instead of callbacks, which makes the code a bit cleaner. Now in our index.js file, we will call the exported function like this:

require("./mongo")(app);

That should import the function and invoke it. Assuming everything worked fine, you should see the message saying database connection established when you run the app next time.

Please Note: We didn’t create the MongoDB database or the collection ourselves. MongoDB is smart enough to figure out that we used the name of a non-existent database / collection and create it for us. If anything with that name already exists, MongoDB just uses it.

Inserting Records Manually

Before we can start writing our actual code, let’s connect to our mongo database and insert some entries manually so we have some data to play with. You can use the command line tool or a MongoDB GUI to do so. I will use the command line tool.

$ mongo
MongoDB shell version: 3.2.7
connecting to: test
> use polyglot_ninja
switched to db polyglot_ninja
> db
polyglot_ninja
> db.people.insert({"name": "masnun", "email": "[email protected]"})
WriteResult({ "nInserted" : 1 })
> db.people.find()
{ "_id" : ObjectId("597ef404b5256ba58d26ac53"), "name" : "masnun", "email" : "[email protected]" }
>

I inserted a document with my name and email address in the people collection of the polyglot_ninja db.

Implementing The Routes

Now we will go ahead and implement the routes needed for our REST API.

Please note: To keep the actual code short, we will skip validation, error handling and sending proper HTTP status codes. But these are very important in real life and must be dealt with proper care. I repeat: these things are skipped intentionally in this tutorial but should never be skipped in a production app.

GET /people (List All)

This is going to be the root endpoint of our people API. When someone makes a GET request to /people, we should send them a list of the documents we have. Let’s do that.

// List all people
router.get("/people", async (ctx) => {
    ctx.body = await ctx.app.people.find().toArray();
});

Now if we run our app and visit the url, we shall see the document we created manually listed there.

POST /people (Create New)

Since we already have the body parser middleware installed, we can now easily accept JSON requests. We will assume that the user sends us valid data (in real life you must validate and sanitize it) and we will directly insert the incoming JSON into our mongo collection.

// Create new person
router.post("/people", async (ctx) => {
    ctx.body = await ctx.app.people.insert(ctx.request.body);
});

You can POST JSON to the /people endpoint to try it.

curl -X POST \
  http://localhost:3000/people \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{"name": "genji", "email": "[email protected]"}'

Now go back to the all-people list and see if your new entries are appearing there. If everything worked, they should be there 🙂
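Since we skipped validation above, here is a rough idea of what a minimal hand-rolled check could look like before the insert. The required fields (“name”, “email”) are assumptions for this example; a dedicated validation library is the more robust choice in real apps:

```javascript
// Returns a list of validation errors; an empty list means the payload is OK.
function validatePerson(body) {
    const errors = [];
    if (!body || typeof body.name !== "string" || body.name.trim() === "") {
        errors.push("name is required");
    }
    if (!body || typeof body.email !== "string" || !body.email.includes("@")) {
        errors.push("a valid email is required");
    }
    return errors;
}

console.log(validatePerson({name: "genji", email: "genji@example.com"})); // []
console.log(validatePerson({name: ""})); // both checks fail
```

Inside the POST handler we could then respond with a 400 status and the error list when validation fails, instead of inserting the body blindly.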

GET /people/:id (Get One)

To query by mongo IDs, we can’t just use the string representation of the ID; we need to convert it to an ObjectID object first. So we will import ObjectID in our index.js file:

const ObjectID = require("mongodb").ObjectID;

The rest of the code will be simple and straightforward:

// Get one
router.get("/people/:id", async (ctx) => {
    ctx.body = await ctx.app.people.findOne({"_id": ObjectID(ctx.params.id)});
});
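As a side note, an ObjectID is more than an opaque string: its first four bytes are a big-endian Unix timestamp, so a document’s creation time can be recovered from its _id alone. Using the id we inserted earlier from the mongo shell:

```javascript
// The first 8 hex characters of an ObjectID encode a Unix timestamp.
const id = "597ef404b5256ba58d26ac53"; // from the mongo shell session above
const seconds = parseInt(id.slice(0, 8), 16);
const createdAt = new Date(seconds * 1000);
console.log(createdAt.toISOString()); // 2017-07-31T09:10:28.000Z
```

The remaining bytes hold machine/process identifiers and a counter, which together make the id unique without any central coordination.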

PUT /people/:id (Update One)

We usually use PUT when we want to replace the entire document. For single-field updates, we prefer PATCH. In the following code example, we have used PUT, but the code is also valid for a PATCH request since mongo’s updateOne can update as many fields as you wish, from just one field to the entire document. So it would work for both PUT and PATCH methods.

Here’s the code:

// Update one
router.put("/people/:id", async (ctx) => {
    let documentQuery = {"_id": ObjectID(ctx.params.id)}; // Used to find the document
    let valuesToUpdate = ctx.request.body;
    // updateOne expects atomic operators, so we wrap the body in $set
    ctx.body = await ctx.app.people.updateOne(documentQuery, {"$set": valuesToUpdate});
});

The updateOne method requires a query as a matching criteria to find the target document. If it finds the document, it applies the update operators passed in the second argument; here we wrap the request body in $set so that only the supplied fields are updated.

Delete /people/:id (Delete One)

Deleting one is very simple. The deleteOne method works just like the updateOne method we saw earlier. It takes the query to match a document and deletes it.

// Delete one
router.delete("/people/:id", async (ctx) => {
    let documentQuery = {"_id": ObjectID(ctx.params.id)}; // Used to find the document
    ctx.body = await ctx.app.people.deleteOne(documentQuery);
});

What’s Next?

In this tutorial, we saw how we can implement RESTful routes and MongoDB CRUD operations. We have finally created a very basic REST API. But we didn’t validate or sanitize incoming data, and we didn’t use proper HTTP status codes. Please go through different resources on the internet or our earlier REST API tutorials to learn more about those.

In our future tutorials, we shall be covering authentication, serving static files and file uploads.