Migrating a Hapi.js Node App to Serverless

I had a small Node app using Hapi.js to serve up a presentation on speech recognition. It was a pretty simple app, serving some static content plus a couple of routes for querying Wolfram Alpha and Google Translate. To make sure the presentation was always available, I set it up on the cheapest tier on Heroku at $7 a month.

Since I don't give this presentation much anymore, I decided to shut down my app on Heroku, but I still wanted it to be available. That seemed like a perfect use case for serverless.

First let's take a look at the code for the server.

'use strict';

const Hapi = require('hapi');
const Joi = require('joi');
const fetch = require('node-fetch');
const credentials = require('./credentials');

const server = Hapi.server({
  host: '0.0.0.0',
  port: 8000
});

// Start the server
async function start() {
  try {
    await server.register(require('inert'));

    server.route({
      method: 'POST',
      path: '/wolfram',
      config: {
        handler: function(req) {
          console.log('asking wolfram alpha');
          let url = `http://api.wolframalpha.com/v2/query?input=${
            req.payload.searchTerm
          }&appid=${credentials.wolfram}`;
          return fetch(url)
            .then(res => res.text())
            .then(text => {
              return text;
            });
        },
        validate: {
          payload: {
            searchTerm: Joi.string().required()
          }
        }
      }
    });

    server.route({
      method: 'POST',
      path: '/translate',
      config: {
        handler: function(req) {
          console.log('POST translate');
          let url = `https://www.googleapis.com/language/translate/v2?key=${
            credentials.google
          }&source=en&target=fr&q=${req.payload.searchTerm}`;
          return fetch(url)
            .then(res => res.json())
            .then(json => {
              return json;
            });
        },
        validate: {
          payload: {
            searchTerm: Joi.string().required()
          }
        }
      }
    });

    server.route({
      method: 'GET',
      path: '/{param*}',
      handler: {
        directory: {
          path: 'public'
        }
      }
    });

    await server.start();
  } catch (err) {
    console.log(err);
    process.exit(1);
  }

  console.log('Server running at:', server.info.uri);
}

start();

Like I said, not a super advanced Node app: only three routes. The /wolfram route asks Wolfram Alpha a question, /translate invokes Google Translate, and the /{param*} route serves all the static HTML/JS/CSS of the presentation, which was built with reveal.js.
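For a bit of context, the presentation's client-side code called those routes with plain fetch requests along these lines (a reconstruction for illustration, not the exact code from the deck):

fetch('/translate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ searchTerm: 'hello world' })
})
  .then(res => res.json())
  .then(json => {
    // the JSON proxied back from Google Translate via the Hapi route
    console.log(json);
  });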

The first thing I needed to do before migrating the app to serverless was to figure out which platform I wanted to use. Right now the big four are AWS Lambda, Google Cloud Functions, Microsoft Azure, and IBM Bluemix. Through a very scientific process (I asked my daughter which name she liked best), I decided to use the Microsoft Azure platform. Their free tier for Function Apps allows 1,000,000 requests per month, so it should reduce my monthly hosting costs to $0.

I'm going to skip over some of my learning steps and instead direct you to some links that will help you get started with Azure:

  1. Create your free Azure account
  2. Create your first Functions App

Creating your first app is pretty simple, and once you've created one function you'll see how easy it is to spin up more.
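Each HTTP-triggered JavaScript function lives in its own folder with an index.js (the handler) and a function.json that wires up the trigger and the response binding. The portal scaffolds this file for you; the version below is a rough sketch, and the authLevel and methods values are assumptions on my part rather than what you'll necessarily get out of the box:

{
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "anonymous",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

The name values matter: req shows up as the second argument to the exported function, and res is the output binding we'll set via context.res below.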

Now let's look at the code that replaces the /translate route from our Hapi.js app.

const fetch = require('node-fetch');
const credentials = require('./credentials');

module.exports = async function(context, req) {
  if (req.query.searchTerm || (req.body && req.body.searchTerm)) {
    const searchTerm = req.query.searchTerm || req.body.searchTerm;
    let url = `https://www.googleapis.com/language/translate/v2?key=${
      credentials.google
    }&source=en&target=fr&q=${searchTerm}`;
    // await the fetch so the function doesn't complete before
    // the translation comes back from Google
    const json = await fetch(url).then(res => res.json());
    context.res = {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
      body: json
    };
  } else {
    context.res = {
      status: 400,
      body: 'Please pass a searchTerm on the query string or in the request body'
    };
  }
  // returning from an async function resolves the invocation,
  // so there's no need to call context.done()
};

Basically, when this endpoint is called, either via GET or POST, as long as it has a searchTerm parameter we'll query Google Translate and return the French translation of whatever searchTerm contains. You may notice that we await the completion of the fetch, since we don't want our function to finish before we get the translation back from Google. In order to do that, we need to make sure the function we export is async.

Note: I may have been one of the few people on earth who didn't hate callback hell but I really love the new async/await syntax.

Anyway, once we've gotten the JSON back from Google, we build a context response object (context.res) to return to the caller. Anyone familiar with HTTP responses will find the context response object very similar. We set status to 200 to let the caller know that everything is okay, add a Content-Type header so the caller knows to expect JSON, and finally put our JSON in the body of the response.

Now on to our /wolfram route:

const fetch = require('node-fetch');
const credentials = require('./credentials');

module.exports = async function(context, req) {
  if (req.query.searchTerm || (req.body && req.body.searchTerm)) {
    const searchTerm = req.query.searchTerm || req.body.searchTerm;
    let url = `http://api.wolframalpha.com/v2/query?input=${searchTerm}&appid=${
      credentials.wolfram
    }`;
    // await the fetch so the function doesn't complete before
    // Wolfram Alpha responds
    const text = await fetch(url).then(res => res.text());
    context.res = {
      status: 200,
      headers: { 'Content-Type': 'text/plain' },
      body: text
    };
  } else {
    context.res = {
      status: 400,
      body: 'Please pass a searchTerm on the query string or in the request body'
    };
  }
  // returning from an async function resolves the invocation,
  // so there's no need to call context.done()
};

Hmmm…you may notice that the two functions are very similar. In fact, they are almost identical. The two major differences are the URL of the remote service and the fact that Wolfram Alpha returns XML, so we need to tweak our fetch code a bit. If this were more than a presentation I was porting over to Azure, I'd look into using something like function chaining so that the first function sets up the URL to be called and passes it along to the second.
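A lighter-weight option, and purely a sketch rather than something I actually deployed, would be to pull the duplicated request handling into a shared module that both functions require. The file layout and parameter names here are hypothetical:

// shared/proxyFetch.js -- hypothetical helper, not part of the code above
const fetch = require('node-fetch');

module.exports = async function proxyFetch(context, req, buildUrl, parse, contentType) {
  const searchTerm = req.query.searchTerm || (req.body && req.body.searchTerm);
  if (!searchTerm) {
    context.res = {
      status: 400,
      body: 'Please pass a searchTerm on the query string or in the request body'
    };
    return;
  }
  // buildUrl knows which remote service to hit, parse knows how to read it
  const res = await fetch(buildUrl(searchTerm));
  context.res = {
    status: 200,
    headers: { 'Content-Type': contentType },
    body: await parse(res)
  };
};

With that in place, each function collapses to a one-liner; for example, the translate function:

const proxyFetch = require('../shared/proxyFetch');
const credentials = require('./credentials');

module.exports = (context, req) =>
  proxyFetch(
    context,
    req,
    term => `https://www.googleapis.com/language/translate/v2?key=${credentials.google}&source=en&target=fr&q=${term}`,
    res => res.json(),
    'application/json'
  );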

Regardless, I didn't want to spend too much time on the backend, so let's move on to the frontend. As I mentioned earlier, all of the static assets (HTML/JS/CSS) were from a presentation I built using reveal.js. The only code modification I needed to make was to change the URLs being called to search Wolfram Alpha and get a translation from Google.
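In practice that means a relative fetch('/translate', …) call becomes an absolute call to the Functions App. The host name below is a placeholder, but by default an HTTP-triggered function is exposed at /api/<function-name>:

// the app name is a placeholder for wherever your Functions App lives
fetch('https://my-functions-app.azurewebsites.net/api/translate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ searchTerm: 'hello world' })
})
  .then(res => res.json())
  .then(json => console.log(json));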

Then, to host everything, I committed the presentation to a GitHub repo and enabled GitHub Pages. You can see the entire presentation here, or you can skip ahead to where you can ask Wolfram Alpha a question or translate English to French. Note: you'll have to accept the microphone permission in Google Chrome for the speech recognition to work in the browser.