Using Promises to make serverless functions more readable and maintainable.

This post originally appeared on Medium.

In a previous post, we discussed how we built a self-service database reset tool for our students at HackYourFuture using the serverless framework and a Slack slash command.

That was a “quick and dirty” function that did just enough to let our students get their homework done. We can do better, though. Let’s refactor our function to use Promises. This will let us clean up our code, making it much more readable and maintainable, and at the same time we’ll turn our function into a generic framework for handling Slack commands.

We’ll refactor each step out into functions that we’ll treat as Promises. It’s important to note that for the most part these are not asynchronous functions. We’ll explore the pros and cons of this approach as we go.

Once we’ve completed that refactor, we’re left with this nice, clean, easily readable function body:

module.exports.reset = (event, context, callback) => {
  const targetUserAgent = "slackbot";
  const targetSlackTeam = "SomeSlackTeam";

  validateUserAgent(event, targetUserAgent)
    .then(requestBody => validateSlackTeam(requestBody, targetSlackTeam))
    .then(validatedRequestBody => extractUser(validatedRequestBody))
    .then(extractedUser => resetDB(extractedUser))
    .then(response => callback(null, response))
    .catch(error => callback(error));
};

For it to work, we need to promisify our validation and extraction functions.

Promisified Validation

Let’s focus on the validateUserAgent(event, string) function, as it’s the most generic in inputs, outputs, and structure.

We pass in the event object (provided by AWS when the Lambda is invoked) and a string containing a desired user agent. Let’s convert both to lowercase and see if what we have starts with what we want.

const querystring = require("querystring"); // built-in Node module, required at the top of our handler file

function validateUserAgent(event, desiredUserAgent) {
  return new Promise(function(resolve, reject) {
    // API Gateway exposes the caller's user agent on the request context
    const foundUserAgent = event.requestContext.identity.userAgent.toLowerCase();

    if (foundUserAgent.startsWith(desiredUserAgent.toLowerCase())) {
      resolve(querystring.parse(event.body));
    } else {
      reject(
        buildError(
          401,
          "This endpoint cannot be accessed via the current User Agent."
        )
      );
    }
  });
}

If so, great! We use the built-in querystring library to parse the body provided by Slack in application/x-www-form-urlencoded format and convert it into a plain JavaScript object so we can work with it more easily. That object looks like the following:

{
  "token": "ABCD1234EFGH5678IJKL9012",
  "team_id": "XXXXXXXXX",
  "team_domain": "someteamdomain",
  "channel_id": "YYYYYYYYY",
  "channel_name": "directmessage",
  "user_id": "ZZZZZZZZZ",
  "user_name": "someuser",
  "command": "/dbreset",
  "text": "",
  "response_url": "https://hooks.slack.com/commands/ABCD01234/0123456789ab/1aA2bB3cC4dD5eE6fF7gG8hH",
  "trigger_id": "0123456789ab.0123456789a.0123456789abcdef0123456789abcdef"
}

The most important of these fields for us are the following:

  • team_domain: The domain that invoked the Slack command. You’ll probably want to check this against your own team’s domain. The proper way to do this is with the team_id, but that’s beyond the scope of this post.
  • user_name: The user who invoked the Slack command. The proper check similarly uses the user_id, but for demonstration purposes this is fine.
  • command: The command the user typed to invoke the API call. This can be useful if you have several commands invoking the same URL, e.g., https://api.yourcompany.com/handleSlackEvents.
  • text: The arguments provided to the command. Parse and proceed as you would with any CLI, or pass them off to services like Amazon’s Lex (conversational interfaces) or Polly (text-to-speech) to do even cooler things.
  • response_url: A Slack command waits three seconds after invocation for a response. After that it times out, and you’ll need to POST back to this URL (this time as an application/json payload) to deliver your response; we’ll sketch this below.

If you have time, I recommend checking out the rest of the fields and how you can use them. Be aware that the user_name field is deprecated and going away. There are also enterprise-only fields that you may encounter that don’t appear in the example above.
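
Speaking of that response_url, here’s what a delayed response might look like. This is a minimal sketch using only Node’s built-in https module; sendDelayedResponse is a hypothetical helper, not part of the original tool.

const https = require("https");
const url = require("url");

// Hypothetical helper: POST a JSON payload back to Slack's response_url
// after the three-second window has passed.
function sendDelayedResponse(responseUrl, text) {
  return new Promise(function(resolve, reject) {
    const payload = JSON.stringify({ text: text });
    const parsedUrl = url.parse(responseUrl);

    const request = https.request(
      {
        hostname: parsedUrl.hostname,
        path: parsedUrl.path,
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Content-Length": Buffer.byteLength(payload)
        }
      },
      response =>
        response.statusCode === 200
          ? resolve(response.statusCode)
          : reject(buildError(response.statusCode, "Slack rejected the delayed response."))
    );

    request.on("error", error => reject(buildError(500, error.message)));
    request.write(payload);
    request.end();
  });
}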

From here on out, it’s just a matter of writing the remaining functions as Promises and chaining them together using the below prototype. You’ll make life easier on yourself if you treat each as a pure function and don’t modify any incoming data.

function promisePrototype(event) {
  return new Promise(function(resolve, reject) {
    // "success" and "someNewObject" are placeholders for your own
    // check and the value you want to pass down the chain
    if (success) {
      resolve(someNewObject);
    } else {
      reject(buildError(500, "An internal error occurred."));
    }
  });
}
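
Following that prototype, here’s roughly what the validateSlackTeam and extractUser steps from our chain could look like. This is a sketch, not the original implementation: the status codes, messages, and field checks are illustrative.

function validateSlackTeam(requestBody, desiredTeamDomain) {
  return new Promise(function(resolve, reject) {
    if (requestBody.team_domain === desiredTeamDomain) {
      // Pass the body through untouched so the next step in the chain can use it
      resolve(requestBody);
    } else {
      reject(buildError(403, "This command cannot be invoked from this Slack team."));
    }
  });
}

function extractUser(validatedRequestBody) {
  return new Promise(function(resolve, reject) {
    if (validatedRequestBody.user_name) {
      resolve(validatedRequestBody.user_name);
    } else {
      reject(buildError(400, "No user could be extracted from the request."));
    }
  });
}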

An additional benefit of this method is that we can centralize our error handling in one place. No matter which function fails, we want to send essentially the same kind of response. We’ve written a helper function, buildError, that each Promise can call when rejecting to send specific HTTP response codes and messages. Then we can handle any error that arises with the single line .catch(error => callback(error)); at the end of our main function body.

function buildError(statusCode, message) {
  const error = {
    statusCode: statusCode,
    body: JSON.stringify({
      error: message
    })
  };

  return error;
}

Abusing Promises

I can feel some of you getting angry as you read through this. That’s not what promises are for! you scream. You’re right (kind of), but the point of this is to provide a single method of composing Lambdas to help us work together more quickly and to make our code more maintainable.

After all, in a perfect world, all of our Lambdas should be written synchronously. If we do that and use this framework, we’ll actually end up inserting overhead for no other reason than to improve readability and understandability of our code. GASP

There are downsides, of course. Everything in life is a tradeoff, and this is no different. One of the best features of Lambda, however, is our ability to measure these tradeoffs in cash terms. We can then decide if this is where we should be applying optimizations, or if this is a lower priority for us.

The Cost of Promises

No, I’m not talking about the political-science implications of the cost of fulfilling promises made on the campaign trail (perhaps another post for another time). Promisifying our Lambda function slowed it down, which means it takes longer to execute. That means it costs us more! Before we throw the whole thing out though, let’s consider some real-world factors.

The most important is frequency of invocation. For us, this function will (hopefully) never be executed more than a few thousand times per month. AWS Lambda pricing details show us that the first 1 million requests (the execution limit) and 400,000 GB-seconds per month of compute time (the compute limit) are free.

In our case, we allocate 128MB of RAM to a function that nearly always completes execution in under 300ms. At 0.125 GB × 0.3 seconds, each invocation is billed at most 0.0375 GB-seconds. That gives us over ten million invocations before we hit the compute limit, so as long as we stay under the execution limit of one million monthly requests, all of our invocations will be free.

Production Loads

Of course, you may be in a different situation, so let’s assume your company has already blown through its free tier. What’s the monetary impact of promisifying this Lambda?

Our function previously ran nearly all executions between 100 and 200ms. Since Lambda bills duration in 100ms increments, rounded up, those executions are almost always charged at the 200ms rate. Our promisified function nearly always runs between 200 and 300ms, so it’s almost always charged at the 300ms rate. That’s a fifty-percent increase in charged compute time, but what does it mean in real life? Is it a fifty-percent increase in cost? How many invocations will it take for us to spend an additional dollar on compute?

First, the easy bit. Back at the pricing page, we see that once the free tier is exhausted, each request is charged at a flat rate of $0.0000002, regardless of allocated memory. Both versions of our function incur this charge, so we need to include it, but it doesn’t vary between them.

We also see that 128MB functions (like ours) price out at $0.000000208 per 100ms. Since the difference in execution time between our two functions is one 100ms increment, our promisified version would require 4,807,692 requests per month ($1 ÷ $0.000000208) to incur an additional dollar of monthly compute spending. These are numbers we can use!

That’s not, however, a fifty-percent increase in monthly spending. Remember that base invocation charge? At this volume it works out to $0.9615384 per month (4,807,692 × $0.0000002), and it has to be paid by both our previous version and our promisified version, as does the baseline compute cost (we only calculated the delta above). So our total monthly costs are:

  • Previous function: $2.961538272
  • Promisified function: $3.961538208
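
If you’d like to sanity-check those totals, here’s the arithmetic as a quick Node snippet (the constants are the published 128MB rates quoted above):

const REQUEST_RATE = 0.0000002;         // $ per request after the free tier
const COMPUTE_RATE_128MB = 0.000000208; // $ per 100ms increment at 128MB

const requests = 4807692; // ≈ $1 / $0.000000208, from the calculation above

const requestCharge = requests * REQUEST_RATE;                         // ≈ $0.96
const previous = requestCharge + requests * 2 * COMPUTE_RATE_128MB;    // billed at 200ms
const promisified = requestCharge + requests * 3 * COMPUTE_RATE_128MB; // billed at 300ms

console.log(previous.toFixed(2));    // 2.96
console.log(promisified.toFixed(2)); // 3.96
console.log(((promisified - previous) / previous * 100).toFixed(1) + "%"); // 33.8%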

So our actual percent increase is 33.8%. Yes, that’s not insignificant, but this is nearly the worst case. If you refactor a Lambda function that runs for 4 minutes and 2 seconds to run for 4 minutes and 2.6 seconds, the extra 600ms adds six 100ms increments to roughly 2,420, an increase of about 0.25%, infinitesimal compared to the benefits.

Wrapup

So, promisifying your Lambdas can make them much easier to build, debug, and understand, but there is a real, measurable cost in dollars. Each team will need to make its own determinations on which is more valuable according to its own constraints, but in my experience developer time has always been the most scarce commodity.

If I were building out a large project (and I am), I would make this tradeoff early in favor of readability and consistency. Later, when I’m raking in the millions (Ha!), I’ll start optimizing for price. I’ve been around long enough to learn (via repeated pain) that premature optimization is an anti-pattern.

What are your thoughts? How do you make the tradeoff between productivity and cost? Will that change as we get ever-increasing granularity into both the cost of our functions and the efficiency of our developers? Let me know what you think in the comments!