
Node.js Part 2 – Asynchronous Calls in Node

“There’s no such thing as a free lunch” – El Paso Herald-Post, June 27th, 1938

The origin of this quote, like many others, is not exactly clear-cut.  However, its meaning and truth are indisputable.  Ok, you can dispute it, but even doubters would be forced to admit that it is very rare to get something good in return for giving nothing.  The same is true of obtaining the real benefits of Node.js.  You cannot reap the harvest without first doing some sowing of your own.  You need to commit to one thing if you are going to use Node in the spirit of its design – you have to be willing and able to make your blocking calls (mostly IO-related) in an asynchronous manner.  Furthermore, there is a wrong way to do this and multiple right ways to do it.  This article explains the difference and shows how asynchronous calls can be made correctly.

To write a non-trivial Node program one must understand something of how Node works, but not every detail.  A typical user application is more than just a single program; it is a composite of one or more discrete interlocking bundles of code.  A Node application is no different in this regard.  To deeply comprehend Node without abstracting our understanding, we’d all have to go learn the internals of Node’s component pieces – Google’s V8 JavaScript engine, the libuv framework and a bunch of Node binding libraries written in C and JavaScript.  Fortunately (although study of these things would be interesting indeed) we don’t have to do this.

To understand Node well enough to really use it, we can rely on an abstraction of how it works.  We could think in terms of Event Loops, Event Queues, Event Emitters and function callbacks.  In fact, a lot of articles attempting to describe the way Node works simply throw up a few of these terms and continue with something to the effect of, “You are undoubtedly familiar with events and callbacks since you encounter these ideas when writing JavaScript for a browser”.  It’s a bit of a cop-out, but understandable – they are trying to abstract for you how Node works without going into gory details you don’t have to know.  If you are like me, explanations like this can leave you feeling vaguely uneasy.  They either go too far or else don’t go far enough to provide a solid abstraction that one can use to understand the interface between a Node program and Node itself.

For the purposes of this article, we’ll use a simpler abstraction – one that focuses solely on the interaction between Node and your program code.  Node can be thought of as an Event Creator and Manager.  Every time your code makes an asynchronous call, Node creates an associated event that your code will take advantage of.   Naturally, a two-way communication must be maintained between your code and this Event Creator/Manager that is Node.  There are two ways in which you as a coder can communicate with Node:

  • Callback functions that allow Node to “talk back” to your code when an asynchronous function completes – a short sketch of this convention follows the list.
  • Library calls that you invoke as input to Node, many of which can be blocking calls.
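
To make the callback half of that communication concrete, here is a minimal sketch of Node’s error-first callback convention; the file path is purely illustrative:

    // A minimal sketch of Node's error-first callback convention.
    // The path '/tmp/example.txt' is hypothetical, used only for illustration.
    var fs = require('fs');

    fs.readFile('/tmp/example.txt', 'utf8', function (err, data) {
      // Node invokes this callback later, once the read has finished.
      if (err) {
        console.error('Read failed:', err);
        return;
      }
      console.log('File contents:', data);
    });

    console.log('This line runs before the callback fires.');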

It is these blocking calls that are of interest in this article.  Typically these are IO-related calls, but there are others.  Some examples include calls related to files, streams, sockets, pipes and timers.  Node has a single thread of execution.  Its job is to quickly move through the ordinary parts of program code (and the code of other program instances that are using it) and queue up asynchronous events for blocking code.  This event queuing is part of a process that Node uses to off-load blocking work onto other external process threads.  As a coder, when you make a blocking call in your code, you have choices about the manner in which you invoke it:

  1. Usage of asynchronous vs. synchronous function calls (asynchronous is the idiomatic choice in Node).
  2. Assuming asynchronous calling is chosen, how you control the order of execution for calls that are dependent upon each other.

The use of asynchronous function calls is more than just the subject of this article – it is the heart and soul of Node itself.  The asynchronous execution of long blocking functions – on external process threads that your own single program thread never has to worry about – is Node’s whole reason for existence.  This is what makes Node fast, efficient in memory usage and extremely scalable.  However, again, there is no such thing as a free lunch – there is a cost for this.  That cost is the necessity of working with an event-driven callback model and taking extra care when calling the “IO” types of functions mentioned above.

This article will focus primarily on explaining how to correctly use the asynchronous versions of function calls supplied by Node’s function libraries.  To do so, we will be looking at three different program examples.  Each example will be a variation on the same simple program.  It will first create a file and write some information to it, then re-open the file and append some more information, and finally read that information and present it in the browser.  The three variations of this program will consist of a program using only synchronous calls, a program using asynchronous calls in a naive manner, and a program using asynchronous calls in a proper manner.  We’ll be looking at the actual runnable code for these a little later on in the article.  I’d like to first present a little analogy that helps explain and justify the need for taking the trouble to set up proper asynchronous processing.  If that sounds like a waste of time to you, then just skip on to the code examination portions of this article.

Here is an analogy that illustrates the differences between a synchronous approach and a proper asynchronous approach.  Let us equate a Node application to a small restaurant, one with only three employees – a Chef, a Waitress and a Manager. Fleshing out the analogy a little further, it is made up of:

  • A Chef – Node itself will play this part.
  • A Waitress – This is represented by the controlling thread of your program.
  • Food Orders – These are the blocking/IO calls, presented to the Chef by the Waitress.  They are blocking because the Waitress must wait for the food to be prepared before she can serve it.
  • Food – This is data.
  • Customer(s) – This is the remaining code in your program that consumes the “food”.
  • The Manager – This is the coder.  He tells the Waitress and the Chef how they will do their jobs.

We will look at a slow synchronous restaurant, a naively run asynchronous restaurant, and a well-run asynchronous restaurant.  For each scenario we will have just one diner ordering the same meal:  an appetizer of onion rings, followed by a main course consisting of a salad, a steak and a baked potato, with a dish of ice cream for dessert.

Let’s look at a synchronous restaurant first.  In this restaurant, the Manager has decreed that the Chef will only work on one menu item at a time.  He figures this will reduce complexity and the possibility of error.  To control this, the Waitress must submit Food Orders consisting of just one item, and she must do so in the order needed to properly serve the diner.  So, the Chef makes just one thing at a time instead of being able to prepare multiple things at once.  There is a controlled ordering of the food items to prepare, but each one must be completely finished before the Chef begins working on the next.  In the end, the customer would not be bowled over with the service, but at least the meal would be correct, i.e. it would be delivered in good order and nothing would get cold or melt.  Consider, however, how well this restaurant would do if it got busy.  One customer’s meal would be prepared at a time, and even that in discrete sub-steps.  Service would be unacceptably slow.

Now let’s look at a naively run asynchronous restaurant.  The Manager, having been fired from his previous restaurant due to slow service, realizes that the Chef is going to have to be allowed to cook more than one thing at a time.  He tells the Waitress that the Chef must not be kept waiting, and to submit all Food Orders as soon as a customer has finished ordering.  However, the Manager still wants to avoid complex communication between the Waitress and the Chef.  Therefore, each Food Order is only allowed to have one item on it.  He figures that she can arrange the food items into meals when delivering them to each diner, since she knows what they ordered.  So the Chef is unfettered in this restaurant.  He can prepare as many items at once as seems reasonable to him, and furthermore he can choose what order to prepare them in.  The results are not good.  He has no idea which items are destined for which customer, or even how many diners there are at the time.  He decides that ice cream is very fast to prepare, so he’ll dish it out first.  After that, he decides to start preparing everything else all at once – there isn’t much to cook.  It turns out that the potato is fastest – they are pre-cooked – so that gets handed off to the Waitress not long after the ice cream.  The salad follows shortly thereafter.  A bit later, the steak and the onion rings are finished at about the same time, so he sets both at once on the pickup area used by the Waitress.  The end result here is that all of the food making up the customer’s order was delivered pretty quickly, but there were definite problems with the order of delivery.  The customer had a choice of eating dessert first or letting it melt.  He was then presented with a baked potato instead of the introductory salad, which he got next.  After a while, the Waitress served up the rest of the main meal and the appetizer at the same time.  In the end, the customer would be upset and probably never return – that is, if he even stayed for the entire meal.  This naïve asynchronous approach fails at even serving one customer, unless the customer orders something simple like just ice cream, or unless the order of food preparation completion just happens to match the expected order.  In a busy restaurant, the insanity would be even worse – and what’s more, service for some customers would probably not only be badly ordered, but slow.

Now we’ll look at a win – the well-run asynchronous restaurant.  In this scenario our intrepid Manager has decided he doesn’t completely know what he is doing.  So he takes some classes and reads a book or two on managing the restaurant business.  He lands another new job, based upon the strength of his new training.  This time things are different.  He realizes that the Chef needs to work on more than one thing at a time, and he also realizes that the Chef has to be given more guidelines as to how he prepares the Food Orders.  He knows he has to allow for a little more complexity in communication between Waitress and Chef.  He tells the Waitress to immediately submit Food Orders containing the entire meal for a diner, and that the order of items within the Food Order must match the expected order of delivery for the meal.  So the Chef is free to prepare more than one thing at a time, and the order in which he may prepare the items within a given Food Order is specified.  Furthermore, since Food Orders contain entire meals and the preparations he begins only have to roughly coincide with the order of Food Orders as they are submitted, he is free to work at maximum efficiency while still more or less completing entire orders as they arrive.  It won’t necessarily be first in, first out – part or all of one Food Order might be completed before a prior Food Order.  However, on the whole, Food Orders will more or less come out in the order in which they were submitted – and more importantly, the order of completion of items within an individual Food Order will always be correct.  In the end, orders come out in the correct order, are delivered pretty quickly, and even approximate a first-in, first-out scenario.  Customers are satisfied; Manager and restaurant flourish.

It’s time to look at some code – we will look at three programs that attempt to do the same thing, though not at all in the same way.  Each program is written with the idea that it will perform the following three basic steps in the order listed below:

  1. Create a file and add some data.
  2. Append some more data to the file.
  3. Read the full content of the file, then send that content back to the browser, along with some tracking of step order and elapsed time that was created as the program did its work.

Also, each program makes use of the same two helper functions.  I’ll just define them here once and not show them in each of the program examples, though you will see them being called.  There is nothing of any interest to discuss in these two functions, so I’ll not comment on them.
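
Helpers along the following lines will do for the sketches that follow – the names trackStep and sendResult, and the simple in-memory step list, are illustrative assumptions rather than anything canonical:

    // Hypothetical helpers assumed by the sketches below.
    // Tracking state is global and shared – kept deliberately simple for a one-request demo.
    var startTime = Date.now();
    var steps = [];

    // Record that a named step finished, along with the elapsed time in ms.
    function trackStep(name) {
      steps.push(name + ' done at ' + (Date.now() - startTime) + 'ms');
    }

    // Send the accumulated step tracking and the file content back to the browser.
    function sendResult(response, content) {
      response.writeHead(200, { 'Content-Type': 'text/plain' });
      response.end(steps.join('\n') + '\n\nFile content:\n' + content);
    }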

 

Our first example program is one that does its work by making synchronous File IO calls.  It runs correctly but would not scale well, nor would it play nicely with other programs running on the same Node instance – every single File IO call is blocking, so the main thread of Node must wait, babysitting this one instance of a program.
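
Here is a minimal sketch of what that synchronous version might look like – the bare http server, the port and the file name are illustrative assumptions, and it leans on the hypothetical helpers sketched above:

    // Synchronous version: every file call blocks Node's single thread.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (request, response) {
      var fileName = '/tmp/async-demo.txt';                    // hypothetical path

      fs.writeFileSync(fileName, 'Step 1: original data\n');   // blocks the Node thread
      trackStep('create');

      fs.appendFileSync(fileName, 'Step 2: appended data\n');  // blocks again
      trackStep('append');

      var content = fs.readFileSync(fileName, 'utf8');         // blocks a third time
      trackStep('read');

      sendResult(response, content);
    }).listen(8080);                                            // illustrative port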

 

Our second example program is one that does its work by making asynchronous File IO calls, but it does so in a naïve manner.  It simply replaces the synchronous calls with their asynchronous versions.  This is not enough!  Since the events will execute asynchronously, there is absolutely no guarantee that they will return in the same order in which they were called.  Nothing has been done here to constrain that – the order of return is undetermined.  Step 2, “Append data”, might occur before Step 1, “Create original file data”.  Alternatively, Step 3, “Read and show existing file content”, might occur before Step 1 or Step 2 is complete, or on a busy system, perhaps before any part of their data is in the file at all.
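
A minimal sketch of that naïve version, under the same illustrative assumptions as before – notice that the three calls are simply issued back to back:

    // Naive asynchronous version: the calls are issued in order,
    // but nothing constrains the order in which they complete.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (request, response) {
      var fileName = '/tmp/async-demo.txt';   // hypothetical path

      fs.writeFile(fileName, 'Step 1: original data\n', function (err) {
        if (err) return sendResult(response, 'create failed: ' + err);
        trackStep('create');
      });

      fs.appendFile(fileName, 'Step 2: appended data\n', function (err) {
        if (err) return sendResult(response, 'append failed: ' + err);
        trackStep('append');
      });

      fs.readFile(fileName, 'utf8', function (err, content) {
        if (err) return sendResult(response, 'read failed: ' + err);
        trackStep('read');
        sendResult(response, content);   // may run before the other steps have finished
      });
    }).listen(8080);

On a busy system there is nothing stopping the read from completing before the create or the append.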

 

Our third and final example program is one that does its work by making asynchronous File IO calls, and it does so correctly.  In the absence of special tools for this purpose (more on that later), the correct way to force one asynchronous call to follow the next is to chain them.  Let us say, for example, as in the sketch below, that there are three successive asynchronous calls to be made and each one should only be invoked after the successful completion of the previous call.  What must be done is to put the invocation of the second call in the branch of code that represents successful completion of the first asynchronous call.  Likewise, the third asynchronous call should only be invoked in the branch of code that represents successful completion of the second asynchronous call.
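
A minimal sketch of the chained version, again under the same illustrative assumptions – each call is issued only inside the success branch of the previous call’s callback:

    // Properly chained asynchronous version: each step starts only after
    // the previous step's callback reports success.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (request, response) {
      var fileName = '/tmp/async-demo.txt';   // hypothetical path

      fs.writeFile(fileName, 'Step 1: original data\n', function (err) {
        if (err) return sendResult(response, 'create failed: ' + err);
        trackStep('create');

        fs.appendFile(fileName, 'Step 2: appended data\n', function (err) {
          if (err) return sendResult(response, 'append failed: ' + err);
          trackStep('append');

          fs.readFile(fileName, 'utf8', function (err, content) {
            if (err) return sendResult(response, 'read failed: ' + err);
            trackStep('read');
            sendResult(response, content);   // only runs after create and append succeed
          });
        });
      });
    }).listen(8080);

Notice how the indentation marches to the right – this is the “pyramid” shape discussed a little further below.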

 

Note the important difference between the properly coded asynchronous methodology and the use of simple synchronous calls.  In the synchronous method, the entire execution thread of Node is forced to move through the blocking calls one at a time, sitting idle while waiting a relatively long time for the results of each blocking call.  With the asynchronous methodology, this program still forces the sequential calling of these functions, waiting for the prior call to finish before beginning the next.  But what a difference.  In the asynchronous case, the single Node process thread is free to set up an event for each of these calls in turn, and it does not have to wait for them to return results.  It is free to continue its business of queuing events and making callbacks for other programs – all that IO blocking is done on some other external process thread that your program knows nothing about, and that Node does not have to sit idly waiting for.

So there are indeed extra precautions to take when making dependent asynchronous calls, and they produce a kind of hierarchical hell (with all the indentations) in the source files that makes them harder to maintain.  There are multiple modules available from GitHub that attempt to provide solutions for this problem.  They provide an API that allows for a more organized approach to writing this sort of chained code, i.e. avoiding the pyramidal, hierarchical hell.  I will mention one such project, perhaps the most popular, called Async (not Asynch.js, which is something else).  A full treatment of such modules, and of Async in particular, is beyond the scope of this article.
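
Just to give a flavor of the shape such code takes, here is a minimal sketch using async.series, assuming the module has been installed from npm and reusing the same hypothetical file name and helpers as above:

    // A flavor of the async module: the steps run one after another,
    // but the pyramid of nested callbacks is flattened into a list.
    var async = require('async');   // the caolan/async module, installed via npm
    var fs = require('fs');

    function runSteps(response) {
      var fileName = '/tmp/async-demo.txt';   // same hypothetical file as before
      async.series([
        function (done) { fs.writeFile(fileName, 'Step 1: original data\n', done); },
        function (done) { fs.appendFile(fileName, 'Step 2: appended data\n', done); },
        function (done) { fs.readFile(fileName, 'utf8', done); }
      ], function (err, results) {
        if (err) return sendResult(response, 'failed: ' + err);
        sendResult(response, results[2]);   // results[2] holds the content from readFile
      });
    }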

You may have noticed that my error handling in the three examples was either threadbare or non-existent.  Example 1 actually had none – this is a combination of laziness on my part and a desire to keep the examples as simple and pure as possible.  In examples 2 and 3, there is some error handling because the asynchronous API requires it by design, but it is pretty threadbare.  What, no try/catch blocks?  Well, it turns out that try/catch/finally doesn’t really serve well in the asynchronous coding style that one must adopt for high-volume Node coding.  As of Node 0.8, there is something new called Domains that is meant to address this.  The usage of Domains is also beyond the scope of this article, but there is a pretty good slide presentation on Domains by Felix Geisendörfer at Speaker Deck.

I hope this article has been of some use to someone out there.  Remember that if you are writing synchronous, blocking function calls into your Node code, you are doing the wrong thing unless you own the Node instance and you know that it will never have to handle heavy loads.  Asynchronous, event-driven coding for server-side solutions is what Node was invented for in the first place.