
BazaarJS: Module loaders and package managers

This is the second episode of our BazaarJS series on the magical world of single-page applications... today we're going to take a serious look at module loaders, bundlers and package managers: three of the most confusing, controversial and peculiar topics in the Javascript world.

As always, after giving an overview of the available solutions, we will explain the advantages and disadvantages that have led us to our personal choice.

Estimated reading time: 10 minutes

Javascript, as is well known, has no native mechanism capable of managing dependencies between files. That is to say, it has no equivalent of Ruby's require or Sass's @import. For years, we've just sidestepped this limitation by making use of a mix of anonymous functions and the global namespace:

// calculator.js
(function(root) {
  var calculator = {
    sum: function(a, b) { return a + b; }
  };
  root.Calculator = calculator;
})(this);

// app.js
console.log(Calculator.sum(1, 2)); // => 3

Unfortunately, as we are well aware, this is only a half-solution to the problem. The fact that no dependency tree is generated means that the work of specifying the ordering for the inclusion of the different modules is left up to the developer, resulting in the classic TypeError: undefined is not a function error messages that appear when you don't manage to get that ordering just right.
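
To make the problem concrete, here is a minimal sketch (a hypothetical index.html for the two files above): calculator.js must appear before app.js, otherwise Calculator is still undefined when app.js runs.

<!-- index.html: the order of the tags *is* the dependency management -->
<script src="calculator.js"></script>
<script src="app.js"></script>
<!-- swap the two tags and you get the TypeError mentioned above -->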

CommonJS

Node.js bypasses this shortcoming by implementing the pattern specified by the CommonJS group. The solution is elegant, convenient to use and similar to the patterns found in other programming languages, and it provides the module encapsulation we need:

// calculator.js
module.exports = {
  sum: function(a, b) { return a + b; }
};

// app.js
var assert = require('assert');
var calculator = require('./calculator');
assert.equal(3, calculator.sum(1, 2));

The require() function reads the specified local file, evaluates it and returns the contents of its module.exports object. Everything that is not exposed through module.exports remains "private" and is not accessible from outside.

Unfortunately, the fact that the call to require() is synchronous means that it is not well adapted to in-browser use, given that the dynamic loading of the Javascript file itself has to be asynchronous.

AMD

In response to this limitation, the CommonJS community defined AMD (Asynchronous Module Definition), an asynchronous variant of module loading which is far easier to use in-browser:

// calculator.js
define("calculator", function() {
  return {
    sum: function(a, b) { return a + b; }
  };
});

// app.js
define("app", ["calculator"], function(calculator) {
  console.log(calculator.sum(1, 2)); // => 3
});

Modules are defined using the define() function, which also lets you specify the module name and any dependencies it requires. The pattern allows these dependencies to be loaded asynchronously and then passed to the callback as parameters, preserving the order specified in the array.

This is certainly less elegant than the Node.js equivalent, but on the other hand it is the only available option for browsers (or so it would appear... we'll come back to this point later).

The light at the end of the tunnel: ECMAScript 6

ECMAScript 6 — the next version of Javascript, whose standardization is due to be completed in 2015 and which will then be progressively implemented in browsers — finally proposes an "official" solution to the problem, with a syntax similar to the one found in Python:

// calculator.js
export function sum(a, b) { return a + b; }

// app.js
import * as calculator from 'calculator';

console.log(calculator.sum(1, 2)); // => 3

It is noteworthy that the import keyword is synchronous. Does that mean it will only be usable in Node.js and similar environments? Fortunately not: to implement this mechanism, browsers will have to perform a static analysis of the code before evaluating the body of a Javascript file, in order to discover any imports to be loaded ahead of the evaluation of the code itself.
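
A quick sketch of what "static" means here (reusing the calculator module above): import declarations may only appear at the top level of a module, so the browser can discover them without executing any code.

// app.js
import { sum } from 'calculator'; // fine: top-level, statically analyzable

// import { sum } from 'calculator'; // inside a block or an if() this
//                                   // would be a SyntaxError

console.log(sum(1, 2)); // => 3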

And the developer? Pays.

Faced with the current fragmentation, and while awaiting universal support for the new ECMAScript 6 syntax, a developer wanting to release a Javascript library today is obliged to make their module compatible with all three of the loading methods described above:

  • Globals
  • CommonJS
  • AMD

Although this might seem like a lot of work, in practice it's not that difficult to pull off. Wrapping your library in a skeleton like the following will get the job done:

// calculator.js
(function (name, context, definition) {
  if (typeof module != 'undefined' && module.exports)
    module.exports = definition();
  else if (typeof define == 'function' && define.amd)
    define(name, definition);
  else
    context[name] = definition();
}('calculator', this, function () {
  // your module here!
  return {
    sum: function(a, b) { return a + b; }
  };
}));

A library capable of supporting all of the current loading “standards” is said to support UMD (Universal Module Definition).

Fortunately, most of the libraries developed in recent years do support UMD, and can therefore be used in any runtime environment, be it client-side or server-side.

But, getting back to our topic at hand :)

In light of this obligatory "theoretical" introduction, a couple of questions come to mind. Firstly, which module-loading mechanism should we use to write our client-side application? Given the limitations described above, is AMD really the only possible solution... does it have to be that way? Secondly, which module registry should we be using?

Let's take a look at some of the most popular solutions that the Javascript community has made available.

Solution 1: RequireJS + Bower

RequireJS is the most popular implementation of the AMD pattern currently in circulation, while Bower is the standard package manager for front-end packages (not only JS, but also CSS, Sass, etc.).

We'll take a look at the figures for these two projects:

##### RequireJS

* **Homepage:** http://requirejs.org/
* **Created:** February 2010
* **Github stars:** ★ 6,795

##### Bower

* **Homepage:** http://bower.io
* **Created:** September 2012
* **Available modules:** 21,564
* **Github stars:** ★ 11,471

With RequireJS, the loading of dependencies is handled by including a single script, require.js itself:

<script data-main="scripts/main" src="scripts/require.js"></script>

The tag's data-main attribute specifies the application's entry point, where any RequireJS configuration needed to download files is declared and the asynchronous AMD chain described above is kicked off:

// main.js
requirejs.config({ baseUrl: '/scripts' });
requirejs(['app']);

Given that RequireJS limits itself to module loading, it needs to be paired with a package manager such as Bower, which gives us access to an enormous registry (20,000+ packages) of third-party front-end modules and libraries and takes care of dependency resolution, version conflicts and local downloads.

The Bower equivalent to the Gemfile is bower.json, and it looks like this:

{
  "name": "my-project",
  "private": true,
  "dependencies": {
    "rsvp": "~3.0.16"
  }
}

On reading this file, the bower install command downloads the specified dependencies locally into the ./bower_components directory.
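
For reference, a couple of typical invocations (the rsvp package name is simply the one from the bower.json above):

# download everything listed in bower.json into ./bower_components
bower install

# or add a new dependency, saving it to bower.json at the same time
bower install rsvp --save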

It goes without saying that the requirejs.config() method can be used to tell RequireJS to load the Bower modules from this folder, as sketched below.
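
A minimal sketch of what that configuration might look like (the paths entry assumes rsvp's main file lives at bower_components/rsvp/rsvp.js; check the actual layout of the package you install):

// main.js
requirejs.config({
  baseUrl: '/scripts',
  paths: {
    // map the module name to the file Bower downloaded (without the .js extension)
    'rsvp': '../bower_components/rsvp/rsvp'
  }
});
requirejs(['app']);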

The hard reality

At first glance, asynchronous loading using AMD (and RequireJS) seems like a great idea, as it allows us to progressively download only those files which are strictly necessary to the execution of our app.

In practice, this isn't really viable in the browser: the HTTP overhead of asynchronously downloading individual Javascript files is dramatic, to the point of ruining the application's performance even in projects of medium complexity.

RequireJS therefore advises developers to bundle the various modules together, using its command-line tool r.js:

node r.js -o name=main out=bundle.js baseUrl=.

The process parses the files beginning at a specified entry point (in our case, main.js) and reconstructs the dependency tree on the basis of the define() calls it finds along the way. It then orders and concatenates all of the necessary files into a single file, bundle.js, which we include in our HTML in place of the main.js file we saw previously:

<script data-main="scripts/bundle" src="scripts/require.js"></script>

The bundle file is simply a concatenation of the Javascript files necessary at runtime:

// bundle.js
define("calculator", [],function() {
  return {
    sum: function(a, b) { return a + b; }
  };
});

define("app", ["calculator"], function(calculator) {
  console.log(calculator.sum(1, 2)); // => 3
});

requirejs(['app']);

The order in which the tool concatenates the files allows RequireJS to cache the bodies of the modules at runtime, before the initializing requirejs() call arrives, thereby avoiding any further download requests.

Solution 2: Browserify + Npm

Unlike RequireJS, Browserify allows you to write your client-side application using CommonJS's synchronous loading, just as in Node.js. How is this possible? We'll get into the details shortly, but first it won't do any harm to look at some usage statistics:

##### Browserify

* **Homepage:** http://browserify.org/
* **Created:** September 2010
* **Github stars:** ★ 6,164

##### Npm

* **Homepage:** http://npmjs.org/
* **Created:** September 2009
* **Available packages:** 115,973 (!!!)
* **Github stars:** ★ 5,332

Like RequireJS, Browserify provides a command-line tool, which parses your modules starting from a given entry point (in this case, app.js) looking for calls to require():

browserify app.js --outfile bundle.js

The result is similar to the following: [^debundle]

// bundle.js
debundle({
  entryPoint: "./app",
  modules: {
    "./app": function(require, module) {
      var calculator = require('./calculator');
      console.log(calculator.sum(1, 2));
    },
    "./calculator": function(require, module) {
      module.exports = {
        sum: function(a, b) { return a + b; }
      };
    }
  }
});

function debundle(data) {
  var cache = {};
  var require = function(name) {
    if (cache[name]) { return cache[name].exports; }
    var module = cache[name] = { exports: {} };
    data.modules[name](require, module);
    return module.exports;
  };
  return require(data.entryPoint);
}

You see what we did there? :) The content of every module, including the entry point, is passed to a debundle() function, which re-implements the CommonJS module.exports/require mechanism on the client side so that each module can be requested at runtime.

This is the trick: since all of the modules are bundled together upstream, require can safely work synchronously.

Not content with this, Browserify goes even further: to complete the simulation of a Node.js environment, it lets you require() not only local files, but also:

  • any npm packages present in the ./node_modules directory;
  • some of Node.js's core modules (url, path, stream, events, http).

These core modules, which obviously were not designed for in-browser use, have been rewritten in their entirety by the Browserify team; the rewritten versions maintain the same API and are included in place of the originals during the bundling process.
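
As a quick illustration (a hypothetical logger.js of our own, using only the standard events API), a module like this can be bundled and run in the browser unchanged:

// logger.js
var EventEmitter = require('events').EventEmitter; // Browserify swaps in its browser version

var logger = new EventEmitter();
logger.on('message', function(text) { console.log('[log]', text); });
logger.emit('message', 'this runs in the browser after bundling');

module.exports = logger;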

A final, important consideration: Browserify's parsing mechanism can be augmented by means of transforms: third-party code capable of modifying and pre-processing source files before they are included in the bundle.

browserify app.js         \
  --transform coffeeify   \
  --transform uglifyify   \
  --outfile bundle.js

A command of this kind enables us, for example, to write our app in Coffeescript and have bundle.js compiled and compressed. Not bad.
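
As an aside, transforms don't have to be repeated on the command line every time: they can also be declared once in package.json (a sketch, assuming coffeeify and uglifyify are installed locally via npm):

{
  "name": "my-project",
  "browserify": {
    "transform": ["coffeeify", "uglifyify"]
  }
}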

[^debundle]: The debundle() function in this post has been simplified for clarity: the full version can be found in Browserify's source code.

Analysis

Going back to the first part of this series, both solutions are well integrated with the principal task runners (e.g. gulp-browserify, gulp-requirejs), so the two are tied in this respect.

Browserify accepts that RequireJS's asynchronous loading is wishful thinking that cannot be realized in practice[^http2], and therefore adopts CommonJS's synchronous pattern, which is more convenient and less verbose. A point in its favour.

[^http2]: Or at least it will be until the arrival of HTTP/2, which should drastically reduce the overhead and latency of each request.

On the other hand, Browserify pushes you towards npm packages, when the package manager best suited to front-end use would really be Bower. Admittedly, this is more of a philosophical consideration than a practical one, given that:

  • the overwhelming majority of front-end modules are also released on npm;
  • Browserify transforms such as debowerify make it possible to include Bower packages in place of npm ones.

In any case, it's important to pay attention to the npm packages you decide to use, as there is no guarantee that they will actually work in the browser. To address this issue Toby Ho has released Browserify Search, a tool based on some fairly sophisticated analysis of npm packages that tells you whether or not a particular package will work in-browser. Fortunately, around half of all npm packages (c. 60,000) currently pass the check.

Browserify presently has a larger and more active community than its competitors. As an example of the vitality surrounding the project, we can cite the following:

  • Watchify, a watcher for Browserify that rebuilds the bundle whenever a project dependency changes and, thanks to its caching mechanisms, reduces build times by an order of magnitude (see the example after this list);
  • Disc, a tool that generates a navigable breakdown of a Browserify bundle, making it possible to identify its heaviest dependencies;
  • partition-bundle, a plugin that lets you split your application's modules across several bundles, allowing for quicker initial download times and progressive loading of client-side logic.
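
For example, during development Watchify is typically run in place of the browserify command (a sketch, using the same entry point as before):

# rebuilds bundle.js automatically every time a source file changes
watchify app.js -o bundle.js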

Both RequireJS and Browserify support sourcemaps, which are of fundamental importance for debugging applications in production.
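
With Browserify, for instance, an inline sourcemap can be generated simply by adding the --debug flag to the command we saw earlier:

browserify app.js --debug --outfile bundle.js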

Our choice

Browserify. This choice is based in part on the reasons already cited — better community support and more comfortable coding — but there are other reasons too.

The choice to write a Node.js-compatible application brings with it two additional, enormous advantages:

  • the ability to create isomorphic applications. In other words, Javascript applications that — by means of simple abstractions implemented at the level of routing and view rendering — can be executed in the same way on the browser side or on the server side[^isomorphic]. This approach gives us the best of both worlds:

    • extremely fast display of the first page the user loads, since server-side pre-rendering means there is no need to wait for all of the Javascript to be downloaded and parsed by the browser;
    • instant client-side updates from the first load onward, minimizing the number of calls to the server.
  • the ability to run unit tests in a Node.js environment, eliminating the need to launch a browser. We know how fundamental it is to keep the execution time of these tests as short as possible, and putting a browser in the middle adds a couple of seconds to every run (see the sketch below).
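
A minimal sketch of what such a test might look like (a hypothetical test.js, reusing the CommonJS calculator module from the beginning of this post): it runs with a plain node test.js, no browser involved.

// test.js
var assert = require('assert');
var calculator = require('./calculator');

assert.equal(calculator.sum(1, 2), 3);
assert.equal(calculator.sum(-1, 1), 0);

console.log('all tests passed');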

[^isomorphic]: Spike Brehm released a simple example project on Github last year showing how an isomorphic app works. It's worth a look.

New competitors are already at the gates...

In reality, our choice here is far from definitive. A new wave of alternative solutions is already knocking at the door, and is gaining support from the more avant-garde (or should we say more hipster?) members of the community.

Webpack

One of the tools most often mentioned in this regard is Webpack. Without distancing itself too far from the fundamental concepts of Browserify, Webpack differs from it in a number of ways that have generated a notable amount of both interest and opposition within the Javascript community. These differences are well explained by the author of Browserify himself in this article.

jspm

Far more audacious is the jspm project, which has been receiving a lot of attention in recent weeks and whose distinguishing characteristics are its ability to:

  • allow dependencies to be installed from npm (Node.js), Bower and Github;
  • support virtually any library, by implementing all of the available module-loading mechanisms (globals, AMD, CommonJS);
  • make it possible to write the application itself in ECMAScript 6, complete with import directives;
  • remove the need for bundling operations of any kind, something that — when added to the characteristics already cited — constitutes a true innovation.

How is this all possible? Pre-processing of ECMAScript 6 files, parsing of import directives and dependency downloads are all carried out client-side, at runtime. While this slows things down in the development environment, it also greatly simplifies the bootstrapping of new applications, one of the toughest tasks for a Javascript newcomer.
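
To give an idea of the development workflow, this is roughly what the HTML looks like (a sketch based on jspm's default layout: the jspm_packages folder, the generated config.js and the SystemJS loader are taken from the project's documentation, so treat the exact paths as assumptions):

<script src="jspm_packages/system.js"></script>
<script src="config.js"></script>
<script>
  // SystemJS resolves app.js and its import directives at runtime
  System.import('app');
</script>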

In production, jspm lets you either fall back on classic bundle generation or — and this is the project's second major innovation — rely on an HTTP/2 CDN. The latter approach, combined with a technique known as dependency caching, eliminates the latency and overhead of progressive downloading. We'll try to cover jspm in greater depth soon: it's worth the effort, if only for what you'll learn along the way :)

Coming next: CSS preprocessors

This post has been tough :) In the next one we will take a brief digression from the world of Javascript to allow us to digest what we have covered so far. We'll take a look at the other half of the front-end world: style sheets. Based on our extensive experience with Sass, we'll take a look at Less.js and Stylus: the main Javascript alternatives currently available. Is it worth changing preprocessor?

Follow us on Twitter or subscribe to our RSS feed to keep up-to-date!
