Canillita - Your First Erlang Server

Fernando "Brujo" Benavides wrote this on November 06, 2013 under dev, erlang, java.


So you are learning Erlang and you want to start with a simple example project, but you want to create something that is actually useful. This post is for you! In this article, I'll show you how to create a very basic yet useful RESTful server using some widely known Erlang libraries. It will not be enough to teach you how to program in Erlang, and I won't dive into the core aspects of the language itself. For that you can always Learn You Some Erlang for Great Good! ;) On the other hand, if you're an experienced Erlang programmer and you need a RESTful server with SSE capabilities for your application, you may use this example as a starting point to build your system.


What's in this article

These are the components, protocols and features that I'll use and show in this article. Each one comes with a link where you can find more information about it.

  • SSE: a technology by which a browser gets automatic updates from a server over an HTTP connection
  • cowboy: the ultimate server for the modern Web, written in Erlang with support for Websocket, SPDY and more
  • sumo_db: a very simple persistence layer capable of interacting with different databases, while offering a consistent API to your code
  • jiffy: a JSON parser as a NIF, one of the many JSON libraries for Erlang

What's not in this article

You will not find the following stuff here:

  • HTTP authentication, header management, QueryString and many other things -- you can easily add these to your RESTful server using cowboy
  • Different kinds of routes -- this server will have just one simple url, whereas real-life servers usually have many of them and they're not so simple
  • Complex persistency operations -- with sumo_db you can do much more than what I did here
  • Tests -- explicitly excluded from this article to reduce the number of things to learn or understand, but absolutely necessary in real-life systems

The Application

For this article I have created an application called canillita. It is a very very basic pubsub server. On one end it receives news in JSON format. On the other end it delivers that news to whoever wants to read it in SSE format. It's worth noting that we're abusing the SSE format here by using the news titles as event types. Do not do this at home, kids. canillita is implemented as a simple RESTful server with a single url that provides two endpoints:

POST /news

This endpoint accepts a JSON object to be published and always returns 204 No Content, unless the request body is malformed, in which case it returns 400 Bad Request.


The JSON object can include two optional fields: title and content


curl -vX POST http://localhost:4004/news \
-H"Content-Type:application/json" \
-d'{ "title": "The Title", "content": "The Content" }'
> POST /news HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:4004
> Accept: */*
> Content-Type:application/json
> Content-Length: 50
< HTTP/1.1 204 No Content
< connection: keep-alive
< server: Cowboy
< date: Fri, 08 Nov 2013 20:06:01 GMT
< content-length: 0
< content-type: text/html

GET /news

This endpoint provides an SSE connection that starts by replaying all the news in the db (I know, it's a pretty dumb application for now). Then it lets you stay connected to automatically get the newer ones posted through the previous endpoint.


curl -vX GET http://localhost:4004/news
> GET /news HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:4004
> Accept: */*
< HTTP/1.1 200 OK
< transfer-encoding: chunked
< connection: keep-alive
* Server Cowboy is not blacklisted
< server: Cowboy
< date: Thu, 07 Nov 2013 14:31:10 GMT
< content-type: text/event-stream
event: Initial Story
data: Hello World!

event: The Title
data: The Content


How it is done

For starters, canillita is a typical rebar-ized Erlang application, so if you check its codebase you'll find the usual suspects:

  • Files like Makefile, Emakefile, rebar and rebar.config are used to compile and run the application
  • The priv folder includes the default configuration file for the app
  • Also src, which contains the code. In it, you will see:
    • canillita.app.src: the application description file
    • canillita.erl: the main application module
    • canillita_news.erl: the persistency module
    • canillita_news_handler.erl: the http handler
    • canillita_sup.erl: the main application supervisor

Initial setup

Since this is a typical Erlang application, the first thing I created for it was the application description file. This file consists of a single Erlang tuple that, along with the application name, includes some high-level parameters for the app, like its description and vsn. It also includes two very important things:

  • mod which describes the starting point of our application (in our case, the module canillita)
  • applications which lists the other applications that should be started for canillita to run, in our case:
    • kernel and stdlib: default apps used by all Erlang applications
    • lager: the logging framework
    • cowboy: the web framework
    • emysql, because we'll use MySQL as our backend
    • sumo_db: the persistency framework
{application, canillita,
 [ {description, "Canillita: Simple Paperboy-themed PubSub"}
 , {vsn, "1.0"}
 , {mod, {canillita, []}}
 , {applications,
    [kernel, stdlib, lager, emysql, cowboy, sumo_db]}
 ]}.

In order to make sure we have those apps downloaded and compiled each time we want to compile or run canillita, we use rebar and we set it up in the rebar.config file. Of the multiple options you can put in rebar.config, I used just one: deps, which informs rebar about the project dependencies, telling it where each one is hosted and what branch/version we need for our system.

{deps, [
 {lager,   "2.*", {git, "",     "master"}},
 {emysql,  "0.*", {git, "", "master"}},
 {cowboy,  "0.*", {git, "",   "master"}},
 {jiffy,   "0.*", {git, "",    "master"}},
 {sumo_db, "1",   {git, "",   "master"}}
]}.

Then, using rebar, our Makefile can look as follows (don't worry about run:, you'll see how this works later):

NODE ?= canillita
REBAR ?= "./rebar"
CONFIG ?= "priv/app.config"
RUN := erl -pa ebin -pa deps/*/ebin -smp enable -s lager -boot start_sasl -config ${CONFIG} ${ERL_ARGS}

# NOTE: target names other than run/quick were lost in formatting
# and are reconstructed here
all:
	${REBAR} get-deps compile

quick:
	${REBAR} skip_deps=true compile

clean:
	${REBAR} clean

quick_clean:
	${REBAR} skip_deps=true clean

run: quick
	if [ -n "${NODE}" ]; then ${RUN} -name ${NODE}@`hostname` -s canillita; \
	else ${RUN} -s canillita; \
	fi

Application module

Every Erlang application has an application module, which is identified because it adheres to the application behavior, and also because it's the one listed in the mod attribute of the .app.src file. It usually carries the same name as the app itself. In our case, that module is canillita. The module includes the following functions:

%% @doc Starts the application
start() ->
  application:ensure_all_started(canillita).

%% @doc Stops the application
stop() -> application:stop(canillita).

%% @private
start(_StartType, _StartArgs) ->
  canillita_sup:start_link().

%% @private
stop(_State) -> ok.

The first two are the external API for the system. start/0 starts the application using one of the latest additions to the application module (ensure_all_started/1) and stop/0 stops the server. The other two are the behavior implementation functions that are called when the application is started or stopped. As you can see, the stopping part is not that interesting, while the starting one basically starts the application's main supervisor.

Main supervisor

Usual Erlang applications have one main supervisor process that controls the application. In our case that process is created using the canillita_sup module. This module is where the web server is configured and started. The only function required by the supervisor behavior is init/1. In that function we define the processes that will build up the main application in our system. In our case we just have one: our HTTP server that's started using the start_listeners/0 function. That specification, written in a not-so-pretty Erlang tuple, is what init/1 returns. But before that, this function does two other important things:

  • First of all, it "creates" our persistency schema (and in fact, sumo:create_schema() just updates it so if the schema is already created, it just stays as-is).
  • Then it creates a new group of processes using pg2, which is the simplest possible library (although not the most efficient one) for that. This group will eventually include all the users connected to the SSE endpoint.

init({}) ->
  sumo:create_schema(),
  ok = pg2:create(canillita_listeners),
  {ok, { {one_for_one, 5, 10},
    [ {canillita_http,
        {canillita_sup, start_listeners, []},
        permanent, 1000, worker,
        [canillita_sup]}
    ]}}.

So, as we said, start_listeners/0 should start our web server. And so it does. To understand how, just follow the comments I left inside it:

start_listeners() ->
  % Get application configuration
  % (the actual config key names were lost in this excerpt)
  {ok, Port} =
    application:get_env(canillita, http_port),
  {ok, ListenerCount} =
    application:get_env(canillita, http_listener_count),

  % Set up the server routes
  Dispatch =
    cowboy_router:compile(
      [{'_', [{"/news",
          canillita_news_handler, []}]}]),

  % Set up the options for the TCP layer
  RanchOptions =
    [ {port, Port} ],
  % Set up the options for the HTTP layer
  CowboyOptions =
    [ {env,       [{dispatch, Dispatch}]}
    , {compress,  true}
    , {timeout,   12000}
    ],
  % Start the cowboy http server
  cowboy:start_http(
    canillita_http, ListenerCount,
    RanchOptions, CowboyOptions).

…and that's it! Now we have our server running. But to process requests we have to implement canillita_news_handler as we said when specifying the /news route.

Web Request Handler

canillita_news_handler will be the main module for our application. It will handle all our web requests, because we defined just one route on our supervisor. This module implements two different request handler types defined in cowboy:

  • a REST handler (cowboy_rest), used to process POST requests
  • a loop handler, used to keep connections open for GET requests

Which handler to use is decided in the init function, according to the request method:

init(_Transport, Req, _Opts) ->
  case cowboy_req:method(Req) of
    {<<"POST">>, _} ->
      {upgrade, protocol, cowboy_rest};
    {<<"GET">>, Req1} ->
      handle_get(Req1)
  end.

REST handler

To implement a REST handler, we can provide several of the callback functions defined in the cowboy_rest protocol. In our case, we used the following functions:

%% Only POST is allowed as REST
allowed_methods(Req, State) ->
  {[<<"POST">>], Req, State}.

%% Only application/json is accepted,
%% and it's parsed using handle_post/2
content_types_accepted(Req, State) ->
  {[{ {<<"application">>, <<"json">>, []}
    , handle_post}], Req, State}.

%% resource_exists defaults to 'true', so we
%% need to change it, since every POST
%% should create a new event
resource_exists(Req, State) ->
  {false, Req, State}.

Then, to complete this part of the implementation, we need to write the handle_post function. That's where the internal logic for handling a POST lies:

handle_post(Req, State) ->
  %% Get the request body
  {ok, Body, Req1} = cowboy_req:body(Req),

  %% Decode it as JSON
  case json_decode(Body) of
    {Params} ->
      %% Extract the known properties from
      %% it, using default values if needed
      Title =
        proplists:get_value(
          <<"title">>, Params, <<"News">>),
      Content =
        proplists:get_value(
          <<"content">>, Params, <<"">>),
      %% Save it with sumo_db
      NewsFlash =
        canillita_news:new(Title, Content),

      %% Send a notification to listeners
      notify(NewsFlash),

      %% ...and return 204 No Content
      {true, Req1, State};
    {bad_json, Reason} ->
      %% Return 400 with the json encoded
      %% error
      {ok, Req2} =
        cowboy_req:reply(
          400, [],
          jiffy:encode(Reason), Req1),
      {halt, Req2, State}
  end.

In that code, the cowboy_req functions are pretty clear. The only weird thing about them is that, Erlang being a functional language, those functions can't "modify" the request object, so they return a new version of it instead.

There's also json_decode, a function that uses jiffy to decode the received body as JSON, traps exceptions, and formats the result in a prettier way.
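The article doesn't show json_decode itself. Here is a minimal sketch of the exception-trapping idea, with the decoder passed in as a fun so the wrapper stays library-agnostic; in canillita the fun would be jiffy:decode/1. The module name, the function name safe_decode, and the exact error shape are assumptions, not the article's code:

```erlang
%% safe_decode_sketch.erl -- hypothetical sketch, not canillita's code.
-module(safe_decode_sketch).
-export([safe_decode/2]).

%% Apply DecodeFun to Body; if decoding throws, wrap the
%% reason in a {bad_json, Reason} tuple instead of crashing.
safe_decode(DecodeFun, Body) ->
  try
    DecodeFun(Body)
  catch
    _Class:Reason -> {bad_json, Reason}
  end.
```

handle_post can then pattern-match on {Params} for success or {bad_json, Reason} for failure.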

And then we have canillita_news:new which is a sumo_db-related function that we will describe in deeper detail later.

Finally, you can see notify/1. That's the function that delivers events to those clients that are connected and listening to GET /news through SSE.

notify(NewsFlash) ->
  lists:foreach(
    fun(Listener) ->
      Listener ! {news_flash, NewsFlash}
    end, pg2:get_members(canillita_listeners)).

This function basically goes over the list of members of the canillita_listeners group (you'll see how listeners join this group in a moment) and sends a message to each one of them with the news flash.

Loop handler

The loop handler is composed of two functions, handle_get and info:

handle_get(Req) ->
  %% Set the response encoding properly for
  %% SSE
  {ok, Req1} =
    cowboy_req:chunked_reply(
      200, [{<<"content-type">>,
        <<"text/event-stream">>}], Req),

  %% Get the latest news from the database
  LatestNews = canillita_news:latest_news(),

  %% Send each one of them on the response
  lists:foreach(
    fun(NewsFlash) ->
      send_flash(NewsFlash, Req1)
    end, LatestNews),

  %% Join the canillita_listeners group
  ok =
    pg2:join(canillita_listeners, self()),

  %% Instruct cowboy to start looping and
  %% hibernate until a message is received
  {loop, Req1, undefined, hibernate}.

%% @doc this function is called on every
%%      message received by the handler
info({news_flash, NewsFlash}, Req, State) ->
  %% Send the news to the listener
  send_flash(NewsFlash, Req),
  %% Keep looping
  {loop, Req, State, hibernate}.

Once a connection is established, we retrieve all the previous news from the database (as we will soon see), send them all through the wire, join the listeners group, and then sleep (hibernate) until a new message is delivered to this process. When that happens, we send the new message through the wire and go back to sleep, until the connection is closed by the client.
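send_flash/2 is not listed in the article. Judging by the curl session earlier (an event: line, a data: line, then a blank line), it has to build an SSE frame from a news flash and push it as a chunk on the open request. A minimal sketch of the frame-building part follows; the module and function names are made up for this example:

```erlang
%% sse_frame_sketch.erl -- hypothetical helper, not canillita's code.
-module(sse_frame_sketch).
-export([sse_frame/2]).

%% Build an SSE frame as an iolist: the news title becomes the
%% event type and the content becomes the data payload.
sse_frame(Title, Content) ->
  [ <<"event: ">>, Title,   <<"\n">>
  , <<"data: ">>,  Content, <<"\n\n">> ].
```

send_flash/2 would then hand this iolist to cowboy_req:chunk/2, which is how cowboy streams pieces of a chunked response.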

Persistency layer

The last piece of the system is the persistency layer, canillita_news. It's implemented as a Sumo document. That behaviour defines three callbacks:

  • sumo_schema, used to create the table on the call to sumo:create_schema/0
  • sumo_sleep, used to convert an entity from its system representation to sumo's internal one
  • sumo_wakeup, used to convert an entity from sumo's internal representation to its system one

sumo_schema() ->
  sumo:new_schema(canillita_news,
    [ sumo:new_field(id,            integer,  [id, not_null, auto_increment])
    , sumo:new_field(title,         text,     [not_null])
    , sumo:new_field(content,       text,     [not_null])
    , sumo:new_field(created_at,    datetime, [not_null])
    , sumo:new_field(updated_at,    datetime, [not_null])
    ]).

sumo_sleep(NewsFlash) -> NewsFlash.

sumo_wakeup(NewsFlash) -> NewsFlash.

Sumo's internal representation is a proplist (i.e. a list of key/value tuples). In order not to further complicate things, we decided to use the same representation for canillita; that's why sumo_sleep and sumo_wakeup just return what they receive.
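To make that representation concrete, this is what a persisted news flash looks like as a proplist, and how a field is read back with proplists:get_value/2 (the values are illustrative, not real data):

```erlang
%% Illustrative only: a news flash as both canillita and sumo see it.
NewsFlash =
  [ {id,         1}
  , {title,      <<"The Title">>}
  , {content,    <<"The Content">>}
  , {created_at, {datetime, {{2013,11,6},{12,0,0}}}}
  , {updated_at, {datetime, {{2013,11,6},{12,0,0}}}}
  ],
%% Reading a field back boils down to:
proplists:get_value(title, NewsFlash).
```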

Besides the behaviour-defined functions, we wrote some other functions to abstract the internal representation of a news flash. They're pretty self-explanatory:

new(Title, Content) ->
  Now =
    {datetime, calendar:universal_time()},
  NewsFlash =
    [ {title,       Title}
    , {content,     Content}
    , {created_at,  Now}
    , {updated_at,  Now}],
  sumo:persist(canillita_news, NewsFlash).

get_id(NewsFlash) ->
  proplists:get_value(id, NewsFlash).

get_title(NewsFlash) ->
  proplists:get_value(title, NewsFlash).

get_content(NewsFlash) ->
  proplists:get_value(content, NewsFlash).

latest_news() ->
  sumo:find_all(canillita_news).


Simple as that: if properly configured, we now have an SSE / RESTful server capable of handling thousands of listeners on a single node. And it benefits from all the virtues of Erlang, like:

  • hot code swapping: you can change the code and compile it by just running make:all([load]) in the console, without turning the server off
  • easy multi-node scalability: pg2 is multi-node aware so adding new nodes is just a matter of turning them on and connecting them
  • process supervision: since all our processes are supervised, if one of them crashes it just gets restarted and the system keeps working as expected

And if you're a newcomer to Erlang, you can see that you don't need to learn a lot of stuff before you start working on your first real-life project. This one, for instance, has little if any code that is Erlang-intensive. So, thanks to the many libraries that are already out there, you can start with simple projects like this one and still satisfy some basic but important requirements in a complete way.