Tuesday, November 4, 2014

React and the compiler

During the last few months, React.js has sailed up to the position of "the coolest thing ever" in the JavaScript world, effectively pushing Angular down a notch.
This week I started building a trading app for a client, with high performance demands and a lot of real time data being pushed from the server. I thought that React was a perfect match and started building a proof of concept.
The praise I've heard about React promises great performance, and my first metrics show that it seems to live up to its hype.
I must say it's really cool! Plus the fact that I get to try out webpack, which seems just as cool, feels great!

But...
There's one thing that really bothers me. And that's the use of a compiler. I know, a lot of different frameworks use a compiler these days, so what's the big deal?
Well, let me explain.
I am using the webpack react-loader, which compiles the jsx files to regular JavaScript. That is cool, because you can mix HTML and JavaScript in the same files without having to quote strings. You can even add a "harmony" setting to the loader, which lets you use some, but not all, parts of ES6. Arrow functions, for instance, just work, being compiled to ES5 functions. But try generators and you're out of luck: there's no support for them yet, and if you use them you'll get a compilation error.
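For reference, this is roughly what my loader setup looks like in the webpack config (a minimal sketch; I'm writing the loader as jsx-loader with the ?harmony query here, so adjust the name if your loader package differs):

// webpack.config.js (sketch)
module.exports = {
  entry: './app/main.jsx',
  output: { path: __dirname + '/build', filename: 'bundle.js' },
  module: {
    loaders: [
      // the "harmony" setting enables the supported subset of ES6
      { test: /\.jsx$/, loader: 'jsx-loader?harmony' }
    ]
  }
};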

Is that a problem? I believe it is.
Soon browsers will start to support different parts of ES6, for instance generators (which Firefox already does, by the way).
So there will be stuff the browser supports but the compiler doesn't, meaning you can't use it. I get flashbacks to my Java background, where you often couldn't use the latest version of the Java language because the app server didn't support it...

But can't we use another compiler, like Traceur, to convert the generators to ES5 code? No, you can't. Because jsx files contain HTML, Traceur fails if you run it first. And the jsx compiler fails if you run it first on jsx files containing generators.

We're in the hands of the React developers to keep up with the ES6 spec if we want to use the latest features.
The JavaScript world is moving blazingly fast, and what's trendy today might be dead tomorrow. What if we build all those applications using React, only to find out that Facebook stops developing it? Then we'd probably never be able to use any newer ECMAScript features than those supported by the React compiler now, in 2014.

A better way
I believe a better way would be to follow the sweet.js motto: "stop building compilers, build macros!"
There's not that big a difference, but a sweet.js macro only replaces syntax that it recognizes. It doesn't crash if it finds something that it doesn't recognize.
Using macros instead of a compiler would allow us to use whatever syntax the target browser supports, and replace the unsupported parts with ES5 code. And later on, when browsers start to support those ES6 features, just remove the macro and let the code pass through.

Any framework I use should let me do at least as much as I could do without it. If it robs me of some JavaScript goodies, I'm not that happy with it.

Wednesday, May 7, 2014

Migrating a Backbone.js application to Angular.js

Today I completed the first part of migrating our Backbone based web app over to Angular.
Our app is fairly complex, built with Backbone.Marionette and require.js. We use backbone.iobind together with socket.io to make sure any user is always viewing the latest data from the backend. We use Backbone.Forms to render html from json, and Backbone.ModelBinder to bind input fields to models. All of the above functionality can be achieved with Angular core + some plugins, but I like to do this sort of task with as small steps as possible.

My initial goal is to change as little code as possible, keep all of the functionality, while using Angular for routing and dependency injection. 

Step 1: create an Angular app:

yo angular

(In case you don't know what yo means, check out Yeoman.)

When you're working against a REST backend with Angular, you will probably use either $http or $resource. But your options don't end with the built-in Angular modules or with Angular plugins; you can use whatever library you want, for instance Backbone. This means I can keep our Backbone models and collections, while focusing on getting our views and controllers right.

To achieve this I created an Angular service which contains all of the Backbone Models and Collections. I don't really mind having all of them in the same file, since my plan is to gradually remove them and use Angular standards instead.

But until then, this is what my backbone model service looks like now:

angular.module('myAngularApp')
    .service('Restapi', function Restapi() {
        this.MyModel = Backbone.Model.extend({
            // ...
        });
        // ...the rest of the Backbone models and collections go here
    });
In the original application I had a lot of logic in views (Region, Composite and ItemViews). For instance, in one view I create a Backbone.Form from a json schema, fetch a Backbone model and bind it to the schema. To make this work with Angular I just copied that code into Angular controllers instead, like this:

angular.module('myAngularApp')
    .controller('MyCtrl', function($scope, Restapi) {
        $scope.myModel = new Restapi.MyModel({
            id: 123456
        });
        $scope.myModel.fetch({
            success: function() {
                var form = new Backbone.Form({
                    model: $scope.myModel,
                    schema: someJson
                });
                form.render();
                $('#some-id-to-place-the-form').append(form.el);
            }
        });
    });
So this is more or less how I did it, just a lot of copying and pasting the old Backbone code into my newly created Angular project. Now I can continue to remove Backbone, one piece at a time until I no longer need the Backbone dependencies.

Monday, April 28, 2014

Why node.js matters

Javascript is not my favorite programming language.
Even with CoffeeScript or sweet.js, which add a lot of syntactic sugar to Javascript, I still prefer Ruby or Python.
So why would I choose Javascript in the back end of a web app?

Let's start by thinking about the front end. While it's true that you can choose whatever language you like for the back end, you really don't have a choice for the front end. If you want to build a modern web app you have to use Javascript. (Yes, I know, you can use a framework like GWT where you write all of your front end code in Java, but looking at the trends it's obviously not a solution for the future.)

Learning Javascript takes time. A lot of time. There are many tools and libraries available to help with various things like unit testing, minifying the code, handling older browsers and whatnot. Not to mention all of the different MV* frameworks, utility libraries and template engines. I've now spent a year trying to master the Javascript ecosystem, but I still feel I've only skimmed the surface.

Recruiting a good programmer is hard these days. Finding a good Javascript programmer is a lot harder. But try to find a good Javascript programmer who has also mastered your specific back end stack (perhaps Java + Spring MVC + Hibernate + many more)? That's almost impossible. That's the reason why a majority of the projects you see have two separate teams: one building the front end and another building the back end.

Having separate teams for the front end and the back end presents a whole bunch of deadline-related problems.

It becomes obvious during sprint planning: some sprints will consist mostly of back end tasks, while the front end team more or less waits for the back end, and other sprints will be the other way around.

Or the back end team spends time building mock-ups which, even in the cases where the mock-ups work fine until the real functionality is implemented, is time spent on things that will never be put into production.

Other times two or more developers will be sick or on vacation. If they are all on the same team, it could jeopardize the deadline, since you can't replace them with developers from the other team.

If you choose Node for the back end and Javascript for the front end, you can have one team of developers, all of them jumping between different parts of the application when needed.
Do you want to know a secret?
Your front end developers already use Node. They just haven't told you about it. Go ahead and ask them if they're using Grunt. Or Bower. Or perhaps Browserify. Or npm. All of these are using Node under the hood. Your front end developers already have the tools necessary to build a Javascript application using Node, and most of them will be delighted if they can use the same tools to build your back end.

To boldly go where many men have gone before!
Go ahead and try it, and you will have one team of developers that is a lot happier, delivers more functionality in less time, and writes fewer lines of code.
You should have a really good reason not to choose Node for your back end!


Thursday, April 17, 2014

Testing promises with Jasmine

Let's say I have this code (using lie.js for promises, but it should work about the same with q.js or Angular's $q):

  Foo = {bar:()->
    new Promise (resolve, reject)->
      resolve(true)
  }

If I want to test that it works, I write a spec. To make it easier to read, I will include the code under test in the spec:


describe 'Foo', () ->
  Foo = {bar:()->
    new Promise (resolve, reject)->
      resolve(true)
  }

  it 'should wait until promise resolves', () ->
    self = this

    Foo.bar()
    .then (baz)->
      baz.should.be true

I now watch it run with success:
 
Done, without errors. 

Cool, it now works. Or does it?

I change the test to something that should fail, like this:

'use strict'

describe 'Foo', () ->
  Foo = {bar:()->
    new Promise (resolve, reject)->
      resolve(true)
  }

  it 'should wait until promise resolves', () ->
    self = this

    Foo.bar()
    .then (baz)->
      true.should.be false

If we run it, it still succeeds:
 
Done, without errors. 

Obviously not quite working...
The reason is that the code in the 'then' clause never executes, because the test finishes too early.

I then add a waitsFor like this:

describe 'Foo', () ->
  Foo = {bar:()->
    new Promise (resolve, reject)->
      resolve(true)
  }

  it 'should wait until promise resolves', () ->
    self = this

    waitsFor () -> self.hasRun

    Foo.bar()
    .then (aBool)->
      true.should.be false
      self.hasRun = true 
 
With the following result:
 
Done, without errors.  
 
Hmm, this is not what I expected. It should fail, right? The reason isn't quite obvious: any error thrown inside the then callback is swallowed by the promise (it just rejects the returned promise), so Jasmine never sees it. Which really isn't what I want.
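Here is a small plain-JavaScript illustration of the swallowing (my own addition, not part of the original spec): the error thrown in the first then callback only rejects the promise that then returns, and attaching a rejection handler is the only way to see it.

Foo.bar()
  .then(function (baz) {
    // this throw never reaches Jasmine, it only rejects the returned promise
    throw new Error('the spec never sees this');
  })
  .then(null, function (err) {
    console.error('swallowed by the promise:', err.message);
  });

If I change to the following code: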
 
describe 'Foo', () ->
  Foo = {bar:()->
    new Promise (resolve, reject)->
      resolve(true)
  }

  it 'should wait until promise resolves', () ->
    self = this

    waitsFor () -> self.hasRun

    Foo.bar().then (baz)->
          self.baz = true
          self.hasRun = true

    self.baz.should.be true

The underlying problem is still there, now showing up as: TypeError: Cannot read property 'should' of undefined

We have to wrap the assertion in a runs() block, which waits until the waitsFor condition is met before executing:

describe 'Foo', () ->
  Foo = {bar:()->
    new Promise (resolve, reject)->
      resolve(true)
  }

  it 'should wait until promise resolves', () ->
    self = this

    waitsFor () -> self.hasRun

    Foo.bar().then (baz)->
          self.baz = true
          self.hasRun = true

    runs () ->
      self.baz.should.equal true


Done, without errors. 

And if I change the last line to something that should fail:
      false.should.equal true
 
AssertionError: expected false to equal true 
 
So it now runs the spec as expected. And in case you don't habla coffee, here is the same test written in Javascript:
 

describe('Foo', function() {
  var Foo;
  Foo = {
    bar: function() {
      return new Promise(function(resolve, reject) {
        resolve(true);
      });
    }
  };
  it('should wait until promise resolves', function() {
    var self = this;
    waitsFor(function() {
      return self.hasRun;
    });
    Foo.bar().then(function(baz) {
      self.baz = true;
      self.hasRun = true;
    });
    runs(function() {
      self.baz.should.equal(true);
    });
  });
});  

Friday, April 4, 2014

What's the deal with sweet.js?

I had heard people praising sweet.js, but couldn't figure out what all the buzz was about. I decided to give it a go, and found it really good. I'm going to try to explain why I like it.

Sweet.js is quite different from CoffeeScript, TypeScript or Dart, which are languages that compile to Javascript.

Sweet.js, on the other hand, is not another language. In fact, after you've set up your environment to build your Sweet.js files, nothing changes at first: you still write the exact same Javascript as before. What you get is the possibility to enhance the language.

Let's say you find yourself writing this repeatedly:

JSON.stringify(obj)

You could install a macro from npm:

npm install git+https://github.com/me97esn/conosle-macros.git --save-dev

that lets you write the following code instead:

obj as string

When sweet.js finds the above 'as string' it replaces it with JSON.stringify(obj).
But everything else is just plain Javascript.

Or let's say you would like to use the following syntax to create a range:

0 ... 10

That's totally possible. Sweet.js will replace it with
[
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
];

That macro is included in the npm package installed above. Just tell the sweet.js grunt task to use conosle-macros/range.sjs (or conosle-macros/macros.sjs to get all of them) and you're set.

Just like ES6 fat arrows, which allow you to write

() => {}

instead of

function(){}

So, by installing the above macros, I can now write:

(0 ... 10).map((x) => x * x)

Instead of writing

[
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
].map(function (x) {
    return x * x;
}.bind(this));

Pretty neat, isn't it?

And what if you have different needs than I do? Well, writing macros yourself is pretty straightforward.
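To give you an idea of what a definition looks like, here is a minimal sketch of a toy rule macro (my own example, not one of the macros from the package above):

// swap.sjs -- sweet.js only rewrites code that matches the pattern below,
// everything else passes through untouched
macro swap {
  rule { ($a, $b) } => {
    var tmp = $a;
    $a = $b;
    $b = tmp;
  }
}

var x = 1, y = 2;
swap(x, y);   // expands in place to the three statements in the template

Just go ahead: create some macros, publish them on npm and make the Javascript world a better place!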

Tuesday, March 11, 2014

Building Cordova apps with Yeoman

It is really quite easy to get Yeoman and Cordova to work together:
First install all the required stuff:

$ npm install -g yo
$ npm install cordova -g

In this example I am using the angular generator, but it should work with any Yeoman generator:

$ npm install -g generator-angular

Now, create a Yeoman app:

$ mkdir myAngularApp
$ cd myAngularApp
$ yo angular

Then, create a subfolder where the cordova stuff will live:

$ cordova create cordova
$ cd cordova

Add some platforms (ios, android). Note that each platform requires its own SDK, which you will have to install. I am using ios.

$ cordova platform add ios  

Back to the Yeoman app
Aaaalmost done. We just have to make some changes to our Gruntfile. We want grunt to copy all of our Yeoman stuff into the cordova app, so open the Gruntfile.

First, change the dist folder to
dist: 'cordova/www'
This means that any time we run
$ grunt build
our minified scripts will be put in the cordova folder.
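For reference, in the Gruntfile that generator-angular produces, the paths live in a config object near the top, so the change looks roughly like this (the exact shape of the object depends on your generator version):

// Gruntfile.js (excerpt, sketch)
grunt.initConfig({
  yeoman: {
    app: 'app',
    dist: 'cordova/www'   // was 'dist'
  },
  // ...the rest of the generated config
});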

But it is really difficult to debug minified code, so we also want to be able to publish our source code. Add the following to the copy section of the Gruntfile:

cordova: {
        files: [{
          expand: true,
          dest: 'cordova/www',
          cwd: 'app',
          src: [
            '**',
          ]
        }]
      },


Now, by running

$ grunt copy:cordova

 we copy all of our source code over to cordova.

Now try it out by running

$ grunt copy:cordova
$ cd cordova
$ cordova run ios

This should open up an ios simulator, or deploy to an ios device if it is connected via usb cable to the computer.

Friday, September 20, 2013

Backgrid and CouchDB changes

According to the Backgrid documentation, the way to save our Backbone models to the database is to build our models like this:
var MyModel = Backbone.Model.extend({
  initialize: function () {
    Backbone.Model.prototype.initialize.apply(this, arguments);
    this.on("change", function (model, options) {
      if (options && options.save === false) return;
      model.save();
    });
  }
});
That looks good at first, but it becomes a real problem if we use CouchDB and _changes to get real time data in our app. What happens is: we change our model, and it gets persisted to Couch. Couch informs all listeners that a document has changed, which means that our Backgrid app will change the model, which in turn triggers the change event on our model. Then: a new save to Couch, and so forth. Can anybody spot an infinite loop? What we did instead was to keep our models simple, like this:
var MyModel = Backbone.Model.extend({
});
And then, in the view where we instantiate the grid, include the following:

var bg = new Backgrid.Grid({
    columns: gridColumns,
    collection: self.collection,
    footer: footer
});

bg.listenTo(self.collection,"backgrid:edited",function(model){
    if(Object.getOwnPropertyNames(model.changed).length > 0){
        model.save();
    }
});