Sunday, October 4, 2015

Passport-http Digest Authentication and Express 4+ bug fix

Passport is a quite popular framework for implementing authentication under Node.JS and the Express Framework.

Passport-http is a very popular Passport plug-in that provides Digest Authentication support.

However, since Express 4, the default approach is for Express routes to be relative.

Example -
The default application in Express <=3 assumed that every file that extends Express with routes specifies the full path to the route, in a way similar to this:

app.js:
app.get('{fullpath}',function)...

Or:
require('routes/users')(app)

Where users.js does the same:

app.get('{fullpath}',function)...

The default app of Express 4+ uses relative routes, which is much better as it allows full isolation between the modules of the application.

Example:

app.js:
app.use('/rest',require('routes/rest'));

Where rest.js has:

var router = require('express').Router();
router.get('/data', function)... // where /data is a relative url and the actual one will be /rest/data
module.exports = router;

This improves readability. It also allows you to separate the authentication: you can have a different authentication approach or configuration for each module. And that works well with Passport.

For example:

app.js:
app.use(passport.initialize());
app.use(passport.session());

rest.js:
var DigestStrategy = require('passport-http').DigestStrategy;
... here goes the code for the authentication function using Digest ...

and then:
router.get('/data',authentication,function) ....
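For illustration, a minimal Digest setup might look like the sketch below. The user store and lookup here are placeholders, only the passport-http DigestStrategy API is assumed:

var passport = require('passport');
var DigestStrategy = require('passport-http').DigestStrategy;

// Hypothetical in-memory user store, for illustration only
var users = { admin: { name: 'admin', password: 'secret' } };

passport.use(new DigestStrategy({ qop: 'auth' },
    function (username, done) {
        var user = users[username];
        if (!user) return done(null, false);
        // Digest needs the plain password (or the HA1 hash) to verify the request
        return done(null, user, user.password);
    }
));

// The "authentication" middleware used in the route above
var authentication = passport.authenticate('digest', { session: false });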

This simplifies the code, makes it more readable, and nicely isolates the code necessary for authentication.

Personally, I write my own authentication functions in a separate module, then include them in the Express route module where I want to use them, and it becomes even simpler:

rest.js:
var auth = require('../libs/auth.js');
router.get('/data', auth('admins'), function) ...

I can even apply different permissions and roles - for example, if you have a pre-authenticated session, the interface will not ask you for authentication (saving one RTT), but if you don't, it will ask you for Digest authentication. Quite simple and quite readable.
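Such an auth(role) helper could look roughly like this. This is an illustrative sketch only - the role check and the user object layout are assumptions, not the actual code of my module:

// libs/auth.js - illustrative sketch only
var passport = require('passport');

module.exports = function auth(role) {
    return function (req, res, next) {
        // If the session is already authenticated and the role matches, skip the Digest challenge
        if (req.isAuthenticated && req.isAuthenticated() &&
            req.user && req.user.roles && req.user.roles.indexOf(role) >= 0) {
            return next();
        }
        // Otherwise fall back to Digest authentication
        return passport.authenticate('digest', { session: false })(req, res, next);
    };
};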

However, all of this does not work with Passport-http, because of a very small bug within it.

The bug:

For security reasons, the passport-http module verifies that the authentication URI from the client request is the same as the URL for which authentication was requested. However, the authentication URI (creds.uri) is always the full path, while it is compared to req.url, which is a relative path when the route lives behind a mounted router. The comparison has to be between creds.uri and req.baseUrl + req.url.

And this is the fix I proposed to the author of passport-http, which I hope will be merged into the code.
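Conceptually, the change boils down to something like this (an illustrative sketch of the idea, not the exact patch):

// Illustrative only - the actual check lives inside passport-http's DigestStrategy
var requestedUri = (req.baseUrl || '') + req.url;   // full path, e.g. '/rest' + '/data'
if (creds.uri !== requestedUri) {
    // the Digest "uri" field does not match the requested URL -> reject the request
    return this.fail(400);
}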

Friday, September 25, 2015

EmbeddedJS - async reimplementation

I like using Embedded JS together with RequireJS in very small projects. It is very small, lightning fast, and extremely simple, powerful and extensible.

However, there are hundreds of implementations, most of them out of date, and in particular the implementation I like is not supported anymore. By that I mean not only that the author is no longer maintaining it, but also that it works unpredictably in some browsers, because it relies on synchronous XMLHttpRequest, which is not allowed in the main thread anymore.

So I decided to rewrite that EJS implementation myself, in a way that allows me to use it in async mode.

So allow me to introduce the new Async Embedded JS implementation, which is here at https://github.com/delian/embeddedjs

It supports Node, AMD (requirejs) and globals (as the original does), and detects them automatically.
A little documentation can be found here https://github.com/delian/embeddedjs/blob/master/README.md

The new code is written in ES5 and uses the new Function() constructor. That makes it work only in modern browsers (IE10+), but it also makes it really, really fast - by current estimates, about twice as fast as the original.

It is still a work in progress (no avoidance of cached URLs, for example), but it works perfectly fine for me.

If you use it and hit a bug, please report it on the Issues page at GitHub.

Saturday, August 22, 2015

JavaScript private methods and variables

I am often told that JavaScript has no real private methods and variables, the same way Python or Perl do not.

Although that is true for Python and Perl (all scopes and their content are publicly accessible, and there is always a way to list and access the content), it is not true for JavaScript.

JavaScript has one of the most powerful scoping techniques available in a mainstream programming language, and it gives you the ability to hide content and make it private if needed.

Let me show you an example:

function MyClass() {
    var fullyPrivateVar = 'xxxx';
    this.notFullyPrivateVar = 'yyyy';

    function privateMethod() {
        return fullyPrivateVar;
    }

    this.publicMethodWithPrivateAccess = function() { return privateMethod() }
}

MyClass.prototype.publicMethodWithNoPrivateAccess = function() {
    return this.notFullyPrivateVar;
}

var x = new MyClass();
console.log(x.publicMethodWithNoPrivateAccess()); // prints: yyyy
console.log(x.publicMethodWithPrivateAccess());   // prints: xxxx

I think the example is self-explanatory, but let me say a few words about it:
In JavaScript you can have a hierarchy of scopes - you can define an unlimited number of functions within functions (and therefore scope levels), in contrast to other programming languages where you have a limited / fixed number of scope levels.

So I have privateMethod and fullyPrivateVar defined within MyClass. Because of that, only things defined in this scope have access to them - like publicMethodWithPrivateAccess, which I assign in the constructor to the newly constructed object. However, publicMethodWithNoPrivateAccess is defined outside of that scope and has no access to those values, although it does have access to everything on this. One could argue that this approach takes more memory for the methods, because I define a new method with every instantiation of the class (the function assignment). But that is not quite true: the inner function has static code (it is not built with eval plus a variable) and is compiled during the initial JavaScript processing by the JS virtual machine, so the code itself does not take extra memory. Only during execution is the function dynamically bound to the private scope of the constructor - something you want anyway.
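To illustrate the unlimited nesting of scopes, here is a tiny, contrived sketch where each inner function sees everything defined above it:

function level1() {
    var a = 1;
    function level2() {
        var b = 2;
        function level3() {
            var c = 3;
            return a + b + c; // the innermost scope sees a, b and c
        }
        return level3();
    }
    return level2();
}
console.log(level1()); // 6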

Friday, March 13, 2015

Node.JS module to access Cisco IOS XR XML interface

Hello to all,

This is an early version of my module for Node.JS that allows configuring routers and retrieving information over Cisco IOS XR's XML interface.

The module is in its early phases - it does not yet read IOS XR schema files and therefore decodes the data (into JSON) in a slightly ugly way (too many arrays). I am planning to fix that, so there may be changes in the responses.

Please see below the first version of the documentation I've put on GitHub:

Module for Cisco XML API interface IOS XR

This is a small module that implements an interface to the Cisco IOS XR XML Interface.
The module opens and maintains a TCP session to the router, sends requests and receives responses.

Installation

To install the module do something like this:
npm install node-ciscoxml

Usage

It is very easy to use this module. See the methods below:

Load the module

To load and use the module, you have to use code similar to this:
var cxml = require('node-ciscoxml');
var c = cxml( { ...connect options.. });

Module init and connect options

host (default 127.0.0.1) - the hostname of the router we'll connect to
port (default 38751) - the port on the router where the XML API is listening
username (default guest) - the username used for authentication, if a username is requested by the remote side
password (default guest) - the password used for authentication, if a password is requested by the remote side
connectErrCnt (default 3) - how many times it will retry to connect in case of an error
autoConnect (default true) - whether it should automatically connect to the remote side if a request is dispatched and there is no open session already
autoDisconnect (default 60000) - how many milliseconds we will wait for another request before the TCP session to the remote side is closed. If the value is 0, it will wait forever (or until the remote side disconnects). Bear in mind that autoConnect set to false does not imply autoDisconnect set to 0/false as well.
userPromptRegex (default (Username|Login)) - the rule used to identify that the remote side is asking for a username
passPromptRegex (default Password) - the rule used to identify that the remote side is asking for a password
xmlPromptRegex (default XML>) - the rule used to identify a successful login/connection
noDelay (default true) - disables the Nagle algorithm when set to true
keepAlive (default 30000) - enables or disables (with a value of 0) TCP keepalive for the socket
ssl (default false) - if set to true or to an object, an SSL session will be opened. The Node.js TLS module is used for that, so if ssl points to an object, the TLS options are taken from it. Be careful - enabling SSL does not change the default port from 38751 to 38752. You have to set it explicitly!
Example:
var cxml = require('node-ciscoxml');
var c = cxml( {
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

connect method

This method explicitly forces a connection. It accepts any of the options above.
Example:
var cxml = require('node-ciscoxml');
var c = cxml();
c.connect( {
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});
It is not necessary to use the connect method. If autoConnect is enabled (the default), the module automatically opens and closes TCP connections when needed.
Connect supports a callback. Example:
var cxml = require('node-ciscoxml');
cxml().connect( {
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
}, function(err) {
    if (!err)
        console.log('Successful connection');
});
The callback may be the only parameter as well. Example:
var cxml = require('node-ciscoxml');
cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
}).connect(function(err) {
    if (!err)
        console.log('Successful connection');
});
Example with SSL:
var cxml = require('node-ciscoxml');
var fs = require('fs');
cxml({
    host: '10.10.1.1',
    port: 38752,
    username: 'xmlapi',
    password: 'xmlpass',
    ssl: {
          // These are necessary only if using client certificate authentication
          key: fs.readFileSync('client-key.pem'),
          cert: fs.readFileSync('client-cert.pem'),
          // This is necessary only if the server uses a self-signed certificate
          ca: [ fs.readFileSync('server-cert.pem') ]
    }
}).connect(function(err) {
    if (!err)
        console.log('Successful connection');
});

disconnect method

This method explicitly closes the connection.
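Example (a minimal sketch - I am assuming here that it can be called without arguments):
c.disconnect();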

sendRaw method

.sendRaw(data,callback)
Parameters:
data - a string containing a valid Cisco XML request to be sent
callback - a function that will be called when a valid Cisco XML response is received
Example:
var cxml = require('node-ciscoxml');
var c = cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

c.sendRaw('<Request><GetDataSpaceInfo/></Request>',function(err,data) {
    console.log('Received',err,data);
});

sendRawObj method

.sendRawObj(data,callback)
Parameters:
data - a JavaScript object that will be converted to a Cisco XML request
callback - a function that will be called with the valid Cisco XML response converted to a JavaScript object
Example:
var cxml = require('node-ciscoxml');
var c = cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

c.sendRawObj({ GetDataSpaceInfo: '' },function(err,data) {
    console.log('Received',err,data);
});

rootGetDataSpaceInfo method

.rootGetDataSpaceInfo(callback)
Equivalent to .sendRawObj for the GetDataSpaceInfo command.
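Example (following the same pattern as sendRawObj above):
c.rootGetDataSpaceInfo(function(err,data) {
    console.log('Received',err,data);
});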

getNext method

Sends a GetNext request with a specific ID, so we can retrieve the rest of the previous operation if it has been truncated.
Parameters:
id - the iterator ID
callback - the callback with the data (in JS object format)
Keep in mind that the next response may be truncated as well, so you have to check for IteratorID every time.
Example:
var cxml = require('node-ciscoxml');
var c = cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

c.sendRawObj({ Get: { Configuration: {} } },function(err,data) {
    console.log('Received',err,data);
    if ((!err) && data && data.Response.$.IteratorID) {
        return c.getNext(data.Response.$.IteratorID,function(err,nextData) {
            // .. code to merge data with nextData
        });
    }
    // .. code
});

sendRequest method

This method is equivalent to sendRawObj, but it can automatically detect the need for, and issue, GetNext requests so that the response is absolutely complete. Therefore this should be the preferred method for sending requests that expect very large replies.
Example:
var cxml = require('node-ciscoxml');
var c = cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

c.sendRequest({ GetDataSpaceInfo: '' },function(err,data) {
    console.log('Received',err,data);
});

requestPath method

This method is equivalent to sendRequest, but instead of an object, the request may be formatted as a simple path string. This method is not very useful for complex requests, but its value is in how much it simplifies the simple ones. The response is a JavaScript object.
Example:
var cxml = require('node-ciscoxml');
var c = cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

c.requestPath('Get.Configuration.Hostname',function(err,data) {
    console.log('Received',err,data);
});

reqPathPath method

This is the same as requestPath, but the response is not an object - it is an array of paths. The method supports an optional filter, which has to be a RegExp object; all paths and values will be tested against it, and only those matching will be included in the response array.
Example:
var cxml = require('node-ciscoxml');
var c = cxml({
    host: '10.10.1.1',
    port: 5000,
    username: 'xmlapi',
    password: 'xmlpass'
});

c.reqPathPath('Get.Configuration.Hostname',/Hostname/,function(err,data) {
    console.log('Received',data[0]);
    // The output should be something like
    // [ 'Response("MajorVersion"="1","MinorVersion"="0").Get.Configuration.Hostname("MajorVersion"="1","MinorVersion"="0")',
    //   'asr9k-router' ]
});
This method could be very useful for getting simple responses and configurations.

getConfig method

This method requests the whole configuration of the remote device and returns it as an object.
Example:
c.getConfig(function(err,config) {
    console.log(err,config);
});

cliConfig method

This method is quite simple - it executes command(s) in CLI Configuration mode and returns the response as a JS object. Keep in mind that a configuration change in IOS XR is not effective unless it is committed!
Example:
c.cliConfig('username testuser\ngroup operator\n',function(err,data) {
    console.log(err,data);
    c.commit();
});

cliExec method

Executes command(s) in CLI Exec mode and returns the response as a JS object.
Example:
c.cliExec('show interfaces',function(err,data) {
    console.log(err, data ? data.Response.CLI[0].Exec[0] : null);
});

commit method

Commits the current configuration.
Example:
c.commit(function(err,data) {
    console.log(err,data);
});

lock method

Locks the configuration mode.
Example:
c.lock(function(err,data) {
    console.log(err,data);
});

unlock method

Unlocks the configuration mode.
Example:
c.unlock(function(err,data) {
    console.log(err,data);
});

Configure Cisco IOS XR for XML agent

To configure IOS XR for remote XML configuration you have to:
Ensure you have the mgbl package installed and activated! Without it you will have no xml agent commands!
Enable the XML agent with a similar configuration:
xml agent
  vrf default
    ipv4 access-list SECUREACCESS
  !
  ipv6 enable
  session timeout 10
  iteration on size 100000
!
You can enable tty and/or ssl agents as well!
(Keep in mind - full filtering of the XML access has to be done by the control-plane management-plane command! The XML interface does not use VTYs!)
You have to ensure you have correctly configured aaa, as the xml agent uses the default method for both authentication and authorization, and that cannot be changed (last verified with IOS XR 5.3).
You have to have both aaa authentication and authorization. If authorization is not set (aaa authorization default local or none), you may not be able to log in. You should also ensure that both authentication and authorization use the same source (tacacs+ or local).
The default agent port is 38751 for the default agent and 38752 for SSL.

Debugging

The module uses "debug" module to log its outputs. You can enable the debugging by having in your code something like:
require('debug').enable('ciscoxml');
Or setting DEBUG environment to ciscoxml before starting the Node.JS

Wednesday, March 11, 2015

ExtJS short highlight of a gridrow in case of an update

In my ExtJS code I often update the values of records that belong to Stores/Data Models bound to a Grid Panel.
Sometimes this is an automated update received from the web server (you can see an example here), sometimes not.
However, as a user I like to be able to see when data somewhere in the grid has been modified.

And creating code that briefly highlights the row or the cell where a change happened is actually not that hard.

Here is a small example of how you can do that - you just have to bind to the 'itemupdate' event of the table view. See the example below, which should be self-explanatory:

// ComponentQuery.query() returns an array of matches, so take the first one
Ext.ComponentQuery.query('#mytableview')[0].on('itemupdate', function(record, index, node) {
    node.className = node.className + ' x-grid-item-alt';
    setTimeout(function() {
        node.className = node.className.replace(/ ?x-grid-item-alt/, '');
    }, 500);
});

Monday, March 9, 2015

node.js module implementing EventEmitter interface using MongoDB tailable cursors as backend

I've published to npm a new module that I've used privately for a long time, which implements the EventEmitter interface using MongoDB tailable cursors as a backend.


This module can be used as a messaging bus between processes or even between node.js modules, as it allows implementing an EventEmitter without the need to share the object instance in advance.

Please see the first version of the README.md below:

Module for creating event bus interface based on MongoDB tailable cursors

The idea behind this module is to create an EventEmitter-like interface that uses MongoDB capped collections and tailable cursors as an internal messaging bus. This model has a lot of advantages, especially if you already use MongoDB in your project.
The advantages are:
You don't have to exchange the event emitter object between different pages or even different processes (forked, clustered, living on separate machines). As long as you use the same mongoUrl and capped collection name, you can exchange information. This way you can even create applications that run on different hardware and exchange events and data as if they were the same application! Also, your events are stored in a collection and could later be used as a transaction log (MongoDB's own oplog is implemented as a capped collection).
It simplifies application development very much.

Installation

To install the module run the following command:
npm install node-mongotailableevents

Short

It is easy to use this module. Look at the following example:
var ev = require('node-mongotailableevents');

var e = ev( { ...options ... }, callback );

e.on('event',callback);

e.emit('event',data);

Initialization and options

The following options can be used with the module
  • mongoUrl (default mongodb://127.0.0.1/test) - the URL to the mongo database
  • mongoOptions (default none) - Specific options to be used for the connection to the mongo database
  • name (default tailedEvents) - the name of the capped collection that will be created if it does not exist
  • size (default 1000000) - the maximum size of the capped collection (when reached, the oldest records will be automatically removed)
  • max (default 1000) - the maximum number of records in the capped collection
You can call and create a new event emitter instance without options:
var ev = require('node-mongotailableevents');
var e = ev();
Or you can call and create an event emitter instance with options:
var ev = require('node-mongotailableevents');
var e = ev({
   mongoUrl: 'mongodb://127.0.0.1/mydb',
   name: 'myEventCollection'
});
Or you can call and create an event emitter instance with options and a callback, which will be called when the collection is created successfully:
var ev = require('node-mongotailableevents');
ev({
   mongoUrl: 'mongodb://127.0.0.1/mydb',
   name: 'myEventCollection'
}, function(err, e) {
    console.log('EventEmitter',e);
});
Or you can call and create an event emitter with just a callback (and default options):
ev(function(err, e) {
    console.log('EventEmitter',e);
});

Usage

This module inherits EventEmitter, so you can use all of the EventEmitter methods. Example:
ev(function(err, e) {
    if (err) throw err;

    e.on('myevent',function(data) {
        console.log('We have received',data);
    });

    e.emit('myevent','my data');
});
The best feature is that you can exchange events between different pages or processes without having to exchange the eventEmitter object instance in advance and without any complex configuration, as long as both pages/processes use the same mongodb database (it could even be different replica servers) and the same "name" (the name of the capped collection). This way you can create massive clusters and a messaging bus distributed among multiple machines, without the need for any separate messaging system and its configuration.
Try a simple example - start two separate node processes with the following code and see the results:
var ev = require('node-mongotailableevents');
ev(function(err, e) {
    if (err) throw err;

    e.on('myevent',function(data) {
        console.log('We have received',data);
    });

    setInterval(function() {
        e.emit('myevent','my data'+parseInt(Math.random()*1000000));
    },5000);
});
You should see both messages received in the output of both processes.

Sunday, March 8, 2015

Example how to use node-netflowv9 and define your own netflow type decoders

This is an example of how you can use the node-netflowv9 library (version >= 0.2.5) to define your own decoders for proprietary NetFlow v9 types if they are not supported.
The example below adds decoding for types 33000, 33001, 33002 and 40000, used by Cisco ASA/PIX netflow:

var Collector = require('node-netflowv9');

var colObj = Collector(function (flow) { console.log(flow) });
colObj.listen(5000);

var aclDecodeRule = {
    12: 'o["$name"] = { aclId: buf.readUInt32BE($pos), aclLineId: buf.readUInt32BE($pos+4), aclCnfId: buf.readUInt32BE($pos+8) };'
};

colObj.nfTypes[33000] = { name: 'nf_f_ingress_acl_id', compileRule: aclDecodeRule };
colObj.nfTypes[33001] = { name: 'nf_f_egress_acl_id', compileRule: aclDecodeRule };
colObj.nfTypes[33002] = { name: 'nf_f_fw_ext_event', compileRule: { 2: 'o["$name"]=buf.readUInt16BE($pos);' } };
colObj.nfTypes[40000] = { name: 'nf_f_username', compileRule: { 0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len);' } };

Tuesday, March 3, 2015

node-netflowv9 node.js module for processing of netflowv9 has been updated to 0.2.5

My node-netflowv9 library has been updated to version 0.2.5

There are a few new things:
  • Almost all of the IETF netflow types are decoded now, which practically means we support IPFIX
  • An unknown NetFlow v9 type does not throw an error. It is decoded into a property named 'unknown_type_XXX', where XXX is the ID of the type
  • An unknown NetFlow v9 Option Template scope does not throw an error. It is decoded into 'unknown_scope_XXX', where XXX is the ID of the scope
  • The user can overwrite how different NetFlow types are decoded and can define their own decoding for new types. The same goes for scopes. And this can happen on the fly - at any time.
  • The library supports multiple netflow collectors running at the same time
  • A lot of new options and ways of using the library have been introduced
Below is the updated README.md file, describing how to use the library:

Usage

The usage of the netflowv9 collector library is very very simple. You just have to do something like this:
var Collector = require('node-netflowv9');

Collector(function(flow) {
    console.log(flow);
}).listen(3000);
Or you can use it as an event provider:
Collector({port: 3000}).on('data',function(flow) {
    console.log(flow);
});
The flow will be presented in a format very similar to this:
{ header: 
  { version: 9,
     count: 25,
     uptime: 2452864139,
     seconds: 1401951592,
     sequence: 254138992,
     sourceId: 2081 },
  rinfo: 
  { address: '15.21.21.13',
     family: 'IPv4',
     port: 29471,
     size: 1452 },
  packet: Buffer <00 00 00 00 ....>
  flow: [
  { in_pkts: 3,
     in_bytes: 144,
     ipv4_src_addr: '15.23.23.37',
     ipv4_dst_addr: '16.16.19.165',
     input_snmp: 27,
     output_snmp: 16,
     last_switched: 2452753808,
     first_switched: 2452744429,
     l4_src_port: 61538,
     l4_dst_port: 62348,
     out_as: 0,
     in_as: 0,
     bgp_ipv4_next_hop: '16.16.1.1',
     src_mask: 32,
     dst_mask: 24,
     protocol: 17,
     tcp_flags: 0,
     src_tos: 0,
     direction: 1,
     fw_status: 64,
     flow_sampler_id: 2 } ] }
There will be one callback for each packet, which may contain more than one flow.
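For example, here is a minimal sketch that prints one line per flow, based on the sample structure above (which fields are present depends on what the agent exports):

var Collector = require('node-netflowv9');

Collector(function(packet) {
    packet.flow.forEach(function(flow) {
        console.log(flow.ipv4_src_addr + ' -> ' + flow.ipv4_dst_addr +
            ' proto ' + flow.protocol + ' bytes ' + flow.in_bytes);
    });
}).listen(3000);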
You can also access a NetFlow decode function directly. Do something like this:
var netflowPktDecoder = require('node-netflowv9').nfPktDecode;
....
console.log(netflowPktDecoder(buffer))
Currently we support netflow versions 1, 5, 7 and 9.

Options

You can initialize the collector with either a callback function only, or a group of options within an object.
The following options are available during initialization:
port - defines the port our collector will listen on.
Collector({ port: 5000, cb: function (flow) { console.log(flow) } })
If no port is provided, the underlying socket will not be initialized (bound to a port) until you call the listen method with a port as a parameter:
Collector(function (flow) { console.log(flow) }).listen(port)
cb - defines a callback function to be executed for every flow. If no callback function is provided, the collector fires a 'data' event for each received flow
Collector({ cb: function (flow) { console.log(flow) } }).listen(5000)
ipv4num - defines that we want to receive the IPv4 address as a number, instead of decoded into a readable dotted format
Collector({ ipv4num: true, cb: function (flow) { console.log(flow) } }).listen(5000)
socketType - defines which socket type we will bind to. The default is udp4. You can change it to udp6 if you like.
Collector({ socketType: 'udp6', cb: function (flow) { console.log(flow) } }).listen(5000)
nfTypes - defines your own decoders for NetFlow v9+ types
nfScope - defines your own decoders for NetFlow v9+ Option Template scopes

Define your own decoders for NetFlow v9+ types

NetFlow v9 can be extended with vendor-specific types, and many vendors define their own. No netflow collector in the world decodes all the vendor-specific types. By default this library decodes into readable format all the types it recognises. All the unknown types are decoded as 'unknown_type_XXX', where XXX is the type ID, and the data is provided as a HEX string. But you can extend the library yourself. You can even replace how existing types are decoded, and you can do that on the fly (dynamically change how a type is decoded at different points in time).
To understand how to do that, you have to learn a bit about the internals of how this module works.
  • When a new flowset template is received from the NetFlow Agent, this netflow module generates and compiles (with new Function()) a decoding function
  • When a netflow packet is received for a known flowset template (we have a compiled function for it) - the function is simply executed
This approach is quite simple and provides enormous performance. The function code is as small as possible, and on first execution Node.JS compiles it with the JIT, so the result is really fast.
The function code is generated from templates that contain the javascript code to be added for each netflow type, identified by its ID.
Each template consists of an object of the following form:
{ name: 'property-name', compileRule: compileRuleObject }
compileRuleObject contains rules for how that netflow type is to be decoded, depending on its length. The reason is that some of the netflow types have variable length, and you may have to execute different code to decode them depending on the length. The compileRuleObject format is simple:
{
   length: 'javascript code as a string that decode this value',
   ...
}
There is a special length property of 0. That code will be used if there is no more specific decode defined for a given length. For example:
{
   4: 'code used to decode this netflow type with length of 4',
   8: 'code used to decode this netflow type with length of 8',
   0: 'code used to decode ANY OTHER length'
}

decoding code

The decoding code must be a string that contains javascript code. This code will be concatenated into the function body before compilation. If that code contains errors or simply does not work as expected, it could crash the collector, so be careful.
There are a few variables you have to use:
$pos - this string is replaced with a number containing the current position of the netflow type within the binary buffer.
$len - this string is replaced with a number containing the length of the netflow type
$name - this string is replaced with a string containing the name property of the netflow type (defined by you above)
buf - the Node.JS Buffer object containing the Flow we want to decode
o - this is the object where the decoded flow is written to.
Everything else is pure javascript. It is good to know the restrictions that javascript and Node.JS place on code created with the Function() method, but it is not necessary in order to write simple decoders yourself.
If you want to decode a string of variable length, you could write a compileRuleObject of the form:
{
   0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
}
The example above says that for this netflow type, whatever length it has, we will decode the value as a utf8 string.

Example

Let's assume you want to write your own code for decoding a NetFlow type, let's say 4444, which could be of variable length and contains an integer number.
You can write code like this:
Collector({
   port: 5000,
   nfTypes: {
      4444: {   // 4444 is the NetFlow Type ID which decoding we want to replace
         name: 'my_vendor_type4444', // This will be the property name, that will contain the decoded value, it will be also the value of the $name
         compileRule: {
             1: "o['$name']=buf.readUInt8($pos);", // This is how we decode type of length 1 to a number
             2: "o['$name']=buf.readUInt16BE($pos);", // This is how we decode type of length 2 to a number
             3: "o['$name']=buf.readUInt8($pos)*65536+buf.readUInt16BE($pos+1);", // This is how we decode type of length 3 to a number
             4: "o['$name']=buf.readUInt32BE($pos);", // This is how we decode type of length 4 to a number
             5: "o['$name']=buf.readUInt8($pos)*4294967296+buf.readUInt32BE($pos+1);", // This is how we decode type of length 5 to a number
             6: "o['$name']=buf.readUInt16BE($pos)*4294967296+buf.readUInt32BE($pos+2);", // This is how we decode type of length 6 to a number
             8: "o['$name']=buf.readUInt32BE($pos)*4294967296+buf.readUInt32BE($pos+4);", // This is how we decode type of length 8 to a number
             0: "o['$name']='Unsupported Length of $len'"
         }
      }
   },
   cb: function (flow) {
      console.log(flow)
   }
});
It looks a bit complex, but actually it is not. In most cases, you don't have to define a compile rule for each different length. The following example defines decoding for a netflow type 6789 that carries a string:
var colObj = Collector(function (flow) {
      console.log(flow)
});

colObj.listen(5000);

colObj.nfTypes[6789] = {
    name: 'vendor_string',
    compileRule: {
        0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
    }
}
As you can see, we can also change the decoding on the fly, by defining a property for that netflow type within the nfTypes property of colObj (the Collector object). The next time the NetFlow Agent sends us a NetFlow Template definition containing this netflow type, the new rule will be used (the routers usually re-send templates from time to time, so even currently compiled templates get recompiled).
You could also overwrite the default property names where the decoded data is written. For example:
var colObj = Collector(function (flow) {
      console.log(flow)
});
colObj.listen(5000);

colObj.nfTypes[14].name = 'outputInterface';
colObj.nfTypes[10].name = 'inputInterface';

Logging / Debugging the module

You can use the debug module to turn on logging in order to debug how the library behaves. The following example shows you how:
require('debug').enable('NetFlowV9');
var Collector = require('node-netflowv9');
Collector(function(flow) {
    console.log(flow);
}).listen(5555);

Multiple collectors

The module allows you to define multiple collectors at the same time. For example:
var Collector = require('node-netflowv9');

Collector(function(flow) { // Collector 1 listening on port 5555
    console.log(flow);
}).listen(5555);

Collector(function(flow) { // Collector 2 listening on port 6666
    console.log(flow);
}).listen(6666);

NetFlowV9 Options Template

NetFlowV9 supports Options Templates, where there can be an Options Flow Set that contains data for predefined fields within a certain scope. This module supports the Options Template and provides its output like any other flow. The only difference is that there is a property isOption set to true, to remind your code that this data came from an Options Template.
Currently the following nfScope entries are supported - system, interface, line_card, netflow_cache. You can overwrite their decoding, or add new ones, the same way (and using absolutely the same format) as you overwrite nfTypes.
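For example, here is a purely illustrative sketch of redefining one of them, mirroring the nfTypes format above (the scope ID and the decoding rule are just examples, not part of the documented API):

colObj.nfScope[3] = {              // 3 - the scope ID we want to (re)define, illustrative only
    name: 'line_card',
    compileRule: {
        0: 'o["$name"] = buf.toString("hex",$pos,$pos+$len)'   // dump the raw value as hex, whatever the length
    }
};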