Wednesday, October 22, 2014

My perl-cwmp patches are merged


Hello,
I've used perl-cwmp here and there. It is a nice, really small, really light and simple TR-069 ACS, with a very easy install and no heavy requirements. You can read the whole code for few minutes and you can make your own modifications. I am using it in a lot of small "special" cases, where you need something fast and specific, or a very complex workflow that cannot be implemented by any other ACS server.

However, this project has been stalled for a while. I've found that a lot of modern TR-069/CWMP agents do not work well with perl-cwmp.

There are quite a few reasons behind those problems:

- Some of the agents are very strict - they expect the SOAP message to be formatted in a specific way, which is not the way perl-cwmp does it
- Some of the agents are compiled with a not-so-smart, static expansion of the CWMP XSD file. That means they expect a strict type specification in the SOAP message and strict element ordering

perl-cwmp does not "compile" the CWMP XSD, does not send strictly formatted requests, and does not interpret the responses strictly. It does not automatically set the correct property type in a request according to the spec, because it never reads the spec. It always assumes that the property type is a string.

To allow perl-cwmp to work with those types of TR-069 agents I've made a few modifications to the code, and I am happy to announce they have been accepted and merged into the main code:

The first modification is that I've updated the SOAP header according to the current standard. It was incorrectly set, and many TR-069 devices I have tested (basically all that use the Broadcom TR-069 client) rejected the request.

The second modification is that every property may now have a specified type. Unless you specify a type, it is always assumed to be a string. This allows the ACS to set property values on agents that do a strict type check:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#60

The #...# marker specifies the type of the property. In the example above, we are setting PeriodicInformInterval to the unsignedInt value 60.
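Here is another, hypothetical example of the same #type# syntax, assuming your device exposes the standard boolean parameter UpgradesManaged:

InternetGatewayDevice.ManagementServer.UpgradesManaged: #xsd:boolean#true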

You can also set the value of a property by reading the value of another property.
For that you can use ${ property name }

Here is an example of how to set the PPP password to the value of the serial number:

InternetGatewayDevice.WANDevice.1.WANConnectionDevice.1.WANPPPConnection.1.Password: ${InternetGatewayDevice.DeviceInfo.SerialNumber}

And last but not least - you can now execute a small piece of code, or an external script, and set the value of a property to the output of that code. You can do that with $[ code ]

Here is an example of how to set a random value for the PeriodicInformInterval:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[60 + int(rand(100))]

Here is another example, showing how to execute an external script that makes this decision:
InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[ `./externalscript.sh ${InternetGatewayDevice.LANDevice.1.LANEthernetInterfaceConfig.1.MACAddress} ${InternetGatewayDevice.DeviceInfo.SerialNumber}` ]

The last modification I've made allows perl-cwmp to "fork" a new process when a TR-069 request arrives. The code used to be single threaded, which means each agent had to wait until the previous task was completed. However, if the TCP listening queue is full, or the ACS is very busy, some of the agents will assume there is no response and time out. You may then have to wait up to 24 hours (the default periodic inform interval for some vendors) until you get the next request. Now that can be avoided.

All of this is very valuable for dynamic and automated configurations: there is no need to modify the core code, just the configuration file.

Saturday, October 4, 2014

Why MVC?

As you all probably know, the MVC approach has been very popular lately. MVC stands for Model-View-Controller, and it expects your data, your visualization and your gluing and managing code to be fully separated into separate files (they are not truly independent, as they are linked to each other in the same program). As I come from the world of system and embedded programming, it was hard for me to understand the reason behind this.
Instinctively, I thought it should somehow be related to ease of development. Maybe it makes it easier to separate the work of the UI designers, the back-end communication (and development) and the UI execution control. You can easily split the work among different people with different skills, I thought. But now I realize it is something absolutely different.
It is maintainability, therefore easier support!
And it is best illustrated with HTML.

You can easily insert JavaScript code directly within an HTML tag:
<INPUT TYPE=BUTTON onClick="alert('blabla')" VALUE="Click Me!">

If you go for the MVC approach, you should have separate code that does something like this:
Separate HTML:
<INPUT ID="myButton" TYPE=BUTTON VALUE="Click Me!">

Separate JavaScript:
document.getElementById("myButton").addEventListener("click", function() { alert('blabla') })

It is obvious - MVC is more expensive in terms of code, structure, style and preparation. So why walk this extra mile? Some programmers with my background would usually say it has overhead and is therefore inefficient to program with.

However, if you have software that has to be rewritten constantly - introducing new functionality and new features, fixing bugs - you have a lot of other issues to deal with. Your major problem will be the maintainability and readability of your code.
And I am sure everyone will agree that having all your control code and execution flow merged in the same code structure is much better than having them scattered among a lot of data processing code and UI visualizations.

If you have a huge HTML file with a lot of JavaScript code bound directly into the tags (non-MVC), it is extremely hard to keep in mind all the events that happen and the order in which the code executes. MVC makes that much, much easier, even though in the beginning it may be costly, with some extra overhead.

Wednesday, October 1, 2014

Sencha ExtJS grid update in real time from the back-end

Hello to all,

I love using Sencha ExtJS in some projects, as it is the most complete JavaScript UI framework, even though it is kind of slow to react and expensive in CPU and memory. ExtJS allows very fast and lazy development of otherwise complex UIs, and especially if you use Sencha Architect you can minimize the UI development time, focusing only on the important parts of your code.

However, ExtJS has quite a few drawbacks - missing features, and some things that are overly complex and hard for an inexperienced developer to keep in mind (like their Controller concept).

Here I would like to show you a little example of how you can implement a very simple real-time update of Sencha grids (tables) from the back end, for a multi-user application.

Why do you need this?
I often develop apps that are used by multiple people at the same time, sharing and modifying the same data.

In such a situation, a developer usually has to resolve all those conflicting cases where two users try to modify exactly the same data. And Sencha ExtJS grids are not very helpful here.

Sencha uses the concept of a Store that interacts with the back-end data (for example over a REST API); the Store is then assigned to a visualization object like a ComboBox or a Grid (table). If you modify a table (with the help of the Cell Editing or Row Editing plugin) whose Store has the autoSync property set to true, any modification you make automatically generates a REST POST/PUT/DELETE request to inform the back end. It could not be easier for a developer, right?

But the data sent to the back end contains the whole modified row - all the properties. At first sight, this is not an issue. But it is, if you have multiple users editing the same table at the same time. The problem happens because the Sencha Store caches the data. If User1 modifies a row, it is stored on the server. But if User2 then modifies the same row in a different column, they will do so over the old data and may overwrite User1's modification. The back end cannot know which property has been modified and which has not, nor which of the two modifications has to be kept.
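To make the setup concrete, here is a minimal sketch (with assumed store, URL and field names) of a Store wired the way described above - a REST proxy plus autoSync, so grid edits trigger the REST requests automatically:

var store = Ext.create('Ext.data.Store', {
    storeId: 'myrest',        // matches the REST method name used later in this post
    autoLoad: true,
    autoSync: true,           // every grid edit is immediately sent to the back end
    fields: ['id', 'name'],   // assumed fields
    proxy: {
        type: 'rest',
        url: '/rest/myrest',
        reader: { type: 'json' }
    }
});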
There are a lot of tricks developers usually use to avoid these conflicts. One is keeping a version number with each data row on the server, which the UI clients receive in their GET requests. When a modification arrives, it is accepted only if the client sends the same version number as the one stored on the server, and then the version on the server increases. If another modification arrives carrying older cached data, it is rejected, because it has an outdated version number. The client then receives an error, and the UI software may refresh its data, updating the versions and the content visualized to the user.
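As a minimal sketch of that versioning trick (assuming a hypothetical _version field, and the same Express/MongoDB style as the REST code shown later in this post):

// Hypothetical optimistic-locking UPDATE handler: accept the write only if
// the client sends back the _version it last read, then bump the version.
function restPutVersioned(req,res) {
  var id = ObjectID.createFromHexString(req.param('id'));
  var version = req.body._version;  // the version the client read earlier
  delete req.body._version;
  db.collection('myrest').findAndModify(
    { _id: id, _version: version },             // match only if the client is not outdated
    [['_id','asc']],
    { $set: req.body, $inc: { _version: 1 } },  // increase the version on success
    { safe: true, 'new': true },
    function(err,q) {
      if (err) return res.send(500);
      if (!q) return res.send(409);             // conflict: the client had stale data
      return res.send(200,q);
  })
}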
This is quite a popular model, but it is not very nice for the user. With multiple users modifying the same data at the same time, a user will constantly be outdated and will constantly receive errors, losing all of their modifications.
The only solution that is good for both the users and the system in general is to update the data in real time in all the UI applications whenever a change happens. This does not remove every possibility for conflict, but it minimizes them greatly, making the whole operation much more pleasant for the end user.

This problem, and the need to solve it, comes up quite often. Google Spreadsheets, and later Google Docs, introduced real-time updates between the UIs of all users modifying the same document about four years ago.

Example
I want to show here that it is not really hard to update the Stores of an ExtJS application in real time.
It actually requires very little additional code.

Let's imagine we are using a UI developed in Sencha ExtJS, with Stores communicating through REST with the back end. The back end for this example will be Node.JS and MongoDB.
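Here is a minimal setup sketch (assumed names, Express 3 and node-mongodb-native 1.x era APIs) that provides the app, db and ObjectID used in the snippets below:

var express = require('express');
var mongodb = require('mongodb');
var ObjectID = mongodb.ObjectID;

var app = express();
app.use(express.bodyParser());   // populate req.body from JSON request bodies

var db, server;
mongodb.MongoClient.connect('mongodb://localhost/mydb', function(err,d) {
    if (err) throw err;
    db = d;
    server = app.listen(8080);   // the same HTTP server will carry Socket.IO
});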

Between Node.JS and the ExtJS UI there will be a Socket.IO session that we will use to push the updates from Node.JS to the ExtJS Stores. I love Socket.IO because it provides a simple WebSockets interface, with a fallback to an HTTP polling model in case WebSockets cannot be opened (which happens a lot if you are unlucky enough to use Microsoft security software, for example - it blocks WebSockets).
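Attaching Socket.IO on the server side can look like this (a sketch using the Socket.IO 0.9-era API; sIo is the name used in the code further below):

var sIo = require('socket.io').listen(server);        // attach to the HTTP server above
sIo.set('transports', ['websocket', 'xhr-polling']);  // WebSockets, with polling as fallback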

On the MongoDB side we may use a capped collection. I love capped collections - not only are they limited in size, but they also allow you to attach a trigger (by tailing the collection) that receives any new insertion immediately when it happens.

So imagine your Node.JS express REST code looks something like this:

app.get('/rest/myrest',restGetMyrest);
app.put('/rest/myrest/:id',restPutMyrest);
app.post('/rest/myrest/:id',restPostMyrest);
app.del('/rest/myrest/:id',restDelMyrest);

function restGetMyrest(req,res) { // READ REST method
   db.collection('myrest').find().toArray(function(err,q) { return res.send(200,q) })
}

function restPutMyrest(req,res) { // UPDATE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').findAndModify({ _id: id }, [['_id','asc']], { $set: req.body }, { safe: true, 'new': true }, function(err,q) {
      if (err || (!q)) return res.send(500);
      db.collection('capDb').insert({ method: 'myrest', op: 'update', data: q }, function() {});
      return res.send(200,q);
  })
}

function restPostMyrest(req,res) { // CREATE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  req.body._id = id; // use the client-generated id for the new document
  db.collection('myrest').insert(req.body, { safe: true }, function(err,q) {
      if (err || (!q)) return res.send(500);
      setTimeout(function() {
         db.collection('capDb').insert({ method: 'myrest', op: 'create', data: q[0] }, function() {});
      },250);
      return res.send(200,q);
  })
}

function restDelMyrest(req,res) { // DELETE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').remove({ _id: id }, { safe: true }, function(err,q) {
      if (err || (!q)) return res.send(500);
      db.collection('capDb').insert({ method: 'myrest', op: 'delete', data: { _id: id } }, function() {});
      return res.send(200,{});
  })
}

As you can see above, we have implemented a classic CRUD REST method named "myrest", retrieving and storing data in a MongoDB collection named 'myrest'. However, with every modification we also store a record of that modification in a MongoDB capped collection named "capDb".
We use this capped collection as an internal communication mechanism within Node.JS. You can use events instead, or you can directly send the message to the Socket.IO receiver. However, I like capped collections, as they bring a lot of advantages - multiple Node.JS processes can listen on a capped collection and receive the updates simultaneously. This makes it easier to implement clusters, including notifying Node.JS processes distributed over different machines.
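For completeness, here is what the in-process alternative mentioned above could look like (a sketch with assumed names; unlike the capped collection, it works only within a single Node.JS process):

var events = require('events');
var bus = new events.EventEmitter();

// in the REST handlers, instead of inserting into capDb:
bus.emit('change', { method: 'myrest', op: 'update', data: q });

// on the Socket.IO side, instead of the tailable cursor:
bus.on('change', function(doc) {
    s.emit(doc.op, doc);   // s = sIo.of('/updates'), as in the code below
});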

So now, maybe in another file or anywhere else, you may have simple Node.JS Socket.IO code looking like this:

var s = sIo.of('/updates');
db.createCollection("capDb", { capped: true, size: 100000 }, function (err, col) {
   // tailable cursor: the stream emits every new document inserted into capDb
   var stream = col.find({}, { tailable: true, awaitdata: true, numberOfRetries: -1 }).stream();
   stream.on('data', function(doc) {
       s.emit(doc.op, doc);   // broadcast 'create'/'update'/'delete' to all clients
   });
});
 
With this little code above we are basically broadcasting the content of the last insertion in the tailable capDb to everyone connected with Socket.IO to /updates. We are also creating this collection, if it does not already exist.

This is everything you need in Node.JS :)

Now we can get back to the ExtJS code. You simply need to have this code executed somewhere in your HTML application:

var socket = io.connect('/updates');
socket.on('create', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   // skip if the store is unknown, the page is already full, or the record exists
   if ((!s) || (s.getCount() > s.pageSize) || s.findRecord('id', msg.data._id)) return;
   s.suspendAutoSync();   // avoid an update -> autoSync -> REST -> update loop
   s.add(msg.data);
   s.commitChanges();
   s.resumeAutoSync();
});
socket.on('update', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   var r;
   if ((!s) || (!(r = s.findRecord('id', msg.data._id)))) return;
   s.suspendAutoSync();
   for (var k in msg.data) if (r.get(k) != msg.data[k]) r.set(k, msg.data[k]);
   s.commitChanges();
   s.resumeAutoSync();
});
socket.on('delete', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   var r;
   if ((!s) || (!(r = s.findRecord('id', msg.data._id)))) return;
   s.suspendAutoSync();
   s.remove(r);
   s.commitChanges();
   s.resumeAutoSync();
});

This is all.
Basically, what we do from end to end is this:
When Node.JS receives any CRUD REST operation, it updates the data in MongoDB; for Create, Update and Delete it also notifies all the listening web clients about the operation over Socket.IO (in my example I use a tailable capped collection in MongoDB as an internal messaging bus, but you can emit to Socket.IO directly or use another messaging bus, like an EventEmitter).

Then ExtJS receives the update over Socket.IO and assumes that the method property contains the name of the Store that has to be updated. We find the store, suspend AutoSync (otherwise we could get into an update->autoSync->REST->update loop), modify the content of the record (or the store) and resume AutoSync.

With this simple code you can broadcast all the modifications of your data among all the ExtJS users that are currently online, so they can see the updates in their grids in real time.

A single REST method may be used by multiple stores. In that case, you have to modify the code with some association between the REST method name and all the related stores.
However, for this simple example, that is unnecessary.
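Such an association could look like this (a sketch with hypothetical store names):

// Hypothetical mapping from a REST method name to all of its related stores.
var methodToStores = { myrest: ['myGridStore', 'myComboStore'] };

function eachStore(method, fn) {
    (methodToStores[method] || [method]).forEach(function(name) {
        var s = Ext.StoreMgr.get(name);
        if (s) fn(s);
    });
}

// e.g. in the 'update' handler: eachStore(msg.method, function(s) { ... });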

Some other day I may show you the "ExtJS WebSockets CRUD proxy" I made, where you have only one communication channel between the stores and the back end - Socket.IO. It is much faster, and removes the need to have REST code in your server at all.