Judging a creation by the creator

I have never liked any product Microsoft makes. Never. When I want to exaggerate my feelings about Microsoft products I say: “To me, the best application they have ever come up with is Notepad. That’s it. That’s how far they can go.”

But I have always respected Bill Gates. He is a person I look up to, a character I want to be. The humbleness, the simplicity, the smile, the charity work; it all makes me want to look up to him and try to adopt and develop some of his attitudes.

On the exact opposite side of the spectrum resides Linus Torvalds. I love all the software this genius produces, from Linux to Git. Everything he builds has changed the way I work, and I am very grateful to live in the same era he does. But when it comes to his character and personality, I have the same feeling I have about Microsoft products: I have never liked them. I deeply respect what he does, but the way he does it is not my type.

In my world, there are many creations I truly enjoy and deeply appreciate even though I have never liked the creator. It is not limited to the tech world, though. Even in music there are many singers whose interviews I cannot stand watching or listening to, yet I truly admire their music.

I don’t know how much one could, or should, differentiate the creator from the creation, if at all. But separating the two makes me judge each creation on its own merits: not affected by previous creations, not prejudiced, not judgemental.

My favourite quote, which I have always tried to live by, is: “Accept the truth, even if your enemy says it.” Interestingly enough, I heard this quote from someone I never liked.

The golden art of talent acquisition

In today’s world the demand for developers exceeds the supply, and finding a good developer is not easy. A good developer can make all the difference for a successful product. So in this market, technical recruiters play a major role for both companies and developers.

Most developers get contacted by recruiters on a regular basis. In my personal experience, the majority of these first contacts are very disappointing.
Sometimes I have received messages for roles I have close-to-zero experience in, and I figured those were mass emails sent in the hope that one recipient out of a thousand might reply. I consider those spam.

It has also happened a few times that the email starts with my name, but the second paragraph opens with someone else’s name, showing that the recruiter did not even spend the time to fill in their email template properly.

And worst of all is misspelling my name. I don’t expect the correct pronunciation; that’s completely understandable. But failing at a simple copy/paste is very annoying.

In one case I felt like I was dealing with a car dealer: someone who just wants to close a deal and could not care less about who is on the other side of the transaction.

But in some rare cases you find genuine recruiters who are good at their jobs. I had read about this before, so I already knew such rare, golden recruiters exist.

I recently had the honor of experiencing one. When Caroline Stocks contacted me for the first time, it was a nice and warm message, and very well crafted compared to my previous experiences. It was a great fit for what I do and what I want to do. To me that hit the spot and got my attention.

The phone conversation was an absolute delight. A very humble character who was willing to spend all the time it might take to get to know what I have been doing and what I want to do. A very good and patient listener who authentically enjoys what she is doing.

After the phone conversation I headed to their website; as a web enthusiast, I always do that. What I enjoyed the most is the “Meet the team” section. It takes a very humble person to list oneself last as the founder of the company.

She also spent some time afterwards going through my Twitter account. Reading my blog posts just to get to know me better is an incredible sign of a quality recruiting process.

What makes the difference is the personality, the skills, the patience and having everyone’s benefit in mind.

I wish her luck and hope we will have more people like her.

Software Architect & Software Architecture

Becoming a software architect is the goal for many software developers. To me, a seasoned architect is someone with a lot of experience across the different tiers of the software. It is very important for an architect to stay hands-on with development; I believe a good architect is a good programmer as well.

In the agile era, architecture is the responsibility of everyone on the team. Everyone should have a firm understanding of the architecture of the application. Although one person will be more responsible for designing, innovating and simplifying the architecture, understanding the whole system is everyone’s responsibility.

To me, architecture includes both the infrastructure and the software design. This is mandatory for a scalable application.

I will keep it short and encourage you to watch this great keynote talk by Molly Dishman & Martin Fowler.

Vagrant CentOS 6.5 Shared Folders issue

Recently I stumbled upon a weird issue that I had run into before but had neglected to fix completely: `Vagrant Shared Folders` on CentOS.

The problem started when I upgraded my VirtualBox to 4.3.22 (on a Windows 7 host) and Vagrant to 1.7.2; my previous CentOS 6.5 shared folders started acting up.

The initial `vagrant up` was working perfectly fine: all the shared folders were mapped and working like a charm. But subsequent `vagrant reload`s were not the same. The machine was rebooting successfully, but the shared folders were raising the following issue:

> default: Machine booted and ready! 
> default: Checking for guest additions in VM... 
> default: Mounting shared folders... 
> default: /vagrant => D:/myproject
Failed to mount folders in Linux guest. This is usually because 
the "vboxsf" file system is not available. Please verify that 
the guest additions are properly installed in the guest and 
can work properly. The command attempted was: 
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3` vagrant /vagrant 
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant` vagrant /vagrant 
The error output from the last command was: 
/sbin/mount.vboxsf: mounting failed with the error: No such device 

The problem was a version discrepancy between the VirtualBox Guest Additions on the `Host` and in the `Guest`.

Normally you could just log in to the VM and manually update the VirtualBox Guest Additions:

$vagrant ssh
-bash-4.1$ sudo /etc/init.d/vboxadd  setup

But even that was not working in my case with `CentOS 6.5`. It was raising the following:

Removing existing VirtualBox non-DKMS kernel modules [ OK ]
Building the VirtualBox Guest Additions kernel modules
The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.
The missing package can be probably installed with
yum install kernel-devel-2.6.32-504.8.1.el6.x86_64

Building the main Guest Additions module [FAILED]
(Look at /var/log/vboxadd-install.log to find out what went wrong)
Doing non-kernel setup of the Guest Additions [ OK ]

Viewing the contents of the log files indicated the following:

$ cat /var/log/vboxadd-install.log
/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module.

This means the kernel headers are mismatched. This post helped me get it straight, and by running the following command I was able to get the kernel headers right:

$ sudo yum install  gcc dkms   kernel-devel

Now I could run the `vboxadd` again and it works smoothly:

$ sudo /etc/init.d/vboxadd  setup

Now if you log out of the VM and do a `vagrant reload` everything would work as expected.

My final step was to integrate these into the provisioning of the VM, since provisioning runs fine the first time I initialize the machine.

As I’m using a simple `shell` file to provision the machine, I just added these two lines to the end of the bash file used by the `Vagrantfile`:

sudo yum install  gcc dkms   kernel-devel
sudo /etc/init.d/vboxadd  setup

Hope this saves someone some time.

Update 1

Adding the commands to the provisioning file is not working. When they are executed manually in the VM they work perfectly fine, but through provisioning they fail; I’m investigating this further.
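One plausible explanation, which I have not confirmed yet, is that `yum` without `-y` waits for a confirmation, and during provisioning there is no terminal to answer it. A sketch of the snippet with the non-interactive flag:

```shell
# Sketch of the provisioning snippet with yum forced non-interactive (-y),
# so it cannot stall on a confirmation prompt when no terminal is attached.
# Guarded so the sketch is a harmless no-op on systems without yum.
if command -v yum >/dev/null 2>&1; then
  sudo yum install -y gcc dkms kernel-devel
  sudo /etc/init.d/vboxadd setup
fi
```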

Update 2

My friend and colleague, Marc, pointed out that this could be achieved through the Vagrant plugin `vbox-guest`. Using vbox-guest is pretty straightforward with the latest OS versions (CoreOS, Ubuntu, CentOS 7, etc.). The problem I was facing was caused by CentOS 6.5 and the discrepancy between the kernel version required by VirtualBox and the one that ships by default with CentOS 6.5. That is why I ruled out the plugin and completely uninstalled it from Vagrant.

Alternative Solution with NFS (winnfsd)

Another option that avoids all the hassle above is to use `NFS`. NFS is not supported natively by Windows, but the excellent Vagrant winnfsd plugin makes it work through a winnfsd service running in the background.

After installing the plugin, all you need to do is specify the shared folders in your `Vagrantfile` and set the type to `nfs`, as follows:

config.vm.synced_folder ".", "/vagrant", type: "nfs"

Simple setup is one of the perks of `nfs`, but since it is not supported natively on the host machine, there can be issues in some cases.

So far I have only seen one case, with the installation of `node-gyp`, which causes an error like this:

 CXX(target) Release/obj.target/ursaNative/src/asprintf.o 
 SOLINK_MODULE(target) Release/obj.target/ursaNative.node 
/usr/bin/ld: Release/obj.target/ursaNative/src/ursaNative.o: bad relocation section name `' 
Release/obj.target/ursaNative/src/ursaNative.o: could not read symbols: File format not recognized 
collect2: ld returned 1 exit status 
make: *** [Release/obj.target/ursaNative.node] Error 1 
make: Leaving directory `/vagrant/node_modules/ursa/build' 

This is an issue with the underlying shared file system. So far the only way I could work around it is something like this:

$ cd /tmp
$ npm install node-gyp
$ cp -r node_modules/node-gyp /vagrant/node_modules/

This is a big issue, though. Because of the unpredictable errors and unhelpful messages, I decided to drop the idea of using winnfsd. It adds more complexity than it brings simplicity.

Up and Running in no (less) time with dotfiles

Setting up a new computer the way you like it is very time consuming. That includes all the tiny configurations, aliases, functions and small modifications you have made over time.

Some configurations are not very easy to remember or you might have lost the details.

I have some `launchd` tasks configured on my computer which I never remember to carry over when I switch systems or need them somewhere else. I usually keep a note of how the setup works in a wiki, but what could explain it better than the actual working syntax?

I like the idea of dotfiles; it gets you up and running very quickly. You could fork one of the existing repositories, or you could start your own.

I started my own and am trying to put whatever setup I have made to my computer into that repository, organized by category.
In the case of `launchd`, I have created a category specifically for it, and in the `bootstrap.sh` file, which is supposed to set up a complete system, I have written a few lines of code that loop through the folder, find the files in it, copy them over to `/Library/LaunchDaemons/`, `load` them and then `start` the tasks.
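That loop looks roughly like the sketch below. The folder name and sample plist are stand-ins for my repo layout, and the paths default to temporary directories here so the loop can be exercised anywhere; on a real macOS setup `DEST` would be `/Library/LaunchDaemons` and the copy would run under sudo.

```shell
# Sketch of the launchd section of bootstrap.sh (paths are illustrative).
SRC="${SRC:-$(mktemp -d)}"    # stands in for the launchd/ folder of the repo
DEST="${DEST:-$(mktemp -d)}"  # stands in for /Library/LaunchDaemons

# Demo plist so the loop has something to pick up.
printf '<plist version="1.0"></plist>\n' > "$SRC/com.example.task.plist"

for plist in "$SRC"/*.plist; do
  name="$(basename "$plist")"
  cp "$plist" "$DEST/$name"                      # carry the task definition over
  if command -v launchctl >/dev/null 2>&1; then  # only meaningful on macOS
    launchctl load "$DEST/$name"                 # register the daemon
    launchctl start "${name%.plist}"             # kick it off immediately
  fi
done
```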

I believe my dotfiles repository will give me more confidence the next time I want to completely scrap my OS and do a fresh install.

Testing a Google Polymer Project using InternJS

For the past few days I have had the opportunity to play with the Google Polymer project and start implementing something very interesting.

But no project gets really fun until it is testable. And testing Polymer projects is not as easy as you would expect.

There are some existing tools for testing Polymer projects.

But we have had InternJS fully set up for our project and we’ve been happy with it [1]. So moving to yet another JS testing framework is the last thing we would want to do.

Getting InternJS up and running to test Web Components is not easy. I was lucky enough to stumble upon Chris Storm‘s blog posts (1, 2) around the same time I was looking into this. Those posts helped get me on my feet, although their examples are not fully functional.

The nature of the problem is the Shadow DOM. Each Web Component is encapsulated in its own DOM, completely independent from the rest and from the main document.
So even if you capture a reference to the element you want to test, you suddenly face the following error from Intern:

stale element reference: element is not attached to the page document

Finally I was able to get a sample test running. It is pretty ugly, but at least it is a start.

The problem

Imagine we have a structure of Web Components as follows:

<custom-element-one flex>
  <custom-nested-element id="someId"></custom-nested-element>
</custom-element-one>

What we want to test is the `id` of `custom-nested-element`, asserting its value. So you would expect the following to work:

return this.remote
    .then(pollUntil('return document.querySelector("custom-element-one").shadowRoot;', 20000))
    .then(function (doc) {
        return doc.querySelector('custom-nested-element').getAttribute('id');
    })
    .then(function (value) {
        assert.equal(value, 'someId');
    });

But unfortunately it won’t work.

The Solution

The problem comes from WebDriver rather than Intern or Leadfoot. You can get a reference to the element, but WebDriver considers that reference stale and not attached to the document, because it only checks the DOM of the main document.

The solution is just a workaround for WebDriver’s limitation. Here is a sample that makes it work if you want to test, let’s say, the `id` attribute of that specific nested Web Component element.

The same approach would work for result of exposed methods as well.

return this.remote
    .then(pollUntil(function () {
        if (document.querySelector('custom-element-one').shadowRoot) {
            return document.querySelector('custom-element-one').shadowRoot
                .querySelector('custom-nested-element').id;
        }
        return null;
    }, [], 20000))
    .then(function (idValue) {
        assert.equal(idValue, 'someId');
    });

At its core, what we are doing is using the `pollUntil` helper function to fetch the `shadowRoot` of the first Web Component once it is ready, and then using it to fetch the second one.

The result of `pollUntil` is wrapped in a Promise, so a `then` afterwards receives the value of the returned `id`.

[1] If you are interested in knowing more about the architectural approach and the tools we are using, I highly recommend reading Dominique’s detailed write-up, JavaScript technology Stack.

Node.JS module install behind a corporate proxy

If you happen to be behind a corporate firewall like me, chances are you have had plenty of problems with Node’s package manager, npm.

So here are some steps which might help you.

1. Setup NPM to use your proxy configuration

sudo npm config set proxy http://proxy.mycompany.com:3128
sudo npm config set https-proxy https://proxy.mycompany.com:3128

2. Force npm to use HTTP instead of HTTPS
You can do this with:

sudo npm config set registry http://registry.npmjs.org/


The above steps should work for the majority of modules, but they may not work for some.

3. If the module references HTTPS explicitly in its package.json dependencies
Some modules specify a full URL in their dependencies rather than just a name. In that case npm goes and fetches the module from that URL. If the URL uses HTTPS, npm will try to access it over HTTPS even though you configured HTTP only, and it can freeze there, especially if your proxy server blocks HTTPS.

In this case:
3.1 Clone the repository to your local machine

3.2 Modify the package.json file and replace any HTTPS with HTTP
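Step 3.2 can be done with a one-liner. The sketch below uses a demo `package.json` (the dependency URL is made up) and keeps a `.bak` backup; `sed -i.bak` works with both GNU and BSD sed:

```shell
# Rewrite every https:// URL in package.json to http://, keeping a backup.
workdir="$(mktemp -d)"   # demo location; run against the cloned module's dir
printf '{"dependencies":{"foo":"https://example.com/foo.tgz"}}\n' > "$workdir/package.json"
sed -i.bak 's|https://|http://|g' "$workdir/package.json"
```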

3.3 Install the local module
To install a module located on your local hard drive, use npm and specify the path instead of the module’s name:

npm install module/


This should fix most of the issues with installing modules behind a corporate proxy.