Business value of polyglot programming

Many years ago, after working with PHP for about 8 years, I was pushed to start developing in Python for the first time. My mentality at the time was to focus on one thing and become the expert on that subject. I wanted to be the expert of the PHP & MySQL world and thought moving away from it would make me less focused.

I was very resistant to this change and to facing a new world.

As I delved deeper into Python and learned it, I realized I was solving almost the same problems, but with a very different approach. The features of the two languages are quite similar, but the way they are used differs. The community of Python developers approaches problems differently than PHP developers do. I started learning their approach and the way they thought about problems. And I started liking it.

As I became more comfortable with Python, I started thinking through problems differently. Since I knew two languages, I had the option of approaching problems in two different ways. Whichever best solved the problem, won.

A few years passed, and again I switched projects. And again, I picked up another stack: this time, Node.js. The language was not new to me, but the stack was. Having lost my prejudice about programming languages, I was willing to take the opportunity to learn the platform and polish my skills.

I learned in practice that each language has its own advantages and each is good at solving particular problems. The more different technology stacks I learned, the less biased I became toward solving problems with a certain stack or language. When thinking about solving a problem, I spend most of my time thinking about what the problem actually is, the best approach to solve it, and what tool fits that approach best. Then I pick the best tool for the problem out of the toolbox.

Being technology and language agnostic helps development teams approach the problem with the best possible tools, but it also introduces its own business values and challenges. When you use the best tool for the job, development time decreases, the code base gets simpler, and you stand to see an increase in overall performance.

However, as with any other technological and architectural choice, there are advantages and disadvantages associated with polyglot technologies.

Simply solving the problem

When choosing multiple technologies to co-exist in an environment, the goal is to pick the best solution for the problem at hand. By picking the appropriate technology, the implementation becomes easier and simpler.

So if you are building a bi-directional notification system, you can choose the technology stack that is best suited for it.
Or if you need high-performance data throughput, there is a variety of technologies to consider.

Of course a single language is capable of doing a lot, but when you pick the stack that is more appropriate for a particular problem, you will spend less time trying to bend the problem to fit the capabilities of the chosen language.

Performance

No single language is best at solving all problems. When a problem has to be solved with a technology that was not designed for it, the development team spends more time handling it, and the resulting performance is very likely hurt by the numerous tweaks the developers had to make. Adapting a technology to do something it was not initially designed for usually involves extra work as well. Picking the right technology stack not only reduces development time but also substantially improves the overall performance of that component.

You have a toolbox, not only a hammer

When you have a variety of tools from which to choose, you will pick the tool that is best for the problem, not the one with which you are most comfortable.

Abraham Maslow said, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” When the development team has more tools in their toolbox, only nails look like nails.

Team of full-stack developers

I believe the architecture of a software application is a collaborative responsibility. Everyone on the team should have a firm understanding of the architecture.

The architecture covers the full spectrum of software work: development, build, deployment, and operations.
Everyone on the team should understand the big picture of what is happening under the hood. This lets the entire team contribute input to the architecture of the application, and makes everyone equally responsible. Also, in case of emergency, every individual on the team is capable of jumping in and helping.

DevOps and Maintenance

Operating polyglot systems is the biggest concern for a development team.

When the number of different technologies grows, it becomes more difficult for the DevOps team to maintain the application and infrastructure. Each language and technology requires its own environment, setup, and skill set to monitor and maintain.

The complexity of a system grows rapidly with each added technology, and the more complex a system gets, the more difficult it becomes to maintain.

No better time than now

Being able to use polyglot technologies and programming languages is not a new idea. Throughout the history of software development there have been many attempts to facilitate this and make the integration of completely different technologies easier. CORBA, Java RMI, and various RPC methods are all examples of different approaches to the same problem: acting as a liaison so that different technologies can communicate with one another.

It has never been easy to do so. Each of these technologies has its own drawbacks, which limited its popularity to certain situations. But with the increasing popularity and growth of containers, it is now easy to package an application with its dependencies into one isolated image, which in turn makes it easy to deploy anywhere. Containers, in combination with microservices, make polyglot technologies a good solution.

In a microservices approach, an application is broken into several smaller services, each of which can potentially be built on a different technology stack. Once packaged in a container, each service is ready to deploy in any environment with far less effort.

The beauty of microservices is the isolation of each component. You can assign a single responsibility to each component and use a technology stack that best solves that particular responsibility. Each component provides a standard interface with which the rest of the microservices communicate.
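As a minimal sketch of that idea (the service names, directories, and ports here are hypothetical), two services built on different stacks can each be packaged as a container and exposed through a standard interface such as plain HTTP:

# Hypothetical layout: ./notifications is a Node.js service, ./reports a Python one.
docker build -t notifications ./notifications
docker build -t reports ./reports

# Run each container; the stack inside is invisible to the rest of the system.
docker run -d -p 3000:3000 notifications
docker run -d -p 8000:8000 reports

# Both speak plain HTTP, so neither needs to know what the other is written in.
curl http://localhost:8000/daily-report

As long as the HTTP interface stays the same, either service can later be rewritten on a different stack without the callers noticing.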

Conclusion

When facing a technical problem, being technology agnostic enables you to pick the best tool for the job. You can also break a problem into several smaller components and pick the most suitable technology for each.

With the emergence of containers and microservices, it is more convenient than ever to build software systems consisting of several different technologies and have them communicate with one another. The main concern is complexity: the more technologies we have, the more complex the system becomes to monitor and maintain.

Software engineering is the science of knowing and understanding the tools and picking the right ones for the job.

RV Tech has been a great example of this. In the past few years, we have been able to decouple our monolith applications, break them into small microservices, and use a variety of different technologies to solve our problems.
The beauty lies in the fact that we don't necessarily need to be certain about a specific technology when we pick it. Each service is small enough that it can easily be replaced. Relatively!


Judging a creation by the creator

I never liked any product made by Microsoft. Never (at least before they came up with .NET Core). But I have always respected Bill Gates. He is someone I look up to, a character I want to be. The humbleness, the simplicity, the smile, the charity work: it all makes me want to look up to him and try to adopt and develop some of his attitudes.

On the exact opposite side of the spectrum resides Linus Torvalds. I love all the software this genius produces, from Linux to Git; everything he has built has changed the way I work, and I am very grateful to live in the same era as he does. But when it comes to his character and personality, I have the same feeling as I do about Microsoft products: I never liked it. I deeply respect what he does, but the way he does it is not my type.

In my world, there are many creations which I truly enjoy and deeply appreciate even though I never liked the creator. It is not limited to the tech world either. Even in music there are many singers whose interviews I cannot stand watching or hearing, but whose music I truly admire.

I don't know how much one could or should differentiate the creator from the creation, if at all. But separating the two makes me judge each creation on its own merits, unaffected by previous creations, and keeps me from being prejudiced and judgemental.

My favourite quote, which I have always tried to respect, is: "Accept the truth, even if your enemy says it." Interestingly enough, I heard this quote from someone whom I never liked.

The golden art of talent acquisition

In today's world the demand for developers exceeds the supply, and finding a good developer is not easy. A good developer can be all it takes for a successful product. So in this market, technical recruiters play a major role for both companies and developers.

Most developers get contacted by recruiters on a regular basis. In my personal experience, the majority of these first contacts are very disappointing.
Sometimes I receive messages for roles in which I have close-to-zero experience, and I figure those are mass-sent emails to which maybe one out of a thousand recipients replies. I consider those spam.

It has also happened a few times that an email starts with my name and, as it goes on, the second paragraph begins with someone else's name, showing that the recruiter did not even spend the time to fill in their email template properly.

And worst of all is misspelling my name. I don't expect the correct pronunciation; that is completely normal. But failing at a simple copy/paste is very annoying.

In one case I felt like I was dealing with a car salesman: someone who just wants to close a deal and could not care less about who is on the other side of the transaction.

But in some rare cases you find genuine recruiters who are good at their jobs. I had read this before, so I already knew there are some rare, golden recruiters.

I recently had the honor of experiencing one. When Caroline Stocks contacted me for the first time, it was a nice and warm message, and very well crafted compared to my previous experiences. It was a great fit for what I do and what I want to do. To me it hit the spot and got my attention.

The phone conversation was an absolute delight: a very humble character who was willing to spend all the time it might take to get to know what I have been doing and what I want to do, and a very good, patient listener who authentically enjoys what she is doing.

After the phone conversation I headed to their website; as a web enthusiast, I always do that. What I enjoyed most was the "Meet the team" section. It takes a very humble person to list oneself last as the founder of a company.

She had also spent some time afterwards going through my Twitter account and reading my blog posts just to get to know me better, which is an incredible sign of a quality recruiting process.

What makes the difference is the personality, the skills, the patience, and having everyone's benefit in mind.

I wish her luck and hope we will have more people like her.

Software Architect & Software Architecture

Becoming a software architect is the goal for many software developers. To me, a seasoned architect is someone who has a lot of experience in the different tiers of software. It is very important for an architect to stay hands-on with development. I believe a good architect is a good programmer as well.

In the agile era, architecture is the responsibility of everyone on the team. Everyone should have a firm understanding of the architecture of the application. Although one person will be more responsible for designing, innovating, and simplifying the architecture, understanding the whole system is everyone's responsibility.

To me, architecture includes both the infrastructure and the software design. This is mandatory for a scalable application.

I will keep it short and encourage you to watch this great keynote talk by Molly Dishman & Martin Fowler.

Vagrant CentOS 6.5 Shared Folders issue

Recently I stumbled upon a weird issue which I had hit before but neglected to fix completely: `Vagrant Shared Folders` on CentOS.

The problem appeared when I upgraded my VirtualBox to 4.3.22 (on a Windows 7 host) and Vagrant to 1.7.2: my previous CentOS 6.5 shared folders started acting up.

The initial `vagrant up` worked perfectly fine; all the shared folders were mapped and working like a charm. BUT subsequent `vagrant reload`s were not the same. The machine rebooted successfully, but mounting the shared folders raised the following error:

default: Machine booted and ready!
default: Checking for guest additions in VM...
default: Mounting shared folders...
default: /vagrant => D:/myproject
Failed to mount folders in Linux guest. This is usually because 
the "vboxsf" file system is not available. Please verify that 
the guest additions are properly installed in the guest and 
can work properly. The command attempted was: 
 
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3` vagrant /vagrant 
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant` vagrant /vagrant 
 
The error output from the last command was: 
 
/sbin/mount.vboxsf: mounting failed with the error: No such device 

The problem was a version discrepancy in the VirtualBox Guest Additions between the `Host` and the `Guest`.

As a first attempt, you can just log in to the VM and manually rebuild the VirtualBox Guest Additions:

$ vagrant ssh
-bash-4.1$ sudo /etc/init.d/vboxadd setup

But even that did not work in my case with `CentOS 6.5`. It raised the following:


Removing existing VirtualBox non-DKMS kernel modules [ OK ]
Building the VirtualBox Guest Additions kernel modules
The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.
The missing package can be probably installed with
yum install kernel-devel-2.6.32-504.8.1.el6.x86_64

Building the main Guest Additions module [FAILED]
(Look at /var/log/vboxadd-install.log to find out what went wrong)
Doing non-kernel setup of the Guest Additions [ OK ]

Viewing the contents of the log file indicated the following:

$ cat /var/log/vboxadd-install.log
/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module.
-bash-4.1$

This means the kernel headers are mismatched. This post helped me straighten it out; by running the following command I was able to get the kernel headers right:

$ sudo yum install gcc dkms kernel-devel

Now I could run the `vboxadd` setup again, and it worked smoothly:

$ sudo /etc/init.d/vboxadd setup

Now if you log out of the VM and do a `vagrant reload`, everything works as expected.

My final step was to integrate these commands into the provisioning of the VM, since provisioning runs the first time the machine is initialized.

Since I use a simple `shell` file to provision the machine, I just added these two lines to the end of the bash file referenced by the `Vagrantfile`:

sudo yum install gcc dkms kernel-devel
sudo /etc/init.d/vboxadd setup

I hope this saves someone some time.

Update 1

Adding the commands to the provisioning file is not working. When they are executed manually in the VM, they work perfectly fine, but through provisioning they fail. I'm investigating this further.

Update 2

My friend and colleague, Marc, pointed out that this could be achieved through the Vagrant plugin `vbox-guest`. Using vbox-guest is pretty straightforward and easy with the latest versions of OSs such as CoreOS, Ubuntu, and CentOS 7. The problem I was facing was caused by CentOS 6.5 and the discrepancy between the kernel version required by VirtualBox and the one that ships by default with CentOS 6.5. That is why I ruled out the plugin and completely uninstalled it from Vagrant.

Alternative Solution with NFS (winnfsd)

Another option that avoids all the hassles above is to use `NFS`. NFS is not natively supported on Windows, but the excellent Vagrant winnfsd plugin makes it work through a winnfsd service running in the background.
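Assuming the plugin in question is the one published as `vagrant-winnfsd` (my assumption; the exact package name is not given above), installing it is a one-liner:

$ vagrant plugin install vagrant-winnfsd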

After installing the plugin, all you need to do is specify the shared folders in your `Vagrantfile` and set the type to `nfs`, as follows:

config.vm.synced_folder ".", "/vagrant", type: "nfs"

Simple setup is one of the perks of `nfs`, but since it is not natively supported on the host machine, some issues can come up.

So far I have only seen one such case, with the installation of `node-gyp`, which causes an error like this:

 CXX(target) Release/obj.target/ursaNative/src/asprintf.o 
 SOLINK_MODULE(target) Release/obj.target/ursaNative.node 
/usr/bin/ld: Release/obj.target/ursaNative/src/ursaNative.o: bad relocation section name `' 
Release/obj.target/ursaNative/src/ursaNative.o: could not read symbols: File format not recognized 
collect2: ld returned 1 exit status 
make: *** [Release/obj.target/ursaNative.node] Error 1 
make: Leaving directory `/vagrant/node_modules/ursa/build' 

This is an issue with the underlying shared file system. So far the only way I have been able to work around it is something like the following:

$ cd /tmp
$ npm install node-gyp
$ cp -r node_modules/node-gyp /vagrant/node_modules/

This is a big issue, though. Because of the unpredictable errors and unhelpful messages, I decided to drop the idea of using winnfsd. It adds more complexity than it brings simplicity.

Up and Running in no (less) time with dotfiles

Setting up a new computer to your comfort is very time consuming. That includes all the tiny configurations, aliases, functions, and small modifications you have made over time.

Some configurations are not very easy to remember, or you might have lost the details.

I have some `launchd` tasks configured on my computer which I never remember to carry over when I switch systems or need them somewhere else. I usually keep a note of how the setup works in a wiki, but what could explain it better than the actual working syntax?

I like the idea of dotfiles; it gets you up and running very quickly. You could fork one of the existing repositories or start your own.

I started my own, and I am trying to put whatever setup I have made to my computer into that repository, organized by category.
In the case of `launchd`, I created a category specifically for it, and in the `bootstrap.sh` file, which is supposed to set up a complete system, I wrote a few lines of code to loop through the folder, find every file in it, copy them over to `/Library/LaunchDaemons/`, `load` them, and then `start` the tasks, as sketched below.
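Roughly, that loop looks like the following. This is a minimal sketch: the `launchd/` folder name is specific to my repository layout, and it assumes each job's label matches its plist filename.

for plist in launchd/*.plist; do
    # Copy the job definition to where launchd looks for system-wide daemons.
    sudo cp "$plist" /Library/LaunchDaemons/
    # Load the job so launchd knows about it, then kick it off.
    sudo launchctl load "/Library/LaunchDaemons/$(basename "$plist")"
    # Assumption: the job label matches the plist filename without the extension.
    sudo launchctl start "$(basename "$plist" .plist)"
done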

I believe my dotfiles repository will give me more confidence the next time I want to completely scrap my OS and do a fresh install.

Testing a Google Polymer Project using InternJS

For the past few days I have had the opportunity to play with the Google Polymer project and start implementing something very interesting.

But no project gets really fun until it is testable, and testing Polymer projects is not as easy as you would expect.

There are some existing tools for testing Polymer projects.

But we have had InternJS fully set up for our project and we've been happy with it [1], so moving to yet another JS testing framework is the last thing we would want to do.

Getting InternJS up and running to test Web Components is not easy. I was lucky enough to stumble upon Chris Storm's blog posts (1, 2) around the same time I was looking into this. Those posts helped me get on my feet, although their examples are not fully functional.

The nature of the problem is the Shadow DOM. Each Web Component is encapsulated in its own DOM, completely independent from the rest and from the main document.
So even if you capture a reference to the element you want to test, you suddenly face the following error from Intern:

stale element reference: element is not attached to the page document

Finally I was able to get a sample test running. It is pretty ugly, but at least it is a start.

The problem

Imagine we have a structure of Web Components as follows:

<custom-element-one flex>
 <custom-nested-element id="someId">
 </custom-nested-element>
</custom-element-one>

What we want to test is getting the `id` of `custom-nested-element` and asserting its value. So you would expect the following to work:

return this.remote
.get(require.toUrl('http://localhost:8500/test.html'))
.then(pollUntil('return document.querySelector("custom-element-one").shadowRoot;', 20000))
.findByTagName('custom-element-one')
.getProperty('shadowRoot')
.then(function (doc) {
    doc.findByTagName('custom-nested-element')
        .getAttribute('id')
        .then(function (value) {
            assert.equal(value, 'someId');
        });
});

But unfortunately it won’t work.

The Solution

The problem comes from WebDriver rather than Intern or Leadfoot. You can get a reference to the element, but WebDriver thinks the reference is stale and not attached to the document, since it only checks the DOM of the main document.

The solution is just a workaround for WebDriver's limitation. Here is a sample that makes it work if you want to test, let's say, the `id` attribute of that specific nested Web Component element.

The same approach works for the results of exposed methods as well.

return this.remote
.get(require.toUrl('http://localhost:8500/test.html'))
.then(pollUntil(function () {
    if (document.querySelector('custom-element-one').shadowRoot) {
        return document.querySelector('custom-element-one').shadowRoot.querySelector('custom-nested-element').id;
    }
    return null;
}, [], 20000))
.then(function (idValue) {
    assert.equal(idValue, 'someId');
});

At its core, what we are doing is using the `pollUntil` helper function to fetch the `shadowRoot` of the first Web Component once it is ready, and using that to fetch the second one.

The response of `pollUntil` is wrapped in a Promise, so a `then` afterwards receives the returned `id` value.


[1] If you are interested in knowing more about the architectural approach and the tools we are using, I highly recommend reading Dominique's detailed write-up, JavaScript technology Stack.