Securing MicroServices

At RedVentures we have been benefiting from migrating our architecture to MicroServices for a few years. These fine-grained services have helped make us technology and protocol agnostic, while adding resilience and elasticity.
Apart from all the benefits (and challenges), one aspect remains really important as we grow in the number of different services: security.

It is extremely critical for us to control who has access to what, and which actions they are allowed to perform.

Authentication vs. Authorization

When we are talking about security, it’s important to distinguish between these two:

  • Authentication is verifying the identity of a user or process.
  • Authorization is determining whether a user or process is permitted to perform a specific action.


Identifying a user or process is a very critical operation that involves checking credentials and other sensitive data. A simple mistake could result in disaster. Luckily, there are numerous existing solutions to this problem, and our approach for such a sensitive matter is: reuse instead of build.
We encourage reusing what's already been built by experts. Our Authentication solution consists of the following:

1 — Active Directory
Provides simple and easy-to-use authentication for our internal users. But we have encountered other scenarios: internal users wanting to use external SaaS applications and cloud providers, and external partners needing access to some of the internal tools provided for them. Therefore we have to extend our Authentication using:

2 — Okta
Okta syncs with our Active Directory system and provides access for our internal and external users via SAML (Security Assertion Markup Language). The SAML assertion is signed with a private key by the identity provider (Okta), and each application has access to the corresponding public key, so it can verify whether the assertion is legitimately signed.

SAML is XML based, and we have long since moved past XML and its human unfriendliness. That's why we introduced:

3 — JWT
JSON Web Token has made authentication a lot easier by providing an access token that asserts a set of claims. The same signing/verification process as above ensures that the token and its claims are legitimate, so they can be used by different services to verify the validity of the user or process.

The structure of a JWT is the base64url-encoded values of the following three sections: header, payload, and signature.

The header provides some metadata about the token. The signature verifies the identity provider and the integrity of the payload itself. The payload, once decoded, is a plain JSON object of claims.
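As a rough, self-contained sketch (the claim names below are hypothetical, not our actual schema), the three sections can be built and decoded with nothing more than base64 and JSON:

```python
import base64
import json

def b64url_encode(obj: dict) -> str:
    """Serialize a dict to JSON and base64url-encode it, without padding."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(segment: str) -> dict:
    """Decode one base64url JWT segment back into a dict."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

header = {"alg": "RS256", "typ": "JWT"}        # metadata about the token
payload = {"sub": "jdoe", "name": "John Doe"}  # hypothetical claims
token = f"{b64url_encode(header)}.{b64url_encode(payload)}.<signature>"

# Anyone holding the token can decode the payload -- no key is required.
print(b64url_decode(token.split(".")[1]))
```

Note that decoding requires no secret at all; only the signature involves a key.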

Keep in mind that no sensitive data should be included in the JWT; all of the data is easily decodable.


After ensuring that each MicroService is aware of the identity of the user or process accessing it, we need to ensure that the user is permitted to perform the action on that service. We have multiple partners that may use the same MicroService, and we have to ensure that they have access only to the parts and actions they are allowed, and nothing more.

The problem
Historically, each of our services implemented Authorization differently. Some were using Active Directory groups to permit or block users and actions. Some had their own custom implementations. It was scattered, inconsistent, and very difficult to monitor.

The desired Solution
We wanted our Authorization to meet the following criteria, to make it adaptable company-wide as well as to improve our security:

  • Decentralized
  • Simple
  • Scalable
  • Monitoring
  • Centralized Management

The majority of existing solutions would make a round trip to a central authorization server for each request to verify permissions. Due to the large number of round trips, we experienced significant network traffic and latency in our applications. We knew this could not scale and needed a self-contained service to avoid any round trips.
Besides that, many of the existing solutions support Roles, Groups, and Access Levels; the flexibility of fine-grained management and control comes at the price of additional complexity.

Unnecessary complexity makes the application prone to errors and therefore vulnerabilities.

Simplified Authorization
As mentioned earlier, each MicroService could serve different business partners, so access to each service, and the actions performed on it, should be controlled.
We have defined this level of control within each MicroService as an Area. A user or process that has access to all the Areas within a MicroService has Global access.

Looking at the organization of these access controls, we can easily fit them into a JSON structure.
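For illustration, a hypothetical set of claims (the service and Area names here are invented, not our actual schema) could look like:

```json
{
  "sub": "jdoe",
  "permissions": {
    "reporting-service": ["billing", "campaigns"],
    "partner-portal": ["global"]
  }
}
```

A token whose list of Areas for a service contains the global marker has Global access to that service.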

JWT goes hand in hand with JSON, therefore we embedded the Authorization data inside the JWT.

Having access control data inside the JWT provides a lot of flexibility. After logging in, each user is assigned a JWT which includes the access control data. Each MicroService is able not only to authenticate the user, but also to authorize them, allowing or blocking certain actions based on the permissions.
This validation is self-contained: there is no need for a central service to verify access, and therefore no round trip.
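A minimal sketch of that self-contained check follows. It uses an HS256 shared-secret signature for brevity, whereas our setup uses the identity provider's key pair; all names and claims are illustrative assumptions, not our actual SDK API:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64url(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def sign(claims: dict, secret: bytes) -> str:
    """Issue an HS256-signed JWT carrying the authorization claims."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def authorize(token: str, secret: bytes, service: str, area: str) -> bool:
    """Self-contained check: verify the signature, then the Area. No round trip."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _unb64url(sig)):
        return False  # tampered or foreign token
    areas = json.loads(_unb64url(body)).get("permissions", {}).get(service, [])
    return "global" in areas or area in areas

token = sign({"sub": "jdoe", "permissions": {"reporting-service": ["billing"]}}, b"s3cret")
print(authorize(token, b"s3cret", "reporting-service", "billing"))    # True
print(authorize(token, b"s3cret", "reporting-service", "campaigns"))  # False
```

Every MicroService holding the verification key can run this check locally, which is what removes the round trip to a central authorization server.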

The Big Picture
Overall the architecture looks like the following:

After logging in with Okta, the user is redirected to a JWT identity provider. Before generating the token, the provider communicates with an Authorization service, gets all the permissions for the user, merges them with other data, then generates the token and sends it back to the user.

Ease of integration by providing SDKs
With the current integration as is, we have checked three of the five boxes we defined above: it is a decentralized system which is simple and scalable.
To make integration easier, we provide SDKs for the major languages used in our technology stacks, including C#, Go, NodeJS, and PHP.

In the SDKs we have embedded Monitoring of who is accessing which resource within which Area, and what action they are taking. By shipping all the logs to ElasticSearch, we can also put alerts on top of suspicious attempts and dig into the activities of that particular user or process.


Reuse > Recreate
Security is a sensitive and very important topic. But there are plenty of tools already built that have been proven over time and are actively maintained and patched. Try to reuse what is already built and proven rather than recreating it.

Token based Auth
Token-based auth like JWT is scalable, decentralized, stateless, and easy to use. You can also embed custom data into parts of the token, to be retrieved and used in any application. Keep in mind that no sensitive data should be embedded in the token, as it is easily decodable. The signature only verifies the validity of the data.

Simple does not mean Insecure
When it comes to security, try to keep things simple. Security does not necessarily need to be complex. Overcomplicating things can be counterproductive and make the setup prone to vulnerabilities. Keeping security simple makes it easier to monitor, trace, and debug.

Business value of Polyglot programming

Many years ago, after working with PHP for about 8 years, I was pushed to start development in Python for the first time. My mentality at the time was to focus on one thing and become an expert on the subject. I wanted to be the expert of the PHP & MySQL world and thought moving away from it would make me less focused.

I was very resistant to this change and facing a new world.

As I delved deeper into Python, I realized I was solving almost the same problems, but with a very different approach. The features of the languages are very similar, but the way they are used is different. The community of Python developers approaches problems differently than PHP developers do. I started learning their approach and how they thought about problems. And I started liking it.

As I became more comfortable with Python, I started thinking through problems differently. Since I knew two languages, I had the option of approaching problems in two different ways. Whichever best solved the problem, won.

A few years passed, and again I switched projects. And again I learned a new stack: this time, Node.js. The language was not new to me, but the stack was. And just as I had lost my prejudice about programming languages, I was willing to take the opportunity to learn the platform and polish my skills.

I learned in practice that each language has its own advantages and each is good at solving particular problems. The more different technology stacks I learned, the less biased I became about solving problems with a certain stack or language. When thinking about solving a problem, I spend most of my time figuring out what the problem actually is, the best approach to solve it, and what tool best fits that approach. I then pick the best tool for the problem out of the toolbox.

Being technology and language agnostic helps development teams approach the problem with the best possible tools, but it also introduces its own business values and challenges. When you use the best tool for the job, development time decreases, the code base simplifies, and you stand to see an increase in overall performance.

However, as with any other technological and architectural choices, there are advantages and disadvantages associated with Polyglot technologies.

Simply solving the problem

When choosing multiple technologies to co-exist in an environment, the goal is to pick the best solution for the problem at hand. By picking the appropriate technology, the implementation becomes easier and simpler.

So if you are building a bi-directional notification system, you can choose the technology stack most suited for that.
Or if you need high-performance data throughput, there is a variety of different technologies to look at.

Of course a single language is capable of doing a lot, but when you pick the stack that is most appropriate for a particular problem, you will spend less time trying to bend the problem to fit the capabilities of the chosen language.


No single language is best at solving all problems. When a problem has to be solved with a technology that was not designed for it, it takes the development team more time, and the actual performance will very probably suffer from the numerous tweaks the developers had to make. Picking the right technology stack not only reduces development time, but also greatly improves the overall performance of that component.

You have a toolbox, not only a hammer

When you have variety of tools from which to choose, you will pick the tool that is best for the problem, not the one with which you are most comfortable.

Abraham Maslow said, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” When the development team has more tools in their toolbox, only nails look like nails.

Team of full-stack developers

I believe the architecture of a software application is a collaborative responsibility. Everyone on the team should have a firm understanding of the architecture.

The architecture includes the full spectrum of software development: build, deployment, and operations.
Everyone on the team should understand the big picture of what is happening under the hood. This helps the entire team provide input into the architecture of the application, and everyone is equally responsible. Also, in emergencies, every individual on the team is capable of jumping in to help.

DevOps and Maintenance

Operating polyglot systems is the biggest concern for the development team.

When the number of different technologies grows, it becomes more difficult for the DevOps team to maintain the application and infrastructure. Each language and technology requires its specific environment, setup, and skill set to monitor and maintain.

The complexity of a system grows exponentially: the more complex a system gets, the more difficult it becomes to maintain.

No better time than now

Being able to use polyglot technologies and programming languages is not a new idea. Throughout the history of software development there have been many attempts to facilitate this and make the integration of completely different technologies easier. CORBA, Java RMI, and various RPC methods are all examples of different approaches to the same problem: acting as a liaison to make different technologies communicate with one another.

It has never been easy to do so. Each of these technologies has its own drawbacks, which limited its popularity to certain situations. But with the increasing popularity and growth of containers, it is easier to package an application with its dependencies into one isolated image, which in turn makes it easy to deploy anywhere. Containers, in combination with microservices, make polyglot technologies a good solution.

In a microservices approach, an application is broken into several smaller services, each of which could potentially be built on a different technology stack. And once packaged in a container, each service is ready for deployment in any environment, a lot more easily.

The beauty of microservices is the isolation of each component. You can assign a single responsibility to each component and use a technology stack that best solves that particular responsibility. Each component provides a standard interface with which the rest of the microservices communicate.


When facing a technical problem, being technology agnostic enables you to pick the best tool for the job. You can also break a problem into several smaller components and pick the most suitable technology for each.

With the emergence of containers and microservices, it is more convenient than ever to build software systems consisting of several different technologies and have them communicate with one another. One main concern is complexity: the more technologies we have, the more complex the system becomes to monitor and maintain.

Software engineering is the science of knowing and understanding the tools and picking the right ones for the job.

RV Tech has been a great example of this. In the past few years, we have been able to decouple our monolithic applications, break them into tiny microservices, and use a variety of different technologies to solve our problems.
The beauty lies in the fact that we don't necessarily need to be certain about a specific technology when we pick it. Each service is small enough that it can easily be replaced. Relatively!

Judging a creation by the creator

I never liked any products produced by Microsoft. Never (at least before they came up with .Net core). But I have always respected Bill Gates. He is the person I look up to. A character I want to be. The humbleness, simplicity, the smile, the charity work, it all makes me want to look up to him and try to adopt and develop some of his attitudes.

On the exact opposite side of the spectrum resides Linus Torvalds. I love all the software this genius produces, from Linux to Git. Everything he has made has changed the way I work, and I am very grateful to live in the same era as he does. But when it comes to his character and personality, I have the same feeling as I do about Microsoft products: I never liked them. I deeply respect what he does, but the way he does it is not my type.

In my world, there are many creations that I truly enjoy and deeply appreciate even though I never liked the creator. It is not limited to the tech world, though. Even in music, there are many singers whose interviews I cannot stand watching or hearing, but whose music I truly admire.

I don’t know how much one could, or should, separate the creator from the creation, if at all. But separating the two makes me judge each creation on its own merits, unaffected by previous creations, without being prejudiced and judgemental.

My favourite quote, which I have always tried to live by, is: “Accept the truth, even if your enemy says it.” Interestingly enough, I heard this quote from someone I never liked.

The golden art of talent acquisition

In today’s world, the demand for developers exceeds the supply, and finding a good developer is not easy. A good developer is all it takes for a successful product. So in this market, technical recruiters play a major role for both companies and developers.

Most developers get contacted by recruiters on a regular basis. In my personal experience, the majority of these first contacts are very disappointing.
Sometimes I have received messages for roles in which I have close-to-zero experience, and I figured those were mass emails hoping that one out of a thousand recipients might reply. I consider those spam.

It has also happened to me a few times that the email starts with my name and, as it goes on, the second paragraph begins with someone else’s name, showing that the recruiter did not even spend the time to fill in their email template properly.

And worst of all is misspelling my name. I don’t expect the correct pronunciation; that’s completely understandable. But failing at a simple copy/paste is very annoying.

In one of the cases I felt like dealing with a car dealer. Someone who just wants to make a deal and could not care less about who is on the other side of the transaction.

But in some rare cases you find genuine recruiters who are good at their jobs. I had read about this before, so I already knew there are some rare, golden recruiters.

I recently had the honor of experiencing one. When Caroline Stocks contacted me for the first time, it was a nice and warm message, and very well crafted compared with my previous experiences. It was a great fit for what I do and what I want to do. To me it hit the spot and got my attention.

The phone conversation was an absolute delight. A very humble character who was willing to spend all the time it might take to get to know what I have been doing and what I want to do. A very good and patient listener who authentically enjoys what she is doing.

After the phone conversation I headed to their website; as a web enthusiast, I always do that. What I enjoyed most was the “Meet the team” section. It takes a very humble person to list oneself last as the founder of a company.

Afterwards, she spent some time going through my Twitter account. Reading my blog posts just to get to know me better is an incredible sign of a quality recruiting process.

What makes the difference is the personality, the skills, the patience, and having everyone’s benefit in mind.

I wish her luck and hope we will have more people like her.

Software Architect & Software Architecture

Becoming a software architect is the goal of many software developers. To me, a seasoned architect is someone who has a lot of experience in the different tiers of software. It is very important for an architect to stay hands-on with development. I believe a good architect is a good programmer as well.

In the agile era, architecture is the responsibility of everyone on the team. Everyone should have a firm understanding of the architecture of the application. Although one person will be more responsible for designing, innovating, and simplifying the architecture, understanding the whole system is everyone’s responsibility.

To me, architecture includes both the infrastructure and the software design. This is mandatory for a scalable application.

I will keep it short and encourage you to watch this great keynote from Molly Dishman & Martin Fowler.

Vagrant CentOS 6.5 Shared Folders issue

Recently I stumbled upon a weird issue which I had run into before but neglected to fix completely: `Vagrant Shared Folders` on CentOS.

The problem appeared when I upgraded my VirtualBox to 4.3.22 (on a Windows 7 host) and Vagrant to 1.7.2: my existing CentOS 6.5 shared folders started acting up.

The initial `vagrant up` worked perfectly fine: all the shared folders were mapped and working like a charm. BUT subsequent `vagrant reload`s were not the same. The machine was rebooting successfully, but mounting the shared folders raised the following error:

> default: Machine booted and ready!
> default: Checking for guest additions in VM...
> default: Mounting shared folders...
> default: /vagrant => D:/myproject
Failed to mount folders in Linux guest. This is usually because 
the "vboxsf" file system is not available. Please verify that 
the guest additions are properly installed in the guest and 
can work properly. The command attempted was: 
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3` vagrant /vagrant 
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant` vagrant /vagrant 
The error output from the last command was: 
/sbin/mount.vboxsf: mounting failed with the error: No such device 

The problem was a version discrepancy in the VirtualBox Guest Additions between the `Host` and the `Guest`.

Normally you could just log in to the VM and manually rebuild the VirtualBox Guest Additions:

$vagrant ssh
-bash-4.1$ sudo /etc/init.d/vboxadd  setup

but even that was not working in my case with `CentOS 6.5`. It raised the following:

Removing existing VirtualBox non-DKMS kernel modules [ OK ]
Building the VirtualBox Guest Additions kernel modules
The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.
The missing package can be probably installed with
yum install kernel-devel-2.6.32-504.8.1.el6.x86_64

Building the main Guest Additions module [FAILED]
(Look at /var/log/vboxadd-install.log to find out what went wrong)
Doing non-kernel setup of the Guest Additions [ OK ]

Viewing the contents of the log files indicated the following:

$ cat /var/log/vboxadd-install.log
/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module.

This means that the kernel headers are mismatched. This post helped me get it straight, and by running the following command I was able to get the kernel headers right:

$ sudo yum install  gcc dkms   kernel-devel

Now I could run the `vboxadd` setup again and it worked smoothly:

$ sudo /etc/init.d/vboxadd  setup

Now if you log out of the VM and do a `vagrant reload` everything would work as expected.

My final step was to integrate these into the provisioning of the VM, since provisioning runs the first time I initialize the machine.

As I’m using a simple `shell` file to provision the machine, I just added these two lines to the end of the bash file used by the VagrantFile:

sudo yum install  gcc dkms   kernel-devel
sudo /etc/init.d/vboxadd  setup

Hope this saves someone some time.

update 1

Adding the commands to the provisioning file is not working. When they are executed manually in the VM, they work perfectly fine, but through provisioning they fail. I’m investigating this further.

Update 2

My friend and colleague, Marc, pointed out that this could be achieved through the Vagrant plugin `vbox-guest`. Using vbox-guest is pretty straightforward and easy with the latest versions of OSs: CoreOS, Ubuntu, CentOS 7, etc. The problem I was facing was caused by CentOS 6.5 and the discrepancy between the kernel version required by VirtualBox and the one that comes by default with CentOS 6.5. That is the reason I ruled out the plugin and completely uninstalled it from Vagrant.

Alternative Solution with NFS (winnfsd)

Another option that avoids all the hassle above is to use `NFS`. NFS is not natively supported by Windows, but the excellent Vagrant winnfsd plugin makes it work through a winnfsd service running in the background.

After installing the plugin, all you need to do is specify the shared folders in your `VagrantFile` and set the type to `nfs`, as follows:

config.vm.synced_folder ".", "/vagrant", type: "nfs"

Simple setup is one of the perks of `nfs`, but as it is not natively supported on the host machine, there can be issues in some cases.

So far I have only seen one case, involving the installation of `node-gyp`, which causes an error like this:

 CXX(target) Release/ 
 SOLINK_MODULE(target) Release/ 
/usr/bin/ld: Release/ bad relocation section name `' 
Release/ could not read symbols: File format not recognized 
collect2: ld returned 1 exit status 
make: *** [Release/] Error 1 
make: Leaving directory `/vagrant/node_modules/ursa/build' 

This is an issue with the underlying shared file system. So far the only way I could work around it is something like the following:

$ cd /tmp
$ npm install node-gyp
$ cp -r node_modules/node-gyp /vagrant/node_modules/

This is a big issue, though. Because of the unpredictable errors and unhelpful messages, I decided to drop the idea of using winnfsd. It adds more complexity than it brings simplicity.

Up and Running in no (less) time with dotfiles

Setting up a new computer to your liking is very time consuming. That includes all the tiny configurations, aliases, functions, and small modifications you have made over time.

Some configurations are not easy to remember, or you might have lost the details.

I have some `launchd` tasks configured on my computer which I forget to carry over every time I switch systems or need them somewhere else. I usually keep a note of how the setup works in a wiki, but what could explain it better than the actual working syntax?

I like the idea of dotfiles; it gets you up and running very quickly. You could fork one of the existing repositories, or you could start your own.

I started my own and am trying to put whatever setup I have made to my computer into that repository, organized by category.
In the case of `launchd`, I have created a category especially for that, and in the `` file, which is supposed to set up a complete system, I have written a few lines of code to loop through the folder, find every file in it, copy them over to `/Library/LaunchDaemons/`, `load` them, and then `start` the tasks.
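As a rough sketch of that loop (in Python for illustration; the real steps live in the shell setup script, and the `launchd/` folder name and plist label convention here are assumptions), the per-file commands amount to:

```python
import os

def launchd_commands(plists, dest="/Library/LaunchDaemons"):
    """Build the copy/load/start commands for each launchd plist found."""
    commands = []
    for plist in plists:
        name = os.path.basename(plist)
        target = os.path.join(dest, name)
        label = os.path.splitext(name)[0]  # assumes the label matches the file name
        commands.append(["sudo", "cp", plist, target])
        commands.append(["sudo", "launchctl", "load", target])
        commands.append(["sudo", "launchctl", "start", label])
    return commands

# Example: the commands generated for one hypothetical task
for cmd in launchd_commands(["launchd/com.example.backup.plist"]):
    print(" ".join(cmd))
```

Running the generated commands (for instance via `subprocess.run`, or as plain shell lines) copies each plist into place, loads it, and starts the task.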

I believe my dotfiles repository will give me more confidence the next time I want to completely scrap my OS and do a fresh install.