From development to production at QDQMedia

Xabier Larrakoetxea

By Xabier Larrakoetxea, Thu 03 December 2015, in category Infrastructure

ci, dev, docker, flow, microservices, prod, tsuru

Introduction

At QDQMedia the engineering department is composed of multiple teams, and each team has created its own workflow based on the type of work it does, its experience and its comfort.

Today I will talk about the flow that our team has created to ship code from development to production.

This will be the first article about this flow, but not the only one. We will cover both workflow and architecture: in this very first post we will see a high-level overview, and in the next ones we will look at each of the pieces in detail. So let's start.

From the bottom to the top!

As we explained in older posts, QDQMedia is adopting a microservice-oriented architecture. This improves flexibility, but it also means we have to support more technologies and stacks. Each team/developer selects the stack that suits the project best; at this moment we have projects in Python, Node.JS/JavaScript/TypeScript, Go and Java.

With this in mind, whenever we make a decision we know that the project stack needs to be abstracted away and every project treated as a generic one.

Coding

We use Git as our VCS; this is important, as we will explain later. The tool we use to manage our repositories is Gitlab. The very first thing we do is create a Git repository on Gitlab... well, before that we need to pick a name for the project, and that's a task that takes a lot of time! Now we are ready to clone and start coding.

Since each developer has his or her own environment (although all are Unix-like and almost everyone uses GNU/Linux), we need a way to have the same environment on every developer's computer. To manage this we have created a standard project environment: each project needs to have dev, ci and prod environments ready. Pre and beta are optional, but at least one of the two has to be prepared.

I won't explain these environments in depth because they will be covered in a future post, but for the dev environment the key pieces are Docker, Docker Compose and make. With this environment and these tools every developer can start coding by simply following this flow:

make up
make shell
# Do stuff
make stop
make rm

Easy to understand and apply!
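To give an idea of what those targets do, here is a rough sketch of the mapping; the exact commands vary per project and the Compose service name "app" is an assumption, not our real configuration:

make up      # docker-compose up -d: build and start the project containers
make shell   # docker-compose run --rm app bash: open a shell inside the project stack
make stop    # docker-compose stop: stop the running containers
make rm      # docker-compose rm -f: remove the stopped containers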

Each developer starts coding the feature/fix in a new branch, where the code is pushed until the developer decides it is ready.
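In practice, the Git side of this step looks like the following (the branch name is only an example):

git checkout -b feature/shiny-new-thing      # start the feature/fix in its own branch
# ...code, commit, repeat...
git push -u origin feature/shiny-new-thing   # every push triggers a CI build (see below)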

Building

Each time code is pushed to Git (regardless of the branch), Gitlab triggers a build in the CI environment.

In our case the CI system is GitlabCI; Gitlab is configured with a webhook that triggers a build on every push. At that moment the ci environment of the project starts running. As with the dev environment, we will explain it in depth in another post, but all the CI environments are based on Docker, so the stack of each project is abstracted away.

Build list on Gitlabci

Almost all projects have the same requirements for a CI build.

When all the build steps are executed correctly, we can be sure that a deploy package is ready in our deploy-package repository.
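As a hedged sketch (the image name, test command and package naming are made up, not our exact setup), a Docker-based CI build for one of these projects boils down to something like:

docker build -t myapp-ci .                                          # build the project image with its stack inside
docker run --rm myapp-ci make test                                  # run the test suite inside the container
tar czf /tmp/myapp-$(git rev-parse --short HEAD).tar.gz --exclude=.git .   # create the deploy package
# upload the package to the deploy-package repository (tooling omitted here)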

At this stage we could deploy to preproduction or beta. Some of our projects automatically deploy to beta or preproduction if the tests pass.

Code review

When the developer has tested the branch and decides that the code is ready for production, he/she opens a merge request on our Gitlab platform so that another developer can take it and review the code. If the reviewer is happy, he/she merges it into the master branch (the stable, production one).

If the branch can't be merged automatically by Gitlab, the developer has to rebase onto, or merge, the master branch into his/her branch so the reviewer can do a one-click merge.
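For example, to get a branch back into a mergeable state (assuming origin is the Gitlab remote):

git fetch origin
git rebase origin/master          # or: git merge origin/master
git push --force-with-lease       # update the branch so Gitlab can merge it with one click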

Merge request

When the merge is made, a build is triggered automatically. At this point two things can happen: a good build or a bad build.

If the build is good, the code can be shipped. If the build fails, the developer needs to fix the master branch (either fixing the problem or reverting the merge); the master branch can never stay in a broken state.
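If reverting the merge is the quickest way to get master green again, it is a couple of commands (the commit hash is a placeholder):

git revert -m 1 <merge-commit-sha>   # undo the broken merge, keeping history intact
git push origin master               # the push triggers a new build of master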

Deployment

Before the code review we deploy to our beta or preproduction environments to test, and we also deploy when the code is ready to be shipped. We do this multiple times every day; releasing often is a good thing!

The deployment is automated with Fabric & Fabric Bolt; almost all projects can be deployed, or rolled back (less often), with a single click. With this we gain a lot of important things.

Deployment list

In Fabric Bolt we have configured all the application environments (production, beta, preproduction...) so we can deploy quickly to any of them.

The process is the same for all projects: Fabric Bolt triggers the fabric file, which downloads the deployment package (we select the package based on branch name or timestamp), sends a deploy notification to our Slack deployments channel and finally starts the deployment to our PaaS (Tsuru; see previous posts for more information).
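A stripped-down sketch of what that fabric file automates (the package URL, Slack webhook and application name are placeholders, not our real ones) would be:

curl -o /tmp/myapp.tar.gz https://packages.example.com/myapp/master-latest.tar.gz   # download the deploy package
curl -X POST --data-urlencode 'payload={"text": "Deploying myapp to production"}' \
    https://hooks.slack.com/services/XXX/YYY/ZZZ                                    # notify the deployments channel
mkdir -p /tmp/myapp && tar xzf /tmp/myapp.tar.gz -C /tmp/myapp                      # unpack the package
tsuru app-deploy -a myapp /tmp/myapp                                                # ship it to our PaaS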

Deployment detail

When we deploy, everyone knows that someone is deploying something; developers and ops people only have to subscribe to our deployments Slack channel.

Deployment notification

Finally our code is shipped!

Conclusion

From the beginning we wanted an easy and automated flow. We are lucky to have tools that allow us to do this; without them it would be impossible.

We still have a lot to improve, but we see improvements every day and think we are heading in the right direction.

In the upcoming articles we will take a deep look at each part. Prepare for very techie articles! :)

Header photo credit: CEphoto, Uwe Aranas / CC-BY-SA-3.0
