How to organize your Docker project
Jul 31, 2016
6 minute read

Hey you! Today we are testing a new way to organize projects with Docker!

Why Docker for your project?

But first you ask yourself: "Why use Docker? I already know how to organize my projects!" And it's a great question to ask. With the popularity of this little revolution, Docker gets used over and over, everywhere, but it's not always a good idea to reach for it, oh no.

If, for example, your project is a combination of small microservices that talk to each other over a few ports and work together to form a bigger project, then yeah, Docker is your man: simple development, simple deployment of this pasta mess.

But if you have a big monolith, or a lot of servers and only a small number of services (just a database and an nginx-php stack, for example), Docker is totally overkill! You will see with time whether your project is going to be Docker-friendly, but don't rush it: you will lose more time trying to package your project into services than actually evolving it.

So, do you have a project with microservices?

Yeah, let's read this!

No, I should read another article.

Let's go!

Let's see how to properly organize a project with Docker. I don't pretend to have the best solution, but this blog is fully functional with Docker, and there are a lot of other services I run on this server.

First, you need to ask yourself what big services you need, for example:

  • A blog
  • A Pydio instance to access your files
  • A random nginx website

And all of that over a secure connection, with Let's Encrypt if possible.

Phew, that's a lot. One way to do it with Docker is to pack each service into a single container, maybe with supervisor in order to run multiple processes in the same container. But I don't like this method: I think it breaks the Docker philosophy, where the goal is to split services into microservices as tiny as possible. So, step 2: identify the microservices.

  • A blog: Nginx, Hugo (a static HTML blog generator)
  • A Pydio instance to access your files
  • A random nginx website: Nginx, PHP-FPM (if PHP is needed), MySQL (if a database is needed too)
  • HTTPS: a global Nginx (that routes each subdomain to the right service, for example), Let's Encrypt

And here we are! You now know exactly which containers you will need.

The architecture

Update 7/12/2016: the architecture described here is now out of date for me, because I decided to separate my blog from the other services, but it can still apply to you anyway!

With this container organization, it becomes almost easy to see how to combine them into networks and volumes, and how the microservices will communicate with each other.

For my server, I chose to have only one network for the moment, and volumes are shared between nginx and the "HTML service" to serve the HTML properly. The map looks like this:

  • sub.domain.com -> global Nginx = HTTPS gateway + redirection according to the URL
  • The subdomain matches a service -> redirection to that service's nginx = HTML serving OR another redirection (based on the URL path, for example)
  • Etc …

For example, if you go to kodewolf.com, you pass through the global nginx, so HTTPS is enabled, and since this container redirects the bare domain (no subdomain) to the blog, you then pass through a second nginx: the blog nginx. This one gives you the HTML of the blog, which has been generated by Hugo.
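
To make this concrete, here is a minimal sketch of that blog chain in docker-compose syntax (the service names, image and paths are only examples, not necessarily the exact ones I use):

version: '2'

services:
  global-nginx:
    build: ./global-nginx      # holds the HTTPS confs and the subdomain redirections
  blog-nginx:
    image: nginx:alpine        # only serves static files
    volumes:
      - ./blog/source:/usr/share/nginx/html:ro   # the HTML generated by Hugo ends up in here

Both containers sit on the same network that docker-compose creates, so the global nginx confs only need a proxy_pass to http://blog-nginx and docker-compose takes care of the name resolution.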

With this architecture, you can always see how your containers talk to each other, which improves both security and comprehension. If something breaks, all the other services stay up, and maintenance is easy and harmless: you rebuild a container for the exact part that fails, and you bring it up again. docker-compose, for example, recreates only the changed containers with a simple

$ docker-compose up -d

Docker compose and folders

I love to put everything in git, so my folder structure has to be as clean as possible, and if you use docker-compose like me, you will see that choosing between one main docker-compose.yml file and small files in each folder is a tough debate.

For this project, I use two docker-compose files: a normal one and an admin one. The normal one is where my long-running services live, like nginx or Pydio, and they all have the "restart: always" flag enabled. The admin one is a file I use to create certificates or to generate my blog, for example: those containers are only useful from time to time, and you run them each time you need them.

And so here is my project's root, for example:

- docker-compose.yml
- docker-compose.admin.yml
- global-nginx
    - Dockerfile + confs
- blog
    - source (where my blog content lives, mounted as a volume)
        - blahblahblah
    - hugo (the Hugo container I created)
        - Dockerfile + confs
- pydio
    - Dockerfile
- letsencrypt
    - Dockerfile and scripts

In my main docker-compose file, I indicate where the builds live and which volumes are used. I only open ports 80 and 443, on my global nginx.
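
To give an idea (again a sketch with illustrative names, not a copy of my exact file), the main docker-compose.yml looks roughly like this:

version: '2'

services:
  global-nginx:
    build: ./global-nginx
    restart: always
    ports:                     # the only container exposed to the outside world
      - "80:80"
      - "443:443"
  pydio:
    build: ./pydio
    restart: always

The other services follow the same pattern: a build pointing at their folder, the restart: always flag, and no published ports.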

In my secondary docker-compose file, the admin one, I decided to put letsencrypt, a container I launch from time to time to renew my certificates, and the Hugo service, which I run to compile the markdown into HTML, so not all the time either.
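
The admin file follows the same idea, but without restart: always, since these containers are one-shot (still a sketch, and the mount point is hypothetical):

version: '2'

services:
  letsencrypt:
    build: ./letsencrypt       # launched from time to time to obtain or renew certificates
  hugo:
    build: ./blog/hugo         # compiles the markdown into HTML
    volumes:
      - ./blog/source:/blog    # hypothetical mount point where the generated site ends up

These containers do their job and exit, which is exactly what docker-compose run is made for.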

Launch it!

Docker has this incredible way of making things work in production just like in development, so the procedure is the same for both environments:

$ docker-compose up -d

In order to launch the services (the global nginx, the per-service nginx instances, Pydio …), and:

$ docker-compose -f docker-compose.admin.yml run hugo

To generate the blog HTML from the markdown, for example. After this, you're done!

Conclusion

Docker is not for everything, so be careful to use it wisely, for a project that needs it. But if you think it's a good way to go, don't hesitate: start writing Dockerfiles and docker-compose.yml files to improve every side of your project!

My way of doing things is not meant for everybody either; if you don't like it, don't force yourself to use it. It's more an example of how you can do things properly amid the Docker mess.

If you need more inspiration, or you want to see how I use this on my server, you can go to my frostfire project on GitHub :) Bye!


