10 Lessons Learned from migrating from PaaS to Docker on a VPS

David Dahan
Jul 2023 6 min
tech
server
virtualization
architecture

First, what's wrong with PaaS?

I've been a long-time Heroku user and have tried Render.com as well. They saved me a lot of time while running early-stage startup products, thanks to easy deployments and high availability. However, over time, I realized a few things:

  • The pricing isn't very attractive for multiple personal projects, as I have to pay on a "per app" basis regardless of the bandwidth actually used. It's frustrating to pay for a front-end, a back-end, and a database for a portfolio website that receives very few visits. Beyond pricing, there are other trade-offs: Heroku doesn't support the Poetry package manager or mono-repos, Render and fly.io don't adhere to the 12-factor methodology, Vercel forces the serverless paradigm on the back-end, and so on. The more I understood what I truly needed, the more frustrated I became with these limitations.
  • DevOps skills are in high demand, and I felt a bit ashamed to be intimidated whenever I saw a requirement like Docker in a job offer.
  • After all, I already had skills with Docker, dev containers, network infrastructure, and shell scripting. So it didn't seem like the end of the world!

I was motivated by an exciting project, the opportunity to gain new skills, and potential savings… The decision was made to migrate!

The plan

  • Use a raw VPS running Debian to avoid any vendor lock-in and to make sure I learn the low-level aspects. I used Vultr.com for this, but I suppose I could have used any other provider.
  • Fully embrace Docker (with docker-compose) to maintain easy deployments and to mirror my local environment, which uses dev containers.

Here is what the proposed architecture looks like:

Essentially, once the deployment script has been executed, the process is straightforward (a rough sketch of such a script follows the list):

  1. The three repositories (highlighted in purple in the diagram) are cloned onto the VPS.
  2. The docker-compose up command is executed, either creating or updating the entire Docker environment, while injecting the appropriate environment variables from the .env files.
  3. A cleanup phase deletes anything that has become redundant, to save disk space.
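
To make this concrete, here is a rough sketch of what such a deploy script can look like; the repository names and paths are placeholders, not the actual project's:

    #!/usr/bin/env bash
    # Hypothetical deploy script; repository names and paths are placeholders.
    set -euo pipefail

    cd /srv/app   # assumed project directory on the VPS

    # 1. Clone the repositories on first run, pull them afterwards.
    for repo in frontend backend infra; do
      if [ -d "$repo" ]; then
        git -C "$repo" pull --ff-only
      else
        git clone "git@github.com:example/$repo.git"
      fi
    done

    # 2. Create or update the whole environment; environment variables are
    #    injected from the .env files referenced by the compose file.
    docker-compose up -d --build

    # 3. Cleanup phase: drop stopped containers and dangling images to save disk space.
    docker system prune -f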

10 things learned

Instead of detailing the entire step-by-step process, I'd rather share what I learned along the way through these 10 key points.

I learned more than I expected overall

Self-hosting requires a diverse set of skills: understanding the role of a reverse proxy, knowing how to create and renew TLS certificates automatically and for free, binding a sub-domain, understanding how DNS works, building efficient Docker images, using the command line efficiently, monitoring the health of the system, understanding exotic HTTP headers, and so forth. As a side note, ChatGPT was an incredible co-pilot for this journey, significantly reducing the time I stayed stuck on certain issues.
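
To give one concrete example among those skills, obtaining and renewing the free certificates usually comes down to a couple of certbot commands. This assumes certbot with Nginx running directly on the host (a containerized Nginx needs a slightly different setup), and the domain is a placeholder:

    # One-time issuance of a certificate for a domain served by Nginx.
    sudo certbot --nginx -d example.com -d www.example.com

    # Renewals are handled by the timer the certbot package installs; this verifies the setup.
    sudo certbot renew --dry-run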

The sense of low-level freedom is very satisfying

I had almost forgotten over time that when you host things yourself, you can essentially do whatever you want. It's a significant relief not having to pore over vendor documentation to understand the imposed limitations. Now, the only limits I run into are physical ones: CPU, RAM, I/O, and bandwidth.

I now understand better how PaaS platforms work

It's somewhat paradoxical that migrating away from a PaaS has helped me better understand how they operate. But if you're somewhat familiar with Heroku or fly.io, you've already met the concept of containers (Heroku calls them "dynos"): you need to run several of them to avoid interruptions, and you need to build images (Heroku's "slugs"). There are lots of small behaviors like these that I rediscovered once I started hosting things myself. Now, I can vaguely imagine how I might build a very simplified version of such a platform using Docker and scripts. In retrospect, being a founding engineer on a team like Heroku would probably have been an exciting job.

Docker is a brilliant piece of tech

It's hard to express in words just how satisfying it is for an engineer like me to work with such a tool. I think the satisfaction comes from the nature of containerization itself, where each unit has a single, clearly defined role. Once the whole orchestration of containers is up and running, it feels as though nothing can go wrong: it just works. This is the kind of tool that makes me wonder, "How did we manage before this?"

Achieving dev/prod parity is obviously easier now

Using Docker both locally (with dev containers) and on the remote server is clearly a good way to achieve dev/prod parity, especially compared to Heroku, which relies on stacks and somewhat obscure buildpacks. That doesn't mean I can use the exact same configuration files everywhere: they won't be perfectly identical from one environment to the next. Yet the skill set required to deploy them is nearly the same.
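
As an illustration, a common pattern is a shared base file plus a small per-environment override; the snippet below is a hypothetical production override, not my actual configuration:

    # docker-compose.prod.yml -- layered on top of the base file with:
    #   docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
    # Service name and env file are placeholders.
    version: "3.8"
    services:
      backend:
        restart: always            # production only: recover from crashes and reboots
        env_file: .env.production  # production secrets instead of local defaults
        # unlike the dev setup, no source-code bind mounts here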

The 12-factor methodology is still relevant in 2023

This methodology for deploying apps is more than a decade old, but I'm still a huge fan of it. It's brilliant: it simply works, scales, and helps avoid many pitfalls. I've tried to apply it to my Docker deployment as much as I can, so as to keep the ability to grow this project towards more advanced setups like multiple environments and deployment pipelines. For instance, one challenge was separating the release phase from the build phase with Docker, since a container is designed to run a single command.
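
One way to approximate Heroku's build/release/run split with docker-compose is to run the release command as a one-off container between the build and the start of the long-running services. A minimal sketch, where the service name and the migration command are placeholders for whatever the app actually uses:

    # Build phase: produce the images.
    docker-compose build

    # Release phase: a one-off container that runs migrations against the new code, then exits.
    docker-compose run --rm backend ./manage.py migrate

    # Run phase: start (or replace) the long-running containers.
    docker-compose up -d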

A reverse proxy is incredibly handy

I didn't realize how useful a reverse proxy could be until I used Nginx myself for the first time. In a Docker-based project, it routes traffic to the appropriate container simply by looking at the URL (or the HTTP headers). It can enforce redirection to HTTPS or to the 'www' subdomain. Acting as a gateway, a gatekeeper of sorts, it decides what to do with each request. It can also cache content to take pressure off your web server.
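
Here is a stripped-down sketch of the idea; domain names, ports, and container hostnames are placeholders, and the TLS certificate directives are omitted:

    # Enforce HTTPS and redirect the bare domain to the 'www' host.
    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://www.example.com$request_uri;
    }

    # Route each request to the right container based on the URL.
    server {
        listen 443 ssl;
        server_name www.example.com;
        # ssl_certificate / ssl_certificate_key directives omitted

        location /api/ {
            proxy_pass http://backend:8000;    # backend container on the Docker network
        }

        location / {
            proxy_pass http://frontend:3000;   # everything else goes to the front-end
        }
    }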

Networking is challenging when using Docker and Universal Rendering

While using Nuxt, a front-end framework that supports universal rendering, I encountered unexpected errors: the first page load wouldn't work, but subsequent navigation on the site would. How could this be?

The cause: I was using the same public backend URL regardless of where the call to the backend originated. With universal rendering, that origin changes: during server-side rendering it's the Node server, a container on the Docker network; on the client side, it's the visitor's browser.

At that point, I came to understand the concept of internal network addresses: the Node container should reach the backend through its internal Docker hostname, while the browser has to go through the public URL. So I needed to determine the origin of a query in order to dynamically select the correct address.
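
A minimal sketch of what this leads to, assuming Nuxt 3 and a backend container reachable as "backend" on the Docker network (all hostnames are placeholders):

    // nuxt.config.ts -- a sketch assuming Nuxt 3; hostnames are placeholders.
    export default defineNuxtConfig({
      runtimeConfig: {
        // server-only: the backend's hostname on the internal Docker network
        internalApiBase: 'http://backend:8000',
        public: {
          // exposed to the browser, which has to go through the public domain
          apiBase: 'https://www.example.com/api',
        },
      },
    })

    // Wherever the backend is called, pick the base URL according to where the code runs:
    //   const config = useRuntimeConfig()
    //   const baseURL = process.server ? config.internalApiBase : config.public.apiBase
    //   const items = await $fetch('/items', { baseURL })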

I'm not fond of the additional complexity this adds to the whole system, and I'm seeking a better solution.

Making things work isn't difficult, but making the whole setup robust is

In software development, simply making things work often isn't enough. This sentiment resonated with me during this project. Even though everything functioned as it should, I wasn't satisfied with the state of the project for several reasons:

  • There is configuration and coupling everywhere: Dockerfiles, docker-compose files, env files, Nginx configuration, deploy scripts, and so on. The DRY principle (Don't Repeat Yourself) is not being respected, and I learned this the hard way: when I switched to my new domain name, despite a careful text search, I still overlooked a few places, broke everything, and spent quite a while fixing it.
  • It's very easy to break things with any modification. Since I currently have only a single remote environment, I can't be sure the production environment will keep working if I try something new.
  • Everything is difficult to test. I'm not even sure how I would go about testing a shell script.
  • I presume that a more experienced DevOps professional could challenge many of my architectural choices.

At times, all of this feels like a rabbit hole: a never-ending project that can always be improved. I'm not sure if that's necessarily a bad thing, though.

Conclusion: I'll still use a PaaS for any serious project

Based on all these insights, my perspective is much clearer now. I learned a lot, but the deeper I went, the more I realized how challenging it is to build one's own hosting service. For hobby projects, that's not a significant concern. But for any commercial project, I wouldn't risk DDoS attacks, data loss, downtime, configuration errors, and other potential pitfalls just to save a few bucks. Sooner or later I'd inevitably make a mistake, and deployments would become far more stressful than the smooth process a platform like Heroku provides.

Perhaps there could be a sweet spot where I can still use Docker to fine-tune the content of my server while using a hosting platform. I believe fly.io is particularly good at this, despite some limitations I mentioned earlier.

But without a doubt, in an early-stage context, and with limited manpower, no one should spend time building infrastructure.

