I've been a long-time Heroku user and have tried Render.com as well. They have saved me a lot of time while running early-stage startup products, thanks to easy deployments and high availability. However, over time, I realized a few things:
I was motivated by an exciting project, the opportunity to gain new skills, and potential savings… The decision was made to migrate!
Here is what the proposed architecture looks like:
Essentially, once the deployment script has run, the process is straightforward:
Rather than detailing the entire step-by-step process, I'd prefer to share what I learned along the way, highlighting these 10 key points.
Self-hosting requires a diverse set of skills: understanding the role of a reverse proxy, knowing how to create and renew certificates automatically and for free, binding a subdomain, understanding how DNS works, building efficient Docker images, using the command-line interface efficiently, monitoring the health of your system, understanding exotic HTTP headers, and so forth. As a side note, ChatGPT was an incredible co-pilot for this journey, significantly reducing the time I spent stuck on certain issues.
Over time, I had almost forgotten that when you host things yourself, you can do essentially anything you want. It's a significant relief not having to pore over vendor documentation to understand the imposed limitations. Now, the only limits I run into are physical ones: CPU, RAM, I/O, and bandwidth.
It's somewhat paradoxical that migrating away from a PaaS has helped me better understand how they operate. But if you're somewhat familiar with Heroku or fly.io, the concept of containers (referred to as "dynos") already exists. You need to have multiple ones to avoid interruptions, and you need to build images (known as "slugs"). There are lots of small behaviors like these that I discovered when hosting by myself. Now, I can vaguely imagine how I might create a very simplified version of a platform like this using Docker and scripts. In retrospect, being a founding engineer on a team like Heroku would likely have been an exciting job.
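To make that idea concrete, here is a minimal sketch of what a toy, single-machine "mini PaaS" deploy step could look like, written as a TypeScript script that shells out to the Docker CLI. The app name, port, and tagging scheme are placeholder assumptions, and a real platform would add health checks, rollbacks, and zero-downtime swaps:

```typescript
// toy-deploy.ts — a deliberately naive deploy step, for illustration only.
// Assumptions: Docker is installed, the app has a Dockerfile in the current
// directory, and the name/port below are hypothetical placeholders.
import { execSync } from "node:child_process";

const run = (cmd: string): void => {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
};

const app = "my-app";
const tag = `${app}:${Date.now()}`; // a crude release identifier (a "slug" of sorts)

// Build phase: turn the source code into an immutable image.
run(`docker build -t ${tag} .`);

// Swap phase: stop the old container and start the new release. This brief
// gap is exactly why real platforms run several dynos behind a router.
run(`docker rm -f ${app} || true`);
run(`docker run -d --name ${app} -p 3000:3000 --restart unless-stopped ${tag}`);
```

Even this naive version hints at the hard parts: the stop-and-start swap causes a short outage, and there's no rollback if the new release turns out to be broken.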
It's hard to express in words just how satisfying it is for an engineer like me to work with such a tool. I think the satisfaction comes from the nature of containerization itself, where each unit has a single, well-defined role. Once the entire orchestration of different containers is functioning, it feels as though nothing can go wrong: it just works. This is the kind of tool that makes me wonder, "How did we manage before this?"
Using Docker both in local environments (with dev containers) and in remote environments is clearly a beneficial approach to achieving dev/prod parity, particularly compared to Heroku, which relies on stacks and somewhat obscure buildpacks. That doesn't mean I can use the exact same configuration files for all environments; they won't be precisely identical. Yet the skill set required to deploy them is nearly the same.
Even though this methodology is more than a decade old, I'm still a huge fan of it for deploying apps. It's brilliant: it simply works, scales, and helps avoid many pitfalls. I've tried to apply it to Docker deployment as much as I can, in order to keep the ability to scale this project with more advanced concepts like multiple environments and deployment pipelines. For instance, one challenge was separating the release phase from the build phase with Docker, as a Docker container is designed to run a single command.
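One workaround I can sketch (with placeholder names, and assuming the release task is a database migration) is to run the same image twice: once as a one-off container for the release phase, then as the long-running web process:

```typescript
// release-then-run.ts — approximating Heroku's build/release/run split with Docker.
// Assumptions: the image is already built, and `npm run migrate` stands in for
// whatever one-off release task the app actually needs.
import { execSync } from "node:child_process";

const image = "my-app:latest"; // hypothetical tag produced by the build phase

// Release phase: a one-off container that runs the release task and exits.
// If it fails, execSync throws and the new version is never started.
execSync(`docker run --rm ${image} npm run migrate`, { stdio: "inherit" });

// Run phase: only reached once the release phase has succeeded.
execSync(`docker run -d --name my-app -p 3000:3000 ${image}`, { stdio: "inherit" });
```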
I didn't realize how useful a reverse proxy could be until I used Nginx myself for the first time. In a project using Docker, it can route traffic to the appropriate container just by inspecting the URL (or the HTTP headers). It can enforce redirection to HTTPS or to the 'www' subdomain. Acting like a gateway, a gatekeeper of sorts, it decides what to do with each request. Additionally, it can cache content to relieve pressure on your web server.
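To illustrate the idea (not how Nginx works internally, just the gist of what my configuration does), here is a toy reverse proxy in TypeScript; the hostnames and ports are placeholder assumptions:

```typescript
// toy-proxy.ts — a toy illustration of what a reverse proxy does.
// Assumptions: two app containers are reachable on local ports 3001/3002,
// and the domain names are placeholders.
import http from "node:http";

// Map incoming Host headers to upstream containers.
const upstreams: Record<string, number> = {
  "www.example.com": 3001,
  "api.example.com": 3002,
};

http
  .createServer((req, res) => {
    const host = (req.headers.host ?? "").split(":")[0];

    // Enforce the 'www' subdomain, like a rewrite rule would.
    if (host === "example.com") {
      res.writeHead(301, { Location: `https://www.example.com${req.url}` });
      res.end();
      return;
    }

    const port = upstreams[host];
    if (!port) {
      res.writeHead(502);
      res.end("Unknown host");
      return;
    }

    // Forward the request to the chosen container and stream the answer back.
    const proxied = http.request(
      { host: "127.0.0.1", port, path: req.url, method: req.method, headers: req.headers },
      (upstream) => {
        res.writeHead(upstream.statusCode ?? 502, upstream.headers);
        upstream.pipe(res);
      }
    );
    proxied.on("error", () => {
      res.writeHead(502);
      res.end();
    });
    req.pipe(proxied);
  })
  .listen(80);
```

Nginx expresses all of this declaratively in a few `server` blocks, and adds TLS termination and caching on top.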
While using Nuxt, a front-end framework that supports universal rendering, I encountered unexpected errors: the first page load wouldn't work, but subsequent navigation on the site would. How could this be?
I was using the same external backend URL regardless of where the backend call originated. In server-side rendering, the caller is the Node server; on the client side, it's the browser.
At this point, I came to understand the concept of internal network addresses and realized I needed to determine the origin of a query in order to dynamically select the correct address.
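What I needed boils down to something like this sketch (assuming the backend is reachable as `http://backend:8000` on the internal Docker network; both names are placeholders):

```typescript
// api-base.ts — choose the backend URL based on where the code is running.
export function apiBase(): string {
  // During server-side rendering there is no `window`: the call comes from
  // the Node server, which can use the internal Docker network address.
  if (typeof window === "undefined") {
    return "http://backend:8000";
  }
  // In the browser, calls must go through the public URL, which the
  // reverse proxy then routes to the very same container.
  return "https://api.example.com";
}
```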
I'm not fond of the additional complexity this adds to the whole system, and I'm seeking a better solution.
In software development, simply making things work often isn't enough. This sentiment resonated with me during this project. Even though everything functioned as it should, I wasn't satisfied with the state of the project for several reasons:
At times, all of this feels like a rabbit hole: a never-ending project that can always be improved. I'm not sure if that's necessarily a bad thing, though.
Based on all the insights gained, my perspective is much clearer now. While I learned a lot, the more I delved into it, the more I realized how challenging it is to run one's own hosting service. For hobby projects, that's not a significant concern. But for any commercial project, I wouldn't risk DDoS attacks, data loss, downtime, configuration errors, and other potential pitfalls just to save a few bucks. At some point, it's inevitable that I'd make a mistake, turning deployments into a source of stress compared to the smooth process a platform like Heroku provides.
Perhaps there could be a sweet spot where I can still use Docker to fine-tune the content of my server while using a hosting platform. I believe fly.io is particularly good at this, despite some limitations I mentioned earlier.
But without a doubt, in an early-stage context, and with limited manpower, no one should spend time building infrastructure.