Unnecessary Complex Deployment Workflow: Context and Ideas
Said differently: « Forcing creativity by adding constraints » :D.
Nota: The draft of this post became so long that I decided to split it into not 2 but 3(!) different posts… This first post is about the context, the constraints and the plan. The next one will cover the common CI setup and the blog-specific build aspects. The last one will focus on the gemini capsule and its specificities.
Introduction
This first post is (still) a long post about the concept behind my deployment workflow for my blog and gemini capsule. It describes the constraints I have as a starting point, the ideas I imagined and the actual solution I’m working on. Details about the setup will follow in the next 2 posts of this series.
This new process is already fully implemented for this blog, but not yet for my gemini capsule, for 2 reasons:
- My capsule is still hosted (with other capsules) on a Digital Ocean droplet and needs to be migrated to my homelab (goal: before the end of June). I’ll set up the new deployment workflow for my capsule only on the new hosting (homelab), not before (to avoid doing it twice)
- I’m having a hard time finding a good way to transform markdown into gemtext for my blog posts. So I’m currently thinking about the best approach for that, and about the future of my Gemini capsule (more on that later in a gemlog)
My previous process was simpler and easier: a small deploy.sh script that builds the public files and deploys them to the web server via rsync. And the same thing for my capsule. It was a bit more complicated than that because of the gemini files and such, but still: the idea was just to run this deploy script from the root directory of my hugo blog, and that was it. If you want to look at it, it is still in the git history.
Why do I want to change a perfectly working process? Because it has one major drawback: I could not publish anything on my website from outside my home network. Just to change this specific point, while keeping the same list of constraints, took “a lot” of “work”… Some of those constraints are ideological, some are personal preferences, some are good practices, and most if not all of them are fully subjective :).
At least, it has been working flawlessly for this site for the past 4 or 5 weeks :).
If you have a better solution, while keeping the constraints list as is (too easy otherwise :)), I’d be happy to hear about it! See the About page for ways to reach me!
Constraints
So… As said, many constraints… Some basic, some weird, some due to what content is deployed where… I’m trying to be exhaustive here, so some may be obvious.
Basic constraints
Static Generated {website,capsule}
Obvious, but: I don’t use a dynamic CMS. I use Hugo, a static site generator, for this blog. It means I can’t edit its content via a web browser; instead, I add markdown files to the codebase and build the public files to put on a web server. I won’t move away from Hugo (and SSGs) at this stage.
Same for my gemini capsule: I’m using a static capsule generator called kiln, and that most probably won’t change either (this decision is slightly more open than the hugo one, but not much).
Source code on sourcehut
I decided some time ago to use sourcehut as my primary dev forge. So all my public code is now there, including the blog and capsule repos.
Sourcehut has a nice project feature to group git repos (but also todo lists and mailing lists) together. That’s why I have a dedicated writing project that contains the 4 repos required for my blog and capsule.
Blog hosted in house
This blog is hosted in my homelab, in a small LXC container running nginx. The container can only be reached from the reverse proxy (also nginx, in my homelab) via HTTP (or via ssh, only from my local network).
Capsule hosted in the cloud… For now!
All the gemini capsules I host are currently on a Digital Ocean droplet… But they will all be migrated to my homelab soon-ish. It means that right now I can update my capsule from anywhere, but I will have the same limitation as for the blog once they are migrated home. So let’s fix both at once.
Ingress HTTP(S) only
The biggest constraint, the one that started all of this: my home network only accepts incoming HTTP and HTTPS connections, on ports 80 and 443 (and at some point 1965 for Gemini). It means that I can not ssh or VPN into my local network from outside. And yes, this is on purpose and won’t change.
Local Git repo not available from outside
Since the last attack on sourcehut, I run a local Git forge (a forgejo instance) on my homelab. But it is only available on my local network, for the same reason as above (and also to avoid git repository crawlers). I’ll write a post later about how I manage my projects across different git forges, but for the sake of this post, let’s just say that I can not commit to this forge from outside, so using it as the main forge with a CI tool is not possible.
Intertwined content constraints
Now the fun starts with constraints related to which content should be where! The way I want things:
- Blog and gemlog posts are written in emacs and exported in the right format (markdown for blog posts, gemtext for gemlog entries) into the right project / folder
- Gemlog entries’ titles are visible on the web, but not their content (markdown files with just the frontmatter part); they are fully readable on gemini. The HTML files are even removed from the web.
- Blog posts are fully readable on both the web and gemini
- Bookmarks are retrieved from Linkding and only visible on the web, not on gemini (at least for now; this might change in the new deployment)
- Blog static pages are only available on the web, not gemini (won’t change)
- The blog git repository only contains content for blog and pages (no bookmarks or gemlog)
- The capsule git repository only contains content for gemlog1
Additional constraints
These are constraints I added during my reflection about possible solutions: when some of those solutions felt too ugly for one reason or another, I added a constraint to the list to rule them out.
No local CI
I surprised myself with this one, but I ended up deciding against selfhosting my own CI tool, with either Forgejo Runner or Woodpecker CI. Forgejo Runner requires docker, and my current forgejo instance is installed in an LXC container, not a VM, so it is not great for adding docker. And I didn’t want to add another VM on proxmox just for that. Woodpecker CI seems great, but way too complicated to manage for my micro use case, so I discarded it as well.
You may ask yourself: “well, you couldn’t commit to your local Git repo from outside anyway, what’s the difference here?”. Well, I had the potentially crazy idea of mirroring my sourcehut repos to forgejo, then using the CI tool to automate the build. But that was not an idea I liked at all. Still fun to think about, though :).
« Don’t be evil »©
This one is a bit weird, but it came from one of the ideas I had: to avoid “alerting” my webserver that a new artifact had been created via the sourcehut CI, I could simply start a build automatically from the server at regular intervals, wait for it to finish, and then use the generated artifact. But that meant running CI jobs and generating artifacts even when not needed.
While not really a problem, I didn’t want to go down that road. Especially since I didn’t want to wait too long between the commit and the deployment: for example, to have a maximum of 1h between push and deployment, the CI would have to run (and generate/store artifacts) every hour. A ridiculous idea as well; I didn’t want to store 24 artifacts per day (times 2 with the gemini capsule artifact) for 90 days… Such a waste!
Bonus 1: the Bookmarks “issue”
As said, the bookmarks come from a selfhosted Linkding instance (read more about it). It means that there are no commits for bookmarks: they are generated when the website output is generated. So no automated CI trigger based on pushed commits.
It would be a bonus to have a better way of doing this with the new process, because I may not post for weeks or months at some point, and having at least the bookmarks still updated regularly would keep this site alive without new blog or gemlog posts.
I could make the CI run at least once a day, but I don’t really find that an “elegant” solution… On the other hand, waiting for a blog post to update the site may mean tens of new bookmarks appearing at once if a new post is published a long time after the last one.
I’m still thinking about the best approach for this one.
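As a side note, the core of that bookmark step is pretty small. Here is a hedged sketch of what such a fetch-and-generate script can look like (the endpoint, header and field names follow my reading of Linkding’s REST API; the frontmatter fields and file naming are illustrative, not my exact script):

```python
import json
import urllib.request
from pathlib import Path

def bookmark_to_markdown(bm: dict) -> str:
    """Render one Linkding bookmark as a frontmatter-only markdown file."""
    title = (bm.get("title") or bm["url"]).replace('"', '\\"')
    return (
        "---\n"
        f'title: "{title}"\n'
        f'date: {bm["date_added"]}\n'
        f'externalUrl: {bm["url"]}\n'
        "---\n"
    )

def fetch_bookmarks(base_url: str, token: str) -> list:
    """Fetch bookmarks from a Linkding instance via its REST API
    (filtering to shared-only bookmarks is left out of this sketch)."""
    req = urllib.request.Request(
        f"{base_url}/api/bookmarks/?limit=100",
        headers={"Authorization": f"Token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]

def write_bookmark_files(bookmarks: list, content_dir: Path) -> None:
    """One markdown file per bookmark, under content/bookmarks/."""
    content_dir.mkdir(parents=True, exist_ok=True)
    for bm in bookmarks:
        out = content_dir / f"bookmark-{bm['id']}.md"
        out.write_text(bookmark_to_markdown(bm))
```

The real script does more (pagination, deduplication), but the shape is the same: API call in, frontmatter files out.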
Bonus 2: Statistics
Not so long ago, I added a stats page to show some information about the content available on this site (how many posts per month, per type, etc.). Those statistics are also calculated during the build process, based on the markdown files. This step has to run after all content is gathered, including bookmarks and gemlogs. It generates a JSON file and images that are used during the generation of the public files via hugo.
Maybe I can reuse this to display those stats within my Capsule. This is a very low priority task, but I have it in the back of my mind anyway.
Solution overview
After saying all this, the solution I found and implemented is built upon the workflow described below. This is a “quick” overview; the solution will be detailed in the next posts.
In one image

Figure 1: Diagram of the deployment workflow
The different build steps
- Push new content via git (to either the blog or the capsule repo)
- Sourcehut CI automatically starts a build that will generate 2 artifacts: the public website files and the public capsule files (so 1 build for both).
- Build artifacts process:
- Download required dependencies and clone the 4 git repositories (blog, capsule, custom theme and build scripts).
- Manage content from the different sources:
- Bookmarks are fetched from Linkding via its API by a python script that generates a markdown file in the right folder (hugoRootDir/content/bookmarks/) for each shared bookmark.
- Gemlog entries are simply copied from the capsule git directory to the blog one after being transformed (more on that in the next post, but basically: removing the content after the frontmatter area and changing the extension from gmi to md).
- Blog posts are also copied to the capsule directory and will be transformed into gemtext from markdown2.
- After retrieving all content, statistics are generated (read more)
- If the build is successful, send a message with the build number to a selfhosted ntfy instance, on a private topic
- Web (Zoro) and Capsule (Brook) servers subscribe to this ntfy topic
- Deploy:
- The web server retrieves the artifact containing the website files via the sourcehut build ID, uncompresses it, and rsyncs it to the right folder.
- The capsule server will do the same once this new process is in place for it, but not yet.
- Success:
- The web server sends an ntfy message on another topic to alert me that the deployment has been successful
- The capsule server will do the same once this new process is in place for it, but not yet.
And voilà! Too complicated for nothing, but as it is now fully automated, it becomes transparent, and the complexity is almost forgotten (until something breaks :)).
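To make the CI half of the workflow above a bit more concrete, here is roughly what a sourcehut build manifest for this shape of workflow could look like. Every repo URL, script name and the ntfy topic below are placeholders, not my actual setup:

```yaml
image: alpine/latest
packages:
  - hugo
  - python3
  - curl
sources:
  - https://git.sr.ht/~me/blog
  - https://git.sr.ht/~me/capsule
  - https://git.sr.ht/~me/blog-theme
  - https://git.sr.ht/~me/build-scripts
tasks:
  - gather: |
      # fetch bookmarks from Linkding, shuffle gemlog/blog content around
      python3 build-scripts/fetch_bookmarks.py
      python3 build-scripts/sync_content.py
  - stats: |
      python3 build-scripts/generate_stats.py
  - build: |
      cd blog
      hugo --minify
      tar -czf ~/website.tar.gz -C public .
  - notify: |
      # tell the servers a new artifact exists ($JOB_ID is set by builds.sr.ht)
      curl -d "new build $JOB_ID" https://ntfy.example.net/my-private-topic
artifacts:
  - website.tar.gz
```

The servers subscribe to that topic and, when a message arrives, fetch the artifact for the given build ID and rsync it into place.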
Pros / Cons
Pros
- I can make updates from everywhere (I’d sure hope so, that was the initial goal…)
- Automated process: all I need to do is push files via git.
- A good side effect of this is that I had to do some cleanup between the theme repo and the blog repo
- Can be manually started via the sourcehut web UI, or via the API (or the hut cli) if needed.
- I can work on changing blog files locally and still publish new content even if my local version is more or less broken. Before, I couldn’t, because the deploy script would use the local build folder as is.
- Artifacts are perfectly reproducible now, wasn’t 100% the case before.
Cons
- Takes longer than before (±3 min instead of 15 sec), but it is still fast enough for me
- More complex to maintain / change / debug compared to a simple bash script, as it contains multiple build files in both bash and python…
- The Gemini capsule part is more complex because of the lack of a good solution to generate gemtext from markdown
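To illustrate that last point: gemtext has no inline links, so every markdown link has to be pulled out of its sentence and re-emitted as a dedicated `=>` line, which inevitably changes how the prose reads. A naive converter (purely illustrative, and exactly the kind of half-solution I’m not satisfied with) would do something like:

```python
import re

# Matches markdown inline links: [label](url)
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def md_line_to_gemtext(line: str) -> list:
    """Naive conversion of one markdown line: inline links are flattened to
    their label, then re-emitted as gemtext '=>' link lines below the text."""
    links = LINK_RE.findall(line)
    text = LINK_RE.sub(lambda m: m.group(1), line)
    out = [text]
    out.extend(f"=> {url} {label}" for label, url in links)
    return out

def md_to_gemtext(md: str) -> str:
    out = []
    for line in md.splitlines():
        out.extend(md_line_to_gemtext(line))
    return "\n".join(out) + "\n"
```

Headings and lists map almost one-to-one between the two formats, but inline links, footnotes and tables don’t, which is why I’m still undecided on the approach.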
Small introspection before ending this post…
Before going into the solution details in the next post, let me ask myself this: “Was it worth it to spend that much time automating all this, even though you had a perfectly simple and working solution in place? Just for the possibility that, maybe one day, you may publish a post while not at home?”.
Well, self, since you ask such a long question… I believe so…?
I had fun playing with the sourcehut CI and all these scripts… So I’d say yes! But I may have spent more time working on this than the amount of time I spent last year writing blog posts from outside (because it actually didn’t happen), and I still haven’t published a post from outside yet… So on that front you may say no…
Another reason I did it is the IndieWeb and its philosophy. What does that have to do with this, you may ask? Well, it makes me want to make this site even more my “online source of truth” (for content at least). So there is a high chance that I’m going to add a notes content type for more micro-blog style content (most probably linked to my personal fediverse account), at least for “toots” that I’d prefer to be less ephemeral than on the Fediverse only. More on that later, but I’m mentioning it because if I do add them, I will probably publish a lot more (smaller) content, and there is a higher chance that I’ll want to do that from outside my home. Finding a way to do that from my phone might be the biggest issue, but I’ll look into it later.
Conclusion
At the end of the day, it isn’t that complicated: a CI runs on git changes to create artifacts to be deployed, and alerts my servers via ntfy. The local servers then retrieve the artifacts and deploy them. Why does this need 3 long posts then?? I’m not even sure, but that may be my inability to be concise on this blog (which is weird, because I’m usually not bad at it in my day job…).
Anyway, the 2nd post of this series is already well advanced, as opposed to the 3rd one, which is blocked until I find a solution that I like for displaying (or not) my web content on my Gemini capsule.