Automatic Deployment with Forgejo Actions and Portainer CE
Self-hosting repositories and containers, and automating their deployment, is easier than ever. There are enough moving parts involved that I decided to write up a how-to guide to make it easier for others to follow.
Moving Parts #
This guide covers using Portainer Community Edition as a Docker container manager (including stacks / Docker Compose scripting), Forgejo as a Git repository, and Forgejo Actions for automation.
- Forgejo: self-hosted Git forge that also runs the Actions workflow engine
- Forgejo Runner: a separate process that actually executes the workflow jobs including building docker images
- Portainer CE: Docker management UI that exposes a webhook to trigger redeployment
The end result is that when you git push an update to the repository, it triggers Portainer to rebuild and deploy your Docker container automatically. This website is now hosted using exactly this process.
Portainer’s “Re-pull image” and “Force redeployment” features would make parts of this easier, but they are Business-tier only (can’t fault a business for wanting to make money I suppose). The webhook trigger itself is free in CE, and that is enough.
Forgejo #
The Forgejo stack is straightforward. The one non-obvious requirement is that Actions must be explicitly enabled via an environment variable since it is off by default.
```yaml
version: "3"
services:
  server:
    image: codeberg.org/forgejo/forgejo:10
    container_name: forgejo
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - FORGEJO__database__DB_TYPE=sqlite3
      - FORGEJO__actions__ENABLED=true
    restart: unless-stopped
    volumes:
      - data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"

volumes:
  data:
```
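If you manage Forgejo's app.ini directly rather than through environment variables, the `FORGEJO__actions__ENABLED` variable should map onto the equivalent config section:

```ini
; Enable the Actions workflow engine (off by default)
[actions]
ENABLED = true
```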
After deploying, Actions also needs to be enabled per-repository under Settings → General → Features → Actions.
Forgejo Runner #
To build Docker images, the runner needs access to a Docker daemon. Rather than mounting the host Docker socket, which would give workflow jobs control over every container on the host, we can use Docker-in-Docker (DIND): a separate container running its own isolated Docker daemon that the runner talks to over TCP.
The DIND daemon is bound to 127.0.0.1 rather than 0.0.0.0, so it is only reachable from the host itself and not exposed to the rest of the network. All three containers use network_mode: host so they share the host's network stack and can reach each other via localhost. This also means the runner inherits the host's DNS configuration for builds, which matters in my case because I use Pi-hole and several other custom DNS configurations on my network.
```yaml
version: "3"
services:
  docker-in-docker:
    image: docker:dind
    container_name: forgejo-dind
    privileged: true
    restart: unless-stopped
    command: ["dockerd", "-H", "tcp://127.0.0.1:2376", "--tls=false"]
    network_mode: host

  runner-register:
    image: data.forgejo.org/forgejo/runner:9
    container_name: forgejo-runner-register
    restart: "no"
    depends_on:
      - docker-in-docker
    environment:
      - DOCKER_HOST=tcp://localhost:2376
    network_mode: host
    command: >
      /bin/sh -c "
      if [ ! -f /data/.runner ]; then
      forgejo-runner register
      --no-interactive
      --instance https://YOUR_FORGEJO_DOMAIN
      --token YOUR_REGISTRATION_TOKEN
      --name forgejo-runner
      --labels self-hosted;
      fi"
    volumes:
      - runner-data:/data

  runner:
    image: data.forgejo.org/forgejo/runner:9
    container_name: forgejo-runner
    restart: unless-stopped
    command: /bin/sh -c "sleep 5 && forgejo-runner daemon"
    depends_on:
      docker-in-docker:
        condition: service_started
      runner-register:
        condition: service_completed_successfully
    environment:
      - DOCKER_HOST=tcp://localhost:2376
    network_mode: host
    volumes:
      - runner-data:/data

volumes:
  runner-data:
```
A few things that caused friction along the way:
- The instance URL. Use your public HTTPS Forgejo URL rather than a local IP. Docker's login action assumes HTTPS and will fail if given an HTTP registry, and stripping the protocol from a URL in a workflow is messier than just using HTTPS from the start.
- The registration token. YOUR_REGISTRATION_TOKEN comes from your Forgejo instance, under Site Administration → Actions → Runners (or the equivalent Runners page in user or repository settings).
- The sleep. The runner daemon starts faster than the DIND daemon initializes. A five-second sleep in the runner command is an inelegant but reliable fix.
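If the fixed sleep bothers you, a healthcheck on the DIND service should be a more precise fix. A sketch (untested on my setup) that probes the daemon over the same TCP socket and gates the runner on it, replacing the sleep:

```yaml
  docker-in-docker:
    # ...same as above, plus:
    healthcheck:
      test: ["CMD", "docker", "-H", "tcp://127.0.0.1:2376", "info"]
      interval: 5s
      timeout: 3s
      retries: 10

  runner:
    # ...same as above, but wait for a healthy daemon instead of sleeping:
    command: forgejo-runner daemon
    depends_on:
      docker-in-docker:
        condition: service_healthy
      runner-register:
        condition: service_completed_successfully
```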
Registry credentials #
Forgejo includes a container registry out of the box; we only need to pass the runner credentials for it.
Forgejo supports user-level secrets that are automatically available to all your repositories, which suits my needs: the runner gets access to every repository under my user account. The token gets read permission on all repos (even private ones) and package write permission for pushing images. If you wanted to be more careful, you could use separate runner credentials for private and public repositories.
Create a personal access token #
Go to User Settings → Applications → Generate Token. Give it a name like registry-push and enable the following permissions:
- package: read/write (for pushing and pulling images)
- repository: read (for checking out code during the build)
Add a user-level secret #
Go to User Settings → Secrets and create one secret:
- REGISTRY_PASSWORD: the personal access token you just generated
Project docker-compose.yml #
Instead of building from a Dockerfile locally, Portainer now pulls a pre-built image from the Forgejo registry. If your compose file has a build section, replace it with an image reference:
```yaml
name: trmnl-imagegen
services:
  trmnl:
    image: YOUR_FORGEJO_DOMAIN/YOUR_USERNAME/YOUR_IMAGE:latest
    restart: unless-stopped
```
Portainer registry authentication #
Since the repository is private, Portainer needs credentials to pull the image. Go to Registries → Add Registry → Custom Registry and fill in:
- Registry URL: your Forgejo domain
- Username: your Forgejo username
- Password: the same personal access token created above

Once added, Portainer will use these credentials automatically when pulling images from that registry.
On the stack’s GitOps updates section, enable the webhook mechanism and copy the generated URL. Add it as a secret at the repository level in Forgejo under Settings → Secrets and Variables → Actions, named PORTAINER_WEBHOOK_URL. This is the one secret that must be set per-repository since it is specific to each Portainer stack.
Workflow #
With everything in place, the workflow builds the image, pushes it to the registry, and triggers Portainer to redeploy, all from a single file at .forgejo/workflows/deploy.yml:
```yaml
on:
  push:

jobs:
  build-and-deploy:
    if: github.ref == format('refs/heads/{0}', github.event.repository.default_branch)
    runs-on: self-hosted
    env:
      DOCKER_HOST: tcp://172.17.0.1:2376
    container:
      image: catthehacker/ubuntu:act-latest
      options: --network host
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Forgejo registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.GITHUB_SERVER_URL }}
          username: ${{ github.repository_owner }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Set registry host
        run: echo "REGISTRY_HOST=${GITHUB_SERVER_URL#https://}" >> $GITHUB_ENV

      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY_HOST }}/${{ github.repository_owner }}/${{ github.event.repository.name }}:latest

      - name: Trigger Portainer redeploy
        run: curl -k -X POST "${{ secrets.PORTAINER_WEBHOOK_URL }}"
```
The registry URL, username, and image name are all derived from built-in workflow variables, making this file completely portable. If you drop it into any repository it should work without modification. The only secret that needs to be set per-repository is PORTAINER_WEBHOOK_URL, since that is specific to each Portainer stack. REGISTRY_PASSWORD is set once at the user level and inherited everywhere.
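The "Set registry host" step leans on POSIX parameter expansion to strip the scheme, since the image tag wants a bare registry host. A standalone illustration, where git.example.com and the owner name are placeholders standing in for the real workflow values:

```shell
# Placeholder values standing in for the built-in workflow variables
GITHUB_SERVER_URL="https://git.example.com"
REPO_OWNER="alice"
REPO_NAME="trmnl-imagegen"

# ${var#pattern} removes the shortest matching prefix, leaving the bare host
REGISTRY_HOST="${GITHUB_SERVER_URL#https://}"

# The image tag is assembled the same way as in the workflow
TAG="${REGISTRY_HOST}/${REPO_OWNER}/${REPO_NAME}:latest"
echo "$TAG"   # git.example.com/alice/trmnl-imagegen:latest
```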
The job runs inside catthehacker/ubuntu:act-latest, a purpose-built image that replicates the GitHub-hosted runner environment with Node.js, Docker CLI, git, and common build tools pre-installed. It is the standard image used by the act project for running GitHub Actions locally, and is the most practical choice for self-hosted Forgejo runners. The DOCKER_HOST variable points the Docker CLI at the DIND daemon running on the host, and --network host ensures the job container can reach it.
The -k flag on the curl command disables TLS verification, which is necessary when Portainer is running with a self-signed certificate on a local network. If your Portainer instance has a valid TLS certificate, the flag can be removed.
Why bother #
As the number of projects I write and host locally grows, along with the updates to them, the time spent deploying manually adds up and becomes annoying. Manually clicking "Pull and redeploy" in Portainer is not a huge burden, and setting this up might never repay itself in time saved, but I learned quite a bit from doing it, and it feels good to watch the toolchain run automatically when I push changes.
Hope this helps some of you!