For most of this semester MiseOS lived on localhost. That was fine for development, but at some point every backend hits the same wall — it needs to run somewhere real. This week was about crossing that wall.
The goal was a pipeline where merging to main automatically tests, builds, and deploys the application without touching the server manually. By the end of the week that was working.
And honestly: I didn’t expect the GitHub Actions spinner to become this emotional.
Yellow while it builds and tests. Green when it works (big smile). Red when it fails (“oh no…”).
It’s such a small UI detail, but it makes the project feel alive.
What I Wanted to Build#
The simplest production setup that still teaches real DevOps concepts:
- GitHub Actions — run tests and build the image on every push
- Docker Hub — store the built image as an artifact
- DigitalOcean droplet — the server that runs everything
- Docker Compose — manage multiple services on the same machine
- Watchtower — watch for new images and restart the container automatically
- Caddy — reverse proxy with automatic HTTPS
No Kubernetes, no cloud-native overengineering. Just enough to understand the full deploy lifecycle from code push to running container.
The CI/CD Pipeline#
The workflow splits into two jobs: test and deploy. The deploy job only runs when the test job passes and the push is to main. Pull requests run tests only — they never deploy.
name: MiseOS CI/CD
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
name: Build and Test
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: '17'
distribution: 'temurin'
cache: 'maven'
- name: Run tests
env:
DEEPL_APIKEY: ${{ secrets.DEEPL_APIKEY }}
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
SECRET_KEY: ${{ secrets.SECRET_KEY }}
DB_NAME: ${{ secrets.DB_NAME }}
ISSUER: ${{ secrets.ISSUER }}
TOKEN_EXPIRE_TIME: ${{ secrets.TOKEN_EXPIRE_TIME }}
run: mvn --batch-mode test
deploy:
name: Build and Push Docker Image
runs-on: ubuntu-latest
needs: test
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up JDK 17
uses: actions/setup-java@v4
with:
java-version: '17'
distribution: 'temurin'
cache: 'maven'
- name: Build with Maven
run: mvn --batch-mode --update-snapshots package -DskipTests
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile
push: true
tags: ${{ secrets.DOCKERHUB_USERNAME }}/mise-os:latest
- name: Trigger Watchtower Webhook
run: |
curl -f -H "Authorization: Bearer ${{ secrets.WATCHTOWER_TOKEN }}" https://deploy.corral.dk/v1/update

The test job runs with real secrets because my integration tests hit the actual auth endpoints. Without them the JWT validation fails and half the tests would not run.
The deploy job skips tests with -DskipTests — they already ran in the previous job.
Deployment flow overview#
This is the actual release path used by MiseOS.
A push to main runs CI, builds and publishes a Docker image, then triggers a webhook on the droplet.
Caddy receives the HTTPS request and reverse-proxies it to the Watchtower container,
which exposes the HTTP API endpoint. Watchtower then pulls the newest image from Docker Hub and restarts the miseOS container.
flowchart TD
subgraph GHA["GitHub Actions"]
Push([Push to main]) --> CI{CI pipeline}
CI --> Tests[Run tests]
Tests -- Fail --> Stop[Stop pipeline]
Tests -- Pass --> Build[Build Docker image]
Build --> Publish[Push image]
Publish --> Webhook[Trigger webhook]
end
Hub[(Docker Hub<br/>mise-os:latest)]
subgraph DO["DigitalOcean Droplet"]
Caddy[Caddy reverse proxy]
WT[Watchtower]
API[MiseOS container]
DB[(Postgres)]
Caddy --> WT
WT -->|Restart miseOS| API
API -->|SQL| DB
end
Publish --> Hub
Webhook --> Caddy
Hub -->|Pull latest image| WT
style CI fill:#f4b8ff,stroke:#333,stroke-width:2px,color:#000
style Tests fill:#fff6b3,stroke:#333,color:#000
style Stop fill:#ffb3b3,stroke:#333,color:#000
style Build fill:#b8ffb8,stroke:#333,color:#000
style Publish fill:#b8ffb8,stroke:#333,color:#000
style Caddy fill:#ffffff,stroke:#333,color:#000
style WT fill:#ffffff,stroke:#333,color:#000
style API fill:#eeeeee,stroke:#333,color:#000
style GHA fill:transparent,stroke:#666,stroke-dasharray: 5 5
style DO fill:transparent,stroke:#666,stroke-dasharray: 5 5
The Dockerfile#
The image is intentionally minimal:
FROM amazoncorretto:17-alpine
RUN apk update && apk add --no-cache curl
COPY target/app.jar /app.jar
EXPOSE 7070
CMD ["java", "-jar", "/app.jar"]Alpine base keeps the image small. curl is installed because the health check needs it. Nothing else — no build tools, no source code, just the compiled jar and the JVM.
All runtime configuration comes from environment variables. The same image runs locally, in CI, and in production — the environment is what changes, not the image.
The Droplet Setup#
On the server, Docker Compose manages all services. MiseOS sits alongside Postgres, Caddy, Watchtower, and the portfolio site.
miseOS:
image: mortenjenne/mise-os:latest
container_name: miseOS
ports:
- "7072:7070"
environment:
- DEPLOYED=${DEPLOYED}
- DB_NAME=${DB_NAME}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
- CONNECTION_STR=${CONNECTION_STR}
- SECRET_KEY=${SECRET_KEY}
- ISSUER=${ISSUER}
- TOKEN_EXPIRE_TIME=${TOKEN_EXPIRE_TIME}
- DEEPL_APIKEY=${DEEPL_APIKEY}
- GEMINI_API_KEY=${GEMINI_API_KEY}
networks:
- backend
- frontend
volumes:
- ./logs:/logs
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:7070/api/v1/auth/health"]
interval: 10s
timeout: 5s
retries: 5
start_period: 15s

The health check endpoint is a simple GET /api/v1/auth/health that returns 200. Compose uses it to know when the container is actually ready — Caddy and Watchtower both depend on it. Without this, services would race to start and Caddy might try to route traffic before the API was listening.
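For reference, the endpoint behind that check is trivial. Here is a minimal sketch, assuming a Javalin-style setup (the post does not name the framework; port 7070 and the path come from the config above, the class name is mine):

```java
import io.javalin.Javalin;

// Minimal health endpoint sketch, assuming Javalin; the real
// MiseOS wiring almost certainly looks different.
public class HealthEndpointSketch {
    public static void main(String[] args) {
        Javalin app = Javalin.create();
        // Compose's curl-based healthcheck only inspects the status code,
        // so an empty 200 response is enough.
        app.get("/api/v1/auth/health", ctx -> ctx.status(200));
        app.start(7070);
    }
}
```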
Caddy as Reverse Proxy#
Caddy handles TLS and domain routing. The configuration is simple — each subdomain proxies to the relevant container by name:
miseos.corral.dk {
reverse_proxy miseOS:7070
}
deploy.corral.dk {
reverse_proxy watchtower:8080
}

The deploy.corral.dk subdomain is how GitHub Actions reaches Watchtower — it sends an authenticated request with a bearer token to trigger the image pull. Caddy terminates TLS so the webhook arrives over HTTPS.
Caddy provisions and renews certificates automatically via Let’s Encrypt. That used to require a lot of configuration — here it requires none.
Watchtower for Automatic Updates#
After the image is pushed to Docker Hub, the workflow fires the webhook:
curl -f \
-H "Authorization: Bearer ${{ secrets.WATCHTOWER_TOKEN }}" \
https://deploy.corral.dk/v1/update

Watchtower pulls the latest image for the miseOS container and restarts it. The update typically completes in under 30 seconds. Old images are cleaned up automatically with WATCHTOWER_CLEANUP=true.
watchtower:
image: containrrr/watchtower:latest
container_name: watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_HTTP_API_UPDATE=true
- WATCHTOWER_HTTP_API_TOKEN=${WATCHTOWER_TOKEN}
- WATCHTOWER_CLEANUP=true
networks:
- frontend
command: miseOS portfolio-site

The positional arguments (miseOS portfolio-site) scope Watchtower to exactly those two containers, so it never restarts Postgres or Caddy.

What Was Difficult#
The hardest part this week was configuration, not code.
In local development I used a config.properties file (gitignored) with values like:
- DB name
- JWT issuer
- token expiration
- external API URLs
That worked locally, but GitHub Actions cannot access gitignored files.
So in CI the app started without its expected config, and auth-related routes failed in ways that initially looked like logic bugs.
I tried a few quick fixes first, but the proper solution was to refactor configuration loading to environment variables (System.getenv()), so the same mechanism works in CI, Docker, and production.
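The mechanism itself is small. A hedged sketch of what env-first loading looks like; the class and method names here are mine, not the actual MiseOS code:

```java
// Sketch of env-based config loading; names are illustrative.
public final class EnvConfig {

    private EnvConfig() {}

    // Required values fail fast at startup instead of at first use
    public static String require(String key) {
        String value = System.getenv(key);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + key);
        }
        return value;
    }
}
```

Call sites read EnvConfig.require("SECRET_KEY"), and the app refuses to boot when a variable is missing — in CI, in Docker, and on the droplet alike.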
I also simplified config ownership:
- Secrets and deployment-specific values come from environment variables (SECRET_KEY, ISSUER, TOKEN_EXPIRE_TIME, API keys, DB credentials)
- Stable integration base URLs were moved into ApiConfig as private static constants

That made startup deterministic and removed the dependency on local files during deployment.
To avoid silent misconfiguration, I added fail-fast validation in security startup:
if (secretKey == null || secretKey.isBlank())
    throw new IllegalStateException("JWT secret key must be configured");
if (secretKey.getBytes().length < 32)
    throw new IllegalStateException("JWT secret key too short. Use at least 32 bytes");

Failing fast at startup is far better than failing mysteriously during a user login.
CI success does not mean production success. Tests passing in GitHub Actions confirms the code is correct. It does not confirm the runtime environment on the server is correctly configured. Those are two separate concerns and they fail in different ways.
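One cheap way to close that gap would be a post-deploy smoke test. The pipeline does not do this yet; this is just a sketch of an extra workflow step using the health endpoint that already exists:

```yaml
- name: Smoke test the live API
  run: |
    # Give Watchtower time to pull and restart (updates finish in ~30s),
    # then hit the live health endpoint; -f fails the step on a non-2xx.
    sleep 45
    curl -f https://miseos.corral.dk/api/v1/auth/health
```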
What Worked Well#
Splitting the workflow into test and deploy jobs made the pipeline easy to reason about. If the test job fails, nothing deploys — there is no ambiguity about what ran and what did not.
It was also easy to debug when something did go wrong. The logs from each job are available in GitHub Actions, and the application logs are persisted to a mounted volume on the server. This way I could quickly identify and fix the missing SECRET_KEY issue after the first deployment.
Health checks on the container meant dependent services would not start until the API was actually ready. Before adding them, Caddy occasionally tried to proxy traffic before the JVM had finished starting.
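In the Compose file that dependency uses the long depends_on form. Roughly (a sketch; the real service definitions may differ):

```yaml
caddy:
  depends_on:
    miseOS:
      # wait for the healthcheck to pass, not just for the container to start
      condition: service_healthy
```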
Logging to a mounted volume means logs persist across container restarts. That turned out to be immediately useful — after the first deployment the application log showed a startup warning about a misconfigured environment variable that would have been invisible without it.
What I’d Explore Next#
One thing I still want to understand better is long-term configuration strategy.
This week I moved critical runtime values to environment variables so CI/CD and deployment would work reliably. That solved the immediate problem.
As this project (and future ones) grows, I want to explore where the right boundary sits between:
- environment variables for deployment-specific values,
- config.properties for application configuration,
- and code-level constants for stable business rules.
For now the system works well, but configuration strategy becomes more important as a codebase grows. This is something I want to refine intentionally rather than just letting it evolve accidentally.
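If I formalize that boundary, the lookup itself is simple. A hypothetical layered resolver, not current MiseOS code:

```java
import java.util.Properties;

// Hypothetical layered config lookup: environment variable wins,
// then config.properties, then a code-level default.
final class LayeredConfig {
    private final Properties fileProps;

    LayeredConfig(Properties fileProps) {
        this.fileProps = fileProps;
    }

    String resolve(String key, String defaultValue) {
        String env = System.getenv(key);                 // deployment-specific override
        if (env != null && !env.isBlank()) return env;
        return fileProps.getProperty(key, defaultValue); // app config, then stable default
    }
}
```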
Next Steps#
With deployment automated, the next focus shifts from infrastructure back to features. The React frontend is the missing piece — the backend is fully live at miseos.corral.dk but there is no UI yet. The priority flows are the line cook request workflow and the head chef approval dashboard.
This is part 9 of my MiseOS development log. Follow along as I build a tool for professional kitchens, one commit at a time.
