How I Created a CI/CD Pipeline for Deploying a Dockerized Slim-PHP App with Caddy

In this post, I’ll walk you through how I set up a CI/CD (Continuous Integration/Continuous Deployment) pipeline to deploy a Slim-PHP application using Docker, GitHub Actions, and Caddy as the web server. The goal was simple: every time I push code to the main branch, the app automatically builds, pushes a Docker image, and gets deployed to my DigitalOcean server with minimal or zero downtime. This is also the setup behind my website, aabillify.com.

🚀 Why I Built This

Manual deployments were time-consuming and error-prone. I wanted a setup that just worked: every update I push goes live smoothly without me logging into the server, pulling changes manually, or restarting containers.

Slim-PHP is my go-to framework for lightweight APIs and microservices. For the web server, I chose Caddy because of its simplicity, built-in HTTPS support, and great default configurations.

🧱 Project Setup

My app has two main parts:

  • Slim-PHP app: the frankenphp image serves the Slim app locally over HTTP.
  • Caddy: the caddy image handles HTTPS and the reverse_proxy in production.

Here’s a simplified folder structure:

#Folder Structure
/my-app
  ├── public/
  ├── src/
  ├── Dockerfile
  ├── docker-compose.yml
  └── Caddyfile

Dockerfile in the app

FROM dunglas/frankenphp:php8.4.3-alpine

# Set working directory
WORKDIR /app/public/www

# Install composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Install dependencies: Composer and PHP extensions
RUN install-php-extensions \
    pdo_mysql \
    gd \
    intl \
    zip \
    opcache \
    curl \
    json \
    mbstring \
    pdo \
    openssl \
    tokenizer \
    fileinfo

COPY . .

# Install PHP dependencies
RUN composer install --no-dev --optimize-autoloader

# Set correct permissions for the web server
RUN chown -R www-data:www-data /app/public/www

# Configure entry point with FrankenPHP
CMD ["frankenphp", "run", "--config", "Caddyfile"]

Caddyfile in the app

{
  debug

  frankenphp {
    watch /app/public/www/
  }
}
# handles http
:80 {
  root * /app/public/www/public
  php_server
  file_server
}

docker-compose.yml

services:
  php:
    container_name: local-slim-app
    build: .
    networks:
      - slim-net
    ports:
      - "${PORT}:80" # HTTP 
    volumes:
      - ./:/app/public/www
      - caddy_data:/data
      - caddy_config:/config
    tty: true

# Volumes needed for Caddy certificates and configuration
volumes:
  caddy_data:
  caddy_config:
# The network slim-net must be created on the server
networks:
  slim-net:
    external: true
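
The ${PORT} placeholder above is resolved by Docker Compose from the shell environment or from a .env file sitting next to docker-compose.yml. A minimal example for local development (the port value here is just an illustration):

# .env
PORT=8085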

🔄 Setting Up the CI/CD Pipeline

I chose GitHub Actions because it integrates well with GitHub and is free for public repos (with free minutes for private ones). The flow:

  1. Install Dependencies
  2. Build Assets via Vite
  3. Build Docker image
  4. Log in to GitHub Container Registry
  5. Push Docker image to GitHub Container Registry
  6. Deploy via SSH
  7. Cleanup old versions of the image in GHCR.

🔧 GitHub Actions Workflow

Here’s a sample .github/workflows/deploy.yml:

name: Deploy Slim-app

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - name: 🛎️ Checkout code
        uses: actions/checkout@v3

      - name: 🔧 Set up Node.js
        uses: actions/setup-node@v3
        with: 
          node-version: 22

      - name: 📦 Install dependencies
        run: npm ci

      - name: 🛠️ Build Vite assets
        run: npm run build

      - name: 🐳 Build Docker image
        run: docker build -t ghcr.io/${{secrets.USER}}/slim-app:latest .

      - name: 🔐 Log in to GitHub Container Registry
        run: echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u ${{secrets.USER}} --password-stdin

      - name: 🚀 Push Docker image to GitHub Container Registry
        run: docker push ghcr.io/${{secrets.USER}}/slim-app:latest

      - name: 🪂 Deploy via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{secrets.SERVER_IP}}
          username: root
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u ${{secrets.USER}} --password-stdin
            docker pull ghcr.io/${{secrets.USER}}/slim-app:latest

            # Run new container on a test port
            bash /docker/deploy.sh

            # delete other images
            docker images --format '{{.Repository}}:{{.Tag}} {{.ID}}' \
              | grep 'ghcr.io/${{secrets.USER}}/slim-app' \
              | grep -v 'latest' \
              | awk '{print $2}' \
              | xargs -r docker rmi

      - name: 🧹 Cleanup old images
        run: |
          sudo apt-get install -y jq

          # List all image versions and delete everything except the 3 newest
          curl -s -H "Authorization: token ${{ secrets.PACKAGE_TOKEN }}" \
            https://api.github.com/users/${{secrets.USER}}/packages/container/slim-app/versions \
            | jq '.[].id' \
            | tail -n +4 \
            | xargs -I {} curl -X DELETE -H "Authorization: token ${{ secrets.PACKAGE_TOKEN }}" \
              https://api.github.com/users/${{secrets.USER}}/packages/container/slim-app/versions/{}

🖥️ Server Setup

On the server, I:

  • Installed Docker
  • Added SSH keys for GitHub Actions
  • Created the Docker network slim-net (docker network create slim-net)
  • Created a shell script, deploy.sh, that spins up a copy of the slim-app container and updates the Caddyfile reverse_proxy to point at the new container name.
#deploy.sh sample
#!/bin/bash

PORT=8085
OTHER_PORT=8086
TIMEOUT=60  # seconds
SLEEP=5     # seconds between checks
ELAPSED=0

## Check all running containers and their exposed ports
if docker ps --format '{{.ID}}: {{.Ports}}' | grep -q ":$PORT->"; then
  PORT=8086
  USED_PORT=8085
else
  PORT=8085
  USED_PORT=8086
fi

## Do the work

#docker ps --format 'Table: {{.Names}}t{{.Ports}}' | grep ":$PORT->"

echo "Use $PORT to create new container."
CURRENT_CONTAINER_NAME=$(docker ps --format '{{.Names}}\t{{.Ports}}' | grep "0.0.0.0:$USED_PORT->" | awk '{print $1}')
#new container to create
CONTAINER_NAME=$( [ "$CURRENT_CONTAINER_NAME" = "slim-app-new" ] && echo "slim-app" || echo "slim-app-new" )

echo "Current Container is: $CURRENT_CONTAINER_NAME"
echo "Create Container: $CONTAINER_NAME"
#write new docker-compose file named docker-compose-phpXXXX.yml
CONTAINER_NAME="$CONTAINER_NAME" SERVICE_NAME="php$PORT" PORT="$PORT" envsubst < /docker/slim-app/docker-compose.yml.template > "/docker/slim-app/docker-compose-php$PORT.yml"
#run new container
docker-compose -f "/docker/slim-app/docker-compose-php$PORT.yml" up -d

echo "Waiting for container '$CONTAINER_NAME' to become healthy..."

while [ $ELAPSED -lt $TIMEOUT ]; do
  STATUS=$(docker inspect --format='{{.State.Health.Status}}' "$CONTAINER_NAME" 2>/dev/null)

  if [ "$STATUS" == "healthy" ]; then
    echo "Container '$CONTAINER_NAME' is healthy!"
    if curl -s "http://localhost:$PORT/health" | grep -q "OK"; then
      # point the Caddyfile reverse_proxy at the new container
      CONTAINER_NAME="$CONTAINER_NAME" envsubst < /docker/caddy/Caddyfile.template > /docker/caddy/Caddyfile
      # remove the old container
      echo "Removing old container: $CURRENT_CONTAINER_NAME"
      docker stop "$CURRENT_CONTAINER_NAME" && docker rm "$CURRENT_CONTAINER_NAME"
    else
      echo "Health check failed. Keeping the old container and removing the new one."
      docker stop "$CONTAINER_NAME" && docker rm "$CONTAINER_NAME"
      exit 1
    fi
    exit 0
  elif [ "$STATUS" == "unhealthy" ]; then
    echo "Container '$CONTAINER_NAME' is unhealthy! Remove new container."
    docker stop "$CONTAINER_NAME" && docker rm "$CONTAINER_NAME"
    exit 1
  else
    echo "Current status: $STATUS (sleeping $SLEEP sec...)"
    sleep $SLEEP
    ELAPSED=$((ELAPSED + SLEEP))
  fi
done

echo "Timeout reached waiting for '$CONTAINER_NAME' to become healthy. Remove new container."
docker stop "$CONTAINER_NAME" && docker rm "$CONTAINER_NAME"
exit 1
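
One thing deploy.sh assumes is that the Slim app actually exposes a /health route returning "OK", since the script curls http://localhost:$PORT/health and greps the response. That route isn't shown in this post; here is a minimal sketch of what it could look like (Slim 4 assumed, registered wherever the app's other routes live):

<?php
// Hypothetical excerpt from the Slim app's route definitions; $app is the Slim\App instance.
use Psr\Http\Message\ResponseInterface as Response;
use Psr\Http\Message\ServerRequestInterface as Request;

// deploy.sh greps the body for "OK", so return exactly that.
$app->get('/health', function (Request $request, Response $response): Response {
    $response->getBody()->write('OK');
    return $response->withStatus(200);
});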

  • Created a /docker directory containing a pre-configured caddy docker-compose.yml and a slim-app directory with a docker-compose.yml.template.

Sample folder structure on the server

#Folder Structure
/docker
  ├── caddy/
  │     ├── docker-compose.yml
  │     ├── Caddyfile.template
  │     └── Caddyfile
  ├── deploy.sh
  └── slim-app/
        ├── docker-compose-phpXXXX.yml
        └── docker-compose.yml.template

Sample slim-app/docker-compose.yml.template on the server

# A docker-compose-phpXXXX.yml file is generated from this template by deploy.sh

services:
  ${SERVICE_NAME}:
    image: ghcr.io/aabill/slim-app:latest
    container_name: ${CONTAINER_NAME}
    networks:
      - slim-net
    ports:
      - "${PORT}:80" # HTTP
    volumes:
      - ./.env:/app/public/www/.env
    environment:
      - SITE_ADDRESS=http://localhost:80
      - SUB_SITE_ADDRESS=http://subdomain.localhost:80
      - APP_URL=http://localhost:80

networks:
  slim-net:
    external: true
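
deploy.sh also reads {{.State.Health.Status}} with docker inspect, which only exists if the container defines a healthcheck. The template above doesn't include one, so here is a sketch of what could be added under the ${SERVICE_NAME} service, assuming the Alpine-based image ships busybox wget and the app exposes the /health route mentioned earlier:

    healthcheck:
      # wget is assumed to be available in the Alpine-based image;
      # /health is the same endpoint deploy.sh curls once the container reports healthy
      test: ["CMD-SHELL", "wget -qO- http://localhost:80/health | grep -q OK || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 15s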

Sample caddy/docker-compose.yml in server

## Run the caddy container once:
## `cd /docker/caddy && docker-compose up -d`

services:
  caddy:
    image: caddy:alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile.template:/etc/caddy/Caddyfile.template
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - slim-net
    command: ["caddy", "run", "--watch", "--config", "/etc/caddy/Caddyfile"]

volumes:
  caddy_data:
  caddy_config:

networks:
  slim-net:
    external: true

Sample Caddyfile.template on the server

# The Caddyfile is generated from this template by deploy.sh

yourdomain.com {
  redir https://www.yourdomain.com{uri} 301
}

www.yourdomain.com {
    reverse_proxy ${CONTAINER_NAME}:80
}
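
For reference, after deploy.sh runs envsubst, the generated Caddyfile is just this template with the placeholder filled in, e.g. when the new container is slim-app-new:

yourdomain.com {
  redir https://www.yourdomain.com{uri} 301
}

www.yourdomain.com {
    reverse_proxy slim-app-new:80
}

Because the caddy container runs with --watch, it picks up this change and re-routes traffic without a restart.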

⚠️ Challenges I Faced

  • Caddy Reverse Proxy: At first, Caddy couldn’t connect to slim-app:80 because I hadn’t defined a Docker network or used the correct service name.
  • Creating a new container with a health check: A new container takes a while to become healthy, so the health check always failed at first. I fixed it with a loop/sleep in deploy.sh that waits for the container to become healthy before running a simple health check.
  • Docker images on the server: Every push/deploy pulled slim-app:latest from GHCR, so unused images kept piling up on the server. I solved it by adding the "# delete other images" step to the Deploy via SSH step of the workflow.
  • GHCR packages for slim-app: Every push created a new image version in the slim-app package. At one point there were 19 versions, so I added the 🧹 Cleanup old images step to the workflow to remove all but the last 3 versions.
  • FrankenPHP & Caddy: At first, I planned to use only the built slim-app:latest image, but restarting that single container broke zero-downtime deploys. So I created a separate Caddy container to handle the reverse_proxy and ensure zero downtime.

📚 Lessons Learned

  • Caddy is an underrated gem for PHP developers. It removes a lot of the friction that Nginx usually brings with reverse proxies and SSL.
  • With --watch, Caddy picks up Caddyfile changes and automatically applies the new reverse_proxy target.
  • A freshly started container takes time to become healthy, which can cause immediate health checks to fail.

✅ Conclusion

Now, every time I push to main, my Slim app is rebuilt and deployed automatically to production—no manual steps, no downtime. This simple CI/CD pipeline saved me hours of deployment work and made the app more robust.

If you’re building a Dockerized PHP app (Slim or otherwise), I highly recommend automating your deployment this way.
