Caddy Reverse Proxy with Coraza WAF and Management Dashboard

A single reverse proxy handling 20+ services needs more than just TLS termination. This guide walks through the architecture and implementation of a Docker Compose stack that layers five distinct security tiers — Cloudflare edge filtering, IP reputation blocklists, OWASP CRS WAF, 2FA forward authentication, and per-service rate limiting — behind one Caddy instance, with a purpose-built Go sidecar (wafctl) and React dashboard for real-time WAF management.

The stack is designed around a few non-negotiable constraints: zero external Go dependencies in the sidecar, read-only root filesystems on all containers, atomic file operations for config changes, and a single source of truth for WAF mode that lives in the dashboard rather than scattered across config files. wafctl doubles as both an HTTP API server and a CLI tool for scriptable WAF management.


(Architecture diagram, rendered with d2.)

Traffic flows through five security layers in order:

  1. Cloudflare edge — DDoS mitigation and bot filtering before traffic reaches the origin
  2. IPsum blocklist — known-bad IPs get a 403 before any WAF processing
  3. Coraza WAF — OWASP Core Rule Set with custom pre/post rules and Policy Engine exclusions
  4. Authelia forward auth — TOTP/WebAuthn 2FA for protected services
  5. Rate limiting — condition-based rate limiting with per-service rules and dynamic thresholds

The wafctl sidecar sits alongside Caddy, continuously tailing logs and serving a management API. The React dashboard is compiled into static files and served directly by Caddy.


The stack runs three containers with aggressive security hardening:

services:
  caddy:
    image: your-registry/caddy:1.x.x-2.x.x
    network_mode: host
    read_only: true
    cap_drop: [ALL]
    cap_add: [NET_BIND_SERVICE, DAC_OVERRIDE]
    security_opt: [no-new-privileges]
    deploy:
      resources:
        limits:
          cpus: "8"
          memory: 2048M
    volumes:
      - caddy_data:/data
      - caddy_config:/config
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - coraza_config:/data/coraza
      - ratelimit_config:/data/rl
      - caddy_logs:/var/log/caddy

  authelia:
    image: authelia/authelia:4.39
    read_only: true
    cap_drop: [ALL]
    user: "1000:1000"
    security_opt: [no-new-privileges]
    deploy:
      resources:
        limits:
          cpus: "1"
          memory: 256M
    networks:
      authelia:
        ipv4_address: 172.19.99.2
    volumes:
      - ./authelia:/config:ro
      - /path/to/secrets:/secrets:ro
    environment:
      # Secrets loaded from files — not visible in docker inspect
      AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET_FILE: "/secrets/jwt_secret"
      AUTHELIA_SESSION_SECRET_FILE: "/secrets/session_secret"
      AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: "/secrets/storage_encryption_key"
      AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE: "/secrets/smtp_password"

  wafctl:
    image: your-registry/wafctl:0.x.x
    read_only: true
    cap_drop: [ALL]
    security_opt: [no-new-privileges]
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 128M
    networks:
      waf:
        ipv4_address: 172.19.98.2
    extra_hosts:
      - "caddy:host-gateway"
    volumes:
      - coraza_config:/data/coraza
      - ratelimit_config:/data/rl
      - caddy_logs:/var/log:ro
      - wafctl_data:/data
      - ./Caddyfile:/data/Caddyfile:ro
      - geoip_data:/data/geoip:ro
    environment:
      - WAF_CADDY_ADMIN_URL=http://caddy:2020

networks:
  authelia:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.99.0/24
  waf:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.98.0/24

Each support service gets its own bridge network with a static IP. Caddy runs on the host network and reaches them by address. This prevents cross-container communication except through Caddy and makes firewall rules predictable.

Key design choices in the compose file:

  • read_only: true on all containers — writable space is provided only through explicit volume mounts and tmpfs
  • cap_drop: [ALL] — only Caddy gets NET_BIND_SERVICE (port 443) and DAC_OVERRIDE back
  • user: "1000:1000" — Authelia runs as a non-root user with matching host UID/GID
  • extra_hosts: caddy:host-gateway — lets wafctl reach Caddy’s admin API through Docker’s host gateway since Caddy is on the host network
  • Admin API proxy on :2020 — Caddy proxies its admin API on a second port restricted to the wafctl network subnet, since the sidecar can’t reach localhost:2019 across the host network boundary
  • Authelia secrets via _FILE env vars — secrets are mounted as individual files with restrictive permissions, never passed as plain environment variables (which would be visible in docker inspect)

The Caddy image uses a 6-stage build. Four stages produce artifacts for the final image; the fifth (wafctl) validates that the Go sidecar compiles, but its binary is not baked in — wafctl runs as a separate container with its own Dockerfile.

ARG VERSION=2.11.1

# Stage 1: Build Caddy with plugins
FROM caddy:${VERSION}-builder AS builder
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare \
    --with github.com/mholt/caddy-dynamicdns \
    --with github.com/mholt/caddy-ratelimit \
    --with github.com/corazawaf/coraza-caddy/v2=github.com/erfianugrah/coraza-caddy/v2@60543c28852ea99bf58579e7b2b5f3189ea318b1

# Stage 2: Fetch IPsum blocklist — inline awk filters by min score,
# then generates a Caddy client_ip matcher + respond block with X-Blocked-By header
FROM alpine:3.21 AS ipsum
ARG IPSUM_MIN_SCORE=3
RUN apk add --no-cache curl \
    && curl -fsSL --retry 3 --max-time 60 \
         https://raw.githubusercontent.com/stamparm/ipsum/master/ipsum.txt \
       | awk -v min="${IPSUM_MIN_SCORE}" '/^#/{next} /^[[:space:]]*$/{next} {if($2+0>=min) printf "%s ",$1}' \
       > /tmp/ipsum_ips \
    && COUNT=$(wc -w < /tmp/ipsum_ips) \
    && { printf '# AUTO-GENERATED at build time\n# Updated: %s\n# IPs: %s (min_score=%s)\n@ipsum_blocked client_ip %s\n' \
           "$(date -Iseconds)" "$COUNT" "$IPSUM_MIN_SCORE" "$(cat /tmp/ipsum_ips)"; \
         printf 'route @ipsum_blocked {\n\theader X-Blocked-By ipsum\n\trespond 403 {\n\t\tbody "Blocked"\n\t\tclose\n\t}\n}\n'; \
       } > /tmp/ipsum_block.caddy

# Stage 3: Fetch Cloudflare IP ranges for trusted_proxies
FROM alpine:3.21 AS cloudflare-ips
RUN apk add --no-cache curl \
    && curl -fsSL --retry 3 https://www.cloudflare.com/ips-v4 > /tmp/cf_ipv4 \
    && curl -fsSL --retry 3 https://www.cloudflare.com/ips-v6 > /tmp/cf_ipv6 \
    && { printf 'trusted_proxies static'; \
         while IFS= read -r cidr; do [ -n "$cidr" ] && printf ' %s' "$cidr"; done < /tmp/cf_ipv4; \
         while IFS= read -r cidr; do [ -n "$cidr" ] && printf ' %s' "$cidr"; done < /tmp/cf_ipv6; \
         echo; } > /tmp/cf_trusted_proxies.caddy

# Stage 4: Build the WAF dashboard
FROM node:22-alpine AS waf-dashboard
WORKDIR /build
COPY waf-dashboard/package.json waf-dashboard/package-lock.json ./
RUN npm ci
COPY waf-dashboard/ ./
RUN npm run build

# Stage 5: Build the Go sidecar (validates compilation; binary not used here)
FROM golang:1.24-alpine AS wafctl
ARG WAFCTL_VERSION=dev
WORKDIR /build
COPY wafctl/go.mod ./
COPY wafctl/*.go ./
RUN CGO_ENABLED=0 go build -ldflags="-s -w -X main.version=${WAFCTL_VERSION}" -o wafctl .

# Stage 6: Final image — assemble Caddy + baked-in data + dashboard
FROM caddy:${VERSION}-alpine
RUN apk add --no-cache curl
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY --from=ipsum /tmp/ipsum_block.caddy /etc/caddy/ipsum_block.caddy
COPY --from=cloudflare-ips /tmp/cf_trusted_proxies.caddy /etc/caddy/cf_trusted_proxies.caddy
COPY --from=waf-dashboard /build/dist/ /etc/caddy/waf-ui/
COPY errors/ /etc/caddy/errors/
COPY coraza/ /etc/caddy/coraza/
COPY scripts/rotate-audit-log.sh /usr/local/bin/rotate-audit-log.sh
COPY scripts/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/rotate-audit-log.sh /usr/local/bin/entrypoint.sh \
    && echo '0 * * * * /usr/local/bin/rotate-audit-log.sh >> /var/log/audit-rotate.log 2>&1' \
       >> /var/spool/cron/crontabs/root \
    && chmod 0600 /var/spool/cron/crontabs/root
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

The --with original=replacement@ref syntax for coraza-caddy tells xcaddy to use a fork that fixes WebSocket connection handling (upstream PR). The fork also bumps the OWASP CRS from v4.21.0 to v4.23.0, picking up a critical security fix and Vite.js path traversal detection. This replacement can be removed once the upstream merges the fix.

Note that the wafctl stage (5) compiles the sidecar binary but the final stage does not COPY --from=wafctl — the sidecar runs as a separate container, defined in compose.yaml with its own wafctl/Dockerfile. This keeps the Caddy image lean and allows independent sidecar updates without rebuilding the entire image.

The final stage installs one cron job: the Coraza audit log rotation (hourly, copytruncate when >256MB). The IPsum blocklist refresh is handled by wafctl’s Go scheduler (see below), not by cron.

The build-time data fetching (stages 2-3) means the image is never empty on first boot. The entrypoint script bridges the gap between build-time and runtime: it seeds the IPsum blocklist from the baked-in snapshot if the runtime copy is missing or lacks an # Updated: header, starts crond for audit log rotation, then execs Caddy as PID 1.


The Caddyfile is organized around composable snippets that site blocks mix and match:

{
    admin localhost:2019
    metrics
    acme_dns cloudflare {$CF_API_TOKEN}
    order coraza_waf first
    # Encrypted Client Hello — hides real SNI from eavesdroppers
    dns cloudflare {$CF_API_TOKEN}
    ech ech.example.com
    servers {
        strict_sni_host on
        protocols h1 h2 h3
        trusted_proxies_strict
        import /etc/caddy/cf_trusted_proxies.caddy
    }
}

# --- Reusable Snippets ---
(cors) {
    @cors_preflight method OPTIONS
    handle @cors_preflight {
        header Access-Control-Allow-Origin "https://{http.request.host}"
        header Access-Control-Allow-Methods "GET, POST, PUT, PATCH, DELETE"
        header Access-Control-Allow-Headers "Content-Type"
        header Access-Control-Max-Age "3600"
        respond "" 204
    }
    header Access-Control-Allow-Origin "https://{http.request.host}"
    header Vary Origin
}

(static_cache) {
    @fonts path *.woff *.woff2 *.ttf *.eot *.otf
    header @fonts ?Cache-Control "public, max-age=31536000, immutable"
    @images path *.png *.jpg *.jpeg *.gif *.webp *.avif *.svg *.ico
    header @images ?Cache-Control "public, max-age=604800, stale-while-revalidate=86400"
    @styles path *.css
    header @styles ?Cache-Control "public, max-age=604800, stale-while-revalidate=86400"
    @scripts path *.js
    header @scripts ?Cache-Control "public, max-age=604800, stale-while-revalidate=86400"
}

(tls_config) {
    tls {
        issuer acme {
            dns cloudflare {$CF_API_TOKEN}
            propagation_delay 60s
            propagation_timeout -1
            resolvers 1.1.1.1
        }
    }
}

(security_headers) {
    header {
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
        Permissions-Policy "camera=(), microphone=(), geolocation=(), payment=()"
        Cross-Origin-Opener-Policy "same-origin"
        Cross-Origin-Resource-Policy "same-origin"
        X-Permitted-Cross-Domain-Policies "none"
        ?Content-Security-Policy "default-src 'self'; ..."
        -Server
        -X-Powered-By
    }
}

(site_log) {
    log {
        level INFO
        output file /var/log/{args[0]}-access.log { ... }
        format json { time_format wall }
    }
    log combined {
        level INFO
        output file /var/log/combined-access.log { ... }
        format json { time_format wall }
    }
}

# WebSocket upgrades are handled gracefully: the initial HTTP request is
# inspected (phases 1-2), then response processing is skipped once the
# connection is hijacked (erfianugrah/coraza-caddy fork).
# CRS is loaded from Coraza's embedded copy via @-prefixed paths.
(waf) {
    coraza_waf {
        load_owasp_crs
        directives `
            Include /etc/caddy/coraza/pre-crs.conf
            Include /data/coraza/custom-pre-crs.conf
            Include @crs-setup.conf.example
            Include /data/coraza/custom-waf-settings.conf
            Include @owasp_crs/*.conf
            Include /etc/caddy/coraza/post-crs.conf
            Include /data/coraza/custom-post-crs.conf
            SecAuditEngine RelevantOnly
            SecAuditLog /var/log/coraza-audit.log
            SecAuditLogFormat json
            SecAuditLogParts ABCFHKZ
        `
    }
    handle_errors 400 403 429 {
        root * /etc/caddy/errors
        templates
        rewrite * /error.html
        file_server
    }
}

(waf_off) {
    # No WAF processing — for metrics-only or respond-only services.
}

# IPsum blocklist — the snippet file is generated at build time and
# updated daily by wafctl's Go scheduler. It defines @ipsum_blocked and a route block.
(ipsum_blocklist) {
    import /data/coraza/ipsum_block.caddy
}

(forward_auth) {
    forward_auth 172.19.99.2:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
}

(proxy_headers) {
    trusted_proxies private_ranges
    header_up X-Forwarded-For {client_ip}
    header_down -Access-Control-Allow-Origin
    header_down -Access-Control-Allow-Methods
}

(error_pages) {
    handle_errors {
        root * /etc/caddy/errors
        templates
        rewrite * /error.html
        file_server
    }
}
# --- Site Blocks ---
app.example.com {
    import tls_config
    import security_headers
    import static_cache
    import site_log app
    import ipsum_blocklist
    import waf
    import forward_auth
    # Rate limit zone files are imported per-service via glob
    # (ratelimit_config is mounted at /data/rl in both containers)
    import /data/rl/app_rl*.caddy
    reverse_proxy backend-host:port {
        import proxy_headers
    }
    import error_pages
}

Each site block imports only the snippets it needs — cors, static_cache, tls_config, security_headers, site_log, waf, ipsum_blocklist, forward_auth, proxy_headers, error_pages. A waf_off variant exists for services that don’t need WAF processing (metrics endpoints, respond-only blocks).

The trusted_proxies_strict directive in the servers block is a critical security measure. By default, Caddy parses X-Forwarded-For left-to-right, taking the first untrusted IP as the client address. An attacker can exploit this by sending a spoofed X-Forwarded-For: 127.0.0.1 header — Cloudflare appends the real IP to the right, and Caddy picks up the spoofed leftmost value. With trusted_proxies_strict, parsing is right-to-left: the first untrusted IP from the right is the one appended by the last trusted proxy (Cloudflare), which is the real client IP. For direct connections (not through a trusted proxy), X-Forwarded-For is ignored entirely and the raw TCP peer address is used.
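The right-to-left rule is easy to demonstrate in a few lines of Go. This is an illustrative sketch of the semantics, not Caddy's actual implementation:

```go
package main

import "fmt"

// clientIPStrict sketches trusted_proxies_strict semantics: walk the
// X-Forwarded-For chain from the right, skip trusted proxy hops, and
// return the first untrusted address — the one appended by the last
// trusted proxy. Illustrative only; not Caddy's code.
func clientIPStrict(xff []string, trusted func(string) bool) string {
    for i := len(xff) - 1; i >= 0; i-- {
        if !trusted(xff[i]) {
            return xff[i]
        }
    }
    return "" // whole chain is trusted; caller falls back to the TCP peer address
}

func main() {
    // Attacker spoofs 127.0.0.1; Cloudflare appends the real client IP on the right.
    xff := []string{"127.0.0.1", "198.51.100.7"}
    noTrusted := func(string) bool { return false }
    fmt.Println(clientIPStrict(xff, noTrusted)) // picks the rightmost: 198.51.100.7
}
```

Left-to-right parsing would have returned the spoofed 127.0.0.1 instead.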

The critical architectural decision is the WAF config loading order. Seven files are loaded in a specific sequence that mixes baked-in rules with dynamically generated ones:

  1. pre-crs.conf — body size limits, JSON body processor, XXE detection (baked in at /etc/caddy/coraza/)
  2. custom-pre-crs.conf — Policy Engine runtime exclusions (generated by wafctl at /data/coraza/)
  3. @crs-setup.conf.example — CRS default configuration (embedded in Coraza, @-prefixed)
  4. custom-waf-settings.conf — SecRuleEngine, paranoia level, anomaly thresholds (generated at /data/coraza/)
  5. @owasp_crs/*.conf — OWASP CRS detection rules (embedded in Coraza via load_owasp_crs)
  6. post-crs.conf — RCE, CRLF detection, honeypot paths (baked in at /etc/caddy/coraza/)
  7. custom-post-crs.conf — Policy Engine post-CRS exclusions (generated at /data/coraza/)

wafctl is a stdlib-only Go HTTP service and CLI tool — zero external dependencies, not even a router library. It handles log parsing, SecRule generation, configuration management, and the dashboard REST API in roughly 10,700 lines of Go (plus 12,300 lines of tests across 16 test files).

| File | Responsibility |
|---|---|
| main.go | HTTP server, route handlers, CORS middleware |
| models.go | All data models (events, config, exclusions, rate limits) |
| logparser.go | Incremental Coraza audit log parser with offset tracking |
| generator.go | SecRule generation from Policy Engine exclusions |
| deploy.go | Deploy pipeline, SHA-256 fingerprinting, Caddy admin API reload |
| config.go | WAF settings store (mode, paranoia, thresholds, CRS v4 extended settings) |
| exclusions.go | Policy Engine exclusion CRUD, validation, atomic file writes |
| blocklist.go | IPsum file parser, cached stats, on-demand refresh, scheduled refresh |
| geoip.go | Pure-Go MMDB reader, three-tier IP resolution |
| crs_rules.go | CRS catalog with search/autocomplete |
| cli.go | CLI subcommands (config, rules, events, deploy, blocklist, ratelimit) |
| rl_rules.go | Rate limit rule store with CRUD, validation, v1 migration |
| rl_generator.go | Rate limit Caddy config generator, condition-to-matcher translation |
| rl_analytics.go | Combined access log parser for 429/IPsum events, rule attribution |
| rl_advisor.go | Traffic analysis, statistical anomaly detection (MAD, Fano, IQR), recommendations |

Go 1.22+ method-pattern routing eliminates the need for a third-party router:

mux := http.NewServeMux()
// Core
mux.HandleFunc("GET /api/health", handleHealth(store, als, geoStore, exclusionStore, blocklistStore))
mux.HandleFunc("GET /api/summary", handleSummary(store, als))
mux.HandleFunc("GET /api/events", handleEvents(store, als))
// Policy Engine
mux.HandleFunc("GET /api/exclusions", handleListExclusions(exclusionStore))
mux.HandleFunc("POST /api/exclusions", handleCreateExclusion(exclusionStore))
mux.HandleFunc("GET /api/exclusions/{id}", handleGetExclusion(exclusionStore))
mux.HandleFunc("PUT /api/exclusions/{id}", handleUpdateExclusion(exclusionStore))
mux.HandleFunc("DELETE /api/exclusions/{id}", handleDeleteExclusion(exclusionStore))
// Config + Deploy
mux.HandleFunc("GET /api/config", handleGetConfig(configStore))
mux.HandleFunc("PUT /api/config", handleUpdateConfig(configStore))
mux.HandleFunc("POST /api/config/deploy", handleDeploy(configStore, exclusionStore, rateLimitStore, deployCfg))

Handlers use the closure pattern for dependency injection — each handler factory takes the stores it needs and returns an http.HandlerFunc:

func handleSummary(store *Store, als *AccessLogStore) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        store.mu.RLock()
        defer store.mu.RUnlock()
        // ... build summary from store data
        writeJSON(w, http.StatusOK, summary)
    }
}

Every store wraps its state with sync.RWMutex. Reads acquire RLock, mutations acquire Lock. Getters return deep copies to prevent callers from mutating shared state:

type ConfigStore struct {
    mu       sync.RWMutex
    config   WAFConfig
    filePath string
}

func (s *ConfigStore) Get() WAFConfig {
    s.mu.RLock()
    defer s.mu.RUnlock()
    // Return a deep copy
    cp := s.config
    cp.Services = make(map[string]WAFServiceSettings, len(s.config.Services))
    for k, v := range s.config.Services {
        cp.Services[k] = v
    }
    return cp
}

All file writes go through an atomic helper that prevents Caddy from reading partially-written config during a reload:

func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
    dir := filepath.Dir(path)
    tmp, err := os.CreateTemp(dir, filepath.Base(path)+".tmp.*")
    if err != nil {
        return fmt.Errorf("creating temp file: %w", err)
    }
    tmpPath := tmp.Name()
    // Clean up the temp file on any error.
    success := false
    defer func() {
        if !success {
            tmp.Close()
            os.Remove(tmpPath)
        }
    }()
    if _, err := tmp.Write(data); err != nil {
        return fmt.Errorf("writing temp file: %w", err)
    }
    if err := tmp.Sync(); err != nil {
        return fmt.Errorf("syncing temp file: %w", err)
    }
    if err := tmp.Close(); err != nil {
        return fmt.Errorf("closing temp file: %w", err)
    }
    if err := os.Chmod(tmpPath, perm); err != nil {
        return fmt.Errorf("chmod temp file: %w", err)
    }
    if err := os.Rename(tmpPath, path); err != nil {
        return fmt.Errorf("renaming temp file: %w", err)
    }
    success = true
    return nil
}

When a user changes settings or policies in the dashboard, the deploy flow is:

  1. Generate — translate stored JSON state into SecRule .conf files
  2. Write atomically — each config file written via the temp+fsync+rename pattern
  3. Fingerprint — SHA-256 hash of all generated config files
  4. Reload — read the Caddyfile, prepend a fingerprint comment, POST to Caddy’s admin API

The fingerprint step solves a subtle problem: when only included config files change (not the Caddyfile itself), Caddy’s /load endpoint sees identical Caddyfile text and skips reprovisioning. By prepending # wafctl deploy <timestamp> fingerprint:<sha256> to the in-memory Caddyfile content before POSTing, the admin API always sees a “changed” config. The on-disk Caddyfile is never modified.

// reloadCaddy sends the Caddyfile to Caddy's admin API to trigger a reload.
// configFiles are the paths whose contents contribute to the fingerprint.
func reloadCaddy(caddyfilePath, caddyAdminURL string, configFiles ...string) error {
    content, err := os.ReadFile(caddyfilePath)
    if err != nil {
        return fmt.Errorf("reading Caddyfile: %w", err)
    }
    fingerprint := deployFingerprint(configFiles)
    header := fmt.Sprintf("# wafctl deploy %s fingerprint:%s\n",
        time.Now().UTC().Format(time.RFC3339), fingerprint)
    payload := append([]byte(header), content...)
    req, _ := http.NewRequest("POST", caddyAdminURL+"/load", bytes.NewReader(payload))
    req.Header.Set("Content-Type", "text/caddyfile")
    client := &http.Client{Timeout: 90 * time.Second}
    resp, err := client.Do(req)
    if err != nil {
        return fmt.Errorf("Caddy admin API request failed: %w", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        body, _ := io.ReadAll(resp.Body)
        return fmt.Errorf("Caddy reload failed (status %d): %s", resp.StatusCode, body)
    }
    return nil
}

On startup, generateOnBoot() regenerates all config files from stored JSON state (WAF config, exclusions, rate limit rules). This means a full stack restart always picks up the latest generator output without requiring a manual deploy action. No Caddy reload is performed — Caddy reads the files fresh on its own startup.


The Policy Engine is the heart of the WAF management layer. It translates high-level exclusion rules into ModSecurity SecRule directives that Coraza understands.

| Type | Generated Action | Use Case |
|---|---|---|
| allow | ctl:ruleEngine=Off | Full WAF bypass for trusted paths/IPs |
| block | deny,status:403 | Explicit deny for malicious patterns |
| skip_rule | ctl:ruleRemoveById / ByTag | Skip specific CRS rules for false positives |
| honeypot | deny,status:403 (consolidated @pm) | Dynamic honeypot paths merged into one rule |
| raw | Verbatim SecRule | Free-form directives for edge cases |
| remove_by_id | Post-CRS SecRuleRemoveById | Configure-time rule exclusion |
| remove_by_tag | Post-CRS SecRuleRemoveByTag | Configure-time exclusion by tag category |
| update_target_by_id | Post-CRS SecRuleUpdateTargetById | Exclude specific variables from a rule |
| update_target_by_tag | Post-CRS SecRuleUpdateTargetByTag | Exclude variables from a tag category |
| runtime_remove_by_id | ctl:ruleRemoveById | Remove entire rule for matching requests |
| runtime_remove_by_tag | ctl:ruleRemoveByTag | Remove rule category for matching requests |
| runtime_remove_target_by_id | ctl:ruleRemoveTargetById | Exclude variable from specific rule at runtime |
| runtime_remove_target_by_tag | ctl:ruleRemoveTargetByTag | Exclude variable from tag category at runtime |

Conditions map to ModSecurity variables with operator support:

| Field | SecRule Variable | Supported Operators |
|---|---|---|
| ip | REMOTE_ADDR | eq, neq, ip_match, not_ip_match |
| path | REQUEST_URI | eq, neq, contains, begins_with, ends_with, regex, in |
| host | SERVER_NAME | eq, neq, contains |
| method | REQUEST_METHOD | eq, neq, in |
| user_agent | REQUEST_HEADERS:User-Agent | eq, contains, regex |
| header | REQUEST_HEADERS:<Name> | eq, contains, regex |
| query | QUERY_STRING | contains, regex |
| country | REQUEST_HEADERS:Cf-Ipcountry | eq, neq, in |
| cookie | REQUEST_COOKIES:<Name> | eq, neq, contains, regex |
| body | REQUEST_BODY | contains, regex |
| args | ARGS:<Name> | eq, neq, contains, regex |
| uri_path | REQUEST_FILENAME | eq, neq, contains, begins_with, ends_with, regex |
| referer | REQUEST_HEADERS:Referer | eq, neq, contains, regex |
| response_header | RESPONSE_HEADERS:<Name> | eq, contains, regex |
| response_status | RESPONSE_STATUS | eq, neq, in |
| http_version | REQUEST_PROTOCOL | eq, neq |

Multiple conditions on a single exclusion combine with chain (logical AND). The in operator generates separate rules for each value (logical OR) — e.g., blocking multiple countries produces individual SecRule directives rather than one chained rule.

Rather than generating one rule per honeypot path, the generator merges all honeypot-type exclusions into a single rule using the @pm (phrase match) operator:

SecRule REQUEST_URI "@pm /wp-admin /xmlrpc.php /.env /phpmyadmin" \
    "id:9100021,phase:1,deny,status:403,log,msg:'Dynamic honeypot match'"

This keeps the rule count low and Coraza’s evaluation fast, even with hundreds of honeypot paths.


wafctl parses two log streams simultaneously:

The audit log is configured with SecAuditLogFormat json and parts ABCFHKZ — Coraza outputs structured JSON with transaction metadata, request/response headers, and matched rules. The parser reads incrementally with offset tracking:

type Store struct {
    mu     sync.RWMutex
    events []Event
    // file tailing state
    path   string
    offset int64
    maxAge time.Duration
    geoIP  *GeoIPStore
}

// Load reads new lines appended since last offset and parses them.
func (s *Store) Load() {
    f, err := os.Open(s.path)
    if err != nil {
        if os.IsNotExist(err) {
            log.Printf("audit log not found at %s, will retry", s.path)
            return
        }
        log.Printf("error opening audit log: %v", err)
        return
    }
    defer f.Close()
    // Detect log rotation: file shrunk since last read
    info, _ := f.Stat()
    if info.Size() < s.offset {
        log.Printf("audit log appears rotated, re-reading from start")
        s.offset = 0
        s.mu.Lock()
        s.events = nil
        s.mu.Unlock()
    }
    if info.Size() == s.offset {
        return // nothing new
    }
    // ... seek to offset, parse new entries, update offset
}

From each JSON audit log entry, the parser extracts:

  • Anomaly scores from evaluation rules 949110 / 959100 / 980170
  • Matched rule IDs and messages from the messages array
  • Request URI, method, headers from the transaction.request object
  • Client IP, response status, and interruption details from transaction

A second parser tails Caddy’s access log (JSON format) looking for:

  • 429 responses — rate limit events, tagged with the triggering zone
  • IPsum blocks — requests matched by the blocklist, identified by a specific handler marker

Both parsers run on configurable intervals (default 5s) and maintain in-memory event stores with configurable maximum age (default 2160h / 90 days). Parsed events are persisted to JSONL files (/data/events.jsonl and /data/access-events.jsonl) so they survive restarts without re-parsing the raw logs.


IP geolocation uses a priority chain that degrades gracefully:

Priority 1: Cf-Ipcountry header (free, zero latency, present behind Cloudflare)
Priority 2: Local MMDB lookup (sub-microsecond, offline capable)
Priority 3: Online API fallback (configurable, cached)

The MMDB reader is a pure-Go implementation ported from a parallel Kubernetes project — no CGO, no libmaxminddb, just stdlib encoding/binary and net packages parsing the MaxMind binary tree format directly.

Online API results are cached in a shared in-memory LRU cache (24h TTL, 100k entries). The API URL supports both placeholder (https://api.example.com/%s/json where %s is the IP) and path-append styles. An optional API key is sent as a Bearer token.
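The priority chain can be sketched as a small resolver type. Names here are hypothetical stand-ins for the MMDB reader and the online API client:

```go
package main

import "fmt"

// countryResolver sketches the three-tier chain: Cf-Ipcountry header,
// then local MMDB, then online API. Illustrative only.
type countryResolver struct {
    mmdb func(ip string) (string, bool) // tier 2: local database
    api  func(ip string) (string, bool) // tier 3: online fallback (cached)
}

func (r countryResolver) resolve(ip, cfCountry string) string {
    if cfCountry != "" && cfCountry != "XX" { // tier 1: Cloudflare sends XX for unknown
        return cfCountry
    }
    if r.mmdb != nil {
        if cc, ok := r.mmdb(ip); ok {
            return cc
        }
    }
    if r.api != nil {
        if cc, ok := r.api(ip); ok {
            return cc
        }
    }
    return "" // unresolved
}

func main() {
    r := countryResolver{
        mmdb: func(string) (string, bool) { return "DE", true },
        api:  func(string) (string, bool) { return "US", true },
    }
    fmt.Println(r.resolve("198.51.100.7", "SG")) // header wins: SG
    fmt.Println(r.resolve("198.51.100.7", ""))   // falls back to MMDB: DE
}
```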


The frontend is an Astro 5 + React 19 + TypeScript 5.7 application with file-based routing, built into static files and served by Caddy.

| Route | Component | Purpose |
|---|---|---|
| / | OverviewDashboard | Timeline chart, service breakdown, recent events, top blocked IPs/URIs/countries, filter bar |
| /analytics | AnalyticsDashboard | IP Lookup — investigation tool with GeoIP, event history, pagination |
| /blocklist | BlocklistPanel | IPsum stats, IP check search, manual refresh |
| /events | EventsTable | Paginated event log with expandable detail rows, JSON export |
| /policy | PolicyEngine | Three-tab rule builder: Quick Actions, Advanced, Raw Editor |
| /rate-limits | RateLimitsPanel | Condition-based rate limit rule management |
| /services | ServicesList | Per-service stats, top URIs, triggered rules |
| /settings | SettingsPanel | Global + per-service WAF mode, paranoia, thresholds |

The Go backend returns snake_case JSON. The frontend’s api.ts module maps everything to camelCase TypeScript interfaces, providing full type safety across the boundary:

const API_BASE = "/api";

export interface WAFEvent {
  id: string;
  timestamp: string;
  clientIp: string;      // mapped from client_ip
  requestUri: string;    // mapped from request_uri
  anomalyScore: number;  // mapped from anomaly_score
  matchedRules: MatchedRule[];
  // ...
}

export async function fetchEvents(params: EventParams): Promise<WAFEvent[]> {
  const res = await fetch(`${API_BASE}/events?${buildQuery(params)}`);
  if (!res.ok) throw new Error(`Events fetch failed: ${res.status}`);
  const data = await res.json();
  return data.map(mapEvent);
}
  • shadcn/ui — 16 primitives (button, card, badge, dialog, tabs, etc.) in src/components/ui/
  • Tailwind CSS 4 — utility-first styling with cn() helper (clsx + tailwind-merge)
  • Recharts — timeline charts, donut charts, stacked bar charts
  • CodeMirror 6 — raw SecRule editor with ModSecurity syntax highlighting
  • Lucide — icon set

The IP reputation blocklist has a multi-phase lifecycle ensuring the stack is never unprotected:

Build time — the Dockerfile fetches the latest IPsum list, filters by minimum score, and generates a Caddy client_ip matcher snippet baked into the image.

First boot — the entrypoint script checks if the runtime volume has an ipsum_block.caddy file. If missing, or if the file lacks an # Updated: header comment (indicating it came from an older build), the baked-in snapshot is copied to the volume.

Scheduled refresh — wafctl’s Go scheduler (StartScheduledRefresh()) runs a background goroutine that sleeps until the next refresh hour (default 06:00 UTC, configurable via WAF_BLOCKLIST_REFRESH_HOUR), then loops daily. This replaced an earlier cron-based approach, keeping the refresh lifecycle entirely within the Go process for better observability and error handling.

On-demand refresh — POST /api/blocklist/refresh triggers the same pipeline immediately from the dashboard. The Go handler downloads the list, parses it, generates the Caddy snippet, atomically writes the file, refreshes the in-memory cache, and reloads Caddy — all in one synchronous request.

The BlocklistStore parser extracts the update timestamp from a # Updated: <RFC3339> comment at the top of the file. If the comment is missing (older format), it falls back to the file’s mtime.


Coraza writes directly to /var/log/coraza-audit.log with no built-in rotation. An hourly cron job (rotate-audit-log.sh) uses copytruncate when the file exceeds 256MB — it copies the log, then truncates the original in-place. Settings: 5 rotated files retained for 90 days.

wafctl’s offset tracking detects the size shrink (file is now smaller than the stored offset) and resets automatically. In-memory events are preserved across rotations — they age out naturally via the configurable maxAge eviction (default 90 days).


The stack applies defense-in-depth at every layer:

| Layer | Mechanism |
|---|---|
| Container runtime | read_only, cap_drop: ALL, no-new-privileges, non-root users |
| Network isolation | Separate bridge networks with static IPs, no cross-container communication |
| Secrets | SOPS-encrypted .env (age), Authelia secrets as mounted files via _FILE vars |
| Pre-commit hooks | Block unencrypted secrets, check for SOPS markers, glob-based exemptions |
| File operations | Atomic writes (temp + fsync + rename) prevent partial reads |
| WAF config | Single source of truth in wafctl, not scattered across Caddyfile |
| Client IP integrity | trusted_proxies_strict prevents XFF header spoofing via right-to-left parsing |
| TLS | ACME DNS challenge (no exposed port 80), Encrypted Client Hello, strict SNI |

The pre-commit hook deserves special mention. It inspects staged files for unencrypted secrets by checking for SOPS ENC[AES256_GCM,...] markers in YAML/JSON files and blocking .env, .tfvars, and .tfstate files that lack encryption. Projects can opt out per-file via .allow-unencrypted-paths glob patterns.


# Build all images
make build
# Build only the Caddy image (includes dashboard)
make build-caddy
# Build only the standalone wafctl image
make build-wafctl
# Run all tests
make test
# Go tests only
make test-go
# Frontend tests only (Vitest)
make test-frontend
# Single Go test
cd wafctl && go test -run TestGenerateSecRules -count=1 -timeout 60s ./...
# Single frontend test
cd waf-dashboard && npx vitest run -t "fetchEvents"

Tests are whitebox (package main) using table-driven subtests:

func TestWAFModeTransitions(t *testing.T) {
    tests := []struct {
        name    string
        from    string
        to      string
        wantErr bool
    }{
        {"enabled to detection", "enabled", "detection_only", false},
        {"enabled to disabled", "enabled", "disabled", false},
        {"invalid mode", "enabled", "turbo", true},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            store := newTestConfigStore(t, tt.from)
            _, err := store.Update(WAFConfig{Mode: tt.to})
            if (err != nil) != tt.wantErr {
                t.Errorf("got err=%v, wantErr=%v", err, tt.wantErr)
            }
        })
    }
}

Handler tests use httptest.NewServer to mock the Caddy admin API and verify the full deploy pipeline end-to-end. Temp file helpers (writeTempLog, newTestConfigStore) keep test setup concise.

Vitest tests mock fetch globally and verify the API layer’s snake-to-camel mapping:

```ts
describe("fetchEvents", () => {
  it("maps snake_case to camelCase", async () => {
    vi.spyOn(globalThis, "fetch").mockResolvedValue(
      new Response(JSON.stringify([{
        client_ip: "1.2.3.4",
        request_uri: "/test",
        anomaly_score: 15,
      }]))
    );
    const events = await fetchEvents({});
    expect(events[0].clientIp).toBe("1.2.3.4");
    expect(events[0].requestUri).toBe("/test");
    expect(events[0].anomalyScore).toBe(15);
  });
});
```

The Makefile supports two deployment modes configurable via .env.mk:

Full pipeline (make deploy):

  1. Build both Docker images locally
  2. Push to registry
  3. SCP Caddyfile and compose.yaml to the remote host
  4. Pull updated images on the remote
  5. Restart the stack

Partial deploys for faster iteration:

  • make deploy-caddy — rebuild and deploy only the Caddy image (includes dashboard changes)
  • make deploy-wafctl — build, push, and restart only the sidecar container

Config-only changes require no image rebuild:

  • SCP the Caddyfile, then docker exec caddy caddy reload
  • Or change settings through the dashboard, which triggers the sidecar’s deploy pipeline

GitHub Actions automates the full build-sign-release cycle:

build.yml — triggered on push to main with path filters:

  1. Builds both Docker images with multi-platform support
  2. Runs Trivy vulnerability scans
  3. Pushes to Docker Hub
  4. Signs images by digest using cosign keyless (Sigstore OIDC)
  5. Generates SBOMs via syft/anchore for both images
  6. Attaches SBOM attestations via cosign attest

release.yml — triggered on v* tag push:

  1. Generates release notes from commit log since previous tag
  2. Creates a GitHub release with the changelog

Cosign keyless signing uses GitHub’s OIDC identity — no signing keys to manage. Verification:

```sh
cosign verify your-registry/caddy:2.2.2-2.11.1 \
  --certificate-identity-regexp='github.com/your-org/caddy-compose' \
  --certificate-oidc-issuer='https://token.actions.githubusercontent.com'
```

wafctl is both an HTTP server and a CLI tool. When run without arguments (or with serve), it starts the API server. All other commands are CLI clients that talk to a running wafctl instance via HTTP:

```sh
wafctl                      # Start HTTP server (default)
wafctl serve                # Same as above
wafctl version              # Print version
wafctl health               # Check server health
wafctl config get           # Show WAF configuration
wafctl config set           # Update config (JSON on stdin or --file)
wafctl rules list           # List policy exclusion rules
wafctl rules get ID         # Get rule by ID
wafctl rules create         # Create rule (JSON on stdin or --file)
wafctl rules delete ID      # Delete rule by ID
wafctl deploy               # Deploy WAF config to Caddy
wafctl events               # List events (--hours, --limit, --service, --type, --client, --method, --rule)
wafctl ratelimit list       # List all rate limit rules (alias: rl)
wafctl ratelimit get ID     # Get a rate limit rule by ID
wafctl ratelimit create     # Create rule (JSON on stdin or --file)
wafctl ratelimit delete ID  # Delete a rate limit rule
wafctl ratelimit deploy     # Deploy rate limit configs to Caddy
wafctl ratelimit global     # Show global rate limit settings
wafctl blocklist stats      # Show blocklist statistics
wafctl blocklist check IP   # Check whether an IP is on the blocklist
wafctl blocklist refresh    # Refresh the blocklist
```
Global flags: --addr (API address), --json (raw JSON output), --file/-f (read input from file instead of stdin).


Custom rules follow a strict ID allocation scheme to avoid collisions with the CRS:

| Range | Purpose |
| --- | --- |
| 9100001–9100006 | Pre-CRS rules (body settings, XXE, JSON processor) |
| 9100010–9100019 | Post-CRS custom detection (RCE, CRLF) |
| 9100020–9100029 | Honeypot rules (static + dynamic Policy Engine) |
| 9100030–9100039 | Heuristic bot signal rules (scanner UA matching) |
| 9100050–9100059 | Reserved for GeoIP blocking |
| 95xxxxx | Generated Policy Engine exclusion rules |
| 97xxxxx | Generated WAF settings overrides |
| 910000–979999 | OWASP CRS (do not use) |

This namespacing ensures that generated rules from the Policy Engine never collide with baked-in custom rules or the CRS itself.