
Gloryhole DoT DNS service interop with Magic WAN and VyOS

Expose a [Glory-Hole DoT listener](https://github.com/erfianugrah/gloryhole) on VyOS 1.4 via DNAT (853 → 10.0.10.10:853), directly to the internet. LAN hosts hairpin through Magic WAN IPsec. All Cloudflare-bound traffic routes via Magic WAN, since we can't differentiate site-local Cloudflare services from internet-facing Cloudflare.

Topology at a glance:

  • Internet clients → VyOS WAN (pppoe0, 195.240.x.x) → DNAT rule 29 (853 → 10.0.10.10:853) → Glory-Hole container (10.0.10.10:853 on pod-podman-2)
  • LAN hosts (10.0.69.x, 10.0.71.x) hairpin via Magic WAN (IPsec vti0, table 20)
  • PBR from the container: rule 1 sends Cloudflare IPs to table 20; catch-all rule 2 sends the rest to the main table

set container name glory-hole cap-add 'net-bind-service'
set container name glory-hole environment TZ value 'Europe/Amsterdam'
set container name glory-hole host-name 'glory-hole'
set container name glory-hole image 'erfianugrah/glory-hole:v0.9.11'
set container name glory-hole memory '1024'
set container name glory-hole network podman-2 address '10.0.10.10'
set container name glory-hole port dot_tcp destination '853'
set container name glory-hole port dot_tcp protocol 'tcp'
set container name glory-hole port dot_tcp source '8853'
set container name glory-hole port dot_udp destination '853'
set container name glory-hole port dot_udp protocol 'udp'
set container name glory-hole port dot_udp source '8853' # DoT is TCP-only; UDP kept for parity
set container name glory-hole restart 'always'
set container name glory-hole shared-memory '512'
set container name glory-hole volume data destination '/var/lib/glory-hole'
set container name glory-hole volume data mode 'rw'
set container name glory-hole volume data source '/config/glory-hole/data'
set container name glory-hole volume etc destination '/etc/glory-hole'
set container name glory-hole volume etc mode 'rw'
set container name glory-hole volume etc source '/config/glory-hole/etc'
set container name glory-hole volume localtime destination '/etc/localtime'
set container name glory-hole volume localtime mode 'ro'
set container name glory-hole volume localtime source '/etc/localtime'
set container name glory-hole volume logs destination '/var/log/glory-hole'
set container name glory-hole volume logs mode 'rw'
set container name glory-hole volume logs source '/config/glory-hole/logs'

Ensure config.yml inside /config/glory-hole/etc has:

  • server.dot_enabled: true
  • server.dot_address: ":853"
  • TLS configured (manual PEM or ACME) for your DoT hostname.
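A minimal sketch of the corresponding config.yml fragment, assuming the dotted key names above nest as shown (the TLS block is deliberately omitted; consult the Glory-Hole documentation for its exact keys):

```yaml
# Assumed nesting of the dotted keys listed above.
server:
  dot_enabled: true
  dot_address: ":853"
# TLS (manual PEM or ACME) must also be configured for your DoT hostname.
```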

Direct port 853 exposure: the DNAT below translates straight to the container IP, so the 8853 host-port mapping above is not part of this path:

set nat destination rule 29 description 'gloryhole-dot'
set nat destination rule 29 destination port '853'
set nat destination rule 29 inbound-interface name 'pppoe0'
set nat destination rule 29 log
set nat destination rule 29 protocol 'tcp_udp'
set nat destination rule 29 translation address '10.0.10.10'
set nat destination rule 29 translation port '853'
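Once committed, the rule and its hit counters can be inspected from operational mode (command names per VyOS 1.4; verify against your release):

```
show nat destination rules
show nat destination statistics
```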

Other services (optional, for reference):

# CoreDNS on alternative port
set nat destination rule 30 description 'coredns'
set nat destination rule 30 destination port '5055'
set nat destination rule 30 inbound-interface name 'pppoe0'
set nat destination rule 30 log
set nat destination rule 30 protocol 'tcp_udp'
set nat destination rule 30 translation address '10.0.10.6'
set nat destination rule 30 translation port '53'
# PiKVM web interface
set nat destination rule 31 description 'kvm'
set nat destination rule 31 destination port '2054'
set nat destination rule 31 inbound-interface name 'pppoe0'
set nat destination rule 31 log
set nat destination rule 31 protocol 'tcp_udp'
set nat destination rule 31 translation address '10.0.69.5'
set nat destination rule 31 translation port '80'
# PiKVM HTTPS
set nat destination rule 32 description 'kvm'
set nat destination rule 32 destination port '2054'
set nat destination rule 32 inbound-interface name 'pppoe0'
set nat destination rule 32 log
set nat destination rule 32 protocol 'tcp_udp'
set nat destination rule 32 translation address '10.0.69.5'
set nat destination rule 32 translation port '443'
# RIPE Atlas probe
set nat destination rule 33 description 'atlas-probe'
set nat destination rule 33 destination port '2023'
set nat destination rule 33 inbound-interface name 'pppoe0'
set nat destination rule 33 log
set nat destination rule 33 protocol 'tcp_udp'
set nat destination rule 33 translation address '10.0.10.7'
set nat destination rule 33 translation port '2023'
# RIPE Atlas web
set nat destination rule 34 description 'atlas-probe'
set nat destination rule 34 destination port '8080'
set nat destination rule 34 inbound-interface name 'pppoe0'
set nat destination rule 34 log
set nat destination rule 34 protocol 'tcp_udp'
set nat destination rule 34 translation address '10.0.10.7'
set nat destination rule 34 translation port '8080'

Outbound internet traffic is masqueraded on pppoe0:

set nat source rule 20 description 'pppoe'
set nat source rule 20 log
set nat source rule 20 outbound-interface name 'pppoe0'
set nat source rule 20 source address '10.0.0.0/8'
set nat source rule 20 translation address 'masquerade'

The EXTERNAL-IN chain must accept the forwarded traffic. DNAT happens before filtering, so the rule matches the translated destination address:

set firewall ipv4 name EXTERNAL-IN rule 50 action 'accept'
set firewall ipv4 name EXTERNAL-IN rule 50 description 'Allow DoT to glory-hole'
set firewall ipv4 name EXTERNAL-IN rule 50 protocol 'tcp'
set firewall ipv4 name EXTERNAL-IN rule 50 destination address '10.0.10.10'
set firewall ipv4 name EXTERNAL-IN rule 50 destination port '853'
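To confirm the rule is present and its counters increment during a test query (VyOS 1.4 op-mode syntax; verify on your release):

```
show firewall ipv4 name EXTERNAL-IN
```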

Policy routing controls how traffic from glory-hole reaches different destinations:

  • Rule 1: All Cloudflare traffic via Magic WAN (cf-ipv4 → table 20)
  • Rule 2: Catch-all to main table/pppoe0 (current order puts this before hairpin rules)
  • Rules 5–8: Hairpin specific LAN hosts through IPsec (shadowed by rule 2 unless rule 2 is moved lower)
  • Rule 100: Final catch-all to main

Current order: 1, 2, 5, 6, 7, 8, 100. Move rules 5–8 above rule 2 if you need hairpin replies to stay on IPsec.
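Reordering can be done with config-mode rename rather than deleting and retyping the rule. A sketch, with 90 as an illustrative free rule number (check rename support for tag nodes on your build):

```
configure
rename policy route magic-wan-ipsec-glory-hole rule 2 to rule 90
commit
save
```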

set policy route magic-wan-ipsec-glory-hole default-log
set policy route magic-wan-ipsec-glory-hole interface 'pod-podman-2'
# Rule 1: All Cloudflare traffic via Magic WAN (REQUIRED)
set policy route magic-wan-ipsec-glory-hole rule 1 description 'gh -> cloudflare magic wan'
set policy route magic-wan-ipsec-glory-hole rule 1 destination group network-group 'cf-ipv4'
set policy route magic-wan-ipsec-glory-hole rule 1 set table '20'
set policy route magic-wan-ipsec-glory-hole rule 1 source address '10.0.10.10'
# Rule 2: Current catch-all for internet traffic via pppoe0
set policy route magic-wan-ipsec-glory-hole rule 2 description 'All other traffic from gh -> main (WAN)'
set policy route magic-wan-ipsec-glory-hole rule 2 set table 'main'
set policy route magic-wan-ipsec-glory-hole rule 2 source address '10.0.10.10'
# Rules 5-8: Hairpin specific LAN hosts through IPsec
# (These are shadowed by rule 2 unless you move them above it)
set policy route magic-wan-ipsec-glory-hole rule 5 description 'gh -> erfipie'
set policy route magic-wan-ipsec-glory-hole rule 5 destination address '10.0.69.7'
set policy route magic-wan-ipsec-glory-hole rule 5 set table '20'
set policy route magic-wan-ipsec-glory-hole rule 5 source address '10.0.10.10'
set policy route magic-wan-ipsec-glory-hole rule 6 description 'gh -> traefik'
set policy route magic-wan-ipsec-glory-hole rule 6 destination address '10.0.71.100'
set policy route magic-wan-ipsec-glory-hole rule 6 set table '20'
set policy route magic-wan-ipsec-glory-hole rule 6 source address '10.0.10.10'
set policy route magic-wan-ipsec-glory-hole rule 7 description 'gh -> erfi1'
set policy route magic-wan-ipsec-glory-hole rule 7 destination address '10.0.69.3'
set policy route magic-wan-ipsec-glory-hole rule 7 set table '20'
set policy route magic-wan-ipsec-glory-hole rule 7 source address '10.0.10.10'
set policy route magic-wan-ipsec-glory-hole rule 8 description 'gh -> pikvm'
set policy route magic-wan-ipsec-glory-hole rule 8 destination address '10.0.69.5'
set policy route magic-wan-ipsec-glory-hole rule 8 set table '20'
set policy route magic-wan-ipsec-glory-hole rule 8 source address '10.0.10.10'
# Rule 100: Final catch-all to main
set policy route magic-wan-ipsec-glory-hole rule 100 description 'gh -> main'
set policy route magic-wan-ipsec-glory-hole rule 100 set table 'main'
set policy route magic-wan-ipsec-glory-hole rule 100 source address '10.0.10.10'
commit
save

Define a network group containing Cloudflare’s IP ranges (since we can’t differentiate site-local from internet Cloudflare):

set firewall group network-group cf-ipv4 network '173.245.48.0/20'
set firewall group network-group cf-ipv4 network '103.21.244.0/22'
set firewall group network-group cf-ipv4 network '103.22.200.0/22'
set firewall group network-group cf-ipv4 network '103.31.4.0/22'
set firewall group network-group cf-ipv4 network '141.101.64.0/18'
set firewall group network-group cf-ipv4 network '108.162.192.0/18'
set firewall group network-group cf-ipv4 network '190.93.240.0/20'
set firewall group network-group cf-ipv4 network '188.114.96.0/20'
set firewall group network-group cf-ipv4 network '197.234.240.0/22'
set firewall group network-group cf-ipv4 network '198.41.128.0/17'
set firewall group network-group cf-ipv4 network '162.158.0.0/15'
set firewall group network-group cf-ipv4 network '104.16.0.0/13'
set firewall group network-group cf-ipv4 network '104.24.0.0/14'
set firewall group network-group cf-ipv4 network '172.64.0.0/13'
set firewall group network-group cf-ipv4 network '131.0.72.0/22'
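Worth noting for the return-path discussion later: Magic WAN edge sources like 172.71.x.x already fall inside 172.64.0.0/13 from this group, so destination-based rule 1 can match replies to them. A quick check with Python's stdlib ipaddress module (subset of the group shown):

```python
import ipaddress

# A subset of the cf-ipv4 prefixes configured above
cf_ipv4 = [ipaddress.ip_network(n) for n in (
    "173.245.48.0/20", "104.16.0.0/13", "172.64.0.0/13", "162.158.0.0/15",
)]

def in_cf(addr: str) -> bool:
    """True if addr falls inside any cf-ipv4 prefix."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in cf_ipv4)

# The /16 the doc associates with Magic WAN edge SNAT nests in 172.64.0.0/13
edge = ipaddress.ip_network("172.71.0.0/16")
print(in_cf("172.71.12.34"))                                   # True
print(edge.subnet_of(ipaddress.ip_network("172.64.0.0/13")))   # True
```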

With the current order (1, 2, 5–8, 100), the broad catch-all rule 2 matches before the hairpin rules. Move rules 5–8 above rule 2 if you need those hairpins to take effect.

Decision flow for a packet from Glory-Hole (10.0.10.10):

  1. Rule 1: destination in cf-ipv4? Yes → table 20 (vti0 IPsec). No → continue.
  2. Rule 2: catch-all matches → main table (pppoe0). Rules below are reached only if rule 2 is narrowed.
  3. Rules 5-8: LAN hairpins → table 20 (vti0 IPsec).
  4. Rule 100: catch-all → main table (pppoe0).

The current config defines per-host policy routes:

# erfi1 (no interface set)
set policy route magic-wan-ipsec-erfi1 default-log
set policy route magic-wan-ipsec-erfi1 rule 5 description 'erfi1 -> gh'
set policy route magic-wan-ipsec-erfi1 rule 5 destination address '10.0.10.10'
set policy route magic-wan-ipsec-erfi1 rule 5 set table '20'
set policy route magic-wan-ipsec-erfi1 rule 5 source address '10.0.69.3'
set policy route magic-wan-ipsec-erfi1 rule 100 set table 'main'
set policy route magic-wan-ipsec-erfi1 rule 100 source address '10.0.69.3'
# erfipie on eth1
set policy route magic-wan-ipsec-erfipie default-log
set policy route magic-wan-ipsec-erfipie interface 'eth1'
set policy route magic-wan-ipsec-erfipie rule 5 description 'erfipie -> gh'
set policy route magic-wan-ipsec-erfipie rule 5 destination address '10.0.10.10'
set policy route magic-wan-ipsec-erfipie rule 5 set table '20'
set policy route magic-wan-ipsec-erfipie rule 5 source address '10.0.69.7'
set policy route magic-wan-ipsec-erfipie rule 100 set table 'main'
set policy route magic-wan-ipsec-erfipie rule 100 source address '10.0.69.7'
# pikvm
set policy route magic-wan-ipsec-pikvm default-log
set policy route magic-wan-ipsec-pikvm rule 5 description 'pikvm -> gh'
set policy route magic-wan-ipsec-pikvm rule 5 destination address '10.0.10.10'
set policy route magic-wan-ipsec-pikvm rule 5 set table '20'
set policy route magic-wan-ipsec-pikvm rule 5 source address '10.0.69.5'
set policy route magic-wan-ipsec-pikvm rule 100 set table 'main'
set policy route magic-wan-ipsec-pikvm rule 100 source address '10.0.69.5'
# traefik on vlan 200
set policy route magic-wan-ipsec-traefik default-log
set policy route magic-wan-ipsec-traefik interface 'eth1.200'
set policy route magic-wan-ipsec-traefik rule 5 description 'traefik -> gh'
set policy route magic-wan-ipsec-traefik rule 5 destination address '10.0.10.10'
set policy route magic-wan-ipsec-traefik rule 5 set table '20'
set policy route magic-wan-ipsec-traefik rule 5 source address '10.0.71.100'
set policy route magic-wan-ipsec-traefik rule 100 set table 'main'
set policy route magic-wan-ipsec-traefik rule 100 source address '10.0.71.100'
  • magic-wan-ipsec-glory-hole is bound to pod-podman-2; of the per-host routes above, erfipie (eth1) and traefik (eth1.200) are interface-bound, while erfi1 and pikvm have no interface set.
  • VyOS processes rules top-down; if you want hairpin routes (rules 5–8) to win, place them above any catch-alls.
  • Add interfaces to the erfi1/pikvm policy routes if they should be interface-bound like the others.
Each PBR table resolves to its tunnel interface via a static default route:

set protocols static table 10 route 0.0.0.0/0 interface tun0
set protocols static table 20 route 0.0.0.0/0 interface vti0
set protocols static table 30 route 0.0.0.0/0 interface wg0

From WAN (public):

kdig @your-dot-host -p 853 +tls-host=your-dot-host +tls-ca A example.com

Direct to IP (bypassing any proxy/CDN):

kdig @195.240.81.42 -p 853 +tls-host=your-dot-host +tls-ca A example.com

Local sanity:

kdig @10.0.10.10 -p 853 +tls-host=your-dot-host +tls-ca A example.com
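If kdig isn't available, a DoT query can be sketched with Python's stdlib: DNS over TLS (RFC 7858) is a standard DNS wire-format message with a two-byte length prefix inside a TLS session on port 853. Server and hostname below are placeholders.

```python
import socket
import ssl
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Minimal DNS query (RD=1) for name; qtype 1 = A record."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def dot_query(server: str, tls_host: str, name: str, port: int = 853) -> bytes:
    """Send one query over TLS with the RFC 7858 2-byte length prefix."""
    msg = build_query(name)
    ctx = ssl.create_default_context()
    with socket.create_connection((server, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=tls_host) as tls:
            tls.sendall(struct.pack(">H", len(msg)) + msg)
            (rlen,) = struct.unpack(">H", tls.recv(2))
            reply = b""
            while len(reply) < rlen:
                reply += tls.recv(rlen - len(reply))
            return reply

# reply = dot_query("10.0.10.10", "your-dot-host", "example.com")
```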

Routing sanity:

# Check IP rules (run from VyOS shell)
ip rule show
# Test routing decision for glory-hole traffic
sudo ip route get 8.8.8.8 from 10.0.10.10

Expect lookup main via pppoe0.

If you see Network is unreachable, PBR isn’t selecting main; ensure rule 100 exists as the catch-all.

Packet capture (if issues):

# Capture on podman interface
sudo tcpdump -i pod-podman-2 -n host 10.0.10.10 and port 853
# Capture on WAN interface
sudo tcpdump -i pppoe0 -n host 195.240.81.42 and port 853
  • Catch-all ordering matters: with rule 2 as a broad catch-all, hairpin rules (5–8) never match. Move hairpin rules above rule 2 or narrow rule 2’s match if hairpins are required.
  • Optional return-path rule: if Magic WAN source-NATs hairpin traffic to 172.71.0.0/16, add a rule above the catch-all that sends that /16 to table 20; otherwise replies will exit pppoe0.
  • Symptoms of wrong order/return path: [UNREPLIED] in conntrack, DNS queries timeout, hairpin hosts unreachable.
  • DoT is TCP-only; the UDP 853 mapping is not required but harmless if configured.
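A sketch of that optional return-path rule; rule number 3 is illustrative and only takes effect if it evaluates before the catch-all (renumber the catch-all higher first):

```
set policy route magic-wan-ipsec-glory-hole rule 3 description 'gh replies -> Magic WAN edge'
set policy route magic-wan-ipsec-glory-hole rule 3 destination address '172.71.0.0/16'
set policy route magic-wan-ipsec-glory-hole rule 3 set table '20'
set policy route magic-wan-ipsec-glory-hole rule 3 source address '10.0.10.10'
```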

If LAN hosts cannot reach glory-hole through the hairpin:

# From VyOS operational mode
show policy route magic-wan-ipsec-glory-hole

Ensure rules 5–8 (hairpins) sit above the catch-all(s) so they are evaluated.

# Check nftables rules and counters (run from VyOS shell)
sudo nft list ruleset | grep -B2 -A10 "magic-wan-ipsec"

Check the counters for both LAN host and glory-hole PBR chains. The packets and bytes counters should increment when testing hairpin traffic.

# From VyOS operational mode
show ip route table 20
show ip route table main
# From VyOS shell - test routing decision
sudo ip route get 10.0.10.10 mark 0x7fffffeb

The marked packet should use table 20 and route via vti0.

# From VyOS shell
sudo conntrack -L | grep 10.0.10.10

Look for [UNREPLIED] entries, which indicate return traffic is failing. Successful hairpins show [ASSURED].

# From VyOS shell - monitor IPsec tunnel traffic
sudo tcpdump -i vti0 -n host 10.0.10.10

From LAN host, query glory-hole (e.g., dig @10.0.10.10 example.com). You should see:

  1. Outbound: 10.0.100.20.xxxxx > 10.0.10.10.853 (VTI IP → glory-hole)
  2. Inbound: 172.71.x.x.xxxxx > 10.0.10.10.853 (Magic WAN edge → glory-hole)

If you only see outbound, Magic WAN isn’t routing return traffic back through the tunnel.

# From VyOS shell - monitor podman bridge traffic
sudo tcpdump -i pod-podman-2 -n host 10.0.10.10 and port 853

From LAN host, query glory-hole. You should see:

  • Inbound: Packets with src=172.71.x.x (Magic WAN edge IP, not LAN host IP)
  • Outbound: Replies to dst=172.71.x.x

If packets arrive with src=<LAN-host-IP>, hairpin isn’t working—traffic is routing directly instead of through Magic WAN.

# From VyOS operational mode - check LAN host PBR
show policy route magic-wan-lan-host
# From VyOS shell - test LAN host routing decision
sudo ip route get 10.0.10.10 from 10.0.69.7 iif eth1

The routing decision should show table 20 and vti0. If it shows direct routing via pod-podman-2, the LAN host PBR isn’t working.

# From VyOS operational mode
show vpn ipsec sa
show interfaces vti

Ensure IPsec tunnel is UP and passing traffic (check Bytes In/Out).

In your Magic WAN dashboard, confirm:

  • Both glory-hole subnet (10.0.10.0/24) and LAN subnets (10.0.69.0/24, etc.) are configured
  • Routes point to the correct IPsec tunnel (check tunnel name matches)
  • No conflicting routes with higher priority
  • All prefixes show “Healthy” status
# Policy routing
show policy route magic-wan-ipsec-glory-hole
show policy route magic-wan-lan-host
# Routing tables
show ip route table 20
show ip route table main
# IPsec status
show vpn ipsec sa
show interfaces vti
# Network groups
show firewall group network-group cf-ipv4
# Check routing decision
sudo ip route get 10.0.10.10 from 10.0.69.7 iif eth1
sudo ip route get 10.0.10.10 mark 0x7fffffeb
# Check packet marking
sudo nft list ruleset | grep -B2 -A10 "magic-wan-ipsec"
# Check connection tracking
sudo conntrack -L | grep 10.0.10.10
# Capture traffic
sudo tcpdump -i vti0 -n host 10.0.10.10
sudo tcpdump -i pod-podman-2 -n host 10.0.10.10 and port 853