---
date: 2026-04-21T06:32:52.000Z
title: Schrödinger's Honeypot on FreeBSD and nginx
summary: Every day, bots scan my site for WordPress paths that do not exist. With a small nginx trick, those probes become self-inflicted bans. Here is how I adapted Schrödinger's Honeypot for a FreeBSD, nginx, jail setup.
category: bsd
gardenStage: evergreen
visibility: Public
aiTextLevel: 1
syndication:
  - https://bsky.app/profile/did:plc:g4utqyolpyb5zpwwodmm3hht/post/3mjyebefxei2c
  - https://blog.giersig.eu/articles/schroedingers-honeypot-on-freebsd-and/
  - https://news.indieweb.org/en/blog.giersig.eu/articles/schroedingers-honeypot-on-freebsd-and/
updated: 2026-04-27T06:35:02.227Z
webmentionResults:
  sent: 0
  failed: 0
  skipped: 1
  details:
    - target: https://bsky.app/profile/did:plc:g4utqyolpyb5zpwwodmm3hht/post/3mjyebefxei2c
      reason: No webmention endpoint found
  timestamp: 2026-04-21T06:33:30.743Z
webmentionSent: true
mpUrl: https://blog.giersig.eu/articles/schroedingers-honeypot-on-freebsd-and/
permalink: /articles/schroedingers-honeypot-on-freebsd-and/
---

There is a thought experiment at the heart of this technique: the scanner cannot know whether WordPress is installed on your server without actually probing. Until it probes, both states exist simultaneously: WordPress and no-WordPress, Schrödinger's CMS. The moment it reaches out to check, it collapses the superposition and gets banned.

The technique is not mine. c0t0d0s0.org described it beautifully for Apache and nftables. My server runs FreeBSD, bastille jails, and nginx. Here is the adaptation.


The idea

My site has no WordPress. No wp-admin, no xmlrpc.php, no .env file leaking credentials. Yet every single minute, bots arrive and probe for exactly those paths. They are not reading my posts. They are looking for something to exploit.

The original insight: instead of silently returning 404 and moving on, log these requests to a separate file - one where every entry is already suspicious. Feed that file to fail2ban. Set maxretry = 1. One probe, immediate ban.

The Schrödinger framing earns its name: from the scanner's perspective, your site is a superposition. WordPress might be there. By measuring - by sending that GET request - it collapses the state and outs itself.


The nginx approach

The Apache version uses .htaccess rewrite rules to set an environment variable, then conditional logging. nginx does not have .htaccess. But it has map, which is actually cleaner.

In the nginx http block, add a file conf.d/honeypot.conf:

map $request_uri $is_honeypot {
    default 0;
    ~*/(wp-admin|wp-login|xmlrpc\.php|\.env|\.git/|phpmyadmin|pma|myadmin|admin\.php|setup\.php|install\.php|shell\.php|config\.php) 1;
    ~*\.(asp|aspx|jsp)(\?|$) 1;
    ~*/cgi-bin/ 1;
    ~*/actuator/ 1;
    ~*/solr/ 1;
    ~*/jmx-console/ 1;
    ~*/manager/html 1;
}

access_log /var/log/nginx/honeypot.log combined if=$is_honeypot;

The map sets $is_honeypot = 1 for any request URI that matches known scanner bait. The access_log with if= writes only those requests to a separate logfile - nginx skips the log entry whenever the condition evaluates to "0" or an empty string. All legitimate traffic continues to the normal access log, unaffected.

One gotcha: nginx's conf.d/ directory is not always a wildcard include. On this server, nginx.conf lists specific files explicitly rather than using include conf.d/*.conf. I had to add an explicit line:

include /usr/local/etc/nginx/conf.d/honeypot.conf;
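
nginx lives inside the jail, so the syntax check runs through bastille (the jail is named web in this setup, matching the log path that appears further down):

bastille cmd web nginx -t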

The config test passed immediately; nothing in the honeypot log yet. On to the second gotcha.

Another gotcha: nginx's access_log inheritance. If a server {} block defines its own access_log, it completely replaces the http-level one - there is no additive inheritance. Three of my virtual hosts had per-site log lines. Each needed an extra line alongside its existing one:

access_log /var/log/nginx/blog.giersig.access.log;
access_log /var/log/nginx/honeypot.log combined if=$is_honeypot;  # added

After that, the log lit up immediately.
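
A quick end-to-end check, best done now, before fail2ban is wired up (so tripping the honeypot does not ban you): request a bait path from an external machine and confirm it lands in the new log. The domain here is this site's; substitute your own.

# From an external machine: trip the honeypot on purpose
curl -s -o /dev/null -w '%{http_code}\n' https://blog.giersig.eu/wp-login.php

# On the server: the probe should show up here
tail -1 /var/log/nginx/honeypot.log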


fail2ban and pf

fail2ban was already running on the host, with FreeBSD's pf as the default banaction. The only new pieces were a filter and a jail section.

Filter at /usr/local/etc/fail2ban/filter.d/nginx-honeypot.conf - trivially simple, because every line in this log is already a hit:

[Definition]
failregex = ^<HOST> -
ignoreregex =
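
Because every line in the honeypot log is a hit, fail2ban-regex against the live log should match everything:

fail2ban-regex /usr/local/bastille/jails/web/root/var/log/nginx/honeypot.log \
    /usr/local/etc/fail2ban/filter.d/nginx-honeypot.conf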

Jail section in jail.d/feral.conf:

[nginx-honeypot]
enabled  = true
port     = http,https
filter   = nginx-honeypot
logpath  = /usr/local/bastille/jails/web/root/var/log/nginx/honeypot.log
maxretry = 1
findtime = 1d
bantime  = 30d
action   = %(action_mw)s

Note the log path: nginx runs inside a bastille jail. The host can read its logs directly at /usr/local/bastille/jails/web/root/var/log/nginx/. No nullfs mount needed - fail2ban runs as root on the host and the path is accessible.

One more gotcha: fail2ban will refuse to load a jail if the log file does not exist yet. Touch it first:

touch /usr/local/bastille/jails/web/root/var/log/nginx/honeypot.log
fail2ban-client reload

The gotcha that almost made the whole thing silent: after all of this was running, pfctl -vvsTables showed only one address blocked while fail2ban reported six active bans. The bans were recorded but not enforced.

fail2ban's pf action writes to per-jail anchors - f2b/nginx-honeypot, containing a table named f2b-nginx-honeypot. For pf to consult those anchors, the main ruleset needs this line before the pass rules:

anchor "f2b/*"

Without it, fail2ban and pf are talking past each other. fail2ban thinks it banned someone. pf has no idea. The scanner walks straight through.

To verify the anchors are live and bans are enforced:

pfctl -a 'f2b/*' -sr                                       # anchor rules (quoted so the shell does not glob)
pfctl -a f2b/nginx-honeypot -t f2b-nginx-honeypot -T show  # banned IPs

The deeper pf.conf issue - rdr and filter rules: My setup redirects public ports into bastille jails. Port 443 on the public IP becomes port 8443 on 10.100.0.100 inside the jail. In FreeBSD pf, rdr pass is a single atomic operation: redirect and bypass all filter rules. This is convenient but means the anchor "f2b/*" never fires for redirected traffic - banned IPs can still reach nginx (where they get 404, harmless enough).

To make bans apply at the firewall level for web traffic, rdr must be written without pass, and the filter section needs explicit rules matching the post-translation addresses - not the original public IP:

# Redirect (no pass — filter rules will apply)
rdr on $ext_if inet proto tcp from any to ($ext_if) port 443 -> 10.100.0.100 port 8443

# Filter: match post-NAT destination, not the original public IP
anchor "f2b/*" in on $ext_if
block in log all
pass in quick on $ext_if inet proto tcp to 10.100.0.100 port 8443 keep state

The critical ordering: anchor "f2b/*" must appear before the pass rule, not after. If the anchor fires first, a banned IP is blocked by the anchor's block rule. If the pass rule fires first (quick), pf stops evaluating and the anchor is never reached.

A confusing detail: pfctl -f preserves existing connection state entries. If you reload rules and immediately test with curl, existing states survive for ~30 seconds. Always test with a fresh connection - or better, from an external device. Testing curl https://yourdomain from the same server bypasses rdr entirely via internal routing and returns 200 regardless of what the filter rules say.
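
Both pitfalls suggest a test routine: kill any lingering state for the address you are checking, then probe from an external machine. The address below is an RFC 5737 placeholder:

pfctl -k 203.0.113.50                  # drop existing states from this source address

# From outside: a banned IP should now time out rather than connect
curl -m 5 https://blog.giersig.eu/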


Does it work?

Status for the jail: nginx-honeypot
|- Filter
|  |- Currently failed:	0
|  |- Total failed:	262
|  `- File list: /usr/local/bastille/jails/web/root/var/log/nginx/honeypot.log
`- Actions
   |- Currently banned:	16
   |- Total banned:	17
   `- Banned IP list: [redacted x16]

262 probe attempts in the first hours, 17 IPs banned - including Google and Apple crawlers that apparently probe WordPress paths on every site they index. They are not reading my posts either.

The superposition is collapsing at a healthy rate.


Expanding the pattern set

After a week of bans, the honeypot log becomes a catalogue of everything bots are looking for. Mining it reveals patterns that were not in the original set.

The access logs across all virtual hosts tell one half of the story. A quick count of 404 paths shows what scanners are probing:

awk '$9 == 404 {print $7}' /var/log/nginx/*access.log | sort | uniq -c | sort -rn | head -60

The honeypot log tells the other half: what is already being caught. Comparing the two reveals the gaps. From this server's logs:

| Pattern | Hits | Caught? |
| --- | --- | --- |
| info.php, phpinfo.php | 64 | No |
| etc/passwd (path traversal) | 48 | No |
| test.php, debug.php, php.php | 70 | No |
| wp_filemanager.php (underscore, not hyphen) | 28 | No |
| _profiler/ (Symfony debug endpoint) | 18 | No |
| .gitlab-ci.yml | 15 | No |
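
The gap-finding can be scripted. A sketch, assuming the combined log format (request path in field 7, status in field 9) and throwaway files in /tmp:

awk '$9 == 404 {print $7}' /var/log/nginx/*access.log | sort -u > /tmp/all404
awk '{print $7}' /var/log/nginx/honeypot.log | sort -u > /tmp/caught
comm -23 /tmp/all404 /tmp/caught | head -40   # 404 paths not yet in the pattern set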

PHP probe files are particularly common: scanners drop phpinfo.php, test.php, info.php to fingerprint the stack. etc/passwd probes arrive as both direct paths and Vite/Nuxt path traversal variants (/@fs/etc/passwd). The Symfony _profiler/ endpoint is a favourite against shops that leave debug mode on in production.

The wp_filemanager pattern is a subtle miss: the original set matched wp-admin and wp-login with hyphens, but WordPress plugin probes often use underscores.

The additions to honeypot.conf:

map $request_uri $is_honeypot {
    default 0;
    ~*/(wp-admin|wp-login|xmlrpc\.php|\.env|\.git/|phpmyadmin|pma|myadmin|admin\.php|setup\.php|install\.php|shell\.php|config\.php) 1;
    ~*\.(asp|aspx|jsp)(\?|$) 1;
    ~*/cgi-bin/ 1;
    ~*/actuator/ 1;
    ~*/solr/ 1;
    ~*/jmx-console/ 1;
    ~*/manager/html 1;
    ~*/(phpinfo|info\.php|test\.php|debug\.php|php\.php) 1;
    ~*/etc/passwd 1;
    ~*/_profiler/ 1;
    ~*/\.gitlab-ci\.yml 1;
    ~*/wp_filemanager 1;
    ~*/recordings/index\.php 1;
}

After reload, the new patterns immediately started catching hits that had been slipping through - roughly 200 in accumulated logs that would have gone undetected.
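
One way to reproduce that rough count: replay the new patterns over the accumulated access logs (a grep over raw lines, not an exact re-run of the map):

grep -Ei '/(phpinfo|info\.php|test\.php|debug\.php|php\.php)|/etc/passwd|/_profiler/|/\.gitlab-ci\.yml|wp_filemanager' \
    /var/log/nginx/*access.log | wc -l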

The pattern set is never finished. Bots evolve, new CVEs surface, and the log keeps accumulating evidence. The mining step is worth repeating every few weeks.


FreeBSD bonus: fix the sshd filter while you are here

While auditing the other fail2ban jails, I found that sshd was reporting zero bans despite 122 failed login attempts in /var/log/auth.log. The default fail2ban sshd filter has _daemon = sshd, which matches log lines from the sshd process. On modern FreeBSD, OpenSSH splits into a session subprocess that logs as sshd-session instead. The filter never matches.

The fix is a one-line local override. Create /usr/local/etc/fail2ban/filter.d/sshd.local:

[Definition]
_daemon = sshd(?:-session)?

Reload with fail2ban-client reload sshd. Verify with fail2ban-regex /var/log/auth.log /usr/local/etc/fail2ban/filter.d/sshd.conf - the config reader merges the .local override automatically - and matched lines should jump from 0 to something real.


Grafana

A burst of bans in the first hours is satisfying, but a number in a terminal is not a dashboard. A quick textfile collector script makes fail2ban visible in Prometheus.

/usr/local/sbin/fail2ban-metrics.sh runs every minute via cron and writes to the node_exporter textfile directory:

#!/bin/sh
# Export fail2ban ban counts as Prometheus metrics for node_exporter's
# textfile collector.
OUTFILE=/var/lib/node_exporter/textfile_collector/fail2ban.prom
TMPFILE="${OUTFILE}.tmp"

printf '# HELP fail2ban_banned_ips Number of currently banned IPs\n' > "$TMPFILE"
printf '# TYPE fail2ban_banned_ips gauge\n' >> "$TMPFILE"
printf '# HELP fail2ban_total_bans Total bans since jail start\n' >> "$TMPFILE"
printf '# TYPE fail2ban_total_bans counter\n' >> "$TMPFILE"

# Walk every configured jail and emit one gauge and one counter per jail.
for jail in $(fail2ban-client status 2>/dev/null | grep 'Jail list' | sed 's/.*Jail list://;s/,/ /g'); do
    current=$(fail2ban-client status "$jail" 2>/dev/null | awk '/Currently banned/{print $NF}')
    total=$(fail2ban-client status "$jail" 2>/dev/null | awk '/Total banned/{print $NF}')
    printf 'fail2ban_banned_ips{jail="%s"} %s\n' "$jail" "${current:-0}" >> "$TMPFILE"
    printf 'fail2ban_total_bans{jail="%s"} %s\n' "$jail" "${total:-0}" >> "$TMPFILE"
done

# Atomic rename so node_exporter never scrapes a half-written file.
mv "$TMPFILE" "$OUTFILE"

Add --collector.textfile.directory=/var/lib/node_exporter/textfile_collector to node_exporter's args and restart. Prometheus picks it up on the next scrape.
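
On FreeBSD that wiring is two commands; node_exporter_args is the rc variable the port's rc script uses (verify against your installed script):

sysrc node_exporter_args="--collector.textfile.directory=/var/lib/node_exporter/textfile_collector"
service node_exporter restart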

All fail2ban jails at once:

fail2ban_banned_ips{job="node"}
  • Visualization: Bar gauge or Table
  • Legend: {{jail}}
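
A bans-per-day panel falls out of the counter side; the 24h window is a choice, not a requirement:

increase(fail2ban_total_bans{job="node"}[24h])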

The scanner arrived expecting to find WordPress. It found a ban instead. The superposition collapsed the wrong way.