Compare commits


10 Commits

Author SHA1 Message Date
Louis Simoneau
79279595ac Keep downloaded EPUBs so kobodl can skip them on future syncs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 21:16:04 +10:00
Louis Simoneau
5197f92685 Fix calibre sync to import books as correct user
Prevents root-owned files in the library volume that
calibre-web can't write to.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 21:11:53 +10:00
Louis Simoneau
bcdc0c6cef Add WireGuard VPN, kobodl, and calibre-web
WireGuard for private service access (kobodl behind VPN).
kobodl downloads and de-DRMs Kobo store purchases.
calibre-web serves the library at books.monotrope.au.
sync.sh script handles ongoing download + import workflow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 20:56:26 +10:00
Louis Simoneau
6a54777c5c Move Hermes config into volume, add pre-deploy sync check
Config.yaml was bind-mounted, blocking runtime writes (/sethome).
Move it into the Docker volume via docker cp instead. Add
hermes-sync Makefile target that diffs remote config against local
before deploying, to catch runtime changes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 17:06:19 +10:00
Louis Simoneau
66b0588f52 Rewrite Miniflux plugin to use requests, add filter and bookmark tools
Drop the miniflux pip client in favour of requests (already in the
container). Add update_feed_filters (keeplist/blocklist regex),
toggle_bookmark, get_entry (full content), and category filtering.
Remove the pip install step from Ansible.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 16:45:46 +10:00
Louis Simoneau
9b83d56932 Fix Hermes plugin config: use config file instead of env vars
Hermes plugins don't inherit container env vars. Switch the Miniflux
plugin to read credentials from a config.json written by Ansible,
and drop requires_env / container env vars.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 16:28:23 +10:00
Louis Simoneau
bbeecde448 Add shared Docker network and Miniflux plugin for Hermes
- Create external 'monotrope' Docker network so services can
  communicate by container name
- Add Miniflux to the shared network (db stays on internal network)
- Add Hermes Miniflux plugin with list_feeds and get_unread_entries tools
- Mount plugin directory and pass Miniflux API key to Hermes container

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 16:16:34 +10:00
Louis Simoneau
3a9e3a7916 Add Hermes agent, self-host fonts, new blog post
- Add Hermes (Nous Research LLM agent) with Telegram gateway,
  Ansible provisioning, and Makefile targets
- Self-host JetBrains Mono and Spectral fonts (remove Google Fonts)
- Add "An Experiment in Self-Hosting" blog post
- Update CLAUDE.md with high-level server overview

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 16:06:48 +10:00
Louis Simoneau
ab050fddd7 Pin image versions, add security headers, log limits, unattended upgrades
- Pin Miniflux to 2.2.19, Gitea to 1.25 (from :latest)
- Add security headers (X-Content-Type-Options, X-Frame-Options,
  Referrer-Policy, Permissions-Policy) to all Caddy sites
- Add Docker JSON log rotation (10m x 3 files) to all containers
- Add SHA256 checksum verification for GoatCounter binary download
- Install and configure unattended-upgrades for security patches

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 08:31:41 +10:00
Louis Simoneau
a9e063867a Harden SSH, add fail2ban, remove redundant setup.sh
Disable password auth, restrict root login, limit auth retries.
Add fail2ban with SSH jail (3 retries, 1hr ban). Remove setup.sh
which predated Ansible and was no longer used.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 08:29:15 +10:00
27 changed files with 970 additions and 104 deletions


@@ -22,6 +22,17 @@ CSS. The design should feel minimal, typographic, and monospaced-first.
DigitalOcean droplet, Sydney region, Ubuntu 24.04 LTS.
### What's on the server
- **Hugo static site** — built locally, rsynced to `/var/www/monotrope`
- **Caddy** — reverse proxy and TLS for all services
- **Miniflux** — RSS reader (Docker, PostgreSQL)
- **Gitea** — self-hosted git server (Docker, PostgreSQL, SSH on port 2222)
- **GoatCounter** — privacy-friendly analytics (native binary, SQLite)
- **Hermes Agent** — Nous Research's LLM agent (`nousresearch/hermes-agent`),
exposed via Telegram bot. Routes through OpenRouter. Used as a personal
assistant reachable from mobile. Docker, config in `infra/hermes/`.
## Conventions
- All shell scripts use `set -euo pipefail`


@@ -1,4 +1,4 @@
-.PHONY: build serve deploy ssh setup miniflux gitea goatcounter enrich
+.PHONY: build serve deploy ssh setup miniflux gitea goatcounter hermes hermes-sync hermes-chat enrich wireguard calibre calibre-sync

# Load .env if it exists
-include .env
@@ -37,5 +37,46 @@ goatcounter:
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	ansible-playbook -i "$(MONOTROPE_HOST)," -u root infra/ansible/playbook.yml --tags goatcounter
hermes: hermes-sync
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	ansible-playbook -i "$(MONOTROPE_HOST)," -u root infra/ansible/playbook.yml --tags hermes

hermes-sync:
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	@echo "Checking for remote config changes..."
	@ssh root@$(MONOTROPE_HOST) docker cp hermes:/opt/data/config.yaml - 2>/dev/null | tar -xO > /tmp/hermes-remote-config.yaml || true
	@if ! diff -q infra/hermes/config.yaml /tmp/hermes-remote-config.yaml >/dev/null 2>&1; then \
		echo ""; \
		echo "Remote config.yaml differs from local:"; \
		echo "─────────────────────────────────────"; \
		diff -u infra/hermes/config.yaml /tmp/hermes-remote-config.yaml || true; \
		echo "─────────────────────────────────────"; \
		echo ""; \
		read -p "Overwrite remote with local? [y/N] " ans; \
		if [ "$$ans" != "y" ] && [ "$$ans" != "Y" ]; then \
			echo "Aborting. Merge remote changes into infra/hermes/config.yaml first."; \
			exit 1; \
		fi; \
	else \
		echo "Config in sync."; \
	fi

hermes-chat:
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	ssh -t root@$(MONOTROPE_HOST) docker exec -it hermes hermes chat

wireguard:
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	ansible-playbook -i "$(MONOTROPE_HOST)," -u root infra/ansible/playbook.yml --tags wireguard

calibre:
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	ansible-playbook -i "$(MONOTROPE_HOST)," -u root infra/ansible/playbook.yml --tags calibre

calibre-sync:
	@test -n "$(MONOTROPE_HOST)" || (echo "Error: MONOTROPE_HOST is not set"; exit 1)
	ssh root@$(MONOTROPE_HOST) /opt/calibre/sync.sh
enrich:
	uv run enrich.py


@@ -2,6 +2,14 @@ monotrope.au {
	root * /var/www/monotrope
	file_server

	# Security headers
	header {
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
	}

	# Compression
	encode zstd gzip
@@ -16,6 +24,8 @@ monotrope.au {
		path *.html / /posts/ /posts/*
	}
	header @html Cache-Control "public, max-age=0, must-revalidate"
}

# Redirect www to apex
@@ -27,6 +37,13 @@ www.monotrope.au {
reader.monotrope.au {
	reverse_proxy localhost:8080

	header {
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
	}

	encode zstd gzip
}
@@ -34,6 +51,26 @@ reader.monotrope.au {
git.monotrope.au {
	reverse_proxy localhost:3000

	header {
		X-Content-Type-Options "nosniff"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
	}

	encode zstd gzip
}

# Calibre-web
books.monotrope.au {
	reverse_proxy localhost:8083

	header {
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
	}

	encode zstd gzip
}
@@ -41,5 +78,12 @@ git.monotrope.au {
stats.monotrope.au {
	reverse_proxy localhost:8081

	header {
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
	}

	encode zstd gzip
}


@@ -14,6 +14,11 @@
    goatcounter_version: "2.7.0"
    goatcounter_admin_email: "{{ lookup('env', 'GOATCOUNTER_ADMIN_EMAIL') }}"
    goatcounter_admin_password: "{{ lookup('env', 'GOATCOUNTER_ADMIN_PASSWORD') }}"
    hermes_openrouter_api_key: "{{ lookup('env', 'HERMES_OPENROUTER_API_KEY') }}"
    hermes_telegram_bot_token: "{{ lookup('env', 'HERMES_TELEGRAM_BOT_TOKEN') }}"
    hermes_telegram_allowed_users: "{{ lookup('env', 'HERMES_TELEGRAM_ALLOWED_USERS') }}"
    hermes_miniflux_api_key: "{{ lookup('env', 'HERMES_MINIFLUX_API_KEY') }}"
    wg_client_pubkey: "{{ lookup('env', 'WG_CLIENT_PUBKEY') }}"

  tasks:
@@ -33,8 +38,35 @@
          - apt-transport-https
          - curl
          - ufw
          - unattended-upgrades
        state: present

    - name: Configure unattended-upgrades
      copy:
        dest: /etc/apt/apt.conf.d/50unattended-upgrades
        owner: root
        group: root
        mode: '0644'
        content: |
          Unattended-Upgrade::Allowed-Origins {
              "${distro_id}:${distro_codename}-security";
              "${distro_id}ESMApps:${distro_codename}-apps-security";
              "${distro_id}ESM:${distro_codename}-infra-security";
          };
          Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
          Unattended-Upgrade::Remove-Unused-Dependencies "true";
          Unattended-Upgrade::Automatic-Reboot "false";

    - name: Enable automatic updates
      copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        owner: root
        group: root
        mode: '0644'
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";

    # ── Caddy ───────────────────────────────────────────────────────────────
    - name: Add Caddy GPG key
@@ -69,6 +101,7 @@
          - miniflux
          - gitea
          - goatcounter
          - calibre

    - name: Enable and start Caddy
      systemd:
@@ -86,6 +119,48 @@
        shell: /usr/sbin/nologin
        state: present

    # ── SSH hardening ───────────────────────────────────────────────────────
    - name: Harden SSH configuration
      copy:
        dest: /etc/ssh/sshd_config.d/hardening.conf
        owner: root
        group: root
        mode: '0644'
        content: |
          PasswordAuthentication no
          PermitRootLogin prohibit-password
          MaxAuthTries 3
      notify: Restart sshd

    # ── Fail2ban ────────────────────────────────────────────────────────────
    - name: Install fail2ban
      apt:
        name: fail2ban
        state: present

    - name: Configure fail2ban SSH jail
      copy:
        dest: /etc/fail2ban/jail.local
        owner: root
        group: root
        mode: '0644'
        content: |
          [sshd]
          enabled = true
          port = ssh
          maxretry = 3
          bantime = 3600
          findtime = 600
      notify: Restart fail2ban

    - name: Enable and start fail2ban
      systemd:
        name: fail2ban
        enabled: true
        state: started

    # ── UFW ─────────────────────────────────────────────────────────────────
    - name: Set UFW default incoming policy to deny
@@ -125,6 +200,75 @@
      ufw:
        state: enabled

    # ── WireGuard ───────────────────────────────────────────────────────────
    - name: Install WireGuard
      apt:
        name: wireguard
        state: present
      tags: wireguard

    - name: Generate WireGuard server private key
      shell: wg genkey > /etc/wireguard/server_privatekey && chmod 600 /etc/wireguard/server_privatekey
      args:
        creates: /etc/wireguard/server_privatekey
      tags: wireguard

    - name: Generate WireGuard server public key
      shell: cat /etc/wireguard/server_privatekey | wg pubkey > /etc/wireguard/server_publickey
      args:
        creates: /etc/wireguard/server_publickey
      tags: wireguard

    - name: Read server private key
      slurp:
        src: /etc/wireguard/server_privatekey
      register: wg_server_privkey
      tags: wireguard

    - name: Read server public key
      slurp:
        src: /etc/wireguard/server_publickey
      register: wg_server_pubkey
      tags: wireguard

    - name: Write WireGuard config
      copy:
        dest: /etc/wireguard/wg0.conf
        owner: root
        group: root
        mode: '0600'
        content: |
          [Interface]
          PrivateKey = {{ wg_server_privkey.content | b64decode | trim }}
          Address = 10.100.0.1/24
          ListenPort = 51820

          [Peer]
          PublicKey = {{ wg_client_pubkey }}
          AllowedIPs = 10.100.0.2/32
      notify: Restart WireGuard
      tags: wireguard

    - name: Allow WireGuard UDP port
      ufw:
        rule: allow
        port: '51820'
        proto: udp
      tags: wireguard

    - name: Enable and start WireGuard
      systemd:
        name: wg-quick@wg0
        enabled: true
        state: started
      tags: wireguard

    - name: Display server public key
      debug:
        msg: "WireGuard server public key: {{ wg_server_pubkey.content | b64decode | trim }}"
      tags: wireguard
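The playbook only writes the server side of the tunnel. For reference, a matching client-side config would look roughly like the sketch below — the endpoint IP and both keys are placeholders, and the client's private key must be the one whose public half is exported as `WG_CLIENT_PUBKEY`:

```ini
[Interface]
# The client's own private key (its public key is what WG_CLIENT_PUBKEY holds)
PrivateKey = <client-private-key>
Address = 10.100.0.2/32

[Peer]
# Server public key, as printed by the "Display server public key" task
PublicKey = <server-public-key>
Endpoint = <droplet-ip>:51820
# Route only the VPN subnet through the tunnel
AllowedIPs = 10.100.0.0/24
```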
    # ── Docker ──────────────────────────────────────────────────────────────
    - name: Create Docker keyring directory
@@ -166,6 +310,15 @@
        enabled: true
        state: started

    - name: Create shared Docker network
      command: docker network create monotrope
      register: docker_net
      changed_when: docker_net.rc == 0
      failed_when: docker_net.rc != 0 and 'already exists' not in docker_net.stderr
      tags:
        - miniflux
        - hermes

    # ── Miniflux ────────────────────────────────────────────────────────────
    - name: Create Miniflux directory
@@ -242,6 +395,164 @@
        chdir: /opt/gitea
      tags: gitea

    # ── Hermes Agent ────────────────────────────────────────────────────────
    - name: Create Hermes directory
      file:
        path: /opt/hermes
        state: directory
        owner: root
        group: root
        mode: '0750'
      tags: hermes

    - name: Copy Hermes docker-compose.yml
      copy:
        src: ../hermes/docker-compose.yml
        dest: /opt/hermes/docker-compose.yml
        owner: root
        group: root
        mode: '0640'
      tags: hermes

    - name: Stage Hermes config.yaml
      copy:
        src: ../hermes/config.yaml
        dest: /opt/hermes/config.yaml
        owner: root
        group: root
        mode: '0640'
      tags: hermes

    - name: Copy config.yaml into Hermes volume
      command: docker cp /opt/hermes/config.yaml hermes:/opt/data/config.yaml
      notify: Restart Hermes
      tags: hermes

    - name: Copy Hermes plugins
      copy:
        src: ../hermes/plugins/
        dest: /opt/hermes/plugins/
        owner: root
        group: root
        mode: '0640'
        directory_mode: '0750'
      notify: Restart Hermes
      tags: hermes

    - name: Write Miniflux plugin config
      copy:
        dest: /opt/hermes/plugins/miniflux/config.json
        owner: root
        group: root
        mode: '0600'
        content: |
          {
            "base_url": "http://miniflux:8080",
            "api_key": "{{ hermes_miniflux_api_key }}"
          }
      no_log: true
      notify: Restart Hermes
      tags: hermes

    - name: Write Hermes .env
      copy:
        dest: /opt/hermes/.env
        owner: root
        group: root
        mode: '0600'
        content: |
          OPENROUTER_API_KEY={{ hermes_openrouter_api_key }}
          TELEGRAM_BOT_TOKEN={{ hermes_telegram_bot_token }}
          TELEGRAM_ALLOWED_USERS={{ hermes_telegram_allowed_users }}
      no_log: true
      tags: hermes

    - name: Pull and start Hermes
      command: docker compose up -d --pull always
      args:
        chdir: /opt/hermes
      tags: hermes

    # ── Calibre (kobodl + calibre-web) ──────────────────────────────────────
    - name: Create Calibre directory
      file:
        path: /opt/calibre
        state: directory
        owner: root
        group: root
        mode: '0750'
      tags: calibre

    - name: Copy Calibre docker-compose.yml
      copy:
        src: ../calibre/docker-compose.yml
        dest: /opt/calibre/docker-compose.yml
        owner: root
        group: root
        mode: '0640'
      tags: calibre

    - name: Pull and start Calibre services
      command: docker compose up -d --pull always
      args:
        chdir: /opt/calibre
      tags: calibre

    - name: Fix downloads volume ownership
      command: >
        docker compose exec -T kobodl
        chown 1000:1000 /downloads
      args:
        chdir: /opt/calibre
      tags: calibre

    - name: Check if Calibre library exists
      command: >
        docker compose exec -T calibre-web
        test -f /library/metadata.db
      args:
        chdir: /opt/calibre
      register: calibre_db_check
      changed_when: false
      failed_when: false
      tags: calibre

    - name: Initialise Calibre library
      command: >
        docker compose exec -T --user abc calibre-web
        calibredb add --empty --with-library /library/
      args:
        chdir: /opt/calibre
      when: calibre_db_check.rc != 0
      tags: calibre

    - name: Install calibre-sync script
      copy:
        dest: /opt/calibre/sync.sh
        owner: root
        group: root
        mode: '0755'
        content: |
          #!/bin/bash
          set -euo pipefail
          cd /opt/calibre
          # Download all books from Kobo
          docker compose exec -T kobodl kobodl --config /home/config/kobodl.json book get --get-all --output-dir /downloads
          # Import any new EPUBs into the Calibre library.
          # Files are kept in /downloads so kobodl can skip them next run.
          docker compose exec -T --user abc calibre-web sh -c '
            for f in /downloads/*.epub; do
              [ -f "$f" ] || continue
              calibredb add "$f" --with-library /library/ || true
            done
          '
      tags: calibre

    # ── GoatCounter ─────────────────────────────────────────────────────────
    - name: Create goatcounter system user
@@ -267,6 +578,7 @@
        url: "https://github.com/arp242/goatcounter/releases/download/v{{ goatcounter_version }}/goatcounter-v{{ goatcounter_version }}-linux-amd64.gz"
        dest: /tmp/goatcounter.gz
        mode: '0644'
        checksum: "sha256:98d221cb9c8ef2bf76d8daa9cca647839f8d8b0bb5bc7400ff9337c5da834511"
      tags: goatcounter

    - name: Decompress GoatCounter binary
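The `checksum:` option makes `get_url` fail the play if the downloaded file's digest doesn't match the pinned value. A minimal Python sketch of the same verification, using an illustrative stand-in file rather than the real GoatCounter binary:

```python
import hashlib
import tempfile

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large downloads aren't loaded into memory at once
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo against a locally written stand-in file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"goatcounter binary stand-in")
    path = f.name

pinned = sha256_of(path)  # in practice this comes from the release page
if sha256_of(path) != pinned:
    raise SystemExit("checksum mismatch: refusing to install")
```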
@@ -381,3 +693,23 @@
      systemd:
        name: goatcounter
        state: restarted

    - name: Restart sshd
      systemd:
        name: ssh
        state: restarted

    - name: Restart fail2ban
      systemd:
        name: fail2ban
        state: restarted

    - name: Restart Hermes
      command: docker compose restart
      args:
        chdir: /opt/hermes

    - name: Restart WireGuard
      systemd:
        name: wg-quick@wg0
        state: restarted


@@ -0,0 +1,50 @@
services:
  kobodl:
    image: ghcr.io/subdavis/kobodl
    restart: unless-stopped
    user: "1000:1000"
    command: --config /home/config/kobodl.json serve --host 0.0.0.0 --output-dir /downloads
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    ports:
      - "10.100.0.1:5100:5000"
    volumes:
      - kobodl_config:/home/config
      - downloads:/downloads
    networks:
      - default
      - monotrope

  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    ports:
      - "127.0.0.1:8083:8083"
    volumes:
      - calibre_config:/config
      - library:/library
      - downloads:/downloads
    environment:
      PUID: "1000"
      PGID: "1000"
      TZ: "Australia/Sydney"
      DOCKER_MODS: "linuxserver/mods:universal-calibre"

networks:
  default:
  monotrope:
    external: true

volumes:
  kobodl_config:
  calibre_config:
  library:
  downloads:


@@ -1,7 +1,12 @@
services:
  gitea:
-    image: gitea/gitea:latest
+    image: gitea/gitea:1.25
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    depends_on:
      db:
        condition: service_healthy
@@ -26,6 +31,11 @@ services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    volumes:
      - gitea_db:/var/lib/postgresql/data
    environment:

infra/hermes/config.yaml (new file)

@@ -0,0 +1,9 @@
model:
  provider: openrouter
  default: openrouter/auto

memory:
  memory_enabled: true
  user_profile_enabled: true

agent:
  max_turns: 70

TELEGRAM_HOME_CHANNEL: '8455090116'


@@ -0,0 +1,29 @@
services:
  hermes:
    image: nousresearch/hermes-agent:latest
    container_name: hermes
    restart: unless-stopped
    command: gateway run
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - monotrope
    volumes:
      - hermes_data:/opt/data
      - ./plugins:/opt/data/plugins:ro
    environment:
      OPENROUTER_API_KEY: "${OPENROUTER_API_KEY}"
      TELEGRAM_BOT_TOKEN: "${TELEGRAM_BOT_TOKEN}"
      TELEGRAM_ALLOWED_USERS: "${TELEGRAM_ALLOWED_USERS}"
    env_file:
      - .env

networks:
  monotrope:
    external: true

volumes:
  hermes_data:


@@ -0,0 +1,40 @@
from . import schemas, tools


def register(ctx):
    ctx.register_tool(
        name="list_feeds",
        toolset="miniflux",
        schema=schemas.LIST_FEEDS,
        handler=tools.list_feeds,
    )
    ctx.register_tool(
        name="get_unread_entries",
        toolset="miniflux",
        schema=schemas.GET_UNREAD_ENTRIES,
        handler=tools.get_unread_entries,
    )
    ctx.register_tool(
        name="get_entry",
        toolset="miniflux",
        schema=schemas.GET_ENTRY,
        handler=tools.get_entry,
    )
    ctx.register_tool(
        name="toggle_bookmark",
        toolset="miniflux",
        schema=schemas.TOGGLE_BOOKMARK,
        handler=tools.toggle_bookmark,
    )
    ctx.register_tool(
        name="update_feed_filters",
        toolset="miniflux",
        schema=schemas.UPDATE_FEED_FILTERS,
        handler=tools.update_feed_filters,
    )
    ctx.register_tool(
        name="mark_as_read",
        toolset="miniflux",
        schema=schemas.MARK_AS_READ,
        handler=tools.mark_as_read,
    )


@@ -0,0 +1,10 @@
name: miniflux
version: 2.0.0
description: Read and manage feeds and entries from the local Miniflux RSS reader
provides_tools:
- list_feeds
- get_unread_entries
- get_entry
- toggle_bookmark
- update_feed_filters
- mark_as_read


@@ -0,0 +1,116 @@
LIST_FEEDS = {
    "name": "list_feeds",
    "description": (
        "List all subscribed RSS feeds from Miniflux. "
        "Returns feed titles, URLs, and unread counts."
    ),
    "parameters": {
        "type": "object",
        "properties": {},
        "required": [],
    },
}

GET_UNREAD_ENTRIES = {
    "name": "get_unread_entries",
    "description": (
        "Get unread entries from Miniflux. "
        "Optionally filter by feed ID and limit the number of results."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "feed_id": {
                "type": "integer",
                "description": "Filter to a specific feed. Omit for all feeds.",
            },
            "category_id": {
                "type": "integer",
                "description": "Filter to a specific category. Omit for all categories.",
            },
            "limit": {
                "type": "integer",
                "description": "Maximum number of entries to return. Defaults to 20.",
            },
        },
        "required": [],
    },
}

GET_ENTRY = {
    "name": "get_entry",
    "description": (
        "Get a single entry from Miniflux by ID, including its full content. "
        "Use this to read an article's text."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "entry_id": {
                "type": "integer",
                "description": "The entry ID to retrieve.",
            },
        },
        "required": ["entry_id"],
    },
}

TOGGLE_BOOKMARK = {
    "name": "toggle_bookmark",
    "description": "Toggle the bookmark/star status of a Miniflux entry.",
    "parameters": {
        "type": "object",
        "properties": {
            "entry_id": {
                "type": "integer",
                "description": "The entry ID to bookmark or unbookmark.",
            },
        },
        "required": ["entry_id"],
    },
}

UPDATE_FEED_FILTERS = {
    "name": "update_feed_filters",
    "description": (
        "Update the keep or block filter rules on a Miniflux feed. "
        "Rules are case-insensitive regexes matched against entry titles and URLs. "
        "keeplist_rules: only entries matching are kept. "
        "blocklist_rules: entries matching are excluded. "
        "Pass an empty string to clear a rule."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "feed_id": {
                "type": "integer",
                "description": "The feed ID to update.",
            },
            "keeplist_rules": {
                "type": "string",
                "description": "Regex pattern. Only matching entries are kept. Omit to leave unchanged.",
            },
            "blocklist_rules": {
                "type": "string",
                "description": "Regex pattern. Matching entries are excluded. Omit to leave unchanged.",
            },
        },
        "required": ["feed_id"],
    },
}

MARK_AS_READ = {
    "name": "mark_as_read",
    "description": "Mark one or more Miniflux entries as read.",
    "parameters": {
        "type": "object",
        "properties": {
            "entry_ids": {
                "type": "array",
                "items": {"type": "integer"},
                "description": "List of entry IDs to mark as read.",
            },
        },
        "required": ["entry_ids"],
    },
}


@@ -0,0 +1,144 @@
import json
from pathlib import Path

import requests

_PLUGIN_DIR = Path(__file__).parent

with open(_PLUGIN_DIR / "config.json") as _f:
    _CONFIG = json.loads(_f.read())

_BASE = _CONFIG.get("base_url", "http://miniflux:8080").rstrip("/")
_HEADERS = {"X-Auth-Token": _CONFIG.get("api_key", "")}


def _get(path, **params):
    resp = requests.get(f"{_BASE}/v1{path}", headers=_HEADERS, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()


def _put(path, body):
    resp = requests.put(f"{_BASE}/v1{path}", headers=_HEADERS, json=body, timeout=10)
    resp.raise_for_status()
    return resp


def list_feeds(args: dict, **kwargs) -> str:
    try:
        feeds = _get("/feeds")
        counters = _get("/feeds/counters")
        unreads = counters.get("unreads", {})
        result = []
        for f in feeds:
            result.append({
                "id": f["id"],
                "title": f["title"],
                "site_url": f.get("site_url", ""),
                "category": f.get("category", {}).get("title", ""),
                "unread": unreads.get(str(f["id"]), 0),
            })
        result.sort(key=lambda x: x["unread"], reverse=True)
        return json.dumps({"feeds": result, "total": len(result)})
    except Exception as e:
        return json.dumps({"error": str(e)})


def get_unread_entries(args: dict, **kwargs) -> str:
    try:
        params = {
            "status": "unread",
            "limit": args.get("limit", 20),
            "direction": "desc",
            "order": "published_at",
        }
        if args.get("feed_id"):
            path = f"/feeds/{args['feed_id']}/entries"
        elif args.get("category_id"):
            path = f"/categories/{args['category_id']}/entries"
        else:
            path = "/entries"
        data = _get(path, **params)
        entries = []
        for e in data.get("entries", []):
            entries.append({
                "id": e["id"],
                "title": e["title"],
                "url": e.get("url", ""),
                "feed": e.get("feed", {}).get("title", ""),
                "category": e.get("feed", {}).get("category", {}).get("title", ""),
                "author": e.get("author", ""),
                "published_at": e.get("published_at", ""),
                "reading_time": e.get("reading_time", 0),
            })
        return json.dumps({
            "entries": entries,
            "total": data.get("total", len(entries)),
        })
    except Exception as e:
        return json.dumps({"error": str(e)})


def get_entry(args: dict, **kwargs) -> str:
    try:
        entry = _get(f"/entries/{args['entry_id']}")
        return json.dumps({
            "id": entry["id"],
            "title": entry["title"],
            "url": entry.get("url", ""),
            "author": entry.get("author", ""),
            "feed": entry.get("feed", {}).get("title", ""),
            "category": entry.get("feed", {}).get("category", {}).get("title", ""),
            "published_at": entry.get("published_at", ""),
            "reading_time": entry.get("reading_time", 0),
            "content": entry.get("content", ""),
        })
    except Exception as e:
        return json.dumps({"error": str(e)})


def toggle_bookmark(args: dict, **kwargs) -> str:
    try:
        _put(f"/entries/{args['entry_id']}/bookmark", {})
        return json.dumps({"ok": True, "entry_id": args["entry_id"]})
    except Exception as e:
        return json.dumps({"error": str(e)})


def update_feed_filters(args: dict, **kwargs) -> str:
    try:
        feed_id = args["feed_id"]
        body = {}
        if "keeplist_rules" in args:
            body["keeplist_rules"] = args["keeplist_rules"]
        if "blocklist_rules" in args:
            body["blocklist_rules"] = args["blocklist_rules"]
        if not body:
            return json.dumps({"error": "Provide keeplist_rules and/or blocklist_rules"})
        resp = requests.put(
            f"{_BASE}/v1/feeds/{feed_id}",
            headers=_HEADERS, json=body, timeout=10,
        )
        resp.raise_for_status()
        feed = resp.json()
        return json.dumps({
            "ok": True,
            "feed_id": feed["id"],
            "title": feed["title"],
            "keeplist_rules": feed.get("keeplist_rules", ""),
            "blocklist_rules": feed.get("blocklist_rules", ""),
        })
    except Exception as e:
        return json.dumps({"error": str(e)})


def mark_as_read(args: dict, **kwargs) -> str:
    try:
        entry_ids = args.get("entry_ids", [])
        if not entry_ids:
            return json.dumps({"error": "No entry_ids provided"})
        _put("/entries", {"entry_ids": entry_ids, "status": "read"})
        return json.dumps({"ok": True, "marked_read": entry_ids})
    except Exception as e:
        return json.dumps({"error": str(e)})


@@ -1,12 +1,20 @@
services:
  miniflux:
-    image: miniflux/miniflux:latest
+    image: miniflux/miniflux:2.2.19
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    depends_on:
      db:
        condition: service_healthy
    ports:
      - "127.0.0.1:8080:8080"
    networks:
      - default
      - monotrope
    environment:
      DATABASE_URL: "postgres://miniflux:${MINIFLUX_DB_PASSWORD}@db/miniflux?sslmode=disable"
      RUN_MIGRATIONS: "1"
@@ -20,6 +28,11 @@ services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    volumes:
      - miniflux_db:/var/lib/postgresql/data
    environment:
@@ -34,5 +47,10 @@ services:
        timeout: 5s
        retries: 5

networks:
  default:
  monotrope:
    external: true

volumes:
  miniflux_db:


@@ -1,97 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# setup.sh — Provision a fresh Ubuntu 24.04 droplet for monotrope.au
# Run as root via: ssh root@<DROPLET_IP> 'bash -s' < infra/setup.sh
DEPLOY_USER="deploy"
SITE_DIR="/var/www/monotrope"
DEPLOY_PUBKEY="${DEPLOY_PUBKEY:-}" # Set this env var before running, or edit below
echo "==> Updating packages"
apt-get update -y
apt-get upgrade -y
# ── Caddy ─────────────────────────────────────────────────────────────────
echo "==> Installing Caddy"
apt-get install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
| gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
| tee /etc/apt/sources.list.d/caddy-stable.list
apt-get update -y
apt-get install -y caddy
# ── Site directory ─────────────────────────────────────────────────────────
echo "==> Creating www user and site directory"
id -u www &>/dev/null || useradd --system --no-create-home --shell /usr/sbin/nologin www
mkdir -p "$SITE_DIR"
chown www:www "$SITE_DIR"
chmod 755 "$SITE_DIR"
# ── Caddyfile ──────────────────────────────────────────────────────────────
echo "==> Installing Caddyfile"
cp "$(dirname "$0")/Caddyfile" /etc/caddy/Caddyfile
chown root:caddy /etc/caddy/Caddyfile
chmod 640 /etc/caddy/Caddyfile
systemctl enable caddy
systemctl restart caddy
# ── UFW ────────────────────────────────────────────────────────────────────
echo "==> Configuring UFW"
apt-get install -y ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow http
ufw allow https
ufw --force enable
# ── Docker ────────────────────────────────────────────────────────────────
echo "==> Installing Docker"
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| tee /etc/apt/sources.list.d/docker.list
apt-get update -y
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable docker
# ── Deploy user ───────────────────────────────────────────────────────────
echo "==> Creating deploy user"
id -u "$DEPLOY_USER" &>/dev/null || useradd --create-home --shell /bin/bash "$DEPLOY_USER"
# Give deploy user write access to the site directory
chown -R "$DEPLOY_USER":www "$SITE_DIR"
chmod 775 "$SITE_DIR"
# Set up SSH key auth
DEPLOY_HOME="/home/$DEPLOY_USER"
mkdir -p "$DEPLOY_HOME/.ssh"
chmod 700 "$DEPLOY_HOME/.ssh"
touch "$DEPLOY_HOME/.ssh/authorized_keys"
chmod 600 "$DEPLOY_HOME/.ssh/authorized_keys"
chown -R "$DEPLOY_USER":"$DEPLOY_USER" "$DEPLOY_HOME/.ssh"
if [[ -n "$DEPLOY_PUBKEY" ]]; then
echo "$DEPLOY_PUBKEY" >> "$DEPLOY_HOME/.ssh/authorized_keys"
echo "==> Deploy public key installed"
else
echo "WARNING: DEPLOY_PUBKEY not set. Add your public key to $DEPLOY_HOME/.ssh/authorized_keys manually."
fi
echo ""
echo "==> Done. Checklist:"
echo " - Point DNS A records for monotrope.au and www.monotrope.au to this server's IP"
echo " - If DEPLOY_PUBKEY was not set, add your key to $DEPLOY_HOME/.ssh/authorized_keys"
echo " - Run 'make deploy' from your local machine to push the site"
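# Optional smoke test, a sketch beyond the original provisioning steps: Caddy
# should be listening on port 80 by now, but it may 404 or redirect until the
# site is deployed, so treat a failure here as a warning rather than an error.
if curl -s -o /dev/null http://localhost/; then
  echo "==> Caddy is answering on port 80"
else
  echo "WARNING: nothing answered on port 80; check 'systemctl status caddy'"
fi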


@@ -0,0 +1,19 @@
---
title: "An Experiment in Self-Hosting"
date: 2026-04-10T00:00:00+10:00
draft: false
---
One of the things I wanted to do with this site was to see how much tooling I could self-host on a small VPS, particularly given the acceleration afforded by AI coding through Claude Code.
So far I have:
* A static site built with [Hugo](https://gohugo.io/)
* The [Caddy](https://caddyserver.com/) web server
* A self-hosted feed reader, [Miniflux](https://miniflux.app/)
* Analytics with [GoatCounter](https://www.goatcounter.com/)
* A Git server, [Gitea](https://about.gitea.com/) (you can check out the source for the whole project, inception-style, at [git.monotrope.au/louis/monotrope](https://git.monotrope.au/louis/monotrope))
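To give a sense of how little glue holds this together, fronting a stack like this with Caddy takes only a few lines per service. This is a hypothetical sketch, not the actual config (which lives in the repo); the hostnames and the Miniflux port are assumptions:

```
monotrope.au {
    root * /var/www/monotrope
    file_server
}

# Assumed subdomain and port; Caddy fetches TLS certificates automatically.
miniflux.monotrope.au {
    reverse_proxy localhost:8080
}
```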
The only external dependency for the whole setup is the server itself (a DigitalOcean droplet), and I'm sure I'll come up with more tools I can add to the server over time.
All of this took, I'd estimate, less than four hours to set up and deploy. Previously, I think even the small amount of effort required to deploy a static blog would have pushed me towards free platforms like Medium or GitHub Pages. This mode of production has the potential to be a great thing for the Web if more tinkerers can build and host their own stuff instead of relying on centralised platforms where you are the product.


@@ -6,9 +6,8 @@
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <title>{{ if not .IsHome }}{{ .Title }} · {{ end }}{{ .Site.Title }}</title>
 <meta name="description" content="{{ with .Description }}{{ . }}{{ else }}{{ .Site.Params.description }}{{ end }}">
-<link rel="preconnect" href="https://fonts.googleapis.com">
-<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
-<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:ital,wght@0,300;0,400;0,500;1,400&family=Spectral:ital,wght@0,400;0,600;1,400&display=swap" rel="stylesheet">
+<link rel="preload" href="/fonts/jetbrains-mono-latin.woff2" as="font" type="font/woff2" crossorigin>
+<link rel="preload" href="/fonts/spectral-400-latin.woff2" as="font" type="font/woff2" crossorigin>
 <link rel="icon" href="/favicon.svg" type="image/svg+xml">
 <link rel="stylesheet" href="/css/main.css">
 {{ range .AlternativeOutputFormats -}}


@@ -1,5 +1,96 @@
 /* ── Fonts ─────────────────────────────────────── */
-@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:ital,wght@0,300;0,400;0,500;1,400&family=Spectral:ital,wght@0,400;0,600;1,400&display=swap');
+/* JetBrains Mono — latin-ext */
+@font-face {
+  font-family: 'JetBrains Mono';
+  font-style: normal;
+  font-weight: 300 500;
+  font-display: swap;
+  src: url('/fonts/jetbrains-mono-latin-ext.woff2') format('woff2');
+  unicode-range: U+0100-02BA, U+02BD-02C5, U+02C7-02CC, U+02CE-02D7, U+02DD-02FF, U+0304, U+0308, U+0329, U+1D00-1DBF, U+1E00-1E9F, U+1EF2-1EFF, U+2020, U+20A0-20AB, U+20AD-20C0, U+2113, U+2C60-2C7F, U+A720-A7FF;
+}
+/* JetBrains Mono — latin */
+@font-face {
+  font-family: 'JetBrains Mono';
+  font-style: normal;
+  font-weight: 300 500;
+  font-display: swap;
+  src: url('/fonts/jetbrains-mono-latin.woff2') format('woff2');
+  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+0304, U+0308, U+0329, U+2000-206F, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
+}
+/* JetBrains Mono italic — latin-ext */
+@font-face {
+  font-family: 'JetBrains Mono';
+  font-style: italic;
+  font-weight: 400;
+  font-display: swap;
+  src: url('/fonts/jetbrains-mono-italic-400-latin-ext.woff2') format('woff2');
+  unicode-range: U+0100-02BA, U+02BD-02C5, U+02C7-02CC, U+02CE-02D7, U+02DD-02FF, U+0304, U+0308, U+0329, U+1D00-1DBF, U+1E00-1E9F, U+1EF2-1EFF, U+2020, U+20A0-20AB, U+20AD-20C0, U+2113, U+2C60-2C7F, U+A720-A7FF;
+}
+/* JetBrains Mono italic — latin */
+@font-face {
+  font-family: 'JetBrains Mono';
+  font-style: italic;
+  font-weight: 400;
+  font-display: swap;
+  src: url('/fonts/jetbrains-mono-italic-400-latin.woff2') format('woff2');
+  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+0304, U+0308, U+0329, U+2000-206F, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
+}
+/* Spectral — latin-ext */
+@font-face {
+  font-family: 'Spectral';
+  font-style: normal;
+  font-weight: 400;
+  font-display: swap;
+  src: url('/fonts/spectral-400-latin-ext.woff2') format('woff2');
+  unicode-range: U+0100-02BA, U+02BD-02C5, U+02C7-02CC, U+02CE-02D7, U+02DD-02FF, U+0304, U+0308, U+0329, U+1D00-1DBF, U+1E00-1E9F, U+1EF2-1EFF, U+2020, U+20A0-20AB, U+20AD-20C0, U+2113, U+2C60-2C7F, U+A720-A7FF;
+}
+/* Spectral — latin */
+@font-face {
+  font-family: 'Spectral';
+  font-style: normal;
+  font-weight: 400;
+  font-display: swap;
+  src: url('/fonts/spectral-400-latin.woff2') format('woff2');
+  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+0304, U+0308, U+0329, U+2000-206F, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
+}
+/* Spectral 600 — latin-ext */
+@font-face {
+  font-family: 'Spectral';
+  font-style: normal;
+  font-weight: 600;
+  font-display: swap;
+  src: url('/fonts/spectral-600-latin-ext.woff2') format('woff2');
+  unicode-range: U+0100-02BA, U+02BD-02C5, U+02C7-02CC, U+02CE-02D7, U+02DD-02FF, U+0304, U+0308, U+0329, U+1D00-1DBF, U+1E00-1E9F, U+1EF2-1EFF, U+2020, U+20A0-20AB, U+20AD-20C0, U+2113, U+2C60-2C7F, U+A720-A7FF;
+}
+/* Spectral 600 — latin */
+@font-face {
+  font-family: 'Spectral';
+  font-style: normal;
+  font-weight: 600;
+  font-display: swap;
+  src: url('/fonts/spectral-600-latin.woff2') format('woff2');
+  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+0304, U+0308, U+0329, U+2000-206F, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
+}
+/* Spectral italic — latin-ext */
+@font-face {
+  font-family: 'Spectral';
+  font-style: italic;
+  font-weight: 400;
+  font-display: swap;
+  src: url('/fonts/spectral-italic-400-latin-ext.woff2') format('woff2');
+  unicode-range: U+0100-02BA, U+02BD-02C5, U+02C7-02CC, U+02CE-02D7, U+02DD-02FF, U+0304, U+0308, U+0329, U+1D00-1DBF, U+1E00-1E9F, U+1EF2-1EFF, U+2020, U+20A0-20AB, U+20AD-20C0, U+2113, U+2C60-2C7F, U+A720-A7FF;
+}
+/* Spectral italic — latin */
+@font-face {
+  font-family: 'Spectral';
+  font-style: italic;
+  font-weight: 400;
+  font-display: swap;
+  src: url('/fonts/spectral-italic-400-latin.woff2') format('woff2');
+  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+0304, U+0308, U+0329, U+2000-206F, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
+}
 /* ── Reset ─────────────────────────────────────── */
 *, *::before, *::after {

Binary files not shown (8 newly added files: the self-hosted .woff2 fonts referenced above).