Dave's Blog

Privacy Policy

Dave Madelin Personal Automation Tools

This site hosts a small set of personal tools used by Dave Madelin to connect to Google services such as Gmail, Google Calendar, and Google Drive through Google OAuth.

The app is used to automate personal workflows, read and organise account data when explicitly authorised, and perform actions requested by the signed-in user.

It is not a public consumer service and is intended for personal use only.

Google user data is only used to provide the requested functionality. It is not sold, shared with advertisers, or used for unrelated purposes.

Privacy Policy

Last updated: 13 May 2026

Website: https://dave.madel.in/

Dave’s Blog is a personal website operated by David Madelin.

This Privacy Policy explains what information Dave’s Blog collects, how it is used, and how you can contact us about your data.

Information we collect

Dave’s Blog may collect the following information when you use the website:

Information you provide directly

If you contact us, leave a comment, sign in, or use any interactive feature, we may collect information such as:

your name; your email address; any message or content you choose to submit.

Information collected through Google Sign-In

If you choose to sign in using Google, Dave’s Blog may request access to basic Google account information, such as:

your name; your email address; your Google account profile picture; your Google account identifier.

Dave’s Blog uses this information only to authenticate you, create or manage your account/session, identify you when using the website, and provide any features that require you to be signed in.

Dave’s Blog does not request access to your Google password.

How we use your information

We use the information we collect to:

provide and operate the website; allow you to sign in using Google; manage user sessions and account access; respond to messages or enquiries; prevent abuse, spam, fraud, or unauthorised access; maintain the security and reliability of the website; improve the website and understand how it is used.

Use of Google user data

Dave’s Blog’s use of information received from Google APIs will comply with the Google API Services User Data Policy, including the Limited Use requirements where applicable. Google requires apps using OAuth to accurately disclose how Google user data is accessed, used, stored, or shared.

Dave’s Blog will only use Google user data for the purposes described in this Privacy Policy.

Dave’s Blog will not:

sell Google user data; use Google user data for advertising purposes; transfer Google user data to third parties except where necessary to provide or secure the website, comply with the law, or with your explicit consent; use Google user data to build user profiles for unrelated purposes.

Cookies and similar technologies

Dave’s Blog may use cookies or similar technologies to:

keep you signed in; remember your preferences; protect the website from abuse; understand basic website usage.

You can control cookies through your browser settings, although disabling cookies may prevent some parts of the website from working correctly.

Analytics and logs

Like most websites, Dave’s Blog may collect basic technical information automatically, such as:

IP address; browser type; device type; pages visited; date and time of access; referring website; server logs and error logs.

This information is used for security, troubleshooting, analytics, and improving the website.

How we share information

Dave’s Blog does not sell your personal information.

We may share limited information with service providers who help operate the website, such as hosting providers, analytics providers, email providers, security services, or authentication providers. These providers are only allowed to use the information as needed to provide their services.

We may also disclose information if required to do so by law, regulation, legal process, or to protect the rights, safety, or security of Dave’s Blog, its users, or others.

Data retention

Dave’s Blog keeps personal information only for as long as necessary for the purposes described in this Privacy Policy, unless a longer retention period is required by law.

Google Sign-In data is retained only for as long as needed to provide account access or signed-in functionality. If you ask us to delete your account or associated data, we will delete or anonymise it unless we need to keep certain information for legal, security, or legitimate operational reasons.

Data deletion

You can request deletion of your personal information by contacting us using the details below.

You can also manage or revoke Dave’s Blog’s access to your Google account through your Google Account permissions page.

Security

We take reasonable technical and organisational measures to protect personal information against unauthorised access, loss, misuse, alteration, or disclosure.

However, no website or internet transmission is completely secure, so we cannot guarantee absolute security.

Children’s privacy

Dave’s Blog is not directed at children under 13, and we do not knowingly collect personal information from children under 13. If you believe a child has provided personal information, please contact us so we can delete it.

Your rights

Depending on where you live, you may have rights over your personal information, including the right to:

access the personal information we hold about you; request correction of inaccurate information; request deletion of your information; object to or restrict certain processing; withdraw consent where processing is based on consent.

To exercise these rights, please contact us using the details below.

Changes to this Privacy Policy

We may update this Privacy Policy from time to time. When we do, we will update the “Last updated” date at the top of this page.

Contact

For questions about this Privacy Policy or your personal information, contact:

David Madelin
Email: [email protected]

Google's Antigravity and WSL

If, like me, you use WSL2 for your dev work on Windows and launch VS Code from WSL with code ., you probably expect Antigravity to behave the same way. It doesn't. Here's the hacky fix (update the WIN_AGY line to match your Windows profile):

mkdir -p ~/.local/bin

cat > ~/.local/bin/agy <<'SH'
#!/usr/bin/env bash
set -euo pipefail

WIN_AGY="/mnt/c/Users/[YOUR PROFILE HERE]/AppData/Local/Programs/Antigravity/bin/antigravity"

# Default: open current folder
if [ $# -eq 0 ]; then
  set -- .
fi

# Launch Antigravity (Windows binary) but pass WSL paths through.
# Antigravity/VS Code launchers understand WSL paths and will open the folder.
exec "$WIN_AGY" "$@"
SH

chmod +x ~/.local/bin/agy

Make sure ~/.local/bin is on your PATH (most distros do this automatically). If not:

echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
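As an aside, the set -- . line in the script above is the bit that makes a bare agy open the current folder. Here's a standalone illustration of that bash idiom (the open_target function is hypothetical, just for demonstration):

```shell
#!/usr/bin/env bash
# Illustration of the "set -- ." idiom used in the agy script:
# when no arguments are given, set -- . replaces the empty argument
# list with a single ".", i.e. the current directory.
open_target() {
  if [ $# -eq 0 ]; then
    set -- .          # no args given: default to the current folder
  fi
  printf '%s\n' "$1"  # agy passes "$@" to the Windows launcher instead
}

open_target        # prints "."
open_target /tmp   # prints "/tmp"
```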

While agy . now works, it doesn't launch Antigravity in WSL2 mode; it just opens the current folder via the Windows network path. To fix that we need to change a few things. First, open this file in Windows:

C:\Users\[YOUR PROFILE]\AppData\Local\Programs\Antigravity\bin\antigravity

Find this line: WSL_EXT_ID="ms-vscode-remote.remote-wsl"

Change to: WSL_EXT_ID="google.antigravity-remote-wsl"

Now, when you try running agy . it will fail and say it can't find wslCode.sh.

To fix that you need to copy the files in: C:\Users\[YOUR PROFILE]\.vscode\extensions\ms-vscode-remote.remote-wsl-*\scripts\

To: C:\Users\[YOUR PROFILE]\AppData\Local\Programs\Antigravity\resources\app\extensions\antigravity-remote-wsl\scripts\

Note: you have to create the scripts folder yourself; it won't be there by default.

After that, agy . should behave like code ..

I use both Amazon AWS and Cloudflare: all my compute is on Amazon, and I just use Cloudflare for DNS and tunnelling to my K3s cluster at home.

Cloudflare has been growing its compute infrastructure over the last few years, and I'm curious what it can offer me as a PHP / Symfony developer.

Here's my simple guide to getting a POC running on Cloudflare.

Prerequisites: a paid Cloudflare Workers plan ($5 a month).

npm i -D wrangler@latest

npm create cloudflare@latest -- --template=cloudflare/templates/containers-template

It will ask you to name your project, I went with 'symfony-worker-poc'.

I chose to deploy the app straight away, but I guess you can do this later if you want.

cd symfony-worker-poc

(I've assumed you already have the Symfony CLI installed; if not, do that first.)

symfony new symfonyapp --version="7.3.x" --webapp
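At this point the project layout should look roughly like this (a sketch based on the names used above; the Dockerfile and index.ts come from the containers template). The Dockerfile edits below copy from the symfonyapp/ subfolder, so the nesting matters:

```
symfony-worker-poc/
├── Dockerfile          (from the containers template; edited below)
├── index.ts            (the Worker entry point, mentioned later)
├── wrangler config     (from the template)
└── symfonyapp/         (created by symfony new)
    ├── bin/  config/  public/  src/  templates/
    └── composer.json, composer.lock
```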

Now edit your Dockerfile:

# ------------------------------------------
# Stage 1: build vendor/ with Composer only
# ------------------------------------------
FROM composer:latest AS vendor
WORKDIR /app

# Copy only dependency manifests first for layer caching
COPY symfonyapp/composer.json symfonyapp/composer.lock* /app/

# Install prod deps (skip scripts; bin/console not present here)
# BuildKit cache (optional) speeds rebuilds if you enable DOCKER_BUILDKIT=1
RUN --mount=type=cache,target=/tmp/composer-cache \
    composer install \
      --no-dev --no-interaction --prefer-dist --no-progress --no-scripts \
      --no-ansi \
      --working-dir=/app \
      --no-cache

# (No need to copy the whole app in this stage; we'll do it in final)

# ------------------------------------------
# Stage 2: final runtime (FrankenPHP)
# ------------------------------------------
FROM dunglas/frankenphp:1.9.1-php8.3

# Keep your extensions
RUN install-php-extensions intl opcache

ENV APP_ENV=prod \
    PHP_OPCACHE_ENABLE=1 \
    PHP_OPCACHE_VALIDATE_TIMESTAMPS=0 \
    SERVER_NAME=":8080"

WORKDIR /app

# Bring only vendor from builder
COPY --from=vendor /app/vendor /app/vendor
COPY --from=vendor /app/composer.json /app/composer.json
COPY --from=vendor /app/composer.lock /app/composer.lock

# Copy only what's needed at runtime (avoid tests, docs, node_modules, etc.)
# Adjust if your app uses additional dirs (e.g., translations/)
COPY symfonyapp/bin/        /app/bin/
COPY symfonyapp/config/     /app/config/
COPY symfonyapp/public/     /app/public/
COPY symfonyapp/src/        /app/src/
COPY symfonyapp/templates/  /app/templates/
COPY symfonyapp/.env*       /app/
# COPY symfonyapp/translations/ /app/translations/

# Minimal Caddyfile for FrankenPHP
RUN printf ':8080 {\n  root * /app/public\n  encode zstd gzip\n  php_server\n  try_files {path} /index.php\n}\n' > /etc/caddy/CaddyFILE

# Warm cache now that bin/console exists (safe to ignore failures for POC)
RUN php bin/console cache:clear --env=prod || true

CMD ["frankenphp", "run", "--config", "/etc/caddy/CaddyFILE"]

After that you should be able to run: npx wrangler deploy

At the end you'll get a URL you can hit, something like: https://symfony-worker-poc.madelin.workers.dev/

Just be aware you don't yet have a controller, so it will show the Symfony error page.

I've noticed the cold start is quite long, around a second, and a bigger app will take longer.

You have some control over this: it's the sleepAfter value in the index.ts file. It can be “0s” so it never sleeps; however, that could get expensive. Other acceptable values are “30s”, “60s” and “120s”.

You could also ping it periodically to keep it alive.
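For example, a crontab entry along these lines (a sketch; the URL is the example one from above) would hit the worker every five minutes so the container never reaches sleepAfter:

```
*/5 * * * * curl -fsS -o /dev/null https://symfony-worker-poc.madelin.workers.dev/
```

Bear in mind this trades the cold start for near-continuous running time, so it carries the same cost caveat as “0s”.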

In the next instalment we'll look at how we handle classic SaaS stuff like uploads, databases and all that jazz, keeping it as Cloudflare-native as possible.

It's been a while since I last posted about the Pi Cluster. So what's on it now?

From memory there is:

  • This blog, which is a WriteFreely instance. Super simple blogging platform that uses markdown.
  • A Matrix (Synapse) server, which is essentially a self-hosted WhatsApp. I'm quite blown away by how good it is. It's obviously not quite as polished as WhatsApp, but it's not far off: all of the core functionality is there, including sharing pictures, location and polls. I wanted it so I could send automated messages to myself or my partner without having to rely on Telegram (which also makes that relatively easy but supposedly has Russian links; there are further alternatives, I'm just a control freak).
  • A demo WordPress instance for testing plugins and stuff.
  • A single MySQL instance to handle DB storage over the cluster. Not exactly super sophisticated or fault tolerant but fine for my needs.
  • Grafana for monitoring, though it's largely unused so far.
  • n8n, probably my most used self-hosted tool: an automation platform with a lovely drag-and-drop interface. I've set up quite a few things now. For work, I have automations that monitor emails and, with the help of AI, summarize and report back anything important, plus things like converting emails into Asana tasks. On the personal side, there are a few YouTube channels I watch for stock market tips, and they are often long and dull, so I monitor them through their RSS feeds, grab each transcript, summarize it and then send myself a note through the Matrix server. I also have one that monitors a Google Sheet with my portfolio in it, alerting me to any major movements.
  • For work I'm looking at stream monitoring tech so I rolled my own API with Node/Express and a Rust script and that's being tested there before deployment.
  • A few Puppeteer-based Node scripts for monitoring/interacting with sites that don't have APIs. I won't name the site here, as I'm surely breaking its ToS, but one script summarizes my interactions with people over the course of the day and pops a note in my journal.
  • Speaking of journals, that's an instance of SilverBullet which I previously blogged about.

There is definitely more but that's all I can remember for now.

It's getting to the point where the cluster needs a few new nodes. I have a Pi 5 that's not really doing anything, so I might add that. If I had the money I would build another case of 4 x 16GB Pi 5s for more demanding tasks (Puppeteer, for example, is quite a heavy load on a Pi 4). 🤔

I've been looking, on and off, for a self-hosted journaling solution for a few weeks now and have tried a few. Today I settled on SilverBullet. It's incredibly flexible and developer-friendly.

It's also really easy to host in k8s. Here's my no-frills deployment (I don't think you really need any frills, to be fair):

You probably want to change the PVC storageClassName unless you also use Longhorn.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: silverbullet-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: silverbullet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: silverbullet
  template:
    metadata:
      labels:
        app: silverbullet
    spec:
      containers:
        - name: silverbullet
          image: ghcr.io/silverbulletmd/silverbullet:v2
          ports:
            - containerPort: 3000
          volumeMounts:
            - mountPath: /space
              name: silverbullet-data
      volumes:
        - name: silverbullet-data
          persistentVolumeClaim:
            claimName: silverbullet-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: silverbullet
spec:
  type: ClusterIP
  selector:
    app: silverbullet
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000

All of my services run as ClusterIP because I expose them externally with a Cloudflared Tunnel from inside the cluster. If you're only using it locally, NodePort would suffice.
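For reference, the matching Cloudflared Tunnel config looks something like this. This is a sketch, not my actual config: the tunnel ID, credentials path and hostname are placeholders, and it assumes SilverBullet is deployed in the default namespace:

```yaml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  # Route the public hostname to the ClusterIP service inside the cluster
  - hostname: notes.example.com
    service: http://silverbullet.default.svc.cluster.local:3000
  # Catch-all: anything else gets a 404
  - service: http_status:404
```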

In the process of moving house (a year ago), my original Raspberry Pi cluster that housed this blog died, or at least became so overly complicated and cluttered that I decided to start afresh.

Previously this blog was a WordPress install, which is great and very powerful but overkill for my simple needs. The journey that got me here didn't start out as “let's resurrect my blog”; I was actually looking for a lightweight journaling tool.

My as-yet-unrealised plan was to take all the messages of the day from people relevant to my work and personal life, whether on email, Slack or other platforms, and, through the power of AI, automatically create a journal entry each day.

Anyway, that has yet to materialise, but my journaling requirements were clear: an easy-to-use tool I could self-host, with an API and markdown support for simple, AI-friendly formatting.

There are probably hundreds, definitely tens, of options. Eventually I settled on WriteFreely. It's written in Go, is super lightweight, has an API, can be self-hosted, and is based around markdown.

Anywho, it's day one of using it. So far so good.

If you want to have a play yourself I crudely documented my setup here:

https://github.com/pintofbeer/writefreely

#cluster #tech #raspberrypi #k8s #k3s