<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>strang.es</title>
        <link>https://www.strang.es/</link>
        <description>From Melbourne, Australia</description>
        <lastBuildDate>Sun, 05 Apr 2026 07:42:37 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <item>
            <title><![CDATA[Learning Cron]]></title>
            <link>https://www.strang.es/learning_cron.html</link>
            <guid>https://www.strang.es/learning_cron.html</guid>
            <pubDate>Tue, 23 Dec 2025 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>I'm ashamed of how many times I've stared at a <code>* * * * *</code> and wondered what to
do from there. Well... I'm a little better off now thanks to a game I've
recently vibe-coded with OpenCode using the newly released GLM 4.7.</p>
<p><em>A little note: I'm super impressed with GLM 4.7. It feels a lot like Sonnet 4 to
me... and at $10 AUD a month it's an absolute steal.</em></p>
<h2>Cron; but gamified</h2>
<p>The 'game' asks you to produce various cron expressions from natural-language
prompts. It's a fork of the <code>crontab-guru</code> TUI built with Bubble Tea, which
explains cron expressions in natural language in the style of the
<a href="https://crontab.guru/">crontab guru</a> site.</p>
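<p>As a quick refresher (my own cheat sheet, not part of the game): a cron expression is five fields, read left to right.</p>

```shell
# minute  hour  day-of-month  month  day-of-week
# (0-59) (0-23)    (1-31)    (1-12) (0-6, Sun=0)

# every minute
* * * * *

# 6:30am, Monday to Friday
30 6 * * 1-5

# every 15 minutes on the 1st of the month
*/15 * 1 * *
```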
<video controls="" width="100%">
  <source src="./assets/crontab_guru_game.mp4" type="video/mp4">
</video>

<p>Clone and build the game from
<a href="https://github.com/samuelstranges/crontab-guru">here</a>.</p>
<pre><code class="hljs language-bash">git <span class="hljs-built_in">clone</span> https://github.com/samuelstranges/crontab-guru
<span class="hljs-built_in">cd</span> crontab-guru
go build .

<span class="hljs-comment"># Run game</span>
./crontab-guru --<span class="hljs-built_in">test</span>

<span class="hljs-comment"># Run game in 'hard mode' (no real-time preview)</span>
./crontab-guru --<span class="hljs-built_in">test</span> --hard</code></pre>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Tap Tempo]]></title>
            <link>https://www.strang.es/tap_tempo.html</link>
            <guid>https://www.strang.es/tap_tempo.html</guid>
            <pubDate>Sat, 06 Dec 2025 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>I keep finding reasons to write silly programs; this is no exception.</p>
<p>My go-to tap tempo has been <a href="https://taptempo.io/">taptempo.io</a>. It's come in
handy on several recording sessions I've been a part of. It's simple, it's good,
and it just works.</p>
<p>However, I wanted an offline-only version for when I'm connected to my IEM
mixer's Wi-Fi network rather than my home network. This prompted me to create
the very originally named
<a href="https://github.com/samuelstranges/tap_tempo">tap_tempo</a>; just a small CLI tool
built with Bubble Tea in Go.</p>
<p>I hope it helps somebody!</p>
<video controls="" width="100%">
  <source src="./assets/tap_tempo.mp4" type="video/mp4">
</video>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Doctors Hate This One Weird Ergo Mech Trick]]></title>
            <link>https://www.strang.es/doctors_hate_this_one_weird_trick.html</link>
            <guid>https://www.strang.es/doctors_hate_this_one_weird_trick.html</guid>
            <pubDate>Wed, 29 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Going Down The Rabbit Hole]]></description>
            <content:encoded><![CDATA[<p>I have been using split keyboards for a couple of years now, first starting with
an Iris, and eventually landing on building the more Corne-like
<a href="https://ergonautkb.com/docs/keyboards/ergonaut-one/intro/">Ergonaut One</a>.</p>
<p>The killer feature of the Ergonaut One is that it's wireless and has a battery
life that can last months at a time.</p>
<h2>The Problem</h2>
<p>One complaint I have had with split keyboards is the height of the setup,
particularly at more extreme tenting angles; <strong>it can feel like you are lifting
your arms just to use your keyboard</strong>. Don't get me wrong, tenting is still
an improvement over typing flat, but there's room for improvement.</p>
<p><img src="https://strang.es/assets/unorthodox_keyboard_setup_003.jpeg" alt="Old Phone Stand Method"></p>
<p><strong>Above: You can tent an arbitrary keyboard to some pretty gnarly angles with a
couple of phone stands and a couple of magnetic stickers, but you end up raising
your arms to the point where it's uncomfortable again.</strong></p>
<p>I think I've come up with a pretty unique solution to overcome this.</p>
<h2>The Solution</h2>
<p>The simple explanation: tie two mobile phone stands together (with a few
scrunchies) at their bases, and allow the arms of the stands to hang free on the
left and right, utilising a magnetic sticker at the bottom of each keyboard.
Now, just place the stands on your legs! From here, tent to taste.</p>
<p><img src="https://strang.es/assets/unorthodox_keyboard_setup_000.jpeg" alt="New Phone Stand Method"></p>
<p><strong>Above: The new and improved phone stand method; comfort your hands can only
dream of!</strong></p>
<p><em>"Typing should feel like a handshake"</em> ©</p>
<h3>More Photos</h3>
<p><img src="https://strang.es/assets/unorthodox_keyboard_setup_001.jpeg" alt="Horizontal View"></p>
<p><img src="https://strang.es/assets/unorthodox_keyboard_setup_002.jpeg" alt="Back Of Keyboard"></p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Private Eyes]]></title>
            <link>https://www.strang.es/building_a_private_cloud.html</link>
            <guid>https://www.strang.es/building_a_private_cloud.html</guid>
            <pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Building A Private Cloud]]></description>
            <content:encoded><![CDATA[<p>They don't deserve your data. You know that already. What you probably don't
know is that <strong>it's actually easy to opt out</strong>.</p>
<p>This is how I use a VPS as a remote homelab to run sensitive compute and
projects with Docker, and securely ensure it's recoverable by backing up to S3.
It's <a href="https://github.com/samuelstranges/public_homelab">reproducible</a> using a
Docker Compose file, Terraform, and a couple of bash scripts.</p>
<h2>Getting The VPS Up</h2>
<p>When setting up this project, I first went with DigitalOcean, as I've always
found their droplets in Sydney reliable. However, when I ran out of RAM with
multiple Docker containers running at once, rather than pay a premium for more
RAM with DigitalOcean, I took it as an opportunity to look for an alternative.</p>
<p>Meet <a href="https://flowvps.com/">FlowVPS</a>. Prices are fair, setup is
straightforward, and the speed is out of this world. I'm not kidding, the
latency blew me away; <strong>it genuinely feels local</strong>. Every other box I’ve SSH’d
into has been in Sydney, but this Melbourne-based VPS responds like it’s on my
desk.</p>
<p>I used <code>stow</code> to import my dotfiles to the machine, and installed <code>brew</code> and <code>zsh</code>
on the Ubuntu install; meaning that in about 10 minutes, I had entirely
replicated my macOS terminal experience on a different operating system. Now if
only I could get copy/paste working over SSH...</p>
<h2>How I Access My Services</h2>
<p>On my laptop, I am able to SSH into the VPS using my public/private keypair I've
set up with a user on the VPS box.</p>
<p>I'm running <code>autossh</code> to bind ports from the VPS to my laptop's localhost,
meaning the applications behave essentially as if they were running in a local
Docker container, and the tunnels connect automatically on login.</p>
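<p>As a rough sketch (the host alias, address, and ports here are placeholders, not my actual setup), the tunnel looks something like this:</p>

```shell
# ~/.ssh/config on the laptop (key-only auth):
#   Host vps
#       HostName 203.0.113.10
#       User homelab
#       IdentityFile ~/.ssh/id_ed25519

# Keep the tunnel alive and forward two example services
# from the VPS to the laptop's localhost
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" \
    -o "ServerAliveCountMax 3" \
    -L 8080:127.0.0.1:8080 \
    -L 8888:127.0.0.1:8888 \
    vps
```

<p>With that running, <code>http://localhost:8080</code> on the laptop reaches the container listening on port 8080 on the VPS, with nothing exposed publicly.</p>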
<p>This means you don't need to expose ANY inbound ports on your VPS besides
SSH. I've also heavily restricted the outbound ports for the Docker
containers by defining two Docker networks in Docker Compose and setting <code>ufw</code>
rules for them:</p>
<ul>
<li>br-internet: containers that need external access</li>
<li>br-isolated: containers that must stay internal</li>
</ul>
<p>This might seem like overkill, but there is a good reason
<a href="https://github.com/Stirling-Tools/Stirling-PDF/issues/3283">why</a> it might be
wise to lock down containers that want to phone home for telemetry. I used
<code>ufw</code> with the following rules to get it going:</p>
<pre><code class="hljs language-bash"><span class="hljs-comment"># Base rules</span>
<span class="hljs-built_in">sudo</span> ufw allow OpenSSH
<span class="hljs-built_in">sudo</span> ufw default deny incoming
<span class="hljs-built_in">sudo</span> ufw default allow outgoing

<span class="hljs-comment"># Allow 'br-internet' (defined in Docker Compose) network (172.18.0.0/16) to access DNS, HTTP/S</span>
<span class="hljs-built_in">sudo</span> ufw route allow <span class="hljs-keyword">in</span> on br-internet out on ens3 from 172.18.0.0/16 to any port 53 proto udp
<span class="hljs-built_in">sudo</span> ufw route allow <span class="hljs-keyword">in</span> on br-internet out on ens3 from 172.18.0.0/16 to any port 53 proto tcp
<span class="hljs-built_in">sudo</span> ufw route allow <span class="hljs-keyword">in</span> on br-internet out on ens3 from 172.18.0.0/16 to any port 80
<span class="hljs-built_in">sudo</span> ufw route allow <span class="hljs-keyword">in</span> on br-internet out on ens3 from 172.18.0.0/16 to any port 443

<span class="hljs-comment"># Block 'br-isolated' (defined in Docker Compose file) network from internet</span>
<span class="hljs-built_in">sudo</span> ufw route deny <span class="hljs-keyword">in</span> on br-isolated out on ens3 from 172.19.0.0/16 to any

<span class="hljs-comment"># Allow host access to isolated containers</span>
<span class="hljs-built_in">sudo</span> ufw route allow <span class="hljs-keyword">in</span> on br-isolated to 172.19.0.0/16</code></pre><h2>What's Running?</h2>
<p>Here are the services that are currently up on the box:</p>
<ul>
<li><a href="https://github.com/atuinsh/atuin"><strong>Atuin</strong></a>: The best shell history tool
(fuzzy, persistent across devices, secure). But my shell history shouldn't
live on someone else's server.</li>
<li><a href="https://github.com/RSS-Bridge/rss-bridge"><strong>RSS-Bridge</strong></a>: Creates RSS feeds
for TikTok and Instagram accounts, so I don't need to buy into the algorithm
to follow the few people I need to. This isn't particularly private, as it
uses third-party bridges to fetch the data, meaning those third parties see
what you are accessing on the original service. However, I don't see a
reliable way around this, and avoiding the Instagram cesspool is an overall
net positive.</li>
<li><a href="https://github.com/FreshRSS/FreshRSS"><strong>FreshRSS</strong></a>: A no-nonsense RSS feed
fetcher/reader with a clean webapp. I like it because I can integrate it with
<a href="https://github.com/newsboat/newsboat">newsboat</a>, a great little RSS TUI.</li>
<li><a href="https://github.com/firefly-iii/firefly-iii"><strong>Firefly III</strong></a>: Personal
budgeting management</li>
<li><a href="https://github.com/C4illin/ConvertX"><strong>ConvertX</strong></a>: I've always felt
uncomfortable uploading documents to free document conversion websites, and
can't be bothered learning CLI tools and flags</li>
<li><a href="https://github.com/lyqht/mini-qr"><strong>Mini QR</strong></a>: Relying on a third party to
create and decode QR codes is a security vulnerability.</li>
<li><a href="https://github.com/sabre-io/Baikal"><strong>Baikal</strong></a>: CalDAV/CardDAV for calendar
and contacts.</li>
<li><a href="https://github.com/Stirling-Tools/Stirling-PDF"><strong>Stirling PDF</strong></a>: I can't
recommend this one due to the aforementioned tracking pixel, but it works
well.</li>
</ul>
<h2>Automated Backups</h2>
<p>I have set up automated daily backups for data that needs to persist
(think my Atuin database, RSS feed database, etc.). A cron job calls a shell
script that, every morning, does the following:</p>
<ol>
<li>Turn off containers with volumes needing backups</li>
<li>Mount the volumes on fake containers</li>
<li>Tar/copy the volumes to a backup directory on the VPS</li>
<li>Turn off the fake containers</li>
<li>Use <code>rclone</code> to push the backups to the S3 bucket</li>
<li>Turn the real containers back on</li>
</ol>
<p>The key trick to keeping this script simple was ensuring all persistent
data lives on Docker volumes, which lets you back up every service uniformly
with a simple for loop, rather than working out how each container needs its
data backed up. <strong>Simplicity wins out here over availability</strong>... I don't need
FreshRSS running at 3 am for those few minutes.</p>
<p>The VPS accesses the bucket via an IAM user created alongside the S3 bucket
using Terraform.</p>
<h2>Infrastructure As Code</h2>
<p>I've created a <a href="https://github.com/samuelstranges/public_homelab">GitHub repo</a>
with the Docker Compose file, the Terraform file, and a couple of scripts to
help you stand up your own homelab on a VPS using a similar system.</p>
<h2>Conclusion</h2>
<p>If this sounds like too much work, you can get 80% of the benefits with 5% of
the effort by spinning up the Docker containers using the <code>docker-compose.yml</code>
file on your local machine. This provides all the services, albeit without the
flexibility, security, and backup capabilities of what I've outlined in this
post.</p>
<pre><code class="hljs language-bash">git <span class="hljs-built_in">clone</span> https://github.com/samuelstranges/public_homelab.git &amp;&amp; <span class="hljs-built_in">cd</span> public_homelab

<span class="hljs-comment"># SET ENVIRONMENT VARIABLES</span>

docker compose up -d</code></pre><p>Even as I'm writing this I'm considering other services I can add to my system.
It has opened my mind to the benefits of the self hosted mindset. Again, they
don't deserve my data, and now, they don't have it.</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Best Note Taking System Is One You Already Have]]></title>
            <link>https://www.strang.es/markdown_plus_git.html</link>
            <guid>https://www.strang.es/markdown_plus_git.html</guid>
            <pubDate>Mon, 20 Oct 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>I've tried your note taking system...</p>
<ul>
<li><strong>Google Keep</strong>: too proprietary</li>
<li><strong>Apple Notes</strong>: too walled off</li>
<li><strong>jrnl</strong>: too fragile</li>
<li><strong>OneNote</strong>: too different</li>
<li><strong>Joplin</strong>: too difficult to set up</li>
<li><strong>Obsidian</strong>: too expensive to sync</li>
<li><strong>Evernote</strong>: don't get me started</li>
</ul>
<p>The note taking system that stuck?</p>
<p><strong><em>A git repo full of Markdown documents.</em></strong></p>
<h2>Why Markdown + Git?</h2>
<p>Honestly? Because it gets <em>out of your way</em> and lets you just write. Too many
modern systems require a lot of buy-in before you really get comfortable with
them, then lock you in, making it difficult to move away once you realise the
mistake you've made. Markdown + Git is the opposite.</p>
<h3>The 3 F's</h3>
<p>It boils down to three things. Markdown + Git is:</p>
<ul>
<li>Free</li>
<li>Flexible</li>
<li>Future Proof</li>
</ul>
<h3>Free</h3>
<p>Markdown is an open standard, meaning there are hundreds of software
packages that can view and edit Markdown files, from the humble Notepad everyone
loves to hate, to Markdown-specific editors like Typora (paid). As long as the
software you use can export Markdown, you aren't locked into a platform, and you
can jump ship to a different editor at the drop of a hat. That's a great
insurance policy against something you never want to deal with: migrating your
notes to a new platform.</p>
<p>Using a Git repo provides version control for your notes, which actually comes
in handy more often than you might think. Additionally, using GitHub, GitLab, or
any other cloud hosting service for your Git repo means that you can access your
notes on different devices for free.</p>
<h3>Flexible</h3>
<p>Decoupling the hosting from the editor means you can pick the best tools for
your specific use case.</p>
<h4>Editor/Viewer</h4>
<p>Using a Markdown-backed system means you aren't locked into a specific editor.
How you think about and edit documents on your computer is likely very different
to how you engage with documents on your mobile. Markdown gives you the best of
both worlds by providing nothing more than an open standard. This means any
editor can provide a different workflow and set of features that you might need
in various contexts.</p>
<p>Examples:</p>
<ul>
<li><strong>Mobile</strong>: iAWriter, 1Writer</li>
<li><strong>GUIs</strong>: MarkText, iAWriter, Typora</li>
<li><strong>Terminal</strong>: Neovim, Vim</li>
</ul>
<h4>Backend</h4>
<p>A Git system might not fulfill your needs in terms of shareability or
multi-device sync. Additionally, your security senses might baulk at a
cloud-hosted Git repo. Great! You can plug in whatever backend system you want.</p>
<ul>
<li>Need live syncing between devices? Dropbox is easy, Syncthing is secure.</li>
<li>Just need backups? Use rclone.</li>
<li>Is version control your thing? Spin up that Git repo.</li>
<li>Paranoid? A local folder encrypted with PGP.</li>
</ul>
<p>You can mix and match too! For a while I was using Dropbox and a Git repo at the
same time, but realised that since
<a href="https://strang.es/dumbphones_are_getting_dumber.html">moving to a dumbphone</a>, I
only really access my notes on my primary device, so I have since moved to a Git
only system (using a private GitHub repo for backups).</p>
<h4>Exporting Your Files</h4>
<p>Markdown documents can be <strong>exported into virtually any file format</strong> you might
need. <a href="https://pandoc.org/">Pandoc</a> seems to cover most of the common ones.</p>
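<p>For instance (the file names here are just examples), converting a note is a one-liner; Pandoc infers the output format from the extension:</p>

```shell
pandoc meeting.md -s -o meeting.html   # standalone HTML page
pandoc meeting.md -o meeting.docx      # Word, for the people who insist
pandoc meeting.md -o meeting.pdf       # PDF (needs a LaTeX engine installed)
```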
<p>Want to start blogging with Markdown? Check out Hugo, Astro, or even my own
<a href="https://samuelstranges.github.io/project_therese/">Project Therese</a> to easily
build sites backed by your documents.</p>
<h3>Future Proof</h3>
<p>Markdown will be here tomorrow, the day after, the year after, and the decade
after. There are horror stories of forced migration and software abandonment for
proprietary note management systems. For something as simple as writing personal
documents, encountering these issues is frankly ridiculous.</p>
<p>Even if all the Markdown editors in the world disappeared tomorrow, Markdown's
syntax is so human-readable that it wouldn't matter. Because <strong>YOU</strong> are in
control of <strong>YOUR</strong> documents, you won't need to worry about getting locked into
a system that's going to shut down.</p>
<h4>LLMs and Markdown</h4>
<p>LLMs work really well with Markdown syntax, meaning that in the world of agentic
AI, it's easy to provide context and instructions using the notes that you've
written for yourself.</p>
<p>Looking forward to the near future, when I'm hopefully running a local LLM in a
homelab setup, having my personal notes written in Markdown will provide a high
quality and easily digestible context that a model will be able to use to
problem solve.</p>
<h2>Learning Markdown</h2>
<p>Markdown is trivial to pick up; you can get a good understanding of the entire
syntax within an hour. Even if you mess up a thing or two in your documents,
modern linters and Markdown-specific software are very forgiving and will help
you out.</p>
<p>I'd recommend checking out the
<a href="https://www.markdownguide.org/getting-started/">Markdown Guide</a> as a first
port of call.</p>
<h2>Markdown In The Terminal</h2>
<p>If you're reading this blog, you're probably comfortable with the CLI. If you
want a great experience editing Markdown documents in the terminal without any
of the headaches, install <a href="http://www.lazyvim.org/">LazyVim</a>. Tweak as desired. That's
it. It does basically everything you might want.</p>
<p>The only plugin I consider 'essential' that doesn't come out of the box is one
to help with managing lists/sublists. I'm using
<a href="https://github.com/gaoDean/autolist.nvim">autolist</a> at the moment, but there
are others that do the same thing just as well...</p>
<p>For an exceptional, but extremely esoteric Neovim Markdown setup, check out
Linkarzu's <a href="https://linkarzu.com/posts/neovim/markdown-setup-2025/">article</a>.</p>
<h3>Daily Notes</h3>
<p>If you're like me and mentally compartmentalise things by date, setting up a
daily template and hooking up a keybind to quickly jump to it can make life a
little easier.</p>
<pre><code class="hljs language-lua"><span class="hljs-comment">-- Global (not local) so the shell alias can call it via :lua</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">open_daily_note</span><span class="hljs-params">()</span></span>
    <span class="hljs-keyword">local</span> dailies_dir = <span class="hljs-string">"/Users/user/notes_dir/dailies"</span>
    <span class="hljs-keyword">local</span> template_path = <span class="hljs-string">"/Users/user/notes_dir/daily_template.md"</span>

    <span class="hljs-comment">-- Get today's date in YYYY-MM-DD format</span>
    <span class="hljs-keyword">local</span> today = <span class="hljs-built_in">os</span>.<span class="hljs-built_in">date</span>(<span class="hljs-string">"%Y-%m-%d"</span>)
    <span class="hljs-keyword">local</span> daily_file = dailies_dir .. <span class="hljs-string">"/"</span> .. today .. <span class="hljs-string">".md"</span>

    <span class="hljs-comment">-- Check if file exists, if not copy from template</span>
    <span class="hljs-keyword">local</span> file = <span class="hljs-built_in">io</span>.<span class="hljs-built_in">open</span>(daily_file, <span class="hljs-string">"r"</span>)
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> file <span class="hljs-keyword">then</span>
        <span class="hljs-comment">-- File doesn't exist, copy from template</span>
        <span class="hljs-keyword">local</span> template_file = <span class="hljs-built_in">io</span>.<span class="hljs-built_in">open</span>(template_path, <span class="hljs-string">"r"</span>)
        <span class="hljs-keyword">if</span> template_file <span class="hljs-keyword">then</span>
            <span class="hljs-keyword">local</span> template_content = template_file:<span class="hljs-built_in">read</span>(<span class="hljs-string">"*all"</span>)
            template_file:<span class="hljs-built_in">close</span>()

            <span class="hljs-comment">-- Create the new daily file</span>
            <span class="hljs-keyword">local</span> new_file = <span class="hljs-built_in">io</span>.<span class="hljs-built_in">open</span>(daily_file, <span class="hljs-string">"w"</span>)
            <span class="hljs-keyword">if</span> new_file <span class="hljs-keyword">then</span>
                new_file:<span class="hljs-built_in">write</span>(template_content)
                new_file:<span class="hljs-built_in">close</span>()
            <span class="hljs-keyword">end</span>
        <span class="hljs-keyword">end</span>
    <span class="hljs-keyword">else</span>
        file:<span class="hljs-built_in">close</span>()
    <span class="hljs-keyword">end</span>

    <span class="hljs-comment">-- Open the file</span>
    vim.cmd(<span class="hljs-string">"edit "</span> .. daily_file)
<span class="hljs-keyword">end</span>

<span class="hljs-comment">-- Example mapping; change the keybind to taste</span>
vim.keymap.set(<span class="hljs-string">"n"</span>, <span class="hljs-string">"&lt;leader&gt;dn"</span>, open_daily_note, { desc = <span class="hljs-string">"Daily note"</span> })</code></pre><h3>Useful Shell Aliases</h3>
<p>If you live in the terminal, having a few quick shortcuts aliased in your
<code>.bashrc</code>/<code>.zshrc</code> can remove friction when you need to quickly jot something
down. Here are ones I use daily:</p>
<pre><code class="hljs language-bash"><span class="hljs-comment"># open daily note (being in notes dir helps ripgrep/fzf search)</span>
<span class="hljs-built_in">alias</span> mt=<span class="hljs-string">"cd /Users/user/notes_dir/ &amp;&amp; <span class="hljs-variable">$EDITOR</span> -c 'lua open_daily_note()'"</span>

<span class="hljs-comment"># open my global TODO file</span>
<span class="hljs-built_in">alias</span> md=<span class="hljs-string">"<span class="hljs-variable">$EDITOR</span> /Users/user/notes_dir/main_todo.md"</span>

<span class="hljs-comment"># search current directory for text</span>
<span class="hljs-built_in">alias</span> sg=<span class="hljs-string">'nvim -c "lua Snacks.picker.grep({ cwd = vim.fn.getcwd() })"'</span>

<span class="hljs-comment"># create temporary markdown document (zsh needs backslash)</span>
<span class="hljs-built_in">alias</span> scratchpad=<span class="hljs-string">"nvim /tmp/\$(head -c 16 /dev/urandom | xxd -p).md"</span></code></pre><h2>Conclusion</h2>
<p>I'll be honest, there are two things that tempt me to move away from Markdown:
Typst and Org Mode, for very different reasons.</p>
<p>However, every time I open up my folder and see that my Markdown system has been
largely unchanged for over a year, I realise that I've found the right system,
because I've stopped thinking about needing to fix it.</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bespoke Software]]></title>
            <link>https://www.strang.es/bespoke_software.html</link>
            <guid>https://www.strang.es/bespoke_software.html</guid>
            <pubDate>Mon, 06 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Twelve Tools In Twelve Weeks]]></description>
            <content:encoded><![CDATA[<p>Over the past few months, I've built dozens of tools that have fundamentally
changed my daily workflow. None of them would have been worth building without
AI, and all of them paint a picture of an emerging category of software I call
"bespoke software"; tools that demonstrate why the current debate about AI code
quality misses the point entirely.</p>
<p>In this series, I'd like to share twelve of these tools, built using
AI-assisted development, to show how building them has not only changed my own
personal workflow, but also how the mindset shift towards bespoke software
could fundamentally change how we view SaaS and custom software.</p>
<p>Moreover, I'd like to show how categorising the AI development process can:</p>
<ul>
<li>save development time by working more effectively with LLM tooling</li>
<li>prevent going down rabbit holes when approaching a problem using the wrong
category</li>
<li>make the iterative process of developing with AI actually enjoyable</li>
<li>stop yourself from yelling at Sonnet to "JUST FIX IT"</li>
</ul>
<h2>Bespoke Software</h2>
<p>My new favourite pastime is to read experienced developers complaining about the
quality of vibecoded apps on Hacker News, asserting that AI assisted development
would never be suitable for 'real' projects. I am of the strong belief that our
understanding of a 'real' project is fundamentally changing, from enterprise
tools that do a good enough job for thousands of customers, to hyper specific
tools that work for small teams or even individuals; a <strong>decentralisation of
software</strong>.</p>
<p>Don't get me wrong, enterprise software will continue to exist, and I sincerely
hope no one will try to roll their own payment processor to sell a bouquet of
flowers at a farmer's market. However, we are on the precipice of an entire
category of software being created: "bespoke software", where <strong>non mission
critical</strong> software is written specifically for an individual or small team's
workflow or business' needs. Bespoke software prioritises rapid development and
immediate utility over comprehensive testing and bulletproof architecture.</p>
<p>Every business has dozens of manual processes that are too specific for off the
shelf automation, while being too small to justify traditional custom software.
AI written bespoke software makes automating these viable, as suddenly every
repetitive task becomes a candidate for elimination.</p>
<p><strong>Bespoke Software:</strong> <em>Non mission critical applications built for specific
individuals or small teams to solve particular workflow problems.</em></p>
<h2>The Code Quality Objection</h2>
<p><em>Does code quality matter equally in all contexts?</em></p>
<p>The obvious objection to "bespoke software" concerns code quality. There's
certainly, um... <a href="https://www.youtube.com/watch?v=LakqV60SfLE">merit</a>... to this
argument. However, pointing to crappy, barely working vibecoded apps misses the
underlying point about AI-developed bespoke software: it alters the cost-benefit
calculation, and each project must now be evaluated independently regarding
code quality and AI involvement.</p>
<p>Code quality has only ever really mattered for the purpose of meeting business
outcomes and compliance; poor-quality code, and technical debt more broadly,
have historically hurt companies' bottom lines. Bespoke software operates under
a different set of constraints: when building small tools for small groups of
people, with the ability to rapidly iterate, traditional quality standards
become less relevant.</p>
<p>The real question isn't whether AI-generated code meets enterprise standards
that must satisfy all kinds of uses and edge cases across multiple
organisations, but whether it's adequate for the hyper-specific context where it
will be used. Since failure affects only a small group and AI enables rapid
iteration, the bar for code quality in bespoke software is often much lower than
developers assume.</p>
<p>The question therefore shifts from "is this software robust enough to handle
anything we throw at it in future?" to "does this software satisfy the small use
case that exists at present?"</p>
<h2>The AI Collaboration Problem</h2>
<p>Most developers approach AI tools the same way regardless of project type,
leading to frustration and failure.</p>
<p>At one end of the spectrum, we have experienced developers who haven't learned
to provide appropriate context to the model, asking it to solve complex
problems in large codebases that it can't solve on its own.</p>
<p>At the other end, we have vibecoders generating massive, tangled codebases for
projects that <strong>needed</strong> structured development from the get-go. AI-generated
spaghetti code has a strange property: it keeps functioning much longer than
human-written spaghetti code. But at some arbitrary size it breaks down,
and by then the codebase is too complex to easily rewrite.</p>
<p>Between these extremes, we have developers who are resistant to giving AI
control beyond basic boilerplate, and others who consistently attempt to iterate
through problematic AI code that needed upfront architectural planning.</p>
<p>All of these approaches are valid in specific contexts, but sticking to a
single methodology becomes counterproductive when you're presented with a
different type of problem. Developers haven't yet built the intuition for
choosing the right level of AI involvement for different types of projects.</p>
<p>This lack of systematic thinking about AI collaboration is one of the main
barriers to bespoke software development. When developers can't reliably choose
the right approach for different types of projects, they don't realise there is
an entire category of software waiting to be written that was previously not
viable. This is where a systematic categorisation of AI collaboration approaches
becomes valuable.</p>
<h2>The Solution</h2>
<p>I propose a framework of categories of AI involvement in the development
process, so developers can quickly learn which types of problems are well suited
to which level of involvement.</p>
<p><em>Categorising AI involvement helps create a mental model of which types of
projects suit which type of development, helping you interact with the AI more
effectively during the software development process.</em></p>
<p>Each of these categories lends itself to different types of projects and
business needs. They can help us approach problems with a better understanding
of:</p>
<ul>
<li>Best practices for approaching each of these categories</li>
<li>What types of projects are suited for which type of AI involvement</li>
<li>Common pitfalls for each of these categories</li>
</ul>
<p>If developers were aware of these categories and knew which approach fits
which type of problem, I believe we would see more successful bespoke
software projects, fewer frustrated developers abandoning AI development, and
ultimately the widespread adoption of custom automation that transforms how
small teams work.</p>
<h2>Introducing The Categories</h2>
<p>AI-assisted development falls into five distinct categories:</p>
<ol>
<li><strong>Scaffolding</strong>: You request specific boilerplate to be generated by the AI,
while the actual implementation details and key business logic are manually coded.</li>
<li><strong>Collaborative</strong>: Tasks are clearly divided between the AI and you, based on
who is more effective at implementing each type of task.</li>
<li><strong>Architectural</strong>: Provide detailed specifications <strong>upfront</strong>, giving
detailed requirements before coding begins. AI follows your blueprint.</li>
<li><strong>Iterative</strong>: Code-level feedback loops with AI. You <strong>react</strong> to AI's
design and implementation, suggesting specific improvements to functions and
logic. AI does all coding.</li>
<li><strong>Vibed</strong>: Fully give in to the vibes, embrace exponentials, and forget that
the code even exists.</li>
</ol>
<h2>Twelve Tools In Twelve Weeks</h2>
<p>Now that I've introduced the categories, I'd like to introduce the projects I
have written over the last few months (mostly bespoke) and the methodologies
used to build them with AI: effectively, and in some cases not effectively at
all.</p>
<h3><a href="https://convocare.net">Convocare</a></h3>
<video controls="" width="100%">
  <source src="./assets/convocare_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: A local events webapp for Catholics in Melbourne</li>
<li>Pain point: Young Catholics regularly complain about a lack of events, despite
several parishes and groups regularly hosting events. Discoverability is a
real issue.</li>
<li>AI Development Category: <strong>Iterative</strong> (should've used <strong>collaborative</strong>)</li>
</ul>
<p><a href="why_i_chose_boring.html">Here</a> is a breakdown of how it was built.</p>
<hr>
<h3><a href="https://github.com/samuelstranges/spellbound.nvim">spellbound.nvim</a></h3>
<video controls="" width="100%">
  <source src="./assets/spellbound_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: A neovim plugin that introduces a new mode dedicated to
spellchecking documents</li>
<li>Pain point: Spellchecking documents in neovim is clunky and repetitive</li>
<li>AI Development Category: <strong>Vibed/collaborative</strong> (the methodology is mixed,
as this was the first real AI-assisted bespoke software I built. I plan on
rewriting it using the collaborative method, as the codebase kind of sucks)</li>
</ul>
<hr>
<h3><a href="https://github.com/samuelstranges/chronos">Chronos</a></h3>
<video controls="" width="100%">
  <source src="./assets/chronos_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: An advanced calendar TUI using Bubble Tea, featuring multiple
views, vim keybindings and iCal compliance.</li>
<li>Pain point: We're apparently in the golden age of TUIs, but we don't have a
half-decent calendar application.</li>
<li>AI Development Category: <strong>Architectural</strong></li>
</ul>
<hr>
<h3><a href="https://github.com/samuelstranges/project_therese">Project Therese</a></h3>
<p><strong>No screenshot because you're looking at it!</strong></p>
<ul>
<li>Description: A dead simple static site generator that looks beautiful (this
site uses it).</li>
<li>Pain point: When setting up my blog I found static site generators to be more
complicated than needed, and not actually that good-looking.</li>
<li>AI Development Category: <strong>Iterative</strong></li>
</ul>
<hr>
<h3><a href="building_a_serverless_sms.html">Chronos Sync</a></h3>
<p><img src="https://github.com/samuelstranges/chronos_sync/raw/main/serverless.png" alt="Chronos Sync serverless architecture diagram"></p>
<ul>
<li>Description: A serverless calendar to SMS notification system built on AWS</li>
<li>Pain point: Using a dumbphone means I don't have access to my calendar easily.</li>
<li>AI Development Category: <strong>Architectural</strong></li>
</ul>
<hr>
<h3>Hammerspoon switcher</h3>
<video controls="" width="100%">
  <source src="./assets/hammerspoon_switcher.mp4" type="video/mp4">
</video>

<ul>
<li>Description: A macOS switcher that can jump to both applications and open
tmux windows using a single letter</li>
<li>Pain point: Although my tmux quick switch project works fine, I find myself
thinking about whether I want to switch to an application or a window, which
takes me out of the flow state.</li>
<li>AI Development Category: <strong>Architectural</strong></li>
</ul>
<hr>
<h3>Qutebrowser Jump to Audio</h3>
<video controls="" width="100%">
  <source src="./assets/jump_video_demo.mp4" type="video/mp4">
</video>

<ul>
<li><p>Description: A Qutebrowser script that jumps to the tab with the actively
playing audio source</p>
</li>
<li><p>Pain point: I often find myself searching through my tabs to find the
audio/video source that is currently playing just to stop it</p>
</li>
<li><p>AI Development Category: <strong>Iterative</strong></p>
</li>
</ul>
<hr>
<h3>Youtube RSS Copy</h3>
<video controls="" width="100%">
  <source src="./assets/youtube_copy_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: A JavaScript tool that works as a Chrome extension or as a
userscript to copy RSS feeds of YouTube channels and playlists</li>
<li>Pain point: Wanted to source YouTube RSS feeds without relying on third-party
services that might track my viewing habits</li>
<li>AI Development Category: <strong>Vibed</strong></li>
</ul>
<hr>
<h3>Failed Lazyorg Fork</h3>
<video controls="" width="100%">
  <source src="./assets/lazyorg_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: An extension of Lazyorg to be more useful as a calendar TUI</li>
<li>Pain point: Lazyorg is a promising calendar TUI but needed several large
changes to be useful for day to day use.</li>
<li>AI Development Category: <strong>Iterative</strong> (should've used <strong>collaborative</strong>)</li>
<li>Why it failed:<ul>
<li>Didn't ask the question: "Should I try extending this existing project or
work from scratch?". The tech stack prevented extending the software to
where I wanted it to go, and I didn't take the time to look into the
implementation details and the limitations of the library it was built on.</li>
<li>Let AI write spaghetti</li>
</ul>
</li>
</ul>
<hr>
<h3>Rolodex</h3>
<video controls="" width="100%">
  <source src="./assets/rolodex_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: A <code>pass</code>-like contact manager that uses Git-backed <code>.toml</code> files
and <code>fzf</code> for its front end.</li>
<li>Pain point: I don't like being locked into proprietary, cloud-based
solutions for something as simple as managing my contacts</li>
<li>AI Development Category: <strong>Architectural</strong></li>
</ul>
<hr>
<h3>Tmux Quick Switch</h3>
<video controls="" width="100%">
  <source src="./assets/tmux_quick_switch_demo.mp4" type="video/mp4">
</video>

<ul>
<li>Description: Switch between tmux windows with a single letter</li>
<li>Pain point: I organize tmux windows as conceptual applications ('Development',
'Work'), but switching between them requires too much cognitive overhead</li>
<li>AI Development Category: <strong>Iterative</strong></li>
</ul>
<hr>
<h3>Selfhosted RSS Infrastructure</h3>
<p><strong>If there were an image, it would just be the Docker logo lol...</strong></p>
<ul>
<li><p>Description: FreshRSS and RSS Bridge services deployed on AWS EC2 to sync RSS
across devices</p>
</li>
<li><p>Pain point: Wanted to securely synchronize my RSS feeds across devices while
keeping them accessible via newsboat CLI, plus gain hands-on AWS experience</p>
</li>
<li><p>AI Development Category: <strong>Iterative</strong></p>
</li>
</ul>
<hr>
<h2>Explaining the categories</h2>
<p>Hopefully you can now see the kinds of complexities and projects that lend
themselves to each of the categories I defined earlier. However, it may be
helpful to show how a single project can be approached using the different
categories. Let's use the example of a calendar application and see how each
methodology might build parts of it.</p>
<h3>Scaffolding</h3>
<ul>
<li>"Here's my Calendar.js file. I need empty functions called parseEvent(),
validateDate(), and formatTime() with proper signatures that plug into my
existing EventManager and DateUtils classes"</li>
<li>Review output</li>
<li>Manually write implementation logic</li>
</ul>
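<p>As a concrete (if hypothetical) illustration of scaffolded stubs: the prompt above is about a JavaScript file, but the same shape in Python might look like this, with the function names mirroring the example and all bodies deliberately left for the human:</p>

```python
from datetime import datetime, time


def parse_event(raw: dict) -> dict:
    """Stub: turn a raw payload into an internal event record."""
    raise NotImplementedError  # business logic written by hand


def validate_date(value: str) -> datetime:
    """Stub: parse and validate an ISO-8601 date string."""
    raise NotImplementedError  # business logic written by hand


def format_time(value: time) -> str:
    """Stub: render an event time for display in the calendar."""
    raise NotImplementedError  # business logic written by hand
```

The point is that the AI only produces the signatures that plug into existing code; every body is yours.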
<h3>Collaborative</h3>
<ul>
<li>"You handle the backend API, data models, and database logic, I'll build the
frontend components and handle the styling"</li>
</ul>
<h3>Architectural</h3>
<ul>
<li>Write a document outlining that you want a calendar app with these specific
features, using this tech stack and these design patterns, following this data
structure; provide documentation and files for APIs and libraries, plus example
screenshots of what you want it to look like</li>
<li>Review code</li>
<li>Decide between jumping into an iterative process for specific issues, or
re-prompting with a better outline, documentation, etc.</li>
</ul>
<h3>Iterative</h3>
<ul>
<li>"Build me a calendar app"</li>
<li>Review code</li>
<li>"See how in this function you're using a nested loop? We should be using a map
for better performance"</li>
<li>Review updated code</li>
<li>"Move this validation logic into a separate function"</li>
<li>Process continues</li>
</ul>
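<p>The nested-loop feedback step above might play out like this in code. A hypothetical Python sketch (the event and category shapes are illustrative, not from any real project):</p>

```python
# Before: the AI's first pass, an O(n*m) nested loop
def tag_events_slow(events, categories):
    tagged = []
    for event in events:
        for cat in categories:
            if cat["id"] == event["category_id"]:
                tagged.append({**event, "category": cat["name"]})
    return tagged


# After your feedback: build a map (dict) once, O(n+m)
def tag_events_fast(events, categories):
    by_id = {cat["id"]: cat["name"] for cat in categories}
    return [{**event, "category": by_id[event["category_id"]]}
            for event in events
            if event["category_id"] in by_id]
```

Both produce the same output; only the second avoids re-scanning the category list for every event.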
<h3>Vibed</h3>
<ul>
<li>"Build me a calendar app"</li>
<li>Test the app</li>
<li>"Make the events bigger"</li>
<li>Test the app again</li>
<li>"Add a month view"</li>
<li>Process continues...</li>
</ul>
<h2>What About Questioning AI?</h2>
<p>What I call consultative development (you write the code and ask AI for
feedback on changes) shouldn't really be considered AI-developed software; it
acts more as an extension of existing feedback mechanisms. It's definitely an
important piece of the puzzle, and needed in many cases where more control is
required, but it's not what I'm getting at when I talk about developing with
AI.</p>
<h2>Moving Up And Down Through The Categories</h2>
<p>It's important to understand what category your project is in <strong>before</strong> you
open up your AI coding assistant, because it is far easier to move down a
category than it is to climb back up. Once you give the AI control over not
just the implementation details but the overall design of the project, you have
already created a significant amount of technical debt, and wrangling back
control is extremely time consuming.</p>
<p>A perfect example of this is when I was developing Chronos using the
"iterative" methodology. By the time I started to realise that the complexity of
the project lent itself better to an <strong>architectural</strong> approach, I had
accumulated a significant amount of technical debt, particularly DRY violations
and mixed concerns across packages and functions.</p>
<h2>The Collaboration Boundary</h2>
<p>One distinction that emerges from this categorisation is that between usage
and authorship in the context of teams. The development methodologies of the
deeper categories of AI involvement scale less effectively to collaborative
authoring.</p>
<p>This is obvious when you think about it: the economics that make bespoke
software viable (rapid AI-assisted iteration, tolerance for imperfect code,
minimal upfront planning) break down the moment you introduce shared
development, because maintaining consistent architecture across multiple
contributors and documenting implicit design decisions reintroduces all the
overhead that the deeper categories eliminate.</p>
<p>Take the <strong>iterative</strong> approach, for example: the design decisions live in your
conversational history with the AI, not in transferable documentation. Had I
used the <strong>architectural</strong> category from the get-go, creating proper
specifications and documentation, accepting contributions would have been much
more viable. Ultimately, these categories determine the future ownership of the
resulting software; deeper levels of AI involvement trend towards a more
individualistic ownership model.</p>
<h2>Questions To Help Categorise Projects Before Implementation</h2>
<ul>
<li><strong>How large is the project?</strong>: If the codebase is expected to be large
relative to your LLM's context window, you may need to move up a category or
two.</li>
<li><strong>Is this a library that other projects will be built on top of, or is this
the end product?</strong>: AI can rapidly iterate on hyper-specific business
concerns, suggesting deeper categories. Libraries need stability and careful
design, pushing toward shallower categories.</li>
<li><strong>What tools will the project be built on?</strong>: Typed languages, TDD, and
static analysis tools can help AI work more effectively with larger codebases,
potentially allowing deeper categories. Dynamic languages, and ecosystems with
no refactoring tools beyond manually editing code, may force you up a
category.</li>
<li><strong>Are my models well trained on my tech stack?</strong>: I've found AI produces
poorer output on embedded projects, low level code, and newer frameworks like
Svelte, requiring more oversight and manhandling to produce correct output.
Common frameworks work better with deeper categories.</li>
<li><strong>What vulnerabilities/security issues do I need to be thinking about?</strong>:
Higher risk projects and those dealing with sensitive data need more manual
control.</li>
<li><strong>What will maintenance and future extensions look like?</strong>: Do you have a
clear understanding of how this software will be used long term, and what you
may need to accommodate? If not, extensibility is needed, so a shallower AI
approach is required.</li>
<li><strong>Who will use this?</strong>: Personal tools allow for experimental approaches,
while larger teams need consistency.</li>
<li><strong>What is the impact of a catastrophic failure of this software?</strong>: What is
your RTO/RPO, and more broadly, what is the business impact if the software
were to go haywire? Critical applications require more manual intervention and
oversight.</li>
</ul>
<h2>Where To From Here?</h2>
<p>This introduction has established the framework, but I understand if you think
it has been a little light on examples. The coming articles will flesh these out
using the above projects as case studies, and will show how to effectively
approach problems using each methodology.</p>
<p>But hopefully I've painted in broad brushstrokes that:</p>
<ul>
<li>LLM generated code should be judged contextually based on the development
category appropriate for each project</li>
<li>Bespoke software powered by AI is expanding what we consider viable projects,
but only when we systematically match AI collaboration approaches to project
requirements</li>
</ul>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why I Chose Boring]]></title>
            <link>https://www.strang.es/why_i_chose_boring.html</link>
            <guid>https://www.strang.es/why_i_chose_boring.html</guid>
            <pubDate>Mon, 06 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Building an events platform with the standard tools everybody loves to hate]]></description>
<content:encoded><![CDATA[<p>Getting my feet wet in full-stack development has been an... interesting
journey. There's a world full of new and shiny frameworks that all claim to
reduce friction and streamline development and deployment for the modern
developer. My experience, however, has been anything but.</p>
<p>This post explains why it's better to choose the boring tools and "outdated"
patterns that everybody loves to hate. These are the tools that shine in the
context of AI-assisted development: facilitating rapid prototyping through
patterns the model can actually predict, and staying terse enough to fit inside
the average LLM context window.</p>
<p>Before we dive into it, I need to get the definitions out of the way...</p>
<h2>Definitions</h2>
<p><strong>the stack</strong> /stæk/ <em>noun, informal</em></p>
<p>A collection of tools chosen with optimism and maintained begrudgingly. If
you're still using it at MVP, it's in the stack.</p>
<p><em>See also: resume-driven development, "I should rewrite this"</em></p>
<hr>
<p><strong>the stuck</strong> /stʌk/ <em>noun, informal</em></p>
<p>Tools and patterns that seduce you during prototyping but get thrown out by MVP.
Characterized by "one more fix" sessions, infrastructure rabbit holes, and the
realization that you're deleting weeks of work.</p>
<p><em>See also: premature optimization, yak shaving</em></p>
<hr>
<p><strong>the boring</strong> /ˈbɔːrɪŋ/ <em>noun, informal</em></p>
<p>Tools and patterns that get out of your way and let you ship. Characterized by
extensive documentation, predictable behavior, and the comforting feeling that
you're building features instead of debugging infrastructure.</p>
<p><em>See also: battle-tested, "it just works"</em></p>
<hr>
<h2>Introducing The Project</h2>
<p><a href="https://convocare.net">Convocare</a> is an events platform for young adult
Catholics in Melbourne. Users discover events through a map, calendar, or card
grid, and can save favorites for later. Admins manage everything through a
dashboard that handles event creation/editing, scheduling, and approvals.</p>
<p>The application is fully responsive across desktop and mobile, and includes
features like recurring event support, image uploads, and location mapping.
While building the site, I learned just how important choosing the right stack
is in the context of AI assisted development.</p>
<h2>"The Stuck": Convocare Edition</h2>
<h3>Version 0</h3>
<p>My first attempt at getting Convocare up was a mess. Although the site was
functionally working, it was tied together with spaghetti, and used a backend so
tightly coupled to the architecture that a full rewrite became the easiest
option...</p>
<p>Let's take a look at the 3 main issues with my first attempt.</p>
<h4>LLMs And DRY; A Love Story</h4>
<p>In the AI assisted development discourse I see online these days, there is one
crucial detail I think is being overlooked:</p>
<p><strong>DRY violations are self perpetuating when developing with AI.</strong></p>
<p>It's easy to be enticed by the quick prototyping that comes with working with
AI; however, each DRY violation you accept will ironically create a pattern
which the LLM will use to violate DRY again. This emergent behaviour is unique
to LLMs: a human can always choose to write the next feature more cleanly, but
with agentic tools contextualising the patterns of the entire codebase, each
mistake will perpetuate itself.</p>
<p>In my case, I slipped into bad habits when creating this prototype, allowing
repeated code to be littered throughout the codebase, rather than creating clean
abstractions. When I tried to steer the AI into a cleaner way of doing things, I
found it was already too late, as it was clear the model was recognising the
repetition as its own pattern that should be followed.</p>
<p>This is <strong>garbage in, garbage out for the agentic era</strong>, and should be avoided
for anything bigger than a toy project.</p>
<h4>Svelte</h4>
<p>My first iteration of Convocare was built using Svelte and SvelteKit with
Claude Sonnet 3.5/4. <em>This was a big mistake</em>. I think the lack of quality
training data for new and clever frameworks and patterns, like those Svelte
offers, means the code that LLMs write is poorer.</p>
<p>The code it wrote was, again, functional, but extremely verbose. I couldn't
get the model to stick to a single pattern, and even when it did, the patterns
weren't idiomatic Svelte.</p>
<p>I should've just used React from the start as it has tonnes of great training
data to use for all kinds of problems.</p>
<h4>Monolithic Architecture</h4>
<p>This one's on me... I was enticed by the idea of simplifying the infrastructure
to a single box, which is great for cost savings for a specific set of projects,
but certainly overkill for Convocare. Still, I forged ahead, until I got bogged
down in infrastructure hell. I spent more time debugging why an Nginx reverse
proxy was grabbing its configuration from an unexpected file than implementing
features that users would actually want.</p>
<p>Even worse, I actually considered hosting my own map tiles instead of just
defaulting to using Mapbox (how likely is it I'll ever leave the free tier?).</p>
<p><strong>AI thrives when you can isolate issues to a small area</strong>. A monolithic
architecture doing 10 things at once is the nightmare scenario for an AI agent:
it will struggle to pinpoint where something has broken. Add to that the fact
that agentic tools can't SSH into remote servers to debug config issues
(probably a good thing, security-wise), and you're back to manual debugging.</p>
<h3>Version 1</h3>
<p>Coming back to the project with a fresh set of eyes, I was able to recognise
the mess I had gotten myself into. That's not to say I had learned my lesson,
though: yet again I was tempted by technologies and approaches that didn't lend
themselves to AI-assisted development, and quickly got bogged down in technical
debt and problems!</p>
<h4>Rolling My Own API</h4>
<p>Realising the mistake of a monolithic architecture, I think I swung the pendulum
too far the other way. I decided to roll my own REST API to maximise deployment
flexibility; using a combination of Drizzle, Zod, and Better Auth (all excellent
tools in the right context).</p>
<p>I got about 80% of the way through writing the API routes before I realised the
code was unmaintainable. The issue was consistency: each generated route had
slightly different patterns for validation, error handling, auth, etc.</p>
<p>In hindsight, I think this had a lot to do with the context window. The
conversation would compact before a single route was finished, meaning Claude
would lose the context of earlier patterns. Unless I manually fed it examples
from previous routes (and I rarely did), each new endpoint reinvented the
wheel, never thinking to check whether any other routes had already been
created to reference.</p>
<h4>Supabase CLI And Declarative Schema</h4>
<p>Supabase is legitimately incredible, but for now, Claude can't leverage it
effectively. I had two major issues when trying to get Claude to work with
Supabase: use of the CLI, and working with migrations and declarative schema.</p>
<p>Normally, my go-to fix when Claude struggles with a CLI tool is to either find
the relevant documentation and provide it as a file to be read, or just paste
the entire documentation into the chat window. These approaches had worked for
every tool I'd had issues with, up until the Supabase CLI.</p>
<p>Claude would continually provide incorrect information on how to use the CLI,
edit migration files that had already been run against the database, and
misinterpret how to work with declarative schema. Despite being given the
context it needed to solve the problem, it was still unable to find a solution.</p>
<p>There are two reasons I think this occurred:</p>
<ol>
<li>The Supabase CLI is a local first tool, but the AI kept treating it like it
was applying changes on the remote database.</li>
<li>The declarative schema approach that I was excited to use (coming from
Terraform), isn't feature complete. The tooling encourages you to think in
terms of migrations, but the declarative layer creates confusion about what's
already applied vs what needs to change.</li>
</ol>
<p>In the end, I did most of the Supabase schema work manually through the
dashboard and SQL editor. Not ideal, but not the end of the world.</p>
<h2>"The Boring": Convocare Edition</h2>
<p>After two failed attempts, here's the stack that actually got built:</p>
<ul>
<li>Frontend<ul>
<li>Next.js, React, TypeScript</li>
<li>Tailwind CSS</li>
<li>Vercel (hosting &amp; deployment)</li>
</ul>
</li>
<li>Backend/Data:<ul>
<li>Supabase (Postgres + Auth + Storage)</li>
<li>Google OAuth (via Supabase)</li>
<li>Next.js API routes (middleware layer)</li>
<li>Sharp (image processing)</li>
</ul>
</li>
<li>External Services:<ul>
<li>Mapbox (map tiles)</li>
<li>Route 53 (DNS)</li>
</ul>
</li>
<li>Validation &amp; Type Safety:<ul>
<li>TypeScript (compile-time checks)</li>
<li>Zod (runtime validation)</li>
</ul>
</li>
<li>Helpful AI Tools:<ul>
<li>v0 (frontend component designer)</li>
<li>Claude Code (Agentic Tool)</li>
<li>Sonnet 4.5 (Model)</li>
</ul>
</li>
</ul>
<h3>Benefits Of The Boring</h3>
<h4>"The Developer Experience"</h4>
<p>Before working on Convocare, I thought "the developer experience" was only ever
the butt of a joke about frontend devs who were scared of touching hardware. I'm
ashamed to report, however, that the developer experience is REAL.</p>
<p>The Next.js w/ Vercel and Supabase stack is incredible. The following three
commands essentially allowed me to forget about infrastructure and just work on
implementing features:</p>
<pre><code class="hljs language-bash">vercel --prod
npx supabase db push
npx tsx supabase-seed.ts</code></pre><p>Both Vercel and Supabase have sane defaults, hard billing limits, and reasonable
free tiers, which really give peace of mind for small projects. Moreover, both
are flexible in terms of deployment, so in the unlikely case Convocare gets to a
point where cost becomes a real issue, we have options with where to run things
without too much rigmarole.</p>
<p>In particular, using Supabase's RLS, and views rather than joins for public
queries, makes it easier to separate concerns and creates a mental model of
"Database stuff? That's handled by the database".</p>
<h4>LLM Friendly API Routes</h4>
<p>Simplifying your API routes allows you to leverage AI debugging far more
effectively, thanks to the unidirectional and linear patterns that AI seems to
thrive on.</p>
<p>In my case, I used URL parameters over state management in React (think
<code>?filter=upcoming&amp;sort=date</code>). This allowed issues to be quickly identified, as
the LLM doesn't need to contextualise several processes at once, and doesn't
need to mock in order to start debugging.</p>
<p>In short:</p>
<ul>
<li><strong>A route an AI can debug easily</strong>: fetch, validate, display</li>
<li><strong>AI's nightmare route</strong>: query, cache, invalidate, retry, error, retry.</li>
</ul>
<h4>Context Is King</h4>
<p>Boring tools are terse and clear. The more terse (yet still intelligible) the
information you give the AI, the more context you have left for actual solutions
to the problem you're working on.</p>
<p>A great example of this is Tailwind, which plays extremely well with Sonnet.
Because most of the CSS lives in the context of your HTML, the LLM doesn't need
to track file references and state throughout your codebase, so the changes it
makes actually work.</p>
<h2>Conclusion</h2>
<p>Let's have another look at those definitions. In short:</p>
<ul>
<li><strong>the stack</strong>: tools used to reach MVP</li>
<li><strong>the stuck</strong>: tools you ditch by MVP</li>
<li><strong>the boring</strong>: tools that make reaching MVP a breeze</li>
</ul>
<p>By committing to <strong>the boring</strong>, you'll be in a much better position to leverage
AI-assisted development and deliver a functional platform in a fraction of the
time it would take if you chased the new and shiny, like I first did...</p>
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I Built A Serverless Calendar SMS Service On AWS Because I Refuse To Carry A Smartphone]]></title>
            <link>https://www.strang.es/building_a_serverless_sms.html</link>
            <guid>https://www.strang.es/building_a_serverless_sms.html</guid>
            <pubDate>Sun, 21 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Yes, it is as ridiculous as it sounds. If you haven't read my article
<a href="/dumbphones_are_getting_dumber.html">Dumbphones are getting dumber</a>, that's the
less technical primer to this post.</p>
<p>If you don't want to read that, the <strong>TL;DR</strong> is that there are specific pain
points that I've encountered since switching to a dumbphone. One I still haven't
sufficiently alleviated yet is: <em>integrating my calendar system into my daily
workflow without a smartphone</em>.</p>
<p>The Terraform files, Lambda functions and bash script used to run this project
are available on a
<a href="https://github.com/samuelstranges/chronos_sync">GitHub repo</a>. It's still very
experimental, so use at your own risk.</p>
<p>Anyways, this blog post outlines one approach I took to solving this issue.
Although I'm not using it currently (basically due to the price per SMS charged
by AWS), I still think it's worth sharing.</p>
<h2>Constraints</h2>
<p>Teasing out the problem a little more from just '<em>smartphone bad, but i miss
calendar</em>', I figured the solution to integrating my dumbphone with my calendar
system would need the following:</p>
<ul>
<li>Work with my calendar application of choice (more on this in a minute)</li>
<li>Be automated; no manual uploads</li>
<li>Endpoint needs to be SMS (for dumbphone notifications)</li>
<li>Be a serverless solution; I don't want to pay for and maintain a server
just for this</li>
<li>Use AWS services</li>
</ul>
<h2>Designing The Initial Architecture</h2>
<p>The AWS ecosystem is large, complex, and at times downright confusing. Figuring
out what would go at the end of the pipeline was straightforward, though: an SMS
endpoint screams SNS topic. Obviously, the main difference from most projects is
that my topic would have exactly one subscriber; me!</p>
<p>The design of the pipeline feeding the SNS topic could be more flexible,
however. The underlying calendar software would end up guiding this decision.</p>
<p>For a while I've been using my own fork of
<a href="https://github.com/HubertBel/lazyorg">Lazyorg</a> to manage my calendar from my
laptop. It's a cool little TUI built with <code>gocui</code>, and the events are backed
by an SQLite database. To get these events pushed to my dumbphone, I'd need to
build a pipeline that uploads the database, converts the events to messages, and
places them in the SNS topic.</p>
<p>Working with a small SQLite database lends itself well to AWS Lambda. I
therefore settled on a solution that uploads the SQLite database file to an S3
bucket and, via S3 event notifications, creates EventBridge schedules for each
upcoming event in the database. Each schedule calls a second Lambda function 15
minutes before its event; that function formats the event as a message and
publishes it to an SNS topic, to which my mobile phone is subscribed.</p>
<p>I considered offloading some of the more complex transformations
and parsing to my local machine before uploading, which would remove
the need for an S3 bucket or a secondary Lambda function. However, the
architecture above provided the greatest flexibility to alter the design
later, as well as finer control over the message content that would
be sent over text.</p>
<h3>Overview Of Pipeline</h3>
<p><em>Local cronjob -&gt; S3 -&gt; Processing Lambda -&gt; EventBridge -&gt; Notification Lambda
-&gt; SNS Topic -&gt; Dumbphone</em></p>
<h2>Implementing Prototype</h2>
<p>With the architecture decided, I moved onto implementation. I set up a cronjob to
upload my SQLite database once a week to an S3 bucket. This served a
beneficial secondary purpose: a weekly backup of my calendar!</p>
<p>I used Claude Code to whip up a quick prototype Lambda function that would
trigger on any upload of a file to complete the following:</p>
<ol>
<li>Download the <code>.sqlite3</code> database to the Lambda's <code>/tmp</code> directory</li>
<li>Perform an SQL query to grab upcoming events (within the next 7 days)</li>
<li>Clear any existing EventBridge schedules from previous uploads</li>
<li>Create new EventBridge schedules that trigger a second Lambda function 15
minutes before each event in my calendar</li>
</ol>
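<p>A minimal sketch of steps 2 and 4 (the helper names are my own assumptions about
the implementation, not the project's actual code; the <code>at(...)</code> string is
EventBridge Scheduler's one-off schedule expression format):</p>

```python
import sqlite3
from datetime import datetime, timedelta

def upcoming_events(cursor: sqlite3.Cursor, days: int = 7):
    """Step 2: fetch events in the next `days` days (times stored as UTC strings)."""
    now = datetime.utcnow()
    start = now.strftime('%Y-%m-%d %H:%M:%S')
    end = (now + timedelta(days=days)).strftime('%Y-%m-%d %H:%M:%S')
    cursor.execute(
        "SELECT id, name, description, location, time, duration "
        "FROM events WHERE time > ? AND time <= ? ORDER BY time",
        (start, end),
    )
    return cursor.fetchall()

def schedule_expression(event_time_utc: datetime, lead_minutes: int = 15) -> str:
    """Step 4: a one-off EventBridge Scheduler expression, 15 minutes before the event."""
    trigger = event_time_utc - timedelta(minutes=lead_minutes)
    return f"at({trigger.strftime('%Y-%m-%dT%H:%M:%S')})"
```

<p>The expression from <code>schedule_expression</code> would then be passed to the Scheduler's
create-schedule call, with the notification Lambda as the target.</p>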
<h3>Fighting Dependencies</h3>
<p>Something I wasn't quite happy about, however, was how the underlying
dependencies of the project led to suboptimal code being produced for the Lambda
sections of the pipeline.</p>
<p>One great example of this is with date parsing. Python has a wonderful
<code>datetime</code> library (especially compared to Go, which I'm using for other
projects), but it felt a little awkward integrating the SQLite database, which
stores dates and times as strings, with the inbuilt <code>datetime</code> implementation.</p>
<p>My calendar stored everything as UTC strings, so we had to do the following:</p>
<pre><code class="hljs language-python"><span class="hljs-comment"># Calculate time window: now to 7 days in the future</span>
now_utc = datetime.utcnow().strftime(<span class="hljs-string">'%Y-%m-%d %H:%M:%S'</span>)
end_date_utc = (datetime.utcnow() + timedelta(days=<span class="hljs-number">7</span>)).strftime(<span class="hljs-string">'%Y-%m-%d %H:%M:%S'</span>)

<span class="hljs-comment"># Query events within the 7-day window</span>
cursor.execute(<span class="hljs-string">"""
    SELECT id, name, description, location, time, duration
    FROM events
    WHERE time &gt; ? AND time &lt;= ?
    ORDER BY time
"""</span>, (now_utc, end_date_utc))</code></pre><p>And then for each event, parsing the strings back into Python <code>datetime</code>
objects:</p>
<pre><code class="hljs language-python"><span class="hljs-keyword">if</span> <span class="hljs-string">'+'</span> <span class="hljs-keyword">in</span> time_str <span class="hljs-keyword">or</span> <span class="hljs-string">'T'</span> <span class="hljs-keyword">in</span> time_str:
    event_time_utc = datetime.fromisoformat(time_str.replace(<span class="hljs-string">'+00:00'</span>, <span class="hljs-string">''</span>)).replace(tzinfo=<span class="hljs-literal">None</span>)
<span class="hljs-keyword">else</span>:
    event_time_utc = datetime.strptime(time_str, <span class="hljs-string">'%Y-%m-%d %H:%M:%S'</span>)</code></pre><p>It certainly felt like I was <em>fighting the data format</em> rather than working with
it. Regardless, we pushed on.</p>
<h2>Testing The Prototype</h2>
<p>Despite some questionable data conversion, the prototype was largely working,
outside of the inevitable timezone issues you run into when developing anything
calendar adjacent. Crucially, I was receiving my calendar notifications 15
minutes before the events occurred, as expected.</p>
<p>This was great, until I stopped receiving notifications for my events about a
day or so later...</p>
<p>To debug, I first went through the logs of the first Lambda function's
executions. This wasn't it; they were triggering fine. Next, I checked the
EventBridge schedules; these were created and set to trigger at the right
time too! That narrowed the problem down to the final Lambda, or so I
thought...</p>
<p>This was working fine too! I was befuddled; the entire pipeline seemed to be
working, but I wasn't getting any messages?!?!</p>
<p>Then I realised something... <strong>I wasn't receiving SMS messages from AWS <em>AT
ALL</em></strong>! I had set up a separate SNS-to-SMS topic for the purposes of cost
monitoring, and its messages weren't reaching my phone either! This was weird;
surely my pipeline wasn't affecting other SNS topics I'd set up, right?</p>
<p>Well, it turns out I'd fallen victim to
<a href="https://docs.aws.amazon.com/sms-voice/latest/userguide/quotas.html">AWS quotas</a>.
AWS has a default spending threshold of <strong>$1 per account</strong> for SMS. I'm sure
this helps prevent a significant amount of spam, but I was a little frustrated
that I wasn't informed upon reaching the quota; I didn't even know it existed!</p>
<p>The funny thing about this prototype was that the most difficult part was
requesting an increase on my quota (which took a number of days to be approved).
I felt a little silly filling out a form clearly meant for much larger
organisations running sophisticated campaigns and programs, requesting a $15 a
month spend and an intended audience of... <em>myself</em>.</p>
<p>Nevertheless, when the quota increase was approved, we were back in business,
and receiving SMS messages again.</p>
<h2>Reflections On Prototype</h2>
<p>Living with the prototype for a while made me reflect on the architecture and
decisions I had made, and how it could be improved.</p>
<p>The primary drawback of the prototype was that the calendar was backed by an
SQLite database rather than a more standard iCal based system. This meant that
the Lambda functions I developed felt a little janky, and the entire system
felt brittle because of it. Also, being a prototype, I hadn't written any
tests (<em>iT wOrKs On mY MaChInE</em>).</p>
<p>Really, these problems could have been avoided if I'd given less control of the
design of the Lambda functions to Claude Code, and taken more time teasing out
the detailed instructions it works best with. Instead, I ended up with
giant <a href="https://en.wikipedia.org/wiki/Cyclomatic_complexity">cyclomatic complexity</a>
issues and spaghetti code that, while working, wasn't worth going back to fix
(looking at you, magic numbers and giga-methods).</p>
<p>Another problem with the prototype was that the build wasn't reproducible; I
used the AWS console to set everything up, when I should've used CloudFormation
or Terraform.</p>
<p>Finally, the S3 cronjob upload method I relied on to trigger the serverless
pipeline wasn't particularly reliable; my machine obviously needs to be on when
the cronjob runs, which means that if my laptop lid was closed at the scheduled
time, the pipeline wouldn't be set off <strong>at all</strong> (until I noticed
I wasn't receiving SMS messages and manually uploaded the database to S3).</p>
<p>Moreover, this architecture, specifically this use of S3, meant that it would
only ever support one calendar, as each upload overwrites the previous
calendar's events. Why? S3 event notifications trigger on <em><strong>every uploaded
file</strong></em>, not on every upload batch, which prevents uploading multiple calendars
simultaneously.</p>
<h2>Building V2</h2>
<p>Despite the issues in the prototype, I was happy <em>enough</em> with how it was
running.</p>
<p>The trigger to create a V2 of this project was actually building <code>Chronos</code>, my
own calendar TUI. I'll be releasing it in the next couple of months. It's backed
by a more standard iCal compliant system. Making the switch to this really
forced my hand, and allowed me to make some major changes to this notification
project.</p>
<h3>Architectural Changes</h3>
<p>The major design difference from the prototype was in how the S3 bucket is used.
In the prototype, an upload would trigger the Lambda
via S3 event notifications, but since I wanted to support multiple calendars
(with <code>Chronos</code>: multiple <code>.ics</code> files), I needed to automatically trigger
the Lambda function in a different way. Since calendar files are pretty small
(usually measured in kilobytes), they can be sent to the Lambda function as
part of the invocation payload.</p>
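<p>A rough sketch of what a payload-based trigger could look like (the payload
shape and the function name are hypothetical, not <code>Chronos</code>'s actual client
code):</p>

```python
import base64
import json
from pathlib import Path

def build_payload(ics_paths):
    """Bundle one or more small .ics files into a single Lambda payload.
    Base64 keeps the JSON safe regardless of the calendar's encoding."""
    return json.dumps({
        "calendars": {
            Path(p).name: base64.b64encode(Path(p).read_bytes()).decode("ascii")
            for p in ics_paths
        }
    })

# An asynchronous (fire-and-forget) invocation would then look roughly like:
#   import boto3
#   boto3.client("lambda").invoke(
#       FunctionName="chronos-sync-processor",   # hypothetical function name
#       InvocationType="Event",
#       Payload=build_payload(["work.ics", "personal.ics"]),
#   )
```

<p>Lambda caps synchronous payloads at 6&nbsp;MB (less for async), which is plenty
of headroom for a handful of calendars.</p>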
<p>I still kept the S3 bucket as part of the pipeline, however, as I found
it handy to have a second place my calendar is saved to. It isn't really
needed to process the events, though, as the Lambda does all of the lifting now.</p>
<p><img src="https://raw.githubusercontent.com/samuelstranges/chronos_sync/main/serverless.png" alt="The final architecture diagram"></p>
<p>Another massive change in V2 is to the reproducibility of the project. Now,
we're using Terraform. This simplified the orchestration of the AWS services,
and I really <strong>LOVE</strong> the declarative approach it takes to spinning up and
managing running services. Now that I've used Terraform for this project, I don't
think I'll be using anything else.</p>
<p>One challenge that moving to Terraform did present ended up being very beneficial:
it forced me to use best practices and to avoid shortcuts. One example of this
was the packaging of the libraries used in the Lambda functions. The recurrence
functions that expand iCal events into individual objects aren't part
of the standard <code>datetime</code> library in Python; instead, you would use a
<a href="https://pypi.org/project/recurring-ical-events/">separate library</a> or roll your
own (RIP prototype recurrence generator). Terraform encouraged me to set up a
Lambda Layer rather than manually package libraries as part of the Lambda
function itself, simply by making a Lambda Layer <strong>dead simple</strong> to
set up and manage.</p>
<p>Another key difference is the robustness and stability of the software. While I
again used Claude Code to develop the Python Lambda functions, I took the time
to control the design and implementation, using an iterative process to break
down the complex functions into smaller, composable, and most importantly,
testable functions. I used <code>pytest</code> and <code>moto</code> to mock and test the Lambda
functions (I'll be honest, I just vibed the tests, outside of making sure they
focused on timezone handling). Despite this, we still encountered a couple of
bugs... <em>timezones and missing events, a tale as old as time</em>.</p>
<h3>Implementation Challenges</h3>
<p>One timezone bug (now patched) would occur every time the first Lambda function
was triggered: events up to 10 hours in the past
would immediately be texted to me. Since I'm in UTC+10, it was pretty clear what was
going on. Digging a little deeper, it occurred when events had no timezone
attached. So, adding a default timezone (Melbourne), implemented with a
conditional check for timezone existence, fixed it.</p>
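<p>The fix can be sketched with Python's standard <code>zoneinfo</code> module (a minimal
sketch of the idea; the real Lambda's function names differ):</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

DEFAULT_TZ = ZoneInfo("Australia/Melbourne")  # default for naive (floating) events

def to_utc(event_time: datetime) -> datetime:
    """Attach the default timezone to naive event times, then normalise to UTC.
    Without this, naive times were read as UTC and fired up to ~10-11 hours early."""
    if event_time.tzinfo is None:
        event_time = event_time.replace(tzinfo=DEFAULT_TZ)
    return event_time.astimezone(timezone.utc)
```

<p>Events that already carry a timezone pass through untouched; only the naive ones
get the Melbourne default.</p>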
<p>Another bug was with EventBridge schedule groups. Rather than iterate
through all existing EventBridge schedules and delete each one individually, I
decided it would be more elegant to delete the entire EventBridge group. The only
problem was that I wanted to immediately write to this group again.
The <code>boto3</code> function that deletes EventBridge groups operates asynchronously, so I
had to introduce a manual <code>sleep()</code> that gives AWS enough time to delete the
group before attempting to write to it again, else the Lambda would error out.
If anyone wants to measure how long this takes to save on Lambda
compute, be my guest.</p>
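<p>If you'd rather avoid the fixed <code>sleep()</code>, polling until the group is actually
gone is one option. A sketch (my assumption of the shape, not the project's code:
with <code>boto3</code>, <code>group_exists</code> would wrap
<code>scheduler.get_schedule_group()</code> and treat
<code>ResourceNotFoundException</code> as deleted):</p>

```python
import time

def wait_until_deleted(group_exists, timeout=60.0, interval=1.0):
    """Poll until the schedule group is gone, rather than sleeping a fixed amount.
    `group_exists` is any callable returning True while AWS still reports the group."""
    deadline = time.monotonic() + timeout
    while group_exists():
        if time.monotonic() >= deadline:
            raise TimeoutError(f"schedule group still exists after {timeout:.0f}s")
        time.sleep(interval)
```

<p>This pays only for the time AWS actually takes, and fails loudly if deletion
stalls instead of racing ahead and erroring on the write.</p>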
<h2>A Little Vent On IAM &amp; LLMs</h2>
<p>Why do LLMs seem to love poor security practices, particularly when
it comes to creating IAM policies? When building V2, Claude Code would create policies
with permissions wildly out of scope for the project, and even the necessary
permissions would be far too broad. Several times I
found myself reviewing generated policies that would be applied to all of my
buckets, or to all of my EventBridge schedules, despite the fact that Claude
Code had basically the entire project, and the previous secure policies from the
prototype, in its existing context window.</p>
<h2>Caution Before Running</h2>
<p>So... You read the title, you got a little excited, you cloned the repo,
modified the Terraform state location variable, added your phone number, and
you're ready to run your own personal SMS calendar solution. Well... at least if
you're in Australia, maybe think twice before jumping in with <code>terraform apply</code>.</p>
<p>The cost per SMS varies widely across different countries, and over here, it
ends up being about $0.04
<a href="https://aws.amazon.com/sns/sms-pricing/"><strong>per text</strong></a>. I use a
<a href="https://en.wikipedia.org/wiki/Timeblocking">timeblocking</a> methodology to
organise my schedule, so I end up with about 10-15 events per day. Over the
course of the month, this adds up to about <strong>$30AUD</strong>, which is more than I
spend on my entire mobile plan.</p>
<p>The other major downside/quirk of this system is the sender ID. When you get
a message, it's not from '<strong>Sam's iCal</strong>', but rather '<strong>UNVERIFIED</strong>'. Outside
of testing prototype systems, it's expected that you register a
<strong>sender ID</strong> and set up an <strong>origination identity</strong>, which can be quite a
costly and time consuming process. Perhaps the market for rolling your own
private SMS notification system might be a little smaller than I initially
thought...</p>
<p>However, if you're a dumbphone user flush with cash, and can mentally map
'UNVERIFIED' to 'My Calendar', then check out the
<a href="https://github.com/samuelstranges/chronos_sync">repo</a>!</p>
<h2>Conclusion &amp; Future</h2>
<p>I had a fair bit of fun getting this project up and running. However, it turned
out to be an expensive education. Serverless is cheap, and Terraform is awesome,
but AWS has more hidden quotas than you can poke a stick at. Most importantly, I
proved that you can solve a $0 problem with a $30/month solution <em>smh</em>...</p>
<p>So, at least for now, I won't be using this system to push my calendar to my
dumbphone. Having had some more time to reflect on the problem, I think the
actual solution would be to use the inbuilt, barely working web browser on my
dumbphone.</p>
<p>Perhaps I could set up a really simple webpage that my dumbphone could access,
with simple password authentication provided by HTTP Basic Auth, secured
by enforcing an HTTPS connection. This would certainly be more flexible;
however, the obvious downside is that there wouldn't be notifications. I might
have to use my brain every now and again...</p>
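<p>A minimal sketch of that idea with Python's standard library (the credentials
and page content are placeholders, and in practice you'd terminate HTTPS in
front of this, e.g. with a reverse proxy):</p>

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

USERNAME, PASSWORD = "sam", "hunter2"  # placeholder credentials

def authorised(header):
    """Check an HTTP Basic Auth header of the form 'Basic base64(user:pass)'."""
    expected = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    return header == f"Basic {expected}"

class CalendarPage(BaseHTTPRequestHandler):
    def do_GET(self):
        if not authorised(self.headers.get("Authorization")):
            # Prompt the browser for credentials
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="calendar"')
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<h1>Today</h1><p>10:00 Lunch</p>")  # placeholder page

# HTTPServer(("0.0.0.0", 8080), CalendarPage).serve_forever()  # run behind TLS
```

<p>Basic Auth sends credentials in cleartext, which is why the HTTPS requirement
isn't optional here.</p>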
]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Dumbphones Are Getting Dumber]]></title>
            <link>https://www.strang.es/dumbphones_are_getting_dumber.html</link>
            <guid>https://www.strang.es/dumbphones_are_getting_dumber.html</guid>
            <pubDate>Wed, 17 Sep 2025 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Switching to a dumbphone was the best decision that I made in 2025. It's made me
far more intentional with regards to how I use technology, saved me countless
hours of mindless scrolling, and forced me to make connections in reality rather
than virtually. I plan on writing an article in the near future about the
positive aspects of this switch, but, at least in this article... I need to
vent.</p>
<h2>Switching Is Becoming Increasingly Difficult</h2>
<p>Here's the problem with switching to a dumbphone in 2025. We're transitioning
into a world that assumes constant connection and access to the internet at any
given time. Without intentionally opting out of this, it can be hard to see that
the infrastructure we once took for granted is quietly being eliminated. For
me, this has manifested in several awkward situations since I made
the switch.</p>
<p>More concerning, however, is knowing that this societal "paywall" is pushing out
an increasing number of people who don't have access to technology for
financial reasons, cultural reasons, or simply through technical illiteracy.</p>
<p>So, I'd like to share some of the troubles I've encountered since making the
switch to demonstrate how our increasing dependence on smartphones is creating
barriers that extend far beyond personal choice.</p>
<h2>WhatsApp</h2>
<p>WhatsApp is ubiquitous, and since moving away from social media, it has become
my primary method of communication. However, Meta is certainly making it
difficult for dumbphone users.</p>
<p>WhatsApp technically doesn't need a smartphone to work, just a phone number.
However, in practice, I've found it incredibly difficult to set up and manage
without one. The main issue is that its web and desktop applications must be
set up via a smartphone.</p>
<p>Currently, my workaround is to set up these connections using WhatsApp on Android
via an Android emulator (Bluestacks Air for MacOS). This is far from perfect:
the app frequently desyncs, meaning I lose my message history and need to reset
my contacts from scratch.</p>
<p>What is particularly annoying about the restrictions: the workaround won't work
with the MacOS desktop version, only with WhatsApp for Web (or the
<a href="https://github.com/d99kris/nchat">nchat</a> CLI if you're into that sort of
thing). This is because the authentication method requires scanning a QR code on your
desktop app from your smartphone (no text message auth available). I haven't
been able to get this working with Bluestacks Air (although apparently a recent
update may change this; I'm scared I'll lose my chat history
again).</p>
<p>So much of our daily communication has moved to WhatsApp, particularly in the
area of public groups. Email lists and group texting have been deprecated in
favour of group chats on WhatsApp; event organisation, work group chats, and parish
mass schedules all demand participation via a smartphone.</p>
<h2>Tech Infrastructure Is Changing</h2>
<p>Switching to a dumbphone has also revealed to me how quickly we abandon working
technology for marginal improvements, systematically excluding people who can't
afford constant upgrades.</p>
<p>My first encounter with this was actually when attempting to set up my dumbphone in
the first place. When I decided to make the switch, I figured I'd just use my
old phone from years back and be done with it. Well, I was a bit miffed to learn
that Australia, as of November 2024, has shut down its 3G networks. This meant
that my old phone was off the table, as well as any other second hand
device I might have wanted to purchase. There are plenty of dumbphones out there
that can still play Snake just fine, but not many that can still
make a phone call...</p>
<p>Purchasing a dumbphone in 2025 means purchasing one with features I would rather
not have, as I'm really trying to be intentional with technology, and
unnecessary features have always contributed to my excessive
use. The only "dumbphones" I was able to find for purchase that operate on the
4G network all have internet access, which I find incredibly frustrating.</p>
<p>This frustration extends to mobile plans too; I couldn't find a single service
provider that offers voice and text without data. My plan was recently
"upgraded" both in amount of data available and in price. I'm literally paying
extra now for the internet access I'm trying to avoid, and most sites (including
this blog) don't even work on low data connections like dumbphones in the first
place!</p>
<p>Perhaps most heartbreaking was meeting a community member who had
recently spent money they could hardly spare on a phone, only to discover it
would stop working entirely due to network changes they'd never heard about or
understood. It's become clear to me that forced upgrades disproportionately
impact vulnerable populations.</p>
<h2>Getting Around</h2>
<p>Travel has been perhaps the most difficult part of migrating to a dumbphone. So
much of how we get around these days has become tied up with using our
smartphones.</p>
<p>The obvious change is navigation. Satellite navigation is the one feature I most
wanted to keep when I made the switch, so naturally I set out to find a GPS
system. It was surprisingly difficult; most are as expensive as modern
smartphones, and lack the basic features to be reliable for day to day driving.
The first one I bought I returned within an hour. I ended up settling on a
Garmin Drive, and just deal with the lack of real time traffic and outdated
mapping.</p>
<p>Parking is another pain point. The humble parking meter has largely been
replaced by the smartphone app. Just yesterday I was trying to find a park at a
library. When I arrived at the meter, I was warmly greeted with a laminated sign
asking me to use an app or call a number to find alternatives. I decided it
wasn't worth the hassle, and drove off to find another library. For people
without smartphones, these "improvements" to parking systems can make basic
civic services like libraries effectively inaccessible to the people they have
the greatest duty to serve.</p>
<p>Perhaps more subtly, I've noticed the loss of the "landmark". We used to say
"Meet me under the statue at the State Library, then we'll walk to Joe's Cafe."
Now it's just "Meet me at Joe's Cafe". There is an implicit assumption that
didn't exist 20 years ago: that everyone can navigate directly to any named
location. Without access to a smartphone, impromptu get-togethers are more
awkward to organise; my friends will often receive a flow-disrupting text
from me: "Where is that near?".</p>
<h2>QR Codes And Authentication</h2>
<p>I see a QR code and I throw up in my mouth a little.</p>
<p>Coffee shops have replaced punch cards with apps, restaurants offer digital
loyalty deals, and you can save on groceries with app exclusive discounts.
Again, the irony is that these systems have become inaccessible to people who
would most benefit from these cost savings.</p>
<p>Security has also become smartphone dependent. Two-factor authentication is
often only implemented via an app, and not over text, or better yet, using a
hardware key. Logging out of my password manager has on multiple occasions
become quite a headache to recover from, so much so that I've moved my password
management to the Linux tool <code>pass</code>. I've even fallen victim to public WiFi that
needed to authenticate over email!</p>
<p>My least favourite QR code story is from when I attended an event at my local parish.
For the purposes of child safety, we were asked to log our attendance. The
catch, of course, was that there was no option to log my attendance manually,
only via a QR code. I ended up asking a stranger to borrow their phone to sign
in, but I haven't been back since because the whole thing was a bit awkward.</p>
<p>Every QR code has become a reminder that the infrastructure we rely upon is
increasingly reliant upon smartphone ownership.</p>
<h2>You're Always Contactable</h2>
<p>The biggest shift I've noticed is how my inability to be constantly contactable
frustrates others. Our new social contract assumes everyone can instantly
respond to texts, access shared links, and conduct impromptu research.</p>
<p>When I am sent a video or article over text, a delayed response can be seen as
purposely disengaged, when the underlying reason is that it is reasonably
inconvenient to respond to at that moment (texting with a dumbphone), or that I
lack the ability to contextualise the message ("Hey what do you think about this
video?").</p>
<p>A common frustration for me is receiving an image over text. I can rarely
see the content on a small screen, if images even load on my
dumbphone at all! Worse still, there is no easy way of communicating that
you can't view the image without responding via a text that may take a couple of
minutes to write. Yet again we encounter an irony: you can't easily
communicate that you can't easily communicate.</p>
<p>Perhaps most telling for me is how saying "I'll need to check my calendar" has
become a euphemism to decline an invitation. For those of us without a
smartphone, we legitimately need to check our calendars, as it's unlikely we're
carrying them in our pocket that very second. However, this can be seen as
flakiness without explaining the context of not having a smartphone.</p>
<h2>Conclusion</h2>
<p>My choice to use a dumbphone has made me reflect on just how much of what we
consider "choice" in technology adoption has actually become an unreasonable
necessity to participate in the economy and society.</p>
<p>Ultimately, switching to a dumbphone in 2025 isn't just about personal
discipline, unfortunately, it's also about opting out of infrastructure that
increasingly assumes universal smartphone adoption.</p>
]]></content:encoded>
        </item>
    </channel>
</rss>