Monthly Dispatch
We decided to test everything on ourselves
March was about closing the gap between what we recommend and what we rely on. Things had become messy, and we needed to reorganise our processes. Here is what we learned.
We started depending on Shelly
This month Shelly became part of how we operate. We used her support to handle content updates, admin tasks, reporting, and keeping systems running, especially our CMS. We did not treat it as a test. Instead, we made it part of the workflow.
The difference between recommending something and relying on it is immediate. Inefficiencies show up faster, gaps become obvious, and small issues compound quickly.
All sounds cool, but why are we discussing this?
By using Shelly ourselves, feedback comes from our own frustrations, and that is where the real improvement happens.
Work Assignment
How tasks get delegated in practice vs how they are delivered in documents.
Information Flow
Where updates get lost and what structurally fixes it vs what just adds another tool.
Delay Mapping
Bottlenecks rarely live where people think they do. Visibility is the first fix.
Automation Limits
The line between what to automate and what needs human judgment is more nuanced than any checklist.
This shift also changed how we handle client work. Instead of reacting to problems, we now see patterns earlier and anticipate what might be needed.
If you are a current client, you might start hearing from Shelly with suggestions on how to improve different parts of your digital presence.
AI vs AI: sometimes comparisons are necessary
We spent March running comparisons of ChatGPT and Claude across content creation, structured reasoning, operational tasks, and communication output. The goal was not to pick winners or losers, but to understand where each tool performs best so we can use them intentionally.
ChatGPT
Rapid ideation, casual content, broad research. Fast, accessible output.
Claude
Structured reasoning, long-form communication, operational clarity.
In practice
To write this newsletter, we ran drafts through both. ChatGPT helped get ideas out quickly, then Claude helped refine structure and clarity. The final version was revised based on feedback from both.
The advantage is not in using AI, but in practising discernment while using the right combination of tools available.
Shelly, redefined
The most important work this month had nothing to do with features and everything to do with language. How we describe Shelly determines whether someone immediately understands its value or needs three follow-up calls to get there.
When we described Shelly as a virtual assistant or automation tool, people partially understood it, but not clearly. So we changed the framing: Shelly is a human layer that keeps your digital systems running.
What Shelly actually does
We updated the way we present this and created a new brochure. Available on request.
Q2 is about depth.
There is a meaningful difference between recommending systems and relying on them. Q2 continues with a few clear priorities:
- Scale Shelly support: refining how Shelly serves teams that want to grow without growing their operational complexity.
- Publish more openly: sharing the messy, in-progress thinking, not just the polished lessons.
- Integrate AI: optimising AI tools to support real work, including integrations and areas like computer vision where they can solve practical problems.
If this resonated, share it with someone building something of their own.
That is usually where the best conversations start.