From Voice Note to Deployed App in One Commute
I do a lot of my best thinking in the car. Not at a desk, not in a meeting, not staring at a blank screen. In the car, somewhere between the school run and the office, when my mind is free to wander.
For years those thoughts just evaporated. Good ideas, half-formed strategies, connections between things I'd read or heard. Gone by the time I sat down to work. And I'm not unusual in that. Every founder, every business leader, every creative person has this problem. The ideas come when you can't capture them and dry up when you try.
So I started voice noting. Just recording random thoughts, ideas, reactions to things. It was better than nothing but it created a different problem. I now had dozens of audio files sitting in my phone that I'd never go back to. Raw recordings aren't knowledge. They're just noise with potential.
That's where the pipe dream started. What if those voice notes could be transcribed, structured, and stored somewhere useful? What if they could become a knowledge base? And then, because I couldn't help myself, the dream got bigger. What if I could have an idea on the drive to work, send a voice note, and have a full-stack application ready for me to review by the time I got off the train in London?
There were no tools for this. You can't just plug and play. What I'm about to describe took two weeks of refinement to get working, and it's still evolving. But it works. And it's changed how I think about the relationship between human creativity and AI execution.
The Security Paradox
I've always been nervous about installing AI tools on my machine with significant amounts of control. I understand how they work and I'm not comfortable handing over that level of access to my personal computer, my files, my client data.
But here's the tension. To get any real value out of AI tools, you need to give them control. You need to let them execute. Otherwise you're just clicking "approve" on permission after permission after permission, and the whole thing becomes a glorified autocomplete with extra steps.
There has to be a happy medium. And that's where the idea of a VPS comes in.
A virtual private server is essentially a computer in the cloud that you rent. You can install whatever you want on it, give it whatever permissions you want, and critically, it's completely isolated from everything else in your life. Your personal files, your client data, your email, none of it is accessible.
You can install AI agents at the root level of a VPS so that they have complete control within that self-contained unit. Full power in a confined space. That's the happy medium.
The Joy of Always On
The VPS provider I went with was Hostinger. For £5.99 a month, less than a coffee, I could have Claude Code running in the cloud constantly.
This matters more than it sounds. If your AI tools are running on your laptop, they stop when you close the lid. They stop when you go to lunch. They stop when you go home. An AI assistant that only works when you're actively sitting in front of it is fundamentally limited.
Running in the cloud means it never turns off. It's working while you sleep. It's available at 6am when you have an idea in the shower. It's there at 11pm when you remember something you forgot to follow up on. The cost of a coffee for something that never clocks out.
The setup is technical. SSH access, installing Claude Code in the terminal, configuring API keys, hardening the server so only specific ports can communicate. There are security protocols you need to follow. But once it's done, it's done.
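The details vary by host, but the core of that hardening is usually a few sshd settings plus a firewall rule set. A minimal sketch, assuming a Debian-style server with ufw (the ports and policies are illustrative; adapt them to your own setup):

```
# /etc/ssh/sshd_config (excerpt): SSH keys only, no password logins
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password

# firewall: allow only SSH and web traffic, drop everything else
#   ufw default deny incoming
#   ufw allow 22/tcp
#   ufw allow 80/tcp
#   ufw allow 443/tcp
#   ufw enable
```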
Going Mobile
Initially I was SSH-ing in from my laptop to interact with Claude Code on the server. This worked, but it was still tethered to one device. I'd removed password login entirely for security, using SSH keys only, which meant I could only connect from machines that had my key.
I wanted to work from my phone. So I installed a terminal app called Terminus, which let me SSH into the server from mobile. Suddenly I could start something on my laptop, leave the house, and check on it from my phone on the train.
But more importantly, the AI session persists on the server regardless of what device connects. You're not running the AI on your phone. You're just looking through a window at something that's always running. Start a build in the morning, check the results at lunch, refine in the evening. The session never breaks.
From IP Address to Real URLs
At this point Claude Code had the entire server at its disposal. I could ask it to build applications and it would deploy them across the full server resources. I built some test cases and they were available at my VPS's IP address. Functional, but not exactly something you'd share with anyone.
So I set up a custom domain and pointed it to the VPS. This is when I needed Caddy, a reverse proxy that maps local applications running on different ports to proper subdomains. blog.harrylab.tech points to one application, bjj.harrylab.tech points to another. Caddy handles SSL certificates automatically, so everything is HTTPS out of the box.
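The Caddyfile for this kind of setup is pleasantly short. A sketch with illustrative port numbers (each application listens on its own local port; Caddy provisions and renews the certificates on its own):

```
blog.harrylab.tech {
    reverse_proxy localhost:3001
}

bjj.harrylab.tech {
    reverse_proxy localhost:3002
}
```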
This is where it starts feeling real. You're not looking at localhost anymore. You're visiting live websites that Claude Code built and deployed, accessible to anyone with the URL.
The Instruction Architecture
Consistency was the next problem. It's all very well asking Claude Code to build something, but it will build in a completely different way each time. Different technologies, different patterns, different folder structures. Chaos.
Claude Code has a concept called CLAUDE.md, an instruction file that loads at the start of every session. You define how you want things built. Which technologies. Which workflows. Which version control practices. Which design systems. It reads these instructions and behaves accordingly.
My CLAUDE.md grew to about 800 lines. It was comprehensive, but it was consuming a huge amount of context in every single request. Context that should have been spent on the actual task.
The solution was modular documentation. I broke the 800-line monolith into separate specialist documents. Stack guides for development. Design system references. Tool configurations. Workflow definitions. The CLAUDE.md file dropped to about 80 lines, just enough to point Claude in the right direction for whatever type of task it was handling. Development work? Read the stack guide. Content work? Read the writing style doc. Research? Read the tools reference.
This was a significant efficiency gain. Load only what you need, when you need it.
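The slimmed-down file is mostly a router. A sketch of the shape described above (the file names are illustrative, not my actual repository layout):

```markdown
# CLAUDE.md (slim router)

Read the specialist doc that matches the task before starting:

- Development work -> docs/stack-guide.md
- Content work     -> docs/writing-style.md
- Research         -> docs/tools-reference.md

Always: commit every change, follow the design system, ask before deleting.
```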
There's No Ctrl+Z on an AI
The other thing you learn very quickly is that version control is non-negotiable. When you have an agent with full server access that's creating, deleting, and modifying files autonomously, things can and will disappear. There's no Ctrl+Z on an AI within a virtual private server.
Git became the safety net. Every change committed. Every state recoverable. I started with a monorepo for the server itself: all applications, all documentation, all configuration, version controlled. Any change Claude Code makes, any update to the docs, any new application built, it's all tracked. If something goes wrong, we can always go back.
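The habit is mechanical enough to script. A minimal sketch in Python (the repo path and commit-message convention are illustrative, not the actual tooling on my server):

```python
import pathlib
import subprocess

def snapshot(repo: pathlib.Path, message: str) -> None:
    """Stage and commit everything in `repo`, so any state is recoverable."""
    subprocess.run(["git", "-C", str(repo), "add", "-A"], check=True)
    # commit only if something actually changed
    status = subprocess.run(
        ["git", "-C", str(repo), "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    if status.stdout.strip():
        subprocess.run(
            ["git", "-C", str(repo), "commit", "-q", "-m", message],
            check=True,
        )
```

Run after every agent action (or on a timer) and nothing the agent does is ever more than one `git revert` away from being undone.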
The Infrastructure Beneath
What most people don't see is the supporting infrastructure that makes this all work. It's like an iceberg. The visible applications are the tip.
We needed a mail server so applications could send verified emails. That meant Postfix with proper DNS records, SPF, DKIM, the full authentication stack. We needed monitoring so we'd know when things went wrong, before users told us. We needed process management so applications would survive server restarts.
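The DNS side of that authentication stack is a handful of TXT records. A sketch with placeholder values (the IP is a documentation address, the DKIM key comes from your signing tool such as opendkim, and DMARC is one reasonable addition):

```
; SPF: only this server may send mail for the domain
harrylab.tech.                     TXT  "v=spf1 ip4:203.0.113.10 -all"

; DKIM: public key for the "default" selector
default._domainkey.harrylab.tech.  TXT  "v=DKIM1; k=rsa; p=<public-key>"

; DMARC: tell receiving servers what to do when checks fail
_dmarc.harrylab.tech.              TXT  "v=DMARC1; p=quarantine"
```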
A VPS without mail, monitoring, and version control is a toy. With them, it's a platform.
The Gateway
Here's where everything changed.
Working with the VPS through an SSH terminal was functional but impractical. A tiny terminal window, manual login every time, no way to interact casually. It felt like work, not like having an assistant.
I needed a gateway, something that could sit between my communication channels and the server. OpenClaw is exactly that. It's a gateway between messaging apps like Telegram and your server, connecting the two through an LLM.
Installing OpenClaw with my Claude API keys suddenly meant I could text my server. Not SSH in, not open a terminal. Just send a message on Telegram. OpenClaw receives it, processes it through Claude, executes whatever needs doing on the server, and replies.
But OpenClaw goes beyond simple message forwarding. It has its own personality layer, its own persistent memory, its own tools and automation capabilities. It can schedule tasks, run periodic checks, automate workflows. It became less of a gateway and more of an actual assistant.
Then came voice notes. By integrating Whisper for speech-to-text, I could now send audio messages on Telegram. OpenClaw transcribes them and acts on the content. Send a voice note describing an application, and the server starts building it. Send a voice note with an idea, and it gets structured and stored.
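In the real system the LLM decides what to do with a transcript; a keyword router only shows the shape of that handoff. A toy sketch (everything here is illustrative, the actual OpenClaw pipeline is not keyword-based):

```python
def route_transcript(text: str) -> str:
    """Toy stand-in for the LLM's decision: what to do with a transcribed note."""
    t = text.lower()
    if any(k in t for k in ("build", "deploy", "application")):
        return "start-build"   # kick off a development task on the server
    if any(k in t for k in ("idea", "thought", "note")):
        return "store-idea"    # structure it and push to the ideas repo
    return "reply"             # plain conversational response
```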
This is the moment it becomes transformative. An AI agent with complete control over a server environment, full web access through Brave Search API, automation capabilities, and you interact with it by talking into your phone.
I should note that this requires giving OpenClaw root access to the server. Some people will be uncomfortable with that. But this is exactly why we set it up in an isolated VPS in the first place. Maximum power in a confined space, not limited power on your main machine. The security paradox resolves itself.
The Self-Improving Machine
What makes this genuinely different from just "using AI tools" is that the system improves itself.
Every time we build something and hit a problem, we solve it. That solution gets written into the documentation as a rule. Next time the same situation comes up, Claude already knows what to do. Every error becomes a lesson. Every workaround becomes a standard.
I've set up an automation that runs overnight, reviewing the day's work, identifying learnings, and baking them into the documentation. When I wake up, the system is slightly better than it was yesterday. Not dramatically, but consistently. Compound interest, applied to workflow efficiency.
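Mechanically, that overnight review is just a scheduled job. A sketch of the crontab entry (the script path and time are illustrative):

```
# crontab: every night at 03:00, review the day's work and fold the
# learnings back into the documentation
0 3 * * * /opt/scripts/nightly-review.sh >> /var/log/nightly-review.log 2>&1
```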
The documentation isn't static. It's alive. It evolves with every build, every mistake, every refinement. Human-driven development pipelines, the classic scope-design-build-deploy flow, don't map directly to agentic workflows. The agent thinks differently, fails differently, and learns differently. The documentation has to breathe with that.
The Ideas Vault
The content side of this has been equally transformative. I can voice note my thoughts into Telegram at any time. OpenClaw doesn't just transcribe them. It structures them into a summary, pulls out action points, captures the sentiment and tone, and pushes it into a Git repository under a clean date-based naming convention.
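The date-based naming convention is simple to pin down in code. A sketch of one reasonable version (the exact slug rules in my repo may differ):

```python
import re
from datetime import date

def note_filename(title, when=None):
    """Date-based filename for a structured voice note,
    e.g. 2024-05-01-hotel-ai-idea.md"""
    when = when or date.today()
    # lowercase the title and collapse anything non-alphanumeric into hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{when.isoformat()}-{slug}.md"
```

Date-first names mean the vault sorts chronologically for free in Obsidian's file pane.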
I use Obsidian with the Git community plugin to sync that repository locally. So I can voice note an idea from the car, and by the time I sit down it's already structured and waiting for me in Obsidian on my desktop or phone. Ready to edit, expand, or publish.
But the really interesting bit is reactive content. Because OpenClaw can search the web and run on a schedule, it monitors industry news automatically. When something significant happens in hotel technology, like Hilton investing in a new AI solution, it flags it and asks me for my opinion.
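The monitoring piece leans on the Brave Search API, which is a single authenticated GET. A sketch that assembles the request without sending it (endpoint and header names are from Brave's public docs; the query and key are placeholders):

```python
def brave_search_request(query, api_key):
    """Build the pieces of a Brave web-search request (not sent here)."""
    return {
        "url": "https://api.search.brave.com/res/v1/web/search",
        "params": {"q": query, "count": 10},
        "headers": {
            "Accept": "application/json",
            "X-Subscription-Token": api_key,
        },
    }
```

Pair that with the nightly cron job and a list of watched topics, and "flag anything significant in hotel technology" becomes a loop over queries like this.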
This solves the hardest problem in content creation. Not "how do I write well" but "what do I write about?" Having a system that brings relevant topics to you and prompts you to react is fundamentally different from staring at a blank page. And reactive content, having a genuine opinion on what's happening in your industry, is some of the most valuable content you can create.
The Long Game
My thinking is that if I journal my thoughts and ideas consistently over a long period, I'm building something much more valuable than a collection of notes. I'm building a personalised knowledge graph that AI can learn from and work with.
The only real value is my thoughts. AI can handle the execution. But AI can only support and derive from what I give it. The more I record my thinking, the better AI becomes at recalling, connecting, and acting on my behalf. It starts to anticipate. It knows my preferences, my reasoning patterns, my communication style. It stops being a tool and starts being an extension of how I work.
Three Pillars, Three Repos
In a nutshell, I now have three version-controlled pillars:
The development pipeline. A monorepo containing all applications, documentation, and deployment configuration. Full-stack apps, static demos, APIs. Claude Code builds them, Caddy serves them, PM2 keeps them running.
The content system. An ideas repository synced to Obsidian. Voice notes become structured thoughts. Structured thoughts become draft articles. Draft articles get polished and published. Reactive news monitoring keeps the pipeline fed.
The agent's identity. A workspace repository containing the personality, memory, tools, and workflow definitions for Jeeves (my OpenClaw instance). Every interaction refines how it behaves. Every correction shapes its approach. And because it's all version controlled, this entire agent identity is portable. If I ever need to set up another AI assistant, I have a fully documented, battle-tested specification for what I expect from an agent.
The most important thing in all of this is that everything originates from me. Every idea, every opinion, every direction. I'm just using AI as a facilitation platform. It captures, structures, executes, and deploys. But the thinking is mine.
That's not a small distinction. In a world that's increasingly anxious about AI replacing human work, this is a model where AI amplifies human thinking rather than substituting for it. The human provides the irreplaceable part, the ideas, the taste, the judgement, and the AI handles everything else.
It started as a pipe dream about capturing thoughts in the car. It became a complete system for turning ideas into reality without ever opening a code editor. And it's still getting better every day, because the system that builds things is the same system that learns from building them.
That's the joy of it, really. It never stops improving. And neither do I.