tl;dr: From email summaries to a Pinecone-powered second brain, this personal assistant has wide-ranging potential!
OpenClaw is an open-source AI personal assistant you self-host, connect to your own accounts, and talk to over a messaging app you already use. Instead of opening five apps, you ask your assistant.
I’ve been meaning to try it for a while. But between work and everything else, I just didn’t get round to it. Then a friend gave me a final nudge. I sat down and set it up.
What Is OpenClaw?
OpenClaw is an open-source AI personal assistant framework. You run a gateway on your machine, connect it to an AI model of your choice, wire up integrations like Gmail or Google Calendar, and talk to it over a messaging app you already use (WhatsApp, Telegram, etc).
Think of it as a personal assistant that has access to the tools you use daily and responds immediately.
It’s not magic, though it can certainly feel like it. Mostly it’s scaffolding, and good scaffolding changes how you work.
The Setup
Getting started is mostly configuration. You install OpenClaw, point it at an AI model, and connect your integrations. The whole thing runs in the background.
The part I found most interesting is how you give it a personality and context. There’s a workspace with files like SOUL.md and USER.md where you define who the assistant is and who it’s helping. It reads those at the start of each session.
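For illustration, here’s the shape such a file might take. The contents below are invented, not OpenClaw defaults; only the file name comes from the workspace convention described above:

```markdown
# SOUL.md (illustrative example)

You are a calm, concise personal assistant.

- Prefer short answers; expand only when asked.
- Summarise forwarded emails as bulleted key points.
- Never act on an email unless it was explicitly forwarded to you.
```

USER.md follows the same idea from the other side: who you are, your timezone, and the preferences the assistant should assume.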
Within about an hour, I had it running and connected to Telegram.
Three Use Cases I’ve Started With
1. Newsletter Summaries
I subscribe to more newsletters than I read. I’ve not had “zero inbox” in years.
My new workflow: when something lands in my inbox that I want to read but don’t have time for right now, I forward it to the isolated Gmail account my assistant monitors. The assistant knows to summarise forwarded emails.
After the next poll cycle, I get back the key points in plain language.
It’s a small improvement, but I can already feel the difference as I work my inbox down to zero.
I didn’t plan this use case per se, but it happened organically.
2. Quick Questions
This one sounds trivial, but it’s surprisingly fun.
“What’s the weather in Tokyo today?”
“What time does X close?”
These micro-queries used to be micro-distractions. Now I fire them off over Telegram or email and get an answer without breaking flow.
The assistant is wherever you are: messaging app or email, it doesn’t matter. (For ephemeral questions like these, a messaging app is the better fit, since email is more asynchronous.)
3. Calendar Events
I describe what I want, such as “create a calendar entry for X event (see https://…)”, and the assistant creates the event. The part I find particularly cool is when I send it a link: it parses the URL to pull in context, or does a quick search to add relevant details to the invite. No clicking through date pickers, no copy-pasting.
It’s still early days and I’m watching for edge cases, but so far it works OK.
Going Further: Adding a Second Brain with Pinecone
The three use cases above are fairly lightweight: forward an email, ask a question, create an event. But there’s a more interesting layer you can add on top: a persistent, searchable memory that grows over time.
I call it my “second brain”. This is where Pinecone comes in.
What’s a vector database?
A regular database retrieves by exact match: give me all rows where category = X. A vector database works differently. It stores data as numerical representations of meaning (embeddings), which opens up several ways to query information:
Semantic search: Ask “what did I read about AI orchestration tools last week?” and it surfaces relevant notes even if none of them contain those exact words. It matches on meaning, not keywords.
Keyword search: Traditional text matching. Exact words, not meaning. Fast but literal.
Hybrid search: Combines semantic similarity with keyword weighting. Better for mixed queries.
Filtered search: Adds metadata constraints (e.g. category = X AND semantically similar to the query).
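To make the distinction between semantic and keyword search concrete, here’s a minimal sketch. Hand-made toy vectors stand in for real model-generated embeddings, and the note titles are invented:

```python
# Toy illustration of semantic vs keyword search. Real systems embed text
# with a model; here, hand-made 3-d vectors stand in for embeddings.
from math import sqrt

notes = {
    "Reading list: LLM agent frameworks": [0.9, 0.1, 0.0],
    "Recipe for miso soup":               [0.0, 0.2, 0.9],
    "Comparison of AI workflow tools":    [0.6, 0.5, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

query_text = "AI orchestration tools"
query_vec = [0.85, 0.2, 0.05]  # pretend embedding of the query

# Semantic search: rank by vector similarity, regardless of shared words.
semantic = sorted(notes, key=lambda t: cosine(notes[t], query_vec), reverse=True)

# Keyword search: literal word overlap only.
words = query_text.lower().split()
keyword = [t for t in notes if any(w in t.lower() for w in words)]

print(semantic[0])  # top semantic hit shares no words with the query
print(keyword)      # keyword search only finds literal matches
```

The semantic ranking surfaces the agent-frameworks note even though it shares no words with the query; the keyword pass only catches the title that literally contains “AI” and “tools”. A hybrid search would blend the two scores.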
When you start querying it about things you’ve encountered before, it starts to feel like a second brain.
Setting it up with OpenClaw
The Pinecone DB integration lives in my OpenClaw workspace as a skill: a couple of scripts and a config file. Setup involves creating a Pinecone account and index and adding a few environment variables; the skill’s configuration then tells the assistant when and how to use the upsert and query scripts.
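Roughly, the setup amounts to something like this (the layout and variable names are illustrative, not OpenClaw’s canonical format; check its skill docs for specifics):

```text
workspace/
  skills/
    pinecone-memory/
      SKILL.md        # tells the assistant when to use the scripts
      upsert_note.py  # writes a note + embedding to the Pinecone index
      query_notes.py  # semantic search over saved notes

# environment variables the scripts read
PINECONE_API_KEY=...   # from the Pinecone console
PINECONE_INDEX=...     # the index created during setup
```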
The schema I’m using is based on Zettelkasten: a note-taking philosophy built around atomic, interlinked ideas. I’ve published the AI prompts I use to drive it on GitHub: github.com/logicalicy/ai-zettelkasten-lite.
In practice, I send a message saying “save this XXX to my second brain”, the assistant fetches the page if there’s a link, writes a structured entry, and upserts it to Pinecone. Later I ask “what have I saved about XXX?” and it does a semantic search, surfacing notes I haven’t thought about in weeks.
The key difference from bookmarking is context. The entry captures why I saved something, not just what it is. I can also fetch related information. That’s what makes retrieval useful later.
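As a sketch of what such an entry can capture: the field names below are my own invention, loosely following Zettelkasten’s atomic, interlinked notes. The real skill would attach an embedding of the idea and pass the record to Pinecone’s upsert:

```python
# Sketch of the kind of record an upsert script might build. Field names are
# illustrative: one atomic idea per note, plus the context ("why_saved") and
# links to related notes that make retrieval useful later.
from datetime import date

def build_note(title, idea, why_saved, source_url=None, links=()):
    """Build one atomic, interlinked note as a plain dict."""
    slug = title.lower().replace(" ", "-")
    return {
        "id": f"note-{date.today().isoformat()}-{slug}",
        "metadata": {
            "title": title,
            "idea": idea,            # one idea per note (atomic)
            "why_saved": why_saved,  # the context behind saving it
            "source_url": source_url,
            "links": list(links),    # ids of related notes (interlinked)
        },
    }

note = build_note(
    title="Vector DBs for personal memory",
    idea="Semantic search makes old notes resurface by meaning, not keywords.",
    why_saved="Core motivation for the second-brain setup.",
)
print(note["id"])
```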
It’s early days. But the more you add, the more useful it becomes.
Safety First: How I’m Thinking About This
Giving an AI access to your email is not something I take lightly. This past week, a security writeup on CodeWall described how researchers broke into McKinsey’s AI platform. It’s a useful reminder that even well-resourced teams get this wrong. I’m sure I’ll discover more gaps in my own setup as I go.
I’ve started by isolating the assistant’s email: it has no access to my main inbox. Instead, I created a dedicated Gmail account specifically for OpenClaw. The only emails it ever sees are ones I explicitly forward to it. That’s a deliberate boundary.
I’m still working on adding more guardrails and fine-grained permissions. You should too: always start with the principle of least privilege.
The Rough Edges
The early experience isn’t perfectly smooth, and that’s worth noting.
The most interesting issue I hit was around memory. OpenClaw maintains state across sessions using files on disk, such as a long-term memory file. In theory, the assistant can “remember” things you’ve told it. In practice, I reset the OpenClaw session mid-setup and it forgot a bunch of state I thought had been saved. The assistant had it in conversational “context” but never committed it to the memory files on disk. It’s the AI equivalent of closing a customer support chat window and having to set the context again for the next support agent.
This highlights an important point: free-form conversational interaction and reliable, repeatable workflows are different tools. Asking an agent “do X” in natural language works great for one-off tasks. But for something that needs to happen consistently, you want determinism. Code, not conversation.
The principle I’ve started designing around is reach for a script before you reach for a prompt. If code can do it reliably, code should do it. LLMs are great when the task is genuinely fuzzy, when you need synthesis or something that doesn’t have a deterministic answer.
The mental model I’ve settled on: use chat to figure out what you want, then encode it as a script when you need it to be reliable. Free-form interaction is prototyping. The deterministic pipeline is production.
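A toy example of that principle, entirely invented for illustration: once a recurring task stabilises, pin it down as a deterministic function rather than re-prompting an agent each time:

```python
# "Script before prompt": this digest formatter always produces the same
# output for the same input, unlike asking an LLM to format it each morning.
def format_digest(items):
    """Render (title, source) pairs as a fixed-format reading list."""
    lines = [f"- {title} ({source})" for title, source in items]
    return "Today's reading queue:\n" + "\n".join(lines)

print(format_digest([
    ("AI orchestration roundup", "newsletter"),
    ("Pinecone hybrid search", "docs"),
]))
```

The chat interface is still where the workflow gets discovered; the script is where it gets locked in.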
Final thoughts
This setup is still a work in progress, but it looks promising. The use cases I’ve described are simple on purpose: I wanted to validate the workflow before building on it. Setting up the assistant is only the beginning; I’ll keep iterating on it over time.
Areas I’d like to dive into next (the world of AI is moving fast): RAG (retrieval-augmented generation) to pull from my second brain at the right moment, evals to measure whether agents are doing the right thing, the determinism-first design principle, and harness engineering alongside the evolving tooling landscape (e.g. agent orchestration).
If you haven’t tried OpenClaw yet: this is your nudge!
If you’re experimenting with AI in your workflow, I’d love to hear what you’re building. I write more like this at blog.mariohayashi.com, and feel free to follow me on Twitter: @logicalicy


