I’ve always been fascinated by the idea of creating something uniquely mine, and building a personal assistant chatbot tailored to my life feels like the perfect project to dive into.
 

The Spark of the Idea

It all started with my trusty Obsidian vault, where I store my thoughts, ideas, and notes in markdown files. I love how Obsidian organizes my chaos, but I wanted more—a way to interact with my notes conversationally, like chatting with a friend who knows me inside out. Sure, there are plenty of Obsidian-compatible AI applications/plug-ins out there, but there’s something special about crafting my own. So, I set out to build a personal assistant that could answer my questions by pulling from my notes and truly understanding me.
 

Breaking Down the Magic

If you’ve ever tinkered with LangChain, the structure of my Retrieval-Augmented Generation (RAG) assistant might feel familiar, though I added a twist with a vector store connected to a research agent. The idea is simple: the assistant searches my knowledge base (my Obsidian vault) to find relevant information, then crafts a thoughtful response based on what it finds. Let me walk you through how it works.
The high-level structure (actually very simple)
The high-level setup is straightforward. When I ask a question, the assistant processes it as a HumanMessage and passes it to a research agent. This agent converts the query into a vector and searches my vector store—a database of my notes transformed into numerical representations—for the most relevant pieces of information. Once it retrieves the best matches, it hands them over to the assistant agent, which generates a clear, concise answer. All of this—my input, the agent’s search, and the final response—is tracked in the AgentStates, like a shared whiteboard keeping everything organized.
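To make that flow concrete, here's a minimal LangGraph-style sketch of a two-agent setup like the one described above. It's an illustration rather than the exact code from my repo: the node names are hypothetical, `llm` stands for any LangChain chat model, and `retriever` is the vector-store retriever built from the vault (see the next snippet).

```python
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages

# Assumed to exist already: `llm` (any LangChain chat model) and
# `retriever` (the Chroma retriever built from the vault, shown below).


# The shared "whiteboard": each agent reads from and appends to this state.
class AgentState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    context: str  # retrieved note chunks, filled in by the research agent


def research_node(state: AgentState) -> dict:
    # Embed the latest question and pull the closest note chunks
    # from the vector store.
    query = state["messages"][-1].content
    docs = retriever.invoke(query)
    return {"context": "\n\n".join(doc.page_content for doc in docs)}


def assistant_node(state: AgentState) -> dict:
    # Answer the question using only the retrieved notes as context.
    prompt = (
        f"Answer using these notes:\n{state['context']}\n\n"
        f"Question: {state['messages'][-1].content}"
    )
    return {"messages": [llm.invoke(prompt)]}


graph = StateGraph(AgentState)
graph.add_node("research", research_node)
graph.add_node("assistant", assistant_node)
graph.set_entry_point("research")
graph.add_edge("research", "assistant")
graph.add_edge("assistant", END)
app = graph.compile()

# Ask a question exactly as described above: a HumanMessage goes in,
# the research agent retrieves, the assistant agent answers.
result = app.invoke({"messages": [HumanMessage(content="What did I write about X?")]})
print(result["messages"][-1].content)
```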
The real magic happens when my Obsidian vault gets involved. Using LangChain’s tools, I load my markdown files with the DirectoryLoader class. Since some notes are lengthy, I use the RecursiveCharacterTextSplitter to break them into smaller chunks. These chunks are then converted into vectors using an embedding model and stored in a Chroma database. This setup lets my assistant quickly sift through my notes to find exactly what I need, whether it’s a fleeting idea I jotted down months ago or a detailed plan from last week.
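A pipeline along those lines might look like the sketch below. The vault path, chunk sizes, and the choice of OpenAIEmbeddings are all placeholder assumptions; any embedding model LangChain supports would work.

```python
from langchain_chroma import Chroma
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load every markdown note in the vault (path is a placeholder).
loader = DirectoryLoader(
    "/path/to/ObsidianVault", glob="**/*.md", loader_cls=TextLoader
)
docs = loader.load()

# Break long notes into overlapping chunks so each one embeds cleanly.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# Embed the chunks and persist them in a local Chroma database.
vector_store = Chroma.from_documents(
    documents=chunks,
    embedding=OpenAIEmbeddings(),  # swap in whichever embedding model you use
    persist_directory="./chroma_db",
)

# The retriever the research agent uses to search the vault.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})
```

The chunk overlap is a small design choice worth keeping: it preserves a bit of surrounding context in each chunk, so an idea that gets split mid-note can still be retrieved in one piece.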
 

A Glimpse of It in Action

It’s pretty exciting to see the assistant in action! The process isn’t perfect yet, but watching it work sparks a sense of pride and possibility.
 
And this is what the book looks like in my vault, haha
 

What’s Next for My Assistant?

I’m thrilled with how this project is shaping up, but there’s still work to do. My next steps are to refine the retrieval strategy so it’s faster and more precise, enable the bot to evaluate the quality of the information it retrieves, and set stricter guidelines for the LLM’s responses to keep them concise and on point. I’m also toying with the idea of giving the bot a frontend—maybe a simple web interface—so I can interact with it more easily.
This feels like the start of a series, as each tweak and enhancement teaches me something new about AI, coding, and even myself. Building this assistant isn’t just about the tech; it’s about creating a tool that reflects who I am and how I think.
 

Want to Explore the Code?

If you’re curious about how I built this assistant or want to try creating your own, I’ve shared all the code and details in my GitHub repository. Feel free to check it out, tinker with it, or even adapt it to fit your own notes and ideas—it’s a fun way to make AI work for you!
 