My NotebookLM workflow starts before I even open NotebookLM. That was the part I got wrong for a long time. I'd collect twenty tabs, save three YouTube videos for later, dump a few PDFs into a folder, and then act surprised when none of it turned into actual understanding. I wasn't learning. I was hoarding inputs.
So I changed the system. Now the first step happens in Claude Code, not in my browser bookmarks. I use one skill for discovery, another skill for exporting the finished takeaways, and NotebookLM sits in the middle as the place where the real synthesis happens. That one change made the whole process calmer. Also faster. And, most importantly, it made the good parts reusable instead of trapped in one chaotic research session.
I wasn't bad at research. I was bad at intake.
Once I saw the pattern, it was obvious. My old setup looked productive from the outside. Lots of reading. Lots of saved links. Lots of “I'll come back to this.” But if you asked me a week later what actually changed in my thinking, the answer was usually “not much.”
The problem wasn't effort. The problem was shape. Everything came in at once: blog posts, docs, talks, podcasts, forum threads, random clips, half-finished notes. No order. No filter. No sense of which source came first, who wrote it, or whether I was reading the original idea or somebody's summary of it.
Once I noticed that, the fix became obvious. Stop treating source discovery like a side effect. Make it a step with its own tool, its own output, and its own standards.
That is what the first Claude Code skill does for me.
The discovery step happens in Claude Code
I wrote a discovery skill in Claude Code that does the messy part I used to do manually. It searches the web and YouTube for the topic I'm trying to learn, pulls back candidate sources, and gives me the stuff I actually care about: title, author, publish date, the link itself, and a short note on why the source might be worth keeping.
Then it orders the results by date. Newest first if I need to see where the conversation is now. Oldest first if I want to trace how an idea developed. That sounds small. It isn't. Knowing who wrote something and when they wrote it changes how you read it.
I also like doing this in Claude Code because the output is cleaner than a search engine session. I can tell the skill “give me original sources first, then strong secondary explainers, and don't flood me with ten articles that all say the same thing.” That means NotebookLM never becomes a landfill. It gets a curated input set.
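To make the shape of this concrete, here's a minimal sketch of what the discovery step hands back and how the date ordering works. The field names and sample data are my own illustration, not the actual skill's output format.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative shape of one candidate source from the discovery step.
# Field names are my own, not part of any real skill or API.
@dataclass
class Source:
    title: str
    author: str
    published: date
    url: str
    why_keep: str  # one-line note on why the source might matter

sources = [
    Source("Intro post", "A. Author", date(2023, 5, 1),
           "https://example.com/a", "original announcement"),
    Source("Deep dive", "B. Writer", date(2024, 2, 10),
           "https://example.com/b", "strong secondary explainer"),
]

# Newest first when I need to see where the conversation is now...
newest_first = sorted(sources, key=lambda s: s.published, reverse=True)
# ...oldest first when I want to trace how the idea developed.
oldest_first = sorted(sources, key=lambda s: s.published)
```

The point is that every candidate arrives with provenance attached, so the ordering is a one-liner instead of a manual archaeology session.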
If you read my earlier piece on Claude Code skills, this is exactly why I got so hooked on them. Repetition is one reason. Consistency is the bigger one.
What the discovery skill filters for
- Originality. I want primary material when possible, not twelve paraphrases of the same source.
- Metadata. Title, author, date, and format are mandatory because context without provenance gets shaky fast.
- Coverage. A few articles, a few videos, maybe docs if the topic calls for it. Enough range to learn, not so much that I drown in it.
Why I only push selected sources into NotebookLM
NotebookLM works best for me when it starts with a deliberate set of material. Google's own NotebookLM docs say a notebook works from static copies of the sources you import, and it supports things like web URLs, public YouTube URLs, PDFs, markdown, audio, images, Google Docs, and more. That flexibility is great. It also means you can make a mess very quickly if you throw everything in at once.
So I don't. The discovery skill gives me the candidates. I choose the sources that deserve a place in the notebook. Then NotebookLM becomes a workspace with boundaries, not a junk drawer.
This part also saved me from a problem I talked about in my context management post. More input is not the same as better understanding. Past a certain point, extra material just makes the signal harder to see.
NotebookLM gives you source grounding. That only helps if the sources are worth grounding against in the first place.
notebooklm-py is the glue layer
This is where notebooklm-py comes in. I use it to push the curated sources into NotebookLM and manage the notebook without doing everything manually in the browser.
Important caveat: it's an unofficial community project that uses undocumented Google APIs. The README is very clear about that, and I think that honesty matters. I'm comfortable using it for my own workflow because the value is obvious. I would still treat it as fast-moving tooling, not something I assume is guaranteed forever.
What I like is that it fits the shape of the system. It can create notebooks, add and manage sources, chat with content, generate outputs, and download artifacts in formats I can do something with. The browser UI is where I think. The library is what keeps the pipeline repeatable.
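To show the shape of that glue layer without misrepresenting the library, here's a sketch with a stand-in client class. The class and method names below are placeholders I made up for illustration; the real notebooklm-py API may differ, so check its README before copying anything.

```python
# Hypothetical stand-in client, NOT the real notebooklm-py API.
# It exists only to show the pipeline shape: create a notebook,
# then push the curated URLs into it programmatically.
class NotebookClient:
    def __init__(self):
        self.notebooks: dict[str, list[str]] = {}

    def create_notebook(self, name: str) -> str:
        self.notebooks[name] = []
        return name

    def add_source(self, notebook: str, url: str) -> None:
        self.notebooks[notebook].append(url)

# Output of the discovery + curation steps, hard-coded here as an example.
curated_urls = [
    "https://example.com/original-paper",
    "https://www.youtube.com/watch?v=example",
]

client = NotebookClient()
nb = client.create_notebook("topic-research")
for url in curated_urls:
    client.add_source(nb, url)
```

The value is that this loop runs identically every time, which is exactly what a boring import step should do.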
Why I use code at this stage at all
Because import is boring, and boring steps are where systems break. If a workflow depends on me manually copying links into a tool every single time, I'll eventually skip the careful part and regret it later.
The code doesn't replace the thinking. It protects the thinking from sloppy handoffs.
What I actually do inside NotebookLM
Once the sources are in, that's when the good part starts. I ask questions against the notebook. I compare sources. I look at where the citations point. I save the answers that are actually worth keeping. And because NotebookLM can focus on selected sources, I can narrow the conversation when a topic branches too far.
The citations matter a lot to me. NotebookLM isn't useful to me as a confident narrator. It's useful as a grounded research partner. If an answer points me back to the right source, I move faster and trust the result more.
I also save useful outputs as notes. Google added a nice bridge there: notes can be created directly inside the notebook, and when it makes sense you can turn a note into a new source. That sounds like a minor feature until you use it in practice. Then you realize it lets you promote your own synthesis back into the working set.
Different topics need different pressure tests. Sometimes I open a mind map because the structure is fuzzy. Sometimes I generate an Audio Overview because I want to hear the topic back while I walk. And sometimes I want a study guide, quiz, or flashcards because understanding something well enough to recognize it is not the same thing as being able to recall it cold.
- Chat first. I use questions to find the shape of the topic and the disagreements inside the source set.
- Notes second. Good answers get saved. Weak answers disappear. I don't archive everything.
- Artifacts when needed. Mind maps, audio, quizzes, and flashcards are there to stress-test my understanding, not to impress me with output.
The source set changes while I learn
This part matters because people talk about research as if it were linear. It rarely is. Once I understand the topic better, I often realize that one of the early sources was too shallow, too old, or just not about the thing I actually needed. So I update the notebook. Add a better source. Remove a noisy one. Tighten the set.
I'm not editing the original source documents inside NotebookLM. I'm curating the notebook around them. That distinction is important. The notebook becomes a sharper research environment over time, not a permanent dump of everything I touched on day one.
This is also where the date-ordering from the discovery step keeps paying off. When a field moves fast, I can quickly see which sources are historical context and which ones deserve more weight now.
It sounds obvious. It wasn't obvious to me until I built it into the workflow.
Obsidian gets the finished version
When I'm done researching, I don't want to leave the best parts trapped in NotebookLM. I use a second Claude Code skill for that. Its job is simple: pull the notes, summaries, and kept insights I actually care about and push them into Obsidian for long-term reference.
Not everything goes over. That would defeat the point. Obsidian is where I keep the compressed version: the arguments I want to remember, the examples worth reusing, the frameworks that survived contact with real reading. NotebookLM is the lab. Obsidian is the shelf.
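Since an Obsidian vault is ultimately just a folder of markdown files, the export step can be sketched as plain file writing. The folder layout and frontmatter keys below are my own conventions, not an official format, and the vault path here is a temp directory so the sketch is self-contained.

```python
import tempfile
from pathlib import Path

def export_note(vault: Path, topic: str, title: str, body: str) -> Path:
    """Write one kept insight as a markdown note inside the vault.

    Assumes a 'research/<topic>/' layout, which is just my convention.
    """
    folder = vault / "research" / topic
    folder.mkdir(parents=True, exist_ok=True)
    slug = "-".join(title.lower().split())
    note = folder / f"{slug}.md"
    # Minimal YAML frontmatter so Obsidian can index the note by topic.
    note.write_text(
        f"---\ntopic: {topic}\n---\n\n# {title}\n\n{body}\n",
        encoding="utf-8",
    )
    return note

vault = Path(tempfile.mkdtemp())  # stand-in for the real vault path
path = export_note(
    vault, "notebooklm", "Kept Insight",
    "The compressed takeaway goes here.",
)
```

In the real skill the body comes from the saved NotebookLM notes, but the mechanics stay this simple: slugify, write, done.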
I like ending the workflow this way because it closes the loop. The next time I revisit the topic, I don't start from zero. I start from the distilled version of what already held up.
This is really a workflow about retention
People usually ask me about the tools first. Fair enough. The tools are fun. But the real point is retention. I want a system that helps me find strong sources, question them properly, keep the important parts, and reuse the result later without doing the whole dance again.
Claude Code handles discovery and export. NotebookLM handles the active research loop. Obsidian handles the long memory. Each part has one job. That is why the whole thing works.
If your current research process still ends as a pile of tabs and a vague feeling that you looked at some smart stuff, fix the handoffs. That's where the value leaks out. Build a workflow that respects the difference between finding information and actually keeping it.
Josip Budalić
Founder & CEO
Josip runs HOTFIX d.o.o., a dev shop based in Croatia. He's been writing code for over a decade and is slightly obsessed with finding ways to ship faster without sacrificing quality. When not arguing with AI assistants, he's probably hiking somewhere or consuming unhealthy amounts of coffee.