NotebookLM Skills: Give Claude Code a Brain That Doesn't Hallucinate
notebooklm-py and the NotebookLM skill for Claude Code. Source-grounded, citation-backed answers from your own knowledge base, zero hallucinations.
The hallucination problem
You ask your coding agent about your internal API. It gives you a confident, well-structured, completely wrong answer. Hallucinated the endpoint names, invented query parameters, cited documentation that doesn't exist. Classic.
RAG pipelines are the standard fix, but they're a faff to set up: embeddings, vector databases, chunking strategies, retrieval tuning. Fine for production systems; overkill when you just want your agent to check the docs before making stuff up.
NotebookLM already sorted this on Google's side. You upload your sources, Gemini answers only from those sources, and every answer comes with citations pointing to the exact passage. The catch: it was browser-only. No API. No way to wire it into your agent workflow.
Two projects changed that.
notebooklm-py
notebooklm-py is an unofficial Python library with 9,500+ stars that gives you programmatic access to everything NotebookLM does, including bits the web UI doesn't expose. Create notebooks, import sources (URLs, PDFs, YouTube, Google Drive), ask questions, get cited answers.
```shell
pip install notebooklm-py
```
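To show the shape of the workflow — create a notebook, add sources, ask a question, get a cited answer — here's a minimal self-contained sketch. The class and method names are hypothetical stand-ins, not notebooklm-py's real API; check the project's README for the actual interface.

```python
# Hypothetical sketch of the notebook workflow. Names below are
# illustrative stand-ins, NOT the real notebooklm-py API.
from dataclasses import dataclass, field


@dataclass
class Answer:
    text: str
    citations: list  # (source_name, passage) pairs backing the answer


@dataclass
class Notebook:
    title: str
    sources: dict = field(default_factory=dict)  # name -> document text

    def add_source(self, name: str, text: str) -> None:
        self.sources[name] = text

    def ask(self, question: str) -> Answer:
        # Grounded answering: only return passages that actually exist
        # in the uploaded sources; never invent content.
        hits = [(name, line.strip())
                for name, doc in self.sources.items()
                for line in doc.splitlines()
                if any(w in line.lower() for w in question.lower().split())]
        if not hits:
            return Answer("Not covered by the sources.", [])
        return Answer(hits[0][1], hits)


nb = Notebook("internal-api")
nb.add_source("api.md", "GET /v1/users returns the user list.\n"
                        "POST /v1/users creates a user.")
ans = nb.ask("users endpoint")
print(ans.text)       # a passage copied verbatim from the source
print(ans.citations)  # every claim traces back to a named source
```

The point of the pattern, whatever the real method names turn out to be: the answer is either lifted from a source (with a citation) or an explicit "not covered", never a plausible invention.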
The mad thing is it also handles all the Studio outputs: audio overviews, video generation, slide decks you can download as PPTX, mind maps as JSON, quizzes, flashcards. Upload your architecture docs, ask for a quiz on the API surface, get exportable flashcards. Proper useful for onboarding new team members.
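For the exportable outputs, expect structured files you can post-process. The JSON layout below is illustrative only — I'm assuming a plausible flashcard-deck shape, not notebooklm-py's actual export schema; inspect a real export to confirm.

```python
# Hypothetical flashcard export. The JSON layout is illustrative,
# NOT notebooklm-py's actual schema -- inspect a real export to confirm.
import json

deck = {
    "notebook": "architecture-docs",
    "cards": [
        {"front": "What does the gateway service do?",
         "back": "Routes requests and terminates TLS.",
         "source": "architecture.md"},
        {"front": "Where are API rate limits defined?",
         "back": "In the per-service config, not the gateway.",
         "source": "runbook.md"},
    ],
}

# Round-trip through JSON, as you would when loading a downloaded export.
loaded = json.loads(json.dumps(deck))
for card in loaded["cards"]:
    # Every card carries the source it was generated from.
    print(f"{card['front']} -> {card['back']} [{card['source']}]")
```

Because the outputs are plain files (JSON, PPTX), they drop straight into whatever onboarding tooling you already have.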
It runs on undocumented Google APIs, which means it could break whenever Google changes things. Fair warning. It's been solid since January, though, and the maintainer is responsive.
The notebooklm-skill
The notebooklm-skill is the one that got me excited. It's a Claude Code skill that lets Claude query your NotebookLM notebooks mid-session. Claude needs to know about your internal service? It asks NotebookLM, gets a Gemini-grounded answer with citations from your actual documentation, then uses that to write code.
Install it into ~/.claude/skills/, authenticate once with Google, and it just works. Claude queries NotebookLM in the background, and you get answers grounded in your sources instead of the model's training data. 5,600 stars and climbing.
Only works with local Claude Code though, not the web UI. The skill uses browser automation under the hood and the web sandbox blocks that. Fair enough.
So what
You're basically giving your agent a reference library it reads before answering instead of winging it. Upload your API docs, architecture decisions, runbooks, whatever. Agent checks them. Every answer traceable back to the source.
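"Traceable back to the source" is a property you can check mechanically. Here's a small helper of my own devising (not part of either project) that flags any answer sentence with no supporting passage in the sources — the same idea the citation-backed answers give you for free.

```python
# Sketch of a grounding check (my own helper, not part of either project):
# flag any answer sentence that has no supporting passage in the sources.
def unsupported(answer: str, sources: dict[str, str]) -> list[str]:
    corpus = " ".join(sources.values()).lower()
    return [s.strip() for s in answer.split(".")
            if s.strip() and s.strip().lower() not in corpus]


sources = {"api.md": "GET /v1/users returns the user list."}
ok = unsupported("GET /v1/users returns the user list.", sources)
bad = unsupported("DELETE /v1/users wipes everything.", sources)
print(ok)   # [] -> fully grounded
print(bad)  # the invented claim gets flagged
```

A real grounding check would match at the passage level rather than by exact substring, but the contract is the same: every sentence either points at a source or gets flagged.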
Not perfect. The undocumented API thing is a real risk, and browser automation is inherently fragile. But for zero quid and ten minutes of setup, you get an agent that stops making stuff up about your own codebase. I'll take that.