Advanced AI
The chat drawer is a retrieval-augmented search over the vault: your question gets matched against every published note, the best matches are fed to the model as context, and you get back an answer grounded in the material. This guide covers the controls that make it land better — structured filters, the context meter, and how conversations persist.
How search works
Your message is embedded and matched semantically against the vault’s note chunks. Frontmatter — type, tags, framework, sources, authors, key concepts — is rolled into those embeddings, so a query like “workshops about flow from Accelerate” can find the right notes even if they never use the word “workshop” in the body.
The top matches appear to the model as “Related notes from vault” alongside the page you’re currently reading. Gated pages you don’t have access to are filtered out before the model ever sees them. You don’t need to ask the assistant to “search the vault” — every turn already does.
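The retrieve-then-filter flow above can be sketched in a few lines. This is a minimal illustration, not the vault's actual implementation — the function names, chunk shape, and the `allowed` predicate are all assumptions; the key points are that ranking is by embedding similarity and that gated chunks are removed *before* anything reaches the model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_matches(query_vec, chunks, k=3, allowed=lambda c: True):
    """Rank note chunks by similarity; gated chunks the reader
    can't see are filtered out before the model sees anything."""
    visible = [c for c in chunks if allowed(c)]
    ranked = sorted(visible, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]
```

The survivors are what get presented as "Related notes from vault" on every turn.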
Filtering with structured queries
You can narrow retrieval by adding key=value pairs anywhere in your message. The parser pulls those tokens out; the rest of the sentence is what gets matched semantically.
| Intent | Syntax | Example |
|---|---|---|
| Equal | key=value | type=Workshop |
| Equal with spaces | key="quoted value" | type="Case Study" |
| Not equal | key!=value | type!=Article |
| One of several | key=[a, b, c] | type=[Workshop, Playbook] |
| Has a tag | tag=value | tag=team-dynamics |
| Links to a note | key=[[Note Name]] | source=[[Team Topologies]] |
Multiple filters in one message combine with AND. tag=X is sugar for “the tags list contains X.” Wikilink values are resolved against real vault pages, so the note has to exist for the filter to bind.
The filterable fields are the ones this vault indexes as structured metadata: type, framework, source, problems, workshops, related tools, key concepts, authors, author, tags, date, gated. Filtering on anything else silently falls back to pure semantic match.
A few queries you can try:
- type=Workshop how do I run a team API session
- tag=flow source=[[Accelerate]] what does healthy delivery look like
- type=[Case Study, Playbook] where have teams gotten 30x ROI
- framework=[[Team Topologies]] stream-aligned vs platform teams
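The token extraction described above — pull filters out, match the rest semantically — can be sketched with a single regex. This is a hypothetical sketch, not the drawer's real parser; it only handles single-word keys, but it covers the quoted, list, wikilink, and negated forms from the table.

```python
import re

# key, = or !=, then: "quoted value" | [[Wikilink]] | [a, b, c] | bare word
TOKEN = re.compile(r'(\w+)(!?=)("[^"]*"|\[\[[^\]]+\]\]|\[[^\]]*\]|\S+)')

def parse_query(message):
    """Pull key=value filter tokens out of a message; the remainder
    is what gets matched semantically. Multiple filters AND together."""
    filters = []

    def grab(m):
        key, op, raw = m.group(1), m.group(2), m.group(3)
        if raw.startswith('[[') and raw.endswith(']]'):
            values = [raw[2:-2]]                                 # wikilink -> page title
        elif raw.startswith('[') and raw.endswith(']'):
            values = [v.strip() for v in raw[1:-1].split(',')]   # one-of list
        else:
            values = [raw.strip('"')]                            # bare or quoted
        filters.append({"key": key, "negate": op == "!=", "values": values})
        return ''

    remainder = TOKEN.sub(grab, message)
    return filters, ' '.join(remainder.split())
```

Running it on the first example query above leaves `how do I run a team API session` as the semantic part, with `type=Workshop` lifted out as a structured filter.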
Applied filters are echoed back to the model in the system prompt, so the assistant knows you’ve narrowed scope and won’t wander off it.
The context meter
Beneath the composer is a thin bar that tracks how much of the model’s context window your current conversation is using. It measures the live chat — your messages plus the assistant’s replies — not the base system prompt or attached pages, so New resets it to zero.
| Bar | Meaning | What to do |
|---|---|---|
| Gray | Under 60% of the 200k window | Keep going. |
| Amber, “Context filling up” | 60–85% | Wrap up the current thread. A Start fresh button appears next to the bar. |
| Red, “Context full — older messages will be dropped” | Over 85% | Start a new chat. At this point early turns are being truncated and the assistant is losing what you discussed earlier. |
The model is Claude Sonnet 4.6 with a 200k context window. When the window fills, the oldest turns get dropped first — so a long thread quietly degrades into an assistant that’s forgotten how you framed the problem ten messages ago. Shorter, scoped chats beat one sprawling session. When the bar turns amber, land the current thread, hit New, and if you want continuity, paste the one or two conclusions worth carrying forward.
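The meter's bands and the drop-oldest-first behavior reduce to a couple of small functions. A minimal sketch under the thresholds stated above — the function names and the per-turn token counts are illustrative, not the app's actual code:

```python
WINDOW = 200_000  # Claude Sonnet 4.6 context window, per the table above

def meter_band(used_tokens, window=WINDOW):
    """Map live-chat token usage to the bar's three states."""
    pct = used_tokens / window
    if pct > 0.85:
        return "red"    # context full: older messages will be dropped
    if pct >= 0.60:
        return "amber"  # context filling up: wrap up the thread
    return "gray"       # under 60%: keep going

def fit_to_window(turns, budget):
    """Drop the oldest turns first until the transcript fits the budget."""
    kept = list(turns)
    while kept and sum(t["tokens"] for t in kept) > budget:
        kept.pop(0)
    return kept
```

Note that `fit_to_window` silently discards your earliest framing — which is exactly why starting fresh beats letting a marathon chat truncate itself.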
Managing conversations
Open the drawer’s sub-toolbar for history. Past conversations are listed newest first; click one to reopen it with the message thread restored. Each conversation remembers which page it started on, so it stays useful even after you navigate elsewhere in the vault.
Where history lives depends on whether you’re signed in:
- Signed in: conversations sync to your account and follow you across browsers and devices.
- Anonymous: conversations are stored in your browser’s local storage on this device only. Clearing site data wipes them, and signing in later won’t backfill earlier local conversations.
History is global across the vault, not per-page. You can start a conversation on one note and continue it from another — the drawer knows which page context belongs to which conversation.
Page context
The page you opened the drawer on is attached automatically. As you navigate during a chat, the last three non-home pages you visit get added to context as well, so the assistant can see where you’ve been. If you hit a gated page you don’t have an entitlement for, the drawer prompts you to sign in before it’ll chat about that page.
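The "last three non-home pages" behavior is essentially a small deduplicating ring buffer. A sketch under assumptions — pages shown as path strings and the `home` sentinel are hypothetical; the point is that revisiting a page moves it to the most-recent slot rather than duplicating it:

```python
from collections import deque

def track_pages(visits, limit=3, home="/"):
    """Keep the last `limit` distinct non-home pages as extra chat context."""
    recent = deque(maxlen=limit)   # oldest entry falls off automatically
    for page in visits:
        if page == home:
            continue               # the home page never joins the context
        if page in recent:
            recent.remove(page)    # revisit: bump to most-recent, no duplicate
        recent.append(page)
    return list(recent)
```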
Tips
- Lead with filters, finish with intent: type=Playbook how do I start a discovery sprint beats asking for “playbooks about discovery” and hoping.
- If answers feel generic, anchor them with a tag= or source=[[...]]. Semantic search cast wide will find something; narrowing the field usually sharpens the answer.
- Watch the meter on research sessions. One focused chat per question is worth more than one marathon chat per day.
- Wikilink filter values must match an existing vault page title — a typo resolves to nothing and the filter drops silently.
Related
- mcp-server — connect your own AI tools directly to the vault over MCP
- Context Engineering — supplying the right context is the whole job
- Prompt Engineering — how to write the prose that sits next to the filters
- Effective AI — the workshop this guide supports