Tags: AI · RAG · Vector Store · Knowledge Base · Qdrant
RAG Document Ingest
Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.
The first half of any Retrieval-Augmented Generation pipeline: paste a document and have Heym split, embed, and store it in your Qdrant collection automatically.
What this workflow does
- Input receives the document text (article, manual, policy)
- RAG node (insert mode) chunks the text, embeds it, and writes to Qdrant
- Output confirms the number of chunks stored
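Heym's RAG node performs the chunking internally, but the idea is easy to see in isolation. As a minimal sketch (chunk size, overlap, and the character-based strategy are illustrative assumptions, not the node's actual defaults):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks.

    Overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk is then embedded and written to the collection; the chunk count reported by the Output node is simply `len(chunks)`.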
Use cases
- Knowledge base construction from PDFs or wikis
- Policy document retrieval for compliance bots
- Product documentation for customer support agents
Setup
Configure the RAG node with your Qdrant host, collection name, and embedding model. Pair with the RAG Q&A Agent template for end-to-end retrieval and answering.
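Under the hood, inserting into Qdrant is a `PUT /collections/<name>/points` request whose body pairs each embedding vector with its source chunk as payload. A rough sketch of building that body (the `text` payload key and UUID point IDs are assumptions for illustration):

```python
import uuid

def build_upsert_body(chunks: list[str], vectors: list[list[float]]) -> dict:
    """Build a request body for Qdrant's points-upsert endpoint.

    Each point carries the embedding as its vector and the original
    chunk text as payload, so the RAG Search node can return it later.
    """
    if len(chunks) != len(vectors):
        raise ValueError("one vector per chunk required")
    return {
        "points": [
            {"id": str(uuid.uuid4()), "vector": vec, "payload": {"text": chunk}}
            for chunk, vec in zip(chunks, vectors)
        ]
    }
```

The vector dimension must match the embedding model configured on the RAG node, and the collection must be created with that same dimension.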
How to import this template
1. Click Import → Copy JSON on this page.
2. Open Heym and navigate to a workflow canvas.
3. Press Cmd+V / Ctrl+V — the nodes appear instantly.
4. Add your API keys in the node config panels and click Run.
5 nodes · Free & source-available
More workflow templates
Explore related automations — each page links to other templates so you can discover more use cases.
- Batch LLM Status Tracker: Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
- Gemini Image Creator: Generate images from a text prompt using Gemini's native image output.
- PDF / DOCX Translation Agent: Translate the full text of any uploaded document using an AI agent.
- Meeting Notes → JSON Tasks: Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.
- Inbox TL;DR Summarizer: Paste a long email or thread — one LLM call returns a short TL;DR with next actions.
- RAG Q&A Agent: Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.
- Language Switch Router: Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.
- Telegram FAQ Auto Reply: Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.
- IMAP Support Inbox Triage: Watch a shared mailbox, summarize incoming support email, and route urgent messages to Slack.
- Jina Web Fetcher: Fetch clean, LLM-ready text from any URL using the Jina Reader API.
- Cursor Post Notifier: Monitor the Cursor blog on a schedule and Slack-notify your team when a new post goes live.
- Claude Blog Monitor: Monitor the Anthropic blog on a schedule and Slack-notify your team on new Claude posts.