Tags: AI, RAG, Vector Store, Knowledge Base, Qdrant

RAG Document Ingest

Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.


The first half of any Retrieval-Augmented Generation pipeline: paste a document and have Heym split, embed, and store it in your Qdrant collection automatically.

What this workflow does

  1. Input receives the document text (article, manual, policy)
  2. RAG node (insert mode) chunks the text, embeds it, and writes to Qdrant
  3. Output confirms the number of chunks stored
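The chunking in step 2 can be sketched in plain Python. Note that `chunk_text` and its `chunk_size`/`overlap` defaults are hypothetical illustrations, not the RAG node's actual parameters; the node may well chunk by tokens or sentences instead of characters.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping character windows.

    chunk_size and overlap are illustrative defaults; the RAG node's
    real chunking strategy may differ (e.g. token- or sentence-based).
    """
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece.strip():
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks


# Each chunk is then embedded and written to Qdrant as one point; the
# Output node reports the number of chunks stored, i.e. len(chunks).
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from at least one of the two neighboring chunks.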

Use cases

  • Knowledge base construction from PDFs or wikis
  • Policy document retrieval for compliance bots
  • Product documentation for customer support agents

Setup

Configure the RAG node with your Qdrant host, collection name, and embedding model. Pair with the RAG Q&A Agent template for end-to-end retrieval and answering.
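As a rough illustration, the insert-mode settings might look like the fragment below. Every field name and value here is a hypothetical sketch, not the node's documented schema; match them against your actual config panel.

```json
{
  "mode": "insert",
  "qdrant": {
    "host": "http://localhost:6333",
    "collection": "knowledge_base"
  },
  "embedding_model": "text-embedding-3-small",
  "chunking": {
    "chunk_size": 200,
    "overlap": 40
  }
}
```

The same Qdrant host, collection name, and embedding model should be used by the RAG Q&A Agent template, since search must embed queries with the model that produced the stored vectors.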

How to import this template

  1. Click Import → Copy JSON on this page.
  2. Open your Heym instance and navigate to a workflow canvas.
  3. Press Cmd+V / Ctrl+V and the nodes appear instantly.
  4. Add your API keys in the node config panels and click Run.


5 nodes · Free & source-available


Heym · incident analysis · production AI
Observed across 100s of AI rollouts:

AI workflows don't fail because of prompts. They fail because of orchestration.

  • Symptom, glue code: 5 tools. Scripts, vector DB, approval bot, tracing, browser runner, and none of them talk to each other.
  • Symptom, visibility: ~0% observable behavior across the stack, so debugging is guesswork.
  • With Heym, one runtime: 1 canvas for agents, RAG, HITL, MCP, traces, and evals. Self-hosted and observable.

AI-native runtime, production-grade: github.com/heymrun/heym