AI Workflow Templates
Download or copy free AI workflow templates for email triage, RAG Q&A, multi-agent research, Slack alerts, scheduled reporting, and more. Import directly into Heym — the source-available AI workflow automation platform.
What are Heym workflow templates?
Each template is a complete, ready-to-run AI workflow automation — a pre-wired graph of nodes (triggers, LLM calls, HTTP requests, conditionals, integrations) packaged as a single JSON file. Click Import → Copy JSON on any card, switch to your Heym canvas, and press Cmd+V (Ctrl+V on Windows/Linux) — the entire workflow pastes in under a second. Templates are model-agnostic: every LLM node defaults to gpt-4o but works with Claude, Mistral, or any local Ollama model. All templates are free to use and source-available.
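For a sense of what gets pasted, a template's JSON is simply the serialized node graph. The sketch below is illustrative only — the field names (`nodes`, `connections`, `params`) and node types are assumptions, not Heym's actual schema:

```json
{
  "name": "Email TL;DR",
  "nodes": [
    { "id": "trigger", "type": "manual-trigger" },
    { "id": "summarize", "type": "llm",
      "params": { "model": "gpt-4o",
                  "prompt": "Summarize this email in three bullets: {{input}}" } },
    { "id": "output", "type": "output" }
  ],
  "connections": [
    { "from": "trigger", "to": "summarize" },
    { "from": "summarize", "to": "output" }
  ]
}
```

Because the whole graph lives in one file, importing is just deserializing it onto the canvas — which is why copy-paste works as an import mechanism.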
AI
LLM-powered pipelines that triage emails, answer questions over documents, classify data, and generate structured outputs.
Multi-Agent
Coordinate multiple AI agents in sequence or parallel: researcher + writer, planner + executor, and validator + corrector patterns.
Integration
Connect Slack, webhooks, HTTP APIs, and Google Sheets — route data between services with conditional logic and automatic retries.
Automation
Scheduled and event-driven workflows: scrape pages, generate reports, monitor API uptime, and deliver results by email.
Data
Pull data from REST APIs, transform it with Set/Mapper nodes, and push it to Google Sheets, BigQuery, Grist, or Redis.
Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
IMAP Support Inbox Triage
Automation: Watch a shared mailbox, summarize incoming support email, and route urgent messages to Slack.
Jina Web Fetcher
Integration: Fetch clean, LLM-ready text from any URL using the Jina Reader API.
Cursor Post Notifier
Automation: Monitor the Cursor blog on a schedule and Slack-notify your team when a new post goes live.
Generate images from a text prompt using Gemini's native image output.
Translate the full text of any uploaded document using an AI agent.
Claude Blog Monitor
Automation: Monitor the Anthropic blog on a schedule and Slack-notify your team on new Claude posts.
Open-Meteo Weather Snapshot
Integration: Pull live weather (no API key) from Open-Meteo for any city coordinates — great for travel bots and dashboards.
GitHub Release Radar
Automation: Compare the latest GitHub release tag against the last-seen tag stored in Redis and notify Slack when a project ships a new version.
Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.
Research Brief → Draft Writer
Multi-Agent: Two-stage pipeline: one LLM pulls facts and bullets, the next turns them into a polished paragraph for blogs or newsletters.
web.dev Article Monitor
Automation: Cron + crawler + Redis dedupe + Slack: get notified when Google's web.dev blog publishes a new article.
ZenQuotes Random Quote
Integration: Fetch a random inspirational quote as JSON from the free ZenQuotes API — no API key, ideal for bots and UI demos.
Load a sample user record from JSONPlaceholder — handy for prototyping Set/Mapper nodes and mock APIs.
Map incoming text into named fields with the Set node before handing off to webhooks or databases.
Urgent vs Standard Router
Automation: Branch on a keyword in the input line — fast path vs standard path using a Condition node and two outputs.
Paste a long email or thread — one LLM call returns a short TL;DR with next actions.
Wait — Debounce Handoff
Automation: Insert a configurable pause (Wait node) before the final output — useful for debouncing or human-in-the-loop pacing.
Slack AI Triage Agent
Multi-Agent: Classify incoming Slack messages with an LLM and auto-route urgent tickets to a priority channel.
Discord Incoming Webhook
Integration: Post a message to a Discord channel with a single HTTP node — structured text input and webhook JSON body.
Iterate over a JSON array of URLs with the Loop node, fetch each via HTTP, and merge all responses into one payload.
Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.
Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.
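The chunk-and-embed step these two RAG templates rely on follows a standard pattern: split the document into overlapping windows so retrieval doesn't lose context at boundaries, then embed each chunk. A minimal, framework-free sketch in plain Python (the embedding and Qdrant upsert calls are omitted, and the sizes are illustrative):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so sentences
    cut at a chunk boundary still appear whole in the next chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each returned chunk would then be embedded and upserted into the
# vector store, and retrieved later by the RAG Search node.
```

For example, a 1,200-character document with `size=500, overlap=50` yields three chunks, and the tail of each chunk repeats as the head of the next.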
Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.
Resilient HTTP + Error Handler
Automation: Attach an Error Handler node to an HTTP call and Slack-notify your team the moment a request fails.
Google Sheets AI Enricher
Integration: Read new rows from Google Sheets, classify or enrich them with an LLM, and write the results back automatically.
Playwright Visual AI Monitor
Automation: Take a full-page screenshot on a schedule, analyze it with an LLM for anomalies, and Slack-alert when something looks off.
Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.
Realtime WebSocket Alert Relay
Integration: Listen to an external WebSocket feed, audit critical events, and forward them to another realtime channel.
Async Sub-workflow Dispatcher
Automation: Validate an incoming brief and dispatch a reusable sub-workflow in the background without a response node.
Drive Share Link Mailer
Integration: Fetch a remote file into Drive, return the download link immediately, and email the same link asynchronously.
Read qualified rows from Grist, stream them into BigQuery, and log the sync outcome in a Heym DataTable.
RabbitMQ Delayed Publisher
Integration: Publish a release message to RabbitMQ with an optional delivery delay for downstream consumers.
Self-stopping Status Monitor
Automation: Poll an incident endpoint until it resolves, then automatically disable the polling trigger for future runs.
Frequently Asked Questions
- What are Heym workflow templates?
- Pre-built automation workflows you download as JSON or copy to clipboard and paste onto the Heym canvas. Each template is a complete node graph — triggers, AI models, integrations, and outputs — ready to run in minutes.
- How do I import a template?
- Click "Import" → "Copy JSON" on any card, then press Cmd+V / Ctrl+V on the Heym canvas. Nodes and connections appear instantly. You can also "Download template" and drag the .json file onto the canvas.
- Are the templates free?
- Yes — all templates are source-available under Commons Clause + MIT. Use and modify them in your own deployments; commercial redistribution or paid hosted offerings require separate licensing.
- Which AI models are supported?
- Templates are model-agnostic. Nodes default to gpt-4o but work with any provider Heym supports: OpenAI, Anthropic Claude, Mistral, and local models via Ollama.
- Do templates work on self-hosted Heym?
- Yes. Heym is fully self-hosted via Docker Compose or Kubernetes. Import any template into your local instance and run it without data leaving your infrastructure.
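As a rough illustration of the self-hosted setup, a Docker Compose file could look like the sketch below. The image name, port, environment variable, and volume path are placeholders, not Heym's documented configuration — consult the actual deployment docs before using:

```yaml
services:
  heym:
    image: heym/heym:latest      # placeholder image name
    ports:
      - "3000:3000"              # assumed web UI port
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}  # optional: only if using OpenAI models
    volumes:
      - heym-data:/data          # persist workflows and credentials locally

volumes:
  heym-data:
```

With a local Ollama model configured instead of a hosted provider, no workflow data or API traffic needs to leave the machine.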