Free Workflow Templates

AI Workflow Templates

Download or copy free AI workflow templates for email triage, RAG Q&A, multi-agent research, Slack alerts, scheduled reporting, and more. Import directly into Heym — the source-available AI workflow automation platform.

What are Heym workflow templates?

Each template is a complete, ready-to-run AI workflow automation — a pre-wired graph of nodes (triggers, LLM calls, HTTP requests, conditionals, integrations) packaged as a single JSON file. Click Import → Copy JSON on any card, switch to your Heym canvas, and press Cmd+V (Ctrl+V on Windows/Linux) — the entire workflow pastes in under a second. Templates are model-agnostic: every LLM node defaults to gpt-4o but works with Claude, Mistral, or any local Ollama model. All templates are free to use and source-available.
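For a sense of what one of these JSON files contains, here is a minimal sketch of a node-graph template. The field names (`nodes`, `edges`, `type`, `params`) are illustrative assumptions, not the exact Heym schema:

```python
import json

# Illustrative template structure -- field names are assumptions,
# not the exact Heym schema.
template = {
    "name": "email-triage",
    "nodes": [
        {"id": "trigger", "type": "imap.watch", "params": {"folder": "INBOX"}},
        {"id": "summarize", "type": "llm", "params": {"model": "gpt-4o"}},
        {"id": "notify", "type": "slack.post", "params": {"channel": "#support"}},
    ],
    "edges": [
        {"from": "trigger", "to": "summarize"},
        {"from": "summarize", "to": "notify"},
    ],
}

def validate(tpl: dict) -> bool:
    """Check that every edge references a declared node id."""
    ids = {n["id"] for n in tpl["nodes"]}
    return all(e["from"] in ids and e["to"] in ids for e in tpl["edges"])

print(validate(template))  # True for this well-formed graph
payload = json.dumps(template)  # what "Copy JSON" would put on the clipboard
```

Because the whole graph is a single JSON document, pasting it onto the canvas can recreate every node and connection in one step.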

AI

LLM-powered pipelines that triage emails, answer questions over documents, classify data, and generate structured outputs.

Multi-Agent

Coordinate multiple AI agents in sequence or parallel: researcher + writer, planner + executor, and validator + corrector patterns.

Integration

Connect Slack, webhooks, HTTP APIs, and Google Sheets — route data between services with conditional logic and automatic retries.

Automation

Scheduled and event-driven workflows: scrape pages, generate reports, monitor API uptime, and deliver results by email.

Data

Pull data from REST APIs, transform it with Set/Mapper nodes, and push it to Google Sheets, BigQuery, Grist, or Redis.

Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.

#LLM #Batch API #Status Branch

Watch a shared mailbox, summarize incoming support email, and route urgent messages to Slack.

#IMAP #Email #Slack

Jina Web Fetcher

Integration

Fetch clean, LLM-ready text from any URL using the Jina Reader API.

#HTTP #Jina #Scraping

Monitor the Cursor blog on a schedule and Slack-notify your team when a new post goes live.

#Cron #Crawler #Redis

Generate images from a text prompt using Gemini's native image output.

#AI #Gemini #Image

Translate the full text of any uploaded document using an AI agent.

#AI #Translation #Document

Monitor the Anthropic blog on a schedule and Slack-notify your team on new Claude posts.

#Cron #Crawler #Redis

Pull live weather (no API key) from Open-Meteo for any city coordinates — great for travel bots and dashboards.

#HTTP #Open Data #Weather
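The request the template's HTTP node makes can be sketched outside Heym. A minimal Python version, using Open-Meteo's public forecast endpoint (the response fields named in the comment follow Open-Meteo's documentation):

```python
from urllib.parse import urlencode

def open_meteo_url(lat: float, lon: float) -> str:
    """Build an Open-Meteo current-weather request -- no API key needed."""
    params = urlencode({
        "latitude": lat,
        "longitude": lon,
        "current_weather": "true",
    })
    return f"https://api.open-meteo.com/v1/forecast?{params}"

url = open_meteo_url(52.52, 13.41)  # Berlin
# Fetch with any HTTP client (e.g. urllib.request.urlopen(url)); the
# JSON response carries a "current_weather" object with temperature,
# windspeed, and a weathercode.
```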

Compare the latest GitHub release tag against Redis and notify Slack when a project ships a new version.

#GitHub #HTTP #Redis

Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.

#LLM #JSON #Productivity

Two-stage pipeline: one LLM pulls facts and bullets, the next turns them into a polished paragraph for blogs or newsletters.

#LLM #Multi-Step #Content

Cron + crawler + Redis dedupe + Slack: get notified when Google's web.dev blog publishes a new article.

#Cron #Crawler #Redis

Fetch a random inspirational quote as JSON from the free ZenQuotes API — no API key, ideal for bots and UI demos.

#HTTP #JSON #ZenQuotes

Load a sample user record from JSONPlaceholder — handy for prototyping Set/Mapper nodes and mock APIs.

#HTTP #JSON #Mock API

Map incoming text into named fields with the Set node before handing off to webhooks or databases.

#Set #Mapper #ETL

Branch on a keyword in the input line — fast path vs standard path using a Condition and two outputs.

#Condition #Routing #Support

Paste a long email or thread — one LLM call returns a short TL;DR with next actions.

#LLM #Email #Productivity

Insert a configurable pause (Wait node) before the final output — useful for debouncing or human-in-the-loop pacing.

#Wait #Debouncing #Automation

Classify incoming Slack messages with an LLM and auto-route urgent tickets to a priority channel.

#Slack #AI #Triage

Post a message to a Discord channel with a single HTTP node — structured text input and webhook JSON body.

#Discord #Webhook #HTTP

Iterate over a JSON array of URLs with the Loop node, fetch each via HTTP, and merge all responses into one payload.

#Loop #HTTP #Batch
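The loop-and-merge pattern this template implements can be sketched in plain Python — `fetch` here is a stand-in for the template's HTTP node, injected so the sketch runs without a network:

```python
from typing import Callable

def loop_and_merge(urls: list[str], fetch: Callable[[str], dict]) -> dict:
    """Fetch each URL in turn and merge all responses into one payload,
    keyed by URL -- mirroring a Loop node feeding a merge step."""
    merged = {}
    for url in urls:  # the Loop node iterates the JSON array
        merged[url] = fetch(url)
    return {"results": merged, "count": len(merged)}

# Stand-in fetcher so the sketch is self-contained:
def fake_fetch(url: str) -> dict:
    return {"status": 200, "source": url}

payload = loop_and_merge(["https://a.example", "https://b.example"], fake_fetch)
print(payload["count"])  # 2
```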

Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.

#RAG #Vector Store #Knowledge Base
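Before embedding, the document has to be split into chunks. A simple sliding-window chunker shows the idea — the 500-character size and 50-character overlap are illustrative defaults, not Heym's:

```python
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows so retrieval keeps context
    across chunk boundaries."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk("x" * 1200, size=500, overlap=50)
# Each chunk, once embedded, becomes one point in the Qdrant collection,
# retrievable later by the RAG Search node.
```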

Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.

#RAG #Search #LLM

Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.

#Switch #Language Detection #Routing

Attach an Error Handler node to an HTTP call and Slack-notify your team the moment a request fails.

#Error Handling #HTTP #Resilience

Read new rows from Google Sheets, classify or enrich them with an LLM, and write the results back automatically.

#Google Sheets #AI #Spreadsheet

Take a full-page screenshot on a schedule, analyze it with an LLM for anomalies, and Slack-alert when something looks off.

#Playwright #Browser Automation #Visual AI

Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.

#Telegram #LLM #FAQ

Listen to an external WebSocket feed, audit critical events, and forward them to another realtime channel.

#WebSocket #Realtime #Relay

Validate an incoming brief and dispatch a reusable sub-workflow in the background without a response node.

#Execute #Async #Automation

Fetch a remote file into Drive, return the download link immediately, and email the same link asynchronously.

#Drive #Email #Files

Read qualified rows from Grist, stream them into BigQuery, and log the sync outcome in a Heym DataTable.

#Grist #BigQuery #DataTable

Publish a release message to RabbitMQ with an optional delivery delay for downstream consumers.

#RabbitMQ #Queue #Events

Poll an incident endpoint until it resolves, then automatically disable the polling trigger for future runs.

#Cron #Disable Node #HTTP

Frequently Asked Questions

What are Heym workflow templates?
Pre-built automation workflows you download as JSON or copy to the clipboard and paste onto the Heym canvas. Each template is a complete node graph — triggers, AI models, integrations, and outputs — ready to run in minutes.
How do I import a template?
Click "Import" → "Copy JSON" on any card, then press Cmd+V / Ctrl+V on the Heym canvas. Nodes and connections appear instantly. You can also "Download template" and drag the .json file onto the canvas.
Are the templates free?
Yes — all templates are source-available under Commons Clause + MIT. Use and modify them in your own deployments; commercial redistribution or paid hosted offerings require separate licensing.
Which AI models are supported?
Templates are model-agnostic. Nodes default to gpt-4o but work with any provider Heym supports: OpenAI, Anthropic Claude, Mistral, and local models via Ollama.
Do templates work on self-hosted Heym?
Yes. Heym is fully self-hosted via Docker Compose or Kubernetes. Import any template into your local instance and run it without data leaving your infrastructure.