Batch LLM Status Tracker
Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
Use one LLM node for bulk prompting and a second branch for progress-aware side effects. This template demonstrates Heym's Batch API mode on the plain LLM node, including the dedicated batchStatus branch.
What this workflow does
- Variable node seeds a demo array of prompts
- LLM node sends that array through Batch API mode
- STATUS branch maps live progress updates such as pending, processing, and completed
- Main path reshapes the final per-item results and returns them as the workflow output
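The two branches above can be sketched in plain Python. This is a hedged illustration of the control flow, not Heym's actual implementation; the `StatusEvent` shape, the `"output"` result key, and both function names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class StatusEvent:
    batch_id: str
    status: str  # e.g. "pending" | "processing" | "completed"

def on_status(event: StatusEvent) -> str:
    # STATUS branch: map each live progress update to a side effect,
    # such as a notification message.
    messages = {
        "pending": "Batch queued, waiting for the provider to start.",
        "processing": "Provider is working through the batch.",
        "completed": "All items finished; main path resumes.",
    }
    return messages.get(event.status, f"Unknown status: {event.status}")

def main_path(results: list[dict]) -> list[dict]:
    # Main path: reshape the final per-item results into the workflow
    # output (assumes each result carries an "output" field).
    return [{"index": i, "text": r["output"]} for i, r in enumerate(results)]
```

The STATUS branch fires repeatedly while the batch runs; the main path runs once, after the provider reports completion.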
Use cases
- Lower-cost bulk prompting via OpenAI's Batch API
- Queue-style AI enrichment jobs
- Progress notifications while the provider batch is still running
- Batch JSON extraction with downstream per-item handling
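For context on the lower-cost bulk prompting use case: OpenAI's Batch API consumes a JSONL payload with one chat-completion request per line, each tagged with a `custom_id` so results can be matched back to inputs. A minimal sketch of building that payload follows; the function name, model, and `custom_id` scheme are illustrative assumptions, not taken from Heym:

```python
import json

def build_batch_jsonl(prompts: list[str], model: str = "gpt-4o-mini") -> str:
    # One JSONL line per prompt, in the shape OpenAI's Batch API expects.
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"item-{i}",  # lets results map back to inputs
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }))
    return "\n".join(lines)
```

In a raw-API workflow, this file would be uploaded and submitted as a batch job; in this template, Heym's Batch API mode handles that step for you.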
Setup
Use an OpenAI API credential and a model Heym shows as batch-capable. Replace the demo `$vars.promptList` array with a dynamic expression such as `$input.items.map("item.text")` when you turn this into a production workflow.
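In Python terms, that dynamic expression is just a projection over the incoming items. A sketch, not Heym code; the `"text"` field name mirrors the expression above:

```python
def prompt_list(items: list[dict]) -> list[str]:
    # Equivalent of $input.items.map("item.text"): pull one string
    # field out of each incoming item to form the prompt array.
    return [item["text"] for item in items]
```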
Notes
Batch mode is LLM-only and text-only. It cannot be combined with image output or image input.
How to import this template
1. Click Import → Copy JSON on this page.
2. Open your Heym and navigate to a workflow canvas.
3. Press Cmd+V / Ctrl+V — nodes appear instantly.
4. Add your API keys in the node config panels and click Run.
7 nodes · Free & source-available