Meeting Notes → JSON Tasks
Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.
Showcase structured LLM output: paste notes and receive a predictable JSON object (titles, owners, due hints). Ideal for standups, customer calls, and PM handoffs.
What this workflow does
- Input receives raw notes
- LLM runs with JSON output enabled and a light schema
- JSON output mapper returns the parsed tasks object as a clean top-level JSON payload
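With the steps above, the workflow's final payload is a single parsed object. An illustrative shape, assuming a schema with the fields mentioned here (titles, owners, due hints) — your actual keys depend on the schema you configure:

```json
{
  "tasks": [
    {
      "title": "Send updated pricing deck",
      "owner": "Dana",
      "due_hint": "before Friday's call"
    },
    {
      "title": "File the onboarding ticket",
      "owner": "unassigned",
      "due_hint": "next sprint"
    }
  ]
}
```

Because the LLM node runs in JSON output mode, downstream nodes can rely on these fields being present rather than parsing free text.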
Use cases
- CRM / ticketing prep without manual copy-paste
- Feeding downstream Set or HTTP nodes with clean fields
- Demos for teams evaluating Heym vs rigid form builders
Setup
Connect your preferred model credential in the LLM node. The template does not ship provider secrets.
FAQ
Is this the same as image generation? No — it highlights structured JSON output, complementary to the image templates.
Can I change the schema? Yes — edit `jsonOutputSchema` to match your tool chain.
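A minimal schema sketch, assuming `jsonOutputSchema` accepts standard JSON Schema (field names here are illustrative — rename them to match whatever your downstream nodes expect):

```json
{
  "type": "object",
  "properties": {
    "tasks": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "owner": { "type": "string" },
          "due_hint": { "type": "string" }
        },
        "required": ["title"]
      }
    }
  },
  "required": ["tasks"]
}
```

Keeping `owner` and `due_hint` optional lets the model omit them when the notes don't mention an assignee or deadline, instead of inventing one.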
How to import this template
1. Click Import → Copy JSON on this page.
2. Open your Heym workspace and navigate to a workflow canvas.
3. Press Cmd+V / Ctrl+V — nodes appear instantly.
4. Add your API keys in the node config panels and click Run.
4 nodes · Free & source-available
More workflow templates
Explore related automations — each page links to other templates so you can discover more use cases.
- Batch LLM Status Tracker: Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
- Gemini Image Creator: Generate images from a text prompt using Gemini's native image output.
- PDF / DOCX Translation Agent: Translate the full text of any uploaded document using an AI agent.
- Inbox TL;DR Summarizer: Paste a long email or thread — one LLM call returns a short TL;DR with next actions.
- RAG Document Ingest: Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.
- RAG Q&A Agent: Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.
- Language Switch Router: Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.
- Telegram FAQ Auto Reply: Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.
- IMAP Support Inbox Triage: Watch a shared mailbox, summarize incoming support email, and route urgent messages to Slack.
- Jina Web Fetcher: Fetch clean, LLM-ready text from any URL using the Jina Reader API.
- Cursor Post Notifier: Monitor the Cursor blog on a schedule and Slack-notify your team when a new post goes live.
- Claude Blog Monitor: Monitor the Anthropic blog on a schedule and Slack-notify your team on new Claude posts.