Language Switch Router
Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.
Goes beyond binary branching: the Switch node handles three or more cases without chaining conditions. Detect language with an LLM, then fan out to the right handler.
What this workflow does
- Input receives any text
- LLM returns the ISO language code (`en`, `de`, `fr`, or `other`)
- Switch routes to the matching output branch
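The three steps above can be sketched in plain Python. This is an illustration, not Heym code: `detect_language` stands in for the LLM node (a trivial keyword stub here, not a real model call), and the handler map plays the role of the Switch node.

```python
def detect_language(text: str) -> str:
    """Stub for the LLM node: return an ISO 639-1 code or 'other'."""
    keywords = {"hello": "en", "hallo": "de", "bonjour": "fr"}
    for word, code in keywords.items():
        if word in text.lower():
            return code
    return "other"

# One handler per Switch branch; anything unrecognised falls through.
HANDLERS = {
    "en": lambda t: ("en-branch", t),
    "de": lambda t: ("de-branch", t),
    "fr": lambda t: ("fr-branch", t),
}

def route(text: str) -> tuple:
    """Switch node: send the text down the branch matching its language."""
    code = detect_language(text)
    handler = HANDLERS.get(code, lambda t: ("other-branch", t))
    return handler(text)
```

In the real workflow the stub is replaced by an LLM call and each branch ends in an Output node.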
Use cases
- Multi-language support ticket routing
- Locale-specific content pipelines
- Internationalisation preprocessing
Setup
Extend the Switch node with additional cases for more languages. Replace the Output nodes with your downstream processors (translation, CRM, etc.).
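Conceptually, adding a language is one more entry in the branch table, and a downstream processor is a function plugged into that branch. A hedged sketch (all names are illustrative, not Heym's API):

```python
def translate_stub(text: str) -> str:
    # Placeholder for a real downstream processor (translation API, CRM, ...).
    return f"translated({text})"

BRANCHES = {
    "en": lambda t: t,      # pass through unchanged
    "de": translate_stub,
    "fr": translate_stub,
    "es": translate_stub,   # newly added case: Spanish
}

def dispatch(code: str, text: str) -> str:
    # Unrecognised codes fall back to the 'other' branch.
    return BRANCHES.get(code, lambda t: f"other({t})")(text)
```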
FAQ
Is this different from a Condition node? Yes — Condition handles one binary split; Switch handles three or more distinct values cleanly.
How to import this template
1. Click Import → Copy JSON on this page.
2. Open Heym and navigate to a workflow canvas.
3. Press Cmd+V / Ctrl+V — nodes appear instantly.
4. Add your API keys in the node config panels and click Run.
10 nodes · Free & source-available
More workflow templates
Explore related automations — each page links to other templates so you can discover more use cases.
- Batch LLM Status Tracker: Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
- Gemini Image Creator: Generate images from a text prompt using Gemini's native image output.
- PDF / DOCX Translation Agent: Translate the full text of any uploaded document using an AI agent.
- Meeting Notes → JSON Tasks: Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.
- Inbox TL;DR Summarizer: Paste a long email or thread — one LLM call returns a short TL;DR with next actions.
- RAG Document Ingest: Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.
- RAG Q&A Agent: Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.
- Telegram FAQ Auto Reply: Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.
- IMAP Support Inbox Triage: Watch a shared mailbox, summarize incoming support email, and route urgent messages to Slack.
- Jina Web Fetcher: Fetch clean, LLM-ready text from any URL using the Jina Reader API.
- Cursor Post Notifier: Monitor the Cursor blog on a schedule and Slack-notify your team when a new post goes live.
- Claude Blog Monitor: Monitor the Anthropic blog on a schedule and Slack-notify your team on new Claude posts.