Tags: AI, Telegram, LLM, FAQ, Support
Telegram FAQ Auto Reply
Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.
Turn a Telegram bot into a lightweight FAQ assistant. This template stores the latest incoming question in a global variable, drafts a concise reply with an LLM, and sends the answer back to the same chat.
What this workflow does
- Telegram Trigger fires on each new message sent to the bot
- Variable stores the latest question as a global variable for reuse
- LLM drafts a short answer based on the stored question
- Telegram sends the reply back to the originating chat
- Output returns the send status for debugging
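The five steps above can be sketched as a single handler function. This is an illustrative sketch only: the function and variable names are hypothetical, not Heym node APIs, and the LLM and Telegram calls are stubbed out.

```python
# Sketch of the five-node flow. Names are illustrative; the LLM and
# Telegram send calls are stubs standing in for the real nodes.

latest_question = None  # global variable: holds the most recent question


def draft_reply(question: str) -> str:
    # Stub for the LLM node; a real run would call your model with the
    # system instruction plus the stored question.
    return f"Thanks for asking: {question!r}. Here is a concise answer."


def send_telegram_reply(chat_id: int, text: str) -> bool:
    # Stub for Telegram's sendMessage call; returns the send status.
    return bool(chat_id and text)


def on_telegram_message(chat_id: int, text: str) -> dict:
    """Run one pass of the pipeline for an inbound message."""
    global latest_question
    latest_question = text                          # Variable: store question
    answer = draft_reply(latest_question)           # LLM: draft a short answer
    ok = send_telegram_reply(chat_id, answer)       # Telegram: reply to chat
    return {"chat_id": chat_id, "ok": ok}           # Output: send status


result = on_telegram_message(42, "How do I reset my VPN password?")
```

Storing the question in a module-level global mirrors the template's global-variable node: later nodes (or a later run) can reuse the most recent question without re-reading the trigger payload.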
Use cases
- Internal IT helpdesk bots
- Customer FAQ auto-replies
- Team support bots for common questions
Setup
Create a Telegram credential first, then set the webhook for the trigger node. Update the LLM system instruction with your domain knowledge and preferred tone.
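If you prefer to register the webhook by hand, Telegram's Bot API exposes a `setWebhook` method for this. The sketch below only builds the request URL; the token and trigger URL are placeholders, and issuing the actual HTTP request is left to you (curl, urllib, etc.).

```python
# Build the Telegram Bot API setWebhook request URL for a given bot.
# Token and webhook URL below are placeholders, not real credentials.
from urllib.parse import urlencode


def set_webhook_url(bot_token: str, webhook_url: str) -> str:
    """Return the setWebhook call URL pointing the bot at webhook_url."""
    query = urlencode({"url": webhook_url})
    return f"https://api.telegram.org/bot{bot_token}/setWebhook?{query}"


url = set_webhook_url("123456:ABC-token",
                      "https://example.com/heym/telegram-trigger")
```

Opening the returned URL (or GET-ing it) tells Telegram to deliver every bot update to your trigger node's endpoint.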
How to import this template
1. Click Import → Copy JSON on this page.
2. Open your Heym and navigate to a workflow canvas.
3. Press Cmd+V / Ctrl+V — nodes appear instantly.
4. Add your API keys in the node config panels and click Run.
6 nodes · Free & source-available
More workflow templates
Explore related automations — each page links to other templates so you can discover more use cases.
- Batch LLM Status Tracker: Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
- Gemini Image Creator: Generate images from a text prompt using Gemini's native image output.
- PDF / DOCX Translation Agent: Translate the full text of any uploaded document using an AI agent.
- Meeting Notes → JSON Tasks: Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.
- Inbox TL;DR Summarizer: Paste a long email or thread — one LLM call returns a short TL;DR with next actions.
- RAG Document Ingest: Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.
- RAG Q&A Agent: Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.
- Language Switch Router: Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.
- IMAP Support Inbox Triage: Watch a shared mailbox, summarize incoming support email, and route urgent messages to Slack.
- Jina Web Fetcher: Fetch clean, LLM-ready text from any URL using the Jina Reader API.
- Cursor Post Notifier: Monitor the Cursor blog on a schedule and Slack-notify your team when a new post goes live.
- Claude Blog Monitor: Monitor the Anthropic blog on a schedule and Slack-notify your team on new Claude posts.