#AI #Telegram #LLM #FAQ #Support

Telegram FAQ Auto Reply

Reply to inbound Telegram questions with an LLM and keep the latest question in a global variable.


Turn a Telegram bot into a lightweight FAQ assistant. This template stores the latest incoming question in a global variable, drafts a concise reply with an LLM, and sends the answer back to the same chat.

What this workflow does

  1. Telegram Trigger receives a new bot message
  2. Variable stores the latest question as a global variable for reuse
  3. LLM drafts a short answer based on the stored question
  4. Telegram sends the reply back to the originating chat
  5. Output returns the send status for debugging
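The five steps above can be sketched in plain Python. This is a minimal sketch only: the function and payload names are illustrative, not Heym's actual API, and the LLM node is stubbed out with a placeholder `answer_question` function.

```python
# Minimal sketch of the workflow's five steps.
# answer_question() is a stand-in for the LLM node; a real workflow
# would call your model provider with a domain-specific system prompt.

LATEST_QUESTION = None  # step 2: the global variable


def answer_question(question: str) -> str:
    # Placeholder for the LLM draft (step 3).
    return f"Thanks for asking: {question!r}. Here is a short answer."


def handle_update(update: dict) -> dict:
    """Process one incoming Telegram update the way the workflow does."""
    global LATEST_QUESTION
    message = update["message"]               # step 1: trigger payload
    LATEST_QUESTION = message["text"]         # step 2: store globally for reuse
    reply = answer_question(LATEST_QUESTION)  # step 3: draft a concise answer
    send_status = {                           # step 4: sendMessage payload
        "chat_id": message["chat"]["id"],     # reply to the originating chat
        "text": reply,
        "ok": True,
    }
    return send_status                        # step 5: status for debugging


status = handle_update(
    {"message": {"chat": {"id": 42}, "text": "How do I reset my password?"}}
)
```

Keeping the latest question in a module-level global mirrors the template's Variable node: any later node (or a follow-up run) can reuse it without re-reading the trigger payload.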

Use cases

  • Internal IT helpdesk bots
  • Customer FAQ auto-replies
  • Team support bots for common questions

Setup

Create a Telegram credential first, then set the webhook for the trigger node. Update the LLM system instruction with your domain knowledge and preferred tone.
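Registering the webhook is a single HTTPS call to the Telegram Bot API's `setWebhook` method; Telegram then POSTs new updates to that URL, which is what the trigger node listens for. A minimal sketch that builds the request URL, assuming a placeholder token and webhook address:

```python
from urllib.parse import urlencode

# Placeholder values: substitute your real bot token and public HTTPS URL.
BOT_TOKEN = "123456:ABC-DEF"
WEBHOOK_URL = "https://example.com/telegram/webhook"


def set_webhook_request(token: str, url: str) -> str:
    # Telegram Bot API setWebhook endpoint; issuing a GET or POST to this
    # URL registers where Telegram should deliver incoming bot messages.
    query = urlencode({"url": url})
    return f"https://api.telegram.org/bot{token}/setWebhook?{query}"


request_url = set_webhook_request(BOT_TOKEN, WEBHOOK_URL)
```

You can issue the resulting request with `curl` or any HTTP client; Telegram responds with `{"ok": true}` once the webhook is registered.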

How to import this template

  1. Click Import → Copy JSON on this page.
  2. Open your Heym instance and navigate to a workflow canvas.
  3. Press Cmd+V / Ctrl+V — nodes appear instantly.
  4. Add your API keys in the node config panels and click Run.


6 nodes · Free & source-available


Heym
incident analysis · production AI
Observed across 100s of AI rollouts:

AI workflows don't fail because of prompts.
They fail because of orchestration.

Symptom · glue code: 5 tools. Scripts, vector DB, approval bot, tracing, browser runner; none of them talk.
Symptom · visibility: ~0% observable behavior across the stack. Debugging is guesswork.
With Heym · one runtime: 1 canvas. Agents, RAG, HITL, MCP, traces & evals. Self-hosted. Observable.

AI-Native Runtime · Production-Grade
github.com/heymrun/heym