Featured · Tags: AI, LLM, Lead Generation, Slack, Sales, Qualification

AI Lead Qualification Agent

Submit lead details — an LLM scores the fit 1–10, routes hot leads to Slack, and logs cold leads separately.

Workflow at a glance

The full canvas, before you import it


9 nodes · Free & source-available


Stop manually reviewing every inbound lead. Feed the lead details into this workflow and an LLM returns a qualification score plus a short rationale. High-scoring leads get an instant Slack ping to your sales channel; low scorers are logged for nurturing.

What this workflow does

  1. Input receives the lead's name, company, role, and use-case description
  2. LLM analyses the lead against your ICP and returns a JSON object with score (1–10) and rationale
  3. Condition branches on score ≥ 7 (hot) vs < 7 (cold)
  4. Slack (hot path) sends a formatted alert to your #sales channel
  5. Output (Cold) logs the rationale for the nurturing queue
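The scoring and branching in steps 2–3 can be sketched in Python. This is an illustrative sketch, not the workflow's actual implementation: `route_lead` and its field names are assumptions that simply mirror the JSON shape (`score`, `rationale`) and the ≥ 7 threshold described above.

```python
import json

HOT_THRESHOLD = 7  # matches the Condition node: score >= 7 is hot


def route_lead(llm_response: str) -> dict:
    """Parse the LLM node's JSON reply and pick the branch a lead takes."""
    result = json.loads(llm_response)
    score = int(result["score"])
    branch = "hot" if score >= HOT_THRESHOLD else "cold"
    return {"score": score, "rationale": result["rationale"], "branch": branch}


# Example reply for a well-matched lead: scores 8, so it takes the hot path
reply = '{"score": 8, "rationale": "VP Sales at a 200-person SaaS firm; strong ICP fit."}'
print(route_lead(reply)["branch"])  # hot
```

A hot result would feed the Slack node; a cold one goes to the nurturing log.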

Use cases

  • Auto-triage inbound demo requests
  • Prioritise outbound sequences by fit score
  • Feed into a CRM enrichment workflow

Setup

Connect an OpenAI credential. Update the LLM system instruction to include your Ideal Customer Profile. Add a Slack credential and set the channel name on the Slack node.
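As a starting point for that system instruction, here is a hypothetical template; the ICP wording is a placeholder you should replace with your own, and only the JSON output shape (`score`, `rationale`) comes from the workflow description above.

```python
# Hypothetical system instruction for the LLM node.
# Replace the ICP lines with your own Ideal Customer Profile.
SYSTEM_INSTRUCTION = """\
You are a lead-qualification assistant.
Ideal Customer Profile (example, edit me): B2B SaaS companies,
50-500 employees, where the buyer is a sales or revenue leader.
Score the lead 1-10 for fit against this profile and reply ONLY
with a JSON object: {"score": <1-10>, "rationale": "<one sentence>"}
"""
```

Pinning the reply to a strict JSON shape is what lets the downstream Condition node branch on `score` without extra parsing logic.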

How to import this template

  1. Click Import → Copy JSON on this page.
  2. Open your Heym instance and navigate to a workflow canvas.
  3. Press Cmd+V / Ctrl+V — nodes appear instantly.
  4. Add your API keys in the node config panels and click Run.