
April 29, 2026 · Ceren Kaya Akgün

How to Build an AI Agent (No Code)

Learn how to build an AI agent without writing code. Step-by-step: trigger, agent node, tools, persistent memory, MCP, and multi-agent orchestration in Heym.

ai-agent · no-code · workflow-automation · mcp · agent-memory · tools · tutorial

TL;DR: Build an AI agent in Heym by adding a trigger node, connecting an Agent node with a system prompt, attaching tool nodes, and wiring an output. No code required. Enable persistentMemoryEnabled for a knowledge graph that persists across runs. The whole agent can be exposed as an MCP tool and called from Claude.ai in one click.

What is an AI agent? An AI agent is an autonomous software system that receives a goal, breaks it into steps, calls external tools to gather information or take actions, and produces a result without step-by-step human instruction. Unlike a simple chatbot, an AI agent decides which tools to use, in what order, based on the task at hand.

Key Takeaways:

  • Heym's Agent node runs a full ReAct loop (reason, act, observe) on top of any supported model including Claude, GPT-4, and Gemini
  • Custom tools are JavaScript functions defined directly in the canvas — no server or deployment needed
  • Enable persistentMemoryEnabled to build a knowledge graph that survives across runs and is injected as context on the next execution
  • Connect any MCP-compatible server as agent tools in one click from the MCP panel
  • Mark one agent as orchestrator and list subAgentLabels to delegate tasks across a team of agents


Why Build an AI Agent Without Code

I am on the Heym team and use AI agents in production workflows daily. Everything in this tutorial reflects features available in Heym today.

This tutorial walks you through exactly how to build an AI agent without writing code — from choosing a trigger to deploying a multi-agent pipeline. It is written for developers, product teams, and technical operators who want to automate complex tasks with AI without writing and maintaining a full codebase. Building an AI agent from scratch in Python means managing dependencies, running an inference server, writing retry logic, implementing memory storage, and deploying the whole thing somewhere. That is a significant project, not a feature you add in an afternoon.

The business case for AI agents is growing quickly. According to McKinsey's 2025 Global AI Survey, 65% of organizations now use AI in at least one business function, up from 55% a year earlier, and AI agent automation is the fastest-growing category of that adoption (McKinsey, 2025). Gartner predicts that by 2028, 15% of routine business decisions will be made autonomously by AI agents — a shift that is already visible in early-adopter teams today (Gartner, 2025). The Stanford AI Index 2025 reported that AI agent research and production deployments grew faster than any other AI category in 2024, driven by the convergence of capable LLMs, tool-use APIs, and accessible orchestration frameworks (Stanford HAI, 2025).

No-code tools close the gap between wanting an AI agent and having one. Building an agent with a code framework means writing hundreds of lines before you can run a single test. In Heym, a working agent is a visual canvas with four types of nodes: a trigger, an agent, tools, and an output. You wire them together, write a system prompt in plain English, and hit Run. The entire reasoning loop, tool registry, and memory layer are handled by the platform — not by you.

If you are new to the broader concept, start with what is AI workflow automation before continuing. Once your agent is running, AI agent use cases and AI agent memory are natural next reads.


What You Need Before You Start

Before you open Heym, gather three things:

  1. A focused goal for the agent. The narrower, the better. "Summarize support tickets and post a daily digest to Slack" is a good agent goal. "Be helpful" is not.
  2. An API credential. Heym supports Anthropic Claude, OpenAI GPT-4, and Google Gemini models. You need at least one API key added to your Heym credential store before the Agent node can run.
  3. An output destination. The agent's result has to go somewhere: a Slack channel, an email inbox, a webhook endpoint, a database row, or another node on the canvas.

No local development environment, no Python, and no infrastructure setup required.


Step 1: Choose a Trigger

Every Heym workflow starts with a trigger node. The trigger determines when and how your agent runs. Heym supports six trigger types:

  • Text Input: manual run from the canvas — best for testing and iteration
  • Cron: scheduled runs (e.g., every hour, every Monday at 08:00 UTC)
  • Webhook / HTTP: runs when a POST request hits your workflow's unique URL
  • Telegram Trigger: runs when a message arrives in a Telegram bot
  • Slack Trigger: runs on Slack events (new message, mention, or reaction)
  • IMAP Trigger: runs when a new email arrives in a monitored inbox

For your first agent, use Text Input. It gives you a test input field in the canvas that you can edit between runs without touching infrastructure. Once the agent produces correct outputs, swap the trigger to Cron or Webhook and everything else stays the same.

To add a trigger: open the node panel on the left, drag Text Input onto the canvas, and you have your starting node.
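Once you later swap the trigger to Webhook, any HTTP client can start the workflow. Here is a minimal sketch of what that call might look like — the URL is a placeholder, not a real Heym endpoint, and the payload fields are purely illustrative:

```javascript
// Hypothetical example: POSTing to a workflow's webhook URL starts a run.
// The URL below is a placeholder; Heym assigns each workflow its own.
const webhookUrl = "https://app.heym.example/webhooks/your-workflow-id";

// The JSON body becomes the trigger node's output, readable downstream
// via $triggerNode.body expressions.
const payload = { topic: "support ticket backlog", priority: "high" };

async function startWorkflow() {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.status;
}
```

Everything downstream of the trigger is unchanged — only the entry point differs.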


Step 2: Add and Configure the Agent Node

What is a ReAct agent loop? A ReAct loop (Reason + Act) is the execution pattern used by AI agents. The agent reasons about the current task, selects and calls a tool, observes the result, then reasons again — repeating until the task is complete or the iteration limit is reached. This loop allows an agent to gather external data and adapt its plan at runtime, rather than generating a single static response.

The Agent node is Heym's core AI reasoning unit. It runs a ReAct loop: reason about the task, call a tool, observe the result, repeat until done, then return an output. You control the loop with three fields.

System Prompt

The system prompt is the agent's job description. Write it in plain English. A good system prompt answers three questions for the model:

  • What is your role? ("You are a support ticket analyst.")
  • What data will you receive? ("You receive raw ticket text from the Trigger node.")
  • What should you output? ("Return a one-paragraph summary and a severity score from 1 to 5.")

You can inject values from upstream nodes into the system prompt using $expression references. For example: The user's name is $triggerNode.body.name.

Model

Select any supported model from the dropdown: claude-opus-4, claude-sonnet-4, gpt-4.1, gpt-4o, gemini-2.5-pro, gemini-2.5-flash, and more. Each credential you add to Heym unlocks the models for that provider. The context window size is displayed next to each model name — up to 1,047,576 tokens for gpt-4.1, and 200,000 tokens for the Claude and Gemini families.

Max Tool Iterations

Set maxToolIterations to cap how many tool calls the agent can make before it must return an answer. The default is 10. For simple single-step agents, 3 is enough; for complex research agents that gather and synthesize multiple sources, 15 or 20 is appropriate.
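To make the loop and the cap concrete, here is a toy sketch of a ReAct loop in JavaScript. The model and tool here are stand-ins, not Heym internals — the real Agent node handles all of this for you:

```javascript
// Toy sketch of a ReAct loop with a maxToolIterations cap.
// Model and tools are stand-ins; Heym's Agent node does this internally.
function runAgent(model, tools, task, maxToolIterations = 10) {
  let observation = task;
  for (let i = 0; i < maxToolIterations; i++) {
    const step = model(observation);           // Reason: tool call or final answer?
    if (step.finalAnswer !== undefined) return step.finalAnswer;
    observation = tools[step.tool](step.args); // Act, then Observe the result
  }
  return String(observation); // Cap reached: answer with what we have
}

// Toy model: requests the weather once, then returns the observation as the answer.
const tools = { get_weather: ({ city }) => `${city}: 21°C, clear` };
const toyModel = (obs) =>
  typeof obs === "string" && obs.startsWith("Weather in ")
    ? { tool: "get_weather", args: { city: obs.slice("Weather in ".length) } }
    : { finalAnswer: String(obs) };

const answer = runAgent(toyModel, tools, "Weather in Paris");
```

The cap is a safety valve: it bounds cost and latency when the model would otherwise keep calling tools.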


Step 3: Give Your Agent Tools

Tools are what separate an AI agent from a plain LLM call. A tool is a capability the agent can invoke at runtime: fetch a URL, query a knowledge base, call an API, run a calculation. Without tools, the agent can only reason over what you give it in the prompt. With tools, it can interact with live data and external systems.

Key insight: IBM's 2025 Global AI Adoption Index found that 42% of enterprises have actively deployed AI in production applications, with tool-augmented agents representing the fastest-growing deployment pattern. Teams that previously needed custom integration code to connect LLMs to APIs now configure the connection in a no-code canvas in under 15 minutes (IBM Institute for Business Value, 2025).

Custom JavaScript Tools

In the Agent node's Tools section, click Add Tool. Each tool has four fields:

  • Name: a snake_case identifier the model uses to call it (get_weather, search_docs, parse_invoice)
  • Description: plain-English explanation of what the tool does and when to use it
  • Parameters: a JSON Schema object defining the inputs the tool accepts
  • Code: a JavaScript function body that receives the parameters and returns a result

Here is an example tool that fetches a URL and returns the first 2,000 characters of the response:

// Fetch the page with an identifying User-Agent header
const response = await fetch(params.url, {
  headers: { "User-Agent": "Heym-Agent/1.0" }
});
const text = await response.text();
// Truncate so the tool result stays small enough for the context window
return { content: text.slice(0, 2000) };

The model decides when to call this tool based on the description you wrote. You do not need to tell it explicitly in the system prompt.
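The Parameters field for the example tool above might look like this — standard JSON Schema, though the exact shape Heym expects in the form field may differ slightly:

```javascript
// A JSON Schema "Parameters" object for the URL-fetching tool above.
// The model reads this to learn it must supply a "url" string.
// (Sketch — the exact form Heym expects may differ slightly.)
const parameters = {
  type: "object",
  properties: {
    url: {
      type: "string",
      description: "Fully qualified URL of the page to fetch",
    },
  },
  required: ["url"],
};
```

A precise description on each parameter matters as much as the tool description: it is what the model uses to fill in arguments correctly.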

RAG Tool

Connect a RAG node to query your Qdrant vector store. Add a RAG node to the canvas, configure the vector store and query expression, then wire it to the Agent node. The agent calls the RAG node when it needs to retrieve relevant documents from your knowledge base. Qdrant is Heym's vector store — other vector databases are not supported.

HTTP Tool

The HTTP node makes external API calls using a cURL-style command. Connect it to the Agent node and the agent can hit any REST API during its reasoning loop. For a detailed walkthrough of HTTP node configuration, see how to connect two APIs in an AI workflow.


Step 4: Connect an Output Node

Once the agent finishes its reasoning loop, it returns a result. That result needs a destination. Common patterns:

  • Output node: displays the result in the debug panel — useful during testing
  • Slack node: posts the agent's output to a channel (#alerts, #support-summary, etc.)
  • Telegram node: sends a message to a bot or group chat
  • Send Email node: delivers the result to an inbox with a configurable subject and body
  • Set node: writes the output to a Heym Global Variable for use in downstream workflows

Connect the Agent node's output handle to the input of your chosen output node. If you want to post only the agent's answer text (not the full output object), use a $agentNode.output expression in the output node's message field to extract the relevant value.


Step 5: Test and Run Your Agent

With trigger, agent, tools, and output connected, run the workflow:

  1. Click Run in the top bar (or send a message to the trigger if using Telegram, Slack, or Webhook).
  2. The Debug Panel on the right shows each node's execution in real time: inputs, outputs, tool calls, and timing.
  3. Inspect the Agent node's trace to see the full reasoning loop — which tools the model called, what each tool returned, and how the model synthesized the final answer.
  4. If the output is wrong, adjust the system prompt first. An underspecified prompt, one that leaves the model guessing at format, scope, or criteria, is the most common cause of incorrect agent behavior.

Iterate on the system prompt until the agent produces the right output on a representative set of inputs. Then change the trigger from Text Input to Cron or Webhook and the agent runs automatically from that point on.


Give Your Agent Persistent Memory

By default, each run of an AI agent starts fresh. The agent has no memory of previous conversations or tasks. For many workflows that is perfectly fine. For agents that handle ongoing relationships — customer service, personal assistants, research pipelines — memory is what makes the agent genuinely useful over time.

What is persistent agent memory? Persistent agent memory is a storage layer that records entities, facts, and relationships extracted from an AI agent's previous runs, then injects that accumulated knowledge as structured context into future runs. Unlike conversation history — which is discarded after each session — persistent memory grows with every execution, allowing an agent to build domain knowledge over hundreds of interactions without manual curation.

Enable persistentMemoryEnabled on the Agent node. When active, Heym runs a background extraction step after each run: a secondary LLM call reads the run's inputs, outputs, and tool results, identifies entities and relationships, and writes them to a knowledge graph stored in the database. On the next run, the graph is serialized as structured markdown and injected into the agent's system prompt as additional context.
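For intuition, here is a toy sketch of what "graph serialized as structured markdown" could look like. The graph shape and serializer below are illustrative assumptions, not Heym's internal format:

```javascript
// Illustrative only: a toy knowledge graph and a serializer producing the
// kind of structured markdown an agent might receive as injected context.
// The real graph shape and markdown layout are Heym internals.
const graph = {
  entities: [
    { name: "Acme Corp", type: "customer" },
    { name: "Ticket #4812", type: "support_ticket" },
  ],
  relations: [{ from: "Acme Corp", rel: "reported", to: "Ticket #4812" }],
};

function serializeGraph(g) {
  const lines = ["## Known entities"];
  for (const e of g.entities) lines.push(`- ${e.name} (${e.type})`);
  lines.push("## Relationships");
  for (const r of g.relations) lines.push(`- ${r.from} ${r.rel} ${r.to}`);
  return lines.join("\n");
}

const context = serializeGraph(graph);
```

The point is that memory arrives as readable, structured text the model can reason over, not as raw database rows.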

A support agent with persistent memory recognizes returning users, remembers their previous issues, and avoids asking for information it already has. The memory accumulates across hundreds of runs without any configuration beyond flipping the toggle.

You can also share memory across agents. The memoryShares field lets you grant other agents read or read/write access to a given agent's knowledge graph, so a Summarizer agent can draw on the memory that a Research agent built over multiple runs.
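A memoryShares entry might be configured like this — note that the field names other than memoryShares itself are illustrative guesses, not a documented schema:

```javascript
// Hypothetical memoryShares configuration: the Research agent grants the
// Summarizer read access to its knowledge graph. Field names beyond
// memoryShares itself are illustrative.
const researchAgentConfig = {
  persistentMemoryEnabled: true,
  memoryShares: [{ agentLabel: "Summarizer", access: "read" }],
};
```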

For a deep dive into memory types and implementation patterns, see AI agent memory: types, patterns, and implementation.


Connect External Tools via MCP

The Model Context Protocol (MCP) lets your agent connect to any MCP-compatible tool server: web search, code execution, calendar access, file system operations, and community-built integrations.

What is the Model Context Protocol (MCP)? The Model Context Protocol (MCP) is an open standard developed by Anthropic that defines how AI agents discover and call external tools and data sources. An MCP server exposes a set of named tools with typed parameters; an MCP client discovers those tools at startup and calls them at runtime by name. MCP replaces per-integration custom adapters with a universal, agent-readable interface that works across any compatible AI system (MCP Specification, 2025).
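Under the hood, an MCP tool call is a JSON-RPC 2.0 message using the spec's tools/call method. A sketch of what a client sends — the tool name and arguments here are hypothetical:

```javascript
// Sketch of an MCP tools/call request (JSON-RPC 2.0, per the MCP spec).
// The tool name and arguments are hypothetical examples.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "web_search",
    arguments: { query: "Heym workflow automation" },
  },
};
```

You never construct these messages yourself in Heym — the platform speaks the protocol for you — but knowing the shape helps when debugging a misbehaving server.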

In Heym, open the MCP panel from the left sidebar. Add an MCP connection with:

  • Transport type: sse for hosted MCP servers, streamable_http for streaming endpoints, or stdio for local servers
  • Server URL or command: the endpoint address or the shell command that starts the server
  • Label: a friendly name that appears in the agent's tool list

Once connected, the MCP server's tools appear automatically in the Agent node's tool list. The agent calls them by name with parameters, exactly the same way it calls custom JavaScript tools.

You can also expose your own Heym workflows as MCP tools. Toggle any workflow on in the MCP panel and it becomes callable by other agents, by Claude.ai via the Claude Connector, or by any MCP-compatible client. This turns a complex multi-node pipeline into a single reusable tool.

For more on building and connecting MCP servers, see how to build an MCP server and best MCP servers for AI workflow automation in 2026.


Three Real-World Agent Examples

1. Daily Briefing Agent

Trigger: Cron (every weekday at 08:00 UTC)

Agent tools: HTTP node fetching a news API, HTTP node fetching a weather API

Output: Slack node posting to #morning-briefing

The agent fetches today's top headlines and the local weather forecast, synthesizes a two-paragraph briefing in the style defined in the system prompt, and posts it to Slack before the team starts work. Total canvas size: five nodes. Total setup time: under 20 minutes.

2. Support Ticket Classifier

Trigger: IMAP Trigger (new email to [email protected])

Agent tools: Custom JavaScript tool that extracts email subject and sender metadata; RAG node querying a Qdrant knowledge base of previously resolved tickets

Output: Set node writing the classification result to a Global Variable, followed by an HTTP node POSTing the result to your ticketing system API

The agent reads the incoming email, checks similar resolved tickets in the knowledge base, assigns a category and severity score (1 to 5), and sends the structured result to your ticketing API. With persistentMemoryEnabled on, the agent improves its classifications as it processes more tickets over time.
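The classifier's custom metadata tool could be as small as this sketch. It assumes the trigger passes raw email headers in a params.rawEmail field (that field name is illustrative); Heym wraps the Code field in a function receiving params, so it is shown here as a standalone function for clarity:

```javascript
// Sketch of a custom tool that extracts subject and sender from raw email
// headers. Assumes the trigger supplies them as params.rawEmail; the field
// name is illustrative. Heym wraps the Code field in a function itself.
function extractEmailMeta(params) {
  const headers = params.rawEmail.split("\n");
  const find = (name) => {
    const line = headers.find((h) => h.toLowerCase().startsWith(name + ":"));
    return line ? line.slice(name.length + 1).trim() : null;
  };
  return { subject: find("subject"), sender: find("from") };
}
```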

3. Multi-Agent Research Pipeline

Trigger: Webhook (accepts a research topic as a JSON payload)

Orchestrator agent: Receives the topic and delegates to two sub-agents using isOrchestrator: true and subAgentLabels: ["Web Researcher", "Summarizer"]

Web Researcher agent: Uses an MCP web search server to gather sources and extract key facts

Summarizer agent: Reads the researcher's output and produces a structured report with citations

Output: Email node delivering the finished report to the requester

The orchestrator agent never does the research itself. It plans, delegates, collects sub-agent results, and routes the final output. This is a three-agent system running on a single canvas, coordinated entirely through Heym's multi-agent orchestration layer. For more on how these architectures work, see multi-agent AI systems: a practical guide.
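The delegation pattern above can be sketched in a few lines. The agent functions here are toy stand-ins for Heym's sub-agent nodes, and the sequential pipe is one simple delegation strategy, not the only one the orchestrator can use:

```javascript
// Toy sketch of orchestrator delegation: the orchestrator does no research
// itself, it routes the task through its sub-agents. The functions stand in
// for Heym sub-agent nodes; sequential piping is one possible strategy.
const subAgents = {
  "Web Researcher": (topic) => `facts about ${topic}`,
  "Summarizer": (facts) => `Report: ${facts}`,
};

function orchestrate(subAgentLabels, topic) {
  // Pipe each sub-agent's output into the next, like wiring on the canvas.
  return subAgentLabels.reduce((input, label) => subAgents[label](input), topic);
}

const report = orchestrate(["Web Researcher", "Summarizer"], "vector databases");
```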


Limitations to keep in mind: AI agents are not fully deterministic. The same input can produce different tool call sequences across runs, particularly with temperature above 0. For business-critical workflows where auditability matters, review the agent's trace in the Debug Panel after each run. Heym also supports Human-in-the-Loop (HITL) review, which pauses the agent and requests a human decision before the output reaches an external system — a useful safeguard for high-stakes automation.


FAQ

Do I need coding experience to build an AI agent in Heym?

No. Heym's visual canvas handles all the wiring. You write a system prompt in plain English, configure nodes through form fields, and optionally write small JavaScript snippets for custom tools. No framework knowledge, no deployment, and no infrastructure management is required.

What is the difference between the LLM node and the Agent node?

The LLM node makes a single call to a language model and returns the response. It has no tool access and no reasoning loop. The Agent node runs a full ReAct loop: it can call tools, observe results, reason again, and repeat until the task is complete or maxToolIterations is reached. Use LLM nodes for simple text transformations and Agent nodes for tasks that require external data or multi-step reasoning.

What triggers can start an AI agent workflow in Heym?

Heym supports six trigger types: Text Input (manual, for testing), Cron (scheduled), Webhook/HTTP (triggered by an incoming POST request), Telegram Trigger, Slack Trigger, and IMAP Trigger (new email). You can change the trigger at any time without modifying the rest of the workflow.

How do I share memory between multiple agents?

Enable persistentMemoryEnabled on each agent that should have its own memory graph. Then use the memoryShares field to grant other agents read or read/write access to a given agent's knowledge graph. This lets a Summarizer agent read the memory built up by a Research agent over multiple runs without the two agents needing to communicate directly.

Can I expose my AI agent as an API or MCP tool?

Yes. Every Heym workflow has a webhook URL that acts as a REST endpoint. You can also toggle the workflow on in the MCP panel to expose it as an MCP tool, letting Claude.ai and other MCP-compatible clients call it directly via the Claude Connector using OAuth 2.1 authentication.


Building your first AI agent is the starting point, not the destination. As you get comfortable with triggers and tools, add persistent memory to make the agent smarter over time, connect MCP servers to extend its reach, and wire multiple agents together for tasks that need parallel reasoning. The patterns in this guide — trigger, agent, tools, output — scale from a five-node daily briefing to a twenty-node research pipeline without changing the fundamental approach. Start with one agent, one tool, and one clear goal. The rest follows from there.

Ready to go further? See what is agentic AI for the theory, best AI agent builders in 2026 for a broader landscape view, or open Heym and build your first workflow today.

Ceren Kaya Akgün

Founding Engineer

Ceren is a founding engineer at Heym, working on AI workflow orchestration and the visual canvas editor. She writes about AI automation, multi-agent systems, and the practitioner experience of building production LLM pipelines.