
April 27, 2026 · Ceren Kaya Akgün

How to Connect Two APIs in an AI Workflow (No Code)

Learn how to connect two APIs in an AI workflow — no code. Step-by-step tutorial: HTTP node, cURL, security headers, response mapping, and MCP sharing.

Tags: api-integration, http-node, workflow-automation, mcp, no-code, ai-workflow

TL;DR: To connect two APIs in an AI workflow: add a webhook trigger, place an HTTP node with a cURL command for the first API call, pass the response downstream using $expression references, then add a second HTTP node for the outgoing call. In Heym this is entirely visual — no code, no custom auth handlers, no JSON parsing logic. The whole pipeline can be exposed as a single MCP tool that Claude.ai calls on demand.

What is an API-to-API workflow? An API-to-API workflow is an automated pipeline that receives data from one API endpoint, transforms or enriches it (optionally with an AI agent), and sends a request to a second API endpoint — all without manual intervention or custom integration code. Each step is a node on a visual canvas connected by data references.

Key Takeaways:

  • The HTTP node in Heym takes a standard cURL command and executes it as a workflow step — headers, auth tokens, and body all in one field
  • Reference upstream data anywhere in the cURL string using $nodeName.field expressions
  • Store API keys as Global Variables and inject them at runtime so secrets never appear in workflow definitions
  • Every HTTP response is a structured object: $httpNode.body, $httpNode.status, $httpNode.headers
  • Expose any workflow as an MCP tool in one click; connect Claude.ai via OAuth 2.1 with the Claude Connector tab

Why API-to-API Automation Matters in 2026

I work on the Heym team and build API automation workflows daily — everything described here reflects features I use in production.

This guide is for developers and technical teams who need to connect APIs across SaaS tools, billing systems, messaging platforms, or AI pipelines — without writing and maintaining custom integration code. If you build anything that talks to more than one service, you already know the problem: data lives in different systems, each with its own API, its own authentication scheme, and its own response shape. Getting those systems to talk to each other used to mean writing glue code — auth handlers, HTTP clients, JSON transformers, retry logic — and maintaining all of it indefinitely.

The scale of this problem is not small. The global application integration market reached $26.07 billion in 2026, up from $22.23 billion the year before, driven almost entirely by teams trying to make their SaaS stacks communicate (360iResearch, 2026). The API management layer alone is on track to hit $37.43 billion by 2034, growing at 21.7% per year (Fortune Business Insights, 2025).

According to Postman's 2025 State of the API report, 86% of developers say their organizations are increasing API usage year over year, and AI-driven orchestration now accounts for the fastest-growing category of new API traffic. What changed in 2025 and 2026 is that AI agents entered the pipeline. It is no longer enough to pipe data from API A to API B. The useful pattern now is: receive event → call external API → let an AI agent reason about the response → call a second API with the result. That four-step loop is the core of most AI automation workflows I build at Heym, and it is what this guide walks through from start to finish.

If you are new to the broader concept of AI workflow automation, the introductory post on the Heym blog is a good primer before continuing here.


What "Connecting Two APIs" Actually Means

Quick answer: To connect two APIs without code, you need four things: a trigger that receives the incoming data, an HTTP node that calls the first API, a way to extract fields from the response, and a second HTTP node that sends those fields to the second API. A visual workflow tool provides all four in a single canvas.

In a visual workflow tool, "connecting two APIs" is not about network-level routing. It means:

  1. Receiving a signal — a webhook call, a scheduled trigger, or a message from a chat interface
  2. Calling the first external API — an HTTP request with the right method, headers, and body
  3. Transforming the response — extracting the fields you need and optionally enriching them with AI
  4. Calling the second external API — using data from step 3 as input

The Four-Step Pattern at a Glance

| Step | Node type | What it does | Output |
| --- | --- | --- | --- |
| 1. Receive | Webhook Trigger (generic mode) | Accepts the incoming JSON payload | $triggerNode.body |
| 2. Fetch | HTTP Node | Calls the first external API with cURL | $httpNode.body, .status, .headers |
| 3. Enrich (optional) | AI Agent | Processes or transforms the response | $agentNode.output |
| 4. Send | HTTP Node | Calls the second API with upstream data | Final response |

The connective tissue is data passing. Each node in the workflow produces output; downstream nodes consume it via $expression references. The expression $myHttpNode.body.user.email, for example, reaches into the HTTP node's response body and extracts a nested field — no parsing code needed.
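To make the data-passing idea concrete, here is a toy Python resolver for this reference style. It is an illustrative sketch of the concept, not Heym's actual evaluator (which also supports array indexing and arithmetic):

```python
import re

def resolve(template: str, outputs: dict) -> str:
    """Substitute $nodeName.field.path references in a string with values
    taken from earlier node outputs. Toy illustration only."""
    def lookup(match):
        value = outputs
        for key in match.group(1).split("."):
            value = value[key]  # walk one level deeper per path segment
        return str(value)
    return re.sub(r"\$(\w+(?:\.\w+)*)", lookup, template)

outputs = {"myHttpNode": {"body": {"user": {"email": "[email protected]"}}}}
resolve("Send a receipt to $myHttpNode.body.user.email", outputs)
# → "Send a receipt to [email protected]"
```

The point is that downstream nodes never parse JSON themselves; they name a path, and the engine substitutes the value at run time.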

This is meaningfully different from point-to-point integrations built with webhooks alone. A workflow canvas makes the data flow visible, debuggable, and reusable. You can inspect every node's output after a run, replay failed executions with the same input, and share the entire pipeline as a template that other team members can fork and adapt.


The HTTP Node: cURL-Based Requests in a Visual Workflow

HTTP Node: An HTTP node is a workflow step that executes a single HTTP request — defined as a standard cURL command — and returns the response as a structured object (status, body, headers, request) for downstream nodes to reference via $expressions. No custom HTTP client code is required.

The HTTP node in Heym does one thing: it takes a cURL command and executes it. That decision — using cURL as the input format rather than a form with separate fields for method, URL, headers, and body — turns out to be very ergonomic for developers. You can copy a request directly from your browser's network tab, from Postman, or from an API's documentation, paste it into the node, and it works immediately.

Here is the simplest possible HTTP node configuration:

curl -X GET https://api.example.com/users/42

The node parses the method (GET), the URL, and any headers or body it finds in the command. After execution, the response is available as a structured object.
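A simplified sketch of that parsing step, for intuition. This is illustrative only; real cURL supports far more flags than the three handled here, and this is not how the HTTP node is implemented:

```python
import shlex

def parse_curl(command: str) -> dict:
    """Extract method, URL, headers, and body from a simple cURL command.
    Handles only -X, -H, and -d/--data; a toy parser, not real cURL."""
    tokens = shlex.split(command)
    req = {"method": "GET", "url": None, "headers": {}, "body": None}
    i = 1  # skip the leading "curl"
    while i < len(tokens):
        tok = tokens[i]
        if tok == "-X":
            req["method"] = tokens[i + 1]; i += 2
        elif tok == "-H":
            name, _, value = tokens[i + 1].partition(": ")
            req["headers"][name] = value; i += 2
        elif tok in ("-d", "--data"):
            req["body"] = tokens[i + 1]; i += 2
        else:
            req["url"] = tok; i += 1
    return req

parse_curl("curl -X GET https://api.example.com/users/42")
# → {"method": "GET", "url": "https://api.example.com/users/42",
#    "headers": {}, "body": None}
```

Because the input format is plain cURL, anything your terminal accepts is a valid starting point for the node.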

Adding Dynamic Values with $Expressions

Where the HTTP node becomes powerful is $expression interpolation. Any part of the cURL command — the URL, a header value, a body field — can reference data from earlier nodes in the workflow.

Suppose your webhook trigger receives a userId in its body. You want to call an external API with that ID:

curl -X GET https://api.example.com/users/$triggerNode.body.userId

Heym evaluates $triggerNode.body.userId at runtime and substitutes the actual value before sending the request. You can use the same syntax inside the body:

curl -X POST https://api.example.com/reports \
  -H "Content-Type: application/json" \
  -d '{"userId": "$triggerNode.body.userId", "period": "2026-Q1"}'

Expressions also work inside header values, which is how authentication tokens are handled — covered in the next section.

Generic Webhook Body Mode

Before the HTTP node fires, you need data to come in. The webhook trigger has two body modes: legacy (the original format, which wraps the payload in a fixed envelope) and generic (which passes the raw JSON body through untouched).

For API-to-API workflows, always use generic mode. It makes the incoming payload accessible at $triggerNode.body exactly as the caller sent it, with no unwrapping needed. You set this in the workflow settings under the webhook trigger configuration panel.


Security Headers: Passing Auth Tokens Safely

Every real-world API requires some form of authentication. The most common patterns are Bearer tokens (OAuth 2.0), API key headers (X-Api-Key, x-api-key), and Basic Auth. All of these map directly to cURL header flags — -H "Authorization: Bearer token" and so on — but you should never hardcode a secret in a workflow definition.

Heym's Global Variables are the right place to store API keys. A Global Variable is a named value defined once at the workspace level, accessible from any workflow via the $globalVar.variableName expression. The value is stored encrypted and is never exported with the workflow definition itself.

Here is the pattern I use for every authenticated HTTP node. Postman's 2025 State of the API report identifies auth misconfiguration as the leading cause of API integration failures in production. Global Variables eliminate the most common mistake: hardcoded keys in shared workflow definitions.

curl -X POST https://api.stripe.com/v1/charges \
  -H "Authorization: Bearer $globalVar.stripeSecretKey" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "amount=2000&currency=usd&source=$triggerNode.body.stripeToken"

$globalVar.stripeSecretKey resolves to the actual key at execution time. If you need to rotate the key, you change the Global Variable once and every workflow that references it picks up the new value immediately — no workflow edits required.

Key Principle: Store API keys as Global Variables, not inline in cURL commands. One change to a Global Variable propagates instantly to every workflow that references it — no hunt-and-replace across dozens of nodes.
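The mechanics of why rotation works can be sketched in a few lines of Python. This is a toy model of the indirection, not Heym's internals; the key store and variable names are hypothetical:

```python
import re

# The workflow definition stores only a reference to the secret;
# the value lives in a separate store and is looked up at run time.
secrets = {"stripeSecretKey": "sk_live_old"}  # hypothetical key store

definition = (
    'curl -H "Authorization: Bearer $globalVar.stripeSecretKey" '
    "https://api.stripe.com/v1/charges"
)

def render(defn: str, store: dict) -> str:
    """Replace $globalVar.<name> references with values from the store."""
    return re.sub(r"\$globalVar\.(\w+)", lambda m: store[m.group(1)], defn)

assert "sk_live" not in definition          # exporting the definition leaks nothing
secrets["stripeSecretKey"] = "sk_live_new"  # rotate the key once...
rendered = render(definition, secrets)      # ...every reference picks it up
```

The definition is safe to export or share because it never contains the secret, only the name of the reference.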

Multiple Security Headers

Some APIs require both an API key and a signature header, or a tenant ID alongside a Bearer token. Stack them as additional -H flags:

curl -X GET https://api.example.com/data \
  -H "Authorization: Bearer $globalVar.accessToken" \
  -H "X-Tenant-ID: $triggerNode.body.tenantId" \
  -H "X-Request-ID: $globalVar.requestIdPrefix-$triggerNode.body.correlationId"

The HTTP node's Last Request panel (visible in the properties panel after a run) shows every header that was actually sent, which makes debugging auth failures straightforward: you can see exactly what the evaluated headers looked like.


Response Mapping: Extracting Data and Passing It Forward

After the HTTP node runs, its output is a structured object with four fields:

| Field | Type | Content |
| --- | --- | --- |
| $httpNode.status | number | HTTP status code (200, 404, 500, etc.) |
| $httpNode.body | object or string | Parsed JSON, or raw text if the response is not JSON |
| $httpNode.headers | object | Response headers as a key-value map |
| $httpNode.request | object | Echo of outgoing method, URL, and sent headers |

When you connect two APIs in sequence, response mapping is the step that passes data from the first API's output into the second API's input. It means referencing these fields in downstream nodes using $expressions. If the API returns:

{
  "user": {
    "id": "u_123",
    "email": "[email protected]",
    "plan": "pro"
  },
  "usage": {
    "calls_this_month": 847
  }
}

Then in the next node you can reference:

  • $httpNode.body.user.email → "[email protected]"
  • $httpNode.body.user.plan → "pro"
  • $httpNode.body.usage.calls_this_month → 847
  • $httpNode.status → 200

You can use these expressions in message templates for AI agent nodes, in the cURL body of a second HTTP node, or in condition branches. The expression evaluator handles nested access (a.b.c), array indexing (a[0].b), and arithmetic ($httpNode.body.usage.calls_this_month * 0.001).
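A toy version of the nested-field lookup, for intuition about what the evaluator does with a dotted path (illustrative only; bracket-style indexing and the full expression grammar are simplified away here):

```python
# The sample response body from above, as a Python dict.
response_body = {
    "user": {"id": "u_123", "email": "[email protected]", "plan": "pro"},
    "usage": {"calls_this_month": 847},
}

def get_path(obj, path: str):
    """Walk a dotted path like 'user.email' through nested dicts —
    a toy stand-in for the expression evaluator's field access."""
    for part in path.split("."):
        obj = obj[part]
    return obj

get_path(response_body, "user.email")                      # → "[email protected]"
get_path(response_body, "usage.calls_this_month") * 0.001  # ≈ 0.847
```

Each expression is just a path walk plus optional arithmetic, which is why no parsing code ever appears in the workflow itself.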

Handling Non-200 Responses

The HTTP node does not throw on non-2xx responses — it always returns the structured output, even for 400 or 500 status codes. This means you can add a Condition node after the HTTP node to branch on $httpNode.status:

  • If $httpNode.status == 200 → proceed to the next API call
  • If $httpNode.status >= 400 → route to an error handler or a notification node

This explicit error-path branching is one of the advantages of visual workflow tools over write-once scripts: the error path is a first-class branch in the flow, not an afterthought.
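The branching above amounts to a tiny routing function. In Heym this is a visual Condition node rather than code; the sketch below just mirrors the logic:

```python
def route(status: int) -> str:
    """Decide the next branch from an HTTP status code, mirroring the
    Condition-node logic described above. Illustrative only."""
    if status == 200:
        return "next_api_call"  # happy path: continue the pipeline
    if status >= 400:
        return "error_handler"  # client/server error: notify or retry
    return "needs_review"       # 1xx/3xx and other unusual codes
```

Making the error path a named branch forces you to decide up front what a 404 or 500 should do, instead of discovering it in production.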


Real-World Example: A Four-Node API Pipeline

Here is a complete workflow I built to illustrate every concept in this guide. The scenario: a webhook receives a customer ID, fetches the customer's subscription data from an external billing API, asks an AI agent to write a personalized renewal reminder, then posts that message to a Slack-like messaging API.

Node 1 — Webhook Trigger (Generic Mode)

Incoming payload:

{ "customerId": "cust_abc123", "channelId": "C01234567" }

The workflow URL is called by an external scheduler every morning. Body mode: generic.

Node 2 — HTTP Node: Fetch Customer Data

curl -X GET https://billing.example.com/v2/customers/$triggerNode.body.customerId \
  -H "Authorization: Bearer $globalVar.billingApiKey" \
  -H "Accept: application/json"

Response (accessible as $fetchCustomer.body):

{
  "name": "Alice Martin",
  "plan": "pro",
  "renewal_date": "2026-05-15",
  "usage_percent": 91
}

Node 3 — AI Agent: Write the Reminder

The agent node receives a message template:

Customer $fetchCustomer.body.name is on the $fetchCustomer.body.plan plan,
renewing on $fetchCustomer.body.renewal_date, and is at
$fetchCustomer.body.usage_percent% of their quota.

Write a friendly, concise renewal reminder (2 sentences max) that mentions
their high usage and upcoming renewal date.

The agent outputs a natural-language string, available as $agentNode.output.

Node 4 — HTTP Node: Post to Messaging API

curl -X POST https://messaging.example.com/v1/messages \
  -H "Authorization: Bearer $globalVar.messagingApiKey" \
  -H "Content-Type: application/json" \
  -d '{"channel": "$triggerNode.body.channelId", "text": "$agentNode.output"}'

Four nodes, no code, end-to-end API-to-API pipeline with an AI step in the middle.

Limitations to be aware of: The HTTP node executes a single request per run — it does not handle paginated APIs automatically (e.g., APIs that return next_page cursors) or retry transient 5xx failures. For paginated sources, pair the HTTP node with a loop construct in the workflow. For retry logic, add a Condition node that re-routes 5xx responses back to the HTTP node with a counter variable. For more patterns like this, see the AI agent use cases guide.
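The pagination workaround looks like this in pseudocode form. A toy Python sketch with a stubbed API (the next_page cursor shape and page contents are hypothetical):

```python
def fetch_page(cursor):
    """Stubbed paginated API: returns one page plus the cursor for the
    next page (None on the last). Stands in for one HTTP node run."""
    pages = {
        None: {"items": [1, 2], "next_page": "p2"},
        "p2": {"items": [3], "next_page": None},
    }
    return pages[cursor]

def fetch_all():
    """What a loop construct wrapped around the HTTP node effectively
    does: keep requesting until no next_page cursor is returned."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page["next_page"]
        if cursor is None:
            return items

fetch_all()  # → [1, 2, 3]
```

A retry loop for transient 5xx responses follows the same pattern, with a counter variable in place of the cursor as the loop's exit condition.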


Sharing Your API Workflow via MCP and Claude Connector

Once a workflow is built, you often want to reuse it across contexts — from other workflows, from Claude conversations, or shared with teammates. Heym has three mechanisms for this.

Workflow Templates

From the canvas, you can save any workflow as a template and publish it to the template library. Team members browse templates from a search dialog and use them directly, which populates their canvas with a fully configured starting point. Templates are the lowest-friction way to share API integration patterns internally.

MCP Workflow Exposure

The Model Context Protocol is the open standard that lets AI clients call external tools. In Heym, you can expose any workflow as an MCP tool in one step: open the MCP panel and toggle the workflow on.

When a workflow is MCP-enabled:

  • Its name becomes the tool name
  • Its webhook input becomes the tool input schema
  • Its final output becomes the tool return value
  • It is accessible at the SSE endpoint /api/mcp/sse using your workspace API key

Any MCP-compatible client — Claude Desktop, Cursor, or a custom agent — can discover and call your Heym workflows as native tools. The best MCP servers of 2026 post covers the broader ecosystem of tools you can pair this with.

Claude Connector (OAuth 2.1)

The Claude Connector tab in the MCP panel takes this one step further. It implements OAuth 2.1, so Claude.ai can authenticate to your Heym workspace without you sharing an API key manually. The setup is:

  1. Open MCP panel → Claude Connector tab in Heym
  2. Copy the MCP server URL shown there
  3. In Claude.ai settings, go to Connectors → Add MCP Server and paste the URL
  4. Leave OAuth Client ID and Secret blank — Claude registers automatically
  5. Authorize the connection

After authorization, every workflow you have enabled in the MCP panel appears as a callable tool in Claude conversations. You can say "Fetch the renewal reminder for customer cust_abc123" in a Claude chat and Claude will call the Heym workflow, pass the ID, and return the result — the entire four-node pipeline from the example above, triggered from a conversation.

This is what "API-to-API" looks like when AI is the client: the AI model orchestrates external APIs through your workflow layer, with proper auth, error handling, and observability built in. For a deeper look at building AI-first workflows, the multi-agent AI systems post is worth reading next.


API Integration Checklist

Use this checklist before going live with any API-to-API workflow in Heym:

  • Webhook trigger set to generic body mode (not legacy)
  • API keys stored as Global Variables, not hardcoded in cURL commands
  • $expression references tested in the expression evaluator before running
  • HTTP node output inspected via the Last Request panel after a test run
  • Condition node added to branch on $httpNode.status >= 400 (error path)
  • Second HTTP node references correct upstream fields ($firstHttpNode.body.fieldName)
  • Workflow tested end-to-end with a real payload before enabling production traffic
  • MCP toggle enabled (optional) if Claude.ai or other AI clients need to call this workflow

FAQ

Do I need to write code to connect two APIs in Heym?

No. Heym's HTTP node accepts a standard cURL command — the same syntax you'd paste into a terminal — and executes the request as a workflow step. You can inject dynamic values using $expression references, but no programming language is required. The full workflow canvas, including conditionals, loops, and AI nodes, is visual-only.

How do I pass an API key securely in the HTTP node?

Store your API key as a Global Variable in Heym, then reference it in the cURL command with a $expression: curl -H "Authorization: Bearer $globalVar.myApiKey" https://api.example.com. The key is never hardcoded in the workflow definition and is resolved at runtime. If you need to rotate the key, update the Global Variable once — all workflows that reference it pick up the new value automatically.

What does the HTTP node return after a request?

The HTTP node outputs a structured object with four fields: status (HTTP status code, e.g. 200), body (parsed JSON object or raw text if the response is not JSON), headers (a key-value map of response headers), and request (an echo of the outgoing method, URL, and sent headers for debugging). All four fields are accessible via $expressions in any downstream node.

Can I expose my Heym workflow as an API endpoint?

Yes. Every Heym workflow has a webhook trigger that generates a unique HTTPS URL. Set the body mode to generic to accept arbitrary JSON. You can also expose the workflow as an MCP tool from the MCP panel, letting Claude.ai and other MCP-compatible clients call it as a native tool with OAuth 2.1 authentication.

What is the Claude Connector in Heym?

The Claude Connector is a feature in Heym's MCP panel that uses OAuth 2.1 to let Claude.ai call your Heym workflows as tools in conversations. Once connected, any workflow you have enabled in the MCP panel appears as a callable tool inside Claude chats — no manual API key sharing or custom prompt engineering required. Claude can discover the tool's input schema and call it automatically when the conversation context calls for it.


Start Connecting APIs in Minutes

Connecting two APIs in an AI workflow is a four-step pattern: trigger → first HTTP call → optional AI step → second HTTP call. The HTTP node's cURL-based input, $expression interpolation, Global Variables for secrets, and structured response output make each of those steps straightforward to configure and easy to debug.

If you want to go from zero to a working API pipeline in under ten minutes, open Heym and browse the workflow templates — there are pre-built API integration patterns you can fork and adapt to your own endpoints without starting from scratch.



Ceren Kaya Akgün

Founding Engineer

Ceren is a founding engineer at Heym, working on AI workflow orchestration and the visual canvas editor. She writes about AI automation, multi-agent systems, and the practitioner experience of building production LLM pipelines.