Coding Agent with GitHub Integration
Receive a plain-English coding task, plan with Sequential Thinking, write and commit code via GitHub MCP, then post a Slack summary.
The full canvas, before you import it
Click any node to select it, just as in the Heym editor; the panel shows its settings.
4 nodes · Free & source-available
Coding Agent with GitHub Integration
Give the agent a plain-English task and it plans the implementation, writes or updates files in your GitHub repository, commits the result, and posts a formatted Slack notification with the file names and commit reference.
What this workflow does
- TaskInput receives a free-text coding task
- CodingAgent uses Sequential Thinking to plan before touching any file
- Agent creates or updates files via the GitHub MCP server
- Agent commits with a descriptive message and pushes to the repository
- Agent calls the SlackNotifier tool and posts a formatted mrkdwn summary
- FinalResult captures the agent's completion text for debugging
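The ordering above matters: the template relies on the agent planning with Sequential Thinking before it ever writes a file. A minimal sketch of that guard, with hypothetical tool names (Heym's real agent runtime is not shown here):

```python
# Hypothetical sketch of the "plan before any file write" rule the
# system prompt enforces. Tool names like "github_create_or_update_file"
# are illustrative, not Heym's actual identifiers.
def run_agent(tool_calls, max_tool_iterations=20):
    """Replay (tool_name, args) calls, rejecting any GitHub file
    operation that happens before a sequentialThinking call."""
    planned = False
    executed = []
    for i, (tool, _args) in enumerate(tool_calls):
        if i >= max_tool_iterations:
            break  # mirrors the maxToolIterations budget
        if tool == "sequentialThinking":
            planned = True  # planning step observed
        elif tool.startswith("github_") and not planned:
            raise RuntimeError("file operation before planning step")
        executed.append(tool)
    return executed
```

The same budget logic is why the tips below recommend a generous `maxToolIterations` value.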
Use cases
- Automated utility function generation from a ticket description
- Configuration file updates across repositories
- Test generation for existing functions without manual test writing
- Documentation updates triggered by an API change description
MCP and credential setup
GitHub MCP — Add your `GITHUB_PERSONAL_ACCESS_TOKEN` to the Agent node's MCP connection env field. The token needs the `repo` scope for private repositories or `public_repo` for public ones. The MCP server is installed automatically via `npx -y`.
Sequential Thinking MCP — No credentials needed. Add the connection as shown and reference `sequentialThinking` in the system prompt to force a planning step before any file write.
Slack — Add a Slack incoming webhook credential to the SlackNotifier node. The node is wired to the agent via the tool-input handle so the agent calls it as a tool and composes the message itself.
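As a rough sketch, the two MCP connections map onto the standard MCP client config shape below. Heym's own connection fields may be named differently, and the package names are the reference MCP servers, so verify both against your setup:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here" }
    },
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```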
System prompt tips
- Instruct the agent to call `sequentialThinking` before any file operation
- Include Slack mrkdwn formatting rules in the prompt: bold is `*text*`, links are `<URL|text>`, and double asterisks are not valid
- Set `maxToolIterations` to at least 20 so the agent has room for planning steps plus GitHub calls
- Set `temperature` to 0.1 for deterministic code generation
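To make the mrkdwn rules concrete, here is a small helper that composes the kind of summary the agent is prompted to post. The function name, repo URL, and message layout are illustrative, not part of Heym or the Slack API:

```python
# Hypothetical helper showing the Slack mrkdwn rules from the tips above:
# bold is *text*, links are <URL|text>, and ** is not valid mrkdwn.
def slack_mrkdwn_summary(files, commit_sha, repo_url):
    """Compose a commit summary in Slack mrkdwn (illustrative only)."""
    bullet_lines = "\n".join(f"• {name}" for name in files)
    # Link the short SHA to the commit page: <URL|text>, not [text](URL)
    commit_link = f"<{repo_url}/commit/{commit_sha}|{commit_sha[:7]}>"
    return f"*Coding task complete*\n{bullet_lines}\nCommit: {commit_link}"
```

Note the single asterisks for bold: a model trained on standard Markdown will often emit `**bold**`, which Slack renders literally, so stating the rule in the prompt is worth the tokens.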
How to import this template
1. Click Import → Copy JSON on this page.
2. Open your Heym and navigate to a workflow canvas.
3. Press Cmd+V / Ctrl+V; the nodes appear instantly.
4. Add your API keys in the node config panels and click Run.
Discover more automations
- AI · Batch LLM Status Tracker: Send an array through the OpenAI Batch API, branch on live status updates, and collect the final per-item results.
- AI · Gemini Image Creator: Generate images from a text prompt using Gemini's native image output.
- AI · PDF / DOCX Translation Agent: Translate the full text of any uploaded document using an AI agent.
- AI · Meeting Notes → JSON Tasks: Turn messy meeting notes into structured JSON tasks with the LLM node's JSON output mode — no image pipeline required.
- AI · Inbox TL;DR Summarizer: Paste a long email or thread — one LLM call returns a short TL;DR with next actions.
- AI · RAG Document Ingest: Chunk and embed a document into a Qdrant vector store so it can be retrieved later by the RAG Search node.
- AI · RAG Q&A Agent: Search your Qdrant vector store for relevant context, then answer with an LLM — grounded in your own documents.
- AI · Language Switch Router: Detect the language of incoming text with an LLM and route to the matching branch using the Switch node.