Lowering Agent Workflows: How xcaffold Compiles Procedures Across Providers
Multi-step procedures are powerful, but not all AI tools natively support them. Here is how xcaffold uses compiler lowering strategies to bridge the gap between high-level workflows and provider-specific rules and skills.
Some AI agents natively understand complex, multi-step procedures as a core primitive. But what happens when you need to run that same workflow in Claude Code, Cursor, Gemini CLI, or Codex—tools that don't have a native "workflow" engine?
If you hand-write it, you end up maintaining different structural hacks for each provider. You might build a master rule for Cursor, a complex prompt file for Copilot, and a web of interdependent skills for Claude. It is tedious, brittle, and a nightmare to keep synchronized.
This is a classic compiler problem. And xcaffold solves it using a classic compiler technique: Lowering.
The Problem: The Missing Workflow Primitive
In xcaffold, you can define a kind: workflow in your .xcaf manifests. This represents a named, multi-step procedure with discrete, ordered tasks. With the recent shift to a pure-YAML model, a workflow is cleanly separated from markdown bodies—its instructions live entirely in structural YAML fields.
Because of this architectural shift, xcaffold can infer exactly how a workflow should be executed based entirely on its structure, enabling three powerful patterns:
1. The Basic Workflow
For lightweight procedures, you just define steps with inline instructions. The compiler infers this is a Basic workflow and compiles it into a single, cohesive skill file with ## <step-name> sections.
kind: workflow
version: "1.0"
name: ship-feature
description: End-to-end workflow for shipping a feature.
steps:
  - name: implement
    instructions: Build the component from the given spec.
  - name: review
    instructions: Run lint and address any errors.
2. The Orchestrator Workflow (Skill Chain)
For complex, multi-agent operations, workflows can chain existing capabilities. If any step uses a skill: reference instead of (or alongside) instructions:, the compiler detects an Orchestrator workflow. It produces a master skill that dynamically invokes your other named skills (e.g., writing-plans or commit-changes).
kind: workflow
version: "1.0"
name: feature-lifecycle
description: Master orchestrator for feature development.
steps:
  - name: plan
    skill: writing-plans
  - name: execute
    skill: executing-plans
  - name: commit
    skill: commit-changes
3. Ambient & Triggered Workflows
Not all workflows are manually invoked. By adding the activation field, you can dictate exactly when the procedure is injected into the AI's context window:
kind: workflow
version: "1.0"
name: secure-code-audit
activation: ["*.go", "*.ts"]
steps:
  - name: scan
    instructions: Scan for panic() calls or unhandled promise rejections.
This is clean, readable "Harness-as-Code". But when xcaffold apply runs, it faces a reality check: how does it tell Claude Code to execute this?
The Default Strategy: rule-plus-skill
In compiler design, "lowering" is the process of translating high-level concepts into lower-level instructions that the target architecture understands.
For providers that lack a native workflow engine (Claude, Cursor, Gemini, Copilot), the xcaffold compiler defaults to the rule-plus-skill lowering strategy. It automatically shreds your clean .xcaf workflow into a web of provider-specific files:
- The Orchestrator Rule: a rule file (e.g., .claude/rules/ship-feature-workflow.md) containing an x-xcaffold provenance block and explicit instructions directing the LLM to run the steps in order.
- The Step Skills: an isolated skill file for every step defined in the workflow (e.g., .claude/skills/ship-feature-01-implement/SKILL.md).
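To make this concrete, here is what a generated orchestrator rule might look like. This is an illustrative sketch only: the exact file layout, frontmatter fields, and wording of the x-xcaffold provenance block are assumptions, not guaranteed compiler output.

```markdown
<!-- .claude/rules/ship-feature-workflow.md (illustrative sketch) -->
---
x-xcaffold:
  source: xcaf/workflows/ship-feature.xcaf
  strategy: rule-plus-skill
---
# Workflow: ship-feature

Execute the following steps strictly in order. Do not skip or reorder steps.

1. Invoke the `ship-feature-01-implement` skill.
2. Invoke the `ship-feature-02-review` skill.
```

Each numbered step points at one of the generated step skills, which is what makes the lowered output behave like a single procedure despite being split across files.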
You author the business logic once in xcaf/workflows/, and xcaffold apply automatically handles the provider-specific structural bridging.
Native Promotion: The Antigravity Contrast
Not all providers require this hack. Antigravity, for instance, has a first-class workflow primitive.
Because xcaffold is provider-aware, it doesn't blindly apply the rule-plus-skill strategy everywhere. When compiling the exact same ship-feature workflow for Antigravity, xcaffold automatically bypasses the lowering logic entirely, with no explicit overrides required.
It emits a single, clean .agents/workflows/ship-feature.md file containing all the steps concatenated under ## <step-name> markdown headers. No artificial skills, no orchestrating rules. Just native execution.
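For contrast, a plausible shape of that native artifact is shown below. The exact emitted layout is an assumption; what the source guarantees is a single file with the steps concatenated under ## <step-name> headers.

```markdown
<!-- .agents/workflows/ship-feature.md (illustrative sketch) -->
# ship-feature

End-to-end workflow for shipping a feature.

## implement

Build the component from the given spec.

## review

Run lint and address any errors.
```

One file, one invocation, no synthetic orchestration layer.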
The Context Pollution Trade-off
This architectural bridging is powerful, but it exposes a critical reality about how different providers manage context.
In Antigravity, workflows are invoked manually via slash commands (e.g., /workflow ship-feature). Because they are invoked on-demand, they do not consume tokens or pollute the context window during normal chat interactions.
With xcaffold's pure-YAML model, you have granular control over this trade-off for other providers using the activation field:
- Ambient (Always-On): Setting activation: "always" compiles an orchestrator rule that permanently sits in the LLM's context window. It ensures the workflow is always enforced, but perpetually consumes your token budget.
- Conditional (Path-Triggered): Setting activation: ["*.go", "*.ts"] scopes the orchestrator rule to specific file types. The workflow only enters the context window when relevant files are modified.
- On-Demand (Skill-Only): Omitting the activation field entirely generates no orchestrator rule. The workflow compiles purely into skills, keeping your context 100% clean. The AI simply invokes the skill when it (or the user) determines it's needed.
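Concretely, the three modes differ in a single manifest field. The snippet below sketches each variant; the field values for "always" and the glob list come from the article, while everything else in the manifest is assumed context.

```yaml
# Ambient: the orchestrator rule is always in context.
activation: "always"

# Conditional: the rule is injected only when matching files change.
activation: ["*.go", "*.ts"]

# On-demand: omit the activation field entirely;
# only skills are emitted and no orchestrator rule exists.
```

Because this is one declarative field rather than provider-specific file surgery, changing a workflow's activation strategy is a one-line diff in the .xcaf source.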
Harness-as-code makes this trade-off visible, allowing you to tune activation strategies to protect your context budget without sacrificing automation.
The Importer Caveat: Compilation vs. Decompilation
There is one final reality check regarding lowered workflows: reverse engineering.
If you point xcaffold import at an existing .claude/ directory filled with lowered rule-plus-skill pairs, it will naively import them as separate rule.xcaf and skill.xcaf resources.
The importer cannot reverse-engineer those pieces back into a unified workflow.xcaf manifest. Just as a decompiler loses source context, the high-level intent of the workflow was lost during the lowering phase. (Interestingly, it can import native workflows from an Antigravity directory just fine.)
This limitation highlights the core philosophy of xcaffold: your .xcaf files are the definitive single source of truth. The .claude/ or .cursor/ directories are just compiled build artifacts.
Write once, compile everywhere, and let the compiler worry about the lowering strategies.