Our browser agent is made up of two cooperating agents: the Builder Agent and the Maintenance Agent. Together, these agents enable a robust automation lifecycle, from rapid creation to continuous adaptation. By generating enough context at build time, our agents execute faster, cost less, and run more reliably than traditional browser-use frameworks that invoke an LLM at every step at runtime.
| Agent | Responsibility |
| --- | --- |
| Builder Agent | Creates the workflow via chat with a human or an AI. |
| Maintenance Agent | Monitors live runs and repairs the workflow whenever the target site changes. |

Builder Agent

Purpose

Turn a goal or standard operating procedure (SOP) into a browser agent that can be triggered over our Run API.
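As a minimal sketch of what triggering a saved workflow over the Run API could look like: the endpoint URL, payload shape, and identifiers below are all assumptions for illustration, not the documented API.

```python
import json
from urllib import request

# Hypothetical endpoint -- the real Run API URL and payload shape may differ.
RUN_URL = "https://api.cloudcruise.com/v1/workflows/{workflow_id}/run"

def build_run_request(workflow_id: str, api_key: str, inputs: dict) -> request.Request:
    """Assemble an HTTP request that triggers one run of a saved workflow."""
    body = json.dumps({"input": inputs}).encode()
    return request.Request(
        RUN_URL.format(workflow_id=workflow_id),
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: pass the workflow's input variables as JSON.
req = build_run_request("wf_123", "sk_live_example", {"tracking_id": "TRK-42"})
```

Sending the request (e.g., with `urllib.request.urlopen(req)`) would start a run; the input variables correspond to the schema discovered or provided at build time.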

How it works

  1. Kick‑off – Open a new workflow in the dashboard and describe the task. You can paste a detailed SOP or simply type a high‑level goal like “Log in and fill out this tracking form.”
  2. Schema discovery – Optionally provide JSON schemas for input variables and expected output. Skip this, and the agent infers them automatically.
  3. Interactive build – The agent launches a live browser preview, navigates the target site, and asks clarifying questions only when needed (e.g., login credentials, ambiguous clicks).
  4. Credential capture – Any secrets you share are stored in your vault and reused securely in subsequent runs.
  5. Graph generation – Each confirmed action converts into a node‑edge graph you can inspect, edit and version.
  6. Save & run – Once the agent has completed graph generation, hit Save to get a dedicated API endpoint for triggering this workflow.
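To make step 5 concrete, here is an illustrative sketch of a node-edge workflow graph and how a deterministic runner might walk it. The field names, action types, and `{{...}}` variable syntax are assumptions for illustration; the actual graph format is not specified here.

```python
# Hypothetical graph: nodes are confirmed browser actions, edges are transitions.
workflow_graph = {
    "nodes": [
        {"id": "n1", "action": "navigate", "url": "https://example.com/login"},
        {"id": "n2", "action": "fill", "selector": "#username", "value": "{{username}}"},
        {"id": "n3", "action": "fill", "selector": "#password", "value": "{{vault.password}}"},
        {"id": "n4", "action": "click", "selector": "button[type=submit]"},
    ],
    "edges": [
        {"from": "n1", "to": "n2"},
        {"from": "n2", "to": "n3"},
        {"from": "n3", "to": "n4"},
    ],
}

def next_nodes(graph: dict, node_id: str) -> list[str]:
    """Follow outgoing edges from a node -- each run traverses the same fixed graph."""
    return [edge["to"] for edge in graph["edges"] if edge["from"] == node_id]
```

Because the graph is explicit data rather than ad-hoc LLM decisions, it can be inspected, edited, and versioned, and every run follows the same path.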

Why we generate workflows instead of using AI for every step

Traditional “agentic” tools keep an LLM in the loop for every click, form fill, and page transition. CloudCruise compiles the logic up front and reuses it. The payoff shows up in three dimensions:
| Dimension | Runtime LLM at every step | CloudCruise compile-once approach |
| --- | --- | --- |
| Speed | Each action incurs network latency plus token inference time, leading to multi-minute runs. | Graph executes at native browser speed; only API calls hit the network. |
| Cost | Pay per token, per step; cost scales linearly with run length. | One-time build tokens are amortized over thousands of executions; marginal cost is near-zero. |
| Reliability | High variance: the LLM may hallucinate selectors or loop indefinitely; reproducibility is low. | Fully deterministic; every run follows the same graph. The Maintenance Agent handles drift proactively. |
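The cost row can be sketched with simple arithmetic. The token counts and price below are illustrative assumptions, not CloudCruise's actual figures; the point is the shape of the curves, not the exact numbers.

```python
# Illustrative numbers only -- token counts and per-1k-token price are assumptions.
def per_run_llm_cost(tokens_per_step: int, steps: int, price_per_1k: float) -> float:
    """Runtime-LLM approach: every step of every run pays for inference."""
    return tokens_per_step * steps * price_per_1k / 1000

def amortized_build_cost(build_tokens: int, price_per_1k: float, num_runs: int) -> float:
    """Compile-once approach: build tokens are spread across all runs."""
    return build_tokens * price_per_1k / 1000 / num_runs

runtime_cost = per_run_llm_cost(tokens_per_step=1500, steps=20, price_per_1k=0.01)
compiled_cost = amortized_build_cost(build_tokens=200_000, price_per_1k=0.01, num_runs=10_000)
```

With these assumed numbers, the per-step approach costs a fixed amount on every run, while the one-time build cost per run shrinks as the run count grows.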