surfs.dev
Platform for Browser Agents

Label, train, and deploy
AI browser agents

Training LLMs is solved. Training BROWSER AGENTS? That's the frontier. The only platform providing training data, validation, and deployment for browser, desktop, and GUI agents.

Watch Demo
800+ Developers · Building with Surfs
10,000+ Tests Run · Agent validations
debugg.ai · Zero-Config Testing · Powered by Surfs
The Problem

Browser agents fail. You have no idea why.

Building browser agents without visibility, training data, or benchmarks means shipping blind and improving by luck.

Failures are opaque

Are your browser agents getting stuck on CAPTCHAs or silently failing on page loads? Without action-level tracing across browser sessions, you're debugging blind, guessing at prompts, and hoping the next run works.

No training data

No way to capture what your agent actually did, label which actions were correct, or build datasets from real sessions. You're improving by instinct, not evidence.

Improvement is unmeasurable

You change the prompt. Is the agent better? Worse? There's no baseline. No version history. No way to A/B test GPT-4 against Claude on the same browser workflow. "V2 feels worse" is not a metric.

Built by the team behind Debugg.ai — 800+ users, 10,000+ agent tests/week

The Solution

The complete lifecycle for action-based agents

Label action sequences. Train on real interactions. Validate multi-step flows. Deploy with confidence.

Label

Record and annotate agent actions automatically. Capture every action in the flow. Label success and failure at the action level. Build datasets of action sequences.
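To make the labeling step concrete, here is a minimal sketch of what a labeled action record and a derived dataset entry might look like. The type names and `toDatasetEntry` helper are hypothetical illustrations, not the actual `@surfs/sdk` types.

```typescript
// Hypothetical shape of a labeled action record (illustrative, not @surfs/sdk types).
type LabeledAction = {
  step: number;
  action: string; // e.g. 'click("#search-button")'
  reasoning: string; // the LLM decision that produced the action
  label: "correct" | "incorrect";
};

type LabeledSession = {
  task: string;
  actions: LabeledAction[];
};

// Collapse a recorded session into a training-dataset entry:
// the task, the action sequence, and whether every step was labeled correct.
function toDatasetEntry(session: LabeledSession) {
  return {
    task: session.task,
    sequence: session.actions.map((a) => a.action),
    correct: session.actions.every((a) => a.label === "correct"),
  };
}

const session: LabeledSession = {
  task: "Add item to cart",
  actions: [
    { step: 1, action: 'goto("https://shop.example.com")', reasoning: "Navigate to shop", label: "correct" },
    { step: 2, action: 'click("#search-button")', reasoning: "Open search", label: "correct" },
  ],
};

console.log(toDatasetEntry(session));
```

The key design choice is labeling per action rather than per session: a session-level pass/fail hides which step went wrong, while per-action labels roll up into both a sequence dataset and a fine-grained error signal.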

Train

Training data for action sequences (not just text). Clean, labeled datasets ready for fine-tuning. RLHF workflows for agent behavior. We're building the CommonCrawl of actions.

Deploy

Sandbox → Staging → Production with safety. Test in isolated environments. CI/CD for agent updates. Canary rollouts for behavior changes.
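One common way to implement a canary rollout like the `canary: 0.1` setting above is a deterministic hash split: a stable hash of the session ID routes a fixed fraction of traffic to the new agent version, so the same session always hits the same build. This is a generic sketch of that technique, not the `@surfs/sdk` implementation.

```typescript
// Deterministic canary split (generic sketch, not the @surfs/sdk implementation).
// A stable hash of the session ID decides which agent version handles it.
function hashSessionId(id: string): number {
  let h = 0;
  for (const ch of id) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

function inCanary(sessionId: string, fraction: number): boolean {
  // Buckets 0..999; sessions whose bucket falls below fraction * 1000
  // are routed to the canary build.
  return hashSessionId(sessionId) % 1000 < fraction * 1000;
}

// With a roughly uniform hash, about `fraction` of sessions land in the canary.
const ids = Array.from({ length: 10_000 }, (_, i) => `run_${i}`);
const share = ids.filter((id) => inCanary(id, 0.1)).length / ids.length;
console.log(share);
```

Because the split is keyed on the session ID rather than a random draw, a misbehaving canary can be rolled back and the exact affected sessions identified from logs.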

Eval

Catch the difference between 49/50 and 50/50 correct actions. Action-level validation (not just outcomes). Behavioral testing (did it take the right path?). Version control for agent behavior.
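The distinction between outcome checks and action-level validation can be sketched in a few lines. These types and helpers are illustrative only (not `@surfs/sdk` APIs): an outcome check passes whenever the goal state is reached, while action-level validation also requires every intermediate step to be correct.

```typescript
// Illustrative sketch: outcome-only vs. action-level validation (not @surfs/sdk APIs).
type StepResult = { action: string; correct: boolean };

// Outcome check: passes if the goal was reached, ignoring how the agent got there.
function outcomePassed(_steps: StepResult[], goalReached: boolean): boolean {
  return goalReached;
}

// Action-level validation: every step must be correct. 49/50 is still a failure.
function actionsValid(steps: StepResult[]): boolean {
  return steps.every((s) => s.correct);
}

const run: StepResult[] = [
  { action: 'goto("https://shop.example.com")', correct: true },
  { action: 'click(".add-to-cart-btn")', correct: true },
  { action: 'click(".checkout")', correct: false }, // wrong path, right outcome
];

console.log(outcomePassed(run, true)); // true
console.log(actionsValid(run)); // false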

The complete lifecycle in code

Label, train, deploy, and eval—all integrated. From action recording to production deployment in one platform.

agent.ts
import { Surfer, Dataset, Eval } from '@surfs/sdk'

// 1. Label: Record and annotate actions
const session = await Surfer.record({
  task: "Add item to cart",
  captureActions: true,
  labelSuccess: true
})

// 2. Train: Build dataset from labeled actions
const dataset = await Dataset.create({
  sessions: [session],
  format: "action-sequences"
})

// 3. Deploy: Test in sandbox before production
await Surfer.deploy({
  environment: "sandbox",
  canary: 0.1 // 10% rollout
})

// 4. Eval: Validate 50/50 actions, not 49/50
const results = await Eval.run({
  validateEachAction: true,
  compareToBaseline: true
})

The complete agent lifecycle: Label → Train → Deploy → Eval, all in one SDK.

See what your agent is thinking

Every action starts with an LLM decision. Track reasoning → execution, not just raw actions. Debug failures at the decision level.
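Pairing reasoning with execution amounts to storing the LLM's stated decision next to the action that actually ran, so a failure can be attributed to a bad decision rather than replayed as raw clicks. A minimal sketch of such a trace entry, using a hypothetical shape rather than the dashboard's actual schema:

```typescript
// Hypothetical trace-entry shape (illustrative, not the dashboard's schema).
// Each step keeps the LLM decision alongside the browser action it produced.
type TraceStep = {
  step: number;
  reasoning: string; // the 💭 decision
  action: string; // what actually ran in the browser
};

// Render a trace as "step. 💭 reasoning → action" lines for review.
function formatTrace(steps: TraceStep[]): string[] {
  return steps.map((s) => `${s.step}. 💭 ${s.reasoning} → ${s.action}`);
}

const trace: TraceStep[] = [
  { step: 1, reasoning: "Navigate to shop to browse products", action: 'goto("https://shop.example.com")' },
  { step: 2, reasoning: "Need to search for red t-shirt", action: 'click("#search-button")' },
];

console.log(formatTrace(trace).join("\n"));
```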

https://dashboard.surfs.dev/agents/run-a3f9e2b8
Task Completed Successfully

run_a3f9e2b8 · 47.3s · 12 steps · Success

Tokens: 8,734 ($0.043) · Latency: 412ms avg per step · Success rate: 100% (this run)
1. 💭 Navigate to shop to browse products → goto("https://shop.example.com")
2. 💭 Need to search for red t-shirt → click("#search-button")
3. 💭 Enter search query for product → fill("input[name=q]", "red t-shirt")
12. 💭 Found cheapest option, add to cart → click(".add-to-cart-btn")

Latest Resources

Stay updated with the latest insights, guides, and best practices for AI-powered development and testing.

Speculative Execution for Browser Agents: Branch-and-Rollback Plans, Predictive Prefetch, and Tab-Level Isolation to Slash Latency

A practical, opinionated blueprint for speculative execution in the browser: branch-and-rollback plans, predictive prefetch and prerender, service worker cache warming, and tab-level isolation that deliver near-instant commits without side effects or anti-bot headaches.

4/3/2026
20 min read
Read more
Synthetic Web Factory for Browser Agent Training: Procedural Websites, Task DSL, and Reward Simulators

A practical blueprint for a synthetic web factory to train and evaluate browser agents: procedurally generate diverse, deterministic sites; compile a task DSL to ground-truth affordances; simulate dense rewards; auto-label traces; and export deterministic replays for RL and CI.

4/2/2026
20 min read
Read more
Training LLMs for AI Browser Agents: Toolformer‑Style CDP Supervision, Hindsight Rollouts, and Counterfactual Replays

A practical, opinionated guide to fine-tuning LLMs that operate Chrome: mine CDP tool traces from deterministic replays, synthesize and verify Toolformer-style calls, harvest hindsight rollouts, train with counterfactual DPO, enforce selector grammars, and evaluate against live-site drift.

3/30/2026
19 min read
Read more
View All Resources

Looking for PR-focused browser testing?

Try DebuggAI PR Copilot
surfs.dev

The easiest way to build reliable AI agents that actually understand the web

Resources

  • Blog & Resources
  • Agentic Browser News
  • Documentation

Company

  • Privacy Policy
  • Terms of Service

© 2026 surfs.dev. All rights reserved.

Cookie Policy