OS‑Level Sandboxing for Agentic Browsers: Rootless Chrome, seccomp/AppArmor, and eBPF Egress Controls to Reduce Browser Agent Security Risk
Agentic browsers—autonomous or semi-autonomous systems that control a browser via automation protocols—are powerful. They browse, click, fill forms, and download. They also expand the blast radius when things go wrong. A compromised page, poisoned dependency, or exposed DevTools endpoint can flip an “agent” into an attacker with your network’s keys.
This piece lays out a defense-in-depth blueprint for shipping safer agentic AI browsers on Linux:
- Isolate each browsing session in a rootless container with user namespaces.
- Lock down Chrome DevTools Protocol (CDP) via pipes/UDS and a strict broker.
- Apply seccomp/AppArmor for syscall and filesystem controls.
- Enforce egress with eBPF cgroup programs and/or an mTLS egress proxy.
- Manage secrets via a vault; avoid ambient environment variables.
- Provide a safe “what is my browser agent” diagnostic path.
- Support cross-engine fleets (Chromium, Firefox, WebKit) consistently.
The audience here is technical; you’ll find concrete configurations, tradeoffs, and references for further reading.
Executive summary
- Put every agent-driven browser instance in an ephemeral, rootless container with minimal privileges.
- Never expose CDP on a TCP port without strong auth; prefer pipes/UDS, short-lived credentials, and brokering.
- Combine seccomp (deny risky syscalls) with AppArmor (deny filesystem/IPC) to contain the process tree.
- Gate network egress with eBPF cgroup programs; keep the allowlist tiny. When domain control is needed, force all traffic through a local mTLS egress proxy and only permit the proxy’s IP.
- Treat secrets as on-demand, short-lived tokens retrieved via UDS from a local agent bound to a vault; avoid env vars and world-readable files.
- Offer a minimal, local-only diagnostic endpoint so operators can verify which agent/session/engine is running—without leaking.
Threat model for agentic browsers
Agentic browsers increase risk in several dimensions:
- CDP exposure: CDP grants full control over tabs, network interception, file downloads, and process introspection. Exposed TCP ports are routinely exploited when reachable.
- Web stack vulnerabilities: Newly disclosed RCEs in widely used codecs and image parsers (e.g., the 2023 libwebp 0-day, CVE-2023-4863) mean that merely visiting a page can lead to arbitrary code execution inside the browser context.
- Supply-chain and plugin risk: Extensions or helper binaries pulled at runtime can be hijacked.
- Data exfiltration: Screenshots, PDF saves, downloads, and POSTs from the browser can leak data; helper processes spawned by headless browsers broaden the exfiltration paths.
- Lateral movement: Gained footholds pivot to local network targets or sensitive cloud metadata endpoints.
Assume break-in is possible. Engineer for containment and fast recovery.
Design goals and principles
- Unprivileged by default: No root in the container, no ambient capabilities, no host mounts beyond read-only runtime.
- Per-session isolation: Ephemeral user-data-dir, tmpfs for writes, and unique UID namespace per run.
- Explicit egress: Deny-by-default, small allowlists, and clear audit logs for attempted violations.
- Non-bypassable controls: eBPF attached to cgroups; AppArmor/LSM enforced by the kernel; no reliance on in-process policies.
- Red team friendly: Make failure modes observable via logs (seccomp/AppArmor denials), and instrumented eBPF counters.
Architecture overview
A minimal production setup per agent session might look like this:
- An orchestrator forks a rootless container named “browser-<sessionId>”.
- Inside, a minimal OS image and the browser binary are mounted read-only; a small tmpfs holds runtime state.
- The browser is launched with headless flags, controlled via CDP through a private broker using a Unix domain socket (UDS) or stdio pipes.
- eBPF cgroup connect hooks are attached to the container’s cgroup to enforce an IP:port allowlist (usually only a local egress proxy).
- An Envoy or similar egress proxy mTLS-authenticates to upstreams and implements domain/SNI allowlists.
- A sidecar agent fetches short-lived secrets from a vault via mutual TLS and injects them into the browser only as needed (e.g., via UDS or in-memory credential store exposed over a private loopback in the container netns).
- A local-only diagnostic path exposes basic metadata for observability without leaking secrets.
Rootless containers for browser sessions
Rootless is mandatory for agentic browsers. Even if the browser is compromised, lack of root and lack of ambient capabilities dramatically limit host impact. Choose one of:
- Rootless Podman (recommended): Uses user namespaces and integrates well with cgroup v2 and systemd.
- Docker in rootless mode: Supported and workable; requires configuration of subuid/subgid.
- systemd-nspawn or bubblewrap (bwrap): Serve the same isolation primitives for custom setups.
Example: Rootless Podman per-session container with tmpfs and read-only mounts
# Ensure subuid/subgid are configured for your user, e.g., /etc/subuid:
#   alice:100000:65536

# Build a minimal image with Chromium and fonts
podman build -t agent-chrome:stable -f Containerfile

# Run a new session container
SESSION_ID=$(uuidgen)
RUNDIR=$(mktemp -d)
podman run --rm \
  --userns=keep-id \
  --name browser-$SESSION_ID \
  --hostname browser-$SESSION_ID \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --security-opt label=disable \
  --read-only \
  --tmpfs /tmp:rw,nosuid,nodev,noexec,size=512M \
  --tmpfs /home/chrome:rw,nosuid,nodev,noexec,size=512M \
  --volume "$RUNDIR/cdp.sock:/run/cdp.sock:rw" \
  --network slirp4netns:allow_host_loopback=true \
  agent-chrome:stable \
  chromium \
    --headless=new \
    --no-first-run --no-default-browser-check \
    --disable-gpu --disable-accelerated-2d-canvas \
    --disable-dev-shm-usage \
    --user-data-dir=/home/chrome/ud \
    --remote-debugging-pipe
Notes:
- --read-only plus tmpfs mounts ensures writes go to RAM. No persistent state unless explicitly extracted.
- --security-opt no-new-privileges blocks setuid escalation inside the container.
- Using --remote-debugging-pipe avoids TCP sockets for CDP. You will broker the pipe via a supervised process.
- --security-opt label=disable is shown to sidestep SELinux config complexity for brevity. In production, prefer enabling SELinux in enforcing mode and writing a proper policy. If AppArmor is your LSM, see the profile below.
Chrome itself includes a multi-layer sandbox using user and PID namespaces, chroot, and seccomp-bpf. Rootless containers add another outer shell. Avoid disabling Chrome’s own sandbox.
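The podman build step above references a Containerfile without showing it. A minimal sketch, assuming a Debian base and the stock chromium and Noto font packages (adjust names and pinning to your environment):

# Hypothetical Containerfile for the agent-chrome:stable image used above
cat > Containerfile <<'EOF'
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends chromium fonts-noto-core ca-certificates \
 && rm -rf /var/lib/apt/lists/* \
 && useradd --create-home --uid 1000 chrome
USER chrome
WORKDIR /home/chrome
EOF
podman build -t agent-chrome:stable -f Containerfile .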
Lock down CDP: pipes/UDS, brokering, and authentication
The DevTools endpoint is the agent’s crown jewels. Exposed, it’s often game over.
Best practices:
- Prefer --remote-debugging-pipe. This keeps CDP on stdio, not a TCP port.
- If your agent must be a separate process, run a tiny broker that:
- Owns the Chromium process, reads/writes CDP on its stdio pipes.
- Exposes a Unix domain socket at /run/cdp.sock.
- Enforces access controls: file permissions (0700), SO_PEERCRED-based UID/GID checks, a short-lived capability token, and rate limiting.
- Terminates on idle.
- Do not bind CDP to 0.0.0.0 or container-exposed ports. If you absolutely must run CDP over TCP, bind strictly to 127.0.0.1, inject a random one-time token, and place it behind mTLS.
Example: minimal Python CDP broker using UDS with SO_PEERCRED
#!/usr/bin/env python3
import os, socket, struct, selectors, subprocess

CDP_SOCK = "/run/cdp.sock"
TOKEN = os.environ.get("CDP_TOKEN")  # short-lived, injected by orchestrator

try:
    os.unlink(CDP_SOCK)
except FileNotFoundError:
    pass

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(CDP_SOCK)
os.chmod(CDP_SOCK, 0o700)
srv.listen(4)

# Chromium's --remote-debugging-pipe speaks CDP on fd 3 (browser reads) and
# fd 4 (browser writes), not stdin/stdout, so hand it dedicated pipes there.
to_chrome_r, to_chrome_w = os.pipe()      # broker writes -> Chromium fd 3
from_chrome_r, from_chrome_w = os.pipe()  # Chromium fd 4 -> broker reads

def place_cdp_fds():
    # Runs in the child before exec: move the pipe ends onto fds 3 and 4.
    os.dup2(to_chrome_r, 3)
    os.dup2(from_chrome_w, 4)

chrome = subprocess.Popen(
    ["chromium", "--headless=new", "--remote-debugging-pipe",
     "--user-data-dir=/home/chrome/ud"],
    pass_fds=(3, 4),            # keep fds 3 and 4 open across exec
    preexec_fn=place_cdp_fds)
os.close(to_chrome_r)
os.close(from_chrome_w)

def authorized(client):
    # SO_PEERCRED yields (pid, uid, gid) of the connecting process.
    pid, uid, gid = struct.unpack("3i", client.getsockopt(
        socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i")))
    if uid != os.getuid():
        return False
    # The client's first line is the session token.
    if TOKEN and client.recv(128).decode().strip() != TOKEN:
        return False
    return True

def bridge(client):
    # Shuttle raw bytes both ways; clients speak CDP's pipe framing
    # (NUL-separated JSON messages). One client at a time.
    sel = selectors.DefaultSelector()
    sel.register(client, selectors.EVENT_READ)
    sel.register(from_chrome_r, selectors.EVENT_READ)
    try:
        while True:
            for key, _ in sel.select():
                if key.fileobj is client:
                    data = client.recv(65536)
                    if not data:
                        return
                    os.write(to_chrome_w, data)
                else:
                    data = os.read(from_chrome_r, 65536)
                    if not data:
                        return
                    client.sendall(data)
    finally:
        sel.close()

while True:
    cli, _ = srv.accept()
    if not authorized(cli):
        cli.close()
        continue
    try:
        bridge(cli)
    finally:
        cli.close()
This broker enforces:
- No TCP exposure.
- File permission gating on the socket.
- Optional token and UID checks.
- A single chokepoint for logging and throttling.
Harden further by:
- Chrooting the broker.
- Applying AppArmor to the broker with an even stricter profile.
- Rotating the CDP token per session; avoid long-lived bearer secrets.
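For example, the orchestrator can mint a random one-time token per session and hand it to the broker at startup; a minimal sketch, assuming the broker script is baked into the image at a path like /usr/local/bin/cdp-broker.py (hypothetical):

# Mint a one-time CDP token and start the broker as the container entrypoint
CDP_TOKEN=$(head -c 32 /dev/urandom | base64 | tr -dc 'A-Za-z0-9')
podman run --rm --env CDP_TOKEN="$CDP_TOKEN" \
  --volume "$RUNDIR/cdp.sock:/run/cdp.sock:rw" \
  agent-chrome:stable /usr/local/bin/cdp-broker.py
# Clients must present $CDP_TOKEN as their first line on /run/cdp.sock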
seccomp: deny-by-default for risky syscalls
Chrome uses a lot of syscalls—video, sandboxing, threading—so an ultra-minimal allowlist breaks it. Rather than write from scratch, start from Docker’s default seccomp profile and tighten it.
Key restrictions to keep:
- Deny kernel attack surface: bpf, keyctl, add_key, request_key, kexec_load, perf_event_open.
- Deny privilege shifts: ptrace, setns, unshare, mount, umount2.
- Deny introspection: kcmp.
- Audit violations.
Example: seccomp profile snippet (JSON), trimmed down from the Docker default
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "archMap": [
    { "architecture": "SCMP_ARCH_X86_64",
      "subArchitectures": ["SCMP_ARCH_X86", "SCMP_ARCH_X32"] }
  ],
  "syscalls": [
    {
      "names": [
        "read", "write", "openat", "close", "fstat", "newfstatat", "lseek",
        "mmap", "mprotect", "munmap", "brk", "rt_sigaction", "rt_sigprocmask",
        "rt_sigreturn", "clone", "clone3", "set_robust_list", "prlimit64",
        "getrandom", "getpid", "gettid", "futex", "sched_yield", "nanosleep",
        "clock_nanosleep", "clock_gettime", "clock_getres", "epoll_create1",
        "epoll_ctl", "epoll_pwait", "eventfd2", "pipe2", "timerfd_create",
        "timerfd_settime", "signalfd4", "socket", "connect", "accept4", "bind",
        "listen", "getsockopt", "setsockopt", "recvfrom", "sendto", "recvmsg",
        "sendmsg", "shutdown", "statx", "uname", "getcwd", "getdents64",
        "ioctl", "pread64", "pwrite64", "access", "dup", "dup3", "splice",
        "tee", "memfd_create", "ftruncate", "mkdirat", "unlinkat", "renameat2",
        "linkat", "symlinkat", "chown", "fchmod", "fchmodat", "fchown",
        "fchownat", "poll", "ppoll", "select", "pselect6", "arch_prctl"
      ],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": [
        "ptrace", "keyctl", "add_key", "request_key", "kexec_load",
        "open_by_handle_at", "bpf", "perf_event_open", "name_to_handle_at",
        "kcmp", "setns", "unshare", "mount", "umount2"
      ],
      "action": "SCMP_ACT_KILL"
    }
  ]
}
Attach via Podman/Docker with --security-opt seccomp=/path/to/profile.json. Test with realistic workloads; broaden only when necessary. If you need dynamic brokering for specific syscalls (e.g., allow openat only for certain paths), consider seccomp user notifications with a privileged broker process—this is advanced but powerful.
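A quick way to validate a profile is to attach it to a representative session and watch the kernel log for seccomp kills; a sketch, assuming the profile lives at /etc/agent/seccomp-chrome.json (hypothetical path):

# Run a session with the custom profile attached
# (plus the hardening flags from the earlier podman run)
podman run --rm --security-opt seccomp=/etc/agent/seccomp-chrome.json \
  agent-chrome:stable chromium --headless=new --remote-debugging-pipe about:blank
# Killed syscalls show up as audit/seccomp lines in the kernel log
journalctl -k --since "10 minutes ago" | grep -i seccomp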
AppArmor: filesystem and IPC boundaries
AppArmor complements seccomp by constraining filesystem and IPC. A tight profile for the browser process:
- Read-only on /usr, /lib, /bin.
- Writable only to tmpfs paths like /tmp and /home/chrome.
- Deny ptrace and sysfs/procfs reads except essentials.
Example: AppArmor profile for Chromium in a container
# /etc/apparmor.d/usr.bin.chromium-agent
profile chromium-agent flags=(attach_disconnected, mediate_deleted) {
# Include base abstractions
#include <abstractions/base>
#include <abstractions/fonts>
#include <abstractions/nameservice>
# Binary
/usr/bin/chromium ixr,
/usr/lib/** r,
/usr/share/** r,
# Read-only root
/ r,
/** r,
# Writable tmpfs areas only
deny /** wklx,
owner /home/chrome/** rwk,
/tmp/** rwk,
# Dev and proc
deny /dev/mem r,
deny /dev/kmem r,
/dev/shm/** rwk,
/proc/cpuinfo r,
/proc/meminfo r,
/proc/sys/** r,
deny /proc/*/fd/** rw,
deny /proc/*/maps r,
# Network allowed; pair with eBPF for egress controls
network inet stream,
network inet6 stream,
# No ptrace
deny ptrace,
# Lock down capabilities
capability chown,
deny capability sys_admin,
deny capability sys_ptrace,
deny capability setuid,
deny capability setgid,
}
Apply with --security-opt apparmor=chromium-agent. Expect to refine for fonts, sandbox helper binaries, and codecs present in your image. When using SELinux instead of AppArmor, craft an equivalent policy module and run in enforcing mode.
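A sketch of loading and exercising the profile on an AppArmor host (tool availability and log formats vary by distribution):

# Load (or reload) the profile, run a session under it, and watch for denials
sudo apparmor_parser -r /etc/apparmor.d/usr.bin.chromium-agent
sudo aa-status | grep chromium-agent
# (plus the hardening flags from the earlier podman run)
podman run --rm --security-opt apparmor=chromium-agent \
  agent-chrome:stable chromium --headless=new --remote-debugging-pipe about:blank
journalctl -k | grep 'apparmor="DENIED"'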
Egress control with eBPF cgroups
Network egress is where data leaves. Block it or shape it at the kernel boundary.
Two pragmatic patterns:
- Strict IP allowlist with cgroup/inet{4,6}_connect hooks. Keep the allowlist tiny—ideally just a local proxy.
- Force ALL egress to a local mTLS proxy; your eBPF program allows only connections to 127.0.0.1:<port> (or a veth IP). The proxy then enforces domain/SNI policies.
We’ll show both.
Pattern A: Allowlist only selected IP:port
eBPF program (C) for cgroup/connect4 and connect6. It consults two hash maps, allowed_ipv4 and allowed_ipv6, keyed by (destination IP, destination port, protocol). You can reduce the key to ip+port.
// ebpf_connect_allow.c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include <linux/in.h>
#include <linux/in6.h>

struct key4 { __u32 ip; __u16 dport; __u8 proto; };
struct key6 { struct in6_addr ip6; __u16 dport; __u8 proto; };

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, struct key4);
    __type(value, __u8);
} allowed_ipv4 SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, struct key6);
    __type(value, __u8);
} allowed_ipv6 SEC(".maps");

SEC("cgroup/connect4")
int cg_connect4(struct bpf_sock_addr *ctx)
{
    struct key4 k;

    // Zero the whole struct so padding bytes match keys inserted from userspace.
    __builtin_memset(&k, 0, sizeof(k));
    k.ip = ctx->user_ip4;
    k.dport = bpf_ntohs(ctx->user_port);
    k.proto = ctx->protocol;

    __u8 *ok = bpf_map_lookup_elem(&allowed_ipv4, &k);
    return ok ? 1 : 0; // 1 = allow, 0 = reject
}

SEC("cgroup/connect6")
int cg_connect6(struct bpf_sock_addr *ctx)
{
    struct key6 k;

    __builtin_memset(&k, 0, sizeof(k));
    k.dport = bpf_ntohs(ctx->user_port);
    k.proto = ctx->protocol;
    __builtin_memcpy(&k.ip6, &ctx->user_ip6, sizeof(k.ip6));

    __u8 *ok = bpf_map_lookup_elem(&allowed_ipv6, &k);
    return ok ? 1 : 0;
}

char _license[] SEC("license") = "GPL";
Compile and attach:
clang -O2 -g -target bpf -c ebpf_connect_allow.c -o ebpf_connect_allow.o

# Load both programs; pin programs and maps under bpffs
mkdir -p /sys/fs/bpf/allow_connect /sys/fs/bpf/allow_connect_maps
bpftool prog loadall ebpf_connect_allow.o /sys/fs/bpf/allow_connect \
  pinmaps /sys/fs/bpf/allow_connect_maps

# Attach to the container’s cgroup
CG=/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/app.slice/podman-<ctr>.scope
bpftool cgroup attach "$CG" connect4 pinned /sys/fs/bpf/allow_connect/cg_connect4
bpftool cgroup attach "$CG" connect6 pinned /sys/fs/bpf/allow_connect/cg_connect6

# Insert allowed destinations (example: 10.0.0.5:443 and, only if genuinely required,
# the cloud metadata endpoint 169.254.169.254:80). Key bytes mirror struct key4 on x86-64:
# 4-byte IP in network order, 2-byte port in host order (little-endian), proto, 1 pad byte.
bpftool map update pinned /sys/fs/bpf/allow_connect_maps/allowed_ipv4 \
  key hex 0a 00 00 05 bb 01 06 00 value hex 01
bpftool map update pinned /sys/fs/bpf/allow_connect_maps/allowed_ipv4 \
  key hex a9 fe a9 fe 50 00 06 00 value hex 01
Pros: kernel-enforced, low overhead. Cons: IP-based allowlists don’t capture domain policies; IPs change.
Pattern B: Force egress through a local mTLS proxy
- eBPF allows only 127.0.0.1:15001 (proxy) or the proxy’s veth IP.
- The proxy performs domain/SNI allowlisting, TLS origination, mTLS to upstreams, and logs.
Chrome keeps using standard URLs; its traffic is steered to the proxy (via proxy settings or transparent redirection), and the proxy handles upstream routing.
Example Envoy egress proxy (static) allowing only specific domains via SNI
static_resources:
  listeners:
  - name: egress
    address:
      socket_address: { address: 127.0.0.1, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: egress_tcp
          cluster: egress_cluster
  clusters:
  - name: egress_cluster
    connect_timeout: 2s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: egress_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: example.com, port_value: 443 }
        - endpoint:
            address:
              socket_address: { address: api.example.org, port_value: 443 }
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: example.com   # static SNI; use an SNI-aware dynamic setup for multiple upstream domains
        common_tls_context:
          tls_params: { tls_minimum_protocol_version: TLSv1_2 }
You can replace the static cluster with a dynamic filter that rejects non-allowlisted SNI, and configure mTLS (client certs) via SDS from a vault. Attach eBPF allow rules such that only 127.0.0.1:15001 is reachable from the browser process; everything else is denied.
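In that setup the allowlist map holds a single IPv4 entry for the proxy listener. A sketch, assuming the map pin path from the Pattern A example and an x86-64 (little-endian) host; the raw key bytes mirror struct key4 (IP in network order, port in host order, proto 6 for TCP, one padding byte):

# Allow only the local Envoy listener at 127.0.0.1:15001 (TCP); everything else is rejected
bpftool map update pinned /sys/fs/bpf/allow_connect_maps/allowed_ipv4 \
  key hex 7f 00 00 01 99 3a 06 00 value hex 01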
Secrets: ephemeral, out-of-band, least privilege
Avoid long-lived browser secrets. Recommended:
- Use a local secret broker (sidecar) that:
- Authenticates to a vault (e.g., HashiCorp Vault) via mTLS with a workload identity.
- Fetches short-lived credentials on-demand and returns them to the agent via UDS.
- Never writes secrets to disk.
- Exposes a minimal API: fetch-cookie, fetch-api-token.
- Pass secrets to the browser via CDP commands that add cookies just-in-time; avoid populating huge cookie jars or localStorage ahead of time.
- Keep secrets out of environment variables. Environment variables are visible to the process itself via /proc/self/environ and to any same-UID process via /proc/<pid>/environ; a compromised renderer that gains native code execution could read them unless that path is otherwise blocked.
Example: Quick and safe cookie injection via CDP
{
  "id": 1,
  "method": "Network.setCookie",
  "params": {
    "name": "session",
    "value": "<short-lived-token>",
    "domain": "example.com",
    "secure": true,
    "httpOnly": true,
    "sameSite": "Strict"
  }
}
Orchestrate token fetch right before navigation; rotate on failure.
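A rough sketch of that flow, assuming the sidecar serves its fetch-cookie API over a local UDS (the socket path and response shape are assumptions):

# Mint a short-lived cookie value right before navigation
COOKIE_VALUE=$(curl -s --unix-socket /run/agent.d/secrets.sock \
  'http://localhost/fetch-cookie?domain=example.com' | jq -r '.value')
# Feed $COOKIE_VALUE into the Network.setCookie message above via your CDP client, then navigate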
A safe “what is my browser agent” diagnostic path
Operators need to know which agent is running, which engine version, and basic state—without leaking secrets or increasing the attack surface.
Implement a local-only status endpoint exposed by the broker or sidecar on a Unix domain socket or loopback in the container netns, not published externally. It should:
- Return: session_id, engine (chromium/firefox/webkit), engine_version, policy_version, egress_mode (bpf/proxy), last_nav_time, health.
- Omit: any tokens, cookies, URLs visited, or headers.
- Enforce: file permissions (0700), optional mTLS if using TCP on loopback, or SO_PEERCRED checks.
Example: Minimal HTTP over UDS using socat for ops
# Query the diagnostic socket over UDS (HTTP/1.0 avoids the mandatory Host header)
printf 'GET /healthz HTTP/1.0\r\n\r\n' | socat - UNIX-CONNECT:/run/agent.d/diag.sock
Return body example:
{
  "session_id": "1bf1a7e3-...",
  "engine": "chromium",
  "engine_version": "120.0.6099.224",
  "policy_version": "2025-02-10a",
  "egress_mode": "bpf+envoy",
  "last_nav_time": "2025-02-11T14:20:33Z",
  "health": "ok"
}
The same endpoint can expose a sanitized /about that includes only Chrome’s /json/version minus any WebSocket debugger URL; if you include it, strip tokens and keep the socket private.
Cross-engine fleets: Chromium, Firefox, and WebKit
- Chromium: Use CDP with --remote-debugging-pipe. Maintain per-major-version images; some flags change between releases.
- Firefox: Prefer WebDriver BiDi for future-proofing; geckodriver tunnels WebDriver and BiDi. Apply the same container, eBPF, and proxy policies. Lock down the geckodriver port via UDS or loopback+mTLS (see the sketch after this list).
- WebKitGTK/Playwright WebKit: Similar story—automate via Playwright or WebDriver BiDi. Ensure the UI subprocess model is captured by your seccomp/AppArmor policy.
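For Firefox, a minimal sketch of a locked-down geckodriver launch inside the container network namespace (standard geckodriver flags; the container, eBPF, and proxy policies above still apply):

# Keep the WebDriver/BiDi listener strictly on loopback inside the container netns
geckodriver --host 127.0.0.1 --port 4444 --binary /usr/bin/firefox --log info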
Unify orchestration:
- The outer containerization, cgroup, and egress controls are engine-agnostic.
- Write separate AppArmor profiles per engine. Keep your allowlist small; share base abstractions for fonts, XDG paths, and NSS.
Observability and auditing
- seccomp/AppArmor: Enable audit logs. Parse dmesg or journald for denials tagged with profile names.
- eBPF: Add per-cgroup counters for allowed/denied connects. Expose via bpftool map dump or a tiny exporter.
- Proxy: Centralize egress logs; emit SNI, TLS version, and upstream latency.
- CDP broker: Log client PID/UID, connection start/stop, and count of commands per session, not payloads.
Example: Simple eBPF stat map for denies
struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} denied_cnt SEC(".maps");

...

// In the connect hook, when denying:
__u32 k = 0;
__u64 *v = bpf_map_lookup_elem(&denied_cnt, &k);
if (v)
    __sync_fetch_and_add(v, 1);
Scrape with a userland exporter and reset periodically.
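For example, assuming the counter map is pinned alongside the allowlist maps, a scrape can be as simple as:

# One value per CPU; the exporter sums them and tracks deltas between scrapes
bpftool map dump pinned /sys/fs/bpf/allow_connect_maps/denied_cnt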
Performance and ergonomics
- Startup time: Pre-bake browser caches and fonts in the image. Use read-only layers and tmpfs for speed. Keep the image small; Alpine is not ideal for glibc-heavy browsers—use Debian/Ubuntu or distroless with the right libraries.
- Headless flags: --headless=new is more robust in modern Chromium. Avoid software rendering overhead by disabling GPU where possible.
- eBPF vs iptables: For per-session policies, eBPF attached to cgroups scales better, avoids rule churn, and makes multi-tenant safe. iptables owner match is coarse and brittle.
- Proxy overhead: On modern hosts, a local Envoy adds low single-digit ms per TLS connection. Keep connection pools warm; reuse HTTP/2.
Deployment patterns
- Single host orchestrator: systemd-run --user can isolate transient scopes; attach eBPF per scope. Rootless Podman pods map cleanly to cgroups.
- Kubernetes: Use a per-agent Pod with an initContainer that mounts pinned BPF maps and a sidecar egress proxy. Disable host networking. Use Pod Security Admission to enforce restricted policies.
- Multi-tenant: Assign unique UIDs and cgroups per session. Never share the proxy; one proxy per session netns prevents cross-tenant egress routing.
systemd-run example that creates a transient scope and attaches eBPF
SESSION=$(uuidgen)
# --scope runs its command in the foreground, so background it and keep orchestrating
systemd-run --user --scope --unit=browser-$SESSION \
  podman run ... &
sleep 1  # give systemd a moment to create the transient scope

CGPATH=$(systemctl show --user -p ControlGroup browser-$SESSION.scope | cut -d= -f2)
# Attach eBPF to that cgroup path
bpftool cgroup attach /sys/fs/cgroup$CGPATH connect4 pinned /sys/fs/bpf/allow_connect/cg_connect4
bpftool cgroup attach /sys/fs/cgroup$CGPATH connect6 pinned /sys/fs/bpf/allow_connect/cg_connect6
Practical hardening checklist
- Containerization
- Rootless Podman or Docker rootless.
- --read-only with tmpfs for /tmp and home.
- no-new-privileges, drop all capabilities.
- Browser
- Keep Chrome’s sandbox enabled.
- Use --remote-debugging-pipe; never expose a wide-open TCP port.
- Ephemeral user-data-dir.
- CDP
- UDS with 0700 permissions; SO_PEERCRED checks.
- Short-lived token per session.
- Broker with logging and backpressure.
- Syscalls/FS
- seccomp: start from Docker default, kill risky syscalls.
- AppArmor/SELinux: deny writes outside tmpfs; no ptrace; deny /proc introspection.
- Egress
- eBPF cgroup connect hooks. Either:
- Strict IP:port allowlist, or
- Only allow proxy’s socket; enforce domains in proxy.
- No direct access to cloud metadata IP unless explicitly needed.
- Secrets
- Vault integration via sidecar with mTLS; no env vars.
- Short-lived cookies/tokens; inject via CDP just-in-time.
- Diagnostics
- Local-only /healthz and /about; no secret values.
- Observability
- Audit logs for LSM/seccomp.
- eBPF counters for deny events.
- Proxy access logs with SNI.
Caveats and pitfalls
- Chrome flags change; test with each stable release. Some enterprise policies can disable insecure features (e.g., external protocols) without code changes.
- DNS vs IP allowlists: To enforce domain-level policy reliably without deep packet inspection, funnel through a proxy. Tying DNS results to allowlists in-kernel is fragile.
- Secrets in crash dumps: Ensure core dumps are disabled in containers; configure ulimit -c 0 (see the sketch after this list).
- Fonts and locales: Minimal images often miss fonts; missing glyphs can break rendering in deterministic scraping. Pre-install Noto fonts.
- GPU: Disabling GPU simplifies policies and removes driver attack surface. If you require GPU, isolate with container device ACLs and audit driver syscalls.
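A sketch of the core-dump hardening mentioned above (the host-wide sysctl affects every workload on the machine, so apply it deliberately):

# Per-session: no core files inside the container
podman run --rm --ulimit core=0:0 agent-chrome:stable chromium --headless=new about:blank
# Host-wide: discard any stray dump instead of writing it to disk
echo 'kernel.core_pattern=|/bin/false' | sudo tee /etc/sysctl.d/99-no-coredumps.conf
sudo sysctl --system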
References and further reading
- Chromium sandbox design: https://chromium.googlesource.com/chromium/src/+/HEAD/docs/linux_sandboxing.md
- Docker default seccomp profile: https://docs.docker.com/engine/security/seccomp/
- AppArmor documentation: https://gitlab.com/apparmor/apparmor/-/wikis/home
- eBPF cgroup hooks (connect4/connect6): https://www.kernel.org/doc/html/latest/bpf/prog_cgroup_sock.html
- Cilium and policy with eBPF: https://cilium.io/
- Envoy proxy: https://www.envoyproxy.io/
- Rootless containers overview (Podman): https://podman.io/docs/installation#rootless-mode
- Chrome DevTools Protocol: https://chromedevtools.github.io/devtools-protocol/
- WebDriver BiDi (W3C): https://w3c.github.io/webdriver-bidi/
Opinionated conclusion
If you’re running agentic browsers without rootless isolation, network egress controls, and a CDP broker, you are accepting unnecessary and material risk.
The recipe above is not theoretical: rootless containers, AppArmor/SELinux, seccomp, and eBPF are mature building blocks. Combine them with an mTLS egress proxy and a disciplined secrets flow to make containment the default and escape the exception. Your agents will still be productive—but far less dangerous when the web fights back.