Why I started Truss Labs
Notes on incorporating in New York, picking a C-Corp, and why I'm working on agent infrastructure instead of another agent product.
AI activity visibility on infrastructure you control
See what AI is doing with your sensitive data — on infrastructure you control.
Truss is the visibility and control layer between AI agents and the systems that hold sensitive data. It captures every prompt, every response, and every action an agent takes into a durable on-disk ledger your auditors and counsel can read directly. No vendor portal. No opaque chat history. The runtime is software you run yourself: physical box, VM, container, or your own cloud account. There is no Truss-operated cloud in the data path, ever.
It extends the EDR/DLP stack you already run (CrowdStrike, Varonis, Proofpoint) into the prompt-and-response layer those tools structurally cannot see.
Live demo: demo.trusslabs.org — an audit proxy in front of Gemini, with three policy verdicts (block / redact / allow) and a 7-day retention timer. Zero inbound ports on the host.
Captures prompts, responses, sensitive-data class hits, downstream actions, and policy verdicts into audit-defensible receipts on customer-controlled infrastructure. Receipts are JSON files you can grep, query with SQLite, and retain on your own schedule. Designed to answer the questions a board, a county counsel, or a cyber-insurance questionnaire actually asks.
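To make "files you can grep and query with SQLite" concrete, here is a minimal sketch. The receipt schema (`ts` / `verdict` / `classes`) is my invented placeholder, not the shipped format, and the temp directory stands in for `~/.truss/receipts/`:

```python
import json, pathlib, sqlite3, tempfile

# Hypothetical receipt shape: field names are illustrative, not the real schema.
receipts = [
    {"ts": "2026-05-01T09:14:02Z", "verdict": "allow",  "classes": []},
    {"ts": "2026-05-01T09:15:40Z", "verdict": "redact", "classes": ["ssn"]},
    {"ts": "2026-05-01T09:17:11Z", "verdict": "block",  "classes": ["phi", "ssn"]},
]

root = pathlib.Path(tempfile.mkdtemp())  # stand-in for ~/.truss/receipts/
for i, r in enumerate(receipts):
    (root / f"receipt-{i:04d}.json").write_text(json.dumps(r))

# Bulk-load the files into an ad-hoc SQLite table and ask a plain SQL question:
# "which interactions were not clean allows?"
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE receipts (ts TEXT, verdict TEXT, classes TEXT)")
for p in sorted(root.glob("*.json")):
    r = json.loads(p.read_text())
    db.execute("INSERT INTO receipts VALUES (?, ?, ?)",
               (r["ts"], r["verdict"], json.dumps(r["classes"])))

flagged = db.execute(
    "SELECT ts, verdict FROM receipts WHERE verdict != 'allow' ORDER BY ts"
).fetchall()
print(flagged)  # the redact and the block rows
```

The point of the design, as described above, is that nothing here goes through a vendor API: the audit question is answered by reading files off a disk you control.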
The broader harness I run on my own machine. Same architecture (customer-controlled runtime, on-disk state, no vendor cloud in the data path), plus steering primitives for shaping live agent reasoning. Sold to builders, not to CISOs. The audit-and-policy slice is what productizes for the CISO buyer.
Truss Audit is not a hypothesis. It is the productized version of a tool I have been running for myself, on my own laptop, for the past six months.
That tool is Soul OS. State at ~/soul_registry/, config at ~/dotfiles/soul/. No external SaaS in the data path. Files I own, formats I control, queryable directly with grep, jq, and sqlite. Sessions distilled at close into an immutable record. Decisions logged to a ledger. Tasks tracked as JSON files I version-control.
The architecture maps almost line-for-line onto what Truss Audit ships: task state in ~/soul_registry/tasks/ becomes AI activity in ~/.truss/receipts/; the session ledger becomes the receipt stream; the gates I wrote to keep agents from breaking my workflow become the policy YAML. Same shape — one for me, one for the customer.
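I haven't published the policy format, so purely as an illustrative sketch, a rule file covering the demo's three verdicts might look something like this (every field name here is my invention, not the shipped schema):

```yaml
# Hypothetical policy sketch -- illustrative only, not Truss's actual format.
rules:
  - match: { class: phi }     # protected health information
    verdict: block
  - match: { class: ssn }
    verdict: redact
  - match: { class: "*" }     # default: everything else passes, but is receipted
    verdict: allow
```

The shape matters more than the syntax: a small, reviewable text file, versioned alongside the receipts it governs.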
A CISO can ask "do you actually run this?" and the honest answer is: yes, daily, for six months, on this machine.
Below are 383 lines of an AI agent's tool-call log, a real session from this laptop's registry, JSONL on disk. Three small command-line tools, piped together, find every retry loop the agent got stuck in. TRAP-1 is the halt event.
The shape of the pipe — three primitives, four lines:
cat ~/soul_registry/sessions/PROJECT/SESSION/hooks.jsonl \
| python3 primitives/scripts/soul_translate.py \
| python3 primitives/scripts/soul_query.py --json --flag FLAG_CIRCULAR_REASONING \
| python3 primitives/scripts/soul_trap.py run \
| head -3
Same shell vocabulary you already use, applied to your agent's behavior. Files on disk, pipes between them. Replace head -3 with grep, jq, awk, your own scripts — that's the point.
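A stage in that pipe is nothing exotic: JSONL in on stdin, matching events out on stdout. As a sketch of the kind of drop-in filter you could write yourself (the `flags` field and event shape here are invented for illustration, not the primitives' real schema):

```python
import io, json

def filter_flagged(stream, flag):
    """Yield each JSONL event whose 'flags' list contains `flag`.
    Event shape is an invented example, not the registry's actual schema."""
    for line in stream:
        if not line.strip():
            continue
        event = json.loads(line)
        if flag in event.get("flags", []):
            yield event

# In the real pipe the stream would be sys.stdin; here, a canned two-event sample.
sample = io.StringIO(
    '{"tool": "bash", "flags": ["FLAG_CIRCULAR_REASONING"]}\n'
    '{"tool": "edit", "flags": []}\n'
)
for event in filter_flagged(sample, "FLAG_CIRCULAR_REASONING"):
    print(json.dumps(event))
```

Because every stage speaks newline-delimited JSON over stdin/stdout, a ten-line script like this composes with the shipped primitives, with jq, or with awk, exactly like any other Unix filter.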
Primitives ship publicly soon — currently bundled with sensitive call notes in the working tree.
Status: pre-product as a SKU. First validated discovery call: 2026.05.07. Live demo up. Building toward a paid pilot. I'm writing about the work as it ships rather than waiting for a launch.