Writing

Topics I'm working through. Some have drafts, most are still forming.

Data Provenance & Trust

A provenance model for AI-augmented workflows

When agents produce output from a mix of curated documents, live email, and prior conversation, it helps to know where things came from. I've been working with a four-dimensional model — structure, trust, durability, and lineage — that tracks how information moves through a system and how much weight it should carry. The practical implementation uses extended attributes for trust tagging and a JSON manifest for tracking transformations.
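The manifest half of that implementation can be sketched as a plain JSON record. Every field name and value below is invented for illustration; only the four dimensions themselves come from the model:

```python
import json

# Hypothetical manifest record for one derived document. Field names and
# label vocabularies ("curated", "verbatim-extract", etc.) are assumptions;
# the four top-level dimensions are the only part taken from the model.
record = {
    "path": "notes/q3-roadmap.md",
    "structure": "markdown",
    "trust": "curated",        # vs. e.g. "live" or "synthesized"
    "durability": "stable",
    "lineage": [
        {"op": "verbatim-extract", "source": "sharepoint"},
        {"op": "ai-synthesis"},
    ],
}
print(json.dumps(record, indent=2))
```

The lineage list is ordered: each entry records one transformation, so the record reads as a history from original source to current file.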

Trust decay

A signed design review holds its authority for years. A meeting summary starts losing value within days. I'm exploring what happens when you assign decay functions to different data types — emails, meetings, status updates — and use those to filter what gets loaded into agent context. The question is whether this is worth the complexity, or whether simple recency cutoffs do the job well enough.
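A minimal sketch of what such a decay function might look like, using exponential decay with invented half-lives. The actual numbers, and whether exponential is even the right shape, are exactly the open question:

```python
# Illustrative half-lives in days; these values are assumptions.
HALF_LIFE_DAYS = {
    "design_review": 365.0,
    "meeting_summary": 7.0,
    "status_update": 3.0,
}

def trust_weight(kind: str, age_days: float) -> float:
    """Weight halves every HALF_LIFE_DAYS[kind] days."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[kind])

def load_into_context(kind: str, age_days: float, threshold: float = 0.25) -> bool:
    """Filter: only items still above the threshold reach agent context."""
    return trust_weight(kind, age_days) >= threshold
```

A simple recency cutoff is the degenerate case of this: a step function instead of a curve, which may be all the complexity that's actually justified.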

Mirroring enterprise documents with lineage

Most enterprise knowledge lives in SharePoint, Outlook, and Slack. None of it is directly usable by agents. I've been building a workflow that converts these sources into markdown while preserving where each piece came from and how it was transformed — verbatim extraction versus AI synthesis versus human authorship. The mirror is useful, but it's never the source of truth, and keeping that distinction clear turns out to matter.
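One lightweight way to keep that distinction visible is to stamp every mirrored file with its origin and derivation method. A sketch, with invented keys and labels:

```python
# Hypothetical derivation labels; the real taxonomy may differ.
DERIVATIONS = {"verbatim-extract", "ai-synthesis", "human-authored"}

def frontmatter(source_url: str, derivation: str) -> str:
    """YAML-style frontmatter for a mirrored markdown file."""
    if derivation not in DERIVATIONS:
        raise ValueError(f"unknown derivation: {derivation}")
    return "\n".join([
        "---",
        f"source: {source_url}",   # the system of record; the mirror never is
        f"derivation: {derivation}",
        "mirror: true",
        "---",
    ])
```

Anything downstream that reads the mirror can then refuse, or flag, content whose frontmatter is missing, which keeps "mirror, not source of truth" enforceable rather than aspirational.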

Agent Workflows

The flash project pattern

A signal arrives — an email, a Slack message, a request from a meeting — and creates a need for a short-lived deliverable. The workflow runs to seven steps, roughly: scaffold a project, assemble relevant context, pull live data from email and calendar, inject human judgment, synthesize. The bottleneck is context assembly, which is still mostly manual. I'm documenting where automation helps and where human selection of "what's relevant" remains necessary.
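The shape of that loop, with every function a placeholder stub. The stage names and the compression into five functions are mine, not the full sequence:

```python
def scaffold(signal: str) -> dict:
    # Create the project shell from the incoming signal.
    return {"signal": signal, "context": [], "live": [], "notes": []}

def assemble_context(project: dict, selected_docs: list) -> dict:
    # The bottleneck: deciding *which* docs are relevant is still a human call.
    project["context"] = selected_docs
    return project

def pull_live_data(project: dict, email: list, calendar: list) -> dict:
    project["live"] = email + calendar
    return project

def inject_judgment(project: dict, note: str) -> dict:
    project["notes"].append(note)
    return project

def synthesize(project: dict) -> str:
    # Stand-in for the actual drafting step.
    return (f"deliverable for {project['signal']!r} "
            f"({len(project['context'])} docs, {len(project['live'])} live items)")
```

The stubs make the dependency visible: everything downstream of `assemble_context` is automatable in principle, but the quality of the output is capped by that one manual step.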

A proposal-based governance loop

Improvement ideas surface naturally during agent-assisted work. Rather than being applied immediately, they go into a structured queue — capture, review, implement, audit. Over three weeks this produced 55 proposals, 39 of which were implemented. The system doesn't modify itself autonomously: changes accumulate, a human reviews at a threshold, and the process evolves deliberately. Notes on why this works better than either full automation or ad-hoc fixes.
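The queue itself can be sketched as a four-state machine. The field names and the threshold value are invented; the four states come from the loop described above:

```python
from dataclasses import dataclass

STATES = ("captured", "reviewed", "implemented", "audited")

@dataclass
class Proposal:
    title: str
    state: str = "captured"

    def advance(self) -> None:
        # Proposals only ever move forward, one state at a time.
        i = STATES.index(self.state)
        if i + 1 < len(STATES):
            self.state = STATES[i + 1]

def review_due(queue: list, threshold: int = 10) -> bool:
    # Nothing auto-applies; a human drains the queue once it is deep enough.
    return sum(p.state == "captured" for p in queue) >= threshold
```

The forward-only transition is the point: a proposal can stall, but it can't silently skip review, which is what separates this from self-modifying automation.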

Spec-driven agentic development

In mid-2025, I started writing specifications before letting agents build anything — defining roles, validation gates, and shared context structures upfront. The first full application was a fantasy football draft tool: a React and TypeScript app with a VBD engine, Monte Carlo risk analysis, and a unified state store — 38 tasks, each with human-in-the-loop validation, tracked through a three-layer context system (global steering, project context, execution ledger). The approach evolved across about two dozen projects after that. Documenting what the methodology looks like in practice, where it helps, and where it adds overhead that isn't justified.
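The three context layers can be sketched as plain dictionaries. The layer names come from the description above; their contents and the override order (project over global) are my assumptions:

```python
# Invented contents for each layer; only the layer names follow the notes.
global_steering = {"language": "TypeScript", "validation": "human-in-the-loop"}
project_context = {"app": "draft-tool", "validation": "per-task gate"}
execution_ledger: list = []   # append-only record of completed tasks

def effective_context() -> dict:
    # Assumed semantics: the narrower layer wins on conflicts,
    # and the ledger rides along as read-only history.
    merged = {**global_steering, **project_context}
    merged["history"] = tuple(execution_ledger)
    return merged

execution_ledger.append({"task": 1, "status": "validated"})
```

Under this reading, the ledger is what makes 38 human-validated tasks auditable after the fact: the steering and context layers say what should happen, the ledger records what did.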