Agentic wearables
content framework
Nobody had defined how AI agents should communicate on a wrist. I wrote the framework anyway — mapping the full spectrum from manual to autonomous, and establishing when the system should stay silent, make a suggestion, or act on its own.
Every product team was
answering this ad hoc.
Meta was pivoting hard toward AI-first product experiences — building an always-on AI agent that would live on wearable devices. The agent would draft messages, manage notifications, control settings, and act on behalf of the user. Every product team was scrambling to figure out what "agentic" meant for their surface.
Nobody had defined what it meant for content design.
How should an AI agent communicate on a 1.5-inch watch screen? When should it speak vs. display text? When should it act silently? When should it ask permission? What voice should it use — the user's, the brand's, or its own? These questions were being answered ad hoc by product designers who were writing content without content design input.
I wrote the framework
before anyone asked for it.
I mapped the full spectrum of agent autonomy — from fully manual (the user does everything) to fully autonomous (the agent acts independently and notifies after the fact). For each level, I defined what the agent should and shouldn't say, when it should be silent, suggestive, or directive, and how its communication should adapt to the device surface.
I also addressed the question that nobody wanted to talk about: what happens when the agent gets it wrong? I defined content patterns for correction, undo, and transparency — because an agent that acts on your behalf and sends the wrong message has real social consequences.
The framework also covered guardrails for high-stakes domains — health data, messaging, payments — where the cost of an autonomous mistake is high enough that human review should be required.
The autonomy spectrum.
Defined for content.
Communication guidelines for each agent behavior type — reactive, proactive, and autonomous — with clear criteria for when each applies and what the agent should say at each level.
Device-specific rules for how agent communication adapts to each surface — what works on a watch display is different from audio-only glasses, which is different from a phone.
Escalation patterns for health data, messaging, and payments — where autonomous action carries real social and regulatory risk, and human review is required before the agent acts.
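To make the logic concrete: the behavior types and escalation rules above can be thought of as a simple policy. The sketch below is purely illustrative — the names (`Autonomy`, `HIGH_STAKES`, `allowed_autonomy`) are hypothetical and not part of the actual framework, which was a content guidelines document, not software.

```python
from enum import Enum

class Autonomy(Enum):
    REACTIVE = 1     # user initiates; agent responds
    PROACTIVE = 2    # agent suggests; user confirms before anything happens
    AUTONOMOUS = 3   # agent acts on its own, then notifies the user

# Hypothetical set of high-stakes domains where the framework
# requires human review before the agent acts
HIGH_STAKES = {"health", "messaging", "payments"}

def allowed_autonomy(domain: str, requested: Autonomy) -> Autonomy:
    """Cap the agent's autonomy for a given domain.

    In high-stakes domains, autonomous action is downgraded to a
    proactive suggestion so a human reviews it before the agent acts.
    """
    if domain in HIGH_STAKES and requested is Autonomy.AUTONOMOUS:
        return Autonomy.PROACTIVE
    return requested
```

The point of the encoding is the asymmetry: for device settings the agent can act and notify afterward, but for a payment the same request is demoted to a suggestion awaiting confirmation.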
Content design shapes AI.
Not the other way around.
The framework was the first of its kind in the org. It was referenced in sprint planning for the current AI-first device initiative. It gave the team a shared language for making content decisions about agent behavior — instead of every product designer inventing their own approach.
Most importantly, it positioned content design as a discipline that shapes AI, rather than one that gets replaced by it.
What I'd do differently: socialize it more aggressively. A framework that lives in a doc is a reference. A framework that lives in a team's daily process is infrastructure. I got it to reference. I want to get it to infrastructure.