AI Agent Management

What Is LifeOS?

A practical overview of LifeOS: the operating system Rick Wong uses to manage outcomes, systems, agents, decisions, tasks, and content workflows.

LifeOS is the operating system I use to keep AI work from turning into a pile of chats, notes, tools, and half-finished ideas.

It is not a productivity aesthetic. It is a management model for working with AI agents under real constraints: ownership, memory, approval, source of truth, workflows, and measurable outcomes.

If you are trying to build something similar inside a company, email Rick about the AI Workflow & Agent Operating System Diagnostic. If you want the conceptual frame first, keep reading.

The short version

LifeOS separates work into explicit capsules:

  1. Outcomes — the measurable results I am trying to create.
  2. Systems — the repositories, tools, content engines, infrastructure, and workflows that support those outcomes.
  3. Tasks — the actions that need to happen next.
  4. Decisions — the durable choices that should not be rediscovered every week.
  5. Interactions — important conversations and working sessions worth preserving.
  6. Skills — reusable workflows that have proven useful enough to run again.
  7. Context policy — rules for what should be remembered, where it belongs, and what should never be stored.

The point is simple: agents become more useful when they know what they own, what they can touch, what requires approval, and where durable context lives.
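The capsule list above is conceptual, but it maps cleanly onto a small data model. Here is a minimal sketch in Python; all names, fields, and the file path are hypothetical illustrations, not the actual LifeOS schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class CapsuleKind(Enum):
    PERSONAL = "personal"
    OUTCOME = "outcome"
    SYSTEM = "system"

@dataclass
class Capsule:
    """A bounded context with a clear owner (illustrative, not the real schema)."""
    name: str
    kind: CapsuleKind
    owner: str                       # who is accountable for this capsule
    source_of_truth: str             # where durable context lives (repo, doc, tool)
    requires_approval: bool = True   # default to human sign-off for changes
    decisions: list[str] = field(default_factory=list)  # durable choices, logged
    tasks: list[str] = field(default_factory=list)      # next actions

# Example: the revenue outcome mentioned below, expressed as data.
revenue = Capsule(
    name="revenue",
    kind=CapsuleKind.OUTCOME,
    owner="Rick",
    source_of_truth="outcomes/revenue.md",  # hypothetical path
)
```

The useful property is that every question an agent might ask — who owns this, where is the truth, do I need approval — has a field, not a vibe.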

Why it exists

Most AI work starts in a chat window.

That is fine for exploration, but it breaks down when the work becomes important:

  • the same context gets repeated over and over;
  • strategic decisions disappear into transcripts;
  • agents mix personal context, business goals, repo details, and tasks;
  • nobody knows which system is the source of truth;
  • useful workflows do not become reusable skills;
  • approvals, risks, and boundaries are unclear.

LifeOS exists to solve that management problem.

It turns AI from “a helpful assistant in a chat” into a coordinated operating layer around real work.

The capsule model

A LifeOS capsule is a bounded context with a clear owner.

Personal capsule

This stores durable user-level context: preferences, patterns, routines, constraints, relationships, and personal operating principles.

Outcome capsules

These manage measurable goals: revenue, financial independence, building a personal AI agent, or any other result that needs strategy, KPIs, plans, and reviews.

Example: the revenue outcome tracks offer hypotheses, buyer evidence, target accounts, and the current go-to-market loop.

System capsules

These manage operational systems: repos, content systems, infrastructure, CRM, email, calendar, prospecting workflows, or any other persistent system with state and permissions.

Example: this site is managed as a content system, with its own state, decisions, runbook, tasks, and repo-local skills.

Skills

A skill is a reusable workflow. It only becomes a skill after it has proven useful enough to repeat.

That distinction matters. Not every good idea deserves to become a process. LifeOS tries to separate experiments from durable operating knowledge.
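That promotion rule — a workflow earns skill status only after repeated, proven use — can be sketched as a tiny state machine. The threshold of three runs is an arbitrary assumption for illustration, not a LifeOS rule:

```python
from dataclasses import dataclass

PROMOTION_THRESHOLD = 3  # assumption: proven runs before a workflow becomes durable

@dataclass
class Workflow:
    name: str
    successful_runs: int = 0
    is_skill: bool = False  # experiments start as not-skills

def record_run(wf: Workflow, succeeded: bool) -> Workflow:
    """Log one run; promote the workflow to a skill once it has proven itself."""
    if succeeded:
        wf.successful_runs += 1
    if wf.successful_runs >= PROMOTION_THRESHOLD:
        wf.is_skill = True
    return wf
```

The point of the gate is in the default: everything starts as an experiment, and only repetition, not enthusiasm, moves it into the durable library.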

How I use it in practice

LifeOS currently helps me route work across several loops:

  • Revenue: offer strategy, target selection, buyer evidence, prospect research, outreach assets, and learning reviews.
  • Content: site strategy, article roadmaps, operator notes, SEO decisions, and publishing tasks.
  • Prospecting: company research, purchase-intent signals, artifact briefs, outreach drafts, and quality reviews.
  • Systems: repositories, infrastructure, Telegram, Railway, and content-management boundaries.
  • Agent management: what agents can remember, what they can modify, when they need approval, and how skills evolve.

That is why LifeOS shows up throughout this site. It is the working laboratory behind the public frameworks.

What LifeOS proves about AI agent management

The biggest lesson is that agent management is not mainly about prompts.

It is about operating design:

  • Which work belongs to which agent?
  • Which context is durable and which is temporary?
  • Which file, repo, or system is the source of truth?
  • What can the agent do without approval?
  • What must be logged as a decision?
  • When does a recurring workflow become a skill?
  • How do agents support outcomes instead of creating more activity?

Those are the same questions companies face when they move from AI pilots to governed AI execution.
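Several of those questions are really a permission policy. One way to make the answers explicit is a per-capsule table that separates autonomous actions, approval-gated actions, and actions that must be logged as decisions. This is a minimal sketch under assumed names (the capsule key and action labels are hypothetical):

```python
# Illustrative per-capsule policy; unlisted actions are denied by default.
POLICY = {
    "content-site": {
        "autonomous": {"draft", "research"},      # no approval needed
        "needs_approval": {"publish", "delete"},  # human sign-off required
        "log_as_decision": {"publish"},           # durable choices get recorded
    },
}

def check(capsule: str, action: str) -> str:
    """Return 'allow', 'escalate', or 'deny' for an agent's proposed action."""
    rules = POLICY[capsule]
    if action in rules["autonomous"]:
        return "allow"
    if action in rules["needs_approval"]:
        return "escalate"
    return "deny"  # anything not explicitly granted is out of bounds
```

Deny-by-default is the design choice that matters: an agent's boundaries are whatever the policy grants, not whatever the policy forgot to forbid.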

How this maps to companies

A company does not need to copy my exact LifeOS structure.

But it does need the same operating primitives:

  1. Outcome ownership: what business result are we trying to improve?
  2. Workflow map: where does work actually move?
  3. Agent inventory: which AI systems touch the workflow?
  4. Decision rights: who approves, reviews, escalates, and stops the work?
  5. Source of truth: where does durable context live?
  6. Operating cadence: when do humans review performance, risk, and next steps?
  7. Scorecard: what proves this made the business better?

That is the bridge between LifeOS and the broader AI Workflow & Agent Operating System work.

The practical takeaway

If AI work matters, it needs somewhere to live.

Not just a chat history. Not just a project board. Not just a docs folder.

It needs an operating system that connects outcomes, systems, agents, decisions, tasks, skills, and review cadence.

That is what LifeOS is for.

For a company-specific map, email assessment@aiagentmanagement.com.