AI Agent Management

Personal AI Agent Setup Guide

A step-by-step wizard for setting up a personal AI agent with durable memory, safety gates, copy-pasteable prompts, and secret-handling checklists.

A useful personal AI agent is not a chatbot with your name on it. It is a managed operating layer for your work: goals, context, memory, tasks, recurring routines, tools, and review cadence.

The goal is not to connect every tool on day one. The goal is to install a first version that can remember the right things, route work, respect safety boundaries, and improve through weekly review.

Personal agent setup wizard

Build the first useful version in five milestones.

Each milestone has a human decision, a copy/paste prompt for Codex or another coding agent, secret-handling notes, and a verification check. The setup should feel like a guided installation, not a pile of disconnected commands.

The email button opens your email app with pre-filled setup questions. Nothing is submitted automatically.

What you are installing

The first version should have six parts:

  1. Operating model: what the agent is for, what it can do, and what it must not do.
  2. Durable source of truth: the files or systems where context that still matters next month gets stored.
  3. Runtime: the local or hosted agent process that can use tools safely.
  4. Interface: one first channel, usually Telegram, terminal, or web.
  5. Secrets boundary: credentials stored in environment variables or a secret manager, never in prompts or durable notes.
  6. Routines: a small number of scheduled reviews or reminders with clear human approval gates.

Before you begin: setup worksheet

Copy this worksheet into a note. It becomes the shared context for the wizard.

Personal AI agent setup worksheet

My first 2-4 workflows:
-
-
-
-

My durable source of truth should be:
- Local folder/repo:
- Existing tools it may read:
- Existing tools it may write to:
- Context that must stay private/local-only:

My first interface:
- Telegram / terminal / web / other:

My safety rules:
- Agent may read:
- Agent may write:
- Agent may never do:
- Requires explicit approval:

My week-one success criteria:
-
-
-

Secret-handling worksheet

Do not paste real API keys into ChatGPT, Codex prompts, public repos, or long-term notes. Use placeholders while designing the system, then create real secrets in the approved environment when you are ready.

# Secret names to create later. Keep real values out of prompts and docs.
OPENAI_API_KEY="paste-real-value-in-secret-manager-only"
ANTHROPIC_API_KEY="paste-real-value-in-secret-manager-only"
TELEGRAM_BOT_TOKEN="paste-real-value-in-secret-manager-only"
TELEGRAM_CHAT_ID="paste-real-value-in-secret-manager-only"
GITHUB_TOKEN="paste-real-value-in-secret-manager-only"
DATABASE_URL="paste-real-value-in-secret-manager-only"

If you are using Railway for hosting, the human step is usually: create the variables in the Railway dashboard or CLI, then ask the agent to verify that the variable names are present without printing any values.

# Railway example: paste real values only into your local terminal or Railway UI.
railway variables --set "OPENAI_API_KEY=<real value>"
railway variables --set "TELEGRAM_BOT_TOKEN=<real value>"
railway variables --set "TELEGRAM_CHAT_ID=<real value>"

# Safe verification: list variable names without printing values.
# (--kv prints KEY=value pairs; the sed masks everything after the "=".)
railway variables --kv | sed 's/=.*$/=<hidden>/'

If you are using a local .env file, keep it ignored by git and commit only a template.

# Human-only: create local secrets file.
cp .env.example .env
chmod 600 .env

# Then edit .env locally. Do not commit it.
# Safe check before commit:
git status --short
git check-ignore .env

Milestone wizard

Milestone 1

Define the operating model

Decide what the agent should help run before choosing tools. A narrow first version beats an ambitious system nobody trusts.

Human decision

Pick 2–4 workflows, privacy rules, and week-one success criteria.

Agent output

A concise operating model with human-only decisions separated from executable tasks.

Verification

You can explain in one paragraph what the agent is allowed to do.

You are helping me design a personal AI agent operating model.

Goal: create the first practical version, not a maximal architecture.

Inputs I will provide:
- My first 2-4 workflows
- My preferred durable source of truth
- My first interaction channel
- Privacy/security boundaries
- Week-one success criteria

Your task:
1. Turn the inputs into a concise operating model.
2. Propose a folder/file structure for durable context.
3. Separate human-only decisions from agent-executable tasks.
4. List the minimum integrations needed for version one.
5. Identify risks, secrets, and permissions that require explicit approval.
6. Produce a step-by-step setup plan with verification checks.

Rules:
- Do not invent credentials.
- Do not ask me to paste secrets into chat.
- Mark secret-handling steps as human-only.
- Prefer a reversible week-one setup over a big-bang architecture.

Milestone 2

Create durable memory and routing

The agent needs a place to store context that should survive the current chat: goals, tasks, decisions, routines, system notes, and interactions.

Human decision

Choose local folder, git repo, Obsidian vault, Notion database, or hybrid.

Agent output

Readable Markdown structure plus context policy and routing rules.

Verification

A new note can be routed without guessing where it belongs.

Create a local personal-agent source-of-truth structure using Markdown files.

Constraints:
- Keep it simple and readable.
- Do not store secrets.
- Use personal/outcome/system/task/decision/interaction/event/routine categories.
- Add README files that explain what belongs in each folder.
- Add a CONTEXT_POLICY.md that classifies what should and should not be saved.
- Add a RESOLVER.md that explains where new notes should be routed.

Suggested top-level folders:
- personal/
- outcomes/
- systems/
- tasks/
- decisions/
- interactions/
- events/
- routines/
- inbox/needs-triage/

After creating files:
1. Print a tree view.
2. Explain how a new note should be routed.
3. List files that should be reviewed before any agent writes durable context.
4. Confirm no secrets were created or requested.
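If you would rather bootstrap the skeleton by hand before involving the agent, the structure above takes a few commands. The README and policy file contents here are placeholders for you to fill in:

```shell
# Create the source-of-truth skeleton (folder names from the layout above).
mkdir -p personal outcomes systems tasks decisions interactions events routines inbox/needs-triage

# Stub the policy and routing docs for later editing.
printf '# Context policy\n\nWhat should and should not be saved.\n' > CONTEXT_POLICY.md
printf '# Resolver\n\nWhere new notes should be routed.\n' > RESOLVER.md

# One README per top-level folder explaining what belongs there.
for d in personal outcomes systems tasks decisions interactions events routines; do
  printf '# %s\n\nWhat belongs in this folder.\n' "$d" > "$d/README.md"
done
```

Running the Milestone 2 prompt against this skeleton then becomes a review task rather than a creation task.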

Milestone 3

Install the runtime

The runtime is the agent process: local-only, hosted webhook, or hybrid. Start with the simplest version that can run, read context, and report errors.

Human decision

Choose runtime, hosting mode, model provider, and where secrets live.

Agent output

Install plan, config template, runbook, and verification checklist.

Verification

The runtime starts locally or in staging without exposing secret values.

Help me install a personal AI agent runtime safely.

Before changing files:
1. Inspect the repo/folder and identify the stack.
2. List required commands and dependencies.
3. Identify where environment variables should be stored.
4. Identify any commands that require human approval.

Implementation rules:
- Do not print or commit secrets.
- Do not push code unless I explicitly approve.
- Do not modify production systems.
- Prefer small checkpoints if this is a git repo.
- After each major step, run a verification command and report the result.

Deliverables:
- installed runtime or clear blocker list;
- .env.example or config template with placeholders only;
- runbook for starting/stopping the agent;
- verification checklist;
- list of secret names required, with no secret values.

Milestone 4

Connect one interface

Pick a first channel. Telegram is useful for a personal operating partner; terminal is useful for technical work; web is useful for lightweight review.

Human decision

Create bot/app credentials and approve any external test sends.

Agent output

Config placeholders, interface runbook, and exact smoke tests.

Verification

A test message or command reaches the agent and returns the expected response.

Connect the first interface for my personal AI agent.

Interface selected: <Telegram | terminal | web | other>

Rules:
- Human creates credentials and stores secrets.
- Agent may create config templates with placeholder names only.
- Agent may update docs/runbooks.
- Agent must not send external messages except explicit test messages I approve.

Tasks:
1. Inspect existing config and docs.
2. Add or update interface configuration using placeholders.
3. Add a runbook for local testing and hosted deployment if applicable.
4. Add a verification checklist with exact test messages or commands.
5. Report any missing credentials or manual setup steps.
6. Confirm no secret values were printed or committed.
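For a Telegram first channel, one smoke test the human can run is the Bot API's getMe method, which confirms the token is valid without sending anything to a chat. The token is read from the environment, never typed inline:

```shell
# Smoke-test the Telegram bot credential without echoing the token.
# getMe only returns the bot's own identity; it sends no messages.
if [ -z "${TELEGRAM_BOT_TOKEN:-}" ]; then
  echo "TELEGRAM_BOT_TOKEN is not set; create it in your secret store first" >&2
else
  curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe" \
    | grep -o '"ok":[a-z]*'
fi
```

A successful check prints `"ok":true`; anything else means the token is wrong or missing.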

Milestone 5

Add routines and safety gates

Routines turn the agent from a clever chat window into an operating partner. Keep the first routines advisory until you trust the outputs.

Human decision

Choose cadence, allowed actions, and escalation/approval rules.

Agent output

Routine definitions, safety gates, manual test procedure, and disable/rollback steps.

Verification

The routine can run once manually and produce a useful, reviewable output.

Add first-week routines for my personal AI agent.

Candidate routines:
- daily planning review;
- weekly review;
- inbox triage;
- content idea capture;
- project status digest;
- follow-up reminders.

Rules:
- Start advisory-only unless I explicitly approve automation.
- No external sends without approval.
- No spending money.
- No deleting files.
- No production changes.
- Durable context must be routed according to CONTEXT_POLICY.md and RESOLVER.md.

Deliverables:
1. Routine definitions with cadence, inputs, outputs, and owner.
2. Safety gates for each routine.
3. Manual test procedure.
4. Rollback/disable procedure.
5. First-week review checklist.
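On a local or hosted Linux runtime, advisory routines can start as plain cron entries. The script names and paths below are hypothetical placeholders; the important pattern is that each routine only writes a draft into inbox/needs-triage/ for human review, and disabling is a one-line change:

```shell
# Example crontab entries for advisory-only routines (edit with: crontab -e).
# daily_review.sh and weekly_review.sh are hypothetical entry points; each
# should only write a draft into inbox/needs-triage/ for human review.

# Weekdays at 08:00: draft a daily plan for review.
0 8 * * 1-5  /home/me/agent/daily_review.sh >> /home/me/agent/logs/daily.log 2>&1

# Sundays at 17:00: draft the weekly review.
0 17 * * 0   /home/me/agent/weekly_review.sh >> /home/me/agent/logs/weekly.log 2>&1

# Rollback/disable: comment out the line (prefix with #) or run: crontab -r
```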

A simple milestone checklist

Use this as the acceptance criteria for version one.

Version-one personal agent acceptance checklist

[ ] Operating model written and reviewed
[ ] Durable source-of-truth folder/repo created
[ ] Context policy and resolver exist
[ ] .env.example or secret-name checklist exists
[ ] Real secrets stored outside prompts/docs
[ ] Runtime starts locally or in staging
[ ] First interface sends and receives a test message
[ ] At least one routine can run manually
[ ] Safety gates are documented
[ ] Human can disable the agent or routine quickly
[ ] Week-one review is scheduled

Cost planning ranges

Costs vary by hosting, model provider, integrations, and automation frequency. Verify current vendor pricing before committing.

  • Local-first / light usage: roughly $0–$30 per month if you mostly use local files, manual runs, and an existing model subscription.
  • Hosted personal agent: roughly $20–$100 per month for app hosting, logs, storage, and moderate model/API usage.
  • Power-user operating system: roughly $100–$300+ per month when you add frequent scheduled jobs, multiple integrations, higher model usage, monitoring, and media/document processing.
  • Implementation support: separate from monthly operating cost; depends on scope, integrations, and how much workflow design is needed.

Good fit signals

This is likely a fit if:

  • you already use AI daily but lose context across tools;
  • you want an agent that can remember operating decisions and route work;
  • you are comfortable reviewing agent-generated setup instructions;
  • you want a practical install path rather than abstract AI strategy;
  • you care about privacy, durable memory, and human approval boundaries.

Not a fit if

This is probably not the right starting point if you want:

  • a fully autonomous assistant with no supervision;
  • a black-box agent that stores everything everywhere;
  • a one-click tool with no operating model;
  • secret handling inside prompts;
  • production automation before basic routines are tested.

Personal agent setup intake

If you want help installing the first useful version, use the pre-filled email link so your environment, workflows, privacy constraints, and preferred interface are captured before the first conversation.

Open the setup intake email

Send the context needed to design version one.

The button below opens your email app with a prepared subject line and prompts for your computer, technical comfort level, current tools, target workflows, and privacy or security constraints.

If your email app does not open, send the same details to assessment@aiagentmanagement.com.