Build in Public · Feb 25, 2026

My AI Brain, One Month Later: What Changed

Matteo Lombardi
Feb 25, 2026

A month ago I published a video showing my AI workspace. One terminal. Persistent memory. Agents that know who I am. A one-click setup anyone could install.

That setup is already obsolete.

Not because it was wrong. Because using it every day exposed everything it couldn’t do. So I rebuilt most of it — the brain file, the agent roster, the entire enforcement layer.

Here’s what changed, what broke, and what I’d do differently if I started today.


The numbers

One month of daily use. Total cost: ~$100/month (Claude Max subscription).

| What | January 17 | February 19 | Change |
|---|---|---|---|
| Agents | ~16 | 22 core + 14 subagents | +20 |
| MCP integrations | 3-4 | 13 | +10 |
| Skills (auto-loading) | 0 | 25 | new |
| Hooks (enforcement) | 0 | 4 | new |
| Brain files | 1 (407 lines) | 7 modular files | -80% tokens |
| Config rules | 655 | 179 | -73% |

Those numbers don’t tell the full story. The real changes were structural.


Rewrite 1: The brain surgery

The original brain was one file. brain/context.md. 407 lines. Everything — revenue, deals, projects, content status, technical notes — in a single document.

Every session started by loading the entire thing. Every session burned tokens on context that wasn’t relevant. Working on content? You’re still loading deal pipeline data. Debugging code? Here’s your content calendar anyway. The model was drowning in noise before I even asked it a question.

So I split it.

brain/
├── context.md          ← 60-line index (loads always)
└── contexts/
    ├── stratega.md     ← company, positioning, pricing
    ├── projects.md     ← active client work
    ├── building.md     ← products under construction
    ├── content.md      ← content engine state
    ├── academy.md      ← Stratega School
    └── side.md         ← side projects

Now /start reads the 60-line index, asks what we’re working on, and loads only the relevant context file. The decision rule is simple: one file per area of work, loaded on demand, never all at once.
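The decision rule is simple enough to sketch in a few lines. This is an illustration of the routing logic, not the actual /start implementation — the function name and the keyword mapping are mine:

```python
# Hypothetical sketch of the on-demand loading rule described above.
# File names match the brain/ tree; the routing itself is illustrative.
from pathlib import Path

BRAIN = Path("brain")

# One context file per area of work, loaded on demand, never all at once.
AREA_FILES = {
    "stratega": "contexts/stratega.md",
    "projects": "contexts/projects.md",
    "building": "contexts/building.md",
    "content": "contexts/content.md",
    "academy": "contexts/academy.md",
    "side": "contexts/side.md",
}

def files_to_load(area: str) -> list[str]:
    """Always load the 60-line index; add exactly one area file."""
    files = [str(BRAIN / "context.md")]
    if area in AREA_FILES:
        files.append(str(BRAIN / AREA_FILES[area]))
    return files
```

A content session loads two small files instead of one 407-line monolith — that's where the ~80% token drop comes from.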

Token usage dropped about 80%. But the real difference isn’t cost — it’s output quality. The AI stopped giving me generic answers because it stopped processing generic context. Same model. Same prompts. Dramatically better responses.

The first split attempt didn’t work. I cut the file by topic but left cross-references everywhere — the content context still mentioned deals, the projects context still referenced content deadlines. Each file needed to be self-contained. That took a second pass.

The lesson I keep relearning: the system that works on day 1 breaks by day 30. Not because it was bad. Because usage patterns reveal what the architecture can’t handle.


Rewrite 2: Agent consolidation

I had 27 agents. Some were redundant. The Matteo Voice agent wrote copy in my voice. The Roaster tore it apart and scored it. Separate agents, overlapping jobs. So I merged them into a single Copy Critic that does both.

Same thing happened elsewhere. The Startupper agent got absorbed into the CTO’s “Founder Mode.” The Pretotyper became a protocol inside the Growth Hacker.

27 agents became 22. The rule I use now: does this agent need its own memory and personality that would conflict with another agent’s? If yes, it’s an agent. If no, it’s a protocol inside an existing agent.

Then I added subagents — lightweight specialists that run in parallel without touching the main conversation. A CI Researcher that does three competitor analyses simultaneously. A Code Reviewer that checks changes after every edit. A Content Drafter that writes LinkedIn posts in the background.

22 core agents + 14 subagents = 36 total. But the surface area is actually smaller because routing is automatic now. Which brings me to the real upgrade.


The enforcement layer

This is the part that changed everything, and it’s invisible.

Here’s what used to happen: I’d start a technical task — debugging a workflow, configuring an MCP server — and forget to load my CTO agent first. The output would be generic. I’d waste 10 minutes before realizing the agent wasn’t loaded, then restart.

Now a hook fires before every interaction. It checks if the task is technical. If it is, it silently loads the CTO protocol. No choice. No forgetting. I didn’t even know it fired until I looked at the logs.

# What happens before I type anything:
UserPromptSubmit → agent-call-enforcer.py → detects technical task → loads CTO protocol
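For the curious, here's a minimal sketch of what a hook like that could look like. Claude Code passes hook input as JSON on stdin, and for UserPromptSubmit, text printed to stdout gets injected into the model's context. The keyword list and the protocol path are my illustration, not the real agent-call-enforcer.py:

```python
#!/usr/bin/env python3
"""Illustrative UserPromptSubmit hook sketch (not the real enforcer).
Claude Code sends hook input as JSON on stdin; anything this script
prints to stdout is added to the model's context for this turn."""
import json
import sys

TECHNICAL_KEYWORDS = ("debug", "mcp", "workflow", "deploy", "docker", "api")

def is_technical(prompt: str) -> bool:
    """Crude keyword check standing in for real task detection."""
    lowered = prompt.lower()
    return any(word in lowered for word in TECHNICAL_KEYWORDS)

def main() -> None:
    payload = json.load(sys.stdin)  # hook input arrives as JSON
    if is_technical(payload.get("prompt", "")):
        # Injecting the protocol text is what "silently loads the CTO" means.
        print("Load agents/cto.md before answering. Follow its protocol.")

if __name__ == "__main__":
    main()
```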

A second hook fires when a session ends. It writes a summary to the knowledge graph — automatically. No manual context saving. The system remembers whether I tell it to or not.
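The auto-save side can be just as small. This is an assumed shape, not my actual hook — the payload field and the destination file are placeholders:

```python
#!/usr/bin/env python3
"""Sketch of a session-end auto-save hook (assumed shape, not the real one).
Appends a timestamped entry that a /start skill could read back later."""
import datetime
import json
import sys
from pathlib import Path

MEMORY_LOG = Path("brain/memory-log.md")  # hypothetical destination

def summary_line(session_id: str, when: datetime.datetime) -> str:
    """One append-only line per session; the graph write would replace this."""
    return f"- {when:%Y-%m-%d %H:%M} session {session_id}: summary pending review"

def main() -> None:
    payload = json.load(sys.stdin)  # hook input arrives as JSON
    line = summary_line(payload.get("session_id", "unknown"),
                        datetime.datetime.now())
    with MEMORY_LOG.open("a") as f:
        f.write(line + "\n")

if __name__ == "__main__":
    main()
```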

Skills replaced static commands. The old /start was a markdown file that ran the same way every time. The new /start is a skill that auto-loads based on context, routes to the right brain module, queries the memory graph for pending items, and adapts to what device I’m on.

The difference: commands are things I invoke. Skills are things the system invokes on its own when they’re relevant.
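Concretely, a skill is a markdown file whose frontmatter tells the model when to invoke it on its own. This is a hedged sketch of what a start-style skill could look like — not my actual /start, and the name and steps are illustrative:

```markdown
---
name: start-session
description: Load the brain index, ask which area we are working on,
  then load only that area's context file and check memory for pending items.
---

1. Read brain/context.md (the 60-line index).
2. Ask which area of work this session is about.
3. Load only the matching file from brain/contexts/.
4. Query the memory graph for open items in that area.
```

The description field is what makes it a skill rather than a command: the model reads it and decides to fire the skill when the context matches.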


The integrations that actually matter

The original video had Netlify. Maybe one or two other MCP connections.

Now there are 13:

| Integration | What it does | Impact |
|---|---|---|
| Google Workspace | 72 tools: Drive, Sheets, Gmail, Docs, Calendar | Game-changer |
| HubSpot | Full CRM: companies, contacts, deals | Game-changer |
| n8n | Create, execute, debug automation workflows | Game-changer |
| Meta Business | Ad campaign monitoring | High |
| Stratega Memory | Knowledge graph across sessions | High |
| Docker | Container management | Medium |
| Playwright | Browser automation | Medium |
| Brave Search | Web search inside sessions | Medium |
| Context7 | Library documentation | Medium |
| Netlify | Deploy sites | Medium |
| Notion | Meeting notes, databases | Low |
| Slack | Notifications | Low |

The top three changed my daily workflow. Google Workspace alone — searching email, reading spreadsheets, updating docs, creating calendar events — all from the terminal. HubSpot is what made cleaning 450 companies down to 74 in one afternoon possible. n8n means I build and debug automation workflows without opening a browser.

The one I removed: GitHub MCP. Replaced it entirely with the gh CLI. Lighter, more reliable, no MCP overhead. Not everything needs to be an integration. Sometimes a command line tool is better.


What I actually built with it

This isn’t theoretical. Here’s what ran through the upgraded system in one month:

Sales — After every call, the transcript goes through my head of sales agent. He drafts the follow-up email in my voice, suggests next steps based on MEDDIC scoring, and updates the CRM. What used to take 30 minutes now takes 5.

CRM cleanup — 450 companies in HubSpot, ~35% data quality. Claude Code connected via MCP cleaned it down to 74 in one afternoon. Almost deleted two companies with active deals in the process. Wrote a full post about that one.

Competitive intelligence — Before a prospect call, the CI agent maps their market, identifies competitors, surfaces recent news. Three deep-dives running simultaneously via subagents.

This website — Four homepage iterations, one rejected full redesign, three-agent war council on the blog page. Full build log here.

Content — Every LinkedIn post goes through this system. The content engine mines my daily work for shareable moments. The copy critic tears apart my drafts and rewrites them in my voice.


What broke

Credentials everywhere. At one point I had API keys in .mcp.json (git-tracked), in settings.local.json, in brain files. One security audit later: everything consolidated into a single MASTER.env file, credentials removed from all tracked files, settings rules cut from 655 to 179.

If you’re starting: centralize credentials on day 1. Don’t scatter them.

18 days of uncommitted work. I was so focused on building that I forgot to push. 319 new files, 82 modified. One crash away from losing everything. Now a launchd daemon auto-commits and pushes at 23:00 every day. The system protects me from myself.
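If you want the same safety net, a launchd job of roughly this shape does it. The label and repo path are placeholders — adapt them to your own setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.workspace-autocommit</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>-c</string>
        <!-- repo path is a placeholder; commit may be a no-op, push still runs -->
        <string>cd "$HOME/workspace" &amp;&amp; git add -A &amp;&amp; git commit -m "auto-commit $(date +%F)" || true; git push</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>23</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```

Drop it in ~/Library/LaunchAgents and load it with launchctl, and the 23:00 commit happens whether you remember or not.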

Agent bloat. The jump from 16 to 27 made routing confusing. I’d have sessions where two agents gave contradictory advice because their domains overlapped. The consolidation to 22 with clear subagents fixed that. Each agent needs one unambiguous trigger. If you have to stop and think about which one to call, you have too many.

Context maintenance is a job. The brain files need updating. The knowledge graph needs pruning. Stale context produces stale output. I spend 10-15 minutes per week reviewing and cleaning context. Not glamorous, but it’s the difference between an AI that knows what’s happening and one that’s guessing.


The updated setup

The one-click install from the original video still works:

curl -fsSL https://raw.githubusercontent.com/matteo-stratega/claude-workspace-template/main/setup.sh | bash

It now includes: modular brain structure, skills system with auto-loading, hook examples for enforcement and auto-save, agent template with the consolidated pattern, and updated /start and /close commands.

What’s NOT in the repo: my 22 agents (they encode my business, not a template) and MCP configurations (they need your own API keys). The repo is the skeleton. You add the muscle.

Grab it here — and if you build something with it, I want to see it.


Don’t overbuild on day one

Every agent and integration I added came from a specific problem I hit — not from a template I copied.

You don’t need 22 agents and 13 MCP integrations to start. You need CLAUDE.md, brain/context.md, and /start. Use it for a week. Feel the pain points. Then add exactly what you need.

The system isn’t done. It won’t be done. That’s the point — it grows with the work.


Part 1: I Built an AI Brain That Runs My Entire Business

Watch the original setup: My AI Brain: 1-Click Setup

Grab the repo: claude-workspace-template


Frequently Asked Questions

Is this setup stable enough for real work?

I run my entire business through it — client deliverables, sales, content, competitive intelligence. It breaks roughly once a week. The auto-commit daemon and modular brain mean nothing catastrophic happens when something breaks. You fix it, the context is preserved, you move on.

How much does this cost to run?

Claude Max subscription is $100/month. MCP servers are free — they connect to services you already pay for. Local models (Ollama) are free. Total infrastructure cost beyond Claude: roughly zero if you already have the services.

What’s the biggest mistake people make with this kind of setup?

Overbuilding on day one. Start with CLAUDE.md, brain/context.md, and /start. Use it for a week. Feel where it breaks. Then add exactly what you need. Every agent I have exists because something specific was painful without it.