How I Built an AI Media Buyer with Claude Code
My AI agent manages €750/month in Meta Ads for a client. It reads live campaign data, analyzes performance, flags what to kill and what to scale, and generates reports I can send directly.
This is the full setup. No custom software. No $10K platform. Just Claude Code, two API connections, and one markdown file.
What you’ll build
An AI agent that:
- Reads live Meta Ads data (campaigns, ad sets, individual ads)
- Reads Google Analytics data (sessions, conversions, traffic sources)
- Analyzes performance against predefined rules
- Recommends actions (kill, scale, refresh, test)
- Generates client-ready reports
What it does not do: spend money autonomously. Every decision goes through me. The agent does the analysis. I make the call.
The stack
| Layer | Tool | Purpose |
|---|---|---|
| Brain | Claude Code CLI | Reasoning, analysis, decisions |
| Ad data | Meta Ads API | Campaign/adset/ad performance |
| Web data | GA4 via MCP server | Sessions, conversions, traffic |
| Knowledge | Agent .md file | Brand, products, rules, history |
| Credentials | .credentials/ directory | API tokens, service accounts |
That’s it. Five layers. All free except the ad spend itself.
Step 1 — The agent file
This is where most people go wrong. They open Claude and start prompting. The output is generic because the context is generic.
Instead, I built a knowledge base as a markdown file. Claude reads it at the start of every session. It contains:
Brand identity. Who the client is, what they sell, how they position themselves. Not marketing copy — operating knowledge. Price ranges, product tiers, competitive positioning.
Product catalog with tiers. Not every product gets equal treatment. I ranked the full catalog into tiers:
- Hero products (3): proven performers, maximum budget
- Supporting (4): rotation to prevent fatigue
- Seasonal (3): tied to specific moments
- Avoid: no landing page, past underperformers
This tiering changes how the agent allocates attention. When it analyzes performance, it knows which products should be winning and which are expected to underperform.
Audience segments. Every audience the client has — retargeting pools with sizes, lookalike audiences, interest-based segments. The agent needs to know what exists before it can recommend what to test.
Historical performance. Yearly averages, monthly trends, best and worst performing periods. Without this, the agent has no baseline. “CPL of €8” means nothing without knowing what the account typically delivers. The agent needs context to judge whether a number is good or bad.
Operating rules. This is the most important section. More on this in Step 4.
The agent file is roughly 200 lines of structured markdown. It took a day to build initially and gets updated after each monthly review.
The key insight: an AI without brand context makes generic recommendations. An AI with deep context thinks like a strategist.
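To make the structure concrete, here is a skeleton of how such a file can be laid out. The section names mirror the components described above; the exact headings and wording are illustrative, not a copy of my file:

```markdown
# Agent: Meta Ads — [Client]

## Brand identity
Positioning, price ranges, product tiers, competitive landscape.

## Product catalog
### Hero (3) — proven performers, maximum budget
### Supporting (4) — rotation to prevent fatigue
### Seasonal (3) — tied to specific moments
### Avoid — no landing page, past underperformers

## Audience segments
Retargeting pools (with sizes), lookalikes, interest-based segments.

## Historical performance
Yearly averages, monthly trends, baseline CPL, best/worst periods.

## Operating rules
- CPL above 2x monthly average → flag for pause
- (see Step 4 for the full rule set)
```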
Step 2 — Meta Ads API connection
The Meta Ads API gives you access to everything: campaigns, ad sets, ads, creative performance, audience insights, spend data.
What you need:
- A Meta Business Manager account
- A System User with the `ads_read` permission (minimum; add `ads_management` if you want to make changes via the API)
- A long-lived access token (60 days)
The setup:
- Create a System User in Business Manager → Business Settings → Users → System Users
- Generate a token with the required permissions
- Store the token in a `.credentials/` file (never in the agent file or in git)
What I query:
- Campaign-level: total spend, total leads, CPL, status
- Ad set-level: spend allocation, frequency, audience performance
- Ad-level: individual creative performance, CPL per ad, impressions
Claude can call the Meta API directly through bash commands or through scripts. I keep a small utility that formats the API response into readable tables.
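A sketch of what that formatting utility can look like. It assumes the standard Insights API response shape (`data` array, `spend` as a string, leads nested in the `actions` list); the function name and table layout are my illustration, not the exact script:

```python
def format_insights(response: dict) -> str:
    """Turn a Meta Ads Insights API response into a readable table.

    Expects the standard shape:
    {"data": [{"ad_name": ..., "spend": "...",
               "actions": [{"action_type": "lead", "value": "..."}]}]}
    """
    rows = []
    for row in response.get("data", []):
        spend = float(row.get("spend", 0))
        # Leads live inside the "actions" list, keyed by action_type
        leads = sum(
            int(a["value"])
            for a in row.get("actions", [])
            if a["action_type"] == "lead"
        )
        cpl = spend / leads if leads else None
        rows.append((row.get("ad_name", "?"), spend, leads, cpl))

    lines = [f"{'Ad':<25} {'Spend':>8} {'Leads':>6} {'CPL':>7}"]
    for name, spend, leads, cpl in rows:
        cpl_s = f"{cpl:.2f}" if cpl is not None else "n/a"
        lines.append(f"{name:<25} {spend:>8.2f} {leads:>6} {cpl_s:>7}")
    return "\n".join(lines)
```

Claude can run a script like this after the raw API call, so the analysis step starts from a clean table instead of nested JSON.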
Token management: tokens expire every 60 days. I set a calendar reminder. When it expires, the agent tells me it can’t access data — there’s no silent failure, which is good. Renewing takes 5 minutes in Business Manager.
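A calendar reminder works fine; the same check can also be scripted. A sketch, assuming you note the issue date when you generate the token (the constants and function name are mine, not part of any Meta tooling):

```python
from datetime import date, timedelta
from typing import Optional

META_TOKEN_LIFETIME_DAYS = 60  # long-lived tokens last ~60 days
RENEW_BUFFER_DAYS = 7          # renew a week before expiry

def days_until_renewal(issued: date, today: Optional[date] = None) -> int:
    """Days left before the token should be renewed (negative = overdue)."""
    today = today or date.today()
    renew_by = issued + timedelta(days=META_TOKEN_LIFETIME_DAYS - RENEW_BUFFER_DAYS)
    return (renew_by - today).days
```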
Step 3 — GA4 via MCP
Google Analytics 4 connects through an MCP (Model Context Protocol) server, a standard that lets Claude interact with external tools natively.
What you need:
- A Google Cloud service account with GA4 read access
- The GA4 property ID
- MCP server configuration in your Claude Code settings
The setup:
- Create a service account in Google Cloud Console
- Download the JSON credentials
- Grant the service account Viewer access to your GA4 property
- Configure the MCP server in `.claude.json`
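The entry follows Claude Code's standard `mcpServers` schema. In this sketch, the server package name, env variable names, and property ID are placeholders; substitute whichever GA4 MCP server you use:

```json
{
  "mcpServers": {
    "ga4": {
      "command": "npx",
      "args": ["-y", "your-ga4-mcp-server"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": ".credentials/ga4-service-account.json",
        "GA4_PROPERTY_ID": "123456789"
      }
    }
  }
}
```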
What I query:
- Sessions and users (total + by source)
- Conversion events (form submissions, key page views)
- Engagement rate and bounce rate
- Traffic source breakdown
The MCP connection means Claude can pull GA4 data mid-conversation. “What were the conversion numbers last week?” — it queries, gets the answer, continues the analysis. No switching tabs, no exporting CSVs.
Step 4 — Operating rules
This is where the agent stops being a chatbot and starts being useful.
Rules are not prompts. They’re decision frameworks encoded in the agent file. The agent applies them consistently every time it reviews performance.
| Signal | Action | Threshold |
|---|---|---|
| CPL above target | Flag for pause | 2x the monthly average |
| CPL below target + volume | Recommend scale | Below average + 10+ leads |
| Frequency above threshold | Recommend creative refresh | Above 3.0 |
| CTR below floor | Recommend swap | Below 1% after 7 days |
| New audience | Protect from early kill | Minimum 7 days before judging |
| Budget concentration | Flag imbalance | One ad >40% of ad set budget |
The rules are written in plain language in the agent file. Claude reads them and applies them to the data. No code needed.
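No code is needed at runtime, but the framework itself is mechanical. To show what "decision framework" means in practice, here is the rules table expressed as a function; the thresholds come from the table above, while the field names and dict shapes are my illustration:

```python
def evaluate_ad(ad: dict, account: dict) -> list:
    """Apply the operating rules from the table above to one ad's metrics.

    `ad` keys (illustrative): cpl, leads, frequency, ctr (as a fraction),
    age_days, budget_share. `account` keys: avg_cpl.
    """
    # New audiences are protected: no kill/swap calls before day 7
    if ad["age_days"] < 7:
        return ["protect: too new to judge (min 7 days)"]

    avg_cpl = account["avg_cpl"]
    actions = []
    if ad["cpl"] > 2 * avg_cpl:
        actions.append("flag for pause: CPL above 2x monthly average")
    if ad["cpl"] < avg_cpl and ad["leads"] >= 10:
        actions.append("recommend scale: below-average CPL with volume")
    if ad["frequency"] > 3.0:
        actions.append("recommend creative refresh: frequency above 3.0")
    if ad["ctr"] < 0.01:
        actions.append("recommend swap: CTR below 1% after 7 days")
    if ad["budget_share"] > 0.40:
        actions.append("flag imbalance: ad takes >40% of ad set budget")
    return actions
```

The point of writing the rules in prose rather than code is that Claude applies the same logic while also explaining its reasoning and handling edge cases the table doesn't cover.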
Why this matters: most ad accounts are managed by gut feeling. A rule-based system removes ego from decisions. When the data says kill an ad, the agent says kill it. No “but I like this creative” or “let’s give it one more week.”
The budget concentration rule caught a real problem in month 2. One ad was eating 40% of the retargeting budget at 3x the average CPL. The agent flagged it immediately. I paused it. The budget redistributed to the top performer automatically via Meta's CBO (Campaign Budget Optimization).
Step 5 — Weekly monitoring cycle
Every week, the workflow is the same:
- I open Claude Code and call the agent
- The agent reads the knowledge base
- I ask: “weekly performance check”
- The agent queries Meta Ads API + GA4
- It analyzes against the operating rules
- It outputs: what’s working, what’s not, what to do
The output looks like a structured brief: performance by ad set, top/bottom performers, rule violations, recommended actions.
I review everything. The agent proposes. I decide. Sometimes I agree. Sometimes I override — usually when I have context the agent doesn’t (like knowing a seasonal event is coming).
The whole check takes 10-15 minutes. Without the agent, the same analysis took 2-3 hours of pulling data, building tables, comparing numbers.
Step 6 — Reporting
Monthly, the agent generates a client report. Same data, different format — executive summary, performance tables, recommended actions, next month outlook.
The report template is part of the agent file. Claude follows the structure every time, which means reports are consistent month over month. The client knows where to find what they care about.
I review the report, adjust the language if needed, and send it. The entire reporting process — from data pull to client-ready document — takes about 20 minutes.
What I’d do differently
Start with the rules, not the API. I spent the first week connecting APIs and the second week figuring out what to do with the data. Should have been the other way around. Define your decision framework first. Connect data second.
Keep the agent file shorter. My first version was 400+ lines. Too much context, too much noise. I trimmed it to ~200 lines focused on what changes decisions. Historical data older than 12 months got moved to a separate reference file.
Test the token renewal process early. My first token expired on a Friday evening. Not ideal. Now I renew a week before expiry.
Don’t automate creative decisions. I tried having the agent suggest ad copy and creative directions. The output was generic. Creative ideation is still a human job. The agent is for analysis and rules, not inspiration.
Limitations
Things the agent genuinely cannot do:
- Creative direction. The agent can tell you which creative is winning and even draft ad concepts, but I oversee and approve everything. Mood, visuals, brand tone — that's my call.
- Hallucination risk. AI can confidently misstate a number or invent a trend. Every recommendation gets checked against the raw data before action.
- Budget autonomy. The agent never spends money without my explicit approval. Every recommendation is a recommendation.
- Client relationship. Reports are reviewed before sending. The agent doesn’t understand the client relationship, their internal politics, or what they’re sensitive about.
The agent handles 80% of the work (data analysis, rule application, reporting). The remaining 20% (creative direction, client communication, final decisions) is where the human adds irreplaceable value.
The bottom line
This setup took about a day to build. The agent file was the biggest investment — understanding the brand deeply enough to encode it as operating knowledge.
Since then: 4 months of consistent management. Weekly checks in 15 minutes instead of 3 hours. Reports generated in 20 minutes instead of half a day. A full quarter at €8 per lead — the kind of number that makes the account worth scaling.
No custom software. No platform subscription. Just Claude Code, two APIs, and one well-structured markdown file.
If you’re managing ads for a client and spending hours on analysis that follows the same pattern every week — this is the pattern to automate.
The companion piece with Q1 results is here: AI Agent Managed My Meta Ads for a Quarter — The Results.
For more on building systems like this: Inference, my newsletter on building with AI as a solo operator.