An AI Agent Managed My Meta Ads for a Quarter — The Results
In February I published a post about letting an AI agent manage a client’s Meta Ads. Two months of data, promising results, honest caveats. I ended it with: “The experiment continues.”
It continued. I now have a full quarter of data — January through March 2026. Same account. Same budget. The agent running my method, end to end.
This is the report.
The setup in 30 seconds
In December I built an AI agent using Claude Code that manages Meta Ads for a client in the furniture industry. The agent connects to the Meta Ads API and Google Analytics 4, reads live campaign data, analyzes performance against a set of rules I defined, and generates reports.
I review everything. I make every call. The agent does the homework.
The full technical setup is covered in the companion article: How I Built an AI Media Buyer with Claude Code.
The numbers
| Month | Leads | Cost per lead | Spend |
|---|---|---|---|
| January | 101 | €7.65 | €773 |
| February | 87 | €8.03 | €699 |
| March | 88 | €8.66 | €762 |
| Q1 Total | 276 | €8.09 | €2,233 |
276 leads at just over €8 per lead. For this industry, that’s an excellent number — the kind of cost per lead that makes the account worth scaling.
The cost per lead stayed in the €7-9 range for the entire quarter. No spikes. No crashes. Just a floor.
Before the agent
The quarter before (October-December 2025), I managed the same account manually.
| | Q4 2025 (me) | Q1 2026 (agent) |
|---|---|---|
| Leads | 234 | 276 |
| Cost per lead | €9.80 | €8.09 |
| Budget | ~€2,290 | ~€2,230 |
Similar results. Same budget. So what actually changed?
Not the strategy. The agent runs my method — same rules, same thresholds, same logic I built from months of analyzing this account.
What changed is the monitoring. Instead of opening tabs, pulling reports, and piecing things together, I now chat with an agent that knows every inch of the account. It runs my method every Monday without fail: pulls fresh data, applies the rules, flags what needs attention. Same process, zero friction.
That cadence is why the floor holds at €7-9 instead of swinging wider.
What the agent actually did this quarter
This isn’t a black box. Here’s what happened, concretely.
Zombie cleanup
The agent found 3 ads that were eating budget with near-zero conversions. These are the ads you don’t notice in the dashboard because they’re not spending a lot individually — but together they were draining budget that could go to winners.
The agent flagged them based on the rules: CPL above threshold after sufficient data. I paused them. Budget redistributed to the top performer automatically via Meta’s campaign budget optimization.
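As a sketch, the zombie rule reduces to a few lines: only judge an ad once it has spent enough to be meaningful, then flag it if its cost per lead is above the kill threshold. The threshold values and field names here are assumptions for illustration; the real numbers live in the playbook.

```python
MIN_SPEND = 30.0       # euros of spend before an ad is judged (assumed value)
CPL_THRESHOLD = 12.0   # kill threshold in euros (assumed value)

def flag_zombies(ads):
    """Return ads with enough data whose cost per lead exceeds the threshold."""
    zombies = []
    for ad in ads:
        if ad["spend"] < MIN_SPEND:
            continue  # not enough data yet - skip
        cpl = ad["spend"] / ad["leads"] if ad["leads"] else float("inf")
        if cpl > CPL_THRESHOLD:
            zombies.append({**ad, "cpl": round(cpl, 2)})
    return zombies

ads = [
    {"name": "Sofa UGC v1", "spend": 45.0, "leads": 1},   # CPL 45.00 -> zombie
    {"name": "Sofa UGC v2", "spend": 60.0, "leads": 9},   # CPL ~6.67 -> healthy
    {"name": "New test",    "spend": 10.0, "leads": 0},   # too little data
]
print([a["name"] for a in flag_zombies(ads)])  # → ['Sofa UGC v1']
```

The "sufficient data" gate matters as much as the threshold itself: without it, every fresh ad with an early unlucky streak would get flagged.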
Creative catch
One creative was outperforming the rest by 2x on cost per lead. The agent caught it from the raw numbers before I would have noticed it scrolling through the dashboard. That’s the advantage of reading every row of data every week — patterns surface faster.
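One way a 2x outlier surfaces from raw numbers is by comparing each creative’s cost per lead against the median of the set. This is a hypothetical sketch of that comparison, not the agent’s actual code.

```python
import statistics

def find_outperformers(creatives, ratio=2.0):
    """Return creatives whose CPL beats the set median by at least `ratio`x."""
    cpls = {c["name"]: c["spend"] / c["leads"] for c in creatives if c["leads"]}
    median_cpl = statistics.median(cpls.values())
    return [name for name, cpl in cpls.items() if cpl * ratio <= median_cpl]

creatives = [
    {"name": "carousel-a", "spend": 200.0, "leads": 20},  # CPL 10.00
    {"name": "video-b",    "spend": 180.0, "leads": 18},  # CPL 10.00
    {"name": "ugc-c",      "spend": 150.0, "leads": 33},  # CPL ~4.55
]
print(find_outperformers(creatives))  # → ['ugc-c']
```

Comparing against the median rather than the mean keeps one expensive zombie from masking a genuine winner.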
Monitoring cadence
Every Monday: a full performance breakdown. Which ads are working, which aren’t, what the frequency looks like, whether any audience is fatiguing. Five minutes to generate. Every week, no exceptions.
The rules that drive it
The agent doesn’t improvise. It applies a playbook I wrote based on how I’d manage this account:
| Signal | Action |
|---|---|
| CPL above threshold | Kill the ad |
| CPL below target + volume | Scale the budget |
| Frequency above 3 | Refresh creative |
| CTR below 1% | Swap the creative |
| New audience | Minimum 7 days before judging |
These rules haven’t changed since Episode 1 in February. They work because they’re based on how this specific account behaves, not on generic best practices.
A good system doesn’t need constant reinvention. It needs consistent execution.
What it still can’t do
I’m not going to oversell this.
Creative direction. The agent can tell you which ad format is winning, and it can even draft ads — but I oversee and approve. Mood, visual direction, brand tone — that’s my call.
Judgment calls. The agent flags signals. I make the decision. “Scale this” is a recommendation based on rules. Whether to actually scale depends on context the agent doesn’t have — upcoming promotions, seasonal shifts, client priorities. And don’t forget: AI can hallucinate.
Client communication. Reports are generated by the agent, reviewed and sent by me. The client talks to a human who understands their business, obviously.
AI handles the analysis. Humans handle the judgment. That split hasn’t changed in five months.
The honest take
In February I said “2 months is not proof.” I was right to be cautious.
Now I have a full quarter. 276 leads. The cost per lead held steady for three months straight. The floor is real.
Is it bulletproof? No. One quarter is better than two months, but it’s still one quarter. Seasonality plays a role. The furniture industry has strong Q1 periods. I can’t fully isolate the agent’s contribution from market conditions.
What I know for sure: my method runs every week without fail. The monitoring happens. The reports are consistent. The rules are applied, not forgotten.
That alone is worth it.
What’s next
The client is evaluating a budget increase for Q2. If the floor holds at higher spend, the agent becomes significantly more valuable — more budget means more decisions to make, and consistent rules scale better.
I’m also watching the frequency metric. Retargeting frequency hit 2.37 this quarter (threshold is 2.5). New creatives are being prepared. If frequency crosses the threshold, the agent will flag it and the creative refresh cycle kicks in.
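The frequency watch itself is a trivial check. A minimal sketch using the numbers from this quarter (2.37 current, 2.5 threshold):

```python
FREQ_THRESHOLD = 2.5  # retargeting refresh threshold from the playbook

def frequency_alert(frequency, threshold=FREQ_THRESHOLD):
    """Report whether retargeting frequency has crossed the refresh threshold."""
    if frequency >= threshold:
        return f"ALERT: frequency {frequency} crossed {threshold} - start creative refresh"
    headroom = round(threshold - frequency, 2)
    return f"OK: frequency {frequency}, {headroom} of headroom before refresh"

print(frequency_alert(2.37))  # → OK: frequency 2.37, 0.13 of headroom before refresh
```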
Episode 3 will cover Q2. The experiment continues.
The companion article covering the full technical setup — agent file, API connections, operating rules, weekly workflow — is here: How I Built an AI Media Buyer with Claude Code.
For more on building systems like this: Inference, my newsletter on building with AI as a solo operator.