450 to 73: How I Cleaned My Entire CRM in One Afternoon
450 companies in HubSpot.
~35% data quality. Duplicates everywhere. No owner set on half of them. The last time I properly cleaned it was… honestly I don’t remember. Over a year ago, minimum.
I knew it was a mess. I just kept ignoring it because cleaning a CRM manually is the kind of task that makes you want to fake your own death.
Then I spent one afternoon talking to Claude Code and got it down to 73.
The CRM graveyard problem
Every founder I know has this. You start adding companies during a sales push, a client project, a conference. You import a CSV from somewhere. Your pipeline grows. Then you get busy actually doing the work and the CRM becomes a place where data goes to die.
The numbers look fine on the surface. “I have 450 companies.” Cool. But 30% are duplicates. Another 20% are dead leads from 2023 you never officially closed. Several have no contact attached. Some have no deal, no note, no anything — just a company name floating in the void.
When your CRM is that dirty, you stop trusting it. And when you stop trusting it, you stop using it properly. It becomes a vanity metric instead of a tool.
I was there.
The approach: Claude Code + HubSpot MCP
I want to be clear about what this is and what it isn’t. This isn’t a fancy AI product. No dashboard. No SaaS. No magic button.
Claude Code runs in the terminal. You connect it to HubSpot via MCP — Model Context Protocol, an open standard for plugging tools into the AI so it can actually read and write your data. HubSpot has a public MCP integration. You set it up once, and then you can just… talk to your CRM.
That’s it. A CLI and a conversation.
What the AI actually did, step by step
I gave it a simple brief: clean the CRM. Here’s what it ran through:
Pull all companies. First it fetched everything — all 450 records — and gave me a summary of the state. Missing owners, missing contacts, deal associations, last activity dates. I could see the mess laid out clearly for the first time.
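If you want a feel for what that first summary pass boils down to, here's a minimal sketch in Python. The field names (owner, contacts, deals, last_activity) are illustrative, not HubSpot's actual property names:

```python
from datetime import date

def summarize(companies):
    """Count the gaps that make a CRM untrustworthy.

    Assumes each company is a dict with illustrative keys:
    owner, contacts, deals, last_activity (a date or None).
    """
    report = {"total": len(companies), "no_owner": 0, "no_contact": 0,
              "no_deal": 0, "stale": 0}
    for c in companies:
        if not c.get("owner"):
            report["no_owner"] += 1
        if not c.get("contacts"):
            report["no_contact"] += 1
        if not c.get("deals"):
            report["no_deal"] += 1
        last = c.get("last_activity")
        # No activity date at all, or nothing in over a year: stale.
        if last is None or (date.today() - last).days > 365:
            report["stale"] += 1
    return report
```

Seeing those counts in one place is the "mess laid out clearly" moment — you can't argue with a number that says half your records have no owner.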
Check for duplicates. It cross-referenced company names, domains, and contact overlap. Found a stack of duplicates — including one PickEat entry that had been split across two records somehow. Merged it. One pass, done.
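The core of a duplicate check like this is picking a good grouping key — domain when you have it, a normalized name when you don't. A sketch, with assumed field names (the real pass also compared contact overlap, which I'm leaving out):

```python
def dedupe_key(company):
    """Prefer the domain; fall back to a normalized company name."""
    domain = (company.get("domain") or "").lower().strip()
    if domain:
        return ("domain", domain)
    name = company["name"].lower().strip()
    # Strip common legal suffixes so "PickEat" and "PickEat Srl" collide.
    for suffix in (" srl", " inc", " ltd", " gmbh"):
        name = name.removesuffix(suffix)
    return ("name", name)

def find_duplicates(companies):
    """Group companies by key; any group bigger than one is a merge candidate."""
    groups = {}
    for c in companies:
        groups.setdefault(dedupe_key(c), []).append(c)
    return [g for g in groups.values() if len(g) > 1]
```

Domain-first matching is the design choice that matters: names drift ("PickEat" vs "PickEat Srl"), but two records pointing at the same website are almost always the same company.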
Verify deal associations. This is where it got interesting. Before flagging anything for deletion, it checked each company against open and closed deals. Which brings me to the moment that saved me actual money.
The save. I had a batch of companies queued up to delete. Low activity, no recent notes, looked dead. Claude stopped and flagged two of them: “These have open deals in pipeline. Are you sure you want to delete?”
No. I definitely did not.
I had completely forgotten about those deals. They were sitting there, associated with companies I thought were abandoned. If I’d deleted them manually — which I would have, because I was moving fast — I would have wiped the deal history too. Real pipeline, gone.
That one moment probably paid for every hour I’ve spent learning to use this tool.
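The guard that caught those two deals is simple to state: before anything gets deleted, check for open deals and route anything live to human review. A sketch, assuming each company carries its deals with a stage field (the closed-stage names are illustrative):

```python
def open_deals(company):
    """A deal counts as open unless it reached a closed stage."""
    closed = {"closedwon", "closedlost"}  # assumed stage names
    return [d for d in company.get("deals", []) if d["stage"] not in closed]

def safe_to_delete(candidates):
    """Split deletion candidates into (deletable, flagged_for_review)."""
    deletable, flagged = [], []
    for c in candidates:
        (flagged if open_deals(c) else deletable).append(c)
    return deletable, flagged
```

The point isn't the code — it's that the check runs every single time, which is exactly what a tired human skipping steps at 5pm does not do.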
Enrich missing data. For companies with gaps — no industry, no size, no owner — it flagged them and set defaults where it could infer from context. Owner got set on everything. No more orphaned records.
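The enrichment logic is the least glamorous part, but one detail matters: record which fields were inferred rather than known, so you can review them later instead of trusting defaults blindly. A sketch (the default owner value is a placeholder):

```python
def enrich(company, default_owner="owner@example.com"):
    """Fill gaps with defaults and track which fields were inferred."""
    inferred = []
    for field, default in (("owner", default_owner), ("industry", "unknown")):
        if not company.get(field):
            company[field] = default
            inferred.append(field)
    company["inferred_fields"] = inferred
    return company
```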
Delete the junk. After the deal check, 83 more companies came out in a second pass. Old cold-outreach targets that never went anywhere. Conference leads that were never real prospects. Imported records I’d forgotten existed. Gone.
The result
73 companies. Data quality at 90%+. One afternoon.
Every record has an owner. Every company with a deal is clearly associated. The duplicates are merged. The dead weight is gone.
For context: after the CRM cleanup, I ran a separate pass on contacts. Enriched 804 of them through Dropcontact, reviewed 300 manually, and pulled out 28 Italian targets worth reaching out to right now. That list is actually usable because the underlying data is clean.
Before this, I wouldn’t have trusted a list pulled from that CRM. Now I do.
What actually changed
The obvious thing: the CRM is clean. But the less obvious thing is that I trust it again.
I was avoiding opening HubSpot because I knew what was in there. Now I check it. I update it. It’s a real tool again, not a data graveyard I pretend doesn’t exist.
Also — and I’ll be honest here, because I didn’t fully expect this — having the AI walk through the logic out loud made me realize how many decisions I was making on gut feel instead of data. “This company looks dead.” Yeah, but does it have an open deal? I didn’t know. Now the system checks before I act.
The honest take
This is grunt work. It’s not sexy. Nobody writes Medium posts about cleaning their CRM.
But if you haven’t cleaned yours in more than 6 months, your pipeline numbers are fiction. You’re making decisions based on data you know is wrong and pretending it’s fine. The 450 companies I had weren’t a pipeline. They were noise with a HubSpot subscription.
The tool here isn’t the point. Claude Code, some other AI, doesn’t matter. The point is that this class of work — repetitive, logic-heavy, data-intensive — is exactly what breaks down when it’s purely manual. You get tired. You skip steps. You almost delete two companies with active deals.
AI as ops teammate. Not AI as chatbot.
That framing shift has changed how I think about where to use these tools.
If you’re building with Claude Code or MCP integrations and have something to share — or if your CRM is in the same state mine was — I’m curious what you’re seeing.