3 May 2026 · 8 min read · Meta Ads MCP · MCP

Running Meta ads from Claude: a one-week test of the Meta Ads MCP

Meta shipped an MCP server for its ads platform. We ran a week of campaigns from a Claude conversation to see what actually changes. Reporting collapses to a sentence, edits collapse to a paragraph, and the job of a media buyer starts looking very different.

Ajay Dhillon
Founder

Meta shipped a Model Context Protocol server for its ads platform earlier this year. For the first time, a language model like Claude or ChatGPT can read account performance, draft creative, push edits, and reallocate budget across Meta campaigns without anyone touching the Ads Manager UI. We ran a week of campaigns from a Claude conversation to see what actually changes when the surface a media buyer works against is a chat window instead of a dashboard.

Some of it was uncomfortably good. Some of it broke in instructive ways. This piece is the report.

What the Meta Ads MCP actually is

For readers new to the term, the Model Context Protocol is the open standard that lets language models read and write to external systems on a user's behalf. Meta's MCP exposes the Meta Marketing API surface, scoped per user, behind OAuth. Connected to a client like Claude or ChatGPT, it gives the model authenticated access to:

  • Campaign, ad set, and ad performance data
  • Audience and targeting definitions
  • Creative assets and ad variants
  • Budget, bid, schedule, and placement settings
  • Reporting, attribution, and conversion windows

What the MCP does not do is decide for you. The model proposes, you confirm, the write happens. Every change passes through an explicit consent step in the connected client. The model never has standing authority to mutate your ad account.
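To make the propose-confirm-execute flow concrete, here is a minimal sketch of what a single write looks like on the wire. MCP uses JSON-RPC 2.0, and tools are invoked with a `tools/call` request; the tool name and arguments below are hypothetical stand-ins, since Meta's actual tool names are whatever its server advertises via `tools/list`.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 tools/call request, per the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Example: the model proposing a budget change. The connected client
# surfaces this to the user for confirmation BEFORE it is sent.
request = build_tool_call(
    request_id=1,
    tool="update_adset_budget",  # hypothetical tool name
    arguments={"adset_id": "238450000001", "daily_budget_cents": 4500},
)
print(json.dumps(request, indent=2))
```

The consent step lives in the client, not the server: the request above is rendered for the user, and only an explicit yes releases it.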

The starting point: what a normal week of Meta ads looks like

Before the MCP, a typical week of in-house Meta ads work for a small team or solo founder looked roughly like this:

  • 15 minutes pulling last week's performance into a deck or sheet
  • 20 minutes scanning audience and placement breakdowns for waste
  • 30 minutes refreshing creative on campaigns whose CTR had decayed
  • 15 minutes reallocating budget between winners and challengers
  • 30 minutes drafting a learnings doc that nobody actually reads

About two hours of clicks, screenshots, and spreadsheet work. Most of it is pattern recognition the operator could do in their sleep. The strategic moves (audience theory, creative direction, account structure decisions) were maybe twenty percent of the time. The other eighty percent was operating the tool.

Day 1 to 2: getting Claude oriented

The setup is fast. Authenticate, grant scopes, point the MCP at the right ad account, then talk to it.

The first useful prompt was simple: pull last week's performance across all active campaigns and tell us what changed versus the prior week. Claude returned a written summary in under twenty seconds. Not a screenshot of a dashboard. An actual narrative: which campaigns improved, which decayed, what the cost per result trend looked like, where the spend was leaking.
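The shape of that read is simple to illustrate. Here is a toy version of the week-over-week delta narrative, with made-up metric names and figures; the real numbers come from the Marketing API insights the MCP exposes.

```python
def summarize_week(prior: dict, current: dict) -> list[str]:
    """Return one sentence per campaign describing the WoW change."""
    lines = []
    for name, now in current.items():
        before = prior.get(name)
        if before is None:
            lines.append(f"{name}: new this week, ${now['spend']:.0f} spend.")
            continue
        # Percentage change in cost per result; negative means improvement.
        cpr_delta = (now["cpr"] - before["cpr"]) / before["cpr"] * 100
        trend = "improved" if cpr_delta < 0 else "decayed"
        lines.append(
            f"{name}: cost per result {trend} {abs(cpr_delta):.0f}% "
            f"(${before['cpr']:.2f} -> ${now['cpr']:.2f})."
        )
    return lines

# Illustrative data only.
prior = {"Prospecting US": {"spend": 900, "cpr": 12.40}}
current = {"Prospecting US": {"spend": 1100, "cpr": 10.85}}
for line in summarize_week(prior, current):
    print(line)
```

The model does this reasoning over live account data instead of a hand-built dict, but the output is the same kind of narrative, not a dashboard.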

That is the moment a media buyer feels something move. The first job that disappeared was not strategy. It was reporting.

Within two days of using the MCP we stopped opening the Ads Manager for read tasks entirely. Every question we used to answer by clicking and filtering was now a sentence in a conversation.

Day 3 to 4: when Claude starts editing

The interesting part. By midweek we were asking Claude to draft new creative variants, score them against the existing winning ad in the same ad set, and push the winners live.

A ninety-second exchange replaced a workflow that would normally take twenty-five minutes. Pull the winning creative, look at why it won, write a new variant in the same tone, generate the asset, upload it, configure the targeting, set the budget split, ship it. Claude did all of it inside the conversation. The MCP boundary handled the actual write. The Ads Manager UI was not opened once.

The discipline this requires is operational, not technical. Every write needs explicit confirmation. We treated the conversation like a pull request. Claude proposes a change, narrates the expected impact, waits for a yes, executes, then reports back. Skipping the narration step and just letting the model push changes is a bad idea. The friction of the consent step is the feature.
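The pull-request discipline can be sketched as code: every proposed write carries a narration, and nothing executes without an explicit yes. The executor below is a stand-in for the MCP client's consent step, not Meta's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedWrite:
    narration: str              # what will change and the expected impact
    execute: Callable[[], str]  # the actual mutation, deferred until consent

def consent_gate(proposal: ProposedWrite, confirm: Callable[[str], bool]) -> str:
    """Show the narration, wait for a yes, only then execute."""
    if not confirm(proposal.narration):
        return "skipped: no consent"
    return f"executed: {proposal.execute()}"

# A stubbed write standing in for a real MCP tool call.
proposal = ProposedWrite(
    narration="Shift $30/day from Ad Set B to Ad Set A; expect CPR to hold.",
    execute=lambda: "budget moved",
)
print(consent_gate(proposal, confirm=lambda msg: True))   # user said yes
print(consent_gate(proposal, confirm=lambda msg: False))  # user said no
```

Deferring the mutation behind a callable is the point: the narration exists before the write does, so skipping the narration step is structurally impossible.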

Day 5 to 7: where it broke

The honest part of any walkthrough. Three things broke or felt fragile.

Rate limits on the Marketing API. The model fetched broad reporting in a chatty way and hit rate limits faster than a human operator would. The fix was prompting it to batch reads and cache results inside the conversation. Claude accepted the instruction without protest.

Multi-account scoping was clunky. When a single conversation needed to touch two ad accounts under different Business Managers, the consent flow restarted from scratch each time. This will get better as the MCP matures. Today it is a real friction for agencies or operators running multiple accounts in parallel.

Attribution edge cases confused the model. When a campaign used a custom conversion event, the model occasionally referred to the wrong metric name in its summaries. Catchable in the conversation, but worth knowing. If your account has heavy customisation, expect to coach the model on your taxonomy.

In none of these cases did the model push a bad change live. The consent boundary held. The fragility was in the reads and the reasoning, not the writes.

What actually changed about the job

Not speed. The work itself changed shape. Three things compressed and one thing expanded.

What compressed:

  • Reporting collapsed from thirty minutes of screenshots to a fifteen-second conversation.
  • Creative iteration collapsed from a multi-tool workflow to a paragraph in chat.
  • Budget shuffling collapsed from spreadsheet juggling to a single confirmed instruction.

What expanded:

  • Judgement. Which audience theory to test next. When to kill a creative that is still performing on paper but feels stale. Whether to defend a winning campaign or push into a new geography. The strategy work got more time because the mechanical work got less.

A senior media buyer becomes more valuable in this setup. A junior who only knew how to operate the Ads Manager is in trouble.

Why Meta shipped first

This is the part that matters for every SaaS company watching.

Ads is a job done every day inside someone else's tool. The moment that tool becomes Claude or ChatGPT instead of Ads Manager, the platform that is not reachable from the new tool is the platform that loses the workflow. Meta saw this clearly. They shipped the MCP before any direct competitor.

The agencies and operators who have already moved their reporting and creative ops to Claude are not going back. The behaviour, once formed, is sticky in a way that beats UI advantage. The cost of switching back is higher than the cost of switching forward.

Every SaaS category is two strategy meetings away from this realisation. Your customers do their work inside an LLM now. Either they do it through your product or they do it around it.

What this means if you run a SaaS

Three practical things to do this quarter, in order:

  1. Map the surface. Inventory the operations on your platform that an agent should be able to run on a customer's behalf. Reads, writes, things that need a human in the loop. The output is a tool inventory, not a wishlist.
  2. Ship the MCP. Production-grade: OAuth, per-customer scoping, redaction, and an audit trail your enterprise buyers will sign off on. Six to ten weeks of work if your API is in reasonable shape.
  3. Watch what gets used. The operations agents actually call are the operations to harden next. The ones nobody calls are the ones you can leave for later. The MCP gives you usage signal you have never had before.
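What "map the surface" produces can be sketched in miniature: each operation tagged as a read or a write, and whether a human must confirm it. The operation names below are illustrative, not a spec for any real platform.

```python
# A tool inventory, not a wishlist: every write is consent-gated.
INVENTORY = [
    {"op": "get_campaign_insights", "kind": "read",  "human_in_loop": False},
    {"op": "list_ad_creatives",     "kind": "read",  "human_in_loop": False},
    {"op": "update_adset_budget",   "kind": "write", "human_in_loop": True},
    {"op": "publish_ad_variant",    "kind": "write", "human_in_loop": True},
    {"op": "pause_campaign",        "kind": "write", "human_in_loop": True},
]

writes = [t["op"] for t in INVENTORY if t["kind"] == "write"]
# Invariant worth enforcing before any MCP code exists: no unattended writes.
assert all(t["human_in_loop"] for t in INVENTORY if t["kind"] == "write")
print(f"{len(INVENTORY)} operations, {len(writes)} writes, all consent-gated")
```

The inventory is useful before a line of server code exists: it is the artifact the security review, the scoping design, and the conformance tests all hang off.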

The MCP is not a moonshot. It is a defensive move with a measurable payback window. We help SaaS teams ship one as part of our agent-native SaaS engagement.

Frequently asked

What can the Meta Ads MCP do today? Read account performance, audience definitions, creative assets, and attribution data. Draft and edit creative variants. Push live changes to budgets, bids, schedules, and placements. All writes pass through an explicit user consent step in the connected client like Claude or ChatGPT.

Is it safe to let an LLM make changes to a live Meta ad account? Yes, with the consent boundary the MCP enforces. The model proposes, the user confirms, the write happens. The risk is comparable to a media buyer making a typo, and the audit log makes mistakes trivially reversible. The boundary is the feature.

Do I still need a media buyer if I am running ads through Claude? You need a senior one. Reporting, creative iteration, and budget mechanics compress dramatically. Strategy, audience theory, account structure, and creative judgement become more important. The role shifts upmarket. It does not disappear.

What does this mean for performance marketing agencies? Agencies that adapt fast win. The agencies still selling reporting decks and routine creative refreshes as their main service have a problem. The agencies selling strategy, brand, and creative direction become more valuable because the operational layer underneath them is now twenty minutes of work instead of two hours.

Will other ad platforms ship MCPs? Yes, and soon. Google, TikTok, LinkedIn, and Amazon all have the same incentive. Expect every major ad platform to have a customer facing MCP within the next twelve months. The platforms that ship later will pay the same switching cost in reverse.

How long does it take to build an MCP for a SaaS product? Six to ten weeks for a customer-ready MCP if your API is in reasonable shape. The work is mostly in scoping, auth, per-customer permissioning, redaction at the boundary, and conformance testing against Claude, ChatGPT, and Gemini.


If you run a SaaS that is not Meta, the question is not whether your customers will work this way. They are. The question is whether they work this way through your product, or around it. If you want a production MCP for your product, that is the engagement we run. Talk to us.


Let’s build your system next.

Thirty minutes with someone who’d be doing the work. No slide deck, no intake form. We’ll tell you what’s feasible, where you’ll hit friction, and what we’d pick up first.

Response: < 24 hours
First read: no NDA needed
Bangalore / Remote · UTC ±12