How to Audit Your Conversion Tracking with Claude or ChatGPT
Adspirer Team
Yes — Claude and ChatGPT can audit your conversion tracking setup via Adspirer and surface misconfigurations, double-firing pixels, attribution window mismatches, and broken conversion events. The audit pulls live data directly from Google Ads and Meta APIs, compares what’s configured against what’s actually recording, and flags specific discrepancies with conversion action names and event counts attached. It cannot access your GTM container, site code, or server-side tag setup directly — but it catches the majority of real-world tracking problems without ever touching your website.
Something is wrong with your reporting and you can’t figure out what.
Your ROAS went from 4.1 to 6.8 in two weeks but your revenue didn’t move. Or it’s the opposite: ROAS tanked but sales seem fine, and your ad platform is telling you conversions dropped while your CRM says they’re up. Maybe your Meta pixel events are spiking and you can’t explain why. Maybe you launched a new campaign last month, it shows zero conversions, and you’re not sure if it’s the campaign or the tracking.
These are the real r/PPC moments — the ones where you’ve checked everything obvious and still can’t find the disconnect.
The problem is almost always in the tracking setup, and diagnosing it usually means bouncing between Google Ads, Meta Events Manager, GA4, and your tag manager, trying to triangulate across platforms that use different attribution models and measure different things. A process that should take 20 minutes ends up taking two hours — if you find the answer at all.
This guide shows you how to run that diagnostic in a single conversation with Claude or ChatGPT. You’ll connect your ad accounts through Adspirer, run six specific audit prompts, and get back a clear picture of what your tracking setup is actually recording — and what it isn’t.
Adspirer connects Claude and ChatGPT directly to your Google Ads, Meta, LinkedIn, and TikTok accounts — no API keys, no developer setup. Connect your accounts in two minutes and start auditing immediately.
The 5 Most Common Conversion Tracking Problems
These five issues account for the vast majority of “my reporting looks wrong but I don’t know why” situations. The table maps each problem to its symptom, the platform where it lives, and what an AI audit can surface from API data.
Before the table, one distinction worth understanding: most tracking problems fall into one of two categories. Configuration problems mean the tracking is set up wrong — micro-conversions counted as purchases, the wrong attribution window, view-through conversions inflating ROAS. Implementation problems mean the tracking was configured correctly but the tag isn’t firing — a URL changed, code was removed, a trigger broke.
The six audit prompts below surface both categories. The fixes are different: configuration problems are usually fixable within the ad platform (and often from Claude directly), while implementation problems require touching the site or tag manager.
| Problem | Symptom | Platform | What Claude Can Surface |
|---|---|---|---|
| Tag firing on wrong page | Conversion action shows high volume but doesn’t match actual sales | Google Ads | Conversion action name, type, last conversion date, and conversion count — lets you compare against expected volume and spot the broken one |
| Meta Pixel double-firing | Event counts 2–3x higher than expected; CPA looks artificially low | Meta | Event fire counts per day over a date range; events firing at anomalously high rates relative to what a normal funnel would produce |
| Attribution window mismatch | Google Ads and Meta report wildly different conversion numbers for the same period | Both | Attribution model and window settings per conversion action; surfaces the Google 30-day click vs Meta 7-day click discrepancy explicitly |
| Missing UTMs breaking source tracking | GA4 shows most traffic as Direct; ad platform conversions don’t reconcile with GA4 goals | Google Ads / Meta | Confirms whether auto-tagging is enabled (Google) and whether UTM parameters are present in Meta URL tracking templates |
| GCLID not passing to CRM | Google Ads offline conversion imports failing or empty; offline conversions show zero | Google Ads | Whether offline conversion imports are configured, last successful import date, and conversion action status for offline import actions |
The attribution window trap — read this before comparing platforms: Google Ads default attribution is 30-day click, 1-day view. Meta default is 7-day click, 1-day view. On a 14-day report window, Google is counting conversions from clicks up to 30 days old while Meta is only counting clicks from the past 7 days. This single difference explains the majority of cross-platform discrepancies you’ll encounter — and it’s not a misconfiguration, it’s a fundamental architectural difference in how each platform counts. Both numbers can be entirely correct under their own model and still look nothing alike. You cannot reconcile Google Ads and Meta conversion numbers directly. The audit prompts below surface each platform’s attribution settings so you know exactly what you’re comparing.
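To make the trap concrete, here is a minimal sketch of how the same four conversions count differently under a 30-day versus a 7-day click window. The dates are invented for illustration — this is not real API output:

```python
from datetime import date

# Hypothetical click-attributed conversions: (conversion_date, click_date)
conversions = [
    (date(2024, 6, 14), date(2024, 6, 13)),  # 1 day after click
    (date(2024, 6, 10), date(2024, 6, 4)),   # 6 days after click
    (date(2024, 6, 12), date(2024, 6, 1)),   # 11 days after click
    (date(2024, 6, 8),  date(2024, 5, 15)),  # 24 days after click
]

def count_attributed(conversions, click_window_days):
    """Count conversions whose originating click falls inside the window."""
    return sum(
        1 for conv_date, click_date in conversions
        if (conv_date - click_date).days <= click_window_days
    )

google_style = count_attributed(conversions, 30)  # 30-day click (Google default)
meta_style = count_attributed(conversions, 7)     # 7-day click (Meta default)

print(f"30-day click window counts {google_style} conversions")
print(f"7-day click window counts {meta_style} conversions")
```

Same four conversions, two different counts — and both are correct under their own model, which is exactly why the numbers can't be reconciled directly.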
Run the Audit
The process takes about 15 minutes the first time. After that, re-running specific prompts when something looks off takes under five minutes.
Connect Your Ad Accounts to Adspirer
Sign up at adspirer.ai and connect your Google Ads and Meta accounts via OAuth. If you’re also running LinkedIn or TikTok, connect those too — the LinkedIn audit prompt is included below. No API keys or developer access required. The OAuth connection is the same flow as any ad platform integration — you’ll be prompted to grant read and write access to your account.
Add Adspirer to Claude or ChatGPT
Claude: Go to Customize → Connectors, click Add custom connector, and enter https://mcp.adspirer.com/mcp. Claude discovers all available tools automatically. See the full Claude setup guide.
ChatGPT: Open ChatGPT → Explore GPTs, search for Adspirer, and install the plugin. Authenticate when prompted. See the full ChatGPT setup guide.
Verify the Connection
Start a new conversation and run:
Check my connected ad platforms and show me a summary of active campaigns and conversion actions configured in Google Ads.

You should see your account name, campaign count, and a list of conversion actions with their status. If you manage multiple accounts and Claude asks which one to use, specify by name or account ID.
Run the Six Audit Prompts
Use the prompts in the next section. Each one targets a different tracking failure mode. You don’t need to run all six — start with the ones that match your symptoms. If you can’t explain a ROAS spike, start with prompt 3. If your Meta events look inflated, start with prompt 2. If a new campaign shows zero conversions, start with prompt 1.
Interpret the Results
Claude returns the API data interpreted in plain language: conversion actions with zero conversions in 30 days despite active spend are flagged, events firing at anomalous rates are called out, mismatched attribution windows are surfaced. Cross-reference what Claude returns against what you see in the ad platform UI — they should match, and any gap is worth investigating.
Make the Fixes
Most fixes happen outside Claude — in Google Tag Manager, your site code, or the ad platform settings. Claude can tell you what’s wrong and what the correct fix is; it can’t edit your website or GTM container. For changes that live in the ad platform itself — disabling a micro-conversion from the Conversions column, changing an attribution model, adjusting a Meta campaign’s attribution window — Claude can make those changes directly with your confirmation.
The 6 Audit Prompts
Run these in a conversation where Adspirer is connected. Replace account names, platforms, or date ranges as needed for your situation.
Prompt 1: Find Conversion Actions with Zero Conversions
This is the most direct signal of a broken tag. A conversion action that’s configured, enabled, and attached to a campaign with real spend — but shows zero or near-zero conversions — almost always means the tag isn’t firing, is firing on the wrong page, or wasn’t deployed after a site update. Claude returns a list sorted by last conversion date so stale conversion actions are immediately visible.
A common scenario: a site update moved the order confirmation page from /checkout/complete to /order/success. The Google tag trigger still targets the old URL pattern. The conversion action shows status “Active” in Google Ads because the tag was verified once historically — but it hasn’t fired in six weeks.
Another: a developer added conversion tracking to a staging environment for testing. The test conversions registered against the real conversion action. Now the last recorded conversion date looks recent, but the volume is wrong.
Prompt 2: Detect Meta Pixel Double-Firing
Double-firing is common after site migrations, theme updates, or when both Meta Pixel base code and CAPI (Conversions API) are sending the same event without deduplication configured. The giveaway is a Purchase event count that’s 2–4x your actual order volume, or any event firing more frequently than events earlier in the funnel. Claude surfaces event-by-event counts; the pattern of which events are anomalously high points you to the problem.
One note: a Purchase event firing at 2x your order count doesn’t automatically mean double-firing — some businesses have legitimate re-fires (subscription renewals, multi-item orders tracked separately). Context matters, which is why comparing to actual order volume is the confirmation step.
The funnel comparison is often more revealing than absolute counts: if your Purchase event fires more times than your AddToCart event in the same period, that’s definitionally wrong and signals double-firing regardless of your actual order count.
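Both checks can be sketched in a few lines. Everything here is illustrative — the event names are standard Meta events, but the counts, the order total, and the 1.5x alert threshold are assumptions you would tune to your own business:

```python
# Hypothetical event counts pulled for a reporting period, plus real orders
events = {"PageView": 5200, "AddToCart": 310, "InitiateCheckout": 220, "Purchase": 340}
actual_orders = 180  # from your order system, not the ad platform

# Check 1: Purchase events vs real orders — a high ratio suggests double-firing
ratio = events["Purchase"] / actual_orders
if ratio >= 1.5:
    print(f"Purchase events are {ratio:.1f}x actual orders — investigate double-firing")

# Check 2: funnel ordering — a later-funnel event should never outnumber an earlier one
funnel = ["PageView", "AddToCart", "InitiateCheckout", "Purchase"]
violations = [
    (funnel[i], funnel[j])
    for i in range(len(funnel))
    for j in range(i + 1, len(funnel))
    if events[funnel[j]] > events[funnel[i]]
]
print("Funnel ordering violations:", violations)
```

In this made-up data, Purchase outnumbers both AddToCart and InitiateCheckout — the definitional red flag described above, independent of what your order count says.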
Prompt 3: Find Micro-Conversions Inflating Your Conversions Column
This catches a specific configuration problem: conversion actions categorized as page views, button clicks, or phone call starts accidentally included in the main “Conversions” column that Smart Bidding optimizes toward. When micro-conversions with $0 or nominal values mix with your primary purchase conversions, ROAS becomes meaningless. A $5 “Add to Cart” event counted alongside $150 purchases will inflate reported ROAS while actual revenue stays flat — and Smart Bidding will optimize toward the cheap micro-conversions, compounding the problem.
The fix is simple: move micro-conversions from “Conversions” to “All conversions” only. Smart Bidding will ignore them, your ROAS calculation becomes accurate, and you can still track them in the “All conversions” column for informational purposes.
Worth noting: this is one of the few tracking fixes Claude can apply directly. Once it identifies which conversion actions are miscategorized, you can ask it to update the “Include in Conversions” setting on each one without touching the Google Ads UI.
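A quick sketch of the inflation math, with made-up numbers — $10,000 of spend, 200 real purchases, and two micro-conversion actions mixed into the Conversions column:

```python
spend = 10_000.0

# Hypothetical conversion actions; "primary" marks real purchase conversions
conversion_actions = [
    {"name": "Purchase",           "count": 200,  "value": 150.0, "primary": True},
    {"name": "Add to Cart",        "count": 1400, "value": 5.0,   "primary": False},
    {"name": "Newsletter Sign-up", "count": 600,  "value": 0.0,   "primary": False},
]

def roas(actions, spend, primary_only=False):
    """ROAS = total conversion value / spend, optionally purchases only."""
    total_value = sum(
        a["count"] * a["value"]
        for a in actions
        if a["primary"] or not primary_only
    )
    return total_value / spend

blended = roas(conversion_actions, spend)                 # micro-conversions included
clean = roas(conversion_actions, spend, primary_only=True)
print(f"Reported ROAS with micro-conversions: {blended:.2f}")
print(f"True purchase ROAS: {clean:.2f}")
```

Here the dashboard shows 3.70 while the real purchase ROAS is 3.00 — and Smart Bidding is being trained on the inflated number.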
Prompt 4: Check Attribution Windows Across Meta Campaigns
Meta allows setting different attribution windows per campaign, which is a problem when you’re comparing performance across campaigns in the same account. A campaign on 7-day click attribution and one on 1-day click attribution aren’t comparable, even within Meta’s own reporting interface. The one with the longer window will almost always show more attributed conversions — not because it’s performing better, but because it’s counting a longer historical tail of clicks.
This prompt surfaces mismatched attribution windows across your campaigns so you can standardize them — typically to 7-day click, 1-day view for most businesses.
The deeper issue is that Meta’s default reporting in Ads Manager aggregates all campaigns together, so if you have a mix of 1-day and 7-day attribution campaigns, the account-level conversion number is a meaningless blend of two different measurement standards. Standardizing attribution windows is one of the most underrated data hygiene fixes in Meta Ads management.
Prompt 5: Measure View-Through Conversion Inflation
View-through conversions (VTC) record a conversion whenever someone saw your ad, didn’t click it, but later converted through any channel. Display and YouTube campaigns enable VTC by default. When VTCs are included in your Conversions column, your ROAS calculation includes credit for people who may have converted entirely due to organic search, a referral link, or a Meta ad — your display impression just happened to serve to them at some point in a 30-day window.
This is a structural over-attribution issue baked into Google’s defaults. It’s not fraud, but it significantly inflates apparent ROAS for Display and YouTube campaigns. The audit surfaces whether it’s happening and to what degree.
If view-through conversions are a meaningful share of your reported conversions, you have two options: exclude VTCs from the Conversions column entirely (cleanest for reporting), or leave them in but add a VTC-only breakdown column so you can see the split. The right answer depends on whether you believe your Display campaigns genuinely influence conversions — some businesses have real lift from brand awareness campaigns, and excluding VTCs entirely would undervalue them.
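Once the audit returns per-campaign click and view-through counts, the split is easy to eyeball. A small sketch — campaign names, counts, and the 50% review threshold are all invented:

```python
# Hypothetical per-campaign conversions split by click vs view-through
campaigns = [
    {"name": "Search - Brand",      "click_conv": 420, "view_through_conv": 3},
    {"name": "Display - Retarget",  "click_conv": 35,  "view_through_conv": 190},
    {"name": "YouTube - Awareness", "click_conv": 12,  "view_through_conv": 88},
]

for c in campaigns:
    total = c["click_conv"] + c["view_through_conv"]
    vtc_share = c["view_through_conv"] / total
    flag = "  <-- review VTC inflation" if vtc_share > 0.5 else ""
    print(f'{c["name"]}: {vtc_share:.0%} of conversions are view-through{flag}')
```

A search campaign at under 1% view-through is normal; a Display campaign where 84% of credited conversions never involved a click is exactly the over-attribution pattern described above.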
Prompt 6: Verify LinkedIn Insight Tag Conversions
LinkedIn conversion tracking is more frequently broken than Google or Meta because it’s set up less often and tested less rigorously. The Insight Tag is a separate JavaScript snippet from your Meta Pixel and Google tag — it gets missed during site migrations or CMS updates.
This prompt checks whether configured conversion events are actually recording anything, which is the fastest confirmation that the Insight Tag is live and correctly placed. Zero conversions across all events on a campaign that’s been running three weeks is a strong signal the Insight Tag isn’t firing.
LinkedIn also supports URL-based conversions (fire on a specific page URL) and event-based conversions (fire when a specific event is triggered). URL-based ones break exactly like Google’s tag trigger problem — a URL change that doesn’t get reflected in the conversion event configuration.
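URL-rule drift is easy to check once you know the configured pattern and the live confirmation URL. This sketch uses glob-style matching as a stand-in — LinkedIn’s actual rule types (exact, starts-with, contains) differ, and the rules and URLs below are hypothetical:

```python
import fnmatch
from urllib.parse import urlparse

# Hypothetical URL-based conversion rules as configured in Campaign Manager
url_rules = {
    "Demo Request": "/demo/thank-you*",
    "Whitepaper Download": "/resources/whitepaper/confirmed*",
}

# The confirmation URLs your site actually serves today
live_urls = {
    "Demo Request": "https://example.com/demo/thanks",  # path changed in a redesign
    "Whitepaper Download": "https://example.com/resources/whitepaper/confirmed",
}

for event, pattern in url_rules.items():
    path = urlparse(live_urls[event]).path
    if not fnmatch.fnmatch(path, pattern):
        print(f"{event}: rule '{pattern}' no longer matches live path '{path}'")
```

Here the demo-request rule silently stopped matching after a redesign — the same failure mode as Google’s tag trigger problem, just configured inside LinkedIn instead of GTM.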
Google vs Meta: How Conversion Tracking Works Differently
The audit experience differs between platforms because the tracking architectures are genuinely different. Understanding those differences helps you interpret what Claude returns and know when a discrepancy is a problem versus expected behavior.
What Claude Can and Can’t Check
Being explicit about this builds trust and saves time. The API data Claude accesses catches most real-world tracking problems, but a subset of issues require direct site or tag manager access to diagnose. Know the boundary before you start.
| Can Check (via Ad Platform API) | Cannot Check |
|---|---|
| Which conversion actions are configured and their status | Whether tag code is correctly placed on your site |
| Which conversion actions have zero or anomalously low conversions | Whether GTM triggers are firing as expected |
| Attribution model and window per conversion action | Whether custom parameters (revenue, order ID) are being passed correctly |
| Meta pixel event counts and daily rates | Whether server-side CAPI events contain correct customer match data |
| Attribution window settings per Meta campaign | Whether a Pixel fires on all steps of a multi-step checkout |
| Whether LinkedIn Insight Tag conversion events are recording | Actual event payloads or tag debug logs |
| Whether Google auto-tagging is enabled (GCLID) | Whether GCLID is being captured and stored in your CRM fields |
| View-through conversion settings and their contribution | Cross-device conversion paths |
| Conversion value settings (fixed vs dynamic) | Whether a URL trigger pattern matches the actual confirmation page URL |
The practical workflow: use Claude’s audit to identify which conversion action or pixel event is the problem, then use Google Tag Assistant, Meta Events Manager’s test events panel, or GTM preview mode to confirm the root cause at the implementation level.
Think of it as two stages: Claude narrows the problem to a specific conversion action or event, cutting diagnostic time from hours to minutes. Tag-level debugging tools confirm the implementation detail. You rarely need both without the first stage pointing you toward the right one.
It also means you can skip debugging platforms where the tracking looks healthy. If prompt 1 shows all your Google conversion actions are recording normally, you don’t need to open Tag Assistant. Time is better spent investigating the one platform where the audit flagged a problem.
After the Audit: What to Do with What You Find
Running the audit gets you a list of specific problems with conversion action names and numbers attached. Knowing what to do next depends on where the problem lives.
The general rule: if the problem is in the ad platform settings (attribution window, conversion count setting, micro-conversion in Conversions column), Claude can fix it directly. If the problem is in the implementation (tag not firing, URL trigger wrong, Pixel code missing from a page), the fix happens outside Claude — in GTM, in your site code, or in your CAPI server configuration.
If Claude finds zero conversions on a Google Ads conversion action:
- Open Google Tag Assistant and simulate the conversion action (navigate to the confirmation page, submit a test form, etc.)
- Check whether the tag fires — Tag Assistant shows each tag that fires on the page in real time
- If the tag doesn’t fire, the trigger URL is wrong or the tag wasn’t deployed after the last GTM publish. Fix in GTM and republish.
- If the tag fires in Tag Assistant but conversions still show zero in Google Ads after 24 hours, check whether auto-tagging is enabled at the account level. Without auto-tagging, Google Ads can’t match click data to conversions for most campaign types.
- Also verify the conversion action is included in the campaign’s conversion goal settings — a correctly firing tag on a campaign that targets a different conversion goal won’t show conversions.
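If you’re chasing the auto-tagging question, one quick confirmation is to inspect a captured landing-page URL: with auto-tagging enabled, Google appends a gclid query parameter. A minimal sketch — the URLs and the gclid value are made up:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def extract_gclid(landing_url: str) -> Optional[str]:
    """Return the gclid query parameter if present, else None."""
    params = parse_qs(urlparse(landing_url).query)
    values = params.get("gclid")
    return values[0] if values else None

# A landing URL as it arrives with auto-tagging on (gclid value is invented)
tagged = "https://example.com/pricing?utm_source=google&gclid=Cj0KCQiA_example"
untagged = "https://example.com/pricing?utm_source=google"

print(extract_gclid(tagged))    # the value you'd store in a CRM field
print(extract_gclid(untagged))  # None — auto-tagging off, or stripped by a redirect
```

If real landing URLs never carry a gclid, offline conversion imports have nothing to match against — which is the failure mode behind empty GCLID-to-CRM pipelines.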
If Claude finds a Meta Pixel event with anomalously high counts:
- Check your actual order volume against the Purchase event count. If orders were 180 and Purchase events were 340, that’s likely double-firing.
- Go to Meta Events Manager → Diagnostics. Look for duplicate events flagged there — Meta’s own deduplication check will flag events it detects as likely duplicates.
- Check whether both Pixel and CAPI are configured. If yes, verify that eventID is being passed and is identical in both the Pixel call and the CAPI payload.
- If only Pixel is configured, check whether the Pixel base code appears twice on the page — common after theme updates or when a third-party tool adds its own Pixel embed.
- Also check: is your checkout a multi-step flow where the same page is revisited? Some checkout flows reload the confirmation page, which can re-fire the Pixel on refresh if the tag doesn’t check for duplicate fires.
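The deduplication rule behind the eventID check is worth internalizing: Meta collapses a Pixel event and a CAPI event into one only when the event name and event ID match across the two sources. This sketch approximates that logic with hypothetical events:

```python
# Hypothetical events as reported by the browser Pixel and server-side CAPI.
# Events with a matching (event_name, event_id) pair deduplicate into one.
pixel_events = [
    {"event_name": "Purchase", "event_id": "order-1001"},
    {"event_name": "Purchase", "event_id": "order-1002"},
]
capi_events = [
    {"event_name": "Purchase", "event_id": "order-1001"},  # deduped against Pixel
    {"event_name": "Purchase", "event_id": None},          # no ID — counts twice
]

def deduped_count(pixel_events, capi_events):
    """Roughly approximate Meta's dedup: unique keyed events + unkeyed extras."""
    seen = set()
    extras = 0  # events with no ID can never be deduplicated
    for e in pixel_events + capi_events:
        if e["event_id"] is None:
            extras += 1
        else:
            seen.add((e["event_name"], e["event_id"]))
    return len(seen) + extras

print(deduped_count(pixel_events, capi_events))
```

Two real orders produce three recorded Purchases here, because the CAPI event without an event ID can’t be matched to its Pixel twin — the exact inflation pattern prompt 2 surfaces.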
If Claude finds mismatched attribution windows across Meta campaigns:
- Go to the campaign settings for each affected campaign in Meta Ads Manager
- Under “Attribution setting,” standardize to a single window — typically 7-day click, 1-day view for most businesses
- Note: changing attribution windows affects reported conversion counts going forward. Historical data looks different after the change — expect a temporary apparent performance drop if you’re moving from a longer to a shorter window.
- Document the change date so you don’t compare pre-change and post-change periods as if they’re apples-to-apples.
If Claude flags micro-conversions inflating Google Ads ROAS:
- In Google Ads, go to Tools → Measurement → Conversions
- Click the micro-conversion action (page view, button click, etc.) and change “Include in Conversions” from “Yes” to “No”
- The action stays in your “All conversions” column so you can still track it — it just won’t affect your ROAS calculation or Smart Bidding optimization
Alternatively, ask Claude directly: “Set the [conversion action name] to not be included in Conversions.” Claude can apply the setting change in Google Ads with your confirmation — no need to navigate the UI.
When Tracking Breaks: The Most Common Triggers
Conversion tracking problems don’t usually come from nowhere. If you’re trying to pinpoint when something changed, these are the events most likely to have caused it.
| Trigger | What Breaks | What to Check First |
|---|---|---|
| Site migration or redesign | Tag triggers built on URL patterns stop matching new URLs | Google Ads conversion action status; run prompt 1 |
| CMS or theme update | Pixel or tag code stripped from page templates | Meta Purchase event count vs actual orders; run prompt 2 |
| Checkout flow redesign | Conversion page URL or DOM structure changes; tags stop firing | Google Ads last conversion date on purchase action; run prompt 1 |
| Added CAPI without deduplication | Meta purchase events double-counted | Meta event rates; run prompt 2 |
| Campaign attribution window change | Historical conversion comparisons break | Meta attribution window per campaign; run prompt 4 |
| New campaign launched | New campaign conversions attached to wrong conversion action | Google conversion action breakdown; run prompt 1 |
| Smart Bidding strategy change | Bidding shifts toward a different conversion action | Conversion action “Include in Conversions” settings; run prompt 3 |
If you can’t narrow down when your tracking changed, look at the conversion volume trend in Google Ads or Meta Events Manager. The date where the line drops (or spikes) is usually the week the triggering event happened — and knowing the date makes diagnosing the cause much faster.
Cross-referencing that date against your deployment history or changelog (even a simple Slack message like “we pushed the new checkout today”) usually gets you to the root cause in minutes. Most tracking problems are post-deployment problems in disguise.
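If you have the daily conversion counts exported, finding the candidate break date can be as simple as looking for the largest day-over-day drop. A crude sketch with invented numbers:

```python
# Hypothetical daily conversion counts; the tag broke partway through the month
daily_conversions = [42, 39, 45, 41, 44, 40, 43, 6, 4, 5, 3, 4]
dates = [f"2024-06-{d:02d}" for d in range(1, len(daily_conversions) + 1)]

# Largest single-day drop — a crude but effective changepoint signal
drops = [
    (dates[i], daily_conversions[i - 1] - daily_conversions[i])
    for i in range(1, len(daily_conversions))
]
break_date, size = max(drops, key=lambda d: d[1])
print(f"Largest drop ({size} conversions/day) on {break_date}")
```

That date is what you cross-reference against your deployment history; noisy data may need smoothing first, but a clean tag break usually shows up as one unmistakable cliff.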
Keep Tracking Clean: A Recurring Audit Cadence
A one-time audit catches the current problems. A monthly audit cadence catches new ones before they compound.
The prompts above are quick enough to run monthly — 15 minutes to check all six. The most valuable ones to revisit regularly are:
- Prompt 1 (zero-conversion actions) — Run after any site update or deployment
- Prompt 2 (Meta double-firing) — Run monthly; Pixel double-firing can appear after theme updates
- Prompt 3 (micro-conversion inflation) — Run after any Google Ads account restructuring
- Prompt 5 (view-through inflation) — Run whenever you launch or pause Display or YouTube campaigns
The LinkedIn prompt (prompt 6) is worth running after any site deployment, since the Insight Tag is the most frequently broken tag across all four platforms.
You don’t need to run all six every month. Build a rotation: run prompts 1 and 2 monthly, prompts 3 and 5 quarterly, and prompts 4 and 6 whenever you add a new campaign type or platform.
A useful discipline: run prompt 1 immediately after any significant site deployment, before reviewing campaign performance. Confirmation page URLs are the most common casualty of site updates, and catching a broken Google tag on deployment day beats discovering it three weeks later when you’re trying to explain a conversion drop.
If you want to automate this, Adspirer supports scheduled agent tasks. You can configure a weekly check that runs prompt 1 and prompt 2 automatically and alerts you if anything looks broken — without you having to remember to run it. See the agent skills documentation for details on setting up recurring audits.
Conclusion
Broken conversion tracking is one of those problems that compounds quietly. You don’t notice ROAS inflation from micro-conversions until you’ve been making budget decisions based on it for three months. You don’t catch the double-firing Pixel until you notice your CPA looks suspiciously good relative to actual sales. By then, the data you’ve accumulated is unreliable and historical comparisons are meaningless.
The audit prompts above take about 15 minutes the first time and surface the most common problems without requiring GTM access, site inspection, or manual investigation across four separate platform UIs. What you get back is a list of specific conversion actions and pixel events with concrete data about what they’re recording — not generic warnings.
The things Claude can’t check — your GTM container, your site code, whether a tag trigger is correctly configured — are a smaller set of the overall problem space than most people expect. For the majority of tracking issues, the API data is enough to identify the specific problem and point to the fix.
One more thing worth saying: clean conversion tracking isn’t just a reporting hygiene problem. Smart Bidding and Meta’s Advantage+ systems use your conversion signals to optimize. When those signals are wrong — micro-conversions mixed with purchases, double-fired events, attribution inflated by view-throughs — the algorithm optimizes toward the wrong thing. The fix shows up in your actual performance, not just your dashboard.
Start with prompt 1 if Google Ads looks wrong. Start with prompt 2 if Meta looks inflated. Start with prompt 3 if ROAS looks disconnected from revenue. The six prompts cover the full diagnostic surface — you just need to know which symptom you’re chasing.
Most advertisers run the full set once, fix what Claude flags, then check back quarterly or after major site changes. That cadence is enough to keep tracking clean and keep your reporting trustworthy.
Connect your ad accounts and run your first tracking audit in under 15 minutes. Adspirer is free to start — no credit card, no setup beyond connecting your accounts.
Related Articles
- How to Find Wasted Ad Spend in Google Ads and Meta Using AI — Once your tracking is verified, this is the logical next audit
- PPC Automation with ChatGPT and Claude — How to automate your full campaign management workflow
- How to Connect Claude to Google Ads — Step-by-step setup guide for Claude + Google Ads
- 10 Best AI Tools for PPC Managers in 2026 — How Adspirer compares to Ryze AI, Optmyzr, Cometly, and others