Possibilities with MCP Seem Endless 🤖
Recently, I stumbled upon an idea that quickly turned into something surprisingly useful. What if we could extend the idea of an MCP (Model Context Protocol) to CrUX (the Chrome UX Report), not just for individual URLs but at scale, where you could paste a list of URLs and instantly see how their origin-level performance is trending?
With a bit of hacking around and a fully local setup using Streamlit and Ollama (Local LLM), I was able to make this happen.

Why This Idea Matters 💡
When you’re running technical SEO audits or monitoring the web vitals of a large site, you often care about how groups of pages, such as subfolders or page types, are doing at the origin level. Individual page-level data is helpful, but origin summaries can uncover patterns that are otherwise easy to miss.
And here’s the kicker: imagine pasting in a list like this:
https://example.com/blog/
https://example.com/docs/
https://example.com/product/
And instantly seeing a breakdown like:
Pages under /docs/ are improving steadily
/product/ seems stable, with no major shifts
But /blog/ is regressing, especially on mobile FID
And guess what? You didn’t even write that summary — Ollama did it for you, based on real-world CrUX data fetched automatically. ⚡
How I Built It ⚙️
Here’s a quick breakdown of what’s powering this bulk CrUX MCP:
Streamlit UI
Paste a list of URLs into a textarea
Select your device type (Desktop or Mobile)
Hit “Analyze” and watch the magic unfold
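For context, here's a minimal sketch of what that UI can look like in Streamlit. The widget labels and layout are my illustration, not the exact code:

```python
import streamlit as st

st.title("Bulk CrUX Origin Analyzer")

# One URL per line, pasted straight into a textarea
raw_urls = st.text_area(
    "URLs to analyze",
    placeholder="https://example.com/blog/\nhttps://example.com/docs/",
)

# CrUX form factors are PHONE and DESKTOP (TABLET also exists)
device = st.selectbox("Device type", ["DESKTOP", "PHONE"])

if st.button("Analyze"):
    urls = [u.strip() for u in raw_urls.splitlines() if u.strip()]
    st.write(f"Fetching origin-level CrUX data for {len(urls)} URLs ({device})...")
```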
Backend Logic
Normalize all URLs to origins
Fetch CrUX origin-level data via the CrUX API for each origin (you'll need an API key for this, which is free)
Store metrics like LCP, FID, INP, CLS with their % distributions
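The fetch step is the heart of it. The CrUX API's queryRecord endpoint and response shape below are the real Google API; the helper functions and their names are just my sketch of how the normalization and fetch could be wired together:

```python
import requests
from urllib.parse import urlparse

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
API_KEY = "YOUR_CRUX_API_KEY"  # free key, issued via the Google Cloud console

def to_origin(url: str) -> str:
    """Collapse a full URL down to scheme://host, which is what CrUX expects."""
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}"

def fetch_crux(origin: str, form_factor: str = "PHONE") -> dict:
    """Fetch origin-level field data and return {metric: [good%, ni%, poor%]}."""
    resp = requests.post(
        CRUX_ENDPOINT,
        params={"key": API_KEY},
        json={"origin": origin, "formFactor": form_factor},
    )
    resp.raise_for_status()
    metrics = resp.json()["record"]["metrics"]
    # Each metric's histogram bins arrive in good / needs-improvement / poor order
    return {
        name: [round(b.get("density", 0) * 100, 1) for b in data["histogram"]]
        for name, data in metrics.items()
        if "histogram" in data
    }
```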
Tabular Output
Display a DataFrame showing good/needs improvement/poor %s
Sortable by metrics, filterable by status (Regressing, Stable, Improving)
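Assuming each origin's results are flattened into one row (the column names here are my own, and "status" would come from comparing the current run against a previous one), the table, filter, and export can be this small:

```python
import pandas as pd
import streamlit as st

# Hypothetical rows: one dict per origin, built from fetch_crux() results
rows = [
    {"origin": "https://example.com", "lcp_good": 82.1, "lcp_poor": 4.3, "status": "Stable"},
    # ...
]
df = pd.DataFrame(rows)

# Filter by trend status (Regressing / Stable / Improving)
selected = st.multiselect("Status", ["Regressing", "Stable", "Improving"])
if selected:
    df = df[df["status"].isin(selected)]

st.dataframe(df.sort_values("lcp_good", ascending=False))

# One-click CSV export for sharing with devs or SEO teams
st.download_button("Download CSV", df.to_csv(index=False), "crux_report.csv")
```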
Ollama Summary Generator
Feed the tabular insights to Ollama locally
It returns a paragraph summary that:
Flags regressions
Highlights performance gains
Offers an overall health status for the origins
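Under the hood this is just a POST to Ollama's local REST endpoint. The prompt wording below is illustrative, not the exact prompt:

```python
import requests

def summarize(table_text: str, model: str = "llama3") -> str:
    """Ask a locally running Ollama model to summarize the CrUX table."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local port
        json={
            "model": model,
            "prompt": (
                "Summarize the Core Web Vitals trends in this table. "
                "Flag regressions, highlight gains, and give an overall "
                "health status for each origin. Use only the numbers shown.\n\n"
                + table_text
            ),
            "stream": False,  # one complete JSON response instead of chunks
        },
    )
    resp.raise_for_status()
    return resp.json()["response"]
```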
This is the part where I geeked out a little – because watching a local LLM spit out useful summaries without needing the cloud or API tokens? That’s just fun.
No Hallucinations, Just Real Data 🔍
One of the biggest concerns with LLM summaries is hallucination — you know, when it just makes things up. Initially, I hit some of those snags. But after testing different prompt structures and cleaning how I prepare the CrUX data for input, the results have become far more reliable.
Now the summaries are:
Based strictly on the fetched data
Free from exaggerated language
Focused on trends and patterns — nothing more, nothing less
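The fix was mostly about what the model gets shown. One way to do it (my sketch, since the exact prompt isn't reproduced above) is to serialize only the fetched table as plain text and state the constraints explicitly:

```python
import pandas as pd

def build_prompt(df: pd.DataFrame) -> str:
    """Build a prompt that gives the model nothing beyond the fetched data."""
    # Plain-text table: the model can only reference what is literally here
    table_text = df.to_string(index=False)
    return (
        "You are summarizing Core Web Vitals field data.\n"
        "Rules:\n"
        "- Mention only origins and percentages present in the table.\n"
        "- Do not speculate about causes or fixes.\n"
        "- If a metric is missing for an origin, say 'no data'.\n\n"
        + table_text
    )
```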
Let’s Talk Tech SEO Audits 📋🚀
If you’ve ever done a technical SEO audit, you know how time-consuming it can get — combing through URLs one by one, trying to understand which areas are underperforming. It’s a maze of Core Web Vitals data, BigQuery queries, and manual spot checks.
This CrUX MCP workflow basically turbocharges that process. Instead of digging through data manually, I can:
Paste a list of URLs grouped by subfolders or templates
Let the tool run and auto-analyze their origin-level performance
Get instant summaries on where regressions are happening
Export the table and share it directly with devs or SEO teams
Boom. What might’ve taken a few hours or half a day? Now done in minutes — with far less friction.
What’s even more exciting is the ability to rerun these reports regularly. You start to track trends, spot gradual regressions before they snowball, and actually monitor web vitals as part of your tech SEO workflow, not just during audits.
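A simple way to make those reruns trend-aware (assuming df is the per-origin results table from earlier) is to append each run to a dated history file and diff the latest two runs per origin:

```python
import os
from datetime import date
import pandas as pd

HISTORY = "crux_history.csv"

def save_snapshot(df: pd.DataFrame) -> None:
    """Append today's results so repeated runs accumulate into a trend log."""
    snapshot = df.assign(run_date=date.today().isoformat())
    snapshot.to_csv(
        HISTORY,
        mode="a",
        header=not os.path.exists(HISTORY),  # write the header only once
        index=False,
    )
```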
Use Cases I’m Already Seeing 📈
This CrUX MCP setup opens up a bunch of possibilities:
Technical SEO Audits: Real-world performance insights for subfolders or templates — done faster, done better.
Performance Monitoring: Schedule regular origin checks to track regressions or improvements over time.
Subfolder Deep-Dives: Perfect for large sites with varying templates like /blog/, /help/, /product/, etc.
Competitive Analysis: Just paste competitor URLs and see how their origins stack up against yours.
Wrapping Up 💬
What started as a random idea became a working local tool that turns CrUX origin data into actionable performance summaries – at scale, and without cloud dependencies. Since it’s built with open tools like Streamlit and Ollama, it’s easy to tweak, extend, or build on.
You can export the performance table. You get an instant, fact-based summary. You're working entirely locally. And best of all, you're not buried in spreadsheets.
If you’re curious, I’ll be sharing the full code and setup guide soon. This one’s just the beginning.

I'm Kunjal Chawhan, founder of Decode Digital Market, a Digital Marketer by profession and a Digital Marketing niche blogger by passion, here to share my knowledge.