SERP Research with Python – 2 Amazing Use Cases

SERP research is one of the most essential things to do in SEO.

SERP research helps you validate your keyword research, your landing page plan, content pruning decisions, SERP-based cannibalization issues, and your internal linking decisions.

For a small website, say a local SEO site or a small SaaS site, SERP research isn’t that difficult.

On this kind of website, you can conduct SERP research manually and it won’t be too time-consuming.

However, when you are working with large enterprise websites that have thousands or even lakhs of pages, the manual approach isn’t scalable. This is where you will need a Python script to help you with SERP research.

SERP Research with Python Use Cases

1. Scraping Top 3 Ranking Results with SERP Title for Specified Keywords

import csv
from serpapi import GoogleSearch

# Function to perform Google search and retrieve results
def perform_google_search(keyword):
    params = {
        "q": keyword,
        "location": "Maharashtra, India",
        "hl": "hi",
        "gl": "in",
        "google_domain": "google.co.in",
        "api_key": "your-api"
    }
    
    search = GoogleSearch(params)
    results = search.get_dict()
    
    return results

# Read keywords from text file
with open('keywords.txt', 'r') as file:
    keywords = file.read().splitlines()

# Perform Google search for each keyword and write results to CSV
with open('results.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['Keyword', 'Rank', 'URL', 'Title'])
    
    for keyword in keywords:
        results = perform_google_search(keyword)
        
        # Retrieve top 3 ranking results with their URLs and titles
        for i, result in enumerate(results.get('organic_results', [])[:3], start=1):
            rank = i
            url = result['link']
            title = result['title']
            writer.writerow([keyword, rank, url, title])

This Python script scrapes the SERP in the following way.

You provide a txt file containing your keywords. The script searches Google for each keyword from the location you specify and creates an output CSV with the following columns: keyword, rank, top-ranking URL, and SERP title.
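
For reference, keywords.txt is simply a plain text file with one keyword per line. The keywords below are purely illustrative:

best running shoes for flat feet
trail running shoes india
running shoes under 2000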

Instead of manually searching each keyword to see the top 3 ranking URLs, you can use this script with SerpApi to get the data within seconds, or maybe a minute, depending on the number of keywords.

How does this help?

1. Let’s say you did bulk keyword research for a blog. Before proceeding with content brief creation, you need to confirm that each topic actually deserves a blog post. When you run the SERP research script, you will be able to spot topics for which the SERP is cluttered with Pinterest and YouTube results (see the sketch below), and you will also be able to identify SERP similarities among the topics.

After this SERP analysis, you take forward only those topics where the SERP research confirms you should go for them.

2. Let’s say your Semrush position tracker tells you that 150 keywords that were in your top 3 have dropped to 4th and 5th position. This demands a traffic drop analysis, but at the same time, it is also essential to know who has outranked you in the top 3. This is where this Python script comes in really handy.
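
As a small extension of the first script, here is a minimal sketch that reads results.csv and flags keywords whose top 3 contains domains like Pinterest or YouTube. The noisy_domains list is an illustrative assumption; adjust it to whatever you consider SERP clutter:

import csv
from urllib.parse import urlparse

# Domains that usually signal a cluttered, non-blog SERP (illustrative list)
noisy_domains = {"pinterest.com", "youtube.com", "quora.com"}

flagged = {}
with open('results.csv', 'r') as file:
    for row in csv.DictReader(file):
        # Normalise the ranking URL down to its bare domain
        domain = urlparse(row['URL']).netloc.replace('www.', '')
        if domain in noisy_domains:
            flagged.setdefault(row['Keyword'], []).append(domain)

for keyword, domains in flagged.items():
    print(f"{keyword}: top 3 contains {', '.join(domains)}")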

Here are some more ideas for extending the script:

  • You can add a BeautifulSoup layer that extracts more on-page SEO information about the ranking URLs, such as meta tags and the H1 tag.
  • With the same BeautifulSoup layer, you can extract the meta title and compare it against the SERP title to identify instances where Google has rewritten the title (see the sketch below).
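
For example, here is a rough sketch of that second idea. It assumes the requests and beautifulsoup4 libraries are installed and reuses the url and title variables from the keyword loop in the first script; treat it as a starting point, not production code:

import requests
from bs4 import BeautifulSoup

def get_meta_title(url):
    # Fetch the ranking URL and return its <title> text (hypothetical helper)
    response = requests.get(url, timeout=10, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.title.get_text(strip=True) if soup.title else ""

# Inside the keyword loop of the first script, after you have url and title:
meta_title = get_meta_title(url)
if meta_title and meta_title != title:
    print(f"Title rewritten for {url}: page says '{meta_title}', SERP shows '{title}'")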

2. SERP Similarity Analyzer

import pandas as pd

# Read the input CSV file
input_file = 'input-file.csv'
data = pd.read_csv(input_file)

# Group data by URLs and collect associated keywords
url_to_keywords = {}
for _, row in data.iterrows():
    keyword = row['Keyword']
    url = row['URL']
    if url in url_to_keywords:
        url_to_keywords[url].append(keyword)
    else:
        url_to_keywords[url] = [keyword]

# Initialize lists to store results
similar_queries = []
common_urls = []
common_titles = []

# Iterate through URLs and associated keywords
for url, keywords in url_to_keywords.items():
    if len(keywords) > 1:  # Check for similarity only if more than one keyword shares the URL
        common_urls.append(url)
        common_titles.append(data[data['URL'] == url]['Title'].tolist())
        similar_queries.append(', '.join(keywords))

# Create a new DataFrame for results
result_data = {'Query': similar_queries, 'URL': common_urls, 'Title': common_titles}
result_df = pd.DataFrame(result_data)

# Write results to output CSV
output_file = 'output.csv'
result_df.to_csv(output_file, index=False)

print("SERP similarity analysis completed. Results saved in output.csv.")

This is a SERP similarity Python script that tells you which URLs are common across the searches for different keywords.

This script is a continuation of the first Python script: you feed it the CSV output of the first script.

The output of the first script contains the following columns:

Keyword, Rank, URL, Title

Based on this, the script looks for URLs that are shared by different keywords and lists them in the CSV output.

Because we are only looking at the top 3 results, any keywords that share a URL here already have very high (roughly >90%) SERP similarity.
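
If you want an explicit similarity percentage per keyword pair rather than just the list of shared URLs, a minimal sketch on top of the same input CSV could look like this (the 50% threshold is only an example value):

import pandas as pd
from itertools import combinations

data = pd.read_csv('input-file.csv')

# Collect the set of top 3 URLs for every keyword
serps = data.groupby('Keyword')['URL'].apply(set)

# Compare every pair of keywords using Jaccard overlap of their result sets
for kw1, kw2 in combinations(serps.index, 2):
    overlap = len(serps[kw1] & serps[kw2]) / len(serps[kw1] | serps[kw2])
    if overlap >= 0.5:
        print(f"{kw1} <-> {kw2}: {overlap:.0%} SERP similarity")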

Here is an example of what the output would look like:

Query | URL | Title

query1, query2, query3 | common-url.com | [commontitle1, commontitle2]

What’s the use case?

With the help of this output, you can avoid creating separate pages for different queries that the same URL should rank for, or you can use it during a content pruning exercise when deciding which pages to delete in bulk.
