How To Scrape Data from Google Maps?

Reading time: 10 min
Written by Harsha Kiran · Edited by Girlie Defensor · Updated Oct 25, 2023


Google Maps was the most used navigation app in 2022, so businesses try to be "discoverable" on it. The more often they appear in Google Maps searches, the better their chances of attracting customers.

With all the information businesses publish to stay searchable, Google Maps is now full of data crucial to business research: generating leads, finding competitors, and analyzing customer sentiment.

The problem is that scraping Google Maps takes time and resources because of the massive amount of data involved. Fortunately, there is a way to automate the process.

Keep reading to learn how to extract data from Google Maps!

🔑 Key Takeaways

  • Google Maps is the most downloaded navigation app. Its database is massive, making it a great source of valuable data for companies.
  • To scrape Google Maps, you need Python, BeautifulSoup, a code editor like Visual Studio Code, and the requests library. 
  • While scraping is generally legal for public data, respect trademark and copyright laws. Use a proxy to avoid IP blocking by Google.
  • You can use tools like Spylead, Apify, and Outscraper for Google Maps scraping, each offering unique features and pricing options.

5 Steps to Scrape Google Maps Data

For over two decades, the Google Maps database has grown substantially, and it continues to receive updates every second.

Google Maps is currently available in over 220 countries and 40 languages. The platform has 120 million local guides worldwide and has collected more than 170 billion Street View images.

Data from Google Maps

Google offers an official API for accessing map data. However, it has some serious limitations. For example, the API does not expose Google Popular Times, even though that data is valuable for gauging customer behavior.

Creating your own free Google Maps scraper or subscribing to a no-code solution is the better option. Besides Google Maps, you can use a no-code scraper tool to scrape Google Search results.

👍 Helpful Article

Data extraction is tedious, but you can automate it with web scraping or an API. To find the best approach for your project, check out the differences between web scraping and API.

Google Maps Scraping Requirements 

The tools needed to scrape Google Maps are accessible. Here are the things you need to scrape Google Places using Python:

Code Editor

A code editor is where you will write your scripts. Visual Studio Code is highly recommended, but you can use whichever editor you prefer.

Python

Python is a simple programming language. At the time of writing, its most recent version is 3.11.4, but you can use version 3.8 or newer.

Pro Tip

To check if your computer already has Python, run this command:

python -V

It should return the version number of the installed Python. On some systems, the command is python3 -V.
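
You can also check the version from inside a Python session:

import sys
print(sys.version)  # prints the interpreter version, e.g. 3.11.4 (...)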

Steps to Scrape Data From Google Maps Using Python

Once you have all the tools required for the process, here’s how you can start scraping data from Google Maps: 

Steps to Scrape Google Maps Data Using Python

Note: The steps below will create a Google Maps scraper using the keyword “pc shops in new york”—the same query encoded in the URL used later.

Step 1: Install Necessary Libraries

The primary Python library that you will use in this process is BeautifulSoup. Run this command to install it:

pip install beautifulsoup4

You also need to install the requests library. This module is necessary for sending GET requests to the target URL.

pip install requests

Create a Python script file. You can name it whatever you like; this example uses gmapscraper.py.

Here is what the beginning of the code will look like:

import csv
import requests
from bs4 import BeautifulSoup

The csv module is part of Python’s standard library, so there is no need to install it.

📝 Note

BeautifulSoup is often compared to Selenium for web scraping in Python. In this case, BeautifulSoup is the better fit because the Google Maps results page is served as static HTML. If you are scraping dynamic, JavaScript-rendered content, Selenium is the better choice.
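
If you do need dynamic content, a minimal Selenium sketch might look like the following. It assumes Selenium 4.6+ (which downloads a matching driver automatically) and a local Chrome installation; the URL and CSS selector here are illustrative assumptions, not verified values.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium Manager fetches a matching driver
driver.get('https://www.google.com/maps/search/pc+shops+in+new+york')
driver.implicitly_wait(5)  # wait for JavaScript-rendered elements to appear

# Print shop names once rendered (the selector is an assumption)
for element in driver.find_elements(By.CSS_SELECTOR, '.dbg0pd .OSrXXb'):
    print(element.text)

driver.quit()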

Step 2: Send a GET Request

To get the target URL, go to Google and search for the keyword that you want to scrape. Click on More Results to load more entries, then copy the URL.

Define the target URL and user agent by using this code:

url = 'https://www.google.com/search?sa=X&tbs=lf:1,lf_ui:10&tbm=lcl&q=pc+shops+in+new+york&rflfq=1&num=10&rllag=40730428,-73990581,1751&ved=2ahUKEwjauo7J4YmAAxUUbmwGHVlmAKsQjGp6BAhHEAE&biw=1208&bih=719&dpr=1#rlfi=hd:;si:;mv:[[40.7844352,-73.80324329999999],[40.6516932,-74.0195832]];tbs:lrf:!1m4!1u3!2m2!3m1!1e1!1m4!1u2!2m2!2m1!1e1!2m1!1e2!2m1!1e3!3sIAE,lf:1,lf_ui:10'

user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'

headers = {'User-Agent': user_agent}
response = requests.get(url, headers=headers)

This step adds the user agent to the request headers so the request presents itself as coming from a real browser. The get() function then attempts to load the content from the target URL.
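
Before parsing, it is worth verifying that the request actually succeeded. A short, optional check:

# Stop early if Google returned an error page or blocked the request
if response.status_code != 200:
    raise RuntimeError(f'Request failed with status code {response.status_code}')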

Step 3: Define the CSS Selectors

CSS selectors will pinpoint the information that you want to scrape. You can get the CSS selectors by analyzing the structure of the HTML content of the page. 

Right-click anywhere on the page and select Inspect. This step will let you access the browser’s DevTools and view the site HTML. 

Note that this method is time-consuming and involves a lot of trial and error. However, you can make the process easier by using a CSS selector finder tool. 

One tool that you can use is SelectorGadget. It is an open-source browser extension tool that lets you find the exact CSS selectors by selecting and rejecting elements.

Here is the example code with the chosen CSS selectors:

soup = BeautifulSoup(response.content, 'html.parser')

selectors = {
    'Shops': '.dbg0pd .OSrXXb',
    'Ratings': '.Y0A0hc',
    'Addresses': '.rllt__details div:nth-child(3)',
    'Phone number': '.rllt__details div:nth-child(4)'
}

The BeautifulSoup() call takes a second argument that specifies which parser to use ('html.parser' here).
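
Before wiring a selector into the dictionary, you can sanity-check it against the parsed page. A quick sketch using the soup object defined above:

# Print how many elements a candidate selector matches, plus the first match
matches = soup.select('.dbg0pd .OSrXXb')
print(len(matches), matches[0].get_text() if matches else 'no matches')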

You must also set up a dictionary for the information you will scrape. Here is additional code that creates a dictionary to store the parsed results and iterates over the selectors:

results = {key.capitalize(): [] for key in selectors}

for key, selector in selectors.items():
    elements = soup.select(selector)

The elements containing the phone numbers also include the stores’ opening and closing hours. If you don’t need that information, you can filter it out by adding this code:

    for element in elements:
        text = element.get_text()
        if key.capitalize() == 'Phone number':
            phone_number = text.strip().split('·')[-1].strip()
            results[key.capitalize()].append(phone_number)
        else:
            results[key.capitalize()].append(text.strip())
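
To see what the filtering does, here is the same split applied to a hypothetical raw string (the '·' character is the separator between the other details and the phone number):

raw = 'Computer store · Closes 9 PM · (212) 555-0123'  # hypothetical example value
print(raw.split('·')[-1].strip())  # prints: (212) 555-0123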

Step 4: Save the Results Into a CSV File

CSV is a plain-text format that can store large amounts of data. It is also easy to import into spreadsheets and is usually compatible with lead generation software.

The next snippets store all the scraped data in a CSV file. To start, set the name of the CSV file:

filename = 'scraped_data.csv'

Determine the maximum length of the lists in the results dictionary by running:

max_length = max(len(result_list) for result_list in results.values())

The lists in results may have different lengths, so pad the shorter ones with empty strings to align the rows:

for result_list in results.values():
    while len(result_list) < max_length:
        result_list.append('')

Use the keys as column names:

fieldnames = results.keys()

This builds one row dictionary per index, aligning the values up to the maximum length:

results_list = [{field: results[field][i]
                for field in fieldnames} for i in range(max_length)]

To write the results to a CSV file under the defined filename:

with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)

    writer.writeheader()
    writer.writerows(results_list)

Make sure the encoding argument is set to UTF-8 to avoid encoding errors. After that, print a notification message in your terminal using this:

print(f"Data has been successfully saved to {filename}.")

Step 5: Run the Code

Review the code for any syntax errors. The complete script should look like this:

import csv
import requests
from bs4 import BeautifulSoup

# Send a GET request to the webpage you want to scrape
# Replace with the URL of the webpage you want to scrape
url = 'https://www.google.com/search?sa=X&tbs=lf:1,lf_ui:10&tbm=lcl&q=pc+shops+in+new+york&rflfq=1&num=10&rllag=40730428,-73990581,1751&ved=2ahUKEwjauo7J4YmAAxUUbmwGHVlmAKsQjGp6BAhHEAE&biw=1208&bih=719&dpr=1#rlfi=hd:;si:;mv:[[40.7844352,-73.80324329999999],[40.6516932,-74.0195832]];tbs:lrf:!1m4!1u3!2m2!3m1!1e1!1m4!1u2!2m2!2m1!1e1!2m1!1e2!2m1!1e3!3sIAE,lf:1,lf_ui:10'

# Define the user agent
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'

# Set the user agent in the headers of the HTTP request
headers = {'User-Agent': user_agent}
response = requests.get(url, headers=headers)

# Create a BeautifulSoup object to parse the HTML content
soup = BeautifulSoup(response.content, 'html.parser')

# Define a dictionary of CSS selectors
selectors = {
    'Shops': '.dbg0pd .OSrXXb',
    'Ratings': '.Y0A0hc',
    'Addresses': '.rllt__details div:nth-child(3)',
    'Phone number': '.rllt__details div:nth-child(4)'
    # Add your additional key and CSS selector here if you want to parse more
}

# Create a dictionary to store the parsed results
results = {key.capitalize(): [] for key in selectors}

# Iterate over the selectors in the dictionary
for key, selector in selectors.items():
    elements = soup.select(selector)

    # Iterate over the found elements and extract relevant information
    for element in elements:
        text = element.get_text()
        if key.capitalize() == 'Phone number':
            phone_number = text.strip().split('·')[-1].strip()
            results[key.capitalize()].append(phone_number)
        else:
            results[key.capitalize()].append(text.strip())

# Define the filename for the CSV file
filename = 'scraped_data.csv'

# Determine the maximum length of the lists in the results dictionary
max_length = max(len(result_list) for result_list in results.values())

# Fill any missing values with empty strings to align the rows
for result_list in results.values():
    while len(result_list) < max_length:
        result_list.append('')

# Extract the column names from the keys of the results dictionary
fieldnames = results.keys()

# Create a list of dictionaries, aligning the values based on the maximum length
results_list = [{field: results[field][i]
                for field in fieldnames} for i in range(max_length)]

# Write the results to the CSV file
with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)

    writer.writeheader()
    writer.writerows(results_list)

print(f"Data has been successfully saved to {filename}.")

You can use the built-in terminal in VS Code or your system terminal/command prompt to run the code. Run this command:

python gmapscraper.py

You can preview the results in VS Code by right-clicking on the CSV file and selecting Open Preview. You can also open it as a spreadsheet.

Scraped Google Maps Data in CSV Format
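
If you would rather inspect the output programmatically, here is a quick sketch using pandas (assumes pip install pandas):

import pandas as pd

# Load the CSV produced by the scraper and show the first few rows
df = pd.read_csv('scraped_data.csv')
print(df.head())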

Pro Tip

Like most websites, Google does not welcome web scrapers, so you may encounter issues due to the anti-scraping measures in place.

One way to overcome this is to limit the number of requests. You can also incorporate proxy rotation into your Python script to avoid being IP-blocked.
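
A minimal proxy-rotation sketch with requests is shown below. The proxy addresses are placeholders you would replace with endpoints from your proxy provider:

import random
import time
import requests

# Placeholder proxy endpoints; substitute real ones from your provider
proxy_pool = [
    'http://user:pass@proxy1.example.com:8000',
    'http://user:pass@proxy2.example.com:8000',
]

url = 'https://www.google.com/search?tbm=lcl&q=pc+shops+in+new+york'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

proxy = random.choice(proxy_pool)  # pick a different IP for each request
response = requests.get(url, headers=headers,
                        proxies={'http': proxy, 'https': proxy},
                        timeout=10)

time.sleep(random.uniform(2, 5))  # throttle requests to stay under the radar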

Safety and Legality of Scraping Google Maps

Scraping publicly available information, including Google Maps data, is generally legal. However, legality also depends on how you intend to use that data.

Trademark law also protects business names, and some images may be copyrighted, meaning they are protected under the DMCA.

Beyond that, there is always the risk of your IP being blocked by Google’s anti-scraping mechanisms.

Pro Tip

One way to avoid being blocked by Google’s security is to use a proxy while scraping. A proxy gives you a different IP address, preventing the site from blocking your real one.

Best Google Maps Scrapers

Some scrapers offer Google Maps data extraction without requiring any code. Here are some of the top recommendations:

Spylead

Spylead Google Maps Scraper

Key Features

  • Email finder and verifier
  • Credit system
  • SERP scraper

Price: $39/month for 1500 credits

Spylead is one of the best tools you can use to scrape Google Maps.

It is mainly an email finder service, but it is also an efficient tool for scraping Google Maps data. The service works on a credit system wherein you spend one credit per 10 results.

Pros

  • Flexibility in pricing with the credit system
  • Can use other features like the email finder/verifier and SERP scraper
  • Ease of use

Cons

  • Does not include “Popular Times” in the scrapable data

Apify

Apify Google Maps Scraper

Key Features

  • Free plan with $5 monthly credit
  • Supports JSON, XML, CSV, Excel, and RSS Feed
  • 1300+ actors (scrapers)

Price: $49, then pay-as-you-go for 15,000 to 20,000 results

Apify is another no-code solution for Google Maps web scraping. It has an easy-to-use user interface, complete with instruction manuals and courses.

The pricing is also flexible with the “pay as you go” system. You only pay for what you use, or you can stay on the free plan indefinitely.

Pros

  • Indefinite free plan
  • Multiple file format support
  • Many dedicated actors (scrapers) to choose from

Cons

  • Pricey for large-scale projects

Outscraper

Outscraper Google Maps Scraper

Key Features:

  • Can scrape up to 15 data points
  • Free plan to pay-as-you-go transition
  • Review targeting with advanced collection settings

Price: $0.0002 per record

Outscraper is a web scraping service based in Texas. The site offers a free plan for the first 500 records, then switches to “pay-as-you-go” pricing.

Its Google Maps scraper can extract up to 15 data points per record. Advanced settings are also available for more accurate review targeting.

Pros

  • Highly flexible pricing
  • Advanced settings

Cons

  • Uncommunicative UI

Conclusion

Google Maps has been everyone’s go-to web mapping platform for two decades. The vital role it plays in people’s digital and real lives is why Google Maps holds billions of data points you can scrape.

As recommended, scrape Google Maps cautiously and in moderation. Carry out your scraping projects with consideration for the platform’s importance to many people.

FAQs


Is the Google Maps API free?

No. Google Maps API pricing depends on the number and type of requests.

Can you get banned for web scraping?

Depending on the site’s policy, you can be IP-blocked for scraping its pages.

What is the limit of Google search scraping?

There is no fixed limit; the practical constraint is your scraper’s ability to circumvent Google’s anti-scraping measures. Without proxies, you can send only about 15-20 requests per hour before being blocked.
