
SE::Google: Google Search Results Scraper


Google Search Results Scraper Overview

The Google search results scraper is one of the most in-demand scrapers: with it, you can build huge databases of links ready for further use. Queries can be written exactly as you would enter them in Google, including search operators (inurl:, intitle:, etc.).

The Google scraper supports automatic query multiplication, so you can be sure of getting the maximum number of results from the search results. A-Parser can also automatically follow related queries to a specified depth. Thanks to A-Parser's multithreading, processing speed can reach 3000-7000 queries per minute, which on average yields up to 500,000 links per minute.

The functionality of A-Parser allows you to save parsing settings for further use (presets), set a parsing schedule, and much more. You can use automatic query multiplication, substitution of subqueries from files, enumeration of alphanumeric combinations and lists to obtain the maximum possible number of results.

Saving results is possible in the form and structure that you need, thanks to the built-in powerful Template Toolkit template engine, which allows you to apply additional logic to the results and output data in various formats, including JSON, SQL, and CSV.

Google Scraper Use Cases

Domain scraping

Scraping thematic domains by keyword phrase from Google and obtaining various parameters by domains

Google News scraping

This preset scrapes Google news by search query and collects the dates of these news

Indexing check

The preset checks the indexing of site pages in Google, going through the list of specified links

Competition analysis

The preset determines the competition in the Google search engine by keywords

Top 3 search results scraping

The preset saves the top three snippets of Google search results

Questions and answers

Scraper collecting questions and answers from the People Also Ask section

List of collected data


  • Links, anchors, and snippets from the search results, as well as the date from the snippet (if any)
  • Presence and content of ad blocks, as well as their position on the page
  • Number of results for the query (competition)
  • List of related keywords (Related keywords)
  • Presence of additional blocks on the page: product carousel, videos, etc.
  • The scraper also collects additional data:
    • Presence of a typo in the query and the corrected query
    • Geo-location determined by Google
    • Presence of AMP pages
    • List of People also ask: questions, answers, links to sources, their anchors, and links to media (enabled by a separate option Parse People also ask)



Google scraper has a lot of features and settings:

  • support for all Google search operators (site:, inurl:, etc.)
  • specifying the number of results per page (10, 20, 30, 50, or 100) and the number of pages (from 1 to 10); at maximum settings Google returns 300 to 500 results for a single query, and A-Parser works around this limit by making multiple requests
  • the ability to automatically navigate through related keywords
  • specifying the language and country of the search results, the ability to choose a local Google domain, as well as the language of the search results interface
  • the ability to specify geolocation, which allows you to get accurate local search results for any place on Earth
  • choosing between desktop or mobile display
  • the ability to choose the type of search results, in addition to the main organic search results, Google scraper can collect news, book, or video search results
  • if necessary, you can connect automatic ReCaptcha2 recognition through recognition services or through XEvil/CapMonster
  • supports specifying the time of search results (for all time or for a certain interval from 24 hours to a year)
  • the ability to disable Google's filter for hiding similar results (filter=)
  • the ability to specify whether to parse the search results if Google reports that nothing was found for the specified query and suggests results for a similar query
  • the ability to specify the number of People also ask that the scraper should collect by clicking on each question in depth


Use cases

  • Link base collection - for XRumer, AllSubmitter, GSA Ranker, etc.
  • Full SERP dump, including links, anchors, snippets, advertising blocks, and other information, allows for in-depth analysis for SEO specialists and marketers
  • Competition evaluation for keywords
  • Evaluation of competition in PPC (advertising) search results
  • Backlink and site mention search
  • Site indexing check
  • Vulnerable site search
  • Any other use cases involving obtaining search results for an unlimited number of queries

Query examples

Queries should be specified as search phrases, just as if they were entered directly into the Google search form, for example:

покупка авто
окна в москве

Query substitutions

You can use built-in macros to expand queries. For example, suppose we want to collect a very large forum database; we specify several base queries in different languages:


In the query format, we specify character enumeration from a to zzzz; this method rotates the search results to the maximum and yields many new unique results:

$query {az:a:zzzz}

This macro creates 475,254 additional queries for each original search query, which in total gives 4 x 475,254 = 1,901,016 search queries. An impressive number, but not a problem for A-Parser: at a speed of 2,000 queries per minute, the task will be processed in about 16 hours.
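The arithmetic can be double-checked with a short standalone script (not A-Parser code; the per-minute speed is just the figure assumed above):

```python
# Combinations produced by the {az:a:zzzz} macro: every string over the
# 26-letter alphabet with length 1 to 4.
combos = sum(26 ** n for n in range(1, 5))
print(combos)  # 475254

base_queries = 4          # base queries in different languages
total = base_queries * combos
print(total)  # 1901016

rate = 2000               # queries per minute (assumed)
hours = total / rate / 60
print(round(hours, 1))  # 15.8
```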

Using operators

You can use search operators in the query format, so the operator will be automatically added to each query in your list:


Result output options

A-Parser supports flexible result formatting thanks to the built-in Template Toolkit template engine, which allows it to output results in any form, as well as in a structured form, such as CSV or JSON.

Result format:

[% FOREACH item IN p1.serp;    loop.count _ ' - ' _ item.link _ ' - ' _ item.anchor _ ' - ' _ item.snippet _ "\n"; END %]

Result example:

1 - - Форум — Википедия - <em>Фо́рум</em> (лат. forum — арх. преддверие гробницы; площадка в давильне для подлежащего обработке винограда; рыночная площадь, городской рынок;&nbsp;...
2 - - Форум (мероприятие) — Википедия - <em>Форум</em> — мероприятие, проводимое для обозначения или решения каких-<wbr>либо в достаточной степени глобальных проблем. Это понятие встречается в&nbsp;...
3 - - Добро пожаловать на справочный форум сообщества ... - Добро пожаловать на справочный <em>форум</em> сообщества Google Play. Избранные записи. Просмотреть все интересные записи &middot; Нужна помощь с игрой?
4 - - Gmail Community - Google Support - Welcome to the Gmail Help Community &middot; Featured posts &middot; Categories.
5 - - The World Economic Forum - The World Economic Forum is an independent international organization committed to improving the state of the world by engaging business, political, academic&nbsp;...
6 - - Home - Kunena - To Speak! Next Generation Forum ... - Kunena! - To Speak! Next Generation Forum Component for Joomla.
7 - - AdGuard Forum - <em>Форум</em> бета тестеров. Пишем сюда отчеты о багах бета-версий. Threads: 355. Messages: 11.6K. Sub-forums: Комментарии к релизам бета-версий&nbsp;...
8 - - Софийски Форум за Сигурност: Платформа за обсъждане ... - Софийски <em>Форум</em> за Сигурност / Sofia Security Forum.
9 - - Forums - Keenetic Community - Keenetic fan club. A place to meet software developers, get the latest updates, and share experience.
10 - - Perfect quality European private server of Aion - - Perfect quality European private server of Aion!

The built-in tools.CSVline utility allows you to create correct tabular documents ready for import into Excel or Google Sheets.

General result format:

[%  FOREACH i IN p1.serp;    tools.CSVline(i.link, i.anchor, i.snippet); END  %]

File name:


Initial text:

Ссылка,Анкор,Сниппет


Result example:

Ссылка,Анкор,Сниппет,"Форум — Википедия",,"Forum - Wikipedia","<em>Forum</em> (plural forums or fora) may refer to: Contents. 1 Common uses; 2 Places. 2.1 Natural features; 2.2 Populated places. 3 Arts and entertainment; 4 Media.","The World Economic Forum","The World Economic <em>Forum</em> is an independent international organization committed to improving the state of the world by engaging business, political, academic&nbsp;...","Добро пожаловать на справочный форум сообщества ...","Добро пожаловать на справочный <em>форум</em> сообщества Центр Google Поиска. Избранные записи. Просмотреть все интересные записи &middot; Ответы на&nbsp;...","Добро пожаловать на справочный форум сообщества ...","Добро пожаловать на справочный <em>форум</em> сообщества Google Chrome. Избранные&nbsp;..."

Template Toolkit is used in the General result format to output the serp array in a FOREACH loop.

In the results file name, simply change the file extension to csv.

To make the "Initial text" option available in the Task Editor, you need to activate "More options". In "Initial text", we write the column names separated by commas and make the second line empty.
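Outside of A-Parser, the quoting rules that make a CSV import cleanly can be reproduced with Python's standard csv module (a sketch with invented sample rows, not the scraper's own output):

```python
import csv
import io

# Hypothetical SERP entries in the link / anchor / snippet shape.
rows = [
    ("", "Forum - Wikipedia", 'A "forum" is a meeting place'),
    ("", "The World Economic Forum", "An independent organization"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Link", "Anchor", "Snippet"])  # plays the role of "Initial text"
writer.writerows(rows)
print(buf.getvalue())
# Fields containing commas or quotes are wrapped in double quotes
# (with inner quotes doubled), so the file opens correctly in Excel
# or Google Sheets.
```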

Displaying ad blocks

Result format:

$ads.format('$link - $anchor - $snippet\n')

Example result:

- Rent a Car Worldwide - Best Prices Online Guaranteed - Secure Your <em>Car Hire</em> Today. The Best Price Guaranteed. Book at Over 53,000 Locations. Search, Compare and Save Using the World's Biggest Online <em>Car Rental</em> Service.
- United States from $9/day - Search for Rental Cars on Kayak - Find and Compare Great <em>Car</em> Deals in USA. Book with Confidence on KAYAK®!
- -70% Worldwide Car Rental - Rent Your Car in 5 Minutes - <em>Car rental</em> prices are rising, but if you act fast, you can get a good deal. Don't stress! We...
- Rent a Car for Summer Holidays - Car Rentals for the Best Price - Theft protection and Third Party liability part of a great deal. Free Mileage included.

Displaying related keywords

Result format:


Example result:

test <b>speed</b>
<b>net speed</b> test
<b>google speed</b> test
<b>fast speed</b> test
<b>ping</b> test
<b>xfinity speed</b> test
<b>speed</b> test <b>mobile</b>
test <b>my</b>

To automatically remove HTML tags in the result, use the Results Builder, select the $related array, and apply Remove HTML tags.
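The tag cleanup that the Results Builder performs can be approximated with a simple regular expression if you post-process results outside A-Parser (a sketch; it handles simple tags like <b> but not arbitrary HTML):

```python
import re

def strip_tags(text: str) -> str:
    """Remove simple HTML tags such as <b>...</b> from a result entry."""
    return re.sub(r"<[^>]+>", "", text)

print(strip_tags("<b>net speed</b> test"))     # net speed test
print(strip_tags("<b>google speed</b> test"))  # google speed test
```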

Keyword competition

Result format:

$query - $totalcount\n

Example result:

speed test mobile - 1080000000
test score - 4020000000
net speed test - 1210000000
fast speed test - 2150000000
speed test - 2500000000
test match - 4160000000
ping test - 425000000
google speed test - 1870000000

Detecting misspelled keywords

Result format:

$query - $misspell\n

Example result:

spead test - 1
test match - 0
speed test - 0
temst match - 1

Checking site indexing

Request format:

site:$query
Result format:

$query.orig - $totalcount\n

Example result:

- 2
- 4
- 883
- none
- 371

To check the indexing of links, substitute the corresponding operator in the Request format: site:.

The result format is displayed as "source URL - number of pages in the index".

As a result, we get each source URL together with the number of its pages in the search engine index.

If the page is absent, the result will be: none.
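The "$query.orig - $totalcount" lines are easy to post-process. Here is a sketch that turns such a report into a site-to-indexed-pages map, treating none as zero (the site names are invented):

```python
def parse_index_report(lines):
    """Parse '<site> - <count>' lines produced by the indexing check."""
    report = {}
    for line in lines:
        site, _, count = line.rpartition(" - ")
        report[site] = 0 if count == "none" else int(count)
    return report

sample = ["ookla.example - 883", "absent.example - none"]  # invented sites
print(parse_index_report(sample))  # {'ookla.example': 883, 'absent.example': 0}
```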

Saving in SQL format

Result format:

[%  FOREACH p1.serp;    "INSERT INTO serp VALUES('" _ query _ "', '" _ link _ "', '" _ anchor _ "')\n"; END  %]

Example result:

INSERT INTO serp VALUES('test', '', 'Speedtest by Ookla - The Global Broadband Speed Test')
INSERT INTO serp VALUES('test', '', ' Internet Speed Test')
INSERT INTO serp VALUES('test', '', 'IND vs AUS 4th Test highlights: India creates history, wins ...')
INSERT INTO serp VALUES('test', '', 'Find online tests, practice test, and test creation software | Test ...')
INSERT INTO serp VALUES('test', '', 'Recent Match Report - Australia vs India 4th Test 2020 ...')
INSERT INTO serp VALUES('test', '', 'World Test Championship (2019-2021) Points Table - Live ...')
INSERT INTO serp VALUES('test', '', 'ICC Test Match Team Rankings International Cricket Council')
INSERT INTO serp VALUES('test', '', 'Speedtest - Google')
INSERT INTO serp VALUES('test', '', '')
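Note that the template interpolates values straight into the SQL text, so an anchor containing a quote character would produce an invalid statement. If you load results programmatically rather than through a dump file, parameterized inserts avoid that problem (a standalone sqlite3 sketch with invented rows, not part of A-Parser):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE serp (query TEXT, link TEXT, anchor TEXT)")

# Rows in the same (query, link, anchor) shape as the dump above.
rows = [
    ("test", "", "Speedtest by Ookla - The Global Broadband Speed Test"),
    ("test", "", "An anchor with 'quotes' is handled safely"),
]
conn.executemany("INSERT INTO serp VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM serp").fetchone()[0]
print(count)  # 2
```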

Dumping results in JSON

Result format:

[%  data = {};  data.totalcount = p1.totalcount; data.links = []; FOREACH i IN p1.serp;    data.links.push(; END;  result = data  %]

Initial text

[% result = {} %]

Final text

[% result.json %]

Example result:


To make the "Initial text" and "Final text" options available in the Task Editor, you need to activate "More options".
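To see what result.json ultimately serializes, the same structure (a total count plus a list of links) can be mirrored in plain Python (invented numbers and links):

```python
import json

totalcount = 2500000000                      # hypothetical $totalcount
links = ["", ""]  # hypothetical SERP links

data = {"totalcount": totalcount, "links": links}
print(json.dumps(data))
# {"totalcount": 2500000000, "links": ["", ""]}
```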

Results processing

A-Parser allows you to process results during parsing; this section covers the most popular cases for the Google scraper.

Removing duplicate links

Add deduplication and select $serp.$i.link - Link in the drop-down list.




Removing duplicate domains

Add deduplication and select $serp.$i.link - Link in the drop-down list. Select the type of deduplication: Domain.




Extracting domains

Add Results Builder and select the source in the drop-down list: $p1.serp.$i.link - Link. Select the type: Extract Domain.
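The Extract Domain step amounts to keeping only the host part of each link; outside A-Parser the equivalent is a one-liner with the standard library (a sketch):

```python
from urllib.parse import urlparse

def extract_domain(url: str) -> str:
    """Return the hostname portion of a SERP link."""
    return urlparse(url).netloc

links = ["", ""]
print([extract_domain(u) for u in links])  # ['', '']
```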




See also: Results Builder

Removing tags from anchors and snippets

Add Results Builder and select the source in the drop-down list: $p1.serp.$i.anchor - Anchor. Select the type: Remove HTML tags.

Add Results Builder again and select the source in the drop-down list: $p1.serp.$i.snippet - Snippet. Select the type: Remove HTML tags.




You can add as many Results Builders as you need.

See also: Results Builder

Filtering results by link

Add a filter and select $serp.$i.link - Link in the drop-down list. Select the type: Contains string. Then enter the filtering criterion in the "String" field.




See also: Results filters

Possible settings

| Parameter name | Default value | Description |
| --- | --- | --- |
| Device | Desktop (Chrome 76) | Choose desktop or mobile Google search (Desktop (Chrome 76) or Mobile (iPhone iOS 10.2.1)) |
| Pages count | 5 | Number of pages to parse (from 1 to 100) |
| Links per page | 100 | Number of links in search results on each page (from 10 to 100) |
| Serp type | Default (All) | Determines which search results to parse: the main SERP or Books, News, Videos |
| Hide omitted results | | Determines whether to hide omitted results (filter= parameter) |
| Serp time | Anytime | Time-dependent search (tbs= parameter; possible values: Past 1 hour, Past 24 hours, Past week, Past month, Past year) |
| Parse not found | | Determines whether to parse search results if Google reports that nothing was found for the specified query and suggests a different query |
| Disable autocorrect | | Disables Google's autocorrection and parses the search results exactly for the specified query |
| Exact match | | Corresponds to the "Exact match" option in the search engine. Note: this option overrides the value of the Serp time parameter (just as these options interact in the browser) |
| Safe search | Auto (default) | Ability to enable Safe Search |
| Google domain | | Google domain for parsing; all domains are supported (,, etc.) |
| Narrow results by region | Any region | Ability to narrow the search down to a specific country |
| Results language | Auto (Based on IP) | Language of the results (lr= parameter) |
| Search from country | Auto (Based on IP) | Country from which the search is conducted (geo-dependent search, gl= parameter) |
| Interface language | English | Language of the Google interface, for maximum consistency between results in the scraper and in the browser |
| Location (city) | - | Search by city or region. Locations can be specified as novosibirsk, russia; a full list is available in Geotargets (use the value from the Canonical Name column). The matching Google domain must also be set |
| Util::ReCaptcha2 preset | default | Determines whether to use Util::ReCaptcha2 to bypass reCAPTCHA |
| Util::AntiGate preset | default | Determines whether to use Util::AntiGate to bypass graphical CAPTCHAs |
| ReCaptcha2 retries | 3 | Number of attempts to submit the reCAPTCHA answer without changing the proxy |
| ReCaptcha2 pass proxy | | Passes the proxy (used in the request to Google) and the cookies (received in the response from Google) to the reCAPTCHA recognition service |
| Use sessions | | Saves good sessions, which allows parsing even faster with fewer errors |
| Don't take session | | Ability to not use saved good sessions |
| Additional headers | - | Allows you to specify any custom headers |