
Net::HTTP - Universal basic parser with support for multipage parsing and CloudFlare bypass


Overview

Net::HTTP is a universal parser suited to most non-standard tasks. It can serve as a basis for parsing arbitrary content from any site: it downloads the page source for a given link, supports multipage parsing (page navigation) and automatic proxy handling, and can verify a successful response by status code or by page content.

Net::HTTP parser use cases

  • REG.RU domain auction - parsing the auction of expiring domains with filtering
  • SSL certificate data - parsing SSL certificate data from leaderssl.ru
  • Parsing the Booking.com resource - getting search results for apartments and hotels on the site
  • Product characteristics collection - an example of parsing an unknown number of product characteristics
  • Parsing the IMDB movie database - getting data about each movie and writing it to the result
  • HTTPS availability check - a preset that checks whether HTTPS is available on the site

List of collected data


  • Content
  • Server response code
  • Server response description
  • Server response headers
  • Proxies used in the request
  • An array with all collected pages (used when working with the Use Pages option)

Capabilities

  • Multipage parsing (page navigation)
  • Automatic proxy handling
  • Checking for a successful response by code or by page content
  • Supports gzip/deflate/brotli compression
  • Detection and conversion of site encodings to UTF-8
  • CloudFlare bypass
  • Choice of engine (HTTP or Chrome)
  • Check content option - applies the specified regular expression to the received page; if it does not match, the page is re-requested through another proxy
  • Use Pages option - iterates over a specified number of pages with a given step; the $pagenum variable holds the current page number during iteration
  • Check next page option - takes a regular expression that extracts the link to the next page (usually the "Forward" button) if one exists; pages are followed within the specified limit (0 means no limit)
  • Page as new query option - the next page is fetched as a new request, which removes the limit on the number of pages that can be navigated
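The retry behavior of the Check content option can be sketched in a few lines of Python. This is only an illustration of the idea, not A-Parser's implementation; fetch_with_check and the fetch callable are invented names:

```python
import re
from itertools import cycle

def fetch_with_check(fetch, proxies, check_re, max_tries=3):
    """Sketch of the Check content option: if the regular expression
    does not match the downloaded page, the request is retried
    through the next proxy."""
    pool = cycle(proxies)
    for _ in range(max_tries):
        page = fetch(next(pool))
        if re.search(check_re, page):
            return page               # content check passed
    return None                       # every attempt failed the check

# Simulated fetches: the first proxy is banned and returns a CAPTCHA
responses = {"p1": "captcha page", "p2": "<title>Products</title>"}
page = fetch_with_check(lambda p: responses[p], ["p1", "p2"], r"<title>")
```

Here the first attempt fails the content check, so the second attempt goes through the other proxy and succeeds.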

Usage options

  • Downloading content
  • Downloading images
  • Checking the server response code
  • Checking for HTTPS availability
  • Checking for redirects
  • Outputting a list of redirect URLs
  • Getting the page size
  • Collecting meta tags
  • Extracting data from the source code of the page and/or headers

Query examples

Links to pages should be specified as queries, for example:

http://lenta.ru/
http://a-parser.com/pages/reviews/

Results output options

A-Parser supports flexible result formatting thanks to the built-in Template Toolkit template engine, which makes it possible to output results in any form, including structured formats such as CSV or JSON.

Content output

Result format:

$data

Result example:

<!DOCTYPE html><html id="XenForo" lang="ru-RU" dir="LTR" class="Public NoJs uix_javascriptNeedsInit LoggedOut Sidebar  Responsive pageIsLtr   hasTabLinks  hasSearch   is-sidebarOpen hasRightSidebar is-setWidth navStyle_0 pageStyle_0 hasFlexbox" xmlns:fb="http://www.facebook.com/2008/fbml">
<head>
<!-- Google Tag Manager -->
<!-- End Google Tag Manager -->
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge,chrome=1" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<base href="https://a-parser.com/" />
<title>A-Parser - парсер для профессионалов SEO</title>
<noscript><style>.JsOnly, .jsOnly { display: none !important; }</style></noscript>
<link rel="stylesheet" href="css.php?css=xenforo,form,public,parser_icons&amp;style=9&amp;dir=LTR&amp;d=1612857138" />
<link rel="stylesheet" href="css.php?css=facebook,google,login_bar,moderator_bar,nat_public_css,node_category,node_forum,node_list,notices,panel_scroller,resource_list_mini,sidebar_share_page,thread_list_simple,twitter,uix_extendedFooter&amp;style=9&amp;dir=LTR&amp;d=1612857138" />
<link rel="stylesheet" href="css.php?css=uix,uix_style&amp;style=9&amp;dir=LTR&amp;d=1612857138" />

Server response code

Result format:

$code

Result example:

200
note

The result format [% response.Redirects.0.Status || code %] outputs status 301 if the request was redirected; otherwise the usual $code is shown.

Getting query data

The $response variable helps to get information about the request and server response.

Result format:

$response.json\n

Result example:

{
"Time": 3.414,
"connection": "keep-alive",
"Decode": "Decode from utf-8(meta charset)",
"cache-control": "max-age=3600,public",
"last-modified": "Tue, 18 May 2021 12:42:56 GMT",
"transfer-encoding": "chunked",
"date": "Thu, 27 May 2021 14:18:42 GMT",
"Status": 200,
"content-encoding": "gzip",
"Body-Length-Decoded": 1507378,
"Reason": "OK",
"Proxy": "http://51.255.55.144:25302",
"content-type": "text/html",
"Redirects": [],
"server": "nginx",
"Request-Raw": "GET / HTTP/1.1\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, br\r\nAccept-Language: en-US,en;q=0.9\r\nConnection: keep-alive\r\nHost: a-parser.com\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)\r\n\r\n",
"URI": "https://a-parser.com/",
"HTTPVersion": "1.1",
"Body-Length": 299312,
"Decode-Mode": "auto-html",
"etag": "W/\"60a3b650-170032\"",
"Decode-Time": 0.003,
"IP": "remote",
"expires": "Thu, 27 May 2021 15:18:42 GMT"
}

Getting redirects

Query:

https://google.it

Result format:

$response.Redirects.0.URI -> $response.URI

Result example:

https://google.it/  -> https://www.google.it/

JSON with redirects

Result format:

$response.Redirects.json

Result example:

[{"x-powered-by":"PleskLin","connection":"keep-alive","URI":"http://a-parser.com/","location":"https://a-parser.com/","date":"Thu, 18 Feb 2021 09:16:36 GMT","HTTPVersion":"1.1","Status":301,"content-length":"162","Reason":"Moved Permanently","Proxy":"socks5://51.255.55.144:29683","content-type":"text/html","IP":"remote","server":"nginx","Request-Raw":"GET / HTTP/1.1\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, br\r\nAccept-Language: en-US,en;q=0.9\r\nConnection: keep-alive\r\nHost: a-parser.com\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)\r\n\r\n"}]

Server response status

Result format:

$reason

Result example:

OK

Server response time

Result format:

$response.Time

Result example:

1.457

Getting page size

As an example, the size is presented in three different formats.

Result format:

[% "data-length: " _ data.length _ "\n";
"Body-Length: " _ response.${'Body-Length'} _ "\n";
"Body-Length-Decoded: " _ response.${'Body-Length-Decoded'} _ "\n" %]

Result example:

data-length: 70257
Body-Length: 23167
Body-Length-Decoded: 75868
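The three sizes differ because the body travels compressed and the decoded text is measured in characters rather than bytes. A small Python illustration of the relationship (the variable names mirror the fields above; the sample page is made up):

```python
import gzip

# Body-Length is the compressed HTTP body, Body-Length-Decoded its size
# after decompression, and data.length the number of *characters* of the
# decoded text, which is smaller than the byte count for multi-byte
# encodings such as Cyrillic.
html = "<html>" + "<p>пример страницы</p>" * 1000 + "</html>"

body_length = len(gzip.compress(html.encode("utf-8")))  # compressed bytes
body_length_decoded = len(html.encode("utf-8"))         # decoded bytes
data_length = len(html)                                 # characters
# body_length < data_length < body_length_decoded for this page
```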

Results formatting

A-Parser allows you to process results directly during parsing. In this section, we have provided the most popular cases for the Net::HTTP parser.

Output of H1-H6 headers

Add a regular expression (the Use regular expression option): <(h\d)>(.+?)<\/\1>

  • in the "Apply to" field, select $pages.$i.data - Page content
  • in the field opposite the regular expression, set the sg modifiers (Array will be selected automatically as the result type)
  • in the Name field, specify headers; in "$1 to", specify tag; click the plus sign opposite and in "$2 to" specify content
  • in the Result format field, use $p1.headers.format('$content\n')
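The same extraction can be sketched in Python, where re.S plays the role of the s modifier and re.findall provides the g behavior (an illustration only, not what A-Parser runs internally):

```python
import re

# The regex <(h\d)>(.+?)<\/\1> with the "sg" modifiers: re.S is the
# s modifier (dot matches newlines); re.findall gives the g behaviour.
html = "<h1>Title</h1>\n<p>...</p>\n<h2>Multi\nline</h2>"
headers = [{"tag": tag, "content": content}
           for tag, content in re.findall(r"<(h\d)>(.+?)</\1>", html, re.S)]

# mimics the Result format $p1.headers.format('$content\n')
result = "".join(h["content"] + "\n" for h in headers)
```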

Download example

How to import an example into A-Parser

eJx9VE1v2zAM/SsFkcMGBEFy2MW3NFiKDVnTNekpyEGNaUOLLGmSnDUw/N9H+ktO
N/Rmko+PfCTlCoLwZ//k0GPwkBwqsM03JLD7miQPxuQK7zZSn/3di5a/S4QpWOE8
OoYfRigKpJiJUgWYVhCuFonEXNA5mXJQpmRbZ96uDoOT6Ml3Eapk2GI+n0P9QZrI
8WRKHWLO4gO44n4tOk4bZcxHKWUvhuRyy8kBSJMlByfDcdoh9i3cU8c6h977oMyr
UJAEV2J9PPYsfm1cIXh4E7uYdZMcgjtxwb2hYCZVrOzXZD2KgqtMUhGQo7OsIfr0
eRbemEGkqQzSaKHaCjz7WLVbTALaEJY+ebprZwpyBWwI2HntuzvApLGjyp9tDiSZ
UB6n4KnVtaBG0vcRGdCJYNzWcj/kr8DopVIbvKCKsIb/vpQqpUNZZpT0rUv8P2T7
D0c9yBuXokX/cdTDwNJY99sfMSs1G5OT8vS1WWYhA9l+1VxPAnNynhHtMLNHnllh
HA5lOuauOr0Ni5qvKq5saaPrRsbNWm6dJ6MzmW+7S+2Rpd7TA9zqlSmsQtalS6Vo
LR6f43ksfbcGNmKD75NXTQmW3r9DCMYo/33XtmqdpPP7wg0WNMlx1Y7yJJR6ed6M
IxBPqjknz7QnutPc0AWRivo4/BGG/0g1/i8kVU1r+eWfWhBrYAj5aBieZs6P+S/t
6pW4


Collecting meta tags

Add a regular expression (the Use regular expression option): (<meta[^>]+>)

  • in the "Apply to" field, select $pages.$i.data - Page content
  • in the field opposite the regular expression, set the g modifier (Array will be selected automatically as the result type)
  • in the Name field, specify meta; in "$1 to", specify item
  • in the Result format field, use $p1.meta.format('$item\n')
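In Python terms, the g modifier corresponds to collecting every match with re.findall (a hypothetical sketch of the same extraction):

```python
import re

# The regex (<meta[^>]+>) with the g modifier: re.findall collects
# every match on the page.
html = ('<meta charset="utf-8" />\n'
        '<title>Example</title>\n'
        '<meta name="viewport" content="width=device-width" />')
meta = re.findall(r"(<meta[^>]+>)", html)

# mimics the Result format $p1.meta.format('$item\n')
result = "\n".join(meta)
```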

Download example


eJxtVO9v2jAQ/V8spBatg/XDvkRTJYqEtokRRtNPNJMscsk8/Gu2Q0FR/vfeJSGB
bp/ie37v3Z3PTsUC93u/duAheBZtK2abNYtYBjkvZWB3zHLnwdH2lq0gRNHXJFkj
3jMqFk4WULMrfTBqA74VunYRbdGiAE8SHjhLaaeAIwpuvygIfPvrIf3wMGYdnrRm
Re/QAdw5fkKw+a64IozkPY9KZCKAYmmdpj26ME5xamlk7yckmOQNcnszIvLLi74Z
Dx5P/ACJQXYuJAzwAqMu5wi7ANo9+4wn4Uj98iwTQRjNZZuS6hnKeNbib0l6bZCL
SyfAL5xRCAVoDAg8ncvdslET03mVjfZnq2FRzqWHO+ax1AXHQrL3O9iX48G42FI9
iFfM6JmUSziAHGiN/2MpZIbzneUo+tYJ/0+J//Go+/YuUx3AvTqsoXdposf4x6DK
zNIU58OQQomAsZ+bUtOkPiG4B7D9ma2IpoyDPk3n3GXHK2xBZ8gcRjazA3TVxtVY
rsGd0bkoYmzAiQzOzFIn+E5iPTfKSqC+dCkljsXDZrgeM9+NgYKhwPfieZPi6oUF
Y6T//tSWap3A6/eZClR4kpdZO8sdl/J5s7zcYcOVwuB3CNZH0yn/2L7dyc6o6avY
i6nQGRynjDwCFAZvF3ZYp/0j738F1cVTj6oaJ/bHr1sOtUcMxPCcPI6DRff1GzD1
gDE=


Pagination examples

Use pages

The Use pages function steps through pagination with a predetermined number of pages. As an example, take one of the categories of the product catalog https://www.proball.ru/catalog/myachi/. Pagination panels appear at the top and bottom of the page; clicking a page number shows how the page parameter is appended to the URL in the browser's address bar:

https://www.proball.ru/catalog/myachi/?PAGEN_1=1

Use pages is essentially a counter: it substitutes successive numbers into the $pagenum variable, incrementing them by the specified step.


The $pagenum variable is placed at the appropriate point in the parser's Query format. The Use pages function then iterates over all the values and substitutes them into the request; in effect, the query becomes

https://www.proball.ru/catalog/myachi/?PAGEN_1=$pagenum

where $pagenum takes the page numbers from 1 up to 4 with a step of 1, covering the whole required range of pages. This is also the limitation of the method: the number of pages in the pagination must be known in advance. When parsing several categories at once, each will have a different page count; simply specifying a larger assumed number of pages works, but it is wasteful, so a better solution is discussed further on.
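The counter behavior can be sketched in a few lines of Python (an illustration only; start, count, and step mirror the Use pages settings from the example):

```python
# Use pages is essentially a counter substituted into $pagenum.
base = "https://www.proball.ru/catalog/myachi/?PAGEN_1={pagenum}"
start, count, step = 1, 4, 1
urls = [base.format(pagenum=start + i * step) for i in range(count)]
# one request per page: PAGEN_1=1, 2, 3, 4
```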

Download example

eJx1VNtu2kAQ/ZVqhJSgUAhV+2I1jSgqvSgCmtAnjKKtPTZu1rubvXAR8r931jY2
pOmTvWfPXM7M7BzAMvNk5hoNWgPB8gCq/IcAnME5SxF6oJg2qP31EqZog+DbYjEn
PMaEOW6hdwC7V0g2coNaZ7E3ymI6p1LGY1meN4w7oizfXV+vitYicsbK/B5N6Qh0
9RMsKWiKxgdhlsHK36S4I4OP7E3EmTE3IaQU6m1mMX98XCOLUYcQhuZqrTGh28v+
1W03hE9Q2y6qgGkTpQaY1mxPYPmdstxjPBNPpiF65SUEp5lLZTMpzFFqXS7TSoWh
9xrHmecxDsGhEvVgUdW3/hxJJ3y930NR/L+Szw71PpE6Z/YkQqeEb+ejr1+mj8Ob
jvcnXA7FatUkP6mMiKyG/VJYv/JzebG2VplgMNhut32l5W/GeV+7jieFobjothV4
YBtcSHKSZBxbeEKnumQdahT626P3bj8ym7MKVJn4arbZ/RLZcylFSOJ6ORmaiZY5
QRZ3tgb3RxXLWrMfCVfa/qxsIEgYN9gDQ6lOGCUSv7yhUdHMSj2rO0cNkWLE+R1u
kLe00v9nl3GaKDNKyOh7bfg6ZfaPj6KRdxqKOrrVlENLiuWdTI/anxBVU42pR3Kp
sXFQR6790otVKPxgtM0YqRY6S/Cs4OdgJEWSpbN62I5MJxa0FmZiLHPF0WcsHOc9
P+P3beNHpi6wP7QJvjQelyEorWZdgJWSmx8PVapKZzRYH5rmE/r6XA4iWgVcpoN8
z6J1NiBbQjCVNA1+c5XjQi9Hl/J6gDvFRIxUEasdFqti1ayyZuEdThZacKAHCH/M
vOJ4VZ5BGJXHUBcgGBZ/AXULzRU=

Check next page

Check next page is another way to organize pagination. To move to the next page, you specify a regular expression that returns the link to that page; this is the more convenient and most commonly used method. It cannot be applied to https://www.proball.ru/catalog/myachi/, however, because the page code contains no links to the next pages - they are generated by a script. So take http://www.imdb.com/search/name?gender=female as an example instead. It has pagination at both the beginning and the end of the list, and inspecting the source code reveals a link that leads to the next page:


  • in the Next page RegEx field, write a regular expression
  • in the Limit field, specify the number of pages to be parsed


In the example, the limit is set to 4. The limit determines how many pages the parser goes through; in our case 5 pages will be parsed, since counting starts from 0. With a limit of 0, the parser keeps working until it has passed through all the pages, regardless of their number - convenient when you need to parse all the results from all pages.
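The follow-the-next-link logic, including the limit that counts from 0, can be sketched in Python. The crawl function and the simulated site are hypothetical names for illustration, not A-Parser's internals:

```python
import re

def crawl(fetch, start_url, next_re, limit=0):
    """Sketch of Check next page: a regex extracts the link to the next
    page and the parser follows it while it exists.  With limit=4 five
    pages are fetched (counting starts from 0); limit=0 removes the cap."""
    pages, url, hops = [], start_url, 0
    while url:
        html = fetch(url)
        pages.append(html)
        if limit and hops >= limit:
            break
        m = re.search(next_re, html)     # e.g. the "Next" button's href
        url, hops = (m.group(1) if m else None), hops + 1
    return pages

# Simulated site: ten pages, the last one has no next-page link
site = {f"/p{i}": f'page {i} <a href="/p{i + 1}">next</a>' for i in range(9)}
site["/p9"] = "page 9 (last)"

pages = crawl(lambda u: site[u], "/p0", r'href="([^"]+)"', limit=4)
```

With limit=4 the crawl stops after five pages; with limit=0 it would continue until page 9, where the regex no longer matches.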

Download example

eJx1VGFT2kAQ/SuZG2aUggGq4DSijnXG2qpAlY4fCM6cZANXL7n07iJYhv/e3RAS
sO0XuH3ZvX1vd2+XzHLzYgYaDFjDvNGSJdmZeWwyg8lLDxZ2wKfA6izh2oAmpxHr
gfW86+FwgHgAIU+lZfUls28JYKR6Ba1FQEEiQHuqVHCpMvuVyxRdRh+bzfHq/xEp
ZjrAtLEtY9id+i2k5I2223T2H0UcqLlxekOn47ZOHLQ7RyfOonNUdS6SRMIjPN8I
22gfHruHHWf/5np4d1t3pHgB5wsKU1XncqZVBI32J7fpHjaPW26r1XYeeMi1yMPY
FsVJaqyK7sFkWpleH7wR1mUKhurALWdj+jKFBQZ0TcJjZyK5Mac+k8JY1CQsRAdI
HRZuzWdnvm8+7Lu18yodur7foBhCa10+ejob19Yeo6fueMuJn7E8zXDNbVoQygGu
NX9DMPvv8YgwSm0KR+oji9PoGZGYHHbakVihYrPpxvtJ2DSky52ZhhDVrTUwIv5O
MFXnIMZYh34y02fELAgEJeGSecvdDLciEjSAR2y1Go8LwldKR5zwStJyMzFumEH7
exUS4lRIh+/He9VS5QN/haHCoFBIKOErtPKyVLBvQF83t1Vdu7A7DNeZqWIlmx+x
+JUVIVboi0ctwFzhQCFkIbuAwLcN6xGrZDZNSJrFfl/HMC/k0kCdGaR6xZFI8P4L
itXcKt3P24IFU/GFlLfwCrJ0y+7/nAoZ4DO9CDHoax74b5f+X3esCnnbqfBtzjVy
KJ0CdaumG+0vAElRjR4hkdJQXJBnzu/FTZNATENVNgOfawHtENwp+C44UXEopv18
bWw803iI66wfX6oINwAxjlMp67RO7svGX5i8wGSUBN8HX2YpkFax4JhVSppvD2uq
iRY4WO2i+YjOrE28RmM+n7siCp7diYoaBriezBo0m+e40FDCaQgRz6ZxgqM3VTgU
tHizqaHhMZQKFrgJAsCyWJ3CarwaFxu42NbLrT3sLVfYmJ9msPYhaeSBGNbIYCuY
11r9ASIaBUM=

As mentioned above, the number of pages in Use pages can be limited dynamically. To do this, use Use pages and Check next page together. Let's add the Check next page function to the example considered when describing Use pages:


The two functions cooperate as follows: Use pages steps through the pages, while Check next page verifies that a next page exists. As soon as Check next page finds no next page, parsing of the category stops without waiting for the full count specified in Use pages. Combining the two makes the parser more efficient, saving time and resources.

Download example

eJx1VNtuGjEQ/ZXKQkqiEC6V+rJqGtGo9KIIaEKfWBS5u7OLi9d2bC8EIf69M3uF
XJ52fTyXM2fGs2eeu7WbWXDgHQsWe2aKfxaw3MGMp8C6zHDrwNL1gk3AB8GP+XyG
eAwJz6Vn3T3zOwPoozdgrYjJScR4TrWOb3Vx3nCZo8ni42CwPLQeUe68zu7BFYGY
LX+CBSZNwVES7jlb0k0Kz+jwmX+IJHfuOmQpproSHrLHxxXwGGzIwtBdriwkeHve
u7y5CNkXVvnOy4Rpk6UCuLV8h2DxnfCMMCnU2jWGVHkBsWPm2nihlatLreRybals
SFHjWJAdlyzYl0U9eDDVLZ0jnSvSezgYsMPhfS2fcrC7RNuM+6McnQK+mY2+f5s8
Dq87FFHlGXufaLSCaD2BZ191t45EQl8pxK8oxjVpGV+G7FUNJ/53IhNEnqgvl41g
45Im0jPDXiFmr2R+frby3rig399utz1j9V8uZc/mHTIKQ3V20ar+wDcw1xgkERJa
eIynqk0d5Ax0W0e/6EVuc8K4ZEIdbNn9UeKpKFlptCUBBbix1RlCHgurwF1dxaJS
mcYwL3x/lz4sSLh00GUOqY45Eolf3uB4Wu61nVZNQAG1Gkl5BxuQrVkR/2suJE6x
GyXo9LNyfNtk+irGoSnvOBXO0NYih9Yo1nc6rWtfA5hGjQkhmbbQBKgyV3FxSxhQ
NEJtM0amhU4Ingh+CkZaJSKdVuNdW+Zqjqtoqm51ZiQQY5VL2aV3dd82fuQqgenQ
EnzpfFukoKGuVxTzWkv366GkaqzAwfrUNB/Rt+eyH+GrkDrtZzserUQffRGBVOM0
0LYsxgWfmC3K6zJ4NlzFgIoMD8vDstmdzYbdH23QYI/vnf1zs9KGSiILxFAbhy2g
KP8Byg3yDQ==

Using substitution macros

Substitution macros allow you to implement sequential substitution of values from the specified range.


This preset will work as follows. By specifying the template in the query format:

$query?PAGEN_1={num:1:10}

we substitute the values from 1 to 10 (any range can be specified) directly into the query itself. This produces one request per page of the required range, like:

https://www.proball.ru/catalog/myachi/?PAGEN_1=1
https://www.proball.ru/catalog/myachi/?PAGEN_1=2
...
https://www.proball.ru/catalog/myachi/?PAGEN_1=10

Using substitution macros to navigate through pagination is similar to the Use pages function and shares the same limitation: a specific range of values must be given in advance. Its advantage is that substitution macros can insert not only numbers but also text, such as words or expressions. This makes it possible to insert the necessary parts into requests more flexibly, or to assemble requests from parts stored in different files if the task requires it.
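A hypothetical Python expansion of the {num:a:b} macro shows the idea (expand_num_macro is an invented name; A-Parser performs this substitution internally):

```python
import re

def expand_num_macro(template):
    """Expand a {num:a:b} substitution macro into one query per value."""
    m = re.search(r"\{num:(\d+):(\d+)\}", template)
    if not m:
        return [template]
    lo, hi = int(m.group(1)), int(m.group(2))
    return [template[:m.start()] + str(n) + template[m.end():]
            for n in range(lo, hi + 1)]

queries = expand_num_macro(
    "https://www.proball.ru/catalog/myachi/?PAGEN_1={num:1:10}")
# ten queries, PAGEN_1=1 .. PAGEN_1=10
```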

Download example

eJxtVFtP2zAU/iuTVQkQXUsn7SUaQx2iu4i1HXRPTYW85CT1cGzPx+lFUf47x26a
AOMp8Xcu3+dzccUcx0ecW0BwyKJlxUz4ZxH7yROr8WbH+sxwi2C9fcmm4KLo22Ix
JzyFjJfSsX7F3N4ABekNWCtSIKNI6ZxrnV7rcN5wWZLL8sPFxaruIpISnS7uAEMi
Zg8/0ZJIc0BPwh1nK2/JYUcBn/i7RHLEy5jlRPVeOCgeHtbAU7Axi2M8X1vIyHo6
OL86i9ln1sQuDoR5y9IA3Fq+JzB8p7zwmBTqEVtHf/MAsVW9WrXoRNuC+1L1zGgQ
IgZZgE5P1s4ZjIbD7XY7MFb/4VIObNnzTnGsTs661Pd8AwtNSTIhoYMndGq09KgC
4K3H7GeDBDdecJoKJ7Ti8qDEy+zU/Vbiny84U5p86dcKwInVBUEOdq4B98dbLFkv
nK/m468304fRZaXKIhpFo4val78M6X4d0rAo4xKhz5DUTzhpS19bqC2WO21nxksk
vGJajaW8hQ3Izi1QfimFpO7hOKOg703g2y6z/3LU7Y2fU9Ecbi1p6JxSfavzYzke
AUxboKlHCm2hTdAwN3lpPQwoP85df8amg14IfNGDl2CiVSbyWbMiR89SLWgHZ+pa
F0aCV6xKKangCHfdLIyxKbA/dAJfB18HCpLVriZzWkv8cX+QaqygWfvYzgOhb4/q
MKG1kzofFnuerMWQYgmBXNt9eCbCBPlBQs8BO8NVClQPZ0uoaUvaR6N9W6pnT0dU
1dSRvzg/+Pg7eQ/CqDhIPWDRqH4C36ybyg==

Using Page as query

To reduce memory consumption, pagination can be configured with the Page as query option. When it is enabled, each subsequent page is substituted into the queue as a separate, independent request, so page contents are not accumulated in memory. Page as query also lets you choose whether to increase the query level (Increase, similar to the $tools.query.add tool) or keep it (Keep).
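The memory difference can be modelled with a Python generator. This is only an analogy; pages_as_queries and pages_accumulated are invented names:

```python
def pages_as_queries(fetch, urls):
    """Sketch of Page as query: each page is handled as its own
    independent request, so fetched pages are not accumulated in
    memory (the $pages array stays small)."""
    for url in urls:
        yield fetch(url)          # one page at a time

def pages_accumulated(fetch, urls):
    """Default multipage behaviour: all pages are held at once."""
    return [fetch(url) for url in urls]
```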


Download example

eJx1VNty2jAQ/RWPJg/QEts0gUydpBnCDE0bAiQhkwfgQbHXoCJbriQDKcO/dyXM
tc0LeFd7OXv2siSaqqnqSVCgFQkGS5LZbxKQHh1DQ3Vg/piDfCcVklGpQBqrAemA
DoK7fr+H+ghimnNNKkui3zNAVzEDKVkE+MgilMdCRE1h5RnlOZoMvvj+aPWxR46Z
ThFAqnc+5EH8YZxTr+b6TumVpZGYK6fTd+pu9dJBuX5+6Szq52WnkWUcXuHtnmmv
dnbhntWd0v1d/6FdcTibgvMdwqkoO82JFAl4ta+u7575F1W3Wq05zzSmkhVuZB9i
pplI1QZhOMEgHVhow9MeyCvqTCTE10NScj/flIdkOFSfQ06VQhVnSmNhGbqcpujr
mB8rDgnGoFHETBLKSbA8zNBmCTNd8cnqY0zZccsKUFrmcBx9CpBZszbMADX+ajUa
VQh2H3upWkIm1OQ7yaquZpqbEou3ZzqDvsC3mO2rWyh1aGJgnURUg3l1YxunVHb1
Qh8UuE5ghmmX9CVlvy2HqUBb/JQMVAt7hCoNNoBRvm/ADciJlQmGyK3v49qHBDHl
CktWCLVFEUh0/MKwD1QL2S0YREZE2uDcsrEzs/Fvc8YjnPxGjE4/Csf/m3T/ibHa
lrefCsd9LhHDNoqVbrsPO69ItMUYK4/esG5u2o+yaoo8Xc9BxbZwy1nHcJYICds0
ReQiO+54BqmZkl3LcE+2qoMyDtqyp1wSJXIZmgX2ESHV1HBfDKOEMSxscPOPm2Dn
5lvJ/XRTvhoOvbW4MeivnczsamF6uZ6y0QoDhiKN2bhb3IQNmjzt47Hqpk2R4Hob
7tKc84q5FU+7EWyootVG2JFw7Ny0KQy9m+uFMARXP5/XdGSSIaqaAZtgt/azFiFD
yvnLU3v/hezGFoWJ1lngefP53GVJ9OaGIvEUUBlOvBT35AavG9J6HUNC7R6FuDRj
geOM5RoWinu7Pc7LvasbLPEMkF+qt7YxtRoL1CFpCvtPgurqL0u6AK8=

Possible settings

  • Good status (default: All) - selects which server response codes are considered successful; if the server returns a different response, the request is retried with a different proxy
  • Good code RegEx - lets you specify a regular expression to check the response code
  • Ban Proxy Code RegEx - lets you ban a proxy for a certain time (Proxy ban time) based on the server response code
  • Method (default: GET) - request method
  • POST body - content sent to the server when using the POST method; supports the variables $query (URL query), $query.orig (original query), and $pagenum (page number when using the Use Pages option)
  • Cookies - lets you specify cookies for the request
  • User agent (default: the User-Agent of the current Chrome version, inserted automatically) - the User-Agent header sent when requesting pages
  • Additional headers - lets you specify arbitrary request headers, with support for template-engine capabilities and variables from the request builder
  • Read only headers - read only the response headers; in some cases this saves traffic when the content does not need to be processed
  • Detect charset on content - recognize the encoding based on page content
  • Emulate browser headers - emulate browser headers
  • Max redirects count (default: 7) - maximum number of redirects the parser will follow
  • Follow common redirects - allows http <-> https and www.domain <-> domain redirects within the same domain without counting against the Max redirects count limit
  • Max cookies count (default: 16) - maximum number of cookies to save
  • Engine (default: HTTP (Fast, JavaScript Disabled)) - choice of engine: HTTP (faster, without JavaScript) or Chrome (slower, with JavaScript)
  • Chrome Headless - if enabled, the browser window is not displayed
  • Chrome DevTools - enables the Chromium debugging tools
  • Chrome Log Proxy connections - if enabled, information about Chrome connections is written to the log
  • Chrome Wait Until (default: networkidle2) - determines when the page is considered loaded
  • Use HTTP/2 transport - whether to use HTTP/2 instead of HTTP/1.1; for example, Google and Majestic immediately ban clients that use HTTP/1.1
  • Don't verify TLS certs - disables TLS certificate validation
  • Randomize TLS Fingerprint - allows bypassing site bans based on TLS fingerprint
  • Bypass CloudFlare with Chrome - automatic bypass of the CloudFlare check
  • Bypass CloudFlare with Chrome Max Pages (default: 20) - maximum number of pages when bypassing CF via Chrome
  • Bypass CloudFlare with Chrome Headless - if enabled, the browser is not displayed during CF bypass via Chrome