The “Settings” section lets you set up convenient crawling conditions: specify the audit frequency, define limits and restrictions, upload your own lists of pages to be audited, and more.
Schedule
Here you can create a schedule that will tell the platform when to run audits.
The following frequency settings are available:
1. Weekly – the audit will run every week on the set day of the week.
2. Monthly – the audit will run once a month on the set day of the month at the set time.
Note that the time can only be set in the GMT time zone (see the sketch after this list).
3. Manually – restart the audit at any time by pressing the corresponding button in the Report section, regardless of the scheduled settings.
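Since scheduled times are stored in GMT, it can help to double-check what a slot means in your local time zone. Below is a minimal sketch, assuming Python 3.9+ with the standard zoneinfo module; the audit time and time zone are arbitrary examples, not values taken from the platform.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Example only: a weekly audit scheduled for 09:00 GMT.
audit_time_gmt = datetime(2024, 6, 3, 9, 0, tzinfo=timezone.utc)

# Convert to your local time zone to see when the audit actually starts.
local_tz = ZoneInfo("America/New_York")  # replace with your own zone
print(audit_time_gmt.astimezone(local_tz))  # 2024-06-03 05:00:00-04:00 (EDT)
```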
Source of pages for audit
In the settings, you can choose which pages the system needs to crawl:
1. All pages of your site, just like Googlebot;
2. Include or exclude subdomains;
3. Crawl only pages from the XML sitemap;
A link to the XML sitemap will be added automatically once the first check is completed, as long as it is located at the following address – domain.com/sitemap.xml. You can also upload the XML sitemap manually by pressing the corresponding button.
4. Upload your own list of pages in .TXT or .CSV format for manual crawling (for example, if you need to crawl new pages or pages blocked in robots.txt).
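If you prefer to upload your own list of pages, a plain text file with one URL per line is easy to prepare. Here is a minimal sketch in Python; example.com, the page URLs, and the output file name are placeholders, and the platform’s exact upload requirements may differ, so treat this as an illustration only.

```python
import urllib.request

# Example only: check whether a default sitemap exists at domain.com/sitemap.xml.
sitemap_url = "https://example.com/sitemap.xml"
try:
    with urllib.request.urlopen(sitemap_url, timeout=10) as resp:
        print("Sitemap found, HTTP status:", resp.status)
except Exception as err:
    print("No sitemap at the default address:", err)

# Write a custom page list (one URL per line) for manual upload as a .TXT file.
pages = [
    "https://example.com/new-landing-page",
    "https://example.com/blog/just-published-post",
]
with open("pages_to_audit.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(pages))
```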
Rules for scanning pages
You can select specific rules for crawling your web pages or create your own.
1. Take robots.txt directives into account. You can choose between the following options:
- Yes – the site will be scanned according to the list of valid instructions in the robots.txt.
- No – the instructions in the robots.txt will be ignored.
- Other rules – you can create your own rules and include or exclude certain website pages from scanning.
2. Ignore URL parameters
You can specify which page URL parameters should be ignored when scanning (see the sketch after this list).
There are two available options:
1. Ignore all parameters – all URL parameters and their values are excluded.
2. Ignore custom parameters – you can manually specify which parameters will be ignored.
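To illustrate what these two rule types mean in practice, here is a minimal Python sketch using only the standard library: it checks a URL against robots.txt directives and strips either all URL parameters or a custom set of them. It only mirrors the ideas described above, not the platform’s actual implementation; example.com and the parameter names are placeholders.

```python
from urllib import robotparser
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# 1. Take robots.txt directives into account.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://example.com/private/page"))  # True or False

# 2. Ignore URL parameters.
def strip_params(url, ignored=None):
    """Drop all query parameters, or only the ones listed in `ignored`."""
    parts = urlsplit(url)
    if ignored is None:                      # "Ignore all parameters"
        query = ""
    else:                                    # "Ignore custom parameters"
        kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in ignored]
        query = urlencode(kept)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

url = "https://example.com/catalog?utm_source=mail&page=2"
print(strip_params(url))                          # drops every parameter
print(strip_params(url, ignored={"utm_source"}))  # keeps page=2 only
```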
Parser settings
In this section, you can choose a crawling bot (user agent), as well as provide access to pages that are blocked for web crawlers.
1. User agent. Choose the bot that will crawl your site. All website pages will be checked regardless of which user agent is chosen (even if it is Googlebot-Image).
2. Authorization on restricted pages.
You can grant access to restricted pages to SE Ranking’s Gentle bot. To do this, enter the login and password into the appropriate fields.
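As an illustration of what user-agent selection and authorization look like from a crawler’s side, here is a minimal sketch using the requests library. It assumes the restricted pages use HTTP Basic authentication, which is only one way a site can protect pages; the URL, user agent, and credentials are placeholders, not SE Ranking’s actual mechanism.

```python
import requests

# Example only: fetch a restricted page with a chosen user agent and credentials.
response = requests.get(
    "https://example.com/members-only/",
    headers={"User-Agent": "Googlebot-Image"},  # the user agent selected for the crawl
    auth=("login", "password"),                 # assumes HTTP Basic authentication
    timeout=10,
)
print(response.status_code)
```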
Limits and restrictions
1. Maximum number of pages to be scanned. Select the maximum number of pages for scanning within the limits of your subscription plan.
2. Maximum scanning depth. Select the crawl depth for scanning.
3. Maximum number of requests.
Depending on your server's capacity, you can increase the number of requests per second to speed up scanning, or decrease it to reduce the load on the server. The recommended rate is 5 requests per second (the sketch after this list shows how all three limits work together).
You can set different limits for each site in your account.
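The three limits interact in a straightforward way: scanning stops once the page or depth limit is reached, and the request rate caps how fast pages are fetched. Here is a minimal sketch of that logic in Python; the limit values, start URL, and the fetch_links callable are placeholders, and real crawlers (including SE Ranking’s) handle far more edge cases.

```python
import time
from collections import deque

MAX_PAGES = 1000         # maximum number of pages to scan
MAX_DEPTH = 5            # maximum scanning depth
REQUESTS_PER_SECOND = 5  # recommended request rate

def crawl(start_url, fetch_links):
    """Breadth-first crawl bounded by page count, depth, and request rate.

    `fetch_links` is a placeholder callable that downloads a page and
    returns the URLs found on it.
    """
    queue = deque([(start_url, 0)])
    seen = {start_url}
    scanned = 0
    while queue and scanned < MAX_PAGES:
        url, depth = queue.popleft()
        links = fetch_links(url)             # one request...
        time.sleep(1 / REQUESTS_PER_SECOND)  # ...then wait to respect the rate limit
        scanned += 1
        if depth < MAX_DEPTH:
            for link in links:
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
    return scanned
```

Calling crawl(start_url, my_fetch) with any HTTP client plugged in as fetch_links would stop after 1,000 pages or at depth 5, whichever comes first, while keeping the load at roughly 5 requests per second.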
Report setup
When auditing website parameters, SE Ranking relies on current search engine recommendations. In the “Report setup” section, you can change the parameters that the platform takes into account when crawling sites and compiling reports.