
7 Common mistakes when using proxies for scraping Google


Posted on April 15, 2019 in Product education, Proxies for bots articles, Proxy 101, Proxy server, Search Engine Optimization

If you are into web scraping for valuable data, you probably know that the biggest mistake you can make is not using a proxy server at all.

However, there are quite a few other serious mistakes that can turn your scraping session into a failure. Below are the most common mistakes people make when using proxies to scrape Google.

USING A FREE PROXY

It is great that you know proxy servers are crucial for crawling the web successfully. Still, you shouldn’t use just any proxy you find online, especially not publicly accessible proxies.

Publicly available (often free) proxies are not secure, since the great majority of them do not support HTTPS connections. That is not the only reason free proxy servers are a bad idea: they can also monitor your connection, steal your cookies, and serve malware. If you plan a project on the scale of scraping Google, your best option is to invest in proxy servers from a trusted supplier.

NOT USING A PROXY POOL

If you use only one proxy to scrape the web, your crawling reliability, your geotargeting options, and the number of simultaneous requests you can make will all be significantly reduced.

The best solution to this problem is to use a pool of proxies and split your requests across a larger number of IPs. Depending on the number of requests, the target websites, the IP type and quality, and other factors, purchase a proxy pool that can fully support your scraping sessions.
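As an illustration, here is a minimal sketch of splitting requests across a pool, assuming Python with the requests library; the proxy addresses below are placeholders, not real endpoints.

```python
# Minimal sketch: rotate through a pool of proxies, one per request.
# The proxy addresses are placeholders - substitute your own endpoints.
import itertools
import requests

PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
    "http://user:pass@proxy3.example.com:8080",
]

proxy_cycle = itertools.cycle(PROXY_POOL)

def fetch(url):
    """Send each request through the next proxy in the pool."""
    proxy = next(proxy_cycle)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

response = fetch("https://www.google.com/search?q=proxies")
print(response.status_code)
```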

NOT MANAGING YOUR PROXY POOL

Even if you get a first-class proxy pool, you will still need to control it in order to retrieve high-quality data.

Here are the best tips for managing your proxy pool (a rough sketch putting them together follows the list):

  • Identify bans in order to be able to troubleshoot and fix issues
  • Retry requests in case of errors
  • Switch between user agents to avoid bot detection
  • Configure the proxy pool to target websites with different geographical locations
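Put together, those tips might look roughly like the Python sketch below, again using the requests library; the proxy addresses and user-agent strings are placeholders, and treating 403/429 responses as likely bans is an illustrative assumption.

```python
# Rough sketch: retry failed requests, treat 403/429 as a likely ban on the
# current proxy, and rotate user agents between attempts.
import random
import time
import requests

PROXIES = ["http://proxy1.example.com:8080", "http://proxy2.example.com:8080"]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

def fetch_with_retries(url, max_retries=3):
    for attempt in range(max_retries):
        proxy = random.choice(PROXIES)
        headers = {"User-Agent": random.choice(USER_AGENTS)}
        try:
            response = requests.get(
                url,
                proxies={"http": proxy, "https": proxy},
                headers=headers,
                timeout=10,
            )
            if response.status_code in (403, 429):
                continue  # likely banned or rate-limited; try another proxy
            return response
        except requests.RequestException:
            time.sleep(2 ** attempt)  # back off before retrying
    return None  # give up after max_retries attempts
```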

RUNNING INTO HONEYPOT TRAPS

Honeypots are traps used to detect and prevent attempts at unauthorized use of information systems. Some websites install them as links that are not visible to humans, but a spider can see them.

To avoid falling into a honeypot trap when following links, always check that the link is actually visible. Honeypot links often have their CSS set to display:none, or their color simply blends into the background color of the page you are scraping.

You may think that detecting honeypots is not easy, and unfortunately you are right: doing it properly requires some programming work.
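As a starting point, here is a simplified sketch in Python using BeautifulSoup that skips links hidden with inline styles. It is only a partial check under that assumption: honeypots hidden through external stylesheets or background-matching colors are not covered.

```python
# Simplified sketch: collect only links that are not hidden via inline CSS.
from bs4 import BeautifulSoup

def visible_links(html):
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for a in soup.find_all("a", href=True):
        style = (a.get("style") or "").replace(" ", "").lower()
        if "display:none" in style or "visibility:hidden" in style:
            continue  # skip links hidden from human visitors
        links.append(a["href"])
    return links
```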

On a more positive note, websites do not use honeypots frequently, so you might get away with this one after all.

FOLLOWING THE SAME CRAWLING PATTERN

Unless you change your bot’s settings, it will follow the same scraping pattern, which is fairly easy to detect. Humans, on the other hand, do not perform repetitive tasks with the same regularity that bots do.

Websites use advanced anti-crawling mechanisms to identify robots and prevent crawling. For example, they embed text in infographics and analyze user behavior: a human will read the infographic text and react to its content, whereas a bot will just “view” it as an image. To avoid being detected, vary your behavior on the pages you crawl: randomize the timing and order of your requests, and add occasional mouse movements, clicks, and other on-page actions.
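A minimal sketch of breaking a fixed pattern, assuming Python with the requests library, is to shuffle the crawl order and insert random delays; the URLs and delay range below are placeholders. Simulated mouse movements and clicks would additionally require browser automation, such as the headless setup discussed in the next section.

```python
# Minimal sketch: shuffle the crawl order and pause a random, human-like
# interval between requests so the pattern is not perfectly regular.
import random
import time
import requests

urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholders
random.shuffle(urls)

for url in urls:
    response = requests.get(url, timeout=10)
    time.sleep(random.uniform(2, 8))  # vary the delay between requests
```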

NOT USING A HEADLESS BROWSER

A headless browser is a browser that loads and renders web pages without displaying their visual layout, while still offering most of the capabilities of a regular browser.

Since some websites show different content to different types of browsers, headless browsers are great because they allow you to scrape richer content from such websites. You can try Google’s headless Chrome (driven through an automation tool such as Selenium), PhantomJS, or another headless browser to see the difference in the content you scrape.
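For example, a short sketch of driving headless Chrome through Selenium (assuming Selenium 4 or later and a local Chrome installation) might look like this; the proxy address is a placeholder.

```python
# Short sketch: fetch a page with headless Chrome via Selenium.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # run Chrome without a visible window
options.add_argument("--proxy-server=http://proxy1.example.com:8080")  # placeholder

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.google.com/search?q=proxies")
    print(driver.title)        # page title as rendered by the browser
    html = driver.page_source  # full rendered HTML, including JS-generated content
finally:
    driver.quit()
```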

However, note that most headless browsers use a lot of CPU, RAM, and bandwidth, so make sure to scrape the web from a powerful computer.

NOT RESPECTING THE WEBSITE YOU ARE SCRAPING

Finally, there is something many people ignore when scraping the web with proxies – ethics.

When you use proxies to scrape Google, you can make a huge volume of requests, and it is easy to get greedy and sloppy. In other words, you can overload the target website’s server with too many requests.

Instead of doing that, play nice and always limit your requests. This is a win-win: you get to scrape valuable data without doing any harm to the target website’s server.
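A minimal sketch of such self-imposed throttling in Python could look like the following; the five-second interval is an assumed, conservative value you would tune to the target site.

```python
# Minimal sketch: enforce a minimum interval between requests to the target.
import time
import requests

MIN_DELAY_SECONDS = 5  # assumed polite interval; tune it to the target site
last_request_at = 0.0

def polite_get(url):
    global last_request_at
    wait = MIN_DELAY_SECONDS - (time.monotonic() - last_request_at)
    if wait > 0:
        time.sleep(wait)  # hold back until the interval has passed
    last_request_at = time.monotonic()
    return requests.get(url, timeout=10)
```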