Headless Chrome Crawler

Distributed crawler powered by Headless Chrome

Features

Crawlers based on simple requests to HTML files are generally fast. However, they sometimes end up capturing empty bodies, especially when the websites are built with modern frontend frameworks such as AngularJS, React and Vue.js.

Powered by Headless Chrome, the crawler provides simple APIs to crawl these dynamic websites with the following features:

  • Distributed crawling
  • Configure concurrency, delay and retry
  • Support both depth-first search and breadth-first search algorithms
  • Pluggable cache storages such as Redis
  • Support CSV and JSON Lines for exporting results
  • Pause at the max request and resume at any time
  • Insert jQuery automatically for scraping
  • Save screenshots as crawling evidence
  • Emulate devices and user agents
  • Priority queue for crawling efficiency
  • Obey robots.txt
  • Follow sitemap.xml
  • Promise support

Getting Started

Installation

yarn add headless-chrome-crawler
# or "npm i headless-chrome-crawler"

Note: headless-chrome-crawler contains Puppeteer. During installation, it automatically downloads a recent version of Chromium. To skip the download, see Environment variables.
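
If you already have a Chrome or Chromium binary available, you can skip the download by setting Puppeteer's environment variable before installing. A minimal sketch; PUPPETEER_SKIP_CHROMIUM_DOWNLOAD is Puppeteer's own variable (check Puppeteer's environment variables documentation for the exact name in your version):

# Skip the bundled Chromium download during installation
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true yarn add headless-chrome-crawler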

Usage

const HCCrawler = require('headless-chrome-crawler');

HCCrawler.launch({
  // Function to be evaluated in browsers
  evaluatePage: (() => ({
    title: $('title').text(),
  })),
  // Function to be called with evaluated results from browsers
  onSuccess: (result => {
    console.log(result);
  }),
})
  .then(crawler => {
    // Queue a request
    crawler.queue('https://example.com/');
    // Queue multiple requests
    crawler.queue(['https://example.net/', 'https://example.org/']);
    // Queue a request with custom options
    crawler.queue({
      url: 'https://example.com/',
      // Emulate a tablet device
      device: 'Nexus 7',
      // Enable screenshot by passing options
      screenshot: {
        path: './tmp/example-com.png'
      },
    });
    crawler.onIdle() // Resolved when no queue is left
      .then(() => crawler.close()); // Close the crawler
  });

Examples

See here for the full examples list. The examples can be run from the root folder as follows:

NODE_PATH=../ node examples/priority-queue.js

API reference

class: HCCrawler

HCCrawler provides methods to launch or connect to a Chromium instance.

const HCCrawler = require('headless-chrome-crawler');

HCCrawler.launch({
  evaluatePage: (() => ({
    title: $('title').text(),
  })),
  onSuccess: (result => {
    console.log(result);
  }),
})
  .then(crawler => {
    crawler.queue('https://example.com/');
    crawler.onIdle()
      .then(() => crawler.close());
  });

HCCrawler.connect([options])

  • options <Object>
    • maxConcurrency <number> Maximum number of pages to open concurrently, defaults to 10.
    • maxRequest <number> Maximum number of requests, defaults to 0. Pass 0 to disable the limit.
    • exporter <Exporter> An exporter object which extends BaseExporter's interfaces to export results, defaults to null.
    • cache <Cache> A cache object which extends BaseCache's interfaces to remember and skip duplicate requests, defaults to a SessionCache object.
    • persistCache <boolean> Whether to persist the cache when closing or disconnecting from the Chromium instance, defaults to false.
    • preRequest(options) <Function> Function to modify options before each request. Return false to skip the request.
    • onSuccess(response) <Function> Function to be called when evaluatePage() succeeds.
      • response <Object>
        • response <Object>
          • ok <boolean> Whether the status code is in the range 200-299.
          • status <string> Status code of the request.
          • url <string> Last requested url.
          • headers <Object> Response headers.
        • options <Object> crawler.queue()'s options with default values.
        • result <Serializable> The result resolved from evaluatePage() option.
        • screenshot <Buffer> Buffer with the screenshot image, which is null when the screenshot option is not passed.
        • links <Array> List of links found in the requested page.
        • depth <number> Depth of the followed links.
    • onError(error) <Function> Function to be called when request fails.
      • error <Error> Error object.
  • returns: <Promise<HCCrawler>> Promise which resolves to HCCrawler instance.

This method connects to an existing Chromium instance. The following options are passed to puppeteer.connect().

browserWSEndpoint, ignoreHTTPSErrors

Also, the following options can be set as default values for when crawler.queue() is executed.

url, allowedDomains, deniedDomains, timeout, priority, depthPriority, delay, retryCount, retryDelay, jQuery, browserCache, device, username, password, evaluatePage

Note: In practice, setting these options every time you queue requests is redundant. Therefore, it's recommended to set the default values here and override them per request as necessary.
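
For instance, a minimal sketch of connecting to an already running Chromium instance might look like this (the browserWSEndpoint value is a placeholder for your own instance's WebSocket URL):

const HCCrawler = require('headless-chrome-crawler');

HCCrawler.connect({
  // Placeholder WebSocket endpoint of an existing Chromium instance
  browserWSEndpoint: 'ws://127.0.0.1:9222/devtools/browser/<id>',
  onSuccess: (result => {
    console.log(result);
  }),
})
  .then(crawler => {
    crawler.queue('https://example.com/');
    crawler.onIdle()
      .then(() => crawler.disconnect()); // Leave the browser instance running
  });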

HCCrawler.launch([options])

  • options <Object>
    • maxConcurrency <number> Maximum number of pages to open concurrently, defaults to 10.
    • maxRequest <number> Maximum number of requests, defaults to 0. Pass 0 to disable the limit.
    • exporter <Exporter> An exporter object which extends BaseExporter's interfaces to export results, defaults to null.
    • cache <Cache> A cache object which extends BaseCache's interfaces to remember and skip duplicate requests, defaults to a SessionCache object.
    • persistCache <boolean> Whether to persist the cache when closing or disconnecting from the Chromium instance, defaults to false.
    • preRequest(options) <Function> Function to modify options before each request. Return false to skip the request (see the sketch at the end of this section).
    • onSuccess(response) <Function> Function to be called when evaluatePage() succeeds.
      • response <Object>
        • response <Object>
          • ok <boolean> Whether the status code is in the range 200-299.
          • status <string> Status code of the request.
          • url <string> Last requested url.
          • headers <Object> Response headers.
        • options <Object> crawler.queue()'s options with default values.
        • result <Serializable> The result resolved from evaluatePage() option.
        • screenshot <Buffer> Buffer with the screenshot image, which is null when the screenshot option is not passed.
        • links <Array> List of links found in the requested page.
        • depth <number> Depth of the followed links.
    • onError(error) <Function> Function to be called when request fails.
      • error <Error> Error object.
  • returns: <Promise<HCCrawler>> Promise which resolves to HCCrawler instance.

The method launches a Chromium instance. The following options are passed to puppeteer.launch().

ignoreHTTPSErrors, headless, executablePath, slowMo, args, ignoreDefaultArgs, handleSIGINT, handleSIGTERM, handleSIGHUP, dumpio, userDataDir, env, devtools

Also, the following options can be set as default values for when crawler.queue() is executed.

url, allowedDomains, deniedDomains, timeout, priority, depthPriority, delay, retryCount, retryDelay, jQuery, browserCache, device, username, password, evaluatePage

Note: In practice, setting these options every time you queue requests is redundant. Therefore, it's recommended to set the default values here and override them per request as necessary.
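
As a sketch of the preRequest, onSuccess and onError callbacks described above (the skip rule and timeout override are illustrative only):

const HCCrawler = require('headless-chrome-crawler');

HCCrawler.launch({
  evaluatePage: (() => ({
    title: $('title').text(),
  })),
  // Modify options before each request; return false to skip it
  preRequest: (options => {
    if (options.url.endsWith('.pdf')) return false; // illustrative skip rule
    options.timeout = 60000; // illustrative per-request override
    return true;
  }),
  onSuccess: (result => {
    console.log(result.options.url, result.result);
  }),
  // Called when a request fails
  onError: (error => {
    console.error('Request failed:', error.message);
  }),
})
  .then(crawler => {
    crawler.queue('https://example.com/');
    crawler.onIdle()
      .then(() => crawler.close());
  });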

HCCrawler.executablePath()

  • returns: <string> An expected path to find bundled Chromium.

HCCrawler.defaultArgs()

  • returns: <Array<string>> The default flags that Chromium will be launched with.

crawler.queue([options])

  • options <Object>
    • url <string> URL to navigate to. The URL should include the scheme, e.g. https://.
    • maxDepth <number> Maximum depth for the crawler to follow links automatically, defaults to 1. Leave the default to disable following links.
    • priority <number> Basic priority of queues, defaults to 1. Priority with larger number is preferred.
    • depthPriority <boolean> Whether to adjust priority based on its depth, defaults to true. Leave default to increase priority for higher depth, which is depth-first search.
    • skipDuplicates <boolean> Whether to skip duplicate requests, defaults to null. A request is considered a duplicate if url, userAgent, device and extraHeaders are strictly the same.
    • obeyRobotsTxt <boolean> Whether to obey robots.txt, defaults to true.
    • followSitemapXml <boolean> Whether to use sitemap.xml to find locations, defaults to false.
    • allowedDomains <Array<string|RegExp>> List of domains allowed to be requested. Pass null or leave the default to skip checking allowed domains.
    • deniedDomains <Array<string|RegExp>> List of domains not allowed to be requested. Pass null or leave the default to skip checking denied domains.
    • delay <number> Number of milliseconds after each request, defaults to 0. When delay is set, maxConcurrency option must be 1.
    • timeout <number> Navigation timeout in milliseconds, defaults to 30 seconds. Pass 0 to disable the timeout.
    • waitUntil <string|Array<string>> When to consider navigation succeeded, defaults to load. See the Puppeteer's page.goto()'s waitUntil options for further details.
    • waitFor <Object> See Puppeteer's page.waitFor() for further details.
      • selectorOrFunctionOrTimeout <string|number|function> A selector, predicate or timeout to wait for.
      • options <Object> Optional waiting parameters.
      • args <Array<Serializable>> List of arguments to pass to the predicate function.
    • retryCount <number> Number of retries when a request fails, defaults to 3.
    • retryDelay <number> Number of milliseconds to wait after each failed retry, defaults to 10000.
    • jQuery <boolean> Whether to automatically add jQuery tag to page, defaults to true.
    • browserCache <boolean> Whether to enable browser cache for each request, defaults to true.
    • device <string> Device to emulate. Available devices are listed here.
    • username <string> Username for basic authentication. Pass null if it's not necessary.
    • screenshot <Object> Screenshot option, defaults to null. This option is passed to Puppeteer's page.screenshot(). Pass null or leave the default to disable screenshots.
    • password <string> Password for basic authentication. Pass null if it's not necessary.
    • userAgent <string> User agent string to override in this page.
    • extraHeaders <Object> An object containing additional headers to be sent with every request. All header values must be strings.
    • evaluatePage() <Function> Function to be evaluated in browsers. Return a serializable object. If it's not serializable, the result will be undefined.

Note: response.url may be different from options.url especially when the requested url is redirected.

The options can be either an object, an array, or a string. When it's an array, each item in the array will be executed. When it's a string, the options are transformed to an object with only url defined.
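
A short sketch of the three accepted forms and a few of the options listed above (the extra h1 field in evaluatePage is only an illustration):

// A string sets only the url option
crawler.queue('https://example.com/');
// An array queues each item
crawler.queue(['https://example.net/', 'https://example.org/']);
// An object allows per-request options
crawler.queue({
  url: 'https://example.com/',
  maxDepth: 2, // follow links found on the queued page
  allowedDomains: ['example.com'],
  waitUntil: 'networkidle2',
  evaluatePage: (() => ({
    title: $('title').text(),
    h1: $('h1').text(), // illustrative extra field
  })),
});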

crawler.setMaxRequest(maxRequest)

  • maxRequest <number> Modify the maxRequest option you passed to HCCrawler.connect() or HCCrawler.launch().

crawler.pause()

This method pauses processing queues. You can resume the queue by calling crawler.resume().

crawler.resume()

This method resumes processing queues. It can be used after the crawler was intentionally paused by calling crawler.pause(), or after the request count reached the maxRequest option.
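
A sketch combining these methods with the maxRequest option and the 'maxrequestreached' event documented below (the limits are arbitrary, and this assumes the crawler instance is a standard Node.js EventEmitter, as the event list suggests):

const HCCrawler = require('headless-chrome-crawler');

HCCrawler.launch({
  maxRequest: 10, // the crawler pauses after 10 requests
  onSuccess: (result => {
    console.log(result.options.url);
  }),
})
  .then(crawler => {
    crawler.queue('https://example.com/');
    crawler.on('maxrequestreached', () => {
      crawler.setMaxRequest(20); // raise the limit...
      crawler.resume();          // ...and keep processing the queue
    });
    crawler.onIdle()
      .then(() => crawler.close());
  });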

crawler.clearCache()

  • returns: <Promise> Promise resolved when the cache is cleared.

This method clears the cache when one is used.

crawler.close()

  • returns: <Promise> Promise resolved when the browser is closed.

crawler.disconnect()

  • returns: <Promise> Promise resolved when the browser is disconnected.

crawler.version()

  • returns: <Promise<string>> Promise resolved with the Chromium version.

crawler.userAgent()

  • returns: <Promise<string>> Promise resolved with the default user agent.

crawler.wsEndpoint()

  • returns: <Promise<string>> Promise resolved with the WebSocket URL.

crawler.onIdle()

  • returns: <Promise> Promise resolved when queues become empty or paused.

crawler.isPaused()

  • returns: <boolean> Whether the queue is paused.

crawler.queueSize()

  • returns: <Promise<number>> Promise resolved with the size of the queue.

crawler.pendingQueueSize()

  • returns: <number> The size of the pending queue.

crawler.requestedCount()

  • returns: <number> The count of total requests.
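
These getters can be combined into a simple progress log; a sketch, assuming crawler was obtained from HCCrawler.launch() and the interval is arbitrary:

// Log crawl progress every 5 seconds
const progress = setInterval(() => {
  crawler.queueSize().then(size => {
    console.log(`queued: ${size}, pending: ${crawler.pendingQueueSize()}, requested: ${crawler.requestedCount()}`);
  });
}, 5000);

crawler.onIdle()
  .then(() => {
    clearInterval(progress);
    return crawler.close();
  });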

event: 'newpage'

Emitted when a Puppeteer page is opened.

event: 'requeststarted'

Emitted when a request starts.

event: 'requestskipped'

Emitted when a request is skipped.

event: 'requestfinished'

Emitted when a request finishes successfully.

event: 'requestretried'

Emitted when a request is retried.

event: 'requestfailed'

Emitted when a request fails.

event: 'robotstxtrequestfailed'

Emitted when a request to robots.txt fails.

event: 'sitemapxmlrequestfailed'

Emitted when a request to sitemap.xml fails.

event: 'maxdepthreached'

Emitted when a queued request reaches crawler.queue()'s maxDepth option.

event: 'maxrequestreached'

Emitted when the request count reaches the maxRequest option of HCCrawler.connect() or HCCrawler.launch().

event: 'disconnected'

Emitted when the browser instance is disconnected.
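
These events can be observed on the crawler instance; a short sketch (assuming the crawler object behaves as a Node.js EventEmitter, and ignoring any event payloads since their shapes are not documented here):

crawler.on('requestfinished', () => { /* e.g. increment a progress counter */ });
crawler.on('requestfailed', () => { /* e.g. record the failure */ });
crawler.on('disconnected', () => console.log('Browser disconnected'));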

class: SessionCache

SessionCache is HCCrawler.connect()'s default cache option. By default, the crawler remembers already requested URLs in memory.

const HCCrawler = require('headless-chrome-crawler');

// Pass null to the cache option to disable it.
HCCrawler.launch({ cache: null });
// ...

class: RedisCache

  • options <Object>
    • expire <number> Seconds until each cached value expires after being set, defaults to null.

Passing a RedisCache object to HCCrawler.connect()'s cache option allows you to persist requested URLs and robots.txt in Redis, so that the same URLs are not requested again in a distributed environment. It also works well with the persistCache option set to true.

Other constructor options are passed as options to NodeRedis's redis.createClient().

const HCCrawler = require('headless-chrome-crawler');
const RedisCache = require('headless-chrome-crawler/cache/redis');

const cache = new RedisCache({ host: '127.0.0.1', port: 6379 });

HCCrawler.launch({
  persistCache: true, // Set true so that cache won't be cleared when closing the crawler
  cache,
});
// ...

class: BaseCache

You can create your own cache by extending the BaseCache's interfaces.

See here for an example.

class: CSVExporter

  • options <Object>
    • file <string> File path to export output.
    • fields <Array<string>> List of fields to be used for columns. This option is also used for the headers.
    • separator <string> Character to separate columns.

const HCCrawler = require('headless-chrome-crawler');
const CSVExporter = require('headless-chrome-crawler/exporter/csv');

const FILE = './tmp/result.csv';

const exporter = new CSVExporter({
  file: FILE,
  fields: ['response.url', 'response.status', 'links.length'],
  separator: '\t',
});

HCCrawler.launch({ exporter })
// ...

class: JSONLineExporter

  • options <Object>
    • file <string> File path to export output.
    • fields <Array<string>> List of fields to be filtered in JSON, defaults to null. Leave the default to keep all fields.
    • jsonReplacer <Function> Function that alters the behavior of the stringification process, defaults to null. This is useful for always sorting keys in the same order.

const HCCrawler = require('headless-chrome-crawler');
const JSONLineExporter = require('headless-chrome-crawler/exporter/json-line');

const FILE = './tmp/result.json';

const exporter = new JSONLineExporter({
  file: FILE,
  fields: ['options', 'response'],
});

HCCrawler.launch({ exporter })
// ...

class: BaseExporter

You can create your own exporter by extending the BaseExporter's interfaces.

See here for an example.

Tips

Distributed crawling

To crawl in distributed mode, use Redis as the shared cache storage. You can run the same script on multiple machines, and Redis will be used to share and distribute the task queues.

const HCCrawler = require('headless-chrome-crawler');
const RedisCache = require('headless-chrome-crawler/cache/redis');

const TOP_PAGES = [
  // ...
];

const cache = new RedisCache({
  // ...
});

HCCrawler.launch({
  maxDepth: 3,
  cache,
})
  .then(crawler => {
    crawler.queue(TOP_PAGES);
  });

Launch options

HCCrawler.launch()'s options are passed to puppeteer.launch(). It may be useful to set the headless and slowMo options so that you can see what is going on.

HCCrawler.launch({ headless: false, slowMo: 10 });

Also, the args option is passed to the browser instance. A list of Chromium flags can be found here. Passing the --disable-web-security flag is useful for crawling: if the flag is set, links within iframes are collected as those of parent frames; if it's not, the source attributes of the iframes are collected as links.

HCCrawler.launch({ args: ['--disable-web-security'] });

Enable debug logging

All requests and browser logs are logged via the debug module under the hccrawler namespace.

env DEBUG="hccrawler:*" node script.js
env DEBUG="hccrawler:request" node script.js
env DEBUG="hccrawler:browser" node script.js

FAQ

How is this different from other crawlers?

There are roughly two types of crawlers. One is static and the other is dynamic.

Static crawlers are based on simple requests to HTML files. They are generally fast, but fail to scrape the contents when the HTML is dynamically changed in the browser.

Dynamic crawlers based on PhantomJS and Selenium work magically on such dynamic applications. However, PhantomJS's maintainer has stepped down and recommended switching to Headless Chrome, which is fast and stable. Selenium is still a well-maintained cross-browser platform that runs on Chrome, Safari, IE and so on. However, crawlers do not need such cross-browser support.

This crawler is dynamic and based on Headless Chrome.
