Version: 3.2

Puppeteer crawler

This example demonstrates how to use PuppeteerCrawler in combination with RequestQueue to recursively scrape the Hacker News website using headless Chrome / Puppeteer.

The crawler starts with a single URL, finds links to the next pages, enqueues them, and continues until no more desired links are available. The results are stored in the default dataset. When run locally, they are saved as JSON files in ./storage/datasets/default.
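If you want to inspect the stored results programmatically rather than opening the JSON files, a minimal sketch using Crawlee's Dataset API could look like the following. This snippet is not part of the original example; it assumes it is run after the crawl has finished and that the field names match what the crawler below pushes (title, rank, href).

import { Dataset } from 'crawlee';

// Open the default dataset the crawler wrote to.
const dataset = await Dataset.open();

// getData() returns a paginated result; `items` holds the stored records.
const { items } = await dataset.getData();

for (const { rank, title, href } of items) {
    console.log(`${rank} ${title} -> ${href}`);
}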


To run this example on the Apify Platform, select the apify/actor-node-puppeteer-chrome image for your Dockerfile.

import { Actor } from 'apify';
import { PuppeteerCrawler } from 'crawlee';

await Actor.init();

// Create an instance of the PuppeteerCrawler class - a crawler
// that automatically loads the URLs in headless Chrome / Puppeteer.
const crawler = new PuppeteerCrawler({
    // Here you can set options that are passed to the launchPuppeteer() function.
    launchContext: {
        launchOptions: {
            headless: true,
            // Other Puppeteer options
        },
    },

    // Stop crawling after several pages
    maxRequestsPerCrawl: 50,

    // This function will be called for each URL to crawl.
    // Here you can write the Puppeteer scripts you are familiar with,
    // with the exception that browsers and pages are automatically managed by the Apify SDK.
    // The function accepts a single parameter, which is an object with the following fields:
    // - request: an instance of the Request class with information such as URL and HTTP method
    // - page: Puppeteer's Page object (see https://pptr.dev)
    async requestHandler({ request, page, enqueueLinks }) {
        console.log(`Processing ${request.url}...`);

        // A function to be evaluated by Puppeteer within the browser context.
        const data = await page.$$eval('.athing', ($posts) => {
            const scrapedData = [];

            // We're getting the title, rank and URL of each post on Hacker News.
            $posts.forEach(($post) => {
                scrapedData.push({
                    title: $post.querySelector('.title a').innerText,
                    rank: $post.querySelector('.rank').innerText,
                    href: $post.querySelector('.title a').href,
                });
            });

            return scrapedData;
        });

        // Store the results to the default dataset.
        await Actor.pushData(data);

        // Find a link to the next page and enqueue it if it exists.
        const infos = await enqueueLinks({
            selector: '.morelink',
        });

        if (infos.processedRequests.length === 0) console.log(`${request.url} is the last page!`);
    },

    // This function is called if the page processing failed more than maxRequestRetries+1 times.
    failedRequestHandler({ request }) {
        console.log(`Request ${request.url} failed too many times.`);
    },
});

// Run the crawler and wait for it to finish.
await crawler.run(['https://news.ycombinator.com/']);
console.log('Crawler finished.');

await Actor.exit();
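As mentioned in the introduction, the example relies on a RequestQueue, which the code above creates implicitly and feeds through crawler.run() and enqueueLinks(). If you prefer to manage the queue explicitly, a rough sketch of the equivalent setup could look like this; the handlers are omitted here and would be the same as in the example above.

import { Actor } from 'apify';
import { PuppeteerCrawler, RequestQueue } from 'crawlee';

await Actor.init();

// Open the default request queue and seed it with the start URL.
const requestQueue = await RequestQueue.open();
await requestQueue.addRequest({ url: 'https://news.ycombinator.com/' });

const crawler = new PuppeteerCrawler({
    requestQueue,
    maxRequestsPerCrawl: 50,
    async requestHandler({ request, page, enqueueLinks }) {
        // Same scraping and enqueueing logic as in the example above.
    },
});

// The crawler pulls requests from the explicitly opened queue.
await crawler.run();

await Actor.exit();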