Request

Represents a request in the Crawlee framework, containing the necessary information for crawling operations.

The Request class is one of the core components in Crawlee, utilized by various components such as request providers, HTTP clients, crawlers, and more. It encapsulates the essential data for executing web requests, including the URL, HTTP method, headers, payload, and user data. The user data allows custom information to be stored and persisted throughout the request lifecycle, including its retries.

Key functionality includes managing the request's identifier (id) and its unique key (unique_key), which is used for request deduplication, as well as controlling retries, handling state management, and configuring session rotation and proxy handling.

The recommended way to create a new instance is by using the Request.from_url constructor, which automatically generates a unique key and identifier based on the URL and request parameters.

Usage

from crawlee import Request

request = Request.from_url('https://crawlee.dev')


Methods

crawl_depth

  • crawl_depth(new_value): None
  • Parameters

    • new_value: int

    Returns None

enqueue_strategy

  • enqueue_strategy(new_enqueue_strategy): None
  • Parameters

    • new_enqueue_strategy: Literal['all', 'same-domain', 'same-hostname', 'same-origin']

    Returns None

forefront

  • forefront(new_value): None
  • Parameters

    • new_value: bool

    Returns None


last_proxy_tier

  • last_proxy_tier(new_value): None
  • Parameters

    • new_value: int

    Returns None

max_retries

  • max_retries(new_max_retries): None
  • Parameters

    • new_max_retries: int

    Returns None

session_rotation_count

  • session_rotation_count(new_session_rotation_count): None
  • Parameters

    • new_session_rotation_count: int

    Returns None

state

  • state(new_state): None
  • Parameters

    • new_state: RequestState

    Returns None

Properties

crawl_depth

crawl_depth: int

The depth of the request in the crawl tree.

crawlee_data

crawlee_data: CrawleeRequestData

Crawlee-specific configuration stored in the user_data.

enqueue_strategy

enqueue_strategy: Literal['all', 'same-domain', 'same-hostname', 'same-origin']

The strategy that was used for enqueuing the request.

forefront

forefront: bool

Indicates whether the request should be enqueued at the front of the queue.

handled_at

handled_at: datetime | None

Timestamp when the request was handled.

headers

headers: HttpHeaders

HTTP request headers.

id

id: str

A unique identifier for the request. Note that this is not used for deduplication, and should not be confused with unique_key.

label

label: str | None

A string used to differentiate between arbitrary request types.

last_proxy_tier

last_proxy_tier: int | None

The last proxy tier used to process the request.

loaded_url

loaded_url: str | None

URL of the web page that was loaded. This can differ from the original URL in case of redirects.

max_retries

max_retries: int | None

Crawlee-specific limit on the number of retries of the request.

method

method: Literal['GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'CONNECT', 'OPTIONS', 'TRACE', 'PATCH']

HTTP request method.

model_config

model_config: ConfigDict

Pydantic model configuration (the Request class is a Pydantic model).

no_retry

no_retry: bool

If set to True, the request will not be retried in case of failure.

payload

payload: bytes | None

HTTP request payload.

retry_count

retry_count: int

Number of times the request has been retried.

session_rotation_count

session_rotation_count: int | None

Crawlee-specific number of finished session rotations for the request.

state

state: RequestState | None

Crawlee-specific request handling state.

unique_key

unique_key: str

A unique key identifying the request. Two requests with the same unique_key are considered to point to the same URL.

If unique_key is not provided, then it is automatically generated by normalizing the URL. For example, the URL of HTTP://www.EXAMPLE.com/something/ will produce the unique_key of http://www.example.com/something.

Pass an arbitrary non-empty text value to the unique_key property to override the default behavior and specify which URLs shall be considered equal.

url

url: str

The URL of the web page to crawl. Must be a valid HTTP or HTTPS URL, and may include query parameters and fragments.

user_data

user_data: dict[str, JsonSerializable]

Custom user data assigned to the request. Use this to save any request-related data within the request's scope, keeping it accessible across retries, failures, etc.