_ApifyRequestQueueSingleClient

An Apify platform implementation of the request queue client with limited capability.

This client is designed to use as few resources as possible, but it must be used in a constrained context. Constraints:

  • Only one client is consuming the request queue at a time.
  • Multiple producers can put requests to the queue, but their forefront requests are not guaranteed to be handled as quickly, because this client does not aggressively fetch the forefront and relies on local head estimation.
  • Requests are only added to the queue, never deleted. (Marking as handled is ok.)
  • Other producers can add new requests, but must not modify existing ones; otherwise, the cache can miss the updates.

If the constraints are not met, the client may behave unpredictably.

Index

Methods

__init__

  • __init__(*, api_client, metadata, cache_size): None
  • Initialize a new instance.

    Preferably use the ApifyRequestQueueClient.open class method to create a new instance.


    Parameters

    • keyword-only api_client: RequestQueueClientAsync
    • keyword-only metadata: RequestQueueMetadata
    • keyword-only cache_size: int

    Returns None

add_batch_of_requests

  • async add_batch_of_requests(requests, *, forefront): AddRequestsResponse
  • Add a batch of requests to the queue.


    Parameters

    • requests: Sequence[Request]

      The requests to add.

    • optional, keyword-only forefront: bool = False

      Whether to add the requests to the beginning of the queue.

    Returns AddRequestsResponse
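
The forefront semantics can be modeled with a plain deque (the `add_batch` helper below is hypothetical, for illustration only — the real method sends the batch to the Apify API):

```python
from collections import deque


def add_batch(queue: deque, requests, *, forefront: bool = False) -> None:
    """Model of batch insertion: a forefront batch goes to the head
    of the queue (preserving its internal order), others go to the tail."""
    if forefront:
        # extendleft inserts one by one, reversing order, so reverse
        # the batch first to keep its order intact at the head.
        queue.extendleft(reversed(requests))
    else:
        queue.extend(requests)
```
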

fetch_next_request

  • async fetch_next_request(): Request | None
  • Return the next request in the queue to be processed.

    Once you successfully finish processing the request, call mark_request_as_handled to mark it as handled in the queue. If processing fails, call reclaim_request instead, so that the queue hands the request out again in a later call to fetch_next_request.


    Returns Request | None
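
The fetch → mark/reclaim contract described above can be sketched against a minimal in-memory stand-in. Both `FakeQueueClient` and `consume` below are hypothetical illustrations of the contract, not part of this client's API (the real client talks to the Apify platform):

```python
import asyncio
from collections import deque
from dataclasses import dataclass


@dataclass
class Request:
    # Minimal stand-in for the Request model; only the field used here.
    unique_key: str


class FakeQueueClient:
    """Hypothetical in-memory client mirroring the fetch/mark/reclaim contract."""

    def __init__(self, requests):
        self._head = deque(requests)
        self._handled = set()

    async def fetch_next_request(self):
        # Return the next pending request, or None when the queue is drained.
        return self._head.popleft() if self._head else None

    async def mark_request_as_handled(self, request):
        # Handled requests are never returned by fetch_next_request again.
        self._handled.add(request.unique_key)
        return request

    async def reclaim_request(self, request, *, forefront=False):
        # Return the request to the head or the tail of the queue.
        (self._head.appendleft if forefront else self._head.append)(request)
        return request


async def consume(client, handler):
    """Single-consumer loop: mark on success, reclaim on failure."""
    while (request := await client.fetch_next_request()) is not None:
        try:
            await handler(request)
        except Exception:
            await client.reclaim_request(request)  # will be retried later
        else:
            await client.mark_request_as_handled(request)
```
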

get_request

  • async get_request(unique_key): Request | None
  • Get a request by unique key.


    Parameters

    • unique_key: str

      Unique key of the request to get.

    Returns Request | None

is_empty

  • async is_empty(): bool
  • Check if the queue is empty.


    Returns bool

mark_request_as_handled

  • async mark_request_as_handled(request): ProcessedRequest | None
  • Mark a request as handled after successful processing.

    Handled requests will never again be returned by the fetch_next_request method.


    Parameters

    • request: Request

      The request to mark as handled.

    Returns ProcessedRequest | None

reclaim_request

  • async reclaim_request(request, *, forefront): ProcessedRequest | None
  • Reclaim a failed request back to the queue.

    The request will be returned for processing again by a later call to fetch_next_request.


    Parameters

    • request: Request

      The request to return to the queue.

    • optional, keyword-only forefront: bool = False

      Whether to add the request to the head or the end of the queue.

    Returns ProcessedRequest | None