
_ApifyRequestQueueSharedClient

An Apify platform implementation of the request queue client.

This implementation supports scenarios with multiple producers and multiple consumers.

Methods

__init__

  • __init__(*, api_client, metadata, cache_size, metadata_getter): None
  • Initialize a new instance.

    Preferably use the ApifyRequestQueueClient.open class method to create a new instance.


    Parameters

    • keyword-only api_client: RequestQueueClientAsync
    • keyword-only metadata: RequestQueueMetadata
    • keyword-only cache_size: int
    • keyword-only metadata_getter: Callable[[], Coroutine[Any, Any, ApifyRequestQueueMetadata]]

    Returns None

add_batch_of_requests

  • async add_batch_of_requests(requests, *, forefront): AddRequestsResponse
  • Add a batch of requests to the queue.


    Parameters

    • requests: Sequence[Request]

      The requests to add.

    • optional, keyword-only forefront: bool = False

      Whether to add the requests to the beginning of the queue.

    Returns AddRequestsResponse
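To illustrate how the `forefront` flag affects ordering, here is a minimal sketch against an in-memory stand-in (the `FakeRequestQueue` class is hypothetical and exists only for this example; it is not the real Apify client, and the returned dict merely approximates the shape of an add-requests response):

```python
import asyncio
from collections import deque

class FakeRequestQueue:
    """In-memory stand-in for the queue (hypothetical, for illustration only)."""

    def __init__(self):
        self._pending = deque()

    async def add_batch_of_requests(self, requests, *, forefront=False):
        if forefront:
            # Prepend the batch while preserving its internal order.
            self._pending.extendleft(reversed(list(requests)))
        else:
            self._pending.extend(requests)
        return {'processedRequests': list(requests), 'unprocessedRequests': []}

async def main():
    queue = FakeRequestQueue()
    await queue.add_batch_of_requests(['https://a.example', 'https://b.example'])
    # The forefront batch jumps ahead of everything already enqueued.
    await queue.add_batch_of_requests(['https://x.example'], forefront=True)
    return list(queue._pending)

order = asyncio.run(main())
print(order)
```

With `forefront=True`, the second batch is consumed before the requests that were added first.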

fetch_next_request

  • async fetch_next_request(): Request | None
  • Return the next request in the queue to be processed.

    Once you successfully finish processing the request, call mark_request_as_handled to mark it as handled in the queue. If an error occurred during processing, call reclaim_request instead, so that the queue can hand the request to another consumer in a later call to fetch_next_request.


    Returns Request | None
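The fetch → process → mark-or-reclaim loop described above can be sketched as follows. The `FakeRequestQueue` stand-in is hypothetical and only mimics the documented method signatures; a real consumer would call the same three methods on the actual client:

```python
import asyncio
from collections import deque

class FakeRequestQueue:
    """In-memory stand-in mimicking the documented interface (hypothetical)."""

    def __init__(self, urls):
        self._pending = deque(urls)
        self.handled = []

    async def fetch_next_request(self):
        return self._pending.popleft() if self._pending else None

    async def mark_request_as_handled(self, request):
        self.handled.append(request)
        return request

    async def reclaim_request(self, request, *, forefront=False):
        if forefront:
            self._pending.appendleft(request)
        else:
            self._pending.append(request)
        return request

async def consume(queue):
    failed_once = set()
    # fetch_next_request returns None once the queue has nothing to offer.
    while (request := await queue.fetch_next_request()) is not None:
        try:
            # Simulate one transient failure for the "flaky" request.
            if 'flaky' in request and request not in failed_once:
                failed_once.add(request)
                raise RuntimeError('transient failure')
            await queue.mark_request_as_handled(request)
        except RuntimeError:
            # Reclaim so the queue offers the request again later.
            await queue.reclaim_request(request)
    return queue.handled

handled = asyncio.run(consume(
    FakeRequestQueue(['https://flaky.example', 'https://ok.example'])))
print(handled)
```

The flaky request fails once, is reclaimed to the tail, and is handled on the second attempt, after the other request.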

get_request

  • async get_request(unique_key): Request | None
  • Get a request by unique key.


    Parameters

    • unique_key: str

      Unique key of the request to get.

    Returns Request | None
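A lookup by unique key either returns the stored request or None. The sketch below uses a hypothetical in-memory stand-in keyed by unique_key (plain dicts stand in for Request objects here; the real method returns Request instances):

```python
import asyncio

class FakeRequestQueue:
    """In-memory stand-in keyed by unique_key (hypothetical, for illustration)."""

    def __init__(self, requests):
        self._by_key = {r['uniqueKey']: r for r in requests}

    async def get_request(self, unique_key):
        # None when no request with that unique key exists.
        return self._by_key.get(unique_key)

async def main():
    queue = FakeRequestQueue([{'uniqueKey': 'home', 'url': 'https://example.com'}])
    found = await queue.get_request('home')
    missing = await queue.get_request('nope')
    return found, missing

found, missing = asyncio.run(main())
print(found, missing)
```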

is_empty

  • async is_empty(): bool
  • Check if the queue is empty.


    Returns bool

mark_request_as_handled

  • async mark_request_as_handled(request): ProcessedRequest | None
  • Mark a request as handled after successful processing.

    Handled requests will never again be returned by the fetch_next_request method.


    Parameters

    • request: Request

      The request to mark as handled.

    Returns ProcessedRequest | None

reclaim_request

  • async reclaim_request(request, *, forefront): ProcessedRequest | None
  • Reclaim a failed request back to the queue.

    The request will be returned again for processing by a later call to fetch_next_request.


    Parameters

    • request: Request

      The request to return to the queue.

    • optional, keyword-only forefront: bool = False

      Whether to add the request to the head of the queue rather than the end.

    Returns ProcessedRequest | None
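The effect of `forefront` when reclaiming can be sketched as below. The `FakeRequestQueue` class is a hypothetical in-memory stand-in used only to show the ordering; a reclaimed request with `forefront=True` is retried before the rest of the backlog:

```python
import asyncio
from collections import deque

class FakeRequestQueue:
    """Minimal in-memory stand-in (hypothetical), showing forefront ordering."""

    def __init__(self, urls):
        self._pending = deque(urls)

    async def fetch_next_request(self):
        return self._pending.popleft() if self._pending else None

    async def reclaim_request(self, request, *, forefront=False):
        if forefront:
            self._pending.appendleft(request)  # retried before everything else
        else:
            self._pending.append(request)      # retried after the current backlog
        return request

async def main():
    queue = FakeRequestQueue(['https://a.example', 'https://b.example'])
    first = await queue.fetch_next_request()
    # Failed processing; send it back to the head of the queue.
    await queue.reclaim_request(first, forefront=True)
    return await queue.fetch_next_request()

next_up = asyncio.run(main())
print(next_up)
```

Without `forefront=True`, the reclaimed request would instead be offered only after 'https://b.example'.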