Version: 1.7

RequestQueue

Represents a queue of URLs to crawl.

Can be used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders.

Each URL is represented by an instance of the Request class. The queue can only contain unique URLs; more precisely, it can only contain request dictionaries with distinct uniqueKey properties. By default, uniqueKey is generated from the URL, but it can also be overridden. To add a single URL to the queue multiple times, the corresponding request dictionaries need to have different uniqueKey properties.

Do not instantiate this class directly; use the Actor.open_request_queue() function instead.

RequestQueue stores its data either on local disk or in the Apify cloud, depending on whether the APIFY_LOCAL_STORAGE_DIR or APIFY_TOKEN environment variable is set.

If the APIFY_LOCAL_STORAGE_DIR environment variable is set, the data is stored in the local directory in the following files:

``
{APIFY_LOCAL_STORAGE_DIR}/request_queues/{QUEUE_ID}/{REQUEST_ID}.json
``

Here, {QUEUE_ID} is the name or ID of the request queue. The default request queue has the ID default, unless you override it by setting the APIFY_DEFAULT_REQUEST_QUEUE_ID environment variable. {REQUEST_ID} is the ID of the request.

If the APIFY_TOKEN environment variable is set but APIFY_LOCAL_STORAGE_DIR is not, the data is stored in the Apify Request Queue cloud storage.
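The uniqueKey-based deduplication described above can be illustrated with a minimal in-memory sketch. The InMemoryRequestQueue class below is a hypothetical stand-in used for illustration only, not the SDK implementation; in real code you would obtain a queue with Actor.open_request_queue():

```python
# Minimal in-memory sketch of the RequestQueue deduplication semantics.
# This stand-in is illustrative only; use Actor.open_request_queue() in real code.

class InMemoryRequestQueue:
    def __init__(self):
        self._requests = {}  # uniqueKey -> request dict
        self._order = []     # uniqueKeys in insertion order

    def add_request(self, request):
        # By default, uniqueKey is derived from the URL.
        unique_key = request.get('uniqueKey', request['url'])
        if unique_key in self._requests:
            # Duplicate uniqueKey: the request is not added again.
            return {'uniqueKey': unique_key, 'wasAlreadyPresent': True}
        self._requests[unique_key] = {**request, 'uniqueKey': unique_key}
        self._order.append(unique_key)
        return {'uniqueKey': unique_key, 'wasAlreadyPresent': False}

queue = InMemoryRequestQueue()
queue.add_request({'url': 'https://example.com'})
# Same URL again -> same default uniqueKey -> deduplicated.
info = queue.add_request({'url': 'https://example.com'})
# Same URL with an overridden uniqueKey -> added as a new entry.
queue.add_request({'url': 'https://example.com', 'uniqueKey': 'retry-1'})
```

This is why adding the same URL twice is a no-op unless you override uniqueKey on one of the requests.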

Index

Methods

add_request

  • async add_request(request, *, forefront, keep_url_fragment, use_extended_unique_key): dict
  • Adds a request to the RequestQueue while managing deduplication and positioning within the queue.

    The deduplication of requests relies on the uniqueKey field within the request dictionary. If uniqueKey exists, it remains unchanged; if it does not, it is generated from the request's url, method, and payload fields. The generation of uniqueKey can be influenced by the keep_url_fragment and use_extended_unique_key flags, which dictate whether to include the URL fragment and the request's method and payload, respectively, in its computation.

    The request can be added to the forefront (beginning) or the back of the queue based on the forefront parameter. Information about the request's addition to the queue, including whether it was already present or handled, is returned in an output dictionary.


    Parameters

    • request: dict
    • keyword-only forefront: bool = False
    • keyword-only keep_url_fragment: bool = False
    • keyword-only use_extended_unique_key: bool = False

    Returns dict

    • requestId (str)
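A hedged sketch of how the two flags change the uniqueKey computation. The compute_unique_key helper below is hypothetical: the real SDK applies fuller URL normalization, and the exact key format here is assumed for illustration:

```python
import hashlib
from urllib.parse import urldefrag

def compute_unique_key(url, method='GET', payload=None,
                       keep_url_fragment=False, use_extended_unique_key=False):
    # Hypothetical helper: the SDK's actual normalization differs in detail.
    # Drop the #fragment unless keep_url_fragment is True.
    base = url if keep_url_fragment else urldefrag(url).url
    if not use_extended_unique_key:
        return base
    # The extended key also folds in the HTTP method and payload, so e.g.
    # a GET and a POST to the same URL deduplicate separately.
    payload_hash = hashlib.sha256(payload or b'').hexdigest()[:8]
    return f'{method}|{payload_hash}|{base}'

# The URL fragment is ignored by default.
assert compute_unique_key('https://example.com/a#top') == 'https://example.com/a'
```

Without use_extended_unique_key, two requests to the same URL with different methods would collapse into one queue entry; with it, they are kept apart.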

drop

  • async drop(): None
  • Remove the request queue either from the Apify cloud storage or from the local directory.


    Returns None

fetch_next_request

  • async fetch_next_request(): dict | None
  • Return the next request in the queue to be processed.

    Once you successfully finish processing the request, you need to call RequestQueue.mark_request_as_handled to mark the request as handled in the queue. If there was some error in processing the request, call RequestQueue.reclaim_request instead, so that the queue will give the request to another consumer in a later call to the fetch_next_request method.

    Note that a None return value does not mean that queue processing has finished; it only means there are currently no pending requests. To check whether all requests in the queue have been handled, use RequestQueue.is_finished instead.


    Returns dict | None
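The fetch/mark/reclaim protocol above can be sketched as a consumer loop. StubQueue is an in-memory stand-in with the same method names as RequestQueue, written for illustration only; the sleep-and-retry handling of a None result follows the note above that None does not mean the queue is finished:

```python
import asyncio
from collections import deque

class StubQueue:
    """In-memory stand-in mirroring the RequestQueue method names."""
    def __init__(self, urls):
        self._pending = deque({'url': u} for u in urls)
        self._in_progress = []
        self.handled = []

    async def fetch_next_request(self):
        # None means "no pending request right now", not "finished".
        if not self._pending:
            return None
        request = self._pending.popleft()
        self._in_progress.append(request)
        return request

    async def mark_request_as_handled(self, request):
        self._in_progress.remove(request)
        self.handled.append(request)

    async def reclaim_request(self, request, forefront=False):
        self._in_progress.remove(request)
        if forefront:
            self._pending.appendleft(request)
        else:
            self._pending.append(request)

    async def is_finished(self):
        return not self._pending and not self._in_progress

async def crawl(queue, process):
    while not await queue.is_finished():
        request = await queue.fetch_next_request()
        if request is None:
            await asyncio.sleep(0.01)  # nothing pending right now; poll again
            continue
        try:
            await process(request)
        except Exception:
            # Processing failed: hand the request back for another attempt.
            await queue.reclaim_request(request)
        else:
            await queue.mark_request_as_handled(request)
```

A request that raises is reclaimed and eventually refetched, so transient failures are retried rather than lost.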

get_info

  • async get_info(): dict | None
  • Get an object containing general information about the request queue.


    Returns dict | None

get_request

  • async get_request(request_id): dict | None
  • Retrieve a request from the queue.


    Parameters

    • request_id: str

    Returns dict | None

is_empty

  • async is_empty(): bool
  • Check whether the queue is empty.


    Returns bool

is_finished

  • async is_finished(): bool
  • Check whether the queue is finished.

    Due to the nature of the distributed storage used by the queue, the function might occasionally return a false negative, but it will never return a false positive.


    Returns bool

mark_request_as_handled

  • async mark_request_as_handled(request): dict | None
  • Mark a request as handled after successful processing.

    Handled requests will never again be returned by the RequestQueue.fetch_next_request method.


    Parameters

    • request: dict

    Returns dict | None

open

  • async open(*, id, name, force_cloud, config): RequestQueue
  • Open a request queue.

    A request queue represents a queue of URLs to crawl, which is stored either on the local filesystem or in the Apify cloud. The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders.


    Parameters

    • keyword-only id: str | None = None
    • keyword-only name: str | None = None
    • keyword-only force_cloud: bool = False
    • keyword-only config: Configuration | None = None

    Returns RequestQueue
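When APIFY_LOCAL_STORAGE_DIR is set, the opened queue persists each request as a JSON file under the path layout given in the class overview. The helper below is a hypothetical sketch of how that path is composed from the documented environment variables, not SDK code:

```python
import os

def local_request_path(request_id, queue_id=None):
    # Path layout from the class overview:
    # {APIFY_LOCAL_STORAGE_DIR}/request_queues/{QUEUE_ID}/{REQUEST_ID}.json
    storage_dir = os.environ['APIFY_LOCAL_STORAGE_DIR']
    # The default queue ID is 'default' unless overridden via the env var.
    queue_id = queue_id or os.environ.get('APIFY_DEFAULT_REQUEST_QUEUE_ID', 'default')
    return os.path.join(storage_dir, 'request_queues', queue_id, f'{request_id}.json')

# Illustrative values only.
os.environ['APIFY_LOCAL_STORAGE_DIR'] = '/tmp/apify_storage'
os.environ.pop('APIFY_DEFAULT_REQUEST_QUEUE_ID', None)
```

If APIFY_TOKEN is set instead, no local files are written and the same data lives in the Apify cloud storage.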

reclaim_request

  • async reclaim_request(request, forefront): dict | None
  • Reclaim a failed request back to the queue.

    The request will be returned for processing again by a later call to RequestQueue.fetch_next_request.


    Parameters

    • request: dict
    • forefront: bool = False

    Returns dict | None