Automated tests

Learn how to automate ongoing testing and make sure your Actors perform over time. See code examples for configuring the Actor Testing Actor.

Make sure your Actors stay well-maintained. You won't always get feedback from your users, so it is crucial to periodically check that your Actors work as expected. You can do this using our monitoring suite or by setting up daily runs of the Actor Testing (pocesar/actor-testing) tool.

The monitoring suite is sufficient for most scenarios and includes automated alerts. See more information on the suite's page or check out our tutorials.

We recommend using the Actor Testing Actor for specific and advanced use cases. This guide will help you set it up.

Step-by-step guide

  1. Prepare 1–5 separate testing tasks for your Actor. (See below).
  2. Set up a task from the Actor Testing Actor. (See below).
  3. Run the test task a few times, until all tests pass reliably.
  4. Schedule the test task at the frequency of your choice (daily is recommended) and pick a channel for notifications (Slack or email).
  5. Review and fix any reported issues on a weekly basis.

Set up tasks you will test

Create tasks that cover your Actor's different input configurations.

We also advise you to test your Actor's default run—one that uses the pre-filled inputs. It is often the first task your users run, and they may be put off if it doesn't work.

Set a low maxItems value for your testing tasks so that you don't burn through your credits. If you need to test your Actor with a large amount of data, schedule those runs less frequently.

Set up a task from the Actor Testing Actor

You can find the setup guide in the Actor's README. We recommend testing for the following scenarios.

Run status:

await expectAsync(runResult).toHaveStatus('SUCCEEDED');

Crash information from the log:

await expectAsync(runResult).withLog((log) => {
    // Neither ReferenceError nor TypeError should ever occur in production
    // code – they mean the code is over-optimistic. Errors must be handled
    // gracefully and displayed with a helpful message to the user.
    expect(log).withContext(runResult.format('ReferenceError')).not.toContain('ReferenceError');
    expect(log).withContext(runResult.format('TypeError')).not.toContain('TypeError');
});

Information from statistics (runtime, retries):

await expectAsync(runResult).withStatistics((stats) => {
    // In most cases, you want retries to be as close to zero as possible
    expect(stats.requestsRetries)
        .withContext(runResult.format('Request retries'))
        .toBeLessThan(3);

    // What is the expected run time for the number of items?
    expect(stats.crawlerRuntimeMillis)
        .withContext(runResult.format('Run time'))
        .toBeWithinRange(1 * 60000, 10 * 60000);
});

Information about and from within the dataset:

await expectAsync(runResult).withDataset(({ dataset, info }) => {
    // If you're sure, always set this number to be your exact maxItems
    expect(info.cleanItemCount)
        .withContext(runResult.format('Dataset cleanItemCount'))
        .toBe(3); // or toBeGreaterThan(1) or toBeWithinRange(1, 3)

    // Make sure the dataset isn't empty
    expect(dataset.items)
        .withContext(runResult.format('Dataset items array'))
        .toBeNonEmptyArray();

    const results = dataset.items;

    // Check dataset items to have the expected data format
    for (const result of results) {
        expect(result.directUrl)
            .withContext(runResult.format('Direct url'))
            .toStartWith('https://'); // e.g. the URL prefix your Actor scrapes

        expect(result.bizId)
            .withContext(runResult.format('Biz ID'))
            .toBeNonEmptyString();
    }
});

Information about the key-value store:

await expectAsync(runResult).withKeyValueStore(({ contentType }) => {
    // Check for the proper content type of the saved key-value item
    expect(contentType)
        .withContext(runResult.format('KVS contentType'))
        .toBe('image/gif'); // replace with the content type you expect
},
// This also checks for existence of the key-value key
{ keyName: '' }); // fill in the key you want to check