Parallelise Cypress E2E Tests in a GitHub Actions Workflow

#cypress #devtools #testing

9 min read

Context

I had been working on a Next.js repository whose CI pipeline typically took between 14 and 20 minutes to complete a run. Upon investigating the cause of this delay, I discovered that around 80% of the time was consumed by Cypress end-to-end tests running sequentially. Most of these tests exercised a checkout flow that involved calls to Stripe, and that interaction was the primary reason for the variability in test completion times.

Occasionally, connecting to Stripe would take longer than expected, leading to test failures due to timeouts. These flaky tests meant re-running the entire pipeline, wasting both time and GitHub Actions minutes and disrupting engineers' focus. This inefficiency made deploying changes a slow and painful process.

The fix

To tackle this issue, I concentrated on reducing the time consumed by the Cypress tests. I employed several strategies, including:

  • Parallelising end-to-end tests across five runners.
  • Caching node modules and the Cypress binary.
  • Enabling automatic retries for Cypress tests.

Parallelisation and caching

To parallelise the e2e tests, I added a matrix strategy with 5 runners. The repo had 15 e2e tests at the time of making this change, equating to 3 tests per runner. Note that there's a balancing act between adding more runners and the additional cost of running each one. Each runner also needs its own set-up, and I found there were about 50 seconds of overlapping tasks on every runner (the checkout and Node set-up steps).

Each runner took about 4.5 minutes to run, so the e2e wall-clock time dropped from roughly 12 minutes to 4.5. The trade-off is total compute: 5 runners at 4.5 minutes each is approximately 22 billed minutes, compared to the 12 minutes the e2e tests previously took sequentially. Looking at the entire deployment pipeline, rough tests indicated an increase in run time of about 45 seconds to 1 minute when going from 5 to 4 runners, and a similar increase again when going from 4 to 3 runners (fewer runners = less parallelisation).
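
As a rough back-of-envelope sketch of that trade-off (an idealised model using the figures above; it ignores uneven spec sizes and the per-runner dev server start-up, which is why the real per-runner time landed closer to 4.5 minutes):

// Back-of-envelope model of wall-clock time vs total billed minutes.
// Inputs are the rough figures from this post: ~12 minutes of sequential
// e2e time and ~50 seconds of per-runner set-up (checkout, Node, cache
// restore). Treat the output as a lower bound, not a prediction.
const sequentialE2eMinutes = 12
const perRunnerSetupMinutes = 50 / 60

for (const runners of [3, 4, 5]) {
  const wallClock = sequentialE2eMinutes / runners + perRunnerSetupMinutes
  const billed = wallClock * runners
  console.log(
    `${runners} runners: ~${wallClock.toFixed(1)} min wall-clock, ~${billed.toFixed(1)} billed minutes`,
  )
}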

To keep overlapping tasks to a minimum, I split the workflow into two jobs:

  1. Installation of dependencies, type checking, linting and unit tests;
  2. E2E tests (this still requires setting up Node, but it uses the node modules and Cypress binary caches, thus skipping the dependency installation step).

I tried running the two jobs in parallel, but it actually resulted in a longer overall run time (each job then has to install dependencies and repeat the post-run steps). Running the jobs sequentially also means failing faster when the much quicker lint and unit tests job fails.

One of the GitHub Actions used in the workflow is actions/setup-node@v3. This uses actions/cache under the hood, but only for the Yarn package cache. Specifying explicit caches for node_modules and the Cypress binary allowed me to reuse them in the e2e_tests job and skip the dependency installation step there entirely.

Here's the resulting workflow (the previous version is in the Appendix below):

name: Check changes on branch

on:
  push:
    branches:
      - "**"
      - "!main"

jobs:
  lint_and_unit_tests:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: "read"
      id-token: "write"
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: "yarn"
      - name: Cache node modules
        uses: actions/cache@v3
        id: cache-node-modules
        with:
          path: node_modules
          key: yarn-${{ hashFiles('yarn.lock') }}
      - name: Cache Cypress binary
        uses: actions/cache@v3
        id: cache-cypress-binary
        with:
          path: ~/.cache/Cypress
          key: cypress-${{ hashFiles('yarn.lock') }}
      - name: Install dependencies
        run: yarn install
      - name: Run lint
        run: yarn run lint
      - name: Run formatter check
        run: yarn run format:check
      - name: Run types check
        run: yarn run types:check
      - name: Run unit tests
        run: yarn run test

  e2e_tests:
    runs-on: ubuntu-latest
    needs: lint_and_unit_tests
    timeout-minutes: 20
    strategy:
      fail-fast: false
      matrix:
        # Number of containers must match TOTAL_RUNNERS below
        containers: [0, 1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: "yarn"
      - name: Restore node modules from cache
        uses: actions/cache@v3
        with:
          path: node_modules
          key: yarn-${{ hashFiles('yarn.lock') }}
      - name: Restore Cypress binary from cache
        uses: actions/cache@v3
        with:
          path: ~/.cache/Cypress
          key: cypress-${{ hashFiles('yarn.lock') }}
      - name: Run e2e tests
        uses: cypress-io/github-action@v5
        with:
          # Already installed in cache
          install: false
          command: yarn node scripts/cypress-ci-run.js
        env:
          # Must match number of containers above
          TOTAL_RUNNERS: 5
          THIS_RUNNER: ${{ matrix.containers }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - uses: actions/upload-artifact@v2
        if: failure()
        with:
          name: cypress-screenshots
          path: cypress/screenshots
      - uses: actions/upload-artifact@v2
        if: failure()
        with:
          name: cypress-videos
          path: cypress/videos

Cypress automatic retries

As the majority of the e2e tests call Stripe, tests would occasionally fail due to slow connections and timeouts. I also noticed that the first test would sometimes fail when visiting its first app page, because the Next dev server can take a while to build the first page route on demand. Adding automatic retries helped overcome both of these issues.

// cypress.config.js

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    baseUrl: 'http://localhost:3000',
    chromeWebSecurity: false,
    // 20 seconds, up from Cypress's 4-second default, to allow for slow
    // responses (e.g. the Stripe calls mentioned above)
    defaultCommandTimeout: 20000,
    // Retry failed tests up to twice in `cypress run` (CI), but never in the
    // interactive runner (`cypress open`)
    retries: {
      runMode: 2,
      openMode: 0,
    },
  },
})

How to parallelise Cypress tests (without paying for Cypress Cloud)

Cypress does not make the parallelisation of tests easy (they offer this in their paid solution, Cypress Cloud). Some manual scripts were therefore needed to handle the splitting of specs across the number of runners in the workflow.

To run the Cypress tests, we use cypress-io/github-action@v5. We wanted to keep using this action as it handles a few things out of the box for us but, annoyingly, it does not let us pass in the matrix variables. It was therefore necessary to set up our own command, passing the matrix variables through environment variables.

      - name: Run e2e tests
        uses: cypress-io/github-action@v5
        with:
          # Already installed in cache
          install: false
          command: yarn node scripts/cypress-ci-run.js
        env:
          # Must match number of containers in matrix strategy
          TOTAL_RUNNERS: 5
          THIS_RUNNER: ${{ matrix.containers }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

To summarise how it all works at a high level (see files in the Appendix below):

  1. The cypress-spec-split.js script is responsible for returning a list of e2e spec file paths. By passing in the total number of runners (totalRunners) and the specific runner (thisRunner) from the GH Actions workflow, we get back the shortlist of spec file paths that this specific runner needs to run (see the sketch after this list).

  2. The cypress-ci-run.js script is responsible for actually starting the dev server, and calling the yarn cypress:run command, passing in the shortlist of specs that a particular runner should run (from step 1 above). This is done by using the --spec flag.

  3. The GH Actions workflow calls yarn node scripts/cypress-ci-run.js, passing in the totalRunners and specific runner (thisRunner) arguments from the matrix strategy.
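
To make the split itself concrete, here's a minimal sketch of the round-robin assignment that splitSpecs (in the appendix) performs. The spec file names and runner count are hypothetical; in the real workflow the list comes from globbing cypress/e2e and sorting by test count.

// A minimal illustration of the round-robin split performed by splitSpecs.
// Spec names are hypothetical; the real list is globbed from cypress/e2e
// and sorted by test count (largest first) so runners get balanced work.
const sortedSpecs = [
  'cypress/e2e/checkout.cy.ts',
  'cypress/e2e/subscriptions.cy.ts',
  'cypress/e2e/signup.cy.ts',
  'cypress/e2e/login.cy.ts',
  'cypress/e2e/profile.cy.ts',
  'cypress/e2e/search.cy.ts',
]

const totalRunners = 3 // TOTAL_RUNNERS
const thisRunner = 0 // THIS_RUNNER, i.e. the matrix.containers value

// Runner 0 takes indices 0 and 3, runner 1 takes 1 and 4, runner 2 takes 2 and 5
const specsToRun = sortedSpecs.filter((_, index) => index % totalRunners === thisRunner)

// Prints "cypress/e2e/checkout.cy.ts,cypress/e2e/login.cy.ts", the
// comma-separated list that ends up in Cypress's --spec flag
console.log(specsToRun.join(','))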

Please go and read this excellent post, which is where I got the scripts from. It also runs through how the scripts work in more detail.

Concluding statements

  • The few tests I've run show a running time of 7-9 minutes post-changes (assuming the caches are used; for a brand-new run, it was between 11 and 12 minutes). I'll continue to monitor this on subsequent feature branches, in addition to checking whether we keep seeing flaky Stripe tests. I'll also need to monitor how much this increases costs.
    • Hopefully fewer re-runs due to flakiness will make up for the cost increase from running parallel runners.
  • 7 minutes still feels quite long, since developers will likely have context-switched in that time. It would be interesting to try out something like Playwright to see whether we can get our e2e tests running faster.

Appendix

Previous workflow

name: Check changes on branch

on:
  push:
    branches:
      - "**"
      - "!main"

jobs:
  lint_and_test:
    runs-on: ubuntu-latest
    timeout-minutes: 25
    permissions:
      contents: "read"
      id-token: "write"
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: "yarn"
      - name: Install dependencies
        run: yarn install
      - name: Run lint
        run: yarn run lint
      - name: Run formatter check
        run: yarn run format:check
      - name: Run types check
        run: yarn run types:check
      - name: Run unit tests
        run: yarn run test
      - name: Run e2e tests
        uses: cypress-io/github-action@v4
        with:
          command: yarn run e2e:run
      - uses: actions/upload-artifact@v2
        if: failure()
        with:
          name: cypress-screenshots
          path: cypress/screenshots
      - uses: actions/upload-artifact@v2
        if: failure()
        with:
          name: cypress-videos
          path: cypress/videos

Cypress parallelisation files

// cypress-spec-split.js

const fs = require('fs').promises
const globby = require('globby')

const specPatterns = {
  specPattern: 'cypress/e2e/*.cy.{ts,tsx,js,jsx}',
  excludeSpecPattern: ['tsconfig.json'],
}

// Matches `it(` and `test(` calls so we can count the tests in each spec
const testPattern = /(^|\s)(it|test)\(/g

// True when this file is executed directly (rather than required as a module)
const isCli = require.main && require.main.filename === __filename

const ERR_MISSING_ARGUMENTS = 'Missing arguments'
const ERR_INVALID_TOTAL_RUNNERS = 'Invalid total runners'
const ERR_INVALID_RUNNER = 'Invalid runner'

function getArgsInSplit() {
  const [totalRunnersStr, thisRunnerStr] = process.argv.slice(2)

  if (!totalRunnersStr || !thisRunnerStr) {
    throw new Error(ERR_MISSING_ARGUMENTS)
  }

  const totalRunners = Number(totalRunnersStr)
  const thisRunner = Number(thisRunnerStr)

  if (isNaN(totalRunners)) {
    throw new Error(ERR_INVALID_TOTAL_RUNNERS)
  }

  if (isNaN(thisRunner)) {
    throw new Error(ERR_INVALID_RUNNER)
  }

  return { totalRunners, thisRunner }
}

async function getTestCount(filePath) {
  const content = await fs.readFile(filePath, 'utf8')
  return (content.match(testPattern) || []).length
}

async function getSpecFilePaths() {
  const options = specPatterns

  const files = await globby(options.specPattern, {
    ignore: options.excludeSpecPattern,
  })

  return files
}

// Sort specs by test count (descending) so the round-robin split balances load
async function sortSpecFilesByTestCount(specPathsOriginal) {
  const specPaths = [...specPathsOriginal]

  const testPerSpec = {}

  for (const specPath of specPaths) {
    testPerSpec[specPath] = await getTestCount(specPath)
  }

  return Object.entries(testPerSpec)
    .sort((a, b) => b[1] - a[1])
    .map((x) => x[0])
}

// Deal the sorted specs out round-robin: runner N takes every totalRunners-th
// spec starting at index N
function splitSpecs(specs, totalRunners, thisRunner) {
  return specs.filter((_, index) => index % totalRunners === thisRunner)
}

async function cypressSpecSplit() {
  if (!isCli) {
    return
  }

  try {
    const specFilePaths = await sortSpecFilesByTestCount(await getSpecFilePaths())

    if (!specFilePaths.length) {
      throw Error('No spec files found.')
    }

    const { totalRunners, thisRunner } = getArgsInSplit()

    const specsToRun = splitSpecs(specFilePaths, totalRunners, thisRunner)

    console.log(specsToRun.join(','))
  } catch (err) {
    console.error(err)
    process.exit(1)
  }
}

cypressSpecSplit()

// cypress-ci-run.js

const { exec } = require('child_process')

function getEnvNumber(varName, { required = false } = {}) {
  if (required && process.env[varName] === undefined) {
    throw Error(`${varName} is not set.`)
  }

  const value = Number(process.env[varName])

  if (isNaN(value)) {
    throw Error(`${varName} is not a number.`)
  }

  return value
}

function getArgs() {
  return {
    totalRunners: getEnvNumber('TOTAL_RUNNERS', { required: true }),
    thisRunner: getEnvNumber('THIS_RUNNER', { required: true }),
  }
}

function cypressCIRun() {
  try {
    const { totalRunners, thisRunner } = getArgs()

    // Start the Next dev server, wait for http://localhost:3000 to respond,
    // then run only the specs assigned to this runner (cypress-spec-split.js
    // prints them as a comma-separated list for the --spec flag)
    const command = `yarn start-server-and-test dev http://localhost:3000 'yarn cypress:run --spec "$(yarn --silent node scripts/cypress-spec-split.js ${totalRunners} ${thisRunner})"'`

    const commandProcess = exec(command)

    if (commandProcess.stdout) {
      commandProcess.stdout.pipe(process.stdout)
    }

    if (commandProcess.stderr) {
      commandProcess.stderr.pipe(process.stderr)
    }

    commandProcess.on('exit', (code) => {
      process.exit(code || 0)
    })
  } catch (err) {
    console.error(err)
    process.exit(1)
  }
}

cypressCIRun()
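
In case you ever need to reproduce a single runner's shard locally, the same scripts can be driven by hand by setting the environment variables the workflow would normally provide. This is just a sketch assuming it's run from the repo root; run-shard.js is a hypothetical helper, not one of the original scripts:

// run-shard.js (hypothetical helper, placed at the repo root)
// Reproduces runner 0 of 5 locally by setting the env vars that the
// GitHub Actions workflow would normally provide, then reusing the same
// entry point the workflow calls.
process.env.TOTAL_RUNNERS = '5'
process.env.THIS_RUNNER = '0'

require('./scripts/cypress-ci-run')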
