On maintaining a k6 codebase, Part 1

tools, testing, load testing, and k6
a movie still from A Bug's Life, 1998

Note: This is a technical blog post, different from most of my usual writing. It’s meant for readers who have used k6 and might enjoy some of my personal workarounds and tips. If you’ve never heard of k6, I recommend reading my past post about it.

The problem

When folks start trying out k6, everything “looks” simple.

We want to load test an endpoint? Write a few lines of JavaScript, let k6 do the rest.

The typical example script would look something like this:

import { check } from 'k6';
import http from 'k6/http';

export default function () {
  const res = http.get('http://test.k6.io/');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
}

And running it is as easy as with most CLI tools, just run: k6 run <our-script.js>.

Compared to other tools on the market, it’s a buttery-smooth first-time developer experience.

Want to mimic a scenario that requires more endpoints? We have a couple of ways of doing just that:

This is all fine for one script. In all cases we’ll likely end up with a load script that, in the less complex cases, looks a bit like:

import { check } from 'k6';
import http from 'k6/http';

//...

export default function () {
  const res = http.post('...');
  check(res, {
    // some assertions and saving some data
  });

  const res2 = http.put('...');
  check(res2, {
    // some assertions and saving some data
  });

  const res3 = http.get('...');
  check(res3, {
    // some assertions and saving some data
  });

  // script goes on ad infinitum
}

The script will grow as big as the single API flow we are trying to mimic.

BUT, now we’re in trouble! What if we want to tackle a complex scenario, for example:

Soon enough, we’ll start scratching our heads. We’ll go through the k6 documentation. We’ll pore over modules and deep-dive into test lifecycles, k6-execution and SharedArrays.

Despair! Chaos reigns! Oh, the humanity! It’s hideous to watch, fellow readers!

Reality sets in. There’s no recipe lying around for a scalable way to maintain our load testing codebase that can both:

Proposed solutions

There are some workarounds that have helped me maintain a few load testing codebases.

Abstract requests in clients

Abstracting requests in clients means separating “behavior” from the “descriptive” part of tests. The principle is:

Our main script ONLY paints the interaction flow we’re trying to mimic against our target system. Everything else is somewhere else.

In our load test script we are only looking at the design of our test. Any specifics of the interactions we can abstract away into clients and other helpers.

How much we should abstract is anyone’s guess. Too much abstraction early on is dull. And as folks say, it may be better to duplicate than to pursue the wrong abstraction.

Where I personally try to strike the balance is:

Any sort of request description and logic we can probably get away with abstracting into a “Client”

Put into an example scenario, let’s say:

The script itself in k6 would look something like this:

/* global __ENV */
import { group } from 'k6'
import { CustomClient } from './util/customclient.js'

// ...

const hostname = __ENV.hostname
const dinossaur_species = __ENV.dinossaur_species

export default function () {
  group('Deploy food for a given species', function () {
    const client = new CustomClient(hostname)
    const feeders = client.getDinosaurFeeders(dinossaur_species)
    for (let i = 0; i < feeders.length; i++) {
      client.deployFood(feeders[i])
    }
  })
}

In the above script we ignore the specifics of the API requests. We don’t know how to actually list/get all the dinosaur feeders and then deploy food on each of them. Those specifics we would encode and expose in a client, which would look like:

import http from 'k6/http';
import { group, check, fail } from 'k6';
import { uuidv4 } from 'https://jslib.k6.io/k6-utils/1.4.0/index.js';
import someHeadersMethod from '../our-lib/some-headers.js';

//...

export class CustomClient {
  constructor (hostname) {
    // ctor.
    this.hostname = hostname
    this.nonAuthenticatedHeaders = someHeadersMethod(hostname)
    this.uniqueId = uuidv4()
    // Since each virtual user & iteration combo is unique,
    // ...we can in theory keep data throughout the test in the CustomClient
  }

  // Example method
  getDinosaurFeeders (species) {
    const url = `http://${this.hostname}/feeders/${species}`
    return group('Get Dinosaur Feeders', () => {
      const r = http.get(url, { headers: this.nonAuthenticatedHeaders })
      const status = check(r, {
        'status is 200': (r) => r.status === 200
      })
      if (!status) {
        fail(`Unexpected status for ${url}, received ${r.status}, log_id ${this.uniqueId}`)
      }
      return r.json('feeders') // e.g. feeders is an array of strings
    })
  }
}

For other experiments we can then reuse the CustomClient and other abstractions.
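The easiest abstractions to share across experiments are the ones that carry no k6-specific logic at all. As a sketch (the names buildFoodPayload and its defaults are hypothetical, not from any real codebase), a plain-JavaScript helper module that several test scripts could import might look like:

```javascript
// util/payloads.js (hypothetical helper module, plain JavaScript)
// Builds the JSON body for a deployFood request. Because it is a pure
// function with no k6 imports, it can be unit-tested outside of k6.
function buildFoodPayload(species, feederId, overrides) {
  const defaults = {
    species: species,
    feeder_id: feederId,
    rations: 1,
  };
  // Shallow-merge any per-scenario overrides on top of the defaults.
  return Object.assign({}, defaults, overrides || {});
}
```

Since the helper never touches k6/http, it can be exercised with any JavaScript test runner long before it runs inside a load test.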

Expose environment variables from the start

In one of the earlier sections, Abstract requests in clients, there was a snippet of code that might have gone unnoticed:

const hostname = __ENV.hostname
const dinossaur_species = __ENV.dinossaur_species

These two lines fetch environment variables, which in turn affect the behavior of the script at runtime.

We can use them when running on k6 directly:

hostname=<some-host> dinossaur_species=<some-species> k6 run dino-park-security-fences.test.js

And we could also use them when running on Docker:

docker run --name dino-park-tests \
  --net host \
  -v ${PWD}:/src \
  -i grafana/k6 run \
  -e hostname=<some-host> -e dinossaur_species=<some-species> \
  /src/dino-park-security-fences.test.js

It’s a pretty innocuous feature.

The trick is to make use of this feature as early as possible. We’ll want to be able to set up configuration and tweaks regardless of the “host” where we run our k6 scripts:

All of these support environment variables, which enable tweaking our tests.
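One pattern that pays off early is funneling every __ENV read through a single config helper, with defaults and a loud failure for required values. A minimal sketch, assuming my own required/defaults convention (loadConfig is a hypothetical name, not a k6 API):

```javascript
// util/config.js (hypothetical, plain JavaScript)
// Reads a config object (in a k6 script: the global __ENV) and applies
// defaults, throwing early when a required variable is missing.
function loadConfig(env) {
  const required = ['hostname'];
  const defaults = { dinossaur_species: 'velociraptor' };

  for (const key of required) {
    if (!env[key]) {
      throw new Error('Missing required environment variable: ' + key);
    }
  }
  // Values from the environment win over defaults.
  return Object.assign({}, defaults, env);
}
```

In the test script this becomes const config = loadConfig(__ENV) at module load, so a misconfigured run fails immediately rather than halfway through a long test.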

Wrap assertions and meaningful failures

One thing that is often overlooked when setting up load testing scenarios in k6 is what to do in case of failure.

A failure state oftentimes falls into these patterns:

To counter this: use k6’s check() and fail().

import {check, fail} from 'k6';

//...

const status = check(/* ... */);
if (!status) {
    fail(`Unexpected status for ${url}, received ${r.status}, log_id ${this.uniqueId}`);
}
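To avoid repeating that check-then-fail dance in every client method, the pattern can be wrapped in a small helper. Below is a sketch in plain JavaScript — runChecks is a hypothetical name, and in a real k6 script you would pass its message to k6’s fail() (which aborts the current iteration) instead of handling it yourself:

```javascript
// Hypothetical helper modeling the check()+fail() pattern.
// Runs each named predicate against the response and collects the names
// of the ones that failed, so the failure message says *what* broke,
// not just that something did.
function runChecks(res, checks, context) {
  const failures = [];
  for (const name of Object.keys(checks)) {
    if (!checks[name](res)) {
      failures.push(name);
    }
  }
  return {
    passed: failures.length === 0,
    message: failures.length
      ? 'Failed checks [' + failures.join(', ') + '] for ' + context
      : '',
  };
}
```

Inside a client method this would read: const result = runChecks(r, { ... }, url); if (!result.passed) fail(result.message).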

Closing remarks

There are problems with each of these workarounds. The biggest is likely the added effort to set up a load-testing repository in the first place.

There’s a “lock-in” feeling when writing tests using k6, much like with Postman or Insomnia. In k6’s case, we don’t control what translates the JavaScript-esque code to run in a Go-based “load engine”.

Making k6 code a bit more structured or terse brings other responsibilities: adding steps to keep it clean, setting up linting, and managing its dependencies. This is not the case with some other load testing libraries.

When it comes to choosing a load testing tool, as with any other testing tool, it becomes a matter of balance. Or rather, of picking our battles and our poison of choice.

There are also more abstract problems that follow from this approach with k6, problems shared with other load testing tools:

I’ll share more of each in two future posts. Stay tuned.


If you read this far, thank you. Feel free to reach out to me with comments, ideas, grammar errors, and suggestions via any of my social media. Until next time, stay safe, take care! If you are up for it, you can also buy me a coffee ☕