Test Automation Best Practices in Action (Part 5)

Core Framework & Test Design – Random Data Generation with faker.js

So far, we’ve built a beautifully abstracted framework using the Page Object Model, made our tests readable with BDD, destroyed flakiness by dropping static waits, and scaled our coverage using Data-Driven Testing.

Today, we are tackling the data itself. One of the most subtle traps in test automation is relying on hardcoded static data.

Let’s say you write a test that creates a user named "John Doe" with the email "test@example.com". It passes. Great! But what happens when you scale your CI/CD pipeline to run tests in parallel? Two test workers try to register "test@example.com" at the exact same time, and one of them fails with a unique-constraint violation in the database.

And what happens when a real user registers with a 15-character alphanumeric name, but your UI layout only accounts for 10? Your hardcoded "John Doe" test will never catch that layout bug.

To build a truly resilient suite, we need Random Data Generation. Let’s look at how we solve this using faker.js in Playwright!


What is Random Data Generation?

Instead of hardcoding inputs, we use a library like @faker-js/faker to generate dynamic, randomized data—like names, emails, addresses, and even hex colors—on the fly during test execution.

This practice does two critical things:

  1. Prevents State Collisions: Every test run uses completely unique data, meaning you can safely run hundreds of tests in parallel against the same database.
  2. Discovers Edge Cases Naturally: Over time, faker will throw weird, long, or complex strings at your application. It forces your tests to assert on actual system behavior rather than predefined constants.

Seeing it in Action

In our Test Automation Best Practices repository, we have a test that verifies the system can handle a brand new, dynamically generated color.

Here is how we orchestrate this in random-data.spec.ts:

import { test, expect } from '../baseFixtures'
import { faker } from '@faker-js/faker'

test.describe('Random Data Testing with faker.js', () => {
  let createdColorName: string | null = null

  // Clean up the dynamically generated data after the test!
  test.afterEach(async ({ request }) => {
    if (createdColorName) {
      await request.delete(`/api/colors/${encodeURIComponent(createdColorName)}`)
      createdColorName = null
    }
  })

  test('should create dynamic random color via API and verify through UI', async ({ homePage, page, request }) => {
    // 1. Generate unique random data
    const randomColorName = faker.string.alphanumeric(15)
    const randomHex = faker.color.rgb() // returns a hex string such as '#8be4ab' by default

    const newColor = { name: randomColorName, hex: randomHex }
    createdColorName = newColor.name

    // 2. Arrange - Inject the random state via API
    const createResponse = await request.post('/api/colors', {
      data: newColor
    })
    expect(createResponse.ok()).toBeTruthy()

    // 3. Act - Navigate to the UI
    await homePage.goto()

    // Since this random string isn't in our English translation file (en.json),
    // i18next falls back to the key "colors.<randomColorName>".
    const customBtn = page.getByRole('button', { name: `colors.${newColor.name.toLowerCase()}` })

    // Wait for the exact random API endpoint deterministically
    const responsePromise = page.waitForResponse(
      (resp) => resp.url().includes(`/api/colors/${encodeURIComponent(newColor.name)}`) && resp.status() === 200
    )
    await customBtn.click()
    await responsePromise

    // 4. Assert - Verify the UI rendered our random hex code!
    await expect(homePage.currentColorText).toContainText(newColor.hex)
  })
})

💡 Expert Best Practices Applied Here:

  1. State Teardown (test.afterEach): When you generate random data, you must clean up after yourself. Notice how we store the createdColorName and use the API to DELETE it in the afterEach hook? This ensures true test isolation and keeps your test database pristine.
  2. Defensive Network Interception (encodeURIComponent): Because faker generates random strings, those strings might eventually contain spaces or special characters. If we just blindly interpolated the string into our waitForResponse URL check, it might fail. Wrapping it in encodeURIComponent(newColor.name) is a bulletproof way to handle dynamic routing.
  3. Handling i18n Fallbacks: What happens when you test a multi-language app with a string that doesn’t exist in your translation dictionary? The app usually falls back to the translation key. We anticipate this by locating the button using colors.${newColor.name.toLowerCase()}. This proves the test is aware of the application’s underlying architecture!
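To see why the encodeURIComponent point matters, here is a quick standalone illustration (the color name below is a hypothetical example of what faker could eventually produce, not actual output from the test above):

```typescript
// A future faker call could yield a name with spaces or reserved characters.
const trickyName = 'Deep Teal #2' // hypothetical example value

// Interpolated raw, the '#' would be treated as a URL fragment and the
// waitForResponse() predicate would never match the real request.
const unsafeUrl = `/api/colors/${trickyName}`

// Percent-encoding makes the path safe and the URL check deterministic.
const safeUrl = `/api/colors/${encodeURIComponent(trickyName)}`

console.log(unsafeUrl) // /api/colors/Deep Teal #2
console.log(safeUrl)   // /api/colors/Deep%20Teal%20%232
```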

I built this repository to be a living blueprint of Test Automation excellence. Check out the full, open-source repository here: jpourdanis/test-automation-best-practices

Do you have an enhancement idea or a brilliant testing pattern I missed? Fork it, build it, and submit a Pull Request! Let’s collaborate and raise the quality bar together.

Happy testing! 🚀

Do you want to do an exploration on this example together?

You can book some time with me to discuss your current situation, do exploratory testing of this post’s sample, or work through any other challenge you’re facing in Software Testing & Quality Engineering.