What makes a good frontend test? 37 tips and tricks for writing better frontend tests

December 22, 2025
#testing #frontend #tips

I love working with tests. Writing a test, then updating the code until it passes and goes green, is always a fun process.

But there is nothing more frustrating than trying to figure out what some existing tests were doing. (Often, when I was the original author of those tests too!)

Here are all my tips for writing high-quality frontend tests.

I've simplified some examples by removing the frontend-specific parts, to make each point easier to follow. This site is all about testing frontend apps, but honestly most clean-testing principles apply equally to frontend and backend code!

Good tests are clear about what they are testing

If you are testing a component, having separate test() (or it()) blocks for different assertions makes the tests much easier to maintain.

Important to note: I am not suggesting a strict rule of 1 test = 1 assertion.

That is a silly rule that some take too far. But keep unrelated assertions in different test() calls.

For example, the following test is messy and hard to maintain, and when it fails it isn't clear what it was testing:

test('component works ok', async () => {
  render(<SomeComponent />);

  // test data loads:
  expect(
    screen.getByText('Loading...')
  ).toBeInTheDocument();

  await waitFor(() =>
    expect(fetch).toHaveBeenCalled()
  );
  expect(
    await screen.findByText(
      'your mock data from api'
    )
  ).toBeInTheDocument();

  expect(
    screen.queryByText('Loading...')
  ).toBeNull();

  // test we can create
  await userEvent.click(
    screen.getByRole('button', {
      name: 'Create',
    })
  );
  expect(
    await screen.findByText(
      'Created ok message'
    )
  ).toBeInTheDocument();
  expect(fetch).toHaveBeenCalledWith(
    '/create'
  );

  // test we can delete
  await userEvent.click(
    screen.getByRole('button', {
      name: 'Delete',
    })
  );
  expect(
    screen.getByText('Deleted item')
  ).toBeInTheDocument();
  expect(fetch).toHaveBeenCalledWith(
    '/delete'
  );
});

In that test, we are:

  • testing it loads data/calls fetch
  • and testing we can use the create form
  • and testing we can use the delete form

This is a bit of a made-up example, but it is so easy to end up with these sorts of tests (especially when adding one tiny feature at a time... it feels easier to update an existing test and cover that small new bit of functionality in the same place).

These sorts of tests are annoying to maintain, because if they fail it isn't immediately clear what part failed.

So in this example, I would suggest splitting it up into three tests. The overhead of three separate tests, re-rendering each time, is worth it for the better developer experience and cleaner tests.

Avoid testing implementation details

Try to avoid testing implementation details. In the non-frontend world of testing, we would always try to test the public methods on classes. In frontend tests we apply the same concept - but we test everything via what our components render and the interactions we can perform on them (clicking buttons, filling in forms, and so on).

This should be an obvious one as it is so commonly referenced, but it is very easy to get carried away and start testing implementation details rather than the behaviour that users (real users on a UI, or consumers of a public API) actually rely on.

For example - testing a hook vs testing a component that uses the hook.

//  ❌ bad - testing implementation details
test('component uses useState correctly', () => {
  const { result } = renderHook(() =>
    useState(0)
  );
  expect(result.current[0]).toBe(0);
});

// ✅ good - testing user-facing behavior
test('counter displays initial value of 0', () => {
  render(<Counter />);
  expect(
    screen.getByText('Count: 0')
  ).toBeInTheDocument();
});

If you are only testing the hook, then there is no proof your component is working as expected.

It also means that if you refactor your component (same functionality, but perhaps changing how you increment - with a different hook), your existing test will still pass.

Note: there are times when it makes sense to test your component, but also to test some of the internals in isolation. Maybe your component uses a hook, and it is much easier to test the various combinations of things that the hook returns. Just remember that if you swap that hook out for something else, your component will be missing test coverage...

Test your application logic, not third-party code

Don't test framework or library (React, Next) specific code.

If you're using a framework or library, there isn't much value in you testing that.

Of course, test your code that uses it.

This means for React apps:

  • don't bother testing that it can take some JSX and render it
  • if you pass in style={someStyleObject}, there is no point testing that React is setting the style object correctly

Where it gets a bit less clear is for things like checking that a NextJS app changes pages. I often find the only reliable way is to run end-to-end tests (Playwright or Cypress) and check that the URL changed.

Using the correct (most suitable) query functions in React Testing Library or Vitest Browser Mode

I've written about this a lot on this site - but it is important to use the correct query functions (like getByText(), getByPlaceholderText(), etc.).

Using the correct ones means:

  • it is easier to read the test (it is clearer what you're trying to do in the test),
  • it encourages a more realistic test (finding elements by their label text for example is basically what real humans do when interacting with a form),
  • and nudges you into using semantically correct HTML elements (which helps with accessibility).

For example:

// ❌ avoid this - getByTestId is a fallback when there is nothing better to use
const button = screen.getByTestId(
  'submit-btn'
);
const heading = screen.getByTestId(
  'main-title'
);
const input = screen.getByTestId(
  'email-input'
);

// ✅ Good - using semantic queries in order of priority
const button = screen.getByRole(
  'button',
  { name: 'Submit' }
);
const heading = screen.getByRole(
  'heading',
  { name: 'Sign Up' }
);
const input = screen.getByLabelText(
  'Email address'
);

The RTL query priority order is:

  1. getByRole - best for buttons, headings, forms
  2. getByLabelText - perfect for form inputs
  3. getByPlaceholderText - good for inputs without labels
  4. getByText - for non-interactive content
  5. getByDisplayValue - for form elements with values
  6. getByAltText - for images
  7. getByTitle - for elements with title attributes
  8. getByTestId - last resort only

Note: getByRole can be the slowest of all, so although it is higher priority, if you are seeing slow tests on components with a lot of elements it is something to bear in mind.

When writing tests, the easiest way is to add a data-testid to everything and use getByTestId(...). But it takes only a tiny bit more effort to use a better query function.

Note: data-testid definitely has its uses. Sometimes it makes tests much much easier to write and maintain. Just don't overdo it.

Avoid mocking, and prefer testing real implementations

Sometimes you see a test for a React component where every component it includes is mocked.

This comes from one way of thinking about unit tests - testing something completely in isolation. In my opinion this is an outdated way of testing that should be avoided.

And sometimes it can be easier to write initially (as you don't have to care how those components work - just mock them so they do nothing).

I've noticed AI loves to over-mock when you ask it to test a component (I would guess because it isn't sure how the other components should actually work).

// ❌ Bad - over-mocking everything
vi.mock('./Header', () => ({
  Header: () => (
    <div data-testid="header">
      Mocked Header
    </div>
  ),
}));

vi.mock('./Sidebar', () => ({
  Sidebar: () => (
    <div data-testid="sidebar">
      Mocked Sidebar
    </div>
  ),
}));

vi.mock('./Footer', () => ({
  Footer: () => (
    <div data-testid="footer">
      Mocked Footer
    </div>
  ),
}));

vi.mock('./UserProfile', () => ({
  UserProfile: () => (
    <div data-testid="user-profile">
      Mocked User
    </div>
  ),
}));

test('dashboard renders correctly', () => {
  render(<Dashboard />);

  expect(
    screen.getByTestId('header')
  ).toBeInTheDocument();
  expect(
    screen.getByTestId('sidebar')
  ).toBeInTheDocument();
  expect(
    screen.getByTestId('footer')
  ).toBeInTheDocument();
  expect(
    screen.getByTestId('user-profile')
  ).toBeInTheDocument();
});

This test isn't really testing any real implementation. It is just testing your test mocks. So I find it hard to see any value in that test.

Here is an improved version - still mocking some things, but this time just the data-fetching part. It also makes assertions that every element (like the header, sidebar, etc.) contains the expected real value, not just that there is a data-testid with that value in the DOM.

// ✅ Good - only mock what needs to be mocked
vi.mock('./api/userService', () => ({
  fetchUserData: vi
    .fn()
    .mockResolvedValue({
      name: 'John Doe',
      email: 'john@example.com',
    }),
}));

test('dashboard displays user information after loading', async () => {
  render(<Dashboard />);

  // Test that user data appears
  expect(
    await screen.findByText(
      'Welcome, John Doe'
    )
  ).toBeInTheDocument();
  expect(
    screen.getByText('john@example.com')
  ).toBeInTheDocument();

  // Test that loading indicator disappears
  expect(
    screen.queryByText('Loading...')
  ).not.toBeInTheDocument();
});

The main times I will mock or spyOn:

  • mock API responses
  • mock errors (to check the thing you're testing handles them)
  • mock a third-party library/component
  • mock when the component uses web APIs such as Canvas, which are not supported in the jsdom environment that React Testing Library tests usually run in (ideally you can extract that part of the component into its own export, so you only need to mock a small subset of the component)
  • mock something that you know is tested elsewhere and is not relevant to this component's test.

If you do mock, try to avoid over-mocking

If you do decide to use vi.mock() or jest.mock() (or their slight variations like .doMock()) be aware that you are mocking the entire module.

This means that if that module starts exporting a new property, your mocks probably all need updating.

I've seen this happen multiple times: a module was mocked, then several months later another export was added and all the tests broke.

There is a workaround with vi.importActual() in Vitest (in Jest, use jest.requireActual()), which can help:

// ❌ Bad - mocking entire module, hard to maintain
vi.mock('./userService', () => ({
  fetchUser: vi.fn().mockResolvedValue({
    name: 'John',
  }),
  updateUser: vi.fn(),
  deleteUser: vi.fn(),
}));

// ✅ Better - import actual module and only mock what you need
vi.mock('./userService', async () => {
  const actual = await vi.importActual(
    './userService'
  );
  return {
    ...actual,
    fetchUser: vi
      .fn()
      .mockResolvedValue({
        name: 'John',
      }),
    // Keep all other exports as they are
  };
});

// Jest version:
jest.mock('./userService', () => ({
  ...jest.requireActual(
    './userService'
  ),
  fetchUser: jest
    .fn()
    .mockResolvedValue({
      name: 'John',
    }),
  // Keep all other exports as they are
}));

This way, if the userService module gets new exports, your tests won't break because you're only overriding the specific functions you need to mock.

Another (nicer) workaround is to extract the things you will mock into separate files, so you can 'safely' mock the entire file.

But a much better 'workaround' is to use vi.spyOn() (or jest.spyOn()). You get automatic TypeScript typings, you can easily restore it, and when reading the test it is much easier to figure out what is going on.

// ✅ Good - spy on specific methods instead of mocking entire modules
import * as userService from './userService';

beforeEach(() => {
  vi.spyOn(
    userService,
    'fetchUser'
  ).mockResolvedValue({
    name: 'John',
  });
});

Prioritise fixing flaky tests

Personally I see flaky tests as almost as important as a bug which has made it to production.

If you have to keep re-running tests because some of your tests sometimes pass, sometimes fail then:

  • firstly, it is an obvious waste of time
  • the flaky test might be due to a real bug in your app (not just buggy test code)
  • and very soon engineers figure out that a file is always failing, and just ignore it and merge without your CI all passing. This can easily result in accidentally merging another failing test without realising.

There are times when deleting a flaky test has more value than keeping it and re-running it until it passes. But ideally, remove the flakiness!
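A common source of flakiness is code that reads the real clock (or Math.random()) internally. Here is a minimal sketch of one fix, using a made-up getGreeting helper: accept the time as a parameter (with a real-clock default for production), so the test is fully deterministic.

```javascript
// Hypothetical example: a greeting that depends on the current time.
// If it called new Date() internally with no way to override it,
// any test of it would flake around midday.
const getGreeting = (now = new Date()) =>
  now.getHours() < 12
    ? 'Good morning'
    : 'Good afternoon';

// In a test, pass a fixed date instead of relying on the real clock:
const fixedMorning = new Date(2025, 0, 1, 9, 0, 0);
console.log(getGreeting(fixedMorning)); // 'Good morning'
```

Vitest and Jest can also fake the clock for you (vi.useFakeTimers() / jest.useFakeTimers()) when you can't change the code under test.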

Don't only test the happy path

The "happy path" is the most important path - the "successful" or error-free flow.

For example a "contact form submits OK and sends the form details to the backend" would be the happy path.

However, the sad path is equally important.

The sad path means the opposite of the happy path. In other words - when things go wrong.

  • error states
  • form validation issues
  • network issues
  • authentication or authorisation issues
  • and edge cases

These are all part of the user experience that real users will experience.

And these are the types of things that any QA department will not notice as easily as bugs in the happy path.

// ✅ Good - testing the happy path
test('submits contact form successfully', async () => {
  render(<ContactForm />);

  // ... fill out form
  // ... click submit

  expect(
    await screen.findByText(
      'Message sent successfully!'
    )
  ).toBeVisible();
});

// ✅ Also good - testing the sad path
test('shows error when form submission fails', async () => {
  // Mock API to return error
  vi.mocked(
    fetch
  ).mockRejectedValueOnce(
    new Error('Network error')
  );

  render(<ContactForm />);

  // ... fill out form
  // ... click submit

  expect(
    await screen.findByText(
      'Failed to send message. Please try again.'
    )
  ).toBeVisible();
});

Good tests do not overuse snapshots

What are snapshots?

There are two types of snapshots.

The first is just expect(val).toMatchSnapshot() which after the first run of the test will write a file with the output of val. In future test runs it will compare the current value to that snapshot file.

The second - expect(val).toMatchInlineSnapshot() is very similar, but updates the test file and stores the output of the value inline.

After the first run, Jest or Vitest will replace that line with something like expect(val).toMatchInlineSnapshot("some-output") (note: it can be a serialised object, array etc too)

I love .toMatchInlineSnapshot(). I use it all the time during development.

Even with TDD, sometimes it is just faster to use inline snapshots than to write out complex objects in a .toStrictEqual(...).

But they are the cause of many problems in my opinion.

If the code base overuses snapshots, we all get used to running yarn test:watch and then hitting u (to update the snapshots). Bugs make it through, and it is so easy to miss them.

I like them for things like error message strings, especially if you expect them to change in a future edit. It just makes development quicker, with minimal drawbacks.

But if you use them with massive objects, huge arrays, or serialised DOMs, tests quickly become very confusing. It is much better to test just the parts that are relevant for the test.

For example, instead of this overly complex snapshot:

test('updates theme to dark mode', () => {
  updateTheme(mockUserId, 'dark');

  const userProfile =
    getUserProfile(mockUserId);

  expect(userProfile)
    .toMatchInlineSnapshot(`
    {
      "id": "user123",
      "name": "John Doe",
      "email": "john@example.com",
      "preferences": {
        "theme": "dark",
        "language": "en",
        "notifications": {
          "email": true,
          "push": false,
          "sms": true,
          "marketing": false,
          "newsletter": true,
        },
        "privacy": {
          "showEmail": false,
          "showPhone": true,
          "allowAnalytics": true,
          "cookieConsent": true,
        },
      },
      "profile": {
        "bio": "Software engineer passionate about testing",
        "location": "San Francisco, CA",
        "website": "https://johndoe.dev",
        "socialMedia": {
          "twitter": "@johndoe",
          "github": "johndoe123",
          "linkedin": "john-doe-engineer",
        },
      },
      "metadata": {
        "createdAt": "2023-01-15T10:30:00Z",
        "updatedAt": "2024-03-20T14:45:30Z",
        "lastLoginAt": "2024-03-21T09:15:22Z",
        "loginCount": 247,
        "accountStatus": "active",
      },
    }
  `);
});

When that test starts failing, how do you know what we were really testing there?

I find that the following version is much cleaner and focuses only on what matters for this specific test:

test('updates theme to dark mode', () => {
  updateTheme(mockUserId, 'dark');

  const userProfile =
    getUserProfile(mockUserId);

  expect(
    userProfile.preferences.theme
  ).toBe('dark');
});

The second version is much easier to read, and it is clear what the test is actually verifying.

Another advantage is that if the user profile data structure gains new attributes (or loses some), this test will be unaffected (unless userProfile.preferences.theme itself changes).

If you and your team do use snapshots I have two main tips:

  • be very careful of developers seeing a test fail, hitting u (to update the snapshot) and committing it. It is very easy to not realise that a bug was introduced, and all we've done is update the test so the bug passes
  • avoid snapshotting HTML. You can do something like expect(screen.getByText('hi')).toMatchInlineSnapshot(...), but these end up as huge chunks of HTML that are hard to maintain (other than by hitting u to update them).

Well organised test file structure, with correctly named tests

Using a mix of nested describe() blocks, test() (or it()) calls with clear titles, and .each() where appropriate makes maintaining your tests much easier.

They let you:

  • Quickly/easily find tests and/or know where to add a new test so related behaviours are tested near each other
  • Understand what each test does without reading the implementation
  • Run specific groups (in a describe block) of tests during development, with .only()

When you start working on huge test files (another code smell) with no structure, you can end up adding duplicate tests or not realising something has no test coverage.

Good tests run fast

Nothing feels less productive than having to wait 30+ seconds for your tests to run.

Of course, running all your tests will take longer than 30 seconds on any decent-sized app.

But when you are making changes in one or two components and running just one or two test files, it is so much nicer to get nearly instant feedback.

Tip: make use of the --watch mode when running tests, and filter down by a specific filename. For example vitest --watch UserProfile.test.tsx.

Avoid querying for elements based on classnames or other similar attributes

A mistake that I sometimes see in tests is querying for elements in the DOM by things like their class names.

const { container } = render(
  <SomeComponent />
);
// ❌ Bad - selecting by class name
const submitButton =
  container.querySelector(
    '.submit-btn'
  );
const errorMessage =
  container.querySelector(
    '.error-text'
  );

(And it is even worse when those classnames are auto-generated random strings from CSS-in-JS.)

Access to things like .querySelector() should be seen as an escape hatch for those few times you genuinely do need to reach for it.

Normally you should be able to use the standard React Testing Library (or Vitest Browser Mode) queries (like getByRole() or getByText()).

Tests written like this end up being a pain to maintain, and they break the core principle of React Testing Library, which is to query for elements semantically. But I've written about that enough earlier in this post, so I won't go into it again.

Avoid asserting that elements have specific classnames

In the last tip, I mentioned how it is bad to query for elements based on their classname.

The other side of that coin is also true - we shouldn't be making assertions that check specific classes were set.

There are exceptions - I probably do it a few times a year. If you were testing theming-related code, then go for it.

But for regular tests, I always see .toHaveClass as a code smell to avoid.

Without visual regression tests (taking real screenshots and comparing them) you are often not really proving that these classes are doing anything anyway.

There are so many built-in matchers that you can normally achieve what you were trying to check via class names with one of them. Here are some examples:

// ❌ Bad - testing implementation details
expect(button).toHaveClass(
  'sc-bdVaJa-d'
);
expect(button).toHaveClass(
  'border-red-500'
);

// ✅ Better - test visibility and interaction states
expect(button).toBeVisible();
expect(button).toBeEnabled();

// ✅ Good - test semantic attributes
expect(button).toHaveAttribute(
  'aria-pressed',
  'false'
);

// ✅ Good - test actual content users see
expect(button).toHaveTextContent(
  'Submit Form'
);

// ✅ Good - test form states
expect(input).toBeRequired();
expect(input).toHaveValue(
  'john@example.com'
);

Avoid testing state management (like Redux) internals

When testing components that use state management libraries such as Redux or Zustand, you should test the component behaviour and not the internals of your state management system. This is very similar to a tip at the top of this post - but I have seen it crop up many times on bigger apps that use tools like Redux or Zustand.

// ❌ Bad - testing Redux internals
test('dispatches correct action', async () => {
  const store = createMockStore();
  render(
    <Provider store={store}>
      <TodoList />
    </Provider>
  );

  const addButton = screen.getByRole(
    'button',
    { name: 'Add Todo' }
  );
  await userEvent.click(addButton);

  expect(store.getActions()).toEqual([
    {
      type: 'ADD_TODO',
      payload: 'New todo',
    },
  ]);
});

// ✅ Good - same test, but this time we're testing what is rendered
// to know that we added a new todo item
test('adds new todo when button is clicked', async () => {
  render(
    <Provider store={mockStore}>
      <TodoList />
    </Provider>
  );

  const addButton = screen.getByRole(
    'button',
    { name: 'Add Todo' }
  );
  await userEvent.click(addButton);

  expect(
    screen.getByText('New todo')
  ).toBeInTheDocument();
});

Test what real users experience, not the implementation details. This is much more maintainable and gives much more value to your tests.

Avoid testing with .toBeDefined() - use a more useful test instead

If you are testing the return value of a function, it can be tempting to just add an expect(something).toBeDefined() to prove that it returned something.

But why stop there? In most cases the test is more valuable (and easier to read - you know what is going on) if it asserts the exact value that should be returned...

// ❌ Bad - testing basic existence
test('user profile has data', () => {
  const user = getUserProfile();

  expect(user).toBeDefined();
  expect(user.name).toBeDefined();
  expect(user.email).toBeDefined(); // not proving that it's an email here... it could be anything - including null
});

// ✅ Good - testing actual values and behavior
test('user profile contains correct data', () => {
  const user = getUserProfile();

  expect(user.name).toBe('John Doe');
  expect(user.email).toBe(
    'john@example.com'
  );
  expect(user.isActive).toBe(true);
});

The first test proved that we return something that has name and email values. But it doesn't prove we set those up correctly.

What if user.name is an empty string? Or what if user.email is invalid? The test would still pass.

By the way, in the second test I didn't even assert expect(user).toBeDefined(). It is just noise - the subsequent assertions prove that it is an object anyway.

And .toBeDefined() will pass even if null is returned:

// This function is buggy - it returns null
const getUserById = id => {
  // Bug: returning null instead of user data
  return null;
};

// ❌ Bad - this test passes even though the function is broken!
test('getUserById returns user data', () => {
  const user = getUserById('123');

  expect(user).toBeDefined(); // This passes. Because null is classed as defined
});

// ✅ Good - test the actual expected structure
test('getUserById returns user with correct properties', () => {
  const user = getUserById('123');

  expect(user).toEqual({
    id: '123',
    name: 'John Doe',
    email: 'john@example.com',
  });
  // This would fail and catch the bug
});

The same issue happens with functions that return empty arrays or empty strings:

const getErrorMessages = () => {
  return []; // Bug - should return actual error messages
};

// ❌ Bad - passes even with empty array
test('returns error messages', () => {
  const errors = getErrorMessages();
  expect(errors).toBeDefined(); // [] is defined!
});

// ✅ Good - test the actual content
test('returns validation error messages', () => {
  const errors = getErrorMessages();
  expect(errors).toEqual([
    'Email is required',
    'Password must be at least 8 characters',
  ]);
});

In TypeScript: avoid overuse of as any - be more specific

I think in tests we can be more lenient about using incorrect types.

For example, if in the following we know that userCanCreatePost() only needs us to send active: false, we could get away with this:

// this function userCanCreatePost expects an entire `User` object, with tons more properties
expect(
  userCanCreatePost({
    active: false,
  } as any)
).toBe(false);

But it is much safer, with the same amount of effort, to use a type assertion of the actual type - e.g. {active: false} as User.

Then if there is a typo, we'd get a TypeScript error:

expect(
  userCanCreatePost({
    // ❌ typo of `active` - but `as User` means TypeScript flags it
    isActive: false,
  } as User)
).toBe(false);

The best way around this is in the next tip - using fixture and helper functions.

Use fixtures and helper functions to make tests easier to read & write

If you are often writing similar code to create mock data, consider using prebuilt objects with that shape, or fixture functions to generate them:

interface User {
  id: string;
  name: string;
  email: string;
  isActive: boolean;
  preferences: {
    theme: 'light' | 'dark';
    notifications: boolean;
  };
}

// helper fn to create dummy data, with optional override:
export const createUser = (
  overrides?: Partial<User>
): User => {
  return {
    id: 'user-123',
    name: 'John Doe',
    email: 'john@example.com',
    isActive: true,
    preferences: {
      theme: 'light',
      notifications: true,
    },
    ...overrides,
  };
};

export const createInactiveUser =
  (): User => {
    return createUser({
      isActive: false,
    });
  };

export const createAdminUser =
  (): User => {
    return createUser({
      name: 'Admin User',
      email: 'admin@example.com',
    });
  };

test('displays user profile correctly', () => {
  const user = createUser({
    name: 'Jane Smith',
    email: 'jane@example.com',
  });

  render(<UserProfile user={user} />);

  expect(
    screen.getByText('Jane Smith')
  ).toBeInTheDocument();
  expect(
    screen.getByText('jane@example.com')
  ).toBeInTheDocument();
});

test('shows inactive status for inactive users', () => {
  const inactiveUser =
    createInactiveUser();

  render(
    <UserProfile user={inactiveUser} />
  );

  expect(
    screen.getByText('Status: Inactive')
  ).toBeInTheDocument();
});

test('shows list of users', () => {
  const users = [
    createUser({ name: 'User 1' }),
    createUser({ name: 'User 2' }),
    createInactiveUser(),
  ];

  render(<UserList users={users} />);

  expect(
    screen.getByText('User 1')
  ).toBeInTheDocument();
  expect(
    screen.getByText('User 2')
  ).toBeInTheDocument();
  expect(
    screen.getByText('Status: Inactive')
  ).toBeInTheDocument();
});

This makes your tests much easier to read. You can easily see that if we pass createUser({ isActive: false }) in, the thing that is important for this test is just the isActive: false part.

This means that when tests fail, it is more obvious what the test was trying to do (as only the fields that matter are spelled out in the test - the rest come from the helper function).

Good use of render helper functions, to set up common context providers

In most real apps, you end up with a _app.tsx or main.tsx with something like:

return (
  <ThemeProvider currentTheme="dark">
    <ReduxProvider>
      <UserOptionsProvider>
        <FeatureFlagProvider
          enabled={query.featureSwitch}
        >
          <AuthProvider
            currentUser={user}
          >
            {props.children}
          </AuthProvider>
        </FeatureFlagProvider>
      </UserOptionsProvider>
    </ReduxProvider>
  </ThemeProvider>
);

So then often in your tests, you end up calling render with some of those parent providers, as they're crucial for your components to work:

render(
  <ThemeProvider currentTheme="dark">
    <ReduxProvider>
      <FeatureFlagProvider>
        <AuthProvider
          currentUser={mockUser}
        >
          <YourActualComponentHere />
        </AuthProvider>
      </FeatureFlagProvider>
    </ReduxProvider>
  </ThemeProvider>
);

(Or sometimes you will see all those providers and their useContext() hooks mocked!)

It would be much easier to have a helper function like this, which works the same as the standard render() but with all the parent providers included.

// put this in a shared test helper file
const renderWithProviders =
  component => {
    return render(component, {
      wrapper: props => {
        return (
          <ThemeProvider currentTheme="dark">
            <ReduxProvider>
              <UserOptionsProvider>
                <FeatureFlagProvider>
                  <AuthProvider
                    currentUser={mockUser}
                  >
                    {props.children}
                  </AuthProvider>
                </FeatureFlagProvider>
              </UserOptionsProvider>
            </ReduxProvider>
          </ThemeProvider>
        );
      },
    });
  };

// then in your tests
renderWithProviders(
  <YourActualComponentHere />
);

If you then notice you are still having to provide some providers - for example to test a specific user <AuthProvider currentUser={adminUser}>...</AuthProvider> or a feature flag <FeatureFlagProvider enabled="demo">...</FeatureFlagProvider> - then adding these as options in your render helper makes the tests much easier to read:

// put this in your test helper file(s)
const renderWithProviders = (
  component,
  options
) => {
  return render(component, {
    wrapper: props => {
      return (
        <ThemeProvider
          currentTheme={
            options?.currentTheme ??
            'dark'
          }
        >
          <ReduxProvider>
            <UserOptionsProvider>
              <FeatureFlagProvider
                enabled={
                  options?.featureFlag
                }
              >
                <AuthProvider
                  currentUser={
                    options?.currentUser ??
                    mockUser
                  }
                >
                  {props.children}
                </AuthProvider>
              </FeatureFlagProvider>
            </UserOptionsProvider>
          </ReduxProvider>
        </ThemeProvider>
      );
    },
  });
};

// then your tests are much cleaner:
renderWithProviders(
  <YourActualComponentHere />,
  {
    featureFlag: 'demo',
    currentUser: adminUser,
  }
);

The renderWithProviders() function itself is a bit messy to write, but you rarely have to update it. And your tests become so much cleaner and easier to read and write.

Tests should be as clean as production code (with a bit more copy/paste duplication)

Some code bases treat tests as messy code, with no code standards. They're treated like an afterthought.

I believe that test files should be almost as high quality as production code.

I say 'almost' as I think there are some exceptions such as:

  • In TypeScript code bases, I think it is ok to use any or other type assertions more often. Sometimes it can make tests clearer to read (you can do const mockData = {disabled: true} as SomeProduct - it is clear that this test only cares about the disabled property)
  • More copy/paste is acceptable, if it means your tests are easier to read. Sometimes abstracting code into shared functionality isn't useful in tests.
  • Normally comments are useless unless they explain why. But in tests comments can be great to point out what the test is doing. For example if you are passing a complex object into some function that you are testing, a comment pointing out the specific property that is important for the test can make it much easier to maintain

Explain expected values

If you are testing a pure function that calculates a tax rate, you might be tempted to do something like:

expect(
  calculateTaxInUsd(
    someProduct,
    country
  )
).toBe(16.4);

But when that fails, how do we know that 16.40 was the correct tax?

Either add a comment, or calculate the expected value with simple maths (hopefully not the exact same as your implementation) to show where it comes from.

Without this, it will be common to just make some changes in the implementation, see the result is now different, and copy/paste the expected value from the failed test result.
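As a sketch of what this looks like in practice (calculateTaxInUsd, the product shape, and the 8% rate are all made up for illustration), a short comment carries the derivation:

```javascript
// Hypothetical pure function under test - the name and rates are
// assumptions for this example, not from a real code base.
const TAX_RATE_BY_COUNTRY = { US: 0.08, UK: 0.2 };

const calculateTaxInUsd = (product, country) =>
  Math.round(product.priceUsd * TAX_RATE_BY_COUNTRY[country] * 100) / 100;

// The comment explains where 16.4 comes from, so a future reader
// doesn't have to reverse engineer it from the implementation:
// $205 base price * 8% US rate = $16.40
const expectedTaxUsd = 16.4;

console.log(calculateTaxInUsd({ priceUsd: 205 }, 'US') === expectedTaxUsd); // true
```

Now when the assertion fails, the next person can see at a glance whether the expectation or the implementation is wrong.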

Avoid using implementation to calculate expected values

Say you are testing a component which outputs a value - for example a tax value.

You will normally render your component, then do something like:

const expected = 16.4;
expect(
  screen.getByRole('heading')
).toHaveTextContent(
  `Tax: $${expected}`
);

What you should avoid is calling the same function your app code uses to calculate the $16.40. If there is a bug in that function, your test will happily assert that the buggy output is correct.
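A minimal sketch of why this matters (the function and rates are invented for illustration):

```javascript
// Imagine the implementation has the wrong rate (8.2% instead of 8%):
const calculateTax = price => price * 0.082;

// ❌ Deriving the expectation from the implementation can never fail:
const circularExpected = calculateTax(200);
console.log(calculateTax(200) === circularExpected); // always true, bug and all

// ✅ An independently derived value catches the bug:
const independentlyExpected = 16; // $200 * 8% = $16.00
console.log(calculateTax(200) === independentlyExpected); // false - test fails, bug found
```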

Avoid useless tests, don't test things that can't fail

Every test function should serve a purpose.

There is no value in having a test that is testing something that doesn't matter, or is written in a way that it won't fail if there is a bug.

For example:

test('this is a pointless test', async () => {
  const { container } = render(
    <Greeting />
  );
  expect(container).toBeDefined();
});

It would be quite hard for this test to fail (technically it could, if there was an error during the rendering).

I also personally avoid testing things that can never fail. For example:

const Greeting = ({ name }) => {
  return (
    <div>
      <h1>Hello, {name}</h1>
      <h2>Welcome to the site</h2>
    </div>
  );
};

For that (simplified) component there is definitely value in testing that <Greeting name="Fred" /> renders "Hello, Fred".

However, I would never test Welcome to the site. There is no logic in there to show anything other than that.

Note: If this component was potentially rendered as part of a parent component then it could absolutely make sense to check Welcome to the site is rendered!

Here is an example where checking that Welcome to the site is rendered could be appropriate:

const ParentPage = ({
  isLoggedIn,
  username,
}) => {
  return (
    <div>
      {isLoggedIn ? (
        <Greeting name={username} />
      ) : (
        <JoinUp />
      )}
    </div>
  );
};

Avoid testing your mocks or testing your tests

Sometimes it is easy to go overboard with mocks, and your test is just testing your mock.

If you have to reimplement a function in a test, then it might be time to think if you are testing anything useful.

A code smell for this is when you have a spyOn with a complex .mockImplementation().

Here is an example. Let's say we have a PriceCalculator component that uses a taxService to calculate taxes:

// ❌ Bad - testing the mock implementation, not the actual behavior
test('displays correct price with tax', () => {
  const mockCalculateTax = vi
    .spyOn(taxService, 'calculateTax')
    .mockImplementation(
      (price, region) => {
        // We're reimplementing the entire tax logic in our test!
        const taxRates = {
          US: 0.08,
          UK: 0.2,
          CA: 0.13,
        };
        return (
          price *
          (taxRates[region] || 0)
        );
      }
    );

  render(
    <PriceCalculator
      basePrice={100}
      region="US"
    />
  );

  expect(
    screen.getByText('Total: $108.00')
  ).toBeInTheDocument();
  expect(
    mockCalculateTax
  ).toHaveBeenCalledWith(100, 'US');
});

In this example, we have reimplemented the tax calculation logic in the mock.

In my opinion a better way to easily test this is to mock specific return values, and don't try to reimplement the logic in your mock.

// ✅ Better - mock with simple return values, test the component behavior
test('displays price with tax from tax service', () => {
  vi.spyOn(
    taxService,
    'calculateTax'
  ).mockReturnValue(8.0); // Simple mock return value

  render(
    <PriceCalculator
      basePrice={100}
      region="US"
    />
  );

  expect(
    screen.getByText('Total: $108.00')
  ).toBeInTheDocument();
  expect(
    taxService.calculateTax
  ).toHaveBeenCalledWith(100, 'US');
});

But also don't forget to test the tax service separately:

test('tax service calculates US tax correctly', () => {
  const result =
    taxService.calculateTax(100, 'US');
  expect(result).toBe(8.0);
});

This is a bit of a simplified example, but the general rule is important: if you find yourself writing complex logic in your mocks, it can be a sign that you should either test that logic separately or simplify your mocks.

Avoid leaking between tests

Each test should be able to run by itself, and in any order. A test should not be affected by the tests that ran before it.

It is a bad code smell if tests only pass if they run in a certain order.

Avoid doing these:

  • Mock and spy functions are set up in one test (e.g. vi.spyOn(something, 'someFn').mockReturnValue(true)) and other tests expect that mock to return true for their tests.
  • Database data is set up in one test, and the next test below it expects that database row to be there.

You can configure Vitest to run tests in a random order (sequence.shuffle.tests). Doing this makes it less likely that you merge in tests which leak state into other tests, or tests that rely on a previous test having run.
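For example, shuffling can be enabled in your Vitest config - a sketch; check your Vitest version's docs, as the exact option shape has evolved across versions:

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    sequence: {
      // run the tests within each file in a random order
      shuffle: { tests: true },
    },
  },
});
```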

Note: You can use beforeAll()/beforeEach() to set things up before tests, and afterEach()/afterAll() to clean up and reset state before the next test runs.

No hard coded signed hashes

This is similar to using hard coded magic numbers. But if you ever deal with signed tokens (like JWTs), hard coding the signed token value is painful to maintain, unless it is clear how to regenerate the signature with new data.

// ❌ Bad - hard coded JWT token with no way to regenerate
test('verifies valid JWT token', () => {
  const hardCodedToken =
    'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.KMUFsIDTnFmyG3nMiGM6H9FNFUROf3wh7SmqJp-QV30';

  const decodedToken =
    verifyAndDecodeToken(
      hardCodedToken
    );
  expect(decodedToken.valid).toBe(true);
  expect(
    decodedToken.payload.name
  ).toBe('John Doe');
});

But now let's say we need to add a new value to the token, like a status property. So we want to add:

expect(
  decodedToken.payload.status
).toBe('active');

But now it is quite hard to regenerate the original token.

Ideally you have a helper in your tests to generate and decode tokens, so the tests are self-documenting. Failing that, even a simple comment explaining how to regenerate the token can save tons of time when it needs updating.

And for the sake of this blog post: I generated it on jwt.io ;)

Avoid if/else statements in tests

Tests should never have conditionals (if/else) in them (with one exception: see below!)

Here is an example of a useless test: because of the if (result), the expect() is never called, so we don't realise we have a bug!

// note: typed to return either:
// undefined, or
// {success: boolean} (not isSuccess)
const maybeReturnsSomething = ():
  | undefined
  | { success: boolean } => {
  return undefined;
};
test('it returns isSuccess', () => {
  const result =
    maybeReturnsSomething(); // << might return something, might not
  if (result) {
    expect(result.isSuccess).toBe(true);
  }

  // this test passes... but that is because `result` was empty so the expect() did not fail
});

The fix is to just remove the if (result). If you get TypeScript errors, the non-null assertion operator (result!) can get around them:

test('it returns isSuccess', () => {
  const result =
    maybeReturnsSomething(); // << might return something, might not
  expect(result!.isSuccess).toBe(true); // << this will now always run (so it will catch the bug)
});

When to use conditionals

If you are using each(), then it is often useful to use conditions, but only in very simple ways like this:

it.each([true, false])(
  'includes button when someProp = %s',
  isEnabled => {
    render(
      <Component someProp={isEnabled} />
    );
    const maybeButton =
      screen.queryByRole('button');

    if (isEnabled) {
      expect(
        maybeButton
      ).toBeInTheDocument();
    } else {
      expect(maybeButton).toBeNull();
    }
  }
);

Add comments or useful variable names to help explain what the intention of the test is

When reading tests, it is much easier to understand the intent of a test if there are descriptive variable names (just like in production code). Sometimes extra comments (even if slightly redundant) can help point out exactly what we are testing, or why the expected value is what it is.

Here's an example of a test that is hard to understand:

test('handles logic', () => {
  const result = calculateCost(5);
  expect(result).toBe(9);
});

This test tells us nothing about what it's actually testing. What does "handles logic" mean? Why is the expected result 9?

Here's the same test rewritten with better variable names and comments:

test('calculates total cost including tax and shipping', () => {
  const baseItemCost = 5;
  const expectedTotalWithTaxAndShipping = 9; // $5 base + $2 tax + $2 shipping

  const result = calculateCost(
    baseItemCost
  );

  expect(result).toBe(
    expectedTotalWithTaxAndShipping
  );
});

Although this is a simple and contrived example, hopefully the idea is clear.

When someone revisits this test to update it (or fix it if their changes broke something), it is clear what the test was meant to be doing.

Note: This section does go against some other tips elsewhere on this page, which say to keep code clean and comment-free. Use your judgement!

Testing accessibility

As a software engineer working on frontend applications, it is important to consider accessibility.

But it is often very easy to forget about, ignore, or even worse - introduce accessibility issues.

Using React Testing Library queries like getByLabelText(), getByRole() etc. helps ensure that you are writing semantically correct markup.

There are also helper functions getRoles() and isInaccessible() in RTL. In all honesty though, they are very rarely used.

I'll be adding an article soon on how to automate (some) accessibility testing, with other tools and libraries.

In React Testing Library, try to use findBy, over waitFor with getBy

If you are using React Testing Library and need to wait for something to appear (such as after a timeout, or a re-render), you might try await waitFor(...) with an expect in there.

The more 'correct' way to test this sort of async behaviour is to use the await screen.findBy... functions.

// ❌ Unnecessary - waitFor + getBy
await waitFor(() => {
  expect(
    screen.getByText('Data loaded')
  ).toBeInTheDocument();
});

// ✅ Better - findBy handles waiting automatically
expect(
  await screen.findByText('Data loaded')
).toBeInTheDocument();

Using waitFor is more suitable for things that are not appearing in the DOM, for example checking some function/spy was called:

const someFn = vi.fn();

await waitFor(() =>
  expect(
    window.fetch
  ).toHaveBeenCalled()
);
await waitFor(() =>
  expect(someFn).toHaveBeenCalled()
);

Use toBeVisible() instead of toBeInTheDocument() when asserting components should be visible to users

A common issue when testing components that hide or show elements via CSS (rather than adding or removing them from the DOM) is misusing toBeInTheDocument() and toBeVisible().

If you are using Jest or Vitest and you hide elements with inline styles (e.g. style={{ display: 'none' }}) or the hidden HTML attribute, then toBeVisible() is very useful and reliable.

But as soon as you rely on classes (for example, className="hidden" with .hidden { display: none; } in your CSS file), Jest or Vitest will not treat the element as hidden unless you load the relevant stylesheet in your test environment.

// ❌ Element exists but might be hidden
expect(
  screen.getByText('Success message')
).toBeInTheDocument();

// ✅ Element is actually visible to users
// (Assuming Jest or Vitest can correctly determine if it is visible!)
expect(
  screen.getByText('Success message')
).toBeVisible();

When testing this sort of behaviour, if it is entirely CSS-based, then I would only recommend using something like Playwright, Cypress, or Vitest Browser Mode (but you must still ensure your CSS is loaded correctly in the test environment).

Mocking fetch calls - with fetch mocks, or MSW

Don't ever make real HTTP requests in your tests! You should always mock out API calls.

My personal preference is to either mock window.fetch directly, or to use Mock Service Worker (MSW).

Real API calls are slower than mocks, can be unreliable, are unnecessary, and harder to test different situations (e.g. error cases).

If you want to make real API calls, then you should lean towards E2E (or API contract) testing for those.
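As a sketch of the simplest approach - stubbing globalThis.fetch by hand (in Vitest you would typically use vi.stubGlobal or vi.spyOn instead, and MSW intercepts at the network level; the /api/items endpoint and response shape here are assumptions for illustration):

```javascript
// Hand-rolled fetch stub - no real network request is ever made.
globalThis.fetch = async url => ({
  ok: true,
  status: 200,
  json: async () => ({ items: ['mock-item-from-' + url] }),
});

// Simplified code under test:
const loadItems = async () => {
  const res = await fetch('/api/items');
  if (!res.ok) throw new Error('request failed');
  return (await res.json()).items;
};

loadItems().then(items => console.log(items)); // ['mock-item-from-/api/items']
```

Because the stub controls the response, testing error cases is as easy as returning { ok: false } instead.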

Use userEvent over fireEvent

If you are using React Testing Library, then you have two approaches for triggering interactions (like clicking on elements or typing in text boxes).

The preferred one is userEvent (like userEvent.click(someButton)) - it is much more realistic, triggering all the associated events like moving the mouse, hovering, mouse down and so on.

There is also the fireEvent set of functions, which can still simulate the action (like clicking) however it does not have all the additional realistic actions.

If at all possible, aim to use userEvent. There are some times that you will want to (or have to) use fireEvent but they are quite rare. A good example I've seen is when testing complex dropdowns, where userEvent was trying to be too realistic and firing the onBlur in a way that made the test too complex.

Use custom matchers

If you have a lot of similar tests making assertions on some complex object, don't forget that you can write custom Jest or Vitest matchers for your expect(...) assertions.

For example, if you're testing user objects throughout your test suite:

// add this in your jest or vitest setup file
expect.extend({
  toBeValidUser(received) {
    const pass =
      received &&
      typeof received.id === 'string' &&
      typeof received.name ===
        'string' &&
      received.email.includes('@');

    if (pass) {
      return {
        message: () =>
          `expected ${received} not to be a valid user`,
        pass: true,
      };
    } else {
      return {
        message: () =>
          `expected ${received} to be a valid user`,
        pass: false,
      };
    }
  },
});

// then you can use it in tests:
test('creates user successfully', () => {
  const user = createUser(
    'John',
    'john@example.com'
  );

  expect(user).toBeValidUser();
});

These custom matchers, set up with expect.extend(), work in both Jest and Vitest.

Remember to test different timezones

Anything involving dates is difficult. And when you are testing frontend applications you cannot assume everyone is in the UTC timezone.

I've seen apps work great during winter in the UK (when we are on UTC+0). Then in British Summer Time the tests continue to pass, but the app breaks for real users (due to being on UTC+1).

If you have anything date related - especially if you are calculating differences in time - you should be testing it in different timezones.
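One portable way to pin a timezone in an assertion is Intl.DateTimeFormat's timeZone option, rather than relying on whatever TZ your CI machine happens to use:

```javascript
// The same instant can fall in different years depending on timezone:
const newYearUtc = new Date('2025-01-01T00:00:00Z');

const yearIn = (date, timeZone) =>
  new Intl.DateTimeFormat('en-US', { timeZone, year: 'numeric' }).format(date);

console.log(yearIn(newYearUtc, 'UTC')); // 2025
console.log(yearIn(newYearUtc, 'America/New_York')); // 2024 - still New Year's Eve there
```

You can also run your whole suite under a different timezone (e.g. TZ=America/New_York on Linux/macOS), but an explicit timeZone like this keeps the individual test deterministic everywhere.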

Using fake timers, don't wait in real time

In Jest and Vitest you can use fake timers. These let you set the system time to a specific date, and run all pending setTimeout or setInterval callbacks instantly.

Your tests shouldn't wait in real time for something that can be faked and runs instantly.

Even if you are only waiting 100ms, it all adds up when you have a huge suite of tests.
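Conceptually, fake timers replace setTimeout/setInterval with a queue plus a virtual clock you can advance instantly. Here is a rough sketch of the idea (Jest and Vitest's implementations are far more complete):

```javascript
const makeFakeClock = () => {
  let now = 0;
  const queue = [];
  return {
    setTimeout: (fn, delay) => queue.push({ fn, at: now + delay }),
    // advance the virtual clock and run anything that became due - no real waiting
    advance: ms => {
      now += ms;
      for (let i = queue.length - 1; i >= 0; i--) {
        if (queue[i].at <= now) {
          const [{ fn }] = queue.splice(i, 1);
          fn();
        }
      }
    },
  };
};

const clock = makeFakeClock();
let fired = false;
clock.setTimeout(() => { fired = true; }, 60_000); // "wait" a full minute...
clock.advance(60_000); // ...but the test completes instantly
console.log(fired); // true
```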

Read more about fake timers in your Jest or Vitest tests

Use unique strings in mock data

There is nothing worse than seeing an error saying something was expected to be "mock-value", and you search your code base and there are 15 places defining mock-value.

You can make life much easier for yourself if every mock data/value is somewhat unique, so when it fails it is easier to track down.

// ❌ Bad - generic mock values make debugging hard
const mockUser = {
  id: 'test-id', // << same id as below
  name: 'Test',
  email: 'test@example.com',
};

const mockProduct = {
  id: 'test-id', // << same id as above
  name: 'Test',
  price: 100,
};

When a test fails with "expected 'test-id' but received 'undefined'", you don't easily know which mock value is causing the issue.

// ✅ Good - unique mock values for easier debugging
const mockUser = {
  id: 'user-bart-123',
  name: 'Bart',
  email: 'bart@example.com',
};

const mockProduct = {
  id: 'product-laptop-456',
  name: 'Macbook Pro',
  price: 2999,
};

Now when tests fail because an id or name didn't match, you can easily find which one is causing the error.
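A small factory helper can guarantee uniqueness without hand-writing every fixture (the user shape here is a hypothetical mock, not from a real code base):

```javascript
let mockCounter = 0;

// Every call produces a distinct, descriptive user, so a failing
// assertion points straight at the fixture that produced the value.
const makeMockUser = (overrides = {}) => {
  mockCounter += 1;
  return {
    id: `user-${mockCounter}`,
    name: `Mock User ${mockCounter}`,
    email: `mock-user-${mockCounter}@example.com`,
    ...overrides,
  };
};

const alice = makeMockUser({ name: 'Admin Alice' });
const bob = makeMockUser();
console.log(alice.id === bob.id); // false - ids never collide
```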

This is an ongoing list of tips/tricks that I've had in my drafts (and keep adding ideas to) for several months. As time goes on I'll be adding more. If you want to receive these by email, check out my newsletter for frontend testing tips.


If you found this interesting, check out my free FE testing newsletter

If you found this blog post useful or you learned something, you might like my free newsletter, keeping you up to date with frontend testing.

I cover topics like testing frontend apps, how to write Vitest tests (including with the new Vitest Browser Mode), high quality tests, e2e tests and more - with actionable tips and tricks to make sure your tests are as high quality as they can be!

I only send it every couple of weeks, never spam, and you can of course unsubscribe at any time.

Want to become a pro at testing your React apps?

I've got a huge range of interactive lessons from beginner to expert level.