In the first part of this article I demonstrated how a typical client API request to a server can be unit tested using the JavaScript testing framework Jest. I also looked at how this could be handled when the API call is part of a Redux state manipulating action within a React application.
These tests allow us to verify our code up to the system boundary, checking that it can correctly handle the expected outcome from the API call and respond accordingly. However, such tests provide no feedback on whether the real API request or response will actually succeed. This fragile part of the system is at risk from unintentional changes to server-side code, such as changes to the request or response format, or to the underlying method behaviour.
To resolve this issue, a broader set of tests can be created that deal with the integration of the separate systems and ensure that any breaking changes to the workflow are detected. Let’s look at how we might set these up.
Creating a testing API
Up until this point we have only considered the testing setup on the client-side (front-end) of the application, but now we also need to think about our setup on the server side (back-end).
Calling our API endpoints as part of an integration test can have consequences, for example the manipulation of records in a database. It should go without saying that our tests should only be run against a dedicated test environment – not against a live server! However, when we execute our tests we also want them to run deterministically, so that running the same test twice will always give the same expected result.
To do this, we set up a test fixture for our database. A test fixture is a setup process that initialises the system ready for testing, allowing tests to be repeatable by ensuring the system is in the same known state every time. For example, if a record in the database needs to be modified as part of an integration test, we might expect it to already exist and be in a particular state. If it’s not, the response from the server to the client may differ and the integration test will fail.
This step corresponds to the first stage of the typical ‘AAA’ testing methodology:
- Arrange – set up the system for the test
- Act – exercise the system under test
- Assert – check the state of the system is as expected
The exact process of setting up the test fixture will depend on the technology being used for serving up our API endpoints and for database storage. For example, this could be driven by Node.js, Ruby or .NET in combination with a SQL or NoSQL database.
Either way, the aim is to expose one or more API endpoints that can perform the setup process (and optionally a teardown process if required). We can then call these as part of our test procedures to ensure our data is in a consistent and known state. Depending on the nature of the tests these setup requests may be required per individual test or shared across a range of tests.
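As an illustrative sketch only, a Node.js/Express back end (one possible stack; the route paths, database module and reset logic here are all hypothetical) might expose these fixture endpoints along these lines:

```js
// test-fixtures.js – hypothetical Express routes, mounted only in the test environment
const express = require('express');
const db = require('./db'); // assumed handle to the test database (hypothetical module)

const router = express.Router();

// Reset the customers collection to a known, empty state
router.post('/test/setup/empty', async (req, res) => {
  await db.collection('customers').deleteMany({});
  res.sendStatus(200);
});

// Seed a single known customer record and return its details to the caller
router.post('/test/setup/single', async (req, res) => {
  await db.collection('customers').deleteMany({});
  const customer = { name: 'Test Customer', email: 'test@example.com' };
  const result = await db.collection('customers').insertOne(customer);
  res.json({ id: result.insertedId, ...customer });
});

// Optional teardown: remove anything the tests created
router.post('/test/teardown', async (req, res) => {
  await db.collection('customers').deleteMany({});
  res.sendStatus(200);
});

module.exports = router;
```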
React Integration Tests
The first step to writing our integration tests is to make sure the API endpoints being tested are called and the result verified. We can start by taking our previously mocked tests and removing references to the fetch-mock library. Remember that the test should return a promise so Jest knows it’s asynchronous and will wait for it to complete.
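Stripped of its fetch-mock setup, the test now issues a real request to the test server. A minimal sketch might look like this (the base URL and endpoint path are placeholders, and we assume a fetch implementation is available in the test environment, e.g. node-fetch):

```js
// customers.test.js
const API_URL = 'http://localhost:3000'; // placeholder test server base URL

test('fetches customers from the API', () => {
  // Returning the promise tells Jest the test is asynchronous
  return fetch(`${API_URL}/api/customers`).then(response => {
    expect(response.status).toBe(200);
  });
});
```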
At this point we have a test making a request to our server, which we are hopefully running in a test environment. However, we have no guarantees as to the state of the system at the point the call is made, so it’s time to make use of our database setup and teardown endpoints.
Jest provides a set of useful helper methods to aid writing tests. The beforeEach() method can be defined to execute some code at the beginning of every test, and combined with the afterEach() method for any code after.
In addition, beforeAll() and afterAll() methods are available; these will be executed once at the very beginning of the tests and end of the tests respectively.
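Combined with the fixture endpoints sketched earlier, the hooks might be wired up like this (the paths are again placeholders; returning a promise from a hook makes Jest wait for it to resolve before continuing):

```js
// API_URL as defined in the previous example
beforeEach(() => {
  // Reset the database to a known state before every test in this file
  return fetch(`${API_URL}/test/setup/empty`, { method: 'POST' });
});

afterAll(() => {
  // Restore the database once after the whole file has finished
  return fetch(`${API_URL}/test/teardown`, { method: 'POST' });
});
```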
An important point to note is that all of the before and after methods apply only to tests written in the same test file. Typically it is convenient to group similar tests together in the same file, but for this reason we may want to limit the scope of any before and after code. Jest’s describe() blocks allow tests within the same file to be grouped and scoped, giving full control over the setup and teardown process, as the sketch below shows.
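Each describe() block can declare its own hooks, which apply only to the tests inside that block:

```js
describe('creating customers', () => {
  beforeEach(() => {
    // Runs only before the tests in this block
    return fetch(`${API_URL}/test/setup/empty`, { method: 'POST' });
  });

  test('creates a customer', () => {
    /* ... */
  });
});

describe('retrieving customers', () => {
  beforeEach(() => {
    // A different fixture for this group of tests
    return fetch(`${API_URL}/test/setup/single`, { method: 'POST' });
  });

  test('gets a customer', () => {
    /* ... */
  });
});
```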
Testing the Boundaries
Now we’ve got the basics of our integration tests, it’s time to put together a set of tests to detect breaking API changes. In this example the tests will cover a typical set of API endpoints for a ‘CRUD’ system (creating, retrieving, updating and deleting entities). The example uses a customer entity.
First of all, a set of functions are created to call each API method and return the HTTP response object (the exact details of the request would depend upon the API format).
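As a sketch, for a JSON API those helpers might look like this (the endpoint paths and payload shapes are assumptions):

```js
// api.js – thin wrappers around the CRUD endpoints, each resolving to the raw HTTP response
const API_URL = 'http://localhost:3000'; // placeholder test server base URL

export const createCustomer = data =>
  fetch(`${API_URL}/api/customers`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });

export const getCustomer = id => fetch(`${API_URL}/api/customers/${id}`);

export const updateCustomer = (id, data) =>
  fetch(`${API_URL}/api/customers/${id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });

export const deleteCustomer = id =>
  fetch(`${API_URL}/api/customers/${id}`, { method: 'DELETE' });
```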
Secondly, additional functions are created to call the test database methods. Here we’ve exposed two helpers to set up the database, and one to restore it afterwards. The first setup method ensures that our customer database is empty: useful if our API is likely to insert data. The second ensures our customer database contains a single record and returns that data; this can be used when our API is expected to perform operations against existing records.
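The corresponding fixture helpers, matching the test endpoints sketched earlier (the function and path names are placeholders):

```js
// fixtures.js – helpers for the test-only database endpoints
const API_URL = 'http://localhost:3000'; // same placeholder base URL as api.js

// Empty the customer database entirely
export const setupEmptyCustomers = () =>
  fetch(`${API_URL}/test/setup/empty`, { method: 'POST' });

// Seed a single customer and resolve to its details for use in assertions
export const setupSingleCustomer = () =>
  fetch(`${API_URL}/test/setup/single`, { method: 'POST' }).then(response =>
    response.json()
  );

// Restore the database after testing
export const teardownCustomers = () =>
  fetch(`${API_URL}/test/teardown`, { method: 'POST' });
```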
Using the API helper functions we can now build up our tests, each of which will fail if the API definition changes in the future. Testing the create customer API starts with a call to ensure the test customer database is empty. The body of the test then inserts some test data and checks that a 200 status response is received from the server, which is enough to confirm the API worked as intended.
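A sketch of that test using the hypothetical helpers above:

```js
import { createCustomer } from './api';
import { setupEmptyCustomers } from './fixtures';

describe('create customer API', () => {
  // Start from an empty customer database before each test
  beforeEach(() => setupEmptyCustomers());

  test('creates a new customer record', () => {
    return createCustomer({ name: 'Test Customer', email: 'test@example.com' })
      .then(response => {
        expect(response.status).toBe(200);
      });
  });
});
```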
Note that calling the teardown method is not shown above but could be included easily enough if required (either in an afterEach() or afterAll() method depending upon the implementation details).
For the get customer method, we need to be able to get some data out so we can verify that the API response is in the format we expect and doesn’t change unexpectedly. To do this, the beforeEach() method calls the setup method to initialise our database with a single customer record. Because this method returns details of the test record, we can store those details for later use in the test itself. For the test assertions, as well as checking that the response status was successful, it is also possible to compare the response data against the test data to confirm its integrity (note that here we are assuming the response is in JSON format, but it could be anything).
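Sketched out, with the same hypothetical helpers:

```js
import { getCustomer } from './api';
import { setupSingleCustomer } from './fixtures';

describe('get customer API', () => {
  let testCustomer;

  beforeEach(() => {
    // Seed one record and keep its details for the assertions below
    return setupSingleCustomer().then(customer => {
      testCustomer = customer;
    });
  });

  test('returns the customer in the expected format', () => {
    return getCustomer(testCustomer.id)
      .then(response => {
        expect(response.status).toBe(200);
        return response.json(); // assumes a JSON response body
      })
      .then(body => {
        // Comparing against the seeded data guards against format changes
        expect(body).toEqual(testCustomer);
      });
  });
});
```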
The update and delete methods are similar in nature, both requiring initial data to work from and therefore again initialising a single entity in their setup methods.
For the update test, the test body calls the updateCustomer API endpoint against the test database entity to attempt updating it with some arbitrary data, whilst the delete test tries to delete it. Both tests pass if the API methods return a 200 status code to confirm they function as expected.
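Again as a sketch, assuming the same helpers:

```js
import { updateCustomer, deleteCustomer } from './api';
import { setupSingleCustomer } from './fixtures';

describe('update and delete customer APIs', () => {
  let testCustomer;

  beforeEach(() => {
    // Both tests operate on an existing record, so seed one first
    return setupSingleCustomer().then(customer => {
      testCustomer = customer;
    });
  });

  test('updates an existing customer', () => {
    return updateCustomer(testCustomer.id, { name: 'Updated Name' }).then(
      response => expect(response.status).toBe(200)
    );
  });

  test('deletes an existing customer', () => {
    return deleteCustomer(testCustomer.id).then(response =>
      expect(response.status).toBe(200)
    );
  });
});
```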
In Summary
In this article we’ve developed tests that work across system boundaries – specifically the API endpoints of a typical client-server web application. By doing so, the integrity of the endpoints can be verified automatically, ensuring that any changes to the API specification or implementation (whether intentional or not) are flagged up as soon as possible.
The code examples above provide a way of achieving this, though there is a fair amount of overhead in the initial setup. Because of this, it may be impractical to test an entire API if it is particularly large; a better strategy may be to target only the most complex or most heavily used parts of the system.
Another aspect to consider when writing such tests is how they fit into the overall development workflow, particularly within a continuous integration or deployment process. An ideal strategy would be to use an automated deployment to a test environment, where it is far safer to run the tests against a dummy database in a controlled manner.
Finally, integration tests are just one part of testing the integrity of a system. In the first article we looked at unit tests, which provide more granular testing at a lower level. At the other end of the spectrum are end-to-end (or acceptance) tests, which cover a larger segment of a system and simulate several actions being performed in typical use-case scenarios. Effective testing requires the intelligent use of such tools, balanced against the time and resources needed to run them.