Back in 2015, Paul Biggar, founder of CircleCI, wrote a blog post titled Testing 3rd Party APIs and Microservices, outlining CircleCI's approach to testing third-party APIs like Twilio, Mailgun, and GitHub. He advocated mocking all API calls during testing to keep tests fast and insulated from flaky API responses. He also suggested periodically replaying those mocked calls against the live API to confirm the API still works as expected and that the mocks stay up to date.
We really like this approach to API testing, and we believe the way we test at Superface improves on it.
Design while testing, test while designing
We have a concept at Superface we call Integration Design, which involves mapping the features of an API to a business capability. For example, if Twilio is the specific API we want to use, sending an SMS would be the business capability. Business capabilities aren’t tied to a particular provider and let us express our business intent in a way that doesn’t include implementation details.
As integration designers work through an API's documentation and decide how its functionality maps to their business capability, they can record their actual API calls instead of writing mocks by hand. This makes mocking far more practical: the recorded traffic becomes the mock, so there is no extra authoring work. As people design their integrations, they write tests, and those tests in turn guide the design.
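The record-and-replay idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the Superface SDK: the `Recorder` and `Replayer` names and the flat request/response store are our invention, standing in for whatever recording tool a team actually uses.

```python
import json
from dataclasses import dataclass, field


@dataclass
class Recorder:
    """Wraps real API calls during integration design and records each
    request/response pair into an in-memory cassette."""
    cassette: dict = field(default_factory=dict)

    def call(self, client, method, url, **kwargs):
        # Make the real API call through the supplied client, then keep
        # the response keyed by the request so tests can replay it later.
        response = client(method, url, **kwargs)
        self.cassette[(method, url)] = response
        return response

    def save(self, path):
        # Persist the cassette as JSON so it can be committed alongside tests.
        with open(path, "w") as f:
            json.dump(
                [{"method": m, "url": u, "response": r}
                 for (m, u), r in self.cassette.items()],
                f,
            )


@dataclass
class Replayer:
    """Serves previously recorded responses; no network access needed."""
    cassette: dict

    @classmethod
    def load(cls, path):
        with open(path) as f:
            entries = json.load(f)
        return cls({(e["method"], e["url"]): e["response"] for e in entries})

    def call(self, method, url, **kwargs):
        # Return the recorded response for this request, as a mock would.
        return self.cassette[(method, url)]
```

During design, calls go through `Recorder`; in tests, the saved cassette is loaded into `Replayer` and the code under test never touches the live API.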
Test integration designs in a build pipeline
Integration designers can run their tests in a build pipeline using the recorded API traffic to ensure their integrations continue to work correctly over time. This keeps the tests fast and insulated from flaky API calls—the benefits of mocking Biggar described. The added benefit is that it makes the mocking a part of the integration design process rather than an additional step. As a result, they can quickly test all of their integrations on every commit or Pull Request.
This approach also means anyone on their team can run the tests locally with recorded traffic without ever hitting the actual API. This makes for a nice workflow, where the integration design process works together with the testing process.
Periodically test integrations so you can smoothly handle any API changes
Biggar also suggested periodically using mocked calls to make actual API calls to ensure the API is working correctly. The Superface approach to integration design and testing also makes this possible and automatic. People can use the same inputs from the integration design step to make new API calls to the actual API. If the API responses don’t match or the outputs are incorrect, the team can address the problem by updating the integration design.
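One simple way to notice such mismatches is to replay the recorded inputs against the live API and compare the structure of the fresh response with the recorded one. Below is a rough sketch under the assumption of JSON-like responses; `shape` and `detect_drift` are illustrative names, not Superface APIs, and the structural comparison deliberately ignores changing values such as IDs and timestamps.

```python
def shape(value):
    """Reduce a JSON-like value to its structural shape: dict keys,
    list element shape, and scalar type names."""
    if isinstance(value, dict):
        return {k: shape(v) for k, v in value.items()}
    if isinstance(value, list):
        return [shape(value[0])] if value else []
    return type(value).__name__


def detect_drift(recorded, live):
    """Return True if the live response no longer matches the shape of
    the recorded response, signalling the integration design needs work."""
    return shape(recorded) != shape(live)
```

A scheduled job can run this check for every recorded interaction and alert the team only when the API's contract, not just its data, has changed.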
This is where Superface’s integration design process not only improves on Biggar’s approach but completely changes the way we test APIs and update integrations. If the tests fail, the integration designer and team can get to work correcting the way they map the new API implementation to their business capabilities. They can catch implementation changes this way and update the recorded traffic as needed. As a result, their tests stay fast, and their code remains focused on business capabilities.
And when they finish updating the mapping, they can push their fix to Superface. Superface will update all their clients. No need to change the code or redeploy. Teams can make changes while their application is running without any downtime.
Share integrations with a little Superface magic
Superface also makes it possible to share integration designs across teams and programming languages using a language we created called Comlink. Once someone writes a profile defining a business capability and maps it to an API implementation, they can share the profile and map with other people.
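As a rough illustration, a Comlink profile for the SMS capability discussed earlier might look something like this. This is a simplified sketch based on Comlink's documented profile syntax; the capability name and field names are illustrative:

```
name = "communication/send-sms"
version = "1.0.0"

usecase SendMessage {
  input {
    to string
    text string
  }
  result {
    messageId string
  }
}
```

Note that nothing in the profile mentions Twilio or any other provider; the mapping to a concrete API lives in a separate Comlink map document.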
This means an integration only needs to be designed and tested once. After that, other people can rely on the original author’s design and testing, install the capability, and start using the API within a few minutes.
We believe our approach improves on and extends what Biggar describes. It makes mocking natural by recording traffic to use throughout the design and testing phases. It allows people to handle API changes and update clients without redeploying them. And it gives people the ability to share their API integrations, streamlining and improving the way a larger group of people adopt an API.