Visual Regression Testing
This document is a guideline for running and creating visual regression tests in the ATLAS project.
In order to safeguard the visual consistency of our components, we employ visual regression tests. A standard snapshot test captures a screenshot of a component in the browser and compares it with a reference snapshot. If the two snapshots don't match, the test fails, which indicates either an unexpected change or the need to update the reference snapshot to the new version.
For this purpose we use Cypress, a headless version of Chrome, and the cypress-visual-regression plugin. In Cypress we connect to our Docusaurus host, where we have a dedicated route for visual testing of components.
note
We want to run the tests in an environment identical to GitHub Actions, and we also want the tests to run in the same environment regardless of the machine or processor the user is using.
Setup for Visual Testing
To achieve this we connect to Rancher Desktop (a free alternative to Docker Desktop) and to the office VPN for security. Read more.
Rancher Desktop
You can download Rancher Desktop from here.
Install with default settings.
Office VPN
Please follow the instructions here to connect to the office VPN. This ensures that only Adjust engineers can access the remote server.
Finally, point your Docker client to the remote Docker host by running
export DOCKER_HOST=tcp://3.71.105.139:2375
and then run the tests.
Running visual regression tests with Docker
Make sure Rancher Desktop is running. When running for the first time, the contents of the project are copied into the Docker image, which can take a little while.
yarn run-test-with-docker
This command runs all tests on Rancher Desktop. It automatically runs each test case under the cypress/visual-tests folder in actual mode.
Update Snapshots
yarn run-update-test-with-docker
The command above runs all tests on Rancher Desktop and updates our base folder: it runs each test file under the cypress/e2e folder and updates the reference snapshots in base mode. Run it manually whenever you create a new test, update existing test cases, or expect intentional visual changes.
You should see similar results.
Commands for Rancher Desktop
Command for listing running containers
docker ps
Command for viewing the logs of a running container
docker logs <containerID>
Command for exploring the container's file system
docker exec -it <containerID> bash
Error statement: No version of Cypress is installed.
Commands to re-install Cypress:
npm uninstall -D cypress -g
npm install cypress
npx cypress install --force
Creating a new test scenario
1. Create a Test File:
- First, create a new file to write your test code. In this example, we store our test files in the cypress/e2e directory.
- Create the file with a ComponentName.cy.js name that represents the component you are testing. For instance, if you are testing a button component, name the file Button.cy.js.
2. Import Necessary Support and Define Wait Time:
- Before writing your tests, you should import cypress-real-events/support. This package provides additional support for simulating real events in your tests.
- If you intend to use wait times in your tests, define a waitTime that suits your scenario.
3. Define Test Description and beforeEach Block:
- Begin your test by using the describe() function to provide a clear description of what the test is going to cover.
- Inside the describe() block, create a beforeEach() block. This block sets up the environment that will be used before each test in the file. Here, you can add settings that you want to apply to every test.
4. Visit the URL and Set Viewport:
- Inside the beforeEach() block, set up the URL for the component you want to test. This URL is constructed using the visualRegressionBaseURL and path variables.
- Use cy.visit(url) to visit the specified URL in the Cypress browser.
- Call cy.setViewport() to set the viewport. This ensures consistent viewport settings for your tests.
Putting it all together, the starting code for your first test in Cypress would look like this:
// Import necessary support
import 'cypress-real-events/support';
// Define waitTime if needed
const waitTime = 1000; // Replace 1000 with your desired wait time in milliseconds
// Write the test
describe('Description of Test', () => {
beforeEach(() => {
// Set up the environment for each test
const visualRegressionBaseURL = Cypress.env('visualRegressionBaseURL');
const path = 'component_name'; // Replace 'component_name' with the actual path to your component
const url = `${visualRegressionBaseURL}${path}`;
cy.visit(url);
cy.setViewport(); // Set the viewport to your desired settings
});
// Your test cases go here...
});
5. Write Test Code:
Once you have set up the test file and the beforeEach() block, you can proceed to write your actual tests using it() blocks. Each it() block represents an individual test case. Inside these blocks, you identify the element you want to focus on using cy.get() and then call compareSnapshotWithConfig() to perform visual regression testing by comparing screenshots.
Look at an example test file for the Button component's visual regression tests:
import 'cypress-real-events/support';
const waitTime = Cypress.env('waitTime');
describe('Button component’s visual regression tests', () => {
beforeEach(() => {
const visualRegressionBaseURL = Cypress.env('visualRegressionBaseURL');
const path = 'Button';
const url = `${visualRegressionBaseURL}${path}`;
cy.visit(url);
cy.setViewport();
});
// First test case
it('Icon-left', () => {
// Identify the element you want to test using cy.get()
cy.get('#icon-left').compareSnapshotWithConfig('#icon-left', 'icon-left');
});
// Additional test cases can be added here...
});
In this example, we have a test case labeled 'Icon-left'. Inside this test case, we use cy.get('#icon-left') to select the element with the ID icon-left (you can replace this with the appropriate selector for your component). We then call compareSnapshotWithConfig() to compare the screenshot of this element with the reference snapshot for the same element.
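The compareSnapshotWithConfig() and cy.setViewport() commands are custom commands defined in the project's Cypress support files. The sketch below is only an illustration of how such commands could be registered, not the project's actual implementation: the viewport size, the error threshold, and the delegation to the cypress-visual-regression plugin's compareSnapshot command are assumptions and may differ from the real definitions.
// cypress/support/commands.js (illustrative sketch, not the project's actual implementation)
// Assumes the cypress-visual-regression plugin is installed and registers cy.compareSnapshot().
// Hypothetical fixed viewport: a fixed size keeps screenshots identical across machines.
Cypress.Commands.add('setViewport', () => {
  cy.viewport(1280, 720); // assumed dimensions
});
// Child command so it can be chained after cy.get(); the selector argument is kept
// only to mirror the call signature used in the examples above.
Cypress.Commands.add(
  'compareSnapshotWithConfig',
  { prevSubject: true },
  (subject, selector, name) => {
    // Delegate to the plugin's comparison command with an assumed error threshold.
    cy.wrap(subject).compareSnapshot(name, { errorThreshold: 0.05 });
  }
);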
You can add more test cases by creating additional it() blocks. Each test case should focus on a specific aspect or state of the component you want to test, ensuring comprehensive coverage of its visual appearance.
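For instance, interaction states can be captured with the real-event commands from cypress-real-events together with the waitTime defined earlier. The sketch below is only an illustration: the #icon-left selector is reused from the example above, while the hover scenario and the 'icon-left-hover' snapshot name are assumptions, not existing test cases.
// Additional test case (illustrative sketch): capture the hover state of the element.
it('Icon-left hover', () => {
  cy.get('#icon-left').realHover(); // realHover() is provided by cypress-real-events
  cy.wait(waitTime); // give the hover styles time to settle before the screenshot
  cy.get('#icon-left').compareSnapshotWithConfig('#icon-left', 'icon-left-hover');
});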
note
If you want to add new features to your component test, make sure the functionality is included in the design page. You can find the page designs under the packages/docusaurus/src/pages/visual-regression path.
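As a purely hypothetical illustration of what such a design page might contain, a page for the Button component could render each state under a stable id that the tests can target with cy.get(). The file name, the import path of the Button component, and its props below are all assumptions; check the existing pages under that folder for the actual conventions.
// packages/docusaurus/src/pages/visual-regression/Button.jsx (hypothetical example)
import React from 'react';
// The import path and props of the Button component are assumptions.
import { Button } from '@adjust/components';

export default function ButtonVisualRegressionPage() {
  return (
    <div>
      {/* Each state gets a stable id so the Cypress tests can select it, e.g. #icon-left */}
      <div id="icon-left">
        <Button label="Icon left" iconName="Placeholder" iconPosition="left" />
      </div>
    </div>
  );
}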
Base Mode
The "Base" mode refers to the set of reference screenshots that serve as a baseline for comparison. These screenshots are stored in the "base" folder and represent the expected output for specific functionalities or components in the application. These reference screenshots are considered the correct output and are used as a point of comparison during testing.
Actual Mode
The "Actual" mode contains the screenshots of the current developments or changes made to the application. These screenshots are captured during the test runs and are stored in the "actual" folder. The "Actual" screenshots represent the output produced by the latest version of the application under test.
Differences between Actual and Base Modes
During testing, we compare the "Actual" screenshots with the corresponding "Base" screenshots to identify any discrepancies or deviations from the expected behavior. If there are any differences, they indicate potential issues or regressions in the application, and further investigation is required.
By maintaining a clear separation between the "Actual" and "Base" modes and comparing them systematically, we ensure the stability and reliability of our application's user interface and user experience.
note
It's crucial to update the "Base" screenshots whenever there are intentional changes in the application that reflect the desired behavior. This ensures that the tests accurately represent the expected output as the application evolves.