This document describes:

- the layout of the `trustify-tests` repository
- how to contribute a test
- how to set up your environment to run the `trustify-tests` on your local machine
The layout of the `trustify-tests` repository looks as follows:

```
.
├── package.json
├── playwright.config.ts
├── config
├── etc
└── tests
    ├── api
    │   ├── fixtures.ts
    │   ├── client
    │   ├── dependencies
    │   ├── features
    │   └── helpers
    ├── common
    │   ├── constants.ts
    │   └── assets
    │       ├── csaf
    │       └── sbom
    └── ui
        ├── dependencies
        ├── features
        │   ├── *.feature
        │   ├── @sbom-explorer
        │   └── @vulnerability-explorer
        ├── helpers
        └── steps
```
- `package.json` - project configuration that `npm` (the Node.js package manager) understands; you can define your scripts (commands) here and then execute them with `npm run <your command>`
- `playwright.config.ts` - the configuration for Playwright and Playwright-BDD
- `config` contains configuration files that are common to the repository; currently it contains
  - `openapi.yaml` - a file with the Trustify API definition; every time the file changes on the Trustify side, it should also be updated here
  - `openapi-ts.config.ts` - a configuration for `@hey-api/openapi-ts` telling it how to generate the content of `tests/api/client`; whenever this file or `openapi.yaml` changes, `npm run openapi` should be executed to update the content of `tests/api/client`
- `etc` contains auxiliary files, such as Podman/Docker compose files to start a Playwright container
- `tests/api` contains API tests, organized as follows
  - `fixtures.ts` - API test fixtures written in TypeScript
  - `client` contains a TypeScript interface to the Trustify API, generated from `config/openapi.yaml` by `npm run openapi`
  - `dependencies` contains setup and teardown routines which are run before the start and after the end of the API test suite, respectively
  - `features` contains the API tests themselves; `_openapi_client_examples.ts` shows how to use the generated TypeScript interface to Trustify in API tests
  - `helpers` contains auxiliary utilities used by API tests
- `tests/common` contains data and definitions shared by both API and UI tests
  - `constants.ts` - constant definitions used by both API and UI tests
  - `assets/csaf` contains compressed (bz2) samples of CSAF files
  - `assets/sbom` contains compressed (bz2) samples of SBOM files
- `tests/ui` contains UI tests; UI tests are developed following the BDD (Behavior Driven Development) methodology; the directory is organized as follows
  - `dependencies` contains setup and teardown routines which are run before the start and after the end of the UI test suite, respectively
  - `features` contains the UI tests themselves; the content of the directory is further organized as follows
    - `*.feature` files are test scenarios described in Gherkin; `*.feature` files at the top level of the `tests/ui/features` directory describe scenarios that still need to be implemented in the front end; that is, they describe the expected front end behavior
    - `@*` directories contain `*.feature` files and `*.step.ts` files used to test the front end features implemented so far; see also the Tags from path documentation
  - `helpers` contains auxiliary utilities used by UI tests
  - `steps` contains implementations of common BDD steps used in `tests/ui/features`
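For orientation, the `scripts` section of `package.json` could be wired up along these lines. The script names `openapi` and `test` are the ones this document invokes via `npm run`; the exact commands behind them are assumptions for illustration, not copied from the repository:

```json
{
  "scripts": {
    "openapi": "openapi-ts",
    "test": "playwright test"
  }
}
```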
To contribute an API test, put your code under the `tests/api/features` directory.
If the test also contains generic code that could be shared by more API tests,
put that code under the `tests/api/helpers` directory. If that code is
also intended to be shared by UI tests, put it under the `tests/common` directory
instead. If you also have assets that need to be contributed together with
the test, put them under the `tests/common/assets` directory.
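As an example of "generic code that could be shared", here is a minimal sketch of a helper that would belong in `tests/api/helpers`, or in `tests/common` if UI tests needed it too. The helper name `apiUrl` and the endpoint path are hypothetical, not taken from the repository:

```typescript
// Hypothetical shared helper: join an API path onto the Trustify base URL.
// Both the helper name and the endpoint path below are illustrative only.
export function apiUrl(base: string, path: string): string {
  return new URL(path, base).toString();
}

console.log(apiUrl("http://localhost:8080", "/api/v2/sbom"));
// -> http://localhost:8080/api/v2/sbom
```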
To contribute a UI (front end) test, put your code under the `tests/ui/features`
directory. Depending on the status of the feature your test is trying to cover,
there are two ways to proceed:

- The test covers an already implemented UI feature. Put your test under a `tests/ui/features/@*` directory. You can choose from the existing directories or create your own, depending on your use case.
- The test covers a use case (scenario) not yet implemented. Describe your use case in Gherkin and put it inside a `tests/ui/features/*.feature` file. The use case should be communicated with the upstream beforehand. Once the upstream implements the requested features covering your use case, the next step is to move your `*.feature` file(s) under a `tests/ui/features/@*` directory and implement the missing steps to make them work under the Playwright-BDD framework.
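For illustration, a top-level `tests/ui/features/*.feature` file describing a not-yet-implemented scenario might read like this. The feature name and UI texts below are made up, not taken from the actual test suite:

```gherkin
Feature: SBOM search
  Scenario: Search SBOMs by name
    Given the user is on the "Search" page
    When the user types "quarkus" into the search field
    Then the results table only lists SBOMs whose names contain "quarkus"
```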
Other directories you should be interested in when contributing a UI test:

- `tests/common` (described earlier)
- `tests/ui/helpers` follows the same rules as `tests/api/helpers`
- `tests/ui/steps` is the right place for steps that are common across many use cases (scenarios)
Playwright is officially supported on:

- Windows 10+, Windows Server 2016+, or Windows Subsystem for Linux (WSL)
- macOS 13 Ventura or later
- Debian 12, Ubuntu 22.04, and Ubuntu 24.04, on x86-64 and arm64 architectures
To run the tests on an unsupported Linux distribution such as Fedora, Playwright can run inside a Docker/Podman container while the tests are executed from the client (your local machine). To do this, follow the steps below.
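To give an idea of what this setup looks like, here is a rough sketch of a compose file in the spirit of `etc/playwright-compose/compose.yaml`. The image tag and command are assumptions for illustration only; consult the actual file under `etc`:

```yaml
# Hypothetical sketch; the real file is etc/playwright-compose/compose.yaml.
services:
  playwright:
    image: mcr.microsoft.com/playwright:${PLAYWRIGHT_VERSION}
    command: npx playwright run-server --port ${PLAYWRIGHT_PORT} --host 0.0.0.0
    ports:
      - "${PLAYWRIGHT_PORT}:${PLAYWRIGHT_PORT}"
```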
First, clean-install the requirements:

```shell
npm ci
```

Then, get the Playwright version (it is important that the client and server versions of Playwright match; otherwise you get a `<ws unexpected response> ws://localhost:5000/ 428 Precondition Required`-like error when you try to run the tests):

```shell
export PLAYWRIGHT_VERSION="v$(npx playwright --version | cut -d' ' -f2)"
```

By default, Playwright listens on port 5000 (the default value of `PLAYWRIGHT_PORT` from `etc/playwright-compose/.env`). You can override this value if it is already taken by the system or another application:

```shell
export PLAYWRIGHT_PORT=<your choice of port number>
```

Then, start the Playwright service (you can override `etc/playwright-compose/.env` by exporting environment variables with your own values, as demonstrated above, or you can pass your own `.env` file to `{docker,podman}-compose` via the `--env-file <env_file>` CLI option):

```shell
podman-compose -f etc/playwright-compose/compose.yaml up
```

After a while, the container should be in the Ready state and you should see the output (replace 5000 with the value of `PLAYWRIGHT_PORT`):

```
Listening on ws://127.0.0.1:5000/
```

Now you can execute the Playwright tests (again, replace 5000 with the value of `PLAYWRIGHT_PORT`):

```shell
TRUSTIFY_URL=http://localhost:8080 PW_TEST_CONNECT_WS_ENDPOINT=ws://localhost:5000/ npm run test
```

When you are finished with testing, you can shut down the container with Ctrl+C and:

```shell
podman-compose -f etc/playwright-compose/compose.yaml down
```
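As a side note, the `PLAYWRIGHT_VERSION` value exported earlier is just a small string manipulation. The equivalent logic, sketched in TypeScript (the version number here is only an example):

```typescript
// `npx playwright --version` prints something like "Version 1.48.2";
// `cut -d' ' -f2` takes the second space-separated field, and "v" is prefixed.
const raw = "Version 1.48.2"; // simulated `npx playwright --version` output
const version = "v" + raw.split(" ")[1];
console.log(version); // -> v1.48.2
```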