121 is an open-source platform for Cash-based Aid, built for the humanitarian sector by the Netherlands Red Cross. Learn more about the platform: https://www.121.global/

Status of all services: see https://status.121.global

For static analysis, formatting, code-style, functionality, integration, etc., see: Testing (below).
The documentation of the 121 platform can be found on the Wiki of this repository on GitHub: https://github.com/global-121/121-platform/wiki
- Install Git: https://git-scm.com/download/
- Install Node.js: https://nodejs.org/en/download/
  - Install the version specified in the `.node-version`-file.
  - To prevent conflicts between projects or components using other versions of Node.js, it is recommended to use a 'version manager':
    - FNM (for Windows/macOS/Linux)
    - NVM - Node Version Manager (for macOS/Linux)
    - NVM for Windows (for Windows)
- Install Docker:
  - On Linux, install Docker Engine + Compose plugin: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
  - On macOS, install Docker Desktop: https://docs.docker.com/docker-for-mac/install/
  - On Windows, install Docker Desktop: https://docs.docker.com/docker-for-windows/install/
If there are issues running Docker on Windows, you might need to do the following:

- Install the WSL2 Linux kernel package; check step 4 on https://learn.microsoft.com/en-us/windows/wsl/install-manual
- Set WSL2 as the default version in PowerShell: `wsl --set-default-version 2`; check step 5 on https://learn.microsoft.com/en-us/windows/wsl/install-manual
With these tools in place, you can check out the code and start setting up:

```sh
git clone https://github.com/global-121/121-platform.git
```

Navigate to the root folder of this repository:

```sh
cd 121-platform
```

Then install the required version of Node.js and npm:

- If you use FNM, run: `fnm use` (and follow the prompts)
- If you use NVM:
  - On macOS/Linux, run: `nvm install`
  - On Windows, run: `nvm install <version in .node-version file>`
Now, make sure to run the following in the root folder to install the necessary (pre-commit) hooks:

```sh
npm install
```

Copy the centralized .env-file:

```sh
cp -i services/.env.example services/.env
```
Each environment-variable is explained in the .env.example-file. See the comments above each variable.
The initially set values are the defaults that should enable you to do local development and run all (automated) tests.
- Some variables should have a unique/specific value for your (local) environment.
- Some are (sensitive) credentials or tokens to access third-party services. (Reach out to the development-team if you need them.)
- Some are feature-switches that enable/disable specific features of the platform.
To start all services, after setup, from the root of this repository, run:

```sh
npm run start:services
```

This will run Docker Compose in "attached" mode; the logs of all containers will be output to stdout.
To stop all services, press Ctrl + C.

To see the status/logs of all Docker-containers or a specific one, run (where `<container-name>` is optional; see the container-names in docker-compose.yml):

```sh
npm run logs:services <container-name>
```
To verify the successful installation and setup of services, access their Swagger UI:
- 121-Service: http://localhost:3000/docs/
- Mock-Service: http://localhost:3001/docs/
To install the dependencies of the portal, run:
npm run install:portal
Also, make sure to set the environment-variables. Run:
cp -i interfaces/portal/.env.example interfaces/portal/.env
To start the portal, from the root of this repository, run:
npm run start:portal
Or explore the specific options as defined in the package.json or README.md.
When started, the portal will be available via: http://localhost:8888
When you use VS Code, you can start multiple editor-windows at once. From the root of this repository, run:

```sh
npm run code:all
```

To start an individual interface/service in VS Code, run (where `<package>` is one of `portal`, `121-service`, `mock-service`):

```sh
npm run code:<package>
```
See the guidelines on how we work with environment-variables at the top of: .env.example.

All (sensitive) tokens, access keys for third-party APIs, infrastructure-specific hostnames and similar values should be injected into the application(s) via environment-variables, defined in the .env-file.
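For illustration, a service can then read these values at run-time via `process.env`; the variable names below are hypothetical examples (check the `.env.example`-files for the real ones):

```ts
// Hypothetical sketch: reading injected environment-variables at run-time.
// Variable names are illustrative; see services/.env.example for the actual list.
const databaseHost: string = process.env.POSTGRES_HOST ?? 'localhost';
const thirdPartyApiToken: string | undefined = process.env.THIRD_PARTY_API_TOKEN;

if (!thirdPartyApiToken) {
  // Sensitive credentials are never hard-coded; without them, related features stay disabled.
  console.warn('THIRD_PARTY_API_TOKEN is not set; related features will be disabled.');
}
```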
When making changes to the data-model of the 121-service (creating/editing any `*.entity.ts`-files), you need to create a migration-script to take these changes into effect. The process is:

1. Make the changes in the `*.entity.ts`-file.
2. To generate a migration-script, run: `docker exec 121-service npm run migration:generate src/migration/<descriptive-name-for-migration-script>`. This will compare the data-model according to your code with the data-model according to your database, and generate any CREATE, ALTER, etc. SQL-statements that are needed to make the database align with the code again.
3. Restart the 121-service through `docker restart 121-service`: this will always run any new migration-scripts (and thus update the data-model in the database), so in this case the just-generated migration-script.
4. If more changes are required, follow the above process as often as needed.
5. Do NOT import any files from our code base into your migrations. For example, do NOT import seed-JSON-files to get data to insert into the database, since the migration may break if these seed-JSON-files ever change. Instead, "hard code" the needed data in your migration-file.
6. Do NOT change migration-files anymore after they have been merged to main, like commenting out parts of them, since there is a high probability this will result in bugs or faulty data on production-instances. Instead, create a new migration-file. The exception is bug-fixing a migration-file, for example if a file was imported that causes the migration to fail (see 5 above).
7. To run this file locally, do: `docker exec -it 121-service npm run migration:run`
8. If you want to revert one migration, you can run: `docker exec -it 121-service npm run migration:revert`
9. If ever running into issues with migrations locally, the reset process is:
   - Delete all tables in the `121-service` database-schema.
   - Restart the `121-service`-container. This will now run all migration-scripts, starting with the `InitialMigration`-script, which creates all tables.
   - (Run seed)
10. When creating new sequences for tables with existing data, be sure to also update them using `setval` to the current max id (see the sketch below).
11. See also the TypeORM migration documentation for more info.
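For illustration, a generated migration-script roughly has the following shape. This is a hypothetical sketch (the table-, column- and class-names are made up), not the output of an actual run:

```ts
import { MigrationInterface, QueryRunner } from 'typeorm';

// Hypothetical example of a generated migration-script for the 121-service.
export class AddNicknameToRegistration1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "121-service"."registration" ADD "nickname" character varying`,
    );
    // "Hard code" any needed data instead of importing seed-files (see 5 above):
    await queryRunner.query(
      `UPDATE "121-service"."registration" SET "nickname" = 'unknown' WHERE "nickname" IS NULL`,
    );
    // If this migration had created a new sequence for a table with existing data,
    // it should also be updated here to the current max id, e.g.:
    // await queryRunner.query(
    //   `SELECT setval('"121-service".registration_id_seq', (SELECT MAX(id) FROM "121-service"."registration"))`,
    // );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "121-service"."registration" DROP COLUMN "nickname"`,
    );
  }
}
```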
NOTE: if you're making many data-model changes at once, or are doing a lot of trial and error, there is an alternative option:

- In `services/121-service/src/ormconfig.ts`, set `synchronize` to `true` and restart the 121-service (see the sketch below). This will make sure that any changes you make to `*.entity.ts`-files are automatically updated in your database-tables, which allows for quicker development/testing.
- When you're done with all your changes, you will need to revert all changes temporarily to be able to create a migration-script. There are multiple ways to do this, for example by stashing all your changes, or working in a new branch, etc. Either way:
  - stash all your changes (`git stash`)
  - restart the 121-service and wait until the data-model changes are actually reverted again
  - set `synchronize` back to `false` and restart the 121-service
  - load your stashed changes again (`git stash pop`)
  - generate the migration-script (see above)
  - restart the 121-service (like above, to run the new migration-script)
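A minimal sketch of the flag to toggle (the real `ormconfig.ts` contains more options than shown here):

```ts
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

// Simplified sketch of services/121-service/src/ormconfig.ts:
export const ORMConfig: PostgresConnectionOptions = {
  type: 'postgres',
  // ...host, credentials, entities, migrations, etc...
  synchronize: true, // TEMPORARY: auto-sync `*.entity.ts`-changes to the database; set back to `false` before generating the migration-script
};
```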
To test the migrations you are creating, you can use this .sh-script (Unix only): `./services/121-service/src/migration/test-migration.sh`

Example usage: `./services/121-service/src/migration/test-migration.sh main feat.new-awesome-entity`
This script performs the following steps:
- Checks out the old branch and stops the specified Docker containers.
- Starts the Docker containers to apply the migration and load some data.
- Waits for the service to be up and running, then resets the database with mock data.
- Checks out the new branch, applies any stashed changes, and restarts the Docker containers to run the migrations again.
All services use JSON Web Token (JWT) to handle authentication. The token should be passed with each request by the browser via an access_token cookie. The JWT authentication middleware handles the validation and authentication of the token.
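As an illustration of this pattern only (this is NOT the platform's actual middleware; the `jsonwebtoken` and `cookie-parser` packages and the secret's environment-variable name are assumptions), a minimal Express-style middleware could look like:

```ts
import { NextFunction, Request, Response } from 'express';
import * as jwt from 'jsonwebtoken';

// Minimal sketch: validate the JWT passed via the "access_token" cookie.
export function authenticateToken(req: Request, res: Response, next: NextFunction): void {
  const token = req.cookies?.['access_token']; // requires the cookie-parser middleware
  if (!token) {
    res.status(401).json({ message: 'No access_token cookie provided' });
    return;
  }
  try {
    // Throws if the token is invalid or expired:
    res.locals.user = jwt.verify(token, process.env.SECRET ?? '');
    next();
  } catch {
    res.status(401).json({ message: 'Invalid or expired token' });
  }
}
```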
To make development of all components of the 121-Platform easier, we recommend using VSCode with some specific extensions.
They are listed in .vscode/extensions.json-files, in each component's sub-folder, next to the root of the repository.
- Main/root-folder: generic extensions for code-style, linting, formatting, etc. Also GitHub Actions-workflow and Azure-related ones.
- Portal: additional extensions for working with Angular, Tailwind, etc.
- 121-Service / Mock-Service: additional extensions for working with Node.js, Jest unit-tests, etc.
When you open a folder in VS Code, go to "Extensions" and use the filter "Recommended" (@recommended); a list should be shown and each extension can be installed individually.
In VSCode, you can add a new recommended extension by selecting "Add to Workspace Recommendations" from the context-menu in the Extensions sidebar.
Make sure to add an extension to all (other) relevant extensions.json-files, so that it is available in all components of the 121-platform. Angular/CSS-specific extensions don't need to be shared, but TypeScript/Formatting/Developer-convenience-related ones do.
If the Swagger-UI is not accessible after installing Docker and setting up the services, you can take the following steps to debug:
- `docker compose ps` to list running containers and their status
- `docker compose logs -f <container-name>` to check their logs/console output (or leave out the `<container-name>` to get ALL output)

If there are issues with Docker commands, it could be due to permissions. Prefix your commands with `sudo docker ...`.
If the errors are related to not being able to access/connect to the database, then reset/recreate the database by:

- Setting `dropSchema: true` in `src/ormconfig.ts` of the specific service.
- Restarting that service; this will reset/recreate its database(-schema).
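Analogous to the `synchronize`-sketch above, this is the flag to toggle temporarily (simplified sketch, not the full file):

```ts
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

// src/ormconfig.ts of the affected service (simplified sketch):
export const ORMConfig: PostgresConnectionOptions = {
  type: 'postgres',
  // ...
  dropSchema: true, // TEMPORARY: drops + recreates the schema on every start; remove again after the reset
};
```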
When considering upgrading the (LTS) version of the Node.js runtime, take into account:
- The Node.js Release schedule: https://github.com/nodejs/release#release-schedule
- The (specific) version supported by Microsoft Azure App Services, in their Node.js Support Timeline: https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md
- Angular's actively supported versions: https://angular.io/guide/versions#actively-supported-versions
When new Node.js dependencies have been added to a service since it was last built on your local machine, you can:

- Verify if everything is installed properly: `docker compose exec <container-name> npm ls`
- If that produces errors or reports missing dependencies, try to build the service from a clean slate with: `npm run install:services -- --no-cache <container-name>`, or similarly: `npm run start:services -- --force-recreate <container-name>`
- Scenarios of E2E- and integration/API-tests for the whole platform are described in the Azure Test Plan.
- Each component has its own individual tests:
  - Unit-tests and UI-tests for all interfaces; run with `npm test` in each `interfaces/*`-folder.
  - Unit-tests and API/integration-tests for all services; run with `npm test` in each `services/*`-folder. See the README: 121-service / Testing for details.
- For how to write and execute Playwright E2E-tests, see e2e/README.md / E2E testing suite.
- For how to write and maintain Azure Test Plan suites, see wiki/Creating and maintaining E2E tests.
- Is it to test query-magic?
- Is it to test essential endpoints (FSP-integrations) and imports/exports/etc.?
- Is it an often-used (with different parameters) endpoint, like `PATCH /registration` etc.?
- Is there actual business-logic performed?
- Not necessary:
  - updating single (program) properties
- Examples:
  - import Registrations -> change status (with a list of `referenceId`s) -> export included registrations
  - update Registration-attributes: all different content-types + possible values (including edge cases)

Keep in mind: these tests are still expensive (they bootstrap the app + database).
There are a few reasons why we write unit test cases:

- Unit tests are written to ensure the integrity of the code at the functional level. They help us identify mistakes, unnecessary code, and room for improvement to make the code more intuitive and efficient.
- We also write unit test cases to clearly state what a method is supposed to do, so that onboarding is smoother for new joiners.
- They help us follow recommended DevOps practices for maintaining a code base while working in teams.

How are unit tests affected when we make changes to the code in the future?

- We should aim to write and update unit tests alongside the current development, so that our tests are up to date and reflect the changes made. This helps us stay on track.
- Unit tests differ here from manual or automated UI-testing: while the UI may not exhibit any changes on the surface, the modified code might declare new variables or make new method calls, all of which need to be tested. The new test-scenario or spec-file should be committed together with the feature change.

We are using Jasmine for executing unit tests within the interfaces and Jest within the services. However, while writing the unit test cases, the writing style and testing paradigm do not differ, since Jest is based on Jasmine.
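To illustrate that shared style, the following spec runs unchanged under both frameworks (the function under test is hypothetical, purely for demonstration):

```ts
// Hypothetical function under test:
function calculateTotalPayout(registrations: { amount: number }[]): number {
  return registrations.reduce((sum, registration) => sum + registration.amount, 0);
}

// The same `describe`/`it`/`expect` style works in Jasmine (interfaces) and Jest (services):
describe('calculateTotalPayout()', () => {
  it('returns 0 when there are no included registrations', () => {
    expect(calculateTotalPayout([])).toBe(0);
  });

  it('sums the amounts of all registrations', () => {
    expect(calculateTotalPayout([{ amount: 10 }, { amount: 15 }])).toBe(25);
  });
});
```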
See the Guide: Writing tests
Test coverage is collected and reported to the QLTY dashboard. This information is then used to determine whether a PR is decreasing test coverage or not.
Refer to the README file of the 121-service or the portal interface for more detailed information on how each coverage report is generated.
See notable changes and the currently released version on the Releases page.
This project uses the CalVer-format: YY.MM-MICRO (e.g. 22.1-0; see the glossary below).
This is how we create and publish a new release of the 121-platform. (See the glossary below for definitions of some terms.)

1. Define what code gets released. (Is the current state of the main-branch what we want? Or a specific commit/point-in-the-past?)
2. Check the changes since the last release, by replacing `vX.X-X` with the latest release in this URL: `https://github.com/global-121/121-platform/compare/vX.X-X...main`
3. Check any changes to `services/.env.example`. If there are:
   - related to the 121-service: use the GitHub Actions-workflow "infrastructure: Set Azure WebApp Environment variable" (set the instance to `staging`!)
   - related to the Mock-Service: make any configuration changes in the Azure Portal to its staging-instance(s).
4. Check any changes to `interfaces/portal/.env.example`. If there are, then make any configuration changes to the "staging"-environment settings on GitHub.
5. Define the version-name for the upcoming release.
6. "Draft a release" on GitHub:
   - For "Choose a tag": insert the version to create a new tag.
   - For "Target": choose the commit which you would like to release (defined in the first step).
   - Set the title of the release to `<version>`.
   - Use the "Generate release notes" button and double-check the contents. This will be the basis of the "Inform stakeholders"-message to be posted on Teams.
7. Publish the release on GitHub (as 'latest', not 'pre-release'). This will trigger the deployment-workflow, which can be monitored under the GitHub Action runs.
8. Check the deployed release on the staging-environment (this can take some time...).
9. Now, and throughout the release process, it is wise to monitor the combined CPU usage of our App Services.
10. If all looks fine, proceed with deploying the release to all other production-instances.
11. Make any configuration changes (ENV-variables, etc.) on each App Service just before deployment:
    - For the 121-service: use the GitHub Actions-workflow "infrastructure: Set Azure WebApp Environment variable".
    - For the Mock-Service: use the Azure Portal to update its production-instance.
12. Make any configuration changes for the Portal in each client's GitHub-environment-settings.
13. Use the "Deploy <client name> All" deployment-workflows on GitHub Actions to deploy the version-tag to each production-instance.
    ⚠️ Note: Start with deployment of the "Demo"-instance. This will also deploy the Mock-Service to its production-environment.
14. Send the "Inform stakeholders"-message to the "121 Releases" Teams channel.
Azure App Service uses deployment slots that can cause temporary database errors during releases:
- What happens: New instance starts and runs migrations while the old instance still serves requests
- Symptoms: Database query errors (missing columns/tables) that only occur during deployments
- Solution: Retry affected endpoints to see if it was caused by this temporary state
- Note: This is expected behavior and resolves automatically once deployment completes
Recognition: Errors only appear during releases and disappear when you recheck the same endpoint.
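If a script or integration needs to ride out this window automatically, a simple retry wrapper is enough. A hypothetical sketch (the timings and retry counts are placeholders):

```ts
// Hypothetical sketch: retry a request a few times during the deployment-slot
// window in which migrations may not have finished yet.
async function fetchWithRetry(url: string, attempts = 3): Promise<Response> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    const response = await fetch(url);
    if (response.ok) {
      return response;
    }
    if (attempt < attempts) {
      // Wait a bit before retrying; slot-swaps usually complete within minutes.
      await new Promise((resolve) => setTimeout(resolve, attempt * 5_000));
    }
  }
  throw new Error(`Request to ${url} kept failing during deployment.`);
}
```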
Only in rare, very specific circumstances might it be required to do a "hotfix-release", for example when:

- A new (incomplete) feature, that is not yet communicated to end-users, is already in main.
- A (complex) migration-script, that would require an out-of-office-hours deployment, is already in main.
- Another issue that would pose (too big) a risk to deploy to any of the currently running instances.
Consider however, that it is also possible to postpone the deployment of a regular release, for a specific instance.
This follows a similar process to regular release + deployment, with some small changes.
1. Check out the `<version>`-tag which contains the code that you want to hotfix.
2. Create a new local hotfix-branch using that tag as the HEAD (e.g. `hotfix/<vX.X-X>`, with an increased final MICRO-number) and make the changes.
3. Push this branch to the upstream/origin repository on GitHub. Verify the test-run(s) on the hotfix-branch by looking at the status of the last commit on the branches-overview.
4. Create a new release + tag (see above), selecting the `hotfix/v*`-branch as target, and publish it.
5. Use the deployment-workflows on GitHub Actions to deploy the newly created tag (not the branch), for each required instance.
6. After the hotfix has been released to production, follow standard procedures to get the hotfix-code into the main-branch.
Note: Do not rebase/update the hotfix/v*-branch onto the main-branch until AFTER you have successfully deployed the hotfix to production.
The hotfix-branch is created from a "dangling" commit, which can confuse the GitHub UI when you look at a PR between the newly created hotfix-branch and the main-branch. Any conflict-warnings shown on GitHub are not relevant for the hotfix-deployment; they only need to be addressed to merge the hotfix into the main-branch afterwards.
If you deploy the 121-platform to a server for the first time, it is recommended to set up a separate Postgres database server. The connection to this database can be made by editing the POSTGRES_* variables in services/.env.
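For example (only the `POSTGRES_`-prefix is given in this document; the exact variable names and values below are assumptions, check services/.env.example for the authoritative list):

```sh
# Hypothetical example of the database-connection variables in services/.env:
POSTGRES_HOST=my-database-server.postgres.database.azure.com
POSTGRES_PORT=5432
POSTGRES_USER=db_admin_user
POSTGRES_PASSWORD=a-long-random-secret
```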
See: (via GitHub Action(s); i.e. deploy_test_*.yml)

- PRs to the main-branch are automatically deployed to an individual preview-environment.
- When merged, a separate deployment is done to the test-environment, for that interface only.
See: (via GitHub Action(s); i.e. deploy_staging_*.yml)

- Created/published releases are automatically deployed to the staging-environment.
- A manual deploy can be done using the GitHub UI, using "Run workflow"/workflow_dispatch and selecting the preferred release-version tag (or branch, for testing on the staging-environment).
See: (via GitHub Action(s); i.e. deploy_test_service.yml, deploy_test_mock-service.yml)
- When merged, a separate deployment is done to the test-environment.
- Make sure to update any environment-configuration in the Azure-portal as soon as possible, preferably before the merge & deploy.
- Follow the steps from "Create an instance" in the infrastructure-repository.
- Build/deploy the platform via the GitHub Action(s) by selecting the target release-version tag.
- Decide on what version to deploy.
- Prepare the environment accordingly (setting all environment-variables, etc.).
- A manual deploy can be done using the GitHub UI, using "Run workflow"/workflow_dispatch and selecting the preferred release-version tag (or branch, for testing on the staging-environment).
| Term          | Definition (we use)                                                                          |
| ------------- | -------------------------------------------------------------------------------------------- |
| `version`     | A name specified in the CalVer-format: `YY.MM-MICRO`                                          |
| `tag`         | A specific commit or point-in-time on the git-timeline; named after a version, i.e. `v22.1-0` |
| `release`     | A fixed 'state of the code-base', published on GitHub                                         |
| `deployment`  | An action performed to get (released) code running on an environment                          |
| `environment` | A machine that can run code (with specified settings); i.e. a server, or your local machine   |
Released under the Apache 2.0 License. See LICENSE.