Requirements:
- Git
- Postgres + PostGIS
- UV
- Pre-commit
- 1Password CLI
- pgsync (optional for database syncing)
Setup:
- install tools (`brew install git uv pre-commit 1password-cli postgis pgsync`)
- configure access to private dependencies [1]
- clone and setup project [2]
- generate an `.env` file
- configure local databases [3]
[1]
You will need a BAS GitLab access token, set in `~/.netrc`, to install privately published app dependencies:
machine gitlab.data.bas.ac.uk login __token__ password {{token}}
[2]
% git clone https://gitlab.data.bas.ac.uk/MAGIC/assets-tracking-service.git
% cd assets-tracking-service/
% pre-commit install
% uv sync --all-groups
[3]
% psql -d postgres -c "CREATE USER assets_tracking_owner WITH PASSWORD 'xxx';"
% psql -d postgres -c "CREATE USER assets_tracking_service_ro WITH PASSWORD 'xxx';"
Where xxx are placeholder values.
Then run the reset-db Development Task to create databases and required extensions.
Tip
You can also Populate the development database from production.
To run the application CLI:
% uv run ats-ctl --help
Or if within the project virtual environment:
% ats-ctl --help
See the CLI Reference documentation for available commands.
Taskipy is used to define development tasks, such as running tests and resetting local databases. These tasks are akin to NPM scripts or similar concepts.
Run `task --list` (or `uv run task --list`) for available commands.
Run `task [task]` (or `uv run task [task]`) to run a specific task.
See Adding development tasks for how to add new tasks.
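Taskipy tasks are defined in `pyproject.toml`. A hypothetical excerpt showing the shape of such definitions (the task names and commands here are illustrative; see this project's `pyproject.toml` for the real set):

```toml
[tool.taskipy.tasks]
# hypothetical examples - the real task definitions live in pyproject.toml
test = "pytest"
lint = "ruff check ."
```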
Tip
If offline, use `uv run --offline task ...` to avoid lookup errors from trying to resolve the unconstrained build
system requirements in `pyproject.toml`, which is a Known Issue within UV.
All changes except minor tweaks (typos, comments, etc.) MUST:

- be associated with an issue (either directly or by reference)
- be included in the Change Log
- all deployable code should be contained in the `assets-tracking-service` package
- use `Path.resolve()` if displaying or logging file/directory paths
- use logging to record how actions progress, using the app logger (`logger = logging.getLogger('app')`)
- extensions to third party dependencies should be:
  - created in `assets_tracking_service.lib`
  - documented in Libraries
  - tested in `tests.lib_tests/`
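The app-logger convention above can be sketched as follows (the function and log messages are illustrative, not from the codebase):

```python
import logging

# Use the shared 'app' logger rather than per-module loggers (project convention).
logger = logging.getLogger('app')


def fetch_positions() -> None:
    """Hypothetical action demonstrating progress logging."""
    logger.info("Fetching latest asset positions...")
    # ... work would happen here ...
    logger.info("Finished fetching positions.")
```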
In the `assets_tracking_service.Config` class:

- define a new property
- add the property to the `ConfigDumpSafe` typed dict
- add the property to the `dumps_safe()` method
- if needed, add logic to the `validate()` method
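The steps above can be sketched as follows. The property name, environment variable, and validation rule are hypothetical; the real class lives in the `assets_tracking_service` package and may differ in shape:

```python
import os


class Config:
    """Minimal stand-in for the real Config class (illustrative only)."""

    @property
    def FOO_TIMEOUT(self) -> int:
        """Hypothetical new config property, read from the environment."""
        return int(os.environ.get('ASSETS_TRACKING_SERVICE_FOO_TIMEOUT', '30'))

    def dumps_safe(self) -> dict:
        """Include the new property in the safe dump."""
        return {'FOO_TIMEOUT': self.FOO_TIMEOUT}

    def validate(self) -> None:
        """If needed, add validation logic for the new property."""
        if self.FOO_TIMEOUT <= 0:
            raise ValueError('FOO_TIMEOUT must be positive')
```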
In the Configuration documentation:
- add to Options Table in alphabetical order
- if needed, add a subsection to explain the option in more detail
If configurable:

- update the `.env.tpl` template and any existing `.env` files
- update the `[tool.pytest_env]` section in `pyproject.toml`
In the `tests.assets_tracking_service_tests.config` module:

- update the expected response in the `test_dumps_safe` method
- if validated, update the `test_validate` (valid) method and add new `test_validate_` (invalid) tests
- update or create other tests as needed
To create a migration, run the `migration [slug]` Development Task, where `[slug]` is a short, `-` separated
identifier (e.g. `foo-bar`).
This will create an up and a down migration in the `db_migrations` resource directory. Migrations are numbered
(ascending for up migrations, descending for down) to ensure they are applied in the correct order.
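The ordering convention can be illustrated with a small sketch (the file names are hypothetical):

```python
# Hypothetical migration file names showing the numbering convention:
up_migrations = ['001-create-asset.sql', '002-create-position.sql']

# Up migrations apply in ascending (lexical) order; down migrations in descending order.
down_migrations = sorted(up_migrations, reverse=True)
```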
- include a related GitLab issue wherever applicable to these migrations
- views should be named with a `v_` prefix
- if adding a new table with static data, add it to the exclusions in `.pgsync.yml` & `tpl/.pgsync.yml.tpl`
- update the Data Model documentation as necessary
- migrations should be grouped into logical units:
  - e.g. for a new entity, define the table and its indexes, triggers, etc. in a single migration
  - multiple entities (even if related and part of the same change/feature) SHOULD use separate migrations
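A hypothetical `.pgsync.yml` excerpt showing a static-data table excluded from syncing (the table name is illustrative, and the exact key structure should be checked against the generated config):

```yaml
# hypothetical excerpt - keep static-data tables out of the production sync
exclude:
  - some_static_table
```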
Caution
Existing migrations MUST NOT be amended. If, for example, a column type changes, use an `ALTER` command in a new migration instead.
Note
The application database role does not have privileges to create other roles.
See the Implementation documentation for more information on migrations.
If a new CLI command group is needed:

- create a new module within the `assets_tracking_service.cli` package
- create a corresponding test module in `tests.assets_tracking_service_tests.cli`
- import and add the new command group in the `assets_tracking_service.cli` module
In the relevant command group module, create a new method:
- make sure the command decorator name and help are set correctly
- follow the conventions established in other commands for error handling and presenting data to the user
- add corresponding tests
In the CLI Reference documentation:
- if needed, create a new command group section
- list and summarise the new command in the relevant group section
- add an `ENABLE_PROVIDER_FOO` Config Option for enabling/disabling the provider
- update the `ENABLED_PROVIDERS` computed config property to include the new provider
- add provider specific Config Options as needed
- create a new module in the `assets_tracking_service.providers` package
- create a new class inheriting from the `assets_tracking_service.providers.base_provider.BaseProvider` class
- implement methods required by the base class
- include in the `assets_tracking_service.providers.providers_manager.ProvidersManager` class and update the `_make_providers()` method
- add tests as needed
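A minimal sketch of the provider pattern described above. The base-class method shown here is a hypothetical stand-in; the real interface is defined by `assets_tracking_service.providers.base_provider.BaseProvider` and may differ:

```python
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    """Stand-in for the real BaseProvider (illustrative only)."""

    @abstractmethod
    def fetch_positions(self) -> list[dict]:
        """Fetch latest asset positions from the provider's API."""
        ...


class FooProvider(BaseProvider):
    """Hypothetical provider for a 'foo' tracking service."""

    def fetch_positions(self) -> list[dict]:
        # A real provider would call the external API here.
        return [{'asset': 'example', 'lat': -75.0, 'lon': -25.0}]
```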
Caution
This section is Work in Progress (WIP) and may be incomplete/inaccurate.
- add an `ENABLE_EXPORTER_FOO` Config Option for enabling/disabling the exporter
- update the `ENABLED_EXPORTERS` computed config option to include the new exporter
- add exporter specific Config Options as needed
- if the exporter relies on another, update the `Config.validate()` method to ensure the dependent exporter is enabled
- create a new module in the `assets_tracking_service.exporters` package
- create a new class inheriting from the `assets_tracking_service.exporters.base_exporter.BaseExporter` class
- implement methods required by the base class
- integrate into the `assets_tracking_service.exporters.exporters_manager.ExportersManager` class and update the `_make_exporters()` method
- add tests as needed, including:
  - creating a new module in the `tests.assets_tracking_service_tests.exporters` package
  - updating the `tests.assets_tracking_service_tests.exporters.test_exporters_manager.test_make_each_exporter` method
  - adding a mock in `/tests/assets_tracking_service_tests/exporters/test_exporters_manager.test_export`
Caution
This section is Work in Progress (WIP) and may be incomplete/inaccurate.
- agree a slug to use to identify the new layer (e.g. `foo`)
- create a new Database Migration that:
  - creates a source view, selecting data for the new layer (named `v_{slug}`)
  - creates a GeoJSON view, selecting from the source view into a feature collection (named `v_{slug}_geojson`)
  - inserts rows into `layer` and `record` with relevant details
- create resource files for the record associated with the new layer:
  - `resources/records/{slug}/abstract.md`
  - `resources/records/{slug}/lineage.md`
- run the `data export` command to provision and publish the new layer and its record
- configure symbology, fields and popups for the ArcGIS feature layer as needed
- capture this portrayal information in `resources/arcgis_layers/{slug}/portrayal.json`:
  - use https://ago-assistant.esri.com/ and view the relevant item
  - copy the contents of the Data file into the relevant `portrayal.json` file
- document the new layer in the Data Access documentation
See the Taskipy documentation.
The Python version is limited to 3.11 due to the arcgis dependency.
The Safety package checks dependencies for known vulnerabilities.
Warning
As with all security tools, Safety is an aid for spotting common mistakes, not a guarantee of secure code. In particular this is using the free vulnerability database, which is updated less frequently than paid options.
Checks are run automatically in Continuous Integration.
Tip
To check locally run the safety Development Task.
- create an issue and switch to a branch
- run the
outdatedDevelopment Task to list outdated direct packages - follow https://docs.astral.sh/uv/concepts/projects/sync/#upgrading-locked-package-versions
- note upgrades in the issue
- review any major/breaking upgrades
- run Tests manually
- commit changes
Ruff is used to lint and format Python files. Specific checks and config options are
set in pyproject.toml. Linting checks are run automatically in
Continuous Integration and the Pre-Commit Hook.
Tip
To check linting manually run the lint Development Task, for formatting run the format task.
SQLFluff is used to lint and format SQL files. Specific checks and config options are set
in pyproject.toml. Linting checks are run automatically in
Continuous Integration and the Pre-Commit Hook.
Tip
To check SQL linting manually run the sql Development Task.
Ignored rules:

- `ST06` - where select elements should be ordered by complexity rather than preference/opinion
- `ST10` - where a condition such as `WHERE elem.label ->> 'scheme' = 'ats:last_fetched'` is incorrectly seen as a constant
- `RF04` - identifiers that overlap with non-reserved keywords ('label', 'summary', 'publication' explicitly)
Ruff is configured to run Bandit, a static analysis tool for Python.
Warning
As with all security tools, Bandit is an aid for spotting common mistakes, not a guarantee of secure code. In particular this tool can't check for issues that are only detectable when running code.
PyMarkdown is used to lint Markdown files. Specific checks and config
options are set in pyproject.toml. Linting checks are run automatically in
Continuous Integration and the Pre-Commit Hook.
Tip
To check linting manually run the markdown Development Task.
Wide tables will fail rule MD013 (max line length). Wrap such tables with pragma disable/enable exceptions:

<!-- pyml disable md013 -->
| Header | Header |
|--------|--------|
| Value  | Value  |
<!-- pyml enable md013 -->

Stacked admonitions will fail rule MD028 (blank lines in blockquote) as it's ambiguous whether a new blockquote has
started where another element isn't in between. Wrap such instances with pragma disable/enable exceptions:
<!-- pyml disable md028 -->
> [!NOTE]
> ...

> [!NOTE]
> ...
<!-- pyml enable md028 -->

For consistency, it's strongly recommended to configure your IDE or other editor to use the
EditorConfig settings defined in `.editorconfig`.
A Pre-Commit hook is configured in .pre-commit-config.yaml.
To update Pre-Commit and configured hooks:
% pre-commit autoupdate

Tip
To run pre-commit checks against all files manually run the pre-commit Development Task.
pytest with a number of plugins is used for testing the application. Config options are set
in pyproject.toml. Tests are defined in the tests package.
Tests are run automatically in Continuous Integration.
Tip
To run tests manually run the test Development Task.
Tip
To run a specific test:
% uv run pytest tests/path/to/test_module.py::<class>::<method>

If a test run fails with a NotImplementedError exception run the test-reset Development Task.
This occurs where:
- a test fails and the failed test is then renamed or its parameterised options changed
- the reference to the previously failed test has been cached to enable the `--failed-first` runtime option
- the cached reference no longer exists, triggering an error which isn't handled by the `pytest-random-order` plugin

Running this task clears Pytest's cache and re-runs all tests, skipping the `--failed-first` option.
Fixtures SHOULD be defined in `tests.conftest`, prefixed with `fx_` to indicate they are a fixture when used in tests.
E.g.:

```python
import pytest


@pytest.fixture()
def fx_foo() -> str:
    """Example test fixture."""
    return 'foo'
```

pytest-cov checks test coverage. We aim for 100% coverage but exemptions are fine with good justification:

- `# pragma: no cover` - for general exemptions
- `# pragma: no branch` - where a conditional branch can never be called
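For example, a general exemption might look like this (the function and its justification are illustrative):

```python
def debug_dump(data: dict) -> None:  # pragma: no cover
    """Hypothetical developer-only helper, exempt from coverage with justification."""
    print(data)
```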
Continuous Integration will check coverage automatically.
Tip
To check coverage manually run the test-cov Development Task.
Tip
To run tests for a specific module locally:
% uv run pytest --cov=assets_tracking_service.some.module --cov-report=html tests/assets_tracking_service_tests/some/module

Where tests are added to ensure coverage, use the cov mark, e.g.:
```python
import pytest


@pytest.mark.cov()
def test_foo():
    assert 'foo' == 'foo'
```

pytest-env sets environment variables used by the Config
class to fake values when testing. Values are configured in the [tool.pytest_env] section of pyproject.toml.
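A hypothetical `[tool.pytest_env]` excerpt showing the shape of such values (the variable name and value are illustrative; see `pyproject.toml` for the real set):

```toml
[tool.pytest_env]
# hypothetical fake value used during tests
ASSETS_TRACKING_SERVICE_DB_DSN = "postgresql://fake:fake@localhost/fake"
```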
pytest-recording is used to mock HTTP calls to provider APIs (ensuring known values are used in tests).
Caution
Review recorded responses to check for any sensitive information.
To update a specific test:
% uv run pytest --record-mode=once tests/path/to/test_module.py::<class>::<method>
To incrementally build up a set of related tests (including parameterised tests) use the new_episodes recording mode:
% uv run pytest --record-mode=new_episodes tests/path/to/test_module.py::<class>::<method>
All commits will trigger Continuous Integration using GitLab's CI/CD platform, configured in .gitlab-ci.yml.
If using a local Postgres database installed through Homebrew (assuming @17 is the version installed):

- manage service: `brew services [command] postgresql@17`
- view logs: `/usr/local/var/log/postgresql@17.log`
To check current DB sessions with psql -d postgres:
select *
from pg_catalog.pg_stat_activity
where datname = 'assets_tracking_dev';
\q

Tip
To drop and recreate local databases run the reset-db Development Task.
Then recreate as per Local Development Environment steps.
To sync production data to the Development database:
- run the `pgsync-init` Development Task if needed to create a `.pgsync.yml` config
- run the `pgsync` Development Task