| Variable | Required | Example Value |
| --- | --- | --- |
| `SPARQL_ENDPOINT` | yes | https://lindas.admin.ch/query |
| `SPARQL_EDITOR` | yes | https://lindas.admin.ch/sparql |
| `SPARQL_ENDPOINT_SUPPORTS_CACHING_PER_CUBE` | no | false |
| `GITLAB_WIKI_TOKEN` | yes | xyz |
| `GITLAB_WIKI_URL` | yes | https://gitlab.ldbar.ch/api/v4/projects/9999/wikis |
| `I18N_DOMAINS` | | `{"de": "www.elcom.local", "fr": "fr.elcom.local", "it": "it.elcom.local"}` |
| `BASIC_AUTH_CREDENTIALS` | | user:password |
| `MATOMO_ID` | | 123 |
| `CURRENT_PERIOD` | | 2022 |
| `FIRST_PERIOD` | | 2009 |
| `PUBLIC_URL` | no | http://localhost:3000 |
| `EIAM_CERTIFICATE_PASSWORD` | yes | See "Elcom PWD certificates" in 1Password |
| `EIAM_CERTIFICATE_CONTENT` | yes | See "Elcom PWD certificates" in 1Password. Result of `cat certificate.p12 \| base64` |
| `GEVER_BINDING_IPSTS` | | https://idp-cert.gate-r.eiam.admin.ch/auth/sts/v14/certificatetransport (ask Roger Flurry) |
| `GEVER_BINDING_RPSTS` | | https://feds-r.eiam.admin.ch/adfs/services/trust/13/issuedtokenmixedsymmetricbasic256 |
| `GEVER_BINDING_SERVICE` | | https://api-bv.egov-abn.uvek.admin.ch/BusinessManagement/GeverService/GeverServiceAdvanced.svc |
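For local development, these variables go into `.env.local` (see the EIAM section below). A minimal sketch with illustrative values only, not working credentials:

```sh
# .env.local (illustrative values only)
SPARQL_ENDPOINT=https://lindas.admin.ch/query
SPARQL_EDITOR=https://lindas.admin.ch/sparql
PUBLIC_URL=http://localhost:3000
CURRENT_PERIOD=2022
FIRST_PERIOD=2009
I18N_DOMAINS={"de": "www.elcom.local", "fr": "fr.elcom.local", "it": "it.elcom.local"}
```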
To start the development environment, you need Node.js (v12 LTS recommended) and Yarn as the package manager.
Using Nix to install system-level packages is recommended.

Ensure that Node.js and Yarn are available in your environment:

- either use the installers, or
- if using Nix, entering a new Nix shell will install Node.js and Yarn automatically:

```sh
nix develop
```
Run the setup script:

```sh
yarn setup
```

This will install npm dependencies and run the setup scripts.

Once the application is set up, you can start the development server with:

```sh
yarn dev
```

👉 In Visual Studio Code, you can also run the default build task (CMD-SHIFT-B) to start the dev server, database server, and TypeScript checker (you'll need Nix for that to work).
New versions of `package.json` are built on CI into a separate image that will be deployed to the test environment.

```sh
yarn version
```

This will prompt for a new version. The `postversion` script will automatically try to push the created version tag to the origin repo.
Docker TBD
New localizable strings can be extracted from the source code with:

```sh
yarn locales:extract
```

This will update the translation files in `src/locales/*/messages.po`.

After updating the translation PO files, run:

```sh
yarn locales:compile
```

to make the translations available to the application. Note: this step runs automatically on `yarn build`.
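For reference, extraction picks up strings marked for translation in the source. A minimal sketch, assuming the LinguiJS macros that the PO-based workflow suggests (the component and message below are illustrative, not actual project code):

```tsx
// Illustrative only: a string marked for extraction, assuming LinguiJS macros
import { Trans } from "@lingui/macro";

export const PriceHint = () => (
  <p>
    <Trans>Prices include grid usage, energy, and levies.</Trans>
  </p>
);
```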
Run:

```sh
make geodata
```

Since Switzerland's municipalities can change each year, the yearly shapefiles (from 2010 on) prepared by the BFS are downloaded and transformed into the TopoJSON format, which can be loaded efficiently client-side.
The detailed transformation steps are described in this project's `Makefile`.
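For illustration, TopoJSON produced this way can be converted back to GeoJSON features in the browser with `topojson-client`; the file name and object key below are assumptions, not the project's actual ones:

```ts
// Sketch: loading a generated TopoJSON file client-side (names are assumed)
import { feature } from "topojson-client";

const topology = await fetch("/topojson/municipalities-2022.json").then((res) =>
  res.json()
);
// Convert one named object of the topology into a GeoJSON FeatureCollection
const municipalities = feature(topology, topology.objects.municipalities);
```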
On the government infrastructure, an HTTP proxy is used for external requests, for example to fetch the GitLab content. The proxy is configured via the `./configure-proxy.js` script that is required in the `package.json` start command. It uses the `HTTP_PROXY` environment variable.

- For some of the server requests (SAML requests), we must not use this proxy, and the agent is configured there manually.
- For external requests that should use the proxy, we can use `https.globalAgent`:
```js
// Route the request through the proxy-aware global agent.
// Assumes a fetch implementation that accepts an `agent` option (e.g. node-fetch).
const https = require("https");
const fetch = require("node-fetch");

const response = await fetch(url, { agent: https.globalAgent });
```
EIAM certificates are used to authenticate against the GEVER API serving the electricity provider documents.
They are stored in 1Password as "Elcom PWD certificates".
The EIAM certificate content and password are passed as environment variables. The certificate content is a p12 certificate encoded as base64.
In dev, you have to edit `.env.local` to add the `EIAM_CERTIFICATE_CONTENT` and `EIAM_CERTIFICATE_PASSWORD` variables.

```sh
# Get the base64 certificate content that can be put in EIAM_CERTIFICATE_CONTENT
cat ../../../vault/svc.spw-d.elcom.admin.ch.p12 | base64
```
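To see how the two variables fit together, here is a minimal sketch of turning them back into a client certificate for TLS authentication in Node.js; the app's actual wiring lives in its GEVER/SAML client and may differ:

```ts
// Sketch: building an HTTPS agent from the EIAM certificate env variables
import https from "node:https";

const agent = new https.Agent({
  // p12 bundle, decoded from the base64 stored in the env variable
  pfx: Buffer.from(process.env.EIAM_CERTIFICATE_CONTENT ?? "", "base64"),
  passphrase: process.env.EIAM_CERTIFICATE_PASSWORD,
});
```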
For load testing, we use the k6 platform and its ability to import HAR session recordings. We automatically generate a HAR via a Playwright test designed to mimic a typical user journey, and import it into k6.

After an update to the application, it is necessary to update the test on k6 so that the chunk URLs are correct. To make the update painless, Playwright is used to automatically navigate across the site and save the session requests as a HAR file.

- Record the HAR

  The HAR is generated automatically from a Playwright test (see the sketch below for how the recording works).

  ```sh
  yarn run e2e:k6:har
  ```

  You can also generate a HAR from a different environment than `ref` by using the `ELCOM_ENV` env variable.

  ```sh
  ELCOM_ENV=abn yarn run e2e:k6:har
  ```

  The command will open a browser and navigate through various pages. After the test, a HAR file will be generated in the root directory.

- Import the HAR file into k6

  ```sh
  yarn e2e:k6:update
  ```

  ℹ️ Check the command in `package.json` if you want to change the HAR uploaded or the test being updated.

  Make sure the options of the Scenario correspond to what you want, as k6 resets them when you import the HAR (you might want to increase the number of VUs to 50, for example).

The preferred way to edit the test is to use the Recorder inside VS Code, which makes it easy to quickly generate a test.

- Add testIds in case the generated selectors are not understandable.
- Add sleeps to make sure the test is not too quick and is more "human-like".
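For reference, HAR recording in Playwright is enabled on the browser context. A minimal sketch (the project's actual test differs in its navigation steps and file name):

```ts
// Sketch: recording a HAR with Playwright (paths and URLs are illustrative)
import { chromium } from "playwright";

const browser = await chromium.launch();
const context = await browser.newContext({
  recordHar: { path: "./session.har" },
});
const page = await context.newPage();
await page.goto("http://localhost:3000");
// ... navigate through a typical user journey ...
await context.close(); // the HAR file is written on close
await browser.close();
```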
Both the frontend and the screenshot service can be built as Docker images and spun up through docker-compose.

```sh
yarn docker:build # Build the image and tag it as interactivethings/electricity-prices-switzerland
docker compose up
```

Trivy is used for vulnerability scanning and must pass for the image to be accepted on the Federal Administration infrastructure. A check is done before publishing the Docker image on GHCR. It can also be run against a local image.

```sh
yarn docker:trivy # Runs trivy against interactivethings/electricity-prices-switzerland:latest
```
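If you have Trivy installed locally, the equivalent direct invocation is presumably along these lines (a sketch; check the `docker:trivy` script in `package.json` for the exact flags):

```sh
# Scan the locally built image for vulnerabilities (flags are illustrative)
trivy image interactivethings/electricity-prices-switzerland:latest
```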
A notebook containing Elcom-specific SPARQL queries is available at `./book.sparqlbook`. You need the SPARQL Notebook extension to open it.
There are two different sources of data, abstracted behind the Sunshine Data Service interface (sketched below).
Currently, the Sunshine pages rely on mocked data since the real data is not yet ready for production use. The mock data system is designed to protect operator anonymity while still providing realistic data for development and testing.

- SPARQL: data from Lindas; this is expected to become the production data at some point. At the moment, the data is not yet published.
- SQL: mock data, but currently more complete than the Lindas data.
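To make the abstraction concrete, the shape of the service interface is roughly as follows; the names below are illustrative, not the actual declarations:

```ts
// Hypothetical sketch of the Sunshine Data Service abstraction
interface SunshineDataService {
  // Both implementations answer the same queries the Sunshine pages need
  getOperatorData(operatorId: number, period: string): Promise<unknown>;
}

// One implementation backed by SPARQL (Lindas), one by SQL (DuckDB mocks)
declare const sparqlService: SunshineDataService;
declare const sqlService: SunshineDataService;
```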
The default data service is currently "sql", since we need it to test all the flows of the application with data as close as possible to the real data.
We also want to be able to test the data from Lindas. To do that, it is possible to switch the data service by loading the `/api/sunshineDataService?serviceKey=sparql` page. You should see a JSON success message.
Then, when you navigate the Sunshine pages, a debug message indicates that you are currently viewing data through a data service which is not the default.
To return to the SQL data service, visit `/api/sunshineDataService?serviceKey=sql`.
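For example, switching services from the command line against a local dev server:

```sh
# Switch to the SPARQL-backed service, then back to SQL
curl "http://localhost:3000/api/sunshineDataService?serviceKey=sparql"
curl "http://localhost:3000/api/sunshineDataService?serviceKey=sql"
```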
The data from the SQL Sunshine Data Service is based on real CSV files provided by Elcom. For privacy and security reasons, these CSV files are encrypted in the repository and can only be accessed by decrypting them with the correct password (the `PREVIEW_PASSWORD` environment variable).
Key aspects of the mocked data system:

- Data sources: The original data is stored as encrypted CSV files in the repository:
  - `energy`: Energy data prepared in the elcom-sunshine-data-analysis project
  - `peer-groups`: Peer groups for each operator, derived from the energy data
  - `Sunshine 2024/2025`: Yearly sunshine data files
- Server-side processing: When the application runs, the CSV files are:
  - decrypted on first request using the `PREVIEW_PASSWORD`,
  - loaded into a DuckDB instance (an in-memory database),
  - processed through SQL queries to extract and transform relevant data.

  You can see which SQL views are created with `npm run mocks:debug-views`. You can also see sample data; for example, `npm run mocks:debug-views -- --view stability_metrics --sample` will show you sample data for the `stability_metrics` view.
- Anonymized operator data: To preserve anonymity while the data is not yet public:
  - Operator names are replaced with fictional names
  - Operator IDs are also anonymized
  - The actual data values remain intact to preserve statistical accuracy
- Mock file generation: Mock files can be regenerated using the CLI command:

  ```sh
  npm run mocks -- -o <operatorId>
  ```

  This creates JSON files in the `mocks/` directory that can be used in Storybook or for testing.
You can work with the encrypted data directly using the `yarn sunshine-csv` script:

```sh
# Encrypt/decrypt observation data
yarn sunshine-csv encrypt -i "Sunshine 2025 28.05.2025"
yarn sunshine-csv decrypt -i "Sunshine 2025 28.05.2025"

# Decrypt peer groups data
yarn sunshine-csv decrypt peer-groups
```

The peer groups CSV is generated from `energy.csv` using DuckDB queries and can be regenerated via:

```sh
yarn data:peer-groups
```
The Sunshine pages fetch data server-side in `getServerSideProps`, where:

- the encrypted data is decrypted and loaded into DuckDB,
- SQL queries retrieve and format the data for the front-end components,
- the data is passed as props to the React components (see the sketch below).
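A hedged sketch of that flow, with illustrative names (`getSunshineDataService`, the query parameter, and the prop shape are assumptions, not the actual code):

```ts
// Sketch: server-side data flow for a Sunshine page (names are illustrative)
import type { GetServerSideProps } from "next";

// Hypothetical factory returning the active data service (SQL or SPARQL)
declare function getSunshineDataService(): {
  getOperatorData(operatorId: number, period: string): Promise<unknown>;
};

export const getServerSideProps: GetServerSideProps = async ({ query }) => {
  const service = getSunshineDataService();
  // Behind this call: decryption, DuckDB loading, and SQL queries
  const data = await service.getOperatorData(Number(query.operatorId), "2025");
  return { props: { data } };
};
```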
For component testing in Storybook, the mock files from `mocks/` can be imported to simulate the data flow without needing the decryption key.
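For instance, a story might import a generated mock directly; the component and file names below are hypothetical (see the `mocks/` directory for real files):

```tsx
// Sketch: a Storybook story fed with a generated mock file
import type { Meta, StoryObj } from "@storybook/react";
import { SunshineCard } from "../src/components/SunshineCard"; // hypothetical component
import operatorData from "../mocks/operator-123.json"; // hypothetical mock file

const meta: Meta<typeof SunshineCard> = { component: SunshineCard };
export default meta;

export const WithMockData: StoryObj<typeof SunshineCard> = {
  args: { data: operatorData },
};
```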
It is possible to regenerate the home map screenshots automatically using Playwright.

```sh
yarn design:generate-home-maps
```