This repository hosts the code to run a ParaView visualization server for 3D heliospheric output from codes such as Enlil and Euhforia, as well as a way to fetch the associated metadata and host the frontend.
Live deployment for interactive use: https://swx-trec.com/h3lioviz/
The H3lioViz server is a containerized application that provides:
- ParaView Web Service: 3D visualization of heliospheric simulation data
- Flask Metadata API: Endpoints for retrieving run information and time-series data
- Apache Web Server: Serves the frontend and acts as a reverse proxy
- ParaView Pre-Processing: Scripts that generate the data used by the ParaView Web Service (not containerized)
Apache serves as the main entry point and handles routing:
- `/` -> reroutes to `/h3lioviz/`
- `/h3lioviz/*` -> all routes except the endpoints below route to the frontend web application
- `/h3lioviz/paraview` -> ParaView Web service (creates visualization sessions)
- `/h3lioviz/proxy` -> WebSocket proxy (maps session IDs to ParaView ports)
- `/h3lioviz/metadata/` -> Flask API for metadata operations
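Once a container is running, you can sanity-check this routing with a few curl requests. This is a sketch; the host and port assume the local `docker run` command shown later in this README:

```bash
curl -I http://localhost:8080/                        # expect a redirect to /h3lioviz/
curl -I http://localhost:8080/h3lioviz/               # frontend web application
curl http://localhost:8080/h3lioviz/metadata/health   # Flask health endpoint (works without AWS config)
```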
The docker/scripts/server.sh script initializes the Docker container, using environment variables to update configuration files.
server.sh (paraview/websockets) environment variables:
- SERVER_NAME: the server name to use for the session URL. This is returned by paraview-web to tell the frontend where to connect to the WebSocket. Note that the current routing expects `{domain}/h3lioviz`.
- PROTOCOL: the protocol to use for the session URL
  - `ws` -> WebSocket
  - `wss` -> WebSocket Secure
- EXTRA_PVPYTHON_ARGS: extra arguments to pass to pvpython (comma-separated, no extra spaces). Example: `-dr,--mesa-swr`
Note: If SERVER_NAME and PROTOCOL are not specified, the container defaults to ws://localhost.
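A sketch of how these variables might be passed at container start; the hostname is a placeholder, and the image tag assumes the local build described later in this README:

```bash
docker run -p 0.0.0.0:8080:80 \
    -e SERVER_NAME=swx-trec.com/h3lioviz \
    -e PROTOCOL=wss \
    -e EXTRA_PVPYTHON_ARGS="-dr,--mesa-swr" \
    -v ${PWD}/test-data:/data \
    -it h3lioviz:latest
```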
ParaView & Flask environment variables:
- S3_BUCKET_NAME: the S3 bucket to use for on-the-fly run downloads and for the Flask server to access for API calls.
- AWS_DEFAULT_REGION: required by Flask for DynamoDB access.
- TABLE_NAME: the name of the DynamoDB table that stores run metadata.

Note: If AWS_DEFAULT_REGION and TABLE_NAME are not specified, any calls to Flask (except /h3lioviz/metadata/health) will fail. If S3_BUCKET_NAME is not specified, ParaView will not be able to download new runs on-the-fly, but it will still be able to use runs already on disk.
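One convenient pattern is to collect the AWS-related variables in an env file and pass it with `--env-file`; the bucket, region, and table names below are placeholders:

```bash
cat > h3lioviz.env <<'EOF'
S3_BUCKET_NAME=h3lioviz.example.com
AWS_DEFAULT_REGION=us-east-1
TABLE_NAME=h3lioviz-run-metadata
EOF
docker run --env-file h3lioviz.env -p 0.0.0.0:8080:80 -it h3lioviz:latest
```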
Log locations:
- Flask: `/data/launcher/log/flask.log`
- ParaView: `/data/launcher/log/<hashed_session_id>.log` & `/data/launcher/log/launcherLog.log`
- Apache: `/var/log/apache2/001-pvw_access.log` & `/var/log/apache2/001-pvw_error.log`
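To follow these logs from the host, exec into the running container; the container name here is a placeholder:

```bash
docker exec -it <container_name> tail -f \
    /data/launcher/log/flask.log \
    /data/launcher/log/launcherLog.log
```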
To work with this repository, you will need:
- Docker
- AWS CLI (for AWS deployments)
- Access to the SWx-TREC AWS ECR repository (for production/development deployments), or a personal one (for deployments to an account other than prod or dev).
> [!NOTE]
> The Dockerfile now includes steps to build the frontend. You can build the image with an included environment file by selecting one of the currently supported options (dev, prod, or swpc) via a Docker build argument (the default is dev if unspecified). Other environments (for example, a future noaa option) are not yet available.
Build the image locally:

```bash
docker build --build-arg FRONTEND_ENVIRONMENT=prod .
```

For advanced usage, or to build a different version of the frontend locally, you can follow the steps below to build and modify the frontend code directly.
The frontend is in our WEBAPPS Bitbucket for LASP internal users, but there is also a public mirror available on GitHub.
Enter the repo and open src/environments/environment.dev.ts and src/environments/environment.prod.ts.
Edit the following configuration items:
- Set `environment.aws.api` to `https://h3lioviz-api.{your-domain}/` (an API Gateway created by swxtrec-cdk).
- Set `environmentConfig.sessionManagerURL` to `https://paraview-web.{your-domain}/h3lioviz/paraview`.
Install the frontend dependencies:

```bash
npm install
```

Depending on whether you're building the frontend for dev or prod, run:

```bash
npm run build:dev
```

or

```bash
npm run build:prod
```

Clone the server repository:

```bash
git clone https://github.com/SWxTREC/h3lioviz-server.git
cd h3lioviz-server
```

Copy everything in the frontend's dist directory into pvw/www. Note: after the copy you should have the directory pvw/www/h3lioviz/.

```bash
cp -r {frontend-path}/dist/* pvw/www
```

Build the image locally:

```bash
docker build -t h3lioviz .
```

The command below runs the h3lioviz Docker image you have built locally. If you want to pull the latest dev/prod image instead, replace h3lioviz:latest with public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa: with the dev or prod tag.
NOTE: This does not currently work end-to-end due to the container's dependency on AWS resources. The ParaView code will serve the WebSocket just fine, and the frontend will be accessible, but the Flask routes served by the container, as well as on-the-fly run downloading, will not work.
```bash
docker run -p 0.0.0.0:8080:80 \
    -e SERVER_NAME=127.0.0.1:8080/h3lioviz \
    -e PROTOCOL=ws \
    -v ${PWD}/pvw:/pvw \
    -v ${PWD}/test-data:/data \
    -it h3lioviz:latest
```

The server will be available at http://127.0.0.1:8080.
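For reference, a sketch of the same command against the published dev image; the pvw mount is dropped here on the assumption that the published image ships with the code baked in:

```bash
docker run -p 0.0.0.0:8080:80 \
    -e SERVER_NAME=127.0.0.1:8080/h3lioviz \
    -e PROTOCOL=ws \
    -v ${PWD}/test-data:/data \
    -it public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa:dev
```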
The pvw directory is mounted directly into the Docker container, so any changes to the code within it will be reflected the next time the connection is refreshed.
The official ECR repository is: public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa
- Authenticate with AWS ECR:

```bash
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/swx-trec/
```

- Tag your image:

```bash
docker tag h3lioviz:latest public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa:dev
```

- Push to ECR:

```bash
docker push public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa:dev
```

Note that the EC2 instance will not update the Docker image until it has fully rebooted. If you want to manually force the update, remotely connect to the instance and run:
```bash
sudo su
export PATH="$PATH:/usr/local/bin"
/docker/docker-launch.sh
```

Follow the same steps as above, but use the :prod tag instead of :dev:
```bash
docker tag h3lioviz:latest public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa:prod
docker push public.ecr.aws/swx-trec/pvw-h3lioviz-osmesa:prod
```

For testing on legacy, create an ECR repository (or use a pre-existing one) and run the above commands referencing that ECR.
- Authenticate with the legacy account:

```bash
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin <ECR Name>
```

- Tag and push:

```bash
docker tag h3lioviz:latest <ECR Name>
docker push <ECR Name>
docker logout public.ecr.aws
```

Note: Remember to log out of the public ECR after pushing, as lingering credentials can interfere with accessing public AWS ECR repositories.
You can download test data from one of the dev account data buckets. Thinned runs are around 1.2GB.
- Locate a bucket called `h3lioviz.<domain_name>.com`
- Identify a run to download: `/data/h3lioviz/pv-ready-data-<run_id>`
- Download the run:

```bash
s5cmd cp "s3://<bucket_name>/data/h3lioviz/pv-ready-data-<run_id>/*" ./test-data/pv-ready-data-<run_id>/
```
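If s5cmd is not available, the standard AWS CLI can perform the same download (typically more slowly):

```bash
aws s3 cp "s3://<bucket_name>/data/h3lioviz/pv-ready-data-<run_id>/" \
    "./test-data/pv-ready-data-<run_id>/" --recursive
```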
Reference the README in scripts/ for instructions on how to generate these runs and make them visible to h3lioviz-server.
The primary data files used by ParaView for visualization are:
- `pv-tim.XXXX.nc` - time-step files (where XXXX is the time-step number)
These NetCDF files contain the 3D heliospheric simulation data. The test-data directory contains additional files (satellite evolution files, metadata, etc.) that are graphed by the frontend but are not directly loaded by ParaView.
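After downloading a run, you can confirm the time-step files are present; the file names below are illustrative (padding and count vary by run):

```bash
ls test-data/pv-ready-data-<run_id>/
# pv-tim.0001.nc  pv-tim.0002.nc  ...  plus satellite evolution and metadata files
```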
Python packages are installed via a virtual environment during the Docker build process.
- Add the required packages to `pvw/requirements.txt`
- Rebuild the Docker image
The virtual environment is created by the Dockerfile and is used for Flask in server.sh. Note that there is code in place to allow ParaView to access the venv, but it is currently not in use.
Note: While ParaView can be given access to additional Python packages, our current version of ParaView was built against a now-outdated SSL version, which makes it close to impossible to use libraries like boto3. We currently rely on AWS CLI commands for downloading runs. This restriction only applies to Python scripts invoked via the pvpython command.
```
h3lioviz-server/
├── docker/
│   ├── config/
│   │   └── apache/          # Apache configuration
│   └── scripts/             # Container initialization script
├── docs/                    # More documentation
├── pvw/
│   ├── flask/               # Flask metadata API server
│   ├── launcher/            # ParaView Web launcher configuration
│   ├── server/              # ParaView Web visualization server
│   ├── www/                 # Frontend web application
│   └── requirements.txt     # Python dependencies
├── scripts/                 # Data processing scripts
├── test-data/               # Test simulation data
├── Dockerfile               # Container build definition
└── README.md                # Root documentation page
```
The container's entrypoint is /opt/paraviewweb/scripts/server.sh, which:
- Updates the paraview-web launcher config (pvw/launcher/config.json) based on docker environment variables
- Starts the Flask web server
- Starts/restarts the Apache service
- Starts the ParaView WebSocket launcher
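In outline, that startup sequence might look like the sketch below. This is illustrative only: the config placeholder token, the Flask entry point, the venv path, and the launcher invocation are all assumptions; see docker/scripts/server.sh for the real logic.

```bash
#!/bin/bash
# Illustrative sketch of server.sh, not the actual script.
PROTOCOL=${PROTOCOL:-ws}
SERVER_NAME=${SERVER_NAME:-localhost}

# 1. Patch the launcher config with the session URL (token name is hypothetical)
sed -i "s|SESSION_URL_ROOT|${PROTOCOL}://${SERVER_NAME}|" /pvw/launcher/config.json

# 2. Start the Flask metadata server in the background (entry point and venv path are hypothetical)
/opt/venv/bin/python /pvw/flask/app.py >> /data/launcher/log/flask.log 2>&1 &

# 3. (Re)start Apache
service apache2 restart

# 4. Run the ParaView Web launcher in the foreground to keep the container alive
#    (invocation is an assumption; wslink provides a launcher module)
python -m wslink.launcher /pvw/launcher/config.json
```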