Install uv and npm.
Install the required packages:
$ npm install
$ uv sync
$ uv pip install -e .
Build the JS/CSS:
$ npm run build
In most cases, you don't need to actually log in locally.
In that case, you can set the PUDL_VIEWER_LOGIN_DISABLED environment variable to true.
This will let you access the whole site,
including things that are usually gated behind login like the table previews.
But if you're testing something related to the login flow, you will need to set that up:
- Set up your auth0 environment variables (see below)
- Set up the database:
docker compose up -d
docker compose exec eel_hole uv run flask db upgrade
We have a docker compose file, but make sure to build the JS/CSS first:
$ npm run build
...
$ docker compose build && docker compose up

If you want to skip the auth0 setup and just disable authentication altogether, set the PUDL_VIEWER_LOGIN_DISABLED env var:

$ PUDL_VIEWER_LOGIN_DISABLED=true docker compose up

You won't be able to log in, but you won't need to in order to see the preview functionality.
You will have to set some auth0 environment variables -
see the [envrc-template](./envrc-template) for which ones.
If you are using a tool like [direnv](https://direnv.net/),
you probably want to just copy that template to .envrc
and update the values with ones you get from
the auth0 dashboard.
Finding the values of the variables depends on whether you're using the shared Catalyst auth0 tenant account or making your own tenant account.
If you're using the shared Catalyst tenant:
Log in to https://manage.auth0.com/dashboard using the inframundo auth0 credentials in our password manager.
Go to Applications (the triple-stack icon on the left) -> [dev-dx] PUDL Viewer and click "Settings" to find the env variables you need from envrc-template.
If you're making your own tenant:
Go to https://manage.auth0.com/dashboard and register as a tenant.
Register your local development environment as an application with the following settings:
- Name: whatever you like, but f"eelhole@{your_dev_machine}" is easy to remember
- Application Type: Regular Web App
- Configure options for user authentication: Social (& select whatever you prefer for dev)
Once at the application dashboard, go to Settings to find the env variables you need from envrc-template.
While you're there, set the Application URIs to localhost addresses as follows:
- Allowed Callback URLs: http://127.0.0.1:8080/callback
- Allowed Logout URLs: http://127.0.0.1:8080
If you are using localhost instead of 127.0.0.1 to access your app, use localhost in those callback and logout URLs as well.
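For orientation, here's a hedged sketch of how those URLs come into play, assuming an Authlib-style Flask OAuth client; the client name, config values, and wiring below are illustrative, not the project's actual login code. The key point is that the `redirect_uri` the app sends must exactly match one of the Allowed Callback URLs.

```python
# Hypothetical sketch, assuming Authlib's Flask client; not the project's actual login code.
from authlib.integrations.flask_client import OAuth
from flask import Flask, redirect, session, url_for

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # placeholder
oauth = OAuth(app)
oauth.register(
    "auth0",
    client_id="...",      # from the auth0 application settings
    client_secret="...",  # from the auth0 application settings
    client_kwargs={"scope": "openid profile email"},
    server_metadata_url="https://YOUR_AUTH0_DOMAIN/.well-known/openid-configuration",
)

@app.route("/login")
def login():
    # This redirect_uri must exactly match an Allowed Callback URL,
    # i.e. http://127.0.0.1:8080/callback during local development.
    return oauth.auth0.authorize_redirect(redirect_uri=url_for("callback", _external=True))

@app.route("/callback")
def callback():
    token = oauth.auth0.authorize_access_token()
    session["user"] = token.get("userinfo")
    return redirect("/")
```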
To run the unit tests:
$ uv run pytest tests/test_*
To run the integration tests:
We haven't hooked up the integration tests to the login system yet, so you have to turn off logins.
$ # If you haven't set up playwright before, you'll need to install *some* browser for it to drive:
$ uv run playwright install chromium
$ PUDL_VIEWER_LOGIN_DISABLED=true docker compose up
$ uv run pytest tests/integration
This uses playwright to run through some user flows.
Make sure you have a stable Internet connection, otherwise you'll hit a bunch of timeouts.
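For a sense of what these flows look like, here's a hypothetical test; it assumes the pytest-playwright plugin (which provides the `page` fixture), and the URL, port, and selector below are guesses rather than what tests/integration actually checks.

```python
# Hypothetical flow test, assuming pytest-playwright; the URL and selector are guesses.
from playwright.sync_api import Page, expect


def test_homepage_renders(page: Page):
    # Assumes the docker compose stack above is serving the app on port 8080.
    page.goto("http://localhost:8080/")
    expect(page.get_by_role("heading").first).to_be_visible()
```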
To make testing out features more convenient, you can toggle feature flags via a query parameter in the URL or via the Flask config.
To enable a feature flag temporarily during development, append it as a query string in the URL:
http://localhost:8080/somepage?my_feature=true
You can also define persistent feature flags via the Flask config:
app.config["FEATURE_FLAGS"] = {
    "my_feature": True,
}
This allows you to add conditional logic in your code:
def some_function():
    if is_flag_enabled('my_feature'):
        ...  # behavior of the feature we want to test
    else:
        ...  # regular behavior
To conditionally guard routes with feature flags, use the @require_feature_flag("my_feature") decorator. If the flag is not enabled, the route will return a 404.
For example:
@app.route("/new-feature")
@require_feature_flag("my_feature")
def new_feature():
    return "This feature is gated!"
Note that a feature flag added in the URL is only accessible after the app has been loaded.
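As a mental model, the two flag sources above (the query string and the FEATURE_FLAGS config) can be combined roughly as follows. This is a sketch based only on the behavior described in this section; the project's real is_flag_enabled and require_feature_flag live elsewhere and may differ in detail.

```python
# Sketch only: mirrors the behavior described above, not the actual helper code.
from functools import wraps

from flask import abort, current_app, request


def is_flag_enabled(name: str) -> bool:
    # A ?my_feature=true query parameter enables the flag for this request...
    if request.args.get(name) == "true":
        return True
    # ...otherwise fall back to the static FEATURE_FLAGS config.
    return current_app.config.get("FEATURE_FLAGS", {}).get(name, False)


def require_feature_flag(name: str):
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            if not is_flag_enabled(name):
                abort(404)  # gated routes 404 when the flag is off
            return view(*args, **kwargs)
        return wrapper
    return decorator
```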
See the Terraform file for infrastructure setup details.
To deploy a new version of the app:
- Run `make gcp-latest` to push the image up to GCP.
- Re-deploy the service on Cloud Run.

To run a database migration:
- Run `make gcp-latest` to push the image up to GCP.
- Run the Cloud Run job that runs a db migration.
We have a standard client-server-database situation going on.
For search:
- The client sends a search query to the server.
- The server queries against an in-memory search index. See the `/search` endpoint and the `search.py` file.
- The server sends a list of matches back to the client.
Via the magic of htmx, if the search wasn't triggered by a whole page load, we only send back an HTML fragment.
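As a rough illustration of that round trip (not the actual /search implementation: the toy index, template names, and placeholder table names below are made up):

```python
# Sketch of the search round trip; the real /search endpoint and search.py differ.
from flask import Flask, render_template, request

app = Flask(__name__)
FAKE_INDEX = ["some_table", "another_table"]  # stand-in for the real in-memory index


@app.route("/search")
def search():
    q = request.args.get("q", "").lower()
    matches = [name for name in FAKE_INDEX if q in name]
    # htmx-triggered searches get back just an HTML fragment;
    # a full page load gets the whole page.
    template = "results_fragment.html" if request.headers.get("HX-Request") else "search_page.html"
    return render_template(template, matches=matches)
```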
For preview:
- The client sends the filters the user has applied to the server, and gets a DuckDB query back. See the `/api/duckdb` endpoint and the `duckdb_query.py` file.
- The client queries DuckDB (using duckdb-wasm), which can read data from remote Parquet files.
- The data comes back as Apache Arrow tables, which we convert into JS arrays to feed into the AG Grid viewer.
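To make the server half of that concrete, here's a hedged sketch of building a DuckDB query from filters; the function name, URL, and naive string interpolation are placeholders, not what duckdb_query.py actually does.

```python
# Placeholder sketch; the real duckdb_query.py has its own (safer) query construction.
def build_duckdb_query(parquet_url: str, filters: dict[str, str]) -> str:
    # Naive string interpolation for illustration only; real code should
    # escape or parameterize values.
    where = " AND ".join(f"{col} = '{val}'" for col, val in filters.items()) or "TRUE"
    return f"SELECT * FROM read_parquet('{parquet_url}') WHERE {where}"


# e.g. build_duckdb_query("https://example.com/some_table.parquet", {"state": "CO"})
# -> "SELECT * FROM read_parquet('https://example.com/some_table.parquet') WHERE state = 'CO'"
```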
The database is only used for storing users right now.