
Commit 2077d1b

Alam, Maksudul committed: Updated README
1 parent 70a89b1

File tree: 5 files changed, +41 -3299 lines changed

viz/README.md

Lines changed: 39 additions & 20 deletions
## Installation

ExaGO visualization uses the following tools to generate the visuals:

- [Node.js@v24.10.0](https://nodejs.org/es/blog/release/v24.10.0)
- Facebook's [React](https://reactjs.org/) framework
- Uber's [Deck.gl](https://deck.gl/docs) visualization framework
- [React-map-gl](https://visgl.github.io/react-map-gl/) framework
- [Chart.js](https://www.chartjs.org/)
- Yarn 1.22.22

Before launching the visualization, one needs to install these packages. This can be done with the following steps:

1. Install Node Version Manager (NVM). On macOS, use `brew install nvm`.
2. Install [Node.js](https://nodejs.org/en/) version 24 using `nvm install 24`.
3. Select Node 24 with `nvm use 24`.
4. Install Yarn with `npm install --global yarn`.
5. Run `yarn install` in this directory (`viz`) to install all the JavaScript dependencies.
6. Go to the `viz/backend` subdirectory and run `pip install -r requirements.txt` to install all the Python dependencies.

## Preparing input data files for visualization

The visualization uses a `JSON` formatted file as input. This `JSON` file has a specific structure (To do: explain the structure of the file), and there are several sample files for different networks in the `data` subdirectory.

This input JSON file can either be created externally or generated as an output of the `OPFLOW` application. When using OPFLOW, the following command will generate the input JSON file, named `opflowout.json`:

```
./opflow -netfile <netfile> -save_output -opflow_output_format JSON -gicfile <gicfilename>
```
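The README above leaves the JSON structure as a to-do. Until it is documented, a schema-agnostic way to get oriented in a generated `opflowout.json` is to load it and list its top-level keys. The sketch below runs on a tiny inline sample with made-up field names, since the real schema is not described here.

```python
import json

# Made-up miniature stand-in for opflowout.json; the real field
# names are NOT documented in this README and are assumptions here.
sample = '{"casename": "ACTIVSg2000", "geojsondata": {"type": "FeatureCollection", "features": []}}'

data = json.loads(sample)

# Listing top-level keys works for any input file, whatever its schema.
top_level_keys = sorted(data.keys())
print(top_level_keys)
```

The same two lines (`json.load` on the real file, then `sorted(data.keys())`) apply unchanged to an actual OPFLOW output file.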
For example, with Texas 2000 bus synthetic data, executing the following `opflow` command generates the input JSON file:

```
opflow -netfile case_ACTIVSg2000.m -save_output -opflow_output_format JSON -gicfile ACTIVSg2000_GIC_data.gic
```

Copy the newly generated `opflowout.json` file to the `viz/data` subdirectory. Next, run the Python script `geninputfile.py` from the `viz` folder to load the JSON file into the visualization. Note that the script takes only the file name `opflowout.json` as an argument, not a full path; it does not open the file itself, but the visualization tool expects the file to be present in the `viz/data` folder. The following command creates (or overwrites) the file `viz/src/module_casedata.js`, an application source file that loads the data file `opflowout.json`:

```
python geninputfile.py opflowout.json
```
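For orientation, the effect of `geninputfile.py` can be approximated in a few lines: it only needs to emit a small JavaScript module that points at the named data file. This is a hypothetical sketch written for this README, not the actual script; the import path and module shape are assumptions.

```python
import pathlib
import tempfile

def write_casedata_module(json_name: str, out_path: pathlib.Path) -> str:
    # Hypothetical stand-in for geninputfile.py: emit a JS module that
    # loads the named data file from viz/data (path is an assumption).
    body = (
        f"import casedata from '../data/{json_name}';\n"
        "export default casedata;\n"
    )
    out_path.write_text(body)
    return body

# Write to a temporary directory; the real script targets viz/src/.
out_file = pathlib.Path(tempfile.mkdtemp()) / "module_casedata.js"
module_text = write_casedata_module("opflowout.json", out_file)
print(out_file.name)
```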
Now that `viz/src/module_casedata.js` has been created, you are ready to launch the visualization.

Note: If you have created the JSON file externally, simply copy it to the `viz/data` subdirectory and run the `geninputfile.py` script using the above command.

## Launch visualization

To launch the visualization, run

```
yarn start
```

This will open a webpage with the visualization of the given network. If the network is large, the visualization may take a while to load; if the browser offers the choice between terminating the page and waiting, click the Wait button.

The figures show the visualization of the synthetic electric grid. The data for developing this visualization was created by merging the synthetic datasets for the [Eastern](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg70k/), [Western](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg10k/), and [Texas](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg2000/) interconnects from the [Electric Grid Test Case Repository](https://electricgrids.engr.tamu.edu/).
ChatGrid is a natural language query tool for ExaGO visualizations. It is powered by large language models.

### Dependencies

ChatGrid is built upon the following services and tools:

- [OpenAI LLMs](https://platform.openai.com/docs/models/overview)
- [Langchain@0.0.233](https://python.langchain.com/docs/get_started/introduction.html) framework
- [PostGreSQL](https://www.postgresql.org/download/) database
Behind the scenes, the LLM translates natural language queries into SQL queries to retrieve the requested data.

1. Convert data formats.

First, we need to convert the ExaGO output `.json` files to `.csv` files. The difference between the two data formats is that JSON stores attributes and values as dictionary pairs, while CSV stores attributes and values as tables. You can write your own script for this conversion or use the provided script.

* Go to the `viz/backend` subdirectory and run `pip install -r requirements.txt` to install all the Python dependencies, if not already done in previous steps. (Note: These steps are tested with Python 3.13.)

To use the provided script, first copy the ExaGO output `.json` file to the `viz/data` subdirectory (if not already done) and run the following command in the `viz/backend` subdirectory, replacing the example filename with your JSON filename. This assumes `opflowout.json` is the data JSON file present in the `viz/data` folder. The script creates three CSV files: `generation.csv`, `bus.csv`, and `transmission_line.csv`.

```
python ../data/jsontocsv.py ../data/opflowout.json
```
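The conversion described above is mechanical: each list of records in the JSON becomes one CSV table, with the dictionary keys as the header row. A minimal sketch with made-up field names (the real `jsontocsv.py` knows the actual ExaGO schema):

```python
import csv
import io
import json

# Made-up miniature input; real ExaGO output has its own schema.
raw = json.loads('{"generation": [{"bus": 1, "pg": 80.0}, {"bus": 2, "pg": 55.5}]}')

def records_to_csv(records):
    # JSON dict pairs become a CSV header row plus one row per record.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]), lineterminator="\n")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

generation_csv = records_to_csv(raw["generation"])
print(generation_csv)
```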
There should now be 5 CSV files in the `viz/backend` folder:

* `bus.csv`
* `generation.csv`
* `us_states.csv`
* `counties.csv`
* `transmission_line.csv`

2. Download PostgreSQL from this [link](https://www.postgresql.org/download/) and install it.

* On macOS with brew, you can install PostgreSQL 14 using `brew install postgresql@14`.
* Start the PostgreSQL service: `brew services start postgresql@14`
* Create a role: connect with `psql -U "$USER" -d postgres`
* Execute the create role query: `CREATE ROLE postgres WITH LOGIN SUPERUSER PASSWORD 'ExaGO.2025';` Here `ExaGO.2025` is a password; change it to your preference.
b. Please be informative and accurate about your table names and attribute names, because this information helps the LLM understand the dataset and perform better when handling user queries.

c. Include US state and county information in your database to support spatial queries related to states or counties.

d. To load the CSV files into the database from the command prompt, run: `PGPASSWORD=ExaGO.2025 ./create_db.sh --db exago_db --schema-sql ./schema.sql --drop --truncate`. Here `exago_db` is the database name; use it in the `config.py` configuration file.

e. This creates a database named `exago_db` with password `ExaGO.2025`. This information will be used to update the `config.py` file.
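The `create_db.sh` helper wraps the same two operations sketched below: create a table whose columns come from the CSV header, then bulk-insert the rows. This sketch uses Python's built-in `sqlite3` as a stand-in for PostgreSQL, so the real script's quoting, typing, and `COPY` mechanics differ.

```python
import csv
import io
import sqlite3

# Tiny stand-in for one of the CSV files produced by jsontocsv.py.
bus_csv = "bus,vm,va\n1,1.05,0.0\n2,0.98,-3.2\n"

rows = list(csv.reader(io.StringIO(bus_csv)))
header, data = rows[0], rows[1:]

con = sqlite3.connect(":memory:")  # PostgreSQL in the real setup
con.execute(f"CREATE TABLE bus ({', '.join(header)})")
placeholders = ", ".join("?" for _ in header)
con.executemany(f"INSERT INTO bus VALUES ({placeholders})", data)

row_count = con.execute("SELECT COUNT(*) FROM bus").fetchone()[0]
print(row_count)
```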
4. Connect to your database.
Open `config.py` in the `viz/backend` subdirectory and replace `YOUR_OPENAI_KEY` with your OpenAI API key.

### Launch backend

ChatGrid uses Flask to host the service that receives user queries and returns the data outputs and text summaries used to update the visualizations on the frontend. Please follow the steps below to run the backend server.

* Run the following command in the `viz/backend` subdirectory:

```
python server.py
```

This starts the backend server, which receives user queries, processes them with the LLM, and returns data outputs to the frontend.
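The backend's request/response contract, as described above, can be pictured as a single function: a natural-language question in, SQL plus data plus a text summary out. The stub below is purely illustrative; in the real `server.py`, Flask handles the HTTP layer and the LLM performs the translation.

```python
def handle_query(question: str) -> dict:
    # Stub for the LLM translation step; the real backend asks an
    # OpenAI model to produce the SQL and runs it against PostgreSQL.
    sql = "SELECT bus, pg FROM generation ORDER BY pg DESC LIMIT 1"
    data = [{"bus": 1, "pg": 80.0}]  # stand-in for a database fetch
    summary = f"Top generator for question {question!r}: bus 1 at 80.0 MW."
    return {"sql": sql, "data": data, "summary": summary}

response = handle_query("Which generator produces the most power?")
print(sorted(response.keys()))
```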
Now open the chat window on the frontend, type your queries, and enjoy ChatGrid!

viz/backend/create_db.sh

Lines changed: 1 addition & 4 deletions
```diff
@@ -9,7 +9,6 @@ PGPASSWORD="${PGPASSWORD:-}" # export PGPASSWORD=... if needed
 DB_NAME=""
 SCHEMA_NAME="public"
 TABLE_NAME=""
-CSV_PATH=""
 DELIM=","
 QUOTE='"'
 NULLSTR=""
@@ -25,7 +24,6 @@ Usage: $(basename "$0") --db NAME --table NAME --csv /path/file.csv [options]
 Required:
   --db NAME        Database name to create/use
   --table NAME     Target table name (without schema; uses --schema)
-  --csv PATH       Path to CSV file (must have header row)
 
 Optional:
   --schema NAME    Schema name (default: public)
@@ -74,8 +72,7 @@ while [[ $# -gt 0 ]]; do
   esac
 done
 
-[[ -z "${DB_NAME}" || -z "${CSV_PATH}" ]] && usage
-# [[ -f "${CSV_PATH}" ]] || { echo "CSV not found: ${CSV_PATH}"; exit 1; }
+[[ -z "${DB_NAME}" ]] && usage
 
 export PGHOST PGPORT PGUSER PGPASSWORD
```