Commit 17ed660

maksudAlam and Maksudulpelesh authored

Line Weight Remains Constant, Line Color is Divergent (#181)

* Line Weight Remains Constant, Line Color is Divergent
* Line Color Update, Legend Update, JSONtoCSV function modified to incorporate County and State information
* Added instructions to DATABASE setup. Fixed line weights and colors.
* Update viz/README.md
* Updated legacy pre-generated json files
* Updated React + Chat plugins + FlowMap

Co-authored-by: Alam, Maksudul <alamm@ornl.gov>
Co-authored-by: pelesh <peless@ornl.gov>
1 parent 4c60b04 commit 17ed660

28 files changed: +822724 additions, -968398 deletions

viz/.gitignore

Lines changed: 57 additions & 0 deletions

```
# ----------------------------
# React + Vite .gitignore
# ----------------------------

# Node dependencies
node_modules/

# Vite build output
dist/

# Local environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Logs
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

# Editor settings
.vscode/
.idea/
*.suo
*.ntvs*
*.njsproj
*.sln

# OS-specific files
.DS_Store
Thumbs.db

# Optional caches
.cache/
vite-cache/
.parcel-cache/
coverage/

# Temporary files
*.tmp
*.log
*.bak

# TypeScript (optional if using TS)
*.tsbuildinfo

# Optional lock files — uncomment if you want to ignore them
# package-lock.json
yarn.lock
# pnpm-lock.yaml

# Local preview build
*.local
```

viz/README.md

Lines changed: 49 additions & 20 deletions
````diff
@@ -11,24 +11,27 @@ ExaGO has an experimental visualization platform for visualizing the results of
 
 ## Installation
 ExaGO visualization uses the following tools to generate the visuals.
-- [Node.js@v16.13.0](https://nodejs.org/es/blog/release/v16.13.0)
+- [Node.js@v24.10.0](https://nodejs.org/es/blog/release/v24.10.0)
 - Facebook's [React](https://reactjs.org/) framework
 - Uber's [Deck.gl](https://deck.gl/docs) visualization
 - [React-map-gl](https://visgl.github.io/react-map-gl/) framework
 - [Chart.js](https://www.chartjs.org/)
+- Yarn 1.22.22
 
 Before launching the visualization, one needs to install these packages. This can be done with the following steps:
-1. Install Node Version Manager (NVM). On Mac, use `brew install nvm`.
-2. Install [Node.js](https://nodejs.org/en/) version 16 using `nvm install 16`.
-3. Run `npm install --legacy-peer-deps` in this directory (`viz`) to install all the dependencies.
 
-> [!WARNING]
-> Per https://github.com/pnnl/ExaGO/issues/129 `--legacy-peer-deps` is required as an argument to `npm install`. This will ideally be removed once the visualization is no longer experimental.
+1. Install Node Version Manager (NVM). On Mac, use `brew install nvm`.
+2. Install [Node.js](https://nodejs.org/en/) version 24 using `nvm install 24`.
+3. Select Node 24 with `nvm use 24`.
+4. Install Yarn with `npm install --global yarn`.
+5. Run `yarn install` in this directory (`viz`) to install all the dependencies.
+6. Go to the `viz/backend` subdirectory and run `pip install -r requirements.txt` to install all the Python dependencies.
 
 
 ## Preparing input data files for visualization
 The visualization uses a `JSON` formatted file as an input. This `JSON` file has a specific structure (To do: explain structure for the file) and there are several sample files for different networks in the `data` subdirectory.
-This input JSON file can be either created externally OR generated as an output of the `OPFLOW` application. When using OPFLOW, the following command will generate the input JSON file.
+This input JSON file can be either created externally OR generated as an output of the `OPFLOW` application. When using OPFLOW, the following command will generate the input JSON file. The generated file will be named `opflowout.json`.
+
 ```
 ./opflow -netfile <netfile> -save_output -opflow_output_format JSON -gicfile <gicfilename>
 ```
````
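If you want a quick sanity check on the generated file before wiring it into the visualization, Python's `json` module is enough. This is a minimal sketch; the internal key layout of `opflowout.json` is not documented in this README, so the helper only reports whatever top-level keys happen to be present:

```python
import json

def inspect_case_file(path):
    """Load an OPFLOW JSON output file and report its top-level keys.

    The key layout of opflowout.json is not documented here, so this
    only reports whatever the file actually contains.
    """
    with open(path) as f:
        data = json.load(f)
    # A dict gets its sorted keys; any other top-level type is just named.
    return sorted(data) if isinstance(data, dict) else type(data).__name__

# Example (uncomment once viz/data/opflowout.json exists):
# print(inspect_case_file("data/opflowout.json"))
```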
````diff
@@ -43,24 +46,24 @@ For example, with Texas 2000 bus synthetic data, executing the following `opflow
 opflow -netfile case_ACTIVSg2000.m -save_output -opflow_output_format JSON -gicfile ACTIVSg2000_GIC_data.gic
 ```
 
-Copy over the `opflowout.json` file to the `viz/data` subdirectory. Next, run the python script `geninputfile.py` from the `viz` folder to load the JSON file in the visualization script.
+Copy the newly generated `opflowout.json` file to the `viz/data` subdirectory. Next, run the Python script `geninputfile.py` from the `viz` folder to load the JSON file in the visualization script. Note that the script only takes the file name `opflowout.json` as an argument and does not open the file, so the full file path need not be provided; the visualization tool expects the file to be present in the `viz/data` folder. The following command will create/overwrite a file named `viz/src/module_casedata.js`, an application source file that loads the data file `opflowout.json`.
 
 ```
 python geninputfile.py opflowout.json
 ```
 
-You are ready to launch the visualization now.
+This creates the `viz/src/module_casedata.js` file. You are now ready to launch the visualization.
 
 Note: If you have created the JSON file externally, simply copy it into the `viz/data` subdirectory and run the `geninputfile.py` script using the above command.
 
 ## Launch visualization
-The visualization expects a file named `case_data.json` in the `viz/data` subdirectory. Copy/rename the file as `case_data.json` in that subdirectory to be used by the visualization tool.
-
 To launch the visualization, run
+
 ```
-npm start
+yarn start
 ```
-This will open a webpage with the visualization of the given network.
+
+This will open a webpage with the visualization of the given network. If the network is large, the visualization may take a while to load; if the browser offers to terminate the page or wait, click "Wait".
 
 
 The figures show the visualization of the synthetic electric grid. The data for developing this visualization was created by merging the synthetic dataset for the [Eastern](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg70k/), [Western](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg10k/), and [Texas](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg2000/) interconnects from the [Electric Grid Test Case Repository](https://electricgrids.engr.tamu.edu/)
````
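The diff does not reproduce `geninputfile.py` itself. The sketch below shows one way such a generator could produce `viz/src/module_casedata.js` from a data file name; the JS module text and the `casedata` import style are assumptions for illustration, not the real generator's output:

```python
import sys
from pathlib import Path

def generate_module(json_name, out_path="src/module_casedata.js"):
    """Write a small JS module pointing the app at the chosen data file.

    The real geninputfile.py is not shown in this diff; the module
    contents below are an assumption for illustration only.
    """
    js = (
        "// Auto-generated by geninputfile.py; do not edit by hand.\n"
        f"import casedata from '../data/{json_name}';\n"
        "export default casedata;\n"
    )
    out = Path(out_path)
    out.parent.mkdir(parents=True, exist_ok=True)  # ensure viz/src exists
    out.write_text(js)
    return js

if __name__ == "__main__" and len(sys.argv) > 1:
    generate_module(sys.argv[1])
```

Note that, consistent with the README, only the file *name* is passed on the command line; the generated module, not this script, is what resolves the file relative to `viz/data`.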
```diff
@@ -88,6 +91,7 @@ ChatGrid is a natural language query tool for ExaGO visualizations. It is powere
 
 ### Dependencies
 ChatGrid is built upon the following services and tools.
+
 - [OpenAI LLMs](https://platform.openai.com/docs/models/overview)
 - [Langchain@0.0.233](https://python.langchain.com/docs/get_started/introduction.html) framework
 - [PostGreSQL](https://www.postgresql.org/download/) database
```
````diff
@@ -99,13 +103,34 @@ Behind the scenes, LLM translates natural language queries into SQL queries to r
 1. Convert data formats.
 
 First, we need to convert the ExaGO output `.json` files to `.csv` files. The difference between the two data formats is that JSON stores attributes and values as dictionary pairs but CSV stores attributes and values as tables. You can write your own script for this conversion or use the provided script.
+
+* Go to the `viz/backend` subdirectory and run `pip install -r requirements.txt` to install all the Python dependencies, if not already done in previous steps. (Note: These steps are tested with Python 3.13.)
 
-To use the provided script, first copy the ExaGO output `.json` file to the `viz/data` subdirectory and simply run the following script in the `viz/data` subdirectory (replace the example filename with your json filename). This will output three CSV files: `generation.csv`, `bus.csv`, and `transmission_line.csv`.
+To use the provided script, first copy the ExaGO output `.json` file to the `viz/data` subdirectory (if not already done) and run the following script in the `viz/backend` subdirectory (replace the example filename with your JSON filename). This will create three CSV files: `generation.csv`, `bus.csv`, and `transmission_line.csv`. We assume `opflowout.json` is the data JSON file present in the `viz/data` folder.
+
 ```
-python jsontocsv.py case_ACTIVSg10k.json
+python ../data/jsontocsv.py ../data/opflowout.json
 ```
 
-2. Download PostgreSQL database from this [link](https://www.postgresql.org/download/) and install it.
+There should now be 5 CSV files in the `viz/backend` folder:
+
+* `bus.csv`
+* `generation.csv`
+* `us_states.csv`
+* `counties.csv`
+* `transmission_line.csv`
+
+2. Download PostgreSQL from this [link](https://www.postgresql.org/download/) and install it.
+
+* On Mac, you can install PostgreSQL 14 with Homebrew: `brew install postgresql@14`
+* Start the PostgreSQL service: `brew services start postgresql@14`
+* Connect as your OS user: `psql -U "$USER" -d postgres`
+* Execute the create role query: `CREATE ROLE postgres WITH LOGIN SUPERUSER PASSWORD 'ExaGO.2025';` Here `ExaGO.2025` is a password; change it to your preference.
+* Exit to the shell by entering `quit` and hitting Enter.
+* From the command prompt, type `psql -U postgres -d postgres`. If it works and you are in the `psql` shell, you are done. Exit from the shell using `quit`.
 
 
 3. Create a PostgreSQL database and import the `.csv` files to it.
````
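If you choose to write your own conversion script, as the text above allows, the core of it is flattening a list of attribute/value dictionaries into table rows. A minimal sketch using the standard `csv` module; the field names `bus` and `Vm` are illustrative only, not the real schema produced by `jsontocsv.py`:

```python
import csv
import io

def records_to_csv(records):
    """Flatten a list of JSON-style dicts into CSV text.

    Field names below (bus, Vm) are illustrative, not the real
    ExaGO schema produced by jsontocsv.py.
    """
    # Union of keys across records becomes the CSV header.
    fieldnames = sorted({key for rec in records for key in rec})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(records_to_csv([{"bus": 1, "Vm": 1.02}, {"bus": 2, "Vm": 0.99}]))
```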
```diff
@@ -117,7 +142,11 @@ Behind the scenes, LLM translates natural language queries into SQL queries to r
 
 b. Please be informative and accurate about your table names and attribute names, because this information helps the LLM understand the dataset and perform better when dealing with user queries.
 
-c. Include US state and county information in your database to support spatial queries related to state or county.
+c. Include US state and county information in your database to support spatial queries related to state or county.
+
+d. To load the CSV files into the database from the command prompt, run `PGPASSWORD=ExaGO.2025 ./create_db.sh --db exago_db --schema-sql ./schema.sql --drop --truncate`. Here `exago_db` is the database name; use it in the `config.py` configuration file.
+
+e. This will create a database named `exago_db` with password `ExaGO.2025`. This information will be used to update the `config.py` file.
 
 
 4. Connect to your database.
```
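The `create_db.sh` script above handles the import; to see the underlying idea, the sketch below creates a table from CSV text and bulk-inserts the rows. It uses `sqlite3` as a stand-in for PostgreSQL so it runs without a database server; with `psycopg2` against the real database, the flow is the same:

```python
import csv
import io
import sqlite3

def load_csv_into_table(conn, table, csv_text):
    """Create a table from CSV text and insert all rows.

    sqlite3 stands in for PostgreSQL here; all columns are typed TEXT
    for simplicity, whereas a real schema.sql would declare real types.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}" TEXT' for c in header)
    conn.execute(f'CREATE TABLE "{table}" ({cols})')
    marks = ", ".join("?" for _ in header)
    conn.executemany(f'INSERT INTO "{table}" VALUES ({marks})', data)
    return conn.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]

conn = sqlite3.connect(":memory:")
n = load_csv_into_table(conn, "bus", "bus,Vm\n1,1.02\n2,0.99\n")
print(n)  # prints 2, the number of imported rows
```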
````diff
@@ -132,15 +161,15 @@ Open the `config.py` in the `viz/backend` subdirectory replace `YOUR_OPENAI_KEY`
 
 
 <!-- data script -->
-<!-- installation: pip install -r requirements.txt in the backend directory-->
 ### Launch backend
 ChatGrid uses Flask to host the service of receiving user queries and returning the data output and text summaries to update the visualizations on the frontend. Please follow the steps below to run the backend server.
-1. Go to the `viz/backend` subdirectory and use the `pip install -r requirements.txt` command to install all the Python dependencies.
-2. Run the following command in the `viz/backend` subdirectory
+
+* Run the following command in the `viz/backend` subdirectory:
 ```
 python server.py
 ```
+
 This will start the backend server for receiving user queries, processing them with the LLM, and returning data outputs to the frontend.
 
 Now open the chat window on the frontend, type your queries, and enjoy ChatGrid!
````
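`server.py` itself is not shown in this diff; a minimal Flask sketch of the kind of endpoint it hosts might look like the following. The route name `/query`, the payload shape, and the stub reply are all assumptions standing in for the real LLM + SQL pipeline:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/query", methods=["POST"])
def query():
    """Hypothetical query endpoint; the real server.py wires this
    to the LLM + SQL pipeline described above."""
    payload = request.get_json(silent=True) or {}
    question = payload.get("question", "")
    # A real handler would translate `question` into SQL, run it against
    # the PostgreSQL database, and summarize the result for the frontend.
    return jsonify({"question": question, "summary": "stub summary"})

# To serve locally (as `python server.py` does for the real backend):
# app.run(port=5000)
```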
