Before launching the visualization, you need to install a few packages. This can be done with the following steps:
1. Install Node Version Manager (NVM). On macOS, use `brew install nvm`.
2. Install [Node.js](https://nodejs.org/en/) version 24 using `nvm install 24`.
3. Select Node 24 with `nvm use 24`.
4. Install Yarn: `npm install --global yarn`.
5. Run `yarn install` in this directory (`viz`) to install all the dependencies.
6. Go to the `viz/backend` subdirectory and use the `pip install -r requirements.txt` command to install all the Python dependencies.
## Preparing input data files for visualization
The visualization uses a `JSON` formatted file as input. This `JSON` file has a specific structure (To do: explain structure for the file) and there are several sample files for different networks in the `data` subdirectory.
This input JSON file can either be created externally or generated as an output of the `OPFLOW` application. When using OPFLOW, the following command will generate the input JSON file. The generated file will be named `opflowout.json`.
Copy the newly generated `opflowout.json` file to the `viz/data` subdirectory. Next, run the Python script `geninputfile.py` from the `viz` folder to load the JSON file into the visualization. Note that the script takes only the file name `opflowout.json` as an argument, not a full path; the visualization tool expects the file to be present in the `viz/data` folder. The script creates (or overwrites) a file named `viz/src/module_casedata.js`, an application source file that loads the data file `opflowout.json`.
```
python geninputfile.py opflowout.json
```
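The actual `geninputfile.py` is part of this repository and may work differently, but its behavior, as described above, can be sketched roughly as follows (the function name and directory defaults here are illustrative assumptions):

```python
import json
import os

def generate_module(json_name, data_dir="data", out_path="src/module_casedata.js"):
    """Hypothetical sketch: embed the JSON case data in a JavaScript module.

    Reads `data_dir/json_name` and writes an ES module so the frontend can
    import the case data directly at build time.
    """
    with open(os.path.join(data_dir, json_name)) as f:
        casedata = json.load(f)
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "w") as f:
        # JSON is valid JavaScript object-literal syntax, so json.dumps
        # is enough to produce an importable module.
        f.write("export const casedata = " + json.dumps(casedata) + ";\n")
    return out_path
```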
This creates the `viz/src/module_casedata.js` file. You are now ready to launch the visualization.
Note: If you have created the JSON file externally, simply copy it into the `viz/data` subdirectory and run the `geninputfile.py` script using the above command.
## Launch visualization
To launch the visualization, run
```
yarn start
```
This will open a webpage with the visualization of the given network. If the network is large, the visualization may take a while to load; if the browser offers the option to terminate the page or wait, click Wait.
The figures show the visualization of the synthetic electric grid. The data for developing this visualization was created by merging the synthetic datasets for the [Eastern](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg70k/), [Western](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg10k/), and [Texas](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg2000/) interconnects from the [Electric Grid Test Case Repository](https://electricgrids.engr.tamu.edu/).
ChatGrid is a natural language query tool for ExaGO visualizations.
### Dependencies
ChatGrid is built upon the following services and tools.
Behind the scenes, the LLM translates natural language queries into SQL queries to retrieve data.
1. Convert data formats.
First, we need to convert the ExaGO output `.json` files to `.csv` files. The difference between the two formats is that JSON stores attributes and values as key-value pairs, while CSV stores them as rows and columns of a table. You can write your own script for this conversion or use the provided script.
* Go to the `viz/backend` subdirectory and use the `pip install -r requirements.txt` command to install all the Python dependencies, if not already done in the previous steps. (Note: these steps were tested with Python 3.13.)

To use the provided script, first copy the ExaGO output `.json` file to the `viz/data` subdirectory (if not already done) and run the following script in the `viz/backend` subdirectory (replace the example filename with your JSON filename). This will create three CSV files: `generation.csv`, `bus.csv`, and `transmission_line.csv`. We assume `opflowout.json` is the data JSON file present in the `viz/data` folder.
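The provided script is not reproduced here, but the core of such a JSON-to-CSV conversion can be sketched in a few lines (the `key` and field names in this example are hypothetical, not the actual ExaGO JSON schema):

```python
import csv
import json

def json_records_to_csv(json_path, csv_path, key):
    """Write one list of records from an ExaGO-style JSON file as a CSV table.

    `key` names a list of dictionaries inside the JSON file; the dictionary
    keys become the CSV header row.
    """
    with open(json_path) as f:
        data = json.load(f)
    records = data[key]
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(records[0]))
        writer.writeheader()
        writer.writerows(records)
```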
Now there should be five CSV files in the `viz/backend` folder:
* `bus.csv`
* `generation.csv`
* `us_states.csv`
* `counties.csv`
* `transmission_line.csv`
2. Download PostgreSQL from this [link](https://www.postgresql.org/download/) and install it.
   * On macOS, you can install PostgreSQL 14 using brew: `brew install postgresql@14`
   * Start the postgresql service: `brew services start postgresql@14`
   * Open a `psql` session to create a role: `psql -U "$USER" -d postgres`
   * Execute the create role query: `CREATE ROLE postgres WITH LOGIN SUPERUSER PASSWORD 'ExaGO.2025';` Here `ExaGO.2025` is the password; change it to your preference.
   b. Please be informative and accurate about your table and attribute names, because this information helps the LLM understand the dataset and perform better on user queries.
   c. Include US state and county information in your database to support spatial queries related to states or counties.
   d. To load the CSV files into the database from the command prompt, run: `PGPASSWORD=ExaGO.2025 ./create_db.sh --db exago_db --schema-sql ./schema.sql --drop --truncate`. Here `exago_db` is the database name. Use it in the configuration `config.py` file.

   e. This will create a database named `exago_db` with password `ExaGO.2025`. This information will be used to update the `config.py` file.
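For reference, the relevant entries in `config.py` might look like the sketch below. The variable names here are illustrative assumptions; match them to the names that actually appear in the file.

```python
# Hypothetical sketch of viz/backend/config.py values; the real
# variable names may differ, so check the actual file.
DB_NAME = "exago_db"                 # database created by create_db.sh
DB_USER = "postgres"                 # role created with the CREATE ROLE query
DB_PASSWORD = "ExaGO.2025"           # password chosen above
OPENAI_API_KEY = "YOUR_OPENAI_KEY"   # replace with your OpenAI API key
```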
4. Connect to your database.
Open the `config.py` in the `viz/backend` subdirectory and replace `YOUR_OPENAI_KEY` with your OpenAI API key.
### Launch backend
ChatGrid uses Flask to host a service that receives user queries and returns data outputs and text summaries to update the visualizations on the frontend. Please follow the steps below to run the backend server.
* Run the following command in the `viz/backend` subdirectory:
```
python server.py
```
This will start the backend server, which receives user queries, processes them with the LLM, and returns data outputs to the frontend.
Now open the chat window on the frontend, type your queries, and enjoy ChatGrid!